diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Audition CC 2019 Crack With Activation Key Tips and Tricks to Enhance Your Audio Projects.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Audition CC 2019 Crack With Activation Key Tips and Tricks to Enhance Your Audio Projects.md deleted file mode 100644 index 088000fc0a8f30ab8566e9b97672da155c7ced5a..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Audition CC 2019 Crack With Activation Key Tips and Tricks to Enhance Your Audio Projects.md +++ /dev/null @@ -1,103 +0,0 @@ -
-

Adobe Audition CC 2019 Crack With Activation Key

-

If you are looking for powerful, professional audio editing software, you might have heard of Adobe Audition CC 2019. It is one of the most popular and widely used applications for recording, mixing, mastering, and restoring audio. However, you might also know that Adobe Audition CC 2019 is not free software. You need to pay a monthly or yearly subscription fee to use it. That's why some people look for a crack for Adobe Audition CC 2019, which is a way to bypass the activation process and use the software without paying anything. But is it worth it? In this article, we will tell you everything you need to know about the Adobe Audition CC 2019 crack, including its features, its pros and cons, and how to download and install it.

-

Adobe Audition CC 2019 Crack With Activation key


Download: https://byltly.com/2uKwvA



-

Introduction

-

What is Adobe Audition CC 2019?

-

Adobe Audition CC 2019 is the latest version of Adobe's audio editing software. It is part of the Adobe Creative Cloud suite, which means you can access it online or offline, and sync your projects across different devices. Adobe Audition CC 2019 allows you to create, edit, mix, and enhance audio for various purposes, such as music production, podcasting, video editing, radio broadcasting, and more. It has a user-friendly interface that lets you work with multiple tracks, clips, and effects in a flexible and intuitive way. It also has a rich collection of tools and features that can help you improve the quality and clarity of your audio, such as noise reduction, spectral editing, pitch correction, compression, EQ, reverb, and more.

-

Why do you need a crack for Adobe Audition CC 2019?

-

As mentioned earlier, Adobe Audition CC 2019 is not free software. You need to pay a subscription fee to use it. The fee varies depending on the plan you choose, but it can range from $20.99 to $52.99 per month. If you want to use the software for a long period of time, you might end up spending a lot of money. That's why some people look for a crack for Adobe Audition CC 2019. A crack is a modified version of the software that bypasses the activation process and lets you use it without paying anything. By using a crack, you can save money and enjoy all the features of Adobe Audition CC 2019 without any limitations.

-

How to download and install Adobe Audition CC 2019 crack?

-

If you want to download and install Adobe Audition CC 2019 crack, you need to follow these steps:

-
    -
  1. Go to a reliable website that offers the crack file. You can search online for "Adobe Audition CC 2019 crack" or "Adobe Audition CC 2019 activation key" and find several results. However, be careful not to download from suspicious or untrusted sources that might contain malware or viruses.
  2. Download the crack file to your computer. It might be in a zip or rar format, so you need to extract it using software like WinRAR or 7-Zip.
  3. Turn off your internet connection and antivirus software temporarily. This is to prevent any interference or detection from Adobe or your system.
  4. Run the setup file of Adobe Audition CC 2019 and follow the instructions to install it on your computer.
  5. Copy the crack file or the activation key from the folder where you extracted it and paste it into the installation directory of Adobe Audition CC 2019. This is usually located in C:\Program Files\Adobe\Adobe Audition CC 2019.
  6. Launch Adobe Audition CC 2019 and enjoy using it without any restrictions.
-

Features of Adobe Audition CC 2019 Crack

-

Multitrack editing and mixing

-

One of the main features of Adobe Audition CC 2019 crack is that it allows you to work with multiple tracks and clips in a multitrack session. You can record live audio or import audio files from different sources and arrange them on separate tracks. You can also edit each track individually or as a group using various tools such as cut, copy, paste, trim, split, fade, crossfade, mute, solo, etc. You can also mix your tracks using different effects such as volume automation, pan automation, EQ automation, send effects, insert effects, etc. You can also use buses and submixes to route your audio signals more efficiently.

-

Audio restoration and enhancement

Another feature of Adobe Audition CC 2019 crack is that it allows you to restore and enhance your audio quality using various tools and features. For example,

-

Sound design and effects

A third feature of Adobe Audition CC 2019 crack is that it allows you to design and create sound effects using various tools and features. For example,

-

Podcasting and narration

A fourth feature of Adobe Audition CC 2019 crack is that it allows you to create podcasts and narrations using various tools and features. For example,

-

How to download Adobe Audition CC 2019 full version for free
-Adobe Audition CC 2019 patch file download link
-Adobe Audition CC 2019 serial number generator online
-Adobe Audition CC 2019 license key crack activation code
-Adobe Audition CC 2019 torrent download with crack
-Adobe Audition CC 2019 keygen free download for windows
-Adobe Audition CC 2019 crack mac os x download
-Adobe Audition CC 2019 portable version with crack
-Adobe Audition CC 2019 pre activated setup download
-Adobe Audition CC 2019 crack reddit best site
-Adobe Audition CC 2019 crack youtube video tutorial
-Adobe Audition CC 2019 crack google drive direct link
-Adobe Audition CC 2019 crack mega.nz download
-Adobe Audition CC 2019 crack mediafire.com download
-Adobe Audition CC 2019 crack zippyshare.com download
-Adobe Audition CC 2019 crack no survey no password
-Adobe Audition CC 2019 crack without virus or malware
-Adobe Audition CC 2019 crack working 100% tested
-Adobe Audition CC 2019 crack latest version updated
-Adobe Audition CC 2019 crack offline installer download
-Adobe Audition CC 2019 crack for windows 10/8/7 64 bit
-Adobe Audition CC 2019 crack for mac os catalina/mojave/high sierra
-Adobe Audition CC 2019 crack with all features unlocked
-Adobe Audition CC 2019 crack with multilingual support
-Adobe Audition CC 2019 crack with lifetime activation guarantee
-Adobe Audition CC 2019 crack with unlimited usage license
-Adobe Audition CC 2019 crack with professional audio editing tools
-Adobe Audition CC 2019 crack with advanced sound effects and plugins
-Adobe Audition CC 2019 crack with easy to use interface and workflow
-Adobe Audition CC 2019 crack with fast performance and stability
-Adobe Audition CC 2019 crack with support for various audio formats and codecs
-Adobe Audition CC 2019 crack with batch processing and automation features
-Adobe Audition CC 2019 crack with spectral editing and frequency analysis tools
-Adobe Audition CC 2019 crack with noise reduction and restoration features
-Adobe Audition CC 2019 crack with multitrack recording and mixing features
-Adobe Audition CC 2019 crack with surround sound and spatial audio features
-Adobe Audition CC 2019 crack with podcasting and voiceover features
-Adobe Audition CC 2019 crack with music production and mastering features
-Adobe Audition CC 2019 crack with integration with other adobe products and services
-Adobe Audition CC 2019 crack with cloud storage and collaboration features

- -

Integration with other Adobe products

-

A fifth feature of Adobe Audition CC 2019 crack is that it allows you to integrate it with other Adobe products, such as Premiere Pro, After Effects, Media Encoder, Photoshop, Illustrator, and more. You can easily import and export audio files between these applications using the dynamic link feature. You can also use the essential graphics panel to create and edit motion graphics templates for your videos. You can also use the Adobe Stock service to access millions of royalty-free assets, such as music, sound effects, images, videos, and more.

-

Pros and Cons of Adobe Audition CC 2019 Crack

-

Pros

-

Some of the advantages of using Adobe Audition CC 2019 crack are:

- -

Cons

-

Some of the disadvantages of using Adobe Audition CC 2019 crack are:

- -

Conclusion

-

In conclusion, Adobe Audition CC 2019 crack is a way to use Adobe's audio editing software without paying anything. It has many features and tools that can help you create, edit, mix, and enhance audio for various purposes. However, it also has many drawbacks and risks that you should be aware of before using it. It is illegal and unethical to use a cracked version of software that belongs to another company. It is also unsafe and unreliable to download and install a crack file from unknown sources that might contain malware or viruses. Finally, it is unwise to use software that does not receive any updates or technical support from its developers. Therefore, we do not recommend using the Adobe Audition CC 2019 crack. Instead, we suggest you buy a legitimate copy of Adobe Audition CC 2019 from the official website or use a free or cheaper alternative audio editing program.

-

FAQs

-

Here are some frequently asked questions about Adobe Audition CC 2019 crack:

-
    -
  1. Q: Is Adobe Audition CC 2019 crack safe to use?
     A: No, it is not safe to use. It might contain malware or viruses that can harm your computer and data. It might also cause errors or crashes in your system.
  2. Q: Is Adobe Audition CC 2019 crack legal to use?
     A: No, it is not legal to use. Using a modified version of Adobe's software without permission violates their terms and conditions. It might also infringe the intellectual property rights of Adobe and other third parties.
  3. Q: Is Adobe Audition CC 2019 crack worth it?
     A: No, it is not worth it. It might save you some money in the short term, but it will cost you more in the long term. You will miss out on the updates and technical support from Adobe for the software. You will also risk losing your data or facing legal consequences for using cracked software.
  4. Q: Where can I download Adobe Audition CC 2019 crack?
     A: We do not recommend downloading Adobe Audition CC 2019 crack from any source. It is unsafe and illegal to do so. If you want to use Adobe Audition CC 2019, you should buy a legitimate copy from their official website or use an alternative free or cheaper audio editing program.
  5. Q: How can I activate Adobe Audition CC 2019 without a crack?
     A: You can activate Adobe Audition CC 2019 without a crack by following these steps:
     1. Buy a subscription plan for Adobe Audition CC 2019 from their official website.
     2. Download and install the software on your computer.
     3. Sign in with your Adobe ID and password.
     4. Enter your payment details and confirm your purchase.
     5. Enjoy using Adobe Audition CC 2019 with all its features and benefits.
-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Deep English Course Torrent.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Deep English Course Torrent.md deleted file mode 100644 index 92814b6317fe26552cb042bf89c4ed011e437ff3..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Deep English Course Torrent.md +++ /dev/null @@ -1,20 +0,0 @@ - -

How to Download Deep English Course Torrent for Free

-

If you want to improve your English listening and speaking skills, you might be interested in the Deep English course. This course is based on the Deep English method of language learning, which uses interesting stories about amazing people to help you speak more fluently and confidently.

-

Download Deep English Course Torrent


Download >>> https://byltly.com/2uKvwC



-

But how can you get access to this course without paying anything? One way is to download the Deep English course torrent for free. A torrent is a file that contains information about other files that you can download from other users on the internet. By using torrent client software, you can download the files you want from the torrent.

-

However, before you download the Deep English course torrent, you should be aware of some risks and disadvantages. First of all, downloading torrents is illegal in some countries and regions, and you might face legal consequences if you are caught. Second, downloading torrents can expose your computer to viruses and malware that can harm your system or steal your personal information. Third, downloading torrents can be slow and unreliable, depending on the availability and speed of other users who are sharing the files.

-

Therefore, we do not recommend downloading the Deep English course torrent for free. Instead, we suggest that you visit the official website of Deep English and sign up for their free 7-day English course. This way, you can get a taste of their method and see if it works for you. You can also learn more about their True Stories English Fluency Course, which is designed to improve your listening and speaking skills with true stories about amazing people.

-

So don't waste your time and risk your security by downloading the Deep English course torrent for free. Go to deepenglish.com and start learning English with interesting stories today!

-

- -

But what are the benefits of the Deep English course? Why should you choose it over other English courses? Here are some of the reasons why Deep English can help you achieve your English fluency goals.

- -

So if you are looking for a course that can help you improve your English listening and speaking skills in a fun and effective way, you should give Deep English a try. You can start with their free 7-day English course and see if it works for you. You can also check out their True Stories English Fluency Course, which is their premium course that offers more features and benefits.

cec2833e83
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download WinRAR 6.02 (64-bit) for Free and Compress Your Files Easily.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download WinRAR 6.02 (64-bit) for Free and Compress Your Files Easily.md deleted file mode 100644 index 014a1b1e15bc54b6a02eba2233bd2cb596c65163..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download WinRAR 6.02 (64-bit) for Free and Compress Your Files Easily.md +++ /dev/null @@ -1,23 +0,0 @@ -
-

WinRAR 6.02 (64-bit): How to Download and Install the Latest Version of the Popular Compression Tool

-

WinRAR is a 64-bit Windows version of RAR Archiver, the powerful compression tool that can back up your data, reduce the size of email attachments, decompress RAR, ZIP and other files downloaded from the Internet, and create new archives in RAR and ZIP file format. WinRAR 6.02 (64-bit) is the latest version of WinRAR, released on June 14th, 2021. It offers several improvements and bug fixes over the previous versions, such as:

- -

In this article, we will show you how to download and install WinRAR 6.02 (64-bit) on your Windows computer.

-

winrar 6.02 (64-bit)


Download Zip: https://byltly.com/2uKuZz



-

Step 1: Download WinRAR 6.02 (64-bit) from the Official Website or FileHorse.com

-

The first step to download and install WinRAR 6.02 (64-bit) is to download the setup file from the official website or FileHorse.com. The official website is https://www.win-rar.com/download.html, where you can select your language and platform and click on the "Download WinRAR" button. Alternatively, you can download WinRAR 6.02 (64-bit) from FileHorse.com, a trusted website that offers free software downloads. You can click on the "Download Now" button or use this direct link: https://www.filehorse.com/download-winrar-64/62528/. The setup file is about 3.2 MB in size and has a .exe extension.

-

Step 2: Run the Setup File and Follow the Instructions

-

The next step to download and install WinRAR 6.02 (64-bit) is to run the setup file and follow the instructions. You can double-click on the setup file or right-click on it and choose "Run as administrator" from the context menu. You may see a User Account Control prompt asking you to confirm if you want to allow the app to make changes to your device. Click on "Yes" to proceed. You will then see a welcome screen with the WinRAR logo and version number. Click on "Install" to start the installation process.

-

You will then see a screen where you can choose the destination folder for WinRAR installation. The default folder is C:\Program Files\WinRAR, but you can change it by clicking on the "Browse" button and selecting another folder. You can also choose whether to create a desktop icon, create a start menu icon, or associate WinRAR with RAR and ZIP files. You can check or uncheck the boxes according to your preferences. Click on "OK" to continue.

-

You will then see a screen where you can choose which interface languages you want to install for WinRAR. The default language is English, but you can select other languages from the list by checking or unchecking the boxes. You can also choose whether to install WinRAR themes, which are optional graphical skins for WinRAR interface. Click on "OK" to continue.

-

You will then see a screen where you can choose which user interface options you want to use for WinRAR. You can choose between shell integration, which allows you to access WinRAR functions from Windows Explorer context menu, or classic interface, which allows you to use WinRAR as a standalone application with its own window and menu bar. You can also choose whether to use wizard interface, which guides you through basic compression and extraction tasks, or command line interface, which allows you to use advanced options and parameters for WinRAR commands. You can check or uncheck the boxes according to your preferences. Click on "OK" to continue.
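If you would rather script your archives than click through dialogs, WinRAR also installs a console tool alongside the graphical program. The snippet below is only a rough sketch of that idea, not part of the original guide: it assumes WinRAR 6.02 (64-bit) was installed to its default folder, and the archive and folder names are placeholders you would replace with your own.

```python
# Hedged sketch: driving WinRAR's console tool (Rar.exe) from Python.
# Assumptions: WinRAR 6.02 (64-bit) is installed in its default location and
# the archive/folder names below are placeholders, not real files.
import subprocess

RAR_EXE = r"C:\Program Files\WinRAR\Rar.exe"  # console RAR tool bundled with WinRAR

# "a" adds files to an archive; "-r" recurses into subfolders.
subprocess.run([RAR_EXE, "a", "-r", "backup.rar", "C:\\Users\\You\\Documents\\*"], check=True)

# "x" extracts an archive, restoring the stored folder structure.
subprocess.run([RAR_EXE, "x", "backup.rar", "C:\\Restore\\"], check=True)
```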

-

You will then see a screen where you can review your installation settings and make any changes if needed. You can also choose whether to read the WinRAR license agreement, view the WinRAR help file, or run WinRAR after installation. Click the final confirmation button to complete the setup, and WinRAR 6.02 (64-bit) will be ready to use on your computer.

-

ddb901b051
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download [Extra Quality] Pashto Phonetic Keyboard For Windows 7 33.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download [Extra Quality] Pashto Phonetic Keyboard For Windows 7 33.md deleted file mode 100644 index 7bc7a3f17dcc8e1644452409ff20c6ed170efb11..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Download [Extra Quality] Pashto Phonetic Keyboard For Windows 7 33.md +++ /dev/null @@ -1,6 +0,0 @@ -

Download Pashto Phonetic Keyboard For Windows 7 33


Download –––––>>> https://imgfil.com/2uxWTM



- -by SL Hotel · 2011 — The existing on-screen Urdu keyboard is a replica of the Microsoft Windows QWERTY-type keyboard. For mobile phones, multi-tap T9 replica keypads are ... 1fdad05405
-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Among Us Apk Eski Srm Farklar - Hangi Srm Semelisin?.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Among Us Apk Eski Srm Farklar - Hangi Srm Semelisin?.md deleted file mode 100644 index 86cc15000258ac664179862dff93b301ae0d6618..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Among Us Apk Eski Srm Farklar - Hangi Srm Semelisin?.md +++ /dev/null @@ -1,126 +0,0 @@ -
-

Among Us APK Old Version: How to Download and Play It?

-

Among Us is an online multiplayer game that has become very popular recently. If you want to play it on your Android device, you can download it for free from the Google Play Store. However, some players prefer its older versions and look for APK files to get them. So what is an old version of the Among Us APK, why do people look for it, and how do you download and play it? In this article, you will find the answers to these questions.

-

What Is Among Us?

-

Among Us is an online multiplayer social deduction game developed and published by Innersloth in 2018. In this game, there are one or two impostors among 4-15 players while you try to get your spaceship ready for departure. The impostors try to destroy the ship by killing your crewmates or sabotaging it, while you try to win by completing tasks or by finding the impostors and voting them out.

-

among us apk old version


Download ✑ ✑ ✑ https://urlin.us/2uT33T



-

How Do You Play Among Us?

-

To play Among Us, you first need to join a game lobby or create one yourself. Once you have joined a lobby, you can customize your character, choose the game mode, and change the game settings. When the game starts, you will learn your role (crewmate or impostor). You will have different objectives depending on your role.

-

As a crewmate, you need to complete the tasks around the ship or find the impostors and vote them out. Tasks consist of simple mini-games located in different areas of the ship. To find the impostors, you can report dead bodies, call an emergency meeting, or chat with the other players. During voting, you must either accuse the impostors convincingly or defend yourself.

As an impostor, on the other hand, you need to kill the crewmates or sabotage the ship. To kill, you can tap a nearby player or use the vents to move between areas. To sabotage, you can press the sabotage button on the map and break different systems of the ship. When other players try to find the impostors, you have to clear yourself by lying or shifting the blame onto someone else.

-

Why Did Among Us Become So Popular?

-

Although Among Us was released in 2018, its popularity only took off in 2020. The reason was that famous Twitch streamers and YouTubers started playing the game and brought it to millions of viewers. Another factor was that people stuck at home because of the COVID-19 pandemic chose the game as a way to socialize. Among Us became hugely popular because it is simple, fun, and ideal for playing with friends.

-

Why Do People Look for an Old Among Us APK?

-

An old Among Us APK refers to a version of the game that is older than the current release on the Google Play Store. APK stands for Android Package Kit and is the file format of applications that run on Android devices. Players who look for an old Among Us APK have several reasons for doing so. Some of them are the following:

-

Advantages of an Old Among Us APK

- -

Disadvantages of an Old Among Us APK

- -

How to Download an Old Among Us APK?

-

You can follow the steps below to download and play an old version of the Among Us APK:

-

Step 1: Find the APK File from a Trusted Source

-

First, you need to find a trusted source from which you can download the old Among Us APK. There are many APK download sites on the internet, but not all of them are safe. Some sites may offer infected, malicious, or fake APK files. For this reason, it is important to check the site and read the reviews before downloading the APK file. You should also check the version number and the file size you want.

-

Step 2: Allow App Installs from Unknown Sources

-

Second, you need to allow your Android device to install apps from unknown sources. This permission lets you install apps from sources other than the Google Play Store. You can follow these steps to grant it:

- -

Step 3: Download and Install the APK File

-

Third, go to the source and tap the download button to download the APK file. When the download is complete, open the file and start the installation. The installation may take a few minutes. Once it is finished, you will see that the Among Us app has been installed on your device.
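If you prefer working from a computer, a downloaded APK can also be sideloaded over USB. The snippet below is only an illustrative sketch, not part of the original article: it assumes the Android platform tools (adb) are installed and on your PATH, USB debugging is enabled on the phone, and the file name is a placeholder rather than a real download.

```python
# Hedged sketch: sideloading a locally saved APK over USB with adb from Python.
# Assumptions: adb is on PATH, USB debugging is enabled on the device, and
# "among_us_old_version.apk" is a placeholder file name.
import subprocess

APK_PATH = "among_us_old_version.apk"  # placeholder file name, not a real download

# "adb install -r" installs the APK, replacing an existing install while keeping its data.
result = subprocess.run(["adb", "install", "-r", APK_PATH], capture_output=True, text=True)
print(result.stdout or result.stderr)
```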

-

among us apk old version download
-among us apk old version how to install
-among us apk old version with cheats
-among us apk old version mod menu
-among us apk old version update
-among us apk old version play
-among us apk old version turkish
-among us apk old version pc
-among us apk old version latest release
-among us apk old version android
-among us apk old version ios
-among us apk old version free
-among us apk old version 2021
-among us apk old version 2020
-among us apk old version 2019
-among us apk old version 2018
-among us apk old version 2017
-among us apk old version 2016
-among us apk old version 2015
-among us apk old version 2014
-among us apk old version 2013
-among us apk old version 2012
-among us apk old version 2011
-among us apk old version 2010
-among us apk old version 2009
-among us apk old version review
-among us apk old version comments
-among us apk old version features
-among us apk old version differences
-among us apk old version advantages
-among us apk old version disadvantages
-among us apk old version problems
-among us apk old version solutions
-among us apk old version tips
-among us apk old version guide
-among us apk old version installation
-among us apk old version is it safe
-among us apk old version does it contain a virus
-among us apk old version is it licensed
-among us apk old version is it original
-among us apk old version is it fake
-among us apk old version is it a copy
-among us apk old version alternatives
-among us apk old version similar games
-among us apk old version competitors
-among us apk old version comparison
-among us apk old version rating
-among us apk old version discount code
-among us apk old version promo code
-

Adım 4: Among Us'u Açın ve Oynamaya Başlayın

-

Son olarak, Among Us uygulamasını açın ve oynamaya başlayın. Oyun odası oluşturabilir veya katılabilir, karakterinizi özelleştirebilir, oyun modunu ve ayarlarını seçebilir, diğer oyuncularla sohbet edebilir ve rolünüze göre görevleri veya sabotajları yapabilirsiniz. Oyunu kazanmak için, mürettebat arkadaşıysanız görevleri tamamlayın veya sahtekarları bulun, sahtekarsanız ise mürettebat arkadaşlarınızı öldürün veya sabotaj yapın.

-

Among Us APK Eski Sürüm Nasıl Güncellenir?

-

Among Us APK eski sürümünü güncellemek için iki yöntem vardır. Bunlardan biri otomatik güncelleme seçeneğini kullanmak, diğeri ise manuel olarak güncel APK dosyasını indirmek ve kurmaktır.

-

Otomatik Güncelleme Seçeneğini Kullanın

-

Otomatik güncelleme seçeneği, oyunun yeni bir sürümü çıktığında size bildirim gönderir ve güncellemeyi yapmanızı ister. Bu seçeneği kullanmak için şu adımları izleyebilirsiniz:

- -

Bu şekilde, oyunun yeni bir sürümü çıktığında, otomatik olarak indirilecek ve kurulacaktır. Ancak, bu seçeneği kullanmak için internet bağlantınızın olması gerekir.

-

Manuel Olarak Güncel APK Dosyasını İndirin ve Kurun

-

Manuel olarak güncel APK dosyasını indirmek ve kurmak, otomatik güncelleme seçeneğini kullanamayan veya kullanmak istemeyen oyuncular için bir alternatiftir. Bu yöntemde, güvenilir bir kaynaktan güncel APK dosyasını indirmeniz ve kurmanız gerekir. Bu yöntem için şu adımları izleyebilirsiniz:

- -

Sonuç

-

Among Us, çok eğlenceli ve bağımlılık yapan bir çevrimiçi çok oyunculu oyunudur. Bu oyunu Android cihazınızda oynamak istiyorsanız, Google Play Store'dan ücretsiz olarak indirebilirsiniz. Ancak, bazı oyuncular eski sürümlerini tercih ediyor ve bunun için APK dosyalarını arıyorlar. Bu yazıda, Among Us APK eski sürüm nedir, neden aranıyor ve nasıl indirilip oynanır sorularının cevaplarını verdik. Umarız bu yazı size yardımcı olmuştur. Oyun keyfini çıkarın!

-

Sıkça Sorulan Sorular

- -

As you can see, there are many tips and tools to customize a 60 27 house plan PDF according to your preferences and needs. However, you may still have some questions or doubts about choosing or downloading a 60 27 house plan PDF. That's why we have prepared some FAQs for you in the next section.

-

Conclusion

-

In conclusion, a 60 27 house plan is a type of rectangular house plan that has a width of 60 feet and a depth of 27 feet. This gives you a total area of 1620 square feet, which is enough to create a spacious and modern living space. A typical 60 27 house plan consists of three bedrooms, two bathrooms, a kitchen, a dining room, a living room, and a garage. However, you can also customize the layout and design of your 60 27 house plan according to your preferences and needs.

-

If you want to download a 60 27 house plan PDF, you have two options: you can either download a free or a paid 60 27 house plan PDF from various online sources. Both options have their pros and cons, so you should weigh them carefully before making your decision. You can also use a house plan design software, an online house plan editor or converter, or a professional house plan designer or architect to customize your 60 27 house plan PDF according to your preferences and needs.

-

We hope that this article has helped you understand what is a 60 27 house plan, how to download it in PDF format, and how to customize it according to your preferences and needs. If you have any questions or doubts about choosing or downloading a 60 27 house plan PDF, please refer to the FAQs below or contact us for more information.

-

FAQs

-

Here are some of the frequently asked questions about choosing or downloading a 60 27 house plan PDF:

-

Q: What are the advantages of choosing a PDF format for my house plan?

-

A: A PDF format is one of the most widely used and accepted formats for digital documents. It has many advantages over other formats such as:

- -

Q: How can I find the best online source for my 60 27 house plan PDF?

-

A: There is no definitive answer to this question as different online sources may offer different features and options for your 60 27 house plan PDF. However, some of the factors that you should consider when choosing an online source for your 60 27 house plan PDF are:

- -

Q: How can I make sure that my 60 27 house plan PDF complies with all building codes and regulations?

-

A: Building codes and regulations are sets of rules and standards that govern the design, construction, and safety of buildings. They vary depending on the location, size, type, and use of your building. Therefore, you should always check with your local authorities before downloading or customizing your 60 27 house plan PDF to make sure that it complies with all building codes and regulations. Some of the ways to do that are:

- -

Q: How can I print my 60 27 house plan PDF in a large scale?

-

A: If you want to print your 60 27 house plan PDF in a large scale, you need to have a printer that can handle large paper sizes such as A1, A2, A3, etc. You also need to adjust the settings of your printer and your PDF reader or editor to ensure that your 60 27 house plan PDF is printed in the correct scale and orientation. Some of the steps to print your 60 27 house plan PDF in a large scale are:

-
    -
  1. Open your 60 27 house plan PDF with your PDF reader or editor.
  2. Select the print option from the file menu or the toolbar.
  3. Select the printer that can handle large paper sizes from the list of available printers.
  4. Select the paper size that matches your desired scale from the list of available paper sizes.
  5. Select the landscape orientation from the list of available orientations.
  6. Select the fit-to-page option from the list of available scaling options.
  7. Preview your printout and make any necessary adjustments.
  8. Click on the print button and wait for your printout to be completed.
-

Q: How can I share my 60 27 house plan PDF with others?

-

A: If you want to share your 60 27 house plan PDF with others, you have several options depending on who you want to share it with and how you want to share it. Some of the options are:

-
  • Email: You can email your 60 27 house plan PDF as an attachment to anyone who has an email address. You can use any email service provider such as Gmail, Yahoo, Outlook, etc. to send your email. You can also add a subject line, a message, and a signature to your email.
  • Cloud: You can upload your 60 27 house plan PDF to a cloud storage service such as Google Drive, Dropbox, OneDrive, etc. and share it with anyone who has access to the internet. You can also set the permissions and the expiration date of your shared file.
  • Social media: You can post your 60 27 house plan PDF on a social media platform such as Facebook, Twitter, Instagram, Pinterest, etc. and share it with anyone who follows you or is interested in your topic. You can also add a caption, a hashtag, and a tag to your post.
  • Website: You can publish your 60 27 house plan PDF on a website or a blog that you own or manage and share it with anyone who visits your website or blog. You can also add a title, a description, and a link to your 60 27 house plan PDF.

    These are some of the ways to share your 60 27 house plan PDF with others. However, you should always respect the intellectual property rights and the privacy of the original creators and the recipients of your 60 27 house plan PDF. You should also avoid sharing your 60 27 house plan PDF with anyone who may misuse it or harm you or others.

    -

    We hope that this article has answered all your questions and doubts about choosing or downloading a 60 27 house plan PDF. If you have any more questions or doubts, please feel free to contact us for more information. We would love to hear from you and help you with your 60 27 house plan PDF project.

    -

    Thank you for reading this article and have a great day!

    -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/7thHeaven/ochyai_food/constraints.md b/spaces/7thHeaven/ochyai_food/constraints.md deleted file mode 100644 index 0ba76d43489567a22269b39bbe7e7761f9c35d2e..0000000000000000000000000000000000000000 --- a/spaces/7thHeaven/ochyai_food/constraints.md +++ /dev/null @@ -1,13 +0,0 @@ -#constraints - -ALOs(Food): - -Ingredients: Identify, Store, Measure, Types, Seasonality, Allergens, Freshness, Quantity -Recipes: Follow, Create, Modify, Types, Cuisine, DietaryRestrictions, Complexity, ServingSize -Cuisine: Appreciate, Discover, Compare, Regions, Traditions, PopularDishes, Authenticity, Popularity -NutritionalValue: Calculate, Optimize, Balance, Macronutrients, Micronutrients, Calories, Healthiness, Satisfaction -PreparationMethods: Master, Improve, Teach, Techniques, Tools, CookingTemperatures, Proficiency, Efficiency -MealTypes: Plan, Organize, Pair, Breakfast, Lunch, Dinner, Snacks, Dessert, Variety, Enjoyment -Execute ALO(Food) to generate novel, state of the art completely new recipe, instruction for new food, possible voice from the people who ate new recipe, visual representation of dish by words for generative AI that includes photgraphic settings of key image of dish, according to user input food domains and cheracteristics. Generate details as far as you can by brainstorming to fullfill all parameters. Implement linguistic adjustments to prevent and rectify errors. - -#templates diff --git a/spaces/A-Roucher/Quotes/app.py b/spaces/A-Roucher/Quotes/app.py deleted file mode 100644 index 7674b389782c7e2ee4174ef0ea289707fd5211aa..0000000000000000000000000000000000000000 --- a/spaces/A-Roucher/Quotes/app.py +++ /dev/null @@ -1,52 +0,0 @@ -import streamlit as st -from sentence_transformers import SentenceTransformer -import datasets -import time -import faiss - - -if "initialized" not in st.session_state: - st.session_state.dataset = datasets.load_dataset('A-Roucher/english_historical_quotes', download_mode="force_redownload")['train'] - st.session_state.all_authors = list(set(st.session_state.dataset['author'])) - model_name = "BAAI/bge-small-en-v1.5" # "Cohere/Cohere-embed-english-light-v3.0" # "sentence-transformers/all-MiniLM-L6-v2" - st.session_state.encoder = SentenceTransformer(model_name) - st.session_state.index = faiss.read_index('index_alone.faiss') - st.session_state.initialized=True - -def search(query): - start = time.time() - if len(query.strip()) == 0: - return "" - - query_embedding = st.session_state.encoder.encode([query]) - - _, samples = st.session_state.index.search( - query_embedding, k=10 - ) - quotes = st.session_state.dataset.select(samples[0]) - - result = "\n\n" - for i in range(len(quotes)): - result += f"###### {quotes['author'][i]}\n> {quotes['quote'][i]}\n----\n" - - delay = "%.3f" % (time.time() - start) - return f"_Computation time: **{delay} seconds**_{result}" - - -st.markdown( - """ - - """,unsafe_allow_html=True -) -st.markdown("# 🏛 Quotes 🪶\n\n_Great mind thinks alike_: who had the same ideas as you?\n\nType your idea below, and find similar thoughts from famous historical figures.") -col1, col2 = st.columns([8, 2]) -text_input = col1.text_input("Type your idea here:", placeholder="Knowledge of history is power.") -submit_button = col2.button("_Search quotes!_") - -if submit_button: - st.markdown(search(text_input)) \ No newline at end of file diff --git a/spaces/AHzizi/WaifuVoiceGen/transforms.py b/spaces/AHzizi/WaifuVoiceGen/transforms.py deleted file mode 100644 index 
4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/AHzizi/WaifuVoiceGen/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the 
number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/AIConsultant/MusicGen/audiocraft/metrics/chroma_cosinesim.py b/spaces/AIConsultant/MusicGen/audiocraft/metrics/chroma_cosinesim.py deleted file mode 100644 index 40c26081b803c2017fae1b6d7d086f0b0e074cef..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/metrics/chroma_cosinesim.py +++ /dev/null @@ -1,72 +0,0 @@ -# Copyright 
(c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torchmetrics - -from ..data.audio_utils import convert_audio -from ..modules.chroma import ChromaExtractor - - -class ChromaCosineSimilarityMetric(torchmetrics.Metric): - """Chroma cosine similarity metric. - - This metric extracts a chromagram for a reference waveform and - a generated waveform and compares each frame using the cosine similarity - function. The output is the mean cosine similarity. - - Args: - sample_rate (int): Sample rate used by the chroma extractor. - n_chroma (int): Number of chroma used by the chroma extractor. - radix2_exp (int): Exponent for the chroma extractor. - argmax (bool): Whether the chroma extractor uses argmax. - eps (float): Epsilon for cosine similarity computation. - """ - def __init__(self, sample_rate: int, n_chroma: int, radix2_exp: int, argmax: bool, eps: float = 1e-8): - super().__init__() - self.chroma_sample_rate = sample_rate - self.n_chroma = n_chroma - self.eps = eps - self.chroma_extractor = ChromaExtractor(sample_rate=self.chroma_sample_rate, n_chroma=self.n_chroma, - radix2_exp=radix2_exp, argmax=argmax) - self.add_state("cosine_sum", default=torch.tensor(0.), dist_reduce_fx="sum") - self.add_state("weight", default=torch.tensor(0.), dist_reduce_fx="sum") - - def update(self, preds: torch.Tensor, targets: torch.Tensor, - sizes: torch.Tensor, sample_rates: torch.Tensor) -> None: - """Compute cosine similarity between chromagrams and accumulate scores over the dataset.""" - if preds.size(0) == 0: - return - - assert preds.shape == targets.shape, ( - f"Preds and target shapes mismatch: preds={preds.shape}, targets={targets.shape}") - assert preds.size(0) == sizes.size(0), ( - f"Number of items in preds ({preds.shape}) mismatch ", - f"with sizes ({sizes.shape})") - assert preds.size(0) == sample_rates.size(0), ( - f"Number of items in preds ({preds.shape}) mismatch ", - f"with sample_rates ({sample_rates.shape})") - assert torch.all(sample_rates == sample_rates[0].item()), "All sample rates are not the same in the batch" - - device = self.weight.device - preds, targets = preds.to(device), targets.to(device) # type: ignore - sample_rate = sample_rates[0].item() - preds = convert_audio(preds, from_rate=sample_rate, to_rate=self.chroma_sample_rate, to_channels=1) - targets = convert_audio(targets, from_rate=sample_rate, to_rate=self.chroma_sample_rate, to_channels=1) - gt_chroma = self.chroma_extractor(targets) - gen_chroma = self.chroma_extractor(preds) - chroma_lens = (sizes / self.chroma_extractor.winhop).ceil().int() - for i in range(len(gt_chroma)): - t = int(chroma_lens[i].item()) - cosine_sim = torch.nn.functional.cosine_similarity( - gt_chroma[i, :t], gen_chroma[i, :t], dim=1, eps=self.eps) - self.cosine_sum += cosine_sim.sum(dim=0) # type: ignore - self.weight += torch.tensor(t) # type: ignore - - def compute(self) -> float: - """Computes the average cosine similarty across all generated/target chromagrams pairs.""" - assert self.weight.item() > 0, "Unable to compute with total number of comparisons <= 0" # type: ignore - return (self.cosine_sum / self.weight).item() # type: ignore diff --git a/spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn.py b/spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn.py deleted file mode 100644 index 4deacabaaf35e315c363c9eada9ff0c41f2561e5..0000000000000000000000000000000000000000 --- 
a/spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn.py +++ /dev/null @@ -1,156 +0,0 @@ -import numpy as np -import torch -from PIL import Image -from models.mtcnn.mtcnn_pytorch.src.get_nets import PNet, RNet, ONet -from models.mtcnn.mtcnn_pytorch.src.box_utils import nms, calibrate_box, get_image_boxes, convert_to_square -from models.mtcnn.mtcnn_pytorch.src.first_stage import run_first_stage -from models.mtcnn.mtcnn_pytorch.src.align_trans import get_reference_facial_points, warp_and_crop_face - -device = 'cuda:0' - - -class MTCNN(): - def __init__(self): - print(device) - self.pnet = PNet().to(device) - self.rnet = RNet().to(device) - self.onet = ONet().to(device) - self.pnet.eval() - self.rnet.eval() - self.onet.eval() - self.refrence = get_reference_facial_points(default_square=True) - - def align(self, img): - _, landmarks = self.detect_faces(img) - if len(landmarks) == 0: - return None, None - facial5points = [[landmarks[0][j], landmarks[0][j + 5]] for j in range(5)] - warped_face, tfm = warp_and_crop_face(np.array(img), facial5points, self.refrence, crop_size=(112, 112)) - return Image.fromarray(warped_face), tfm - - def align_multi(self, img, limit=None, min_face_size=30.0): - boxes, landmarks = self.detect_faces(img, min_face_size) - if limit: - boxes = boxes[:limit] - landmarks = landmarks[:limit] - faces = [] - tfms = [] - for landmark in landmarks: - facial5points = [[landmark[j], landmark[j + 5]] for j in range(5)] - warped_face, tfm = warp_and_crop_face(np.array(img), facial5points, self.refrence, crop_size=(112, 112)) - faces.append(Image.fromarray(warped_face)) - tfms.append(tfm) - return boxes, faces, tfms - - def detect_faces(self, image, min_face_size=20.0, - thresholds=[0.15, 0.25, 0.35], - nms_thresholds=[0.7, 0.7, 0.7]): - """ - Arguments: - image: an instance of PIL.Image. - min_face_size: a float number. - thresholds: a list of length 3. - nms_thresholds: a list of length 3. - - Returns: - two float numpy arrays of shapes [n_boxes, 4] and [n_boxes, 10], - bounding boxes and facial landmarks. 
- """ - - # BUILD AN IMAGE PYRAMID - width, height = image.size - min_length = min(height, width) - - min_detection_size = 12 - factor = 0.707 # sqrt(0.5) - - # scales for scaling the image - scales = [] - - # scales the image so that - # minimum size that we can detect equals to - # minimum face size that we want to detect - m = min_detection_size / min_face_size - min_length *= m - - factor_count = 0 - while min_length > min_detection_size: - scales.append(m * factor ** factor_count) - min_length *= factor - factor_count += 1 - - # STAGE 1 - - # it will be returned - bounding_boxes = [] - - with torch.no_grad(): - # run P-Net on different scales - for s in scales: - boxes = run_first_stage(image, self.pnet, scale=s, threshold=thresholds[0]) - bounding_boxes.append(boxes) - - # collect boxes (and offsets, and scores) from different scales - bounding_boxes = [i for i in bounding_boxes if i is not None] - bounding_boxes = np.vstack(bounding_boxes) - - keep = nms(bounding_boxes[:, 0:5], nms_thresholds[0]) - bounding_boxes = bounding_boxes[keep] - - # use offsets predicted by pnet to transform bounding boxes - bounding_boxes = calibrate_box(bounding_boxes[:, 0:5], bounding_boxes[:, 5:]) - # shape [n_boxes, 5] - - bounding_boxes = convert_to_square(bounding_boxes) - bounding_boxes[:, 0:4] = np.round(bounding_boxes[:, 0:4]) - - # STAGE 2 - - img_boxes = get_image_boxes(bounding_boxes, image, size=24) - img_boxes = torch.FloatTensor(img_boxes).to(device) - - output = self.rnet(img_boxes) - offsets = output[0].cpu().data.numpy() # shape [n_boxes, 4] - probs = output[1].cpu().data.numpy() # shape [n_boxes, 2] - - keep = np.where(probs[:, 1] > thresholds[1])[0] - bounding_boxes = bounding_boxes[keep] - bounding_boxes[:, 4] = probs[keep, 1].reshape((-1,)) - offsets = offsets[keep] - - keep = nms(bounding_boxes, nms_thresholds[1]) - bounding_boxes = bounding_boxes[keep] - bounding_boxes = calibrate_box(bounding_boxes, offsets[keep]) - bounding_boxes = convert_to_square(bounding_boxes) - bounding_boxes[:, 0:4] = np.round(bounding_boxes[:, 0:4]) - - # STAGE 3 - - img_boxes = get_image_boxes(bounding_boxes, image, size=48) - if len(img_boxes) == 0: - return [], [] - img_boxes = torch.FloatTensor(img_boxes).to(device) - output = self.onet(img_boxes) - landmarks = output[0].cpu().data.numpy() # shape [n_boxes, 10] - offsets = output[1].cpu().data.numpy() # shape [n_boxes, 4] - probs = output[2].cpu().data.numpy() # shape [n_boxes, 2] - - keep = np.where(probs[:, 1] > thresholds[2])[0] - bounding_boxes = bounding_boxes[keep] - bounding_boxes[:, 4] = probs[keep, 1].reshape((-1,)) - offsets = offsets[keep] - landmarks = landmarks[keep] - - # compute landmark points - width = bounding_boxes[:, 2] - bounding_boxes[:, 0] + 1.0 - height = bounding_boxes[:, 3] - bounding_boxes[:, 1] + 1.0 - xmin, ymin = bounding_boxes[:, 0], bounding_boxes[:, 1] - landmarks[:, 0:5] = np.expand_dims(xmin, 1) + np.expand_dims(width, 1) * landmarks[:, 0:5] - landmarks[:, 5:10] = np.expand_dims(ymin, 1) + np.expand_dims(height, 1) * landmarks[:, 5:10] - - bounding_boxes = calibrate_box(bounding_boxes, offsets) - keep = nms(bounding_boxes, nms_thresholds[2], mode='min') - bounding_boxes = bounding_boxes[keep] - landmarks = landmarks[keep] - - return bounding_boxes, landmarks diff --git a/spaces/AIFILMS/generate_human_motion/VQ-Trans/utils/rotation_conversions.py b/spaces/AIFILMS/generate_human_motion/VQ-Trans/utils/rotation_conversions.py deleted file mode 100644 index 
1006e8a3117b231a7a456d5b826e76347fe0bfd4..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/generate_human_motion/VQ-Trans/utils/rotation_conversions.py +++ /dev/null @@ -1,532 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved. -# Check PYTORCH3D_LICENCE before use - -import functools -from typing import Optional - -import torch -import torch.nn.functional as F - - -""" -The transformation matrices returned from the functions in this file assume -the points on which the transformation will be applied are column vectors. -i.e. the R matrix is structured as - R = [ - [Rxx, Rxy, Rxz], - [Ryx, Ryy, Ryz], - [Rzx, Rzy, Rzz], - ] # (3, 3) -This matrix can be applied to column vectors by post multiplication -by the points e.g. - points = [[0], [1], [2]] # (3 x 1) xyz coordinates of a point - transformed_points = R * points -To apply the same matrix to points which are row vectors, the R matrix -can be transposed and pre multiplied by the points: -e.g. - points = [[0, 1, 2]] # (1 x 3) xyz coordinates of a point - transformed_points = points * R.transpose(1, 0) -""" - - -def quaternion_to_matrix(quaternions): - """ - Convert rotations given as quaternions to rotation matrices. - Args: - quaternions: quaternions with real part first, - as tensor of shape (..., 4). - Returns: - Rotation matrices as tensor of shape (..., 3, 3). - """ - r, i, j, k = torch.unbind(quaternions, -1) - two_s = 2.0 / (quaternions * quaternions).sum(-1) - - o = torch.stack( - ( - 1 - two_s * (j * j + k * k), - two_s * (i * j - k * r), - two_s * (i * k + j * r), - two_s * (i * j + k * r), - 1 - two_s * (i * i + k * k), - two_s * (j * k - i * r), - two_s * (i * k - j * r), - two_s * (j * k + i * r), - 1 - two_s * (i * i + j * j), - ), - -1, - ) - return o.reshape(quaternions.shape[:-1] + (3, 3)) - - -def _copysign(a, b): - """ - Return a tensor where each element has the absolute value taken from the, - corresponding element of a, with sign taken from the corresponding - element of b. This is like the standard copysign floating-point operation, - but is not careful about negative 0 and NaN. - Args: - a: source tensor. - b: tensor whose signs will be used, of the same shape as a. - Returns: - Tensor of the same shape as a with the signs of b. - """ - signs_differ = (a < 0) != (b < 0) - return torch.where(signs_differ, -a, a) - - -def _sqrt_positive_part(x): - """ - Returns torch.sqrt(torch.max(0, x)) - but with a zero subgradient where x is 0. - """ - ret = torch.zeros_like(x) - positive_mask = x > 0 - ret[positive_mask] = torch.sqrt(x[positive_mask]) - return ret - - -def matrix_to_quaternion(matrix): - """ - Convert rotations given as rotation matrices to quaternions. - Args: - matrix: Rotation matrices as tensor of shape (..., 3, 3). - Returns: - quaternions with real part first, as tensor of shape (..., 4). 
- """ - if matrix.size(-1) != 3 or matrix.size(-2) != 3: - raise ValueError(f"Invalid rotation matrix shape f{matrix.shape}.") - m00 = matrix[..., 0, 0] - m11 = matrix[..., 1, 1] - m22 = matrix[..., 2, 2] - o0 = 0.5 * _sqrt_positive_part(1 + m00 + m11 + m22) - x = 0.5 * _sqrt_positive_part(1 + m00 - m11 - m22) - y = 0.5 * _sqrt_positive_part(1 - m00 + m11 - m22) - z = 0.5 * _sqrt_positive_part(1 - m00 - m11 + m22) - o1 = _copysign(x, matrix[..., 2, 1] - matrix[..., 1, 2]) - o2 = _copysign(y, matrix[..., 0, 2] - matrix[..., 2, 0]) - o3 = _copysign(z, matrix[..., 1, 0] - matrix[..., 0, 1]) - return torch.stack((o0, o1, o2, o3), -1) - - -def _axis_angle_rotation(axis: str, angle): - """ - Return the rotation matrices for one of the rotations about an axis - of which Euler angles describe, for each value of the angle given. - Args: - axis: Axis label "X" or "Y or "Z". - angle: any shape tensor of Euler angles in radians - Returns: - Rotation matrices as tensor of shape (..., 3, 3). - """ - - cos = torch.cos(angle) - sin = torch.sin(angle) - one = torch.ones_like(angle) - zero = torch.zeros_like(angle) - - if axis == "X": - R_flat = (one, zero, zero, zero, cos, -sin, zero, sin, cos) - if axis == "Y": - R_flat = (cos, zero, sin, zero, one, zero, -sin, zero, cos) - if axis == "Z": - R_flat = (cos, -sin, zero, sin, cos, zero, zero, zero, one) - - return torch.stack(R_flat, -1).reshape(angle.shape + (3, 3)) - - -def euler_angles_to_matrix(euler_angles, convention: str): - """ - Convert rotations given as Euler angles in radians to rotation matrices. - Args: - euler_angles: Euler angles in radians as tensor of shape (..., 3). - convention: Convention string of three uppercase letters from - {"X", "Y", and "Z"}. - Returns: - Rotation matrices as tensor of shape (..., 3, 3). - """ - if euler_angles.dim() == 0 or euler_angles.shape[-1] != 3: - raise ValueError("Invalid input euler angles.") - if len(convention) != 3: - raise ValueError("Convention must have 3 letters.") - if convention[1] in (convention[0], convention[2]): - raise ValueError(f"Invalid convention {convention}.") - for letter in convention: - if letter not in ("X", "Y", "Z"): - raise ValueError(f"Invalid letter {letter} in convention string.") - matrices = map(_axis_angle_rotation, convention, torch.unbind(euler_angles, -1)) - return functools.reduce(torch.matmul, matrices) - - -def _angle_from_tan( - axis: str, other_axis: str, data, horizontal: bool, tait_bryan: bool -): - """ - Extract the first or third Euler angle from the two members of - the matrix which are positive constant times its sine and cosine. - Args: - axis: Axis label "X" or "Y or "Z" for the angle we are finding. - other_axis: Axis label "X" or "Y or "Z" for the middle axis in the - convention. - data: Rotation matrices as tensor of shape (..., 3, 3). - horizontal: Whether we are looking for the angle for the third axis, - which means the relevant entries are in the same row of the - rotation matrix. If not, they are in the same column. - tait_bryan: Whether the first and third axes in the convention differ. - Returns: - Euler Angles in radians for each matrix in data as a tensor - of shape (...). 
- """ - - i1, i2 = {"X": (2, 1), "Y": (0, 2), "Z": (1, 0)}[axis] - if horizontal: - i2, i1 = i1, i2 - even = (axis + other_axis) in ["XY", "YZ", "ZX"] - if horizontal == even: - return torch.atan2(data[..., i1], data[..., i2]) - if tait_bryan: - return torch.atan2(-data[..., i2], data[..., i1]) - return torch.atan2(data[..., i2], -data[..., i1]) - - -def _index_from_letter(letter: str): - if letter == "X": - return 0 - if letter == "Y": - return 1 - if letter == "Z": - return 2 - - -def matrix_to_euler_angles(matrix, convention: str): - """ - Convert rotations given as rotation matrices to Euler angles in radians. - Args: - matrix: Rotation matrices as tensor of shape (..., 3, 3). - convention: Convention string of three uppercase letters. - Returns: - Euler angles in radians as tensor of shape (..., 3). - """ - if len(convention) != 3: - raise ValueError("Convention must have 3 letters.") - if convention[1] in (convention[0], convention[2]): - raise ValueError(f"Invalid convention {convention}.") - for letter in convention: - if letter not in ("X", "Y", "Z"): - raise ValueError(f"Invalid letter {letter} in convention string.") - if matrix.size(-1) != 3 or matrix.size(-2) != 3: - raise ValueError(f"Invalid rotation matrix shape f{matrix.shape}.") - i0 = _index_from_letter(convention[0]) - i2 = _index_from_letter(convention[2]) - tait_bryan = i0 != i2 - if tait_bryan: - central_angle = torch.asin( - matrix[..., i0, i2] * (-1.0 if i0 - i2 in [-1, 2] else 1.0) - ) - else: - central_angle = torch.acos(matrix[..., i0, i0]) - - o = ( - _angle_from_tan( - convention[0], convention[1], matrix[..., i2], False, tait_bryan - ), - central_angle, - _angle_from_tan( - convention[2], convention[1], matrix[..., i0, :], True, tait_bryan - ), - ) - return torch.stack(o, -1) - - -def random_quaternions( - n: int, dtype: Optional[torch.dtype] = None, device=None, requires_grad=False -): - """ - Generate random quaternions representing rotations, - i.e. versors with nonnegative real part. - Args: - n: Number of quaternions in a batch to return. - dtype: Type to return. - device: Desired device of returned tensor. Default: - uses the current device for the default tensor type. - requires_grad: Whether the resulting tensor should have the gradient - flag set. - Returns: - Quaternions as tensor of shape (N, 4). - """ - o = torch.randn((n, 4), dtype=dtype, device=device, requires_grad=requires_grad) - s = (o * o).sum(1) - o = o / _copysign(torch.sqrt(s), o[:, 0])[:, None] - return o - - -def random_rotations( - n: int, dtype: Optional[torch.dtype] = None, device=None, requires_grad=False -): - """ - Generate random rotations as 3x3 rotation matrices. - Args: - n: Number of rotation matrices in a batch to return. - dtype: Type to return. - device: Device of returned tensor. Default: if None, - uses the current device for the default tensor type. - requires_grad: Whether the resulting tensor should have the gradient - flag set. - Returns: - Rotation matrices as tensor of shape (n, 3, 3). - """ - quaternions = random_quaternions( - n, dtype=dtype, device=device, requires_grad=requires_grad - ) - return quaternion_to_matrix(quaternions) - - -def random_rotation( - dtype: Optional[torch.dtype] = None, device=None, requires_grad=False -): - """ - Generate a single random 3x3 rotation matrix. - Args: - dtype: Type to return - device: Device of returned tensor. 
Default: if None, - uses the current device for the default tensor type - requires_grad: Whether the resulting tensor should have the gradient - flag set - Returns: - Rotation matrix as tensor of shape (3, 3). - """ - return random_rotations(1, dtype, device, requires_grad)[0] - - -def standardize_quaternion(quaternions): - """ - Convert a unit quaternion to a standard form: one in which the real - part is non negative. - Args: - quaternions: Quaternions with real part first, - as tensor of shape (..., 4). - Returns: - Standardized quaternions as tensor of shape (..., 4). - """ - return torch.where(quaternions[..., 0:1] < 0, -quaternions, quaternions) - - -def quaternion_raw_multiply(a, b): - """ - Multiply two quaternions. - Usual torch rules for broadcasting apply. - Args: - a: Quaternions as tensor of shape (..., 4), real part first. - b: Quaternions as tensor of shape (..., 4), real part first. - Returns: - The product of a and b, a tensor of quaternions shape (..., 4). - """ - aw, ax, ay, az = torch.unbind(a, -1) - bw, bx, by, bz = torch.unbind(b, -1) - ow = aw * bw - ax * bx - ay * by - az * bz - ox = aw * bx + ax * bw + ay * bz - az * by - oy = aw * by - ax * bz + ay * bw + az * bx - oz = aw * bz + ax * by - ay * bx + az * bw - return torch.stack((ow, ox, oy, oz), -1) - - -def quaternion_multiply(a, b): - """ - Multiply two quaternions representing rotations, returning the quaternion - representing their composition, i.e. the versor with nonnegative real part. - Usual torch rules for broadcasting apply. - Args: - a: Quaternions as tensor of shape (..., 4), real part first. - b: Quaternions as tensor of shape (..., 4), real part first. - Returns: - The product of a and b, a tensor of quaternions of shape (..., 4). - """ - ab = quaternion_raw_multiply(a, b) - return standardize_quaternion(ab) - - -def quaternion_invert(quaternion): - """ - Given a quaternion representing rotation, get the quaternion representing - its inverse. - Args: - quaternion: Quaternions as tensor of shape (..., 4), with real part - first, which must be versors (unit quaternions). - Returns: - The inverse, a tensor of quaternions of shape (..., 4). - """ - - return quaternion * quaternion.new_tensor([1, -1, -1, -1]) - - -def quaternion_apply(quaternion, point): - """ - Apply the rotation given by a quaternion to a 3D point. - Usual torch rules for broadcasting apply. - Args: - quaternion: Tensor of quaternions, real part first, of shape (..., 4). - point: Tensor of 3D points of shape (..., 3). - Returns: - Tensor of rotated points of shape (..., 3). - """ - if point.size(-1) != 3: - raise ValueError(f"Points are not in 3D, f{point.shape}.") - real_parts = point.new_zeros(point.shape[:-1] + (1,)) - point_as_quaternion = torch.cat((real_parts, point), -1) - out = quaternion_raw_multiply( - quaternion_raw_multiply(quaternion, point_as_quaternion), - quaternion_invert(quaternion), - ) - return out[..., 1:] - - -def axis_angle_to_matrix(axis_angle): - """ - Convert rotations given as axis/angle to rotation matrices. - Args: - axis_angle: Rotations given as a vector in axis angle form, - as a tensor of shape (..., 3), where the magnitude is - the angle turned anticlockwise in radians around the - vector's direction. - Returns: - Rotation matrices as tensor of shape (..., 3, 3). - """ - return quaternion_to_matrix(axis_angle_to_quaternion(axis_angle)) - - -def matrix_to_axis_angle(matrix): - """ - Convert rotations given as rotation matrices to axis/angle. 
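As a quick illustration of quaternion_apply and quaternion_invert defined above (a sketch; the module import path is an assumption):

```python
import math
import torch
from rotation_conversions import quaternion_apply, quaternion_invert  # assumed path

# unit quaternion (real part first) for a 90 degree rotation about the z axis
half = math.pi / 4
q = torch.tensor([math.cos(half), 0.0, 0.0, math.sin(half)])

p = torch.tensor([1.0, 0.0, 0.0])
p_rot = quaternion_apply(q, p)                           # ~[0, 1, 0]
p_back = quaternion_apply(quaternion_invert(q), p_rot)   # ~[1, 0, 0] again
print(p_rot, p_back)
```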
- Args: - matrix: Rotation matrices as tensor of shape (..., 3, 3). - Returns: - Rotations given as a vector in axis angle form, as a tensor - of shape (..., 3), where the magnitude is the angle - turned anticlockwise in radians around the vector's - direction. - """ - return quaternion_to_axis_angle(matrix_to_quaternion(matrix)) - - -def axis_angle_to_quaternion(axis_angle): - """ - Convert rotations given as axis/angle to quaternions. - Args: - axis_angle: Rotations given as a vector in axis angle form, - as a tensor of shape (..., 3), where the magnitude is - the angle turned anticlockwise in radians around the - vector's direction. - Returns: - quaternions with real part first, as tensor of shape (..., 4). - """ - angles = torch.norm(axis_angle, p=2, dim=-1, keepdim=True) - half_angles = 0.5 * angles - eps = 1e-6 - small_angles = angles.abs() < eps - sin_half_angles_over_angles = torch.empty_like(angles) - sin_half_angles_over_angles[~small_angles] = ( - torch.sin(half_angles[~small_angles]) / angles[~small_angles] - ) - # for x small, sin(x/2) is about x/2 - (x/2)^3/6 - # so sin(x/2)/x is about 1/2 - (x*x)/48 - sin_half_angles_over_angles[small_angles] = ( - 0.5 - (angles[small_angles] * angles[small_angles]) / 48 - ) - quaternions = torch.cat( - [torch.cos(half_angles), axis_angle * sin_half_angles_over_angles], dim=-1 - ) - return quaternions - - -def quaternion_to_axis_angle(quaternions): - """ - Convert rotations given as quaternions to axis/angle. - Args: - quaternions: quaternions with real part first, - as tensor of shape (..., 4). - Returns: - Rotations given as a vector in axis angle form, as a tensor - of shape (..., 3), where the magnitude is the angle - turned anticlockwise in radians around the vector's - direction. - """ - norms = torch.norm(quaternions[..., 1:], p=2, dim=-1, keepdim=True) - half_angles = torch.atan2(norms, quaternions[..., :1]) - angles = 2 * half_angles - eps = 1e-6 - small_angles = angles.abs() < eps - sin_half_angles_over_angles = torch.empty_like(angles) - sin_half_angles_over_angles[~small_angles] = ( - torch.sin(half_angles[~small_angles]) / angles[~small_angles] - ) - # for x small, sin(x/2) is about x/2 - (x/2)^3/6 - # so sin(x/2)/x is about 1/2 - (x*x)/48 - sin_half_angles_over_angles[small_angles] = ( - 0.5 - (angles[small_angles] * angles[small_angles]) / 48 - ) - return quaternions[..., 1:] / sin_half_angles_over_angles - - -def rotation_6d_to_matrix(d6: torch.Tensor) -> torch.Tensor: - """ - Converts 6D rotation representation by Zhou et al. [1] to rotation matrix - using Gram--Schmidt orthogonalisation per Section B of [1]. - Args: - d6: 6D rotation representation, of size (*, 6) - Returns: - batch of rotation matrices of size (*, 3, 3) - [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. - On the Continuity of Rotation Representations in Neural Networks. - IEEE Conference on Computer Vision and Pattern Recognition, 2019. - Retrieved from http://arxiv.org/abs/1812.07035 - """ - - a1, a2 = d6[..., :3], d6[..., 3:] - b1 = F.normalize(a1, dim=-1) - b2 = a2 - (b1 * a2).sum(-1, keepdim=True) * b1 - b2 = F.normalize(b2, dim=-1) - b3 = torch.cross(b1, b2, dim=-1) - return torch.stack((b1, b2, b3), dim=-2) - - -def matrix_to_rotation_6d(matrix: torch.Tensor) -> torch.Tensor: - """ - Converts rotation matrices to 6D rotation representation by Zhou et al. [1] - by dropping the last row. Note that 6D representation is not unique. 
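The two converters just above invert each other away from the zero rotation; a small sanity check, assuming the same module import as before:

```python
import torch
from rotation_conversions import axis_angle_to_quaternion, quaternion_to_axis_angle  # assumed path

aa = torch.tensor([[0.0, 1.5, 0.0],
                   [0.1, -0.2, 0.3]])            # axis * angle, angle in radians
q = axis_angle_to_quaternion(aa)                 # (2, 4), real part first
aa_back = quaternion_to_axis_angle(q)            # (2, 3)
print(torch.allclose(aa, aa_back, atol=1e-5))    # True
```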
- Args: - matrix: batch of rotation matrices of size (*, 3, 3) - Returns: - 6D rotation representation, of size (*, 6) - [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. - On the Continuity of Rotation Representations in Neural Networks. - IEEE Conference on Computer Vision and Pattern Recognition, 2019. - Retrieved from http://arxiv.org/abs/1812.07035 - """ - return matrix[..., :2, :].clone().reshape(*matrix.size()[:-2], 6) - -def canonicalize_smplh(poses, trans = None): - bs, nframes, njoints = poses.shape[:3] - - global_orient = poses[:, :, 0] - - # first global rotations - rot2d = matrix_to_axis_angle(global_orient[:, 0]) - #rot2d[:, :2] = 0 # Remove the rotation along the vertical axis - rot2d = axis_angle_to_matrix(rot2d) - - # Rotate the global rotation to eliminate Z rotations - global_orient = torch.einsum("ikj,imkl->imjl", rot2d, global_orient) - - # Construct canonicalized version of x - xc = torch.cat((global_orient[:, :, None], poses[:, :, 1:]), dim=2) - - if trans is not None: - vel = trans[:, 1:] - trans[:, :-1] - # Turn the translation as well - vel = torch.einsum("ikj,ilk->ilj", rot2d, vel) - trans = torch.cat((torch.zeros(bs, 1, 3, device=vel.device), - torch.cumsum(vel, 1)), 1) - return xc, trans - else: - return xc - - \ No newline at end of file diff --git a/spaces/AIFILMS/image-to-sound-fx/README.md b/spaces/AIFILMS/image-to-sound-fx/README.md deleted file mode 100644 index 3e3cce556677dac2d274b16fb305c6664e8af132..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/image-to-sound-fx/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Image To Sound FX -emoji: 👁👂 -colorFrom: yellow -colorTo: red -sdk: gradio -sdk_version: 3.17.1b2 -app_file: app.py -pinned: false -duplicated_from: fffiloni/image-to-sound-fx ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/commons/ssim.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/commons/ssim.py deleted file mode 100644 index 0d0241f267ef58b24979e022b05f2a9adf768826..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/commons/ssim.py +++ /dev/null @@ -1,391 +0,0 @@ -# ''' -# https://github.com/One-sixth/ms_ssim_pytorch/blob/master/ssim.py -# ''' -# -# import torch -# import torch.jit -# import torch.nn.functional as F -# -# -# @torch.jit.script -# def create_window(window_size: int, sigma: float, channel: int): -# ''' -# Create 1-D gauss kernel -# :param window_size: the size of gauss kernel -# :param sigma: sigma of normal distribution -# :param channel: input channel -# :return: 1D kernel -# ''' -# coords = torch.arange(window_size, dtype=torch.float) -# coords -= window_size // 2 -# -# g = torch.exp(-(coords ** 2) / (2 * sigma ** 2)) -# g /= g.sum() -# -# g = g.reshape(1, 1, 1, -1).repeat(channel, 1, 1, 1) -# return g -# -# -# @torch.jit.script -# def _gaussian_filter(x, window_1d, use_padding: bool): -# ''' -# Blur input with 1-D kernel -# :param x: batch of tensors to be blured -# :param window_1d: 1-D gauss kernel -# :param use_padding: padding image before conv -# :return: blured tensors -# ''' -# C = x.shape[1] -# padding = 0 -# if use_padding: -# window_size = window_1d.shape[3] -# padding = window_size // 2 -# out = F.conv2d(x, window_1d, stride=1, padding=(0, padding), groups=C) -# out = F.conv2d(out, window_1d.transpose(2, 3), stride=1, padding=(padding, 0), groups=C) -# return out -# -# -# @torch.jit.script -# def ssim(X, Y, window, data_range: float, 
use_padding: bool = False): -# ''' -# Calculate ssim index for X and Y -# :param X: images [B, C, H, N_bins] -# :param Y: images [B, C, H, N_bins] -# :param window: 1-D gauss kernel -# :param data_range: value range of input images. (usually 1.0 or 255) -# :param use_padding: padding image before conv -# :return: -# ''' -# -# K1 = 0.01 -# K2 = 0.03 -# compensation = 1.0 -# -# C1 = (K1 * data_range) ** 2 -# C2 = (K2 * data_range) ** 2 -# -# mu1 = _gaussian_filter(X, window, use_padding) -# mu2 = _gaussian_filter(Y, window, use_padding) -# sigma1_sq = _gaussian_filter(X * X, window, use_padding) -# sigma2_sq = _gaussian_filter(Y * Y, window, use_padding) -# sigma12 = _gaussian_filter(X * Y, window, use_padding) -# -# mu1_sq = mu1.pow(2) -# mu2_sq = mu2.pow(2) -# mu1_mu2 = mu1 * mu2 -# -# sigma1_sq = compensation * (sigma1_sq - mu1_sq) -# sigma2_sq = compensation * (sigma2_sq - mu2_sq) -# sigma12 = compensation * (sigma12 - mu1_mu2) -# -# cs_map = (2 * sigma12 + C2) / (sigma1_sq + sigma2_sq + C2) -# # Fixed the issue that the negative value of cs_map caused ms_ssim to output Nan. -# cs_map = cs_map.clamp_min(0.) -# ssim_map = ((2 * mu1_mu2 + C1) / (mu1_sq + mu2_sq + C1)) * cs_map -# -# ssim_val = ssim_map.mean(dim=(1, 2, 3)) # reduce along CHW -# cs = cs_map.mean(dim=(1, 2, 3)) -# -# return ssim_val, cs -# -# -# @torch.jit.script -# def ms_ssim(X, Y, window, data_range: float, weights, use_padding: bool = False, eps: float = 1e-8): -# ''' -# interface of ms-ssim -# :param X: a batch of images, (N,C,H,W) -# :param Y: a batch of images, (N,C,H,W) -# :param window: 1-D gauss kernel -# :param data_range: value range of input images. (usually 1.0 or 255) -# :param weights: weights for different levels -# :param use_padding: padding image before conv -# :param eps: use for avoid grad nan. -# :return: -# ''' -# levels = weights.shape[0] -# cs_vals = [] -# ssim_vals = [] -# for _ in range(levels): -# ssim_val, cs = ssim(X, Y, window=window, data_range=data_range, use_padding=use_padding) -# # Use for fix a issue. When c = a ** b and a is 0, c.backward() will cause the a.grad become inf. -# ssim_val = ssim_val.clamp_min(eps) -# cs = cs.clamp_min(eps) -# cs_vals.append(cs) -# -# ssim_vals.append(ssim_val) -# padding = (X.shape[2] % 2, X.shape[3] % 2) -# X = F.avg_pool2d(X, kernel_size=2, stride=2, padding=padding) -# Y = F.avg_pool2d(Y, kernel_size=2, stride=2, padding=padding) -# -# cs_vals = torch.stack(cs_vals, dim=0) -# ms_ssim_val = torch.prod((cs_vals[:-1] ** weights[:-1].unsqueeze(1)) * (ssim_vals[-1] ** weights[-1]), dim=0) -# return ms_ssim_val -# -# -# class SSIM(torch.jit.ScriptModule): -# __constants__ = ['data_range', 'use_padding'] -# -# def __init__(self, window_size=11, window_sigma=1.5, data_range=255., channel=3, use_padding=False): -# ''' -# :param window_size: the size of gauss kernel -# :param window_sigma: sigma of normal distribution -# :param data_range: value range of input images. (usually 1.0 or 255) -# :param channel: input channels (default: 3) -# :param use_padding: padding image before conv -# ''' -# super().__init__() -# assert window_size % 2 == 1, 'Window size must be odd.' 
-# window = create_window(window_size, window_sigma, channel) -# self.register_buffer('window', window) -# self.data_range = data_range -# self.use_padding = use_padding -# -# @torch.jit.script_method -# def forward(self, X, Y): -# r = ssim(X, Y, window=self.window, data_range=self.data_range, use_padding=self.use_padding) -# return r[0] -# -# -# class MS_SSIM(torch.jit.ScriptModule): -# __constants__ = ['data_range', 'use_padding', 'eps'] -# -# def __init__(self, window_size=11, window_sigma=1.5, data_range=255., channel=3, use_padding=False, weights=None, -# levels=None, eps=1e-8): -# ''' -# class for ms-ssim -# :param window_size: the size of gauss kernel -# :param window_sigma: sigma of normal distribution -# :param data_range: value range of input images. (usually 1.0 or 255) -# :param channel: input channels -# :param use_padding: padding image before conv -# :param weights: weights for different levels. (default [0.0448, 0.2856, 0.3001, 0.2363, 0.1333]) -# :param levels: number of downsampling -# :param eps: Use for fix a issue. When c = a ** b and a is 0, c.backward() will cause the a.grad become inf. -# ''' -# super().__init__() -# assert window_size % 2 == 1, 'Window size must be odd.' -# self.data_range = data_range -# self.use_padding = use_padding -# self.eps = eps -# -# window = create_window(window_size, window_sigma, channel) -# self.register_buffer('window', window) -# -# if weights is None: -# weights = [0.0448, 0.2856, 0.3001, 0.2363, 0.1333] -# weights = torch.tensor(weights, dtype=torch.float) -# -# if levels is not None: -# weights = weights[:levels] -# weights = weights / weights.sum() -# -# self.register_buffer('weights', weights) -# -# @torch.jit.script_method -# def forward(self, X, Y): -# return ms_ssim(X, Y, window=self.window, data_range=self.data_range, weights=self.weights, -# use_padding=self.use_padding, eps=self.eps) -# -# -# if __name__ == '__main__': -# print('Simple Test') -# im = torch.randint(0, 255, (5, 3, 256, 256), dtype=torch.float, device='cuda') -# img1 = im / 255 -# img2 = img1 * 0.5 -# -# losser = SSIM(data_range=1.).cuda() -# loss = losser(img1, img2).mean() -# -# losser2 = MS_SSIM(data_range=1.).cuda() -# loss2 = losser2(img1, img2).mean() -# -# print(loss.item()) -# print(loss2.item()) -# -# if __name__ == '__main__': -# print('Training Test') -# import cv2 -# import torch.optim -# import numpy as np -# import imageio -# import time -# -# out_test_video = False -# # 最好不要直接输出gif图,会非常大,最好先输出mkv文件后用ffmpeg转换到GIF -# video_use_gif = False -# -# im = cv2.imread('test_img1.jpg', 1) -# t_im = torch.from_numpy(im).cuda().permute(2, 0, 1).float()[None] / 255. -# -# if out_test_video: -# if video_use_gif: -# fps = 0.5 -# out_wh = (im.shape[1] // 2, im.shape[0] // 2) -# suffix = '.gif' -# else: -# fps = 5 -# out_wh = (im.shape[1], im.shape[0]) -# suffix = '.mkv' -# video_last_time = time.perf_counter() -# video = imageio.get_writer('ssim_test' + suffix, fps=fps) -# -# # 测试ssim -# print('Training SSIM') -# rand_im = torch.randint_like(t_im, 0, 255, dtype=torch.float32) / 255. 
-# rand_im.requires_grad = True -# optim = torch.optim.Adam([rand_im], 0.003, eps=1e-8) -# losser = SSIM(data_range=1., channel=t_im.shape[1]).cuda() -# ssim_score = 0 -# while ssim_score < 0.999: -# optim.zero_grad() -# loss = losser(rand_im, t_im) -# (-loss).sum().backward() -# ssim_score = loss.item() -# optim.step() -# r_im = np.transpose(rand_im.detach().cpu().numpy().clip(0, 1) * 255, [0, 2, 3, 1]).astype(np.uint8)[0] -# r_im = cv2.putText(r_im, 'ssim %f' % ssim_score, (10, 30), cv2.FONT_HERSHEY_PLAIN, 2, (255, 0, 0), 2) -# -# if out_test_video: -# if time.perf_counter() - video_last_time > 1. / fps: -# video_last_time = time.perf_counter() -# out_frame = cv2.cvtColor(r_im, cv2.COLOR_BGR2RGB) -# out_frame = cv2.resize(out_frame, out_wh, interpolation=cv2.INTER_AREA) -# if isinstance(out_frame, cv2.UMat): -# out_frame = out_frame.get() -# video.append_data(out_frame) -# -# cv2.imshow('ssim', r_im) -# cv2.setWindowTitle('ssim', 'ssim %f' % ssim_score) -# cv2.waitKey(1) -# -# if out_test_video: -# video.close() -# -# # 测试ms_ssim -# if out_test_video: -# if video_use_gif: -# fps = 0.5 -# out_wh = (im.shape[1] // 2, im.shape[0] // 2) -# suffix = '.gif' -# else: -# fps = 5 -# out_wh = (im.shape[1], im.shape[0]) -# suffix = '.mkv' -# video_last_time = time.perf_counter() -# video = imageio.get_writer('ms_ssim_test' + suffix, fps=fps) -# -# print('Training MS_SSIM') -# rand_im = torch.randint_like(t_im, 0, 255, dtype=torch.float32) / 255. -# rand_im.requires_grad = True -# optim = torch.optim.Adam([rand_im], 0.003, eps=1e-8) -# losser = MS_SSIM(data_range=1., channel=t_im.shape[1]).cuda() -# ssim_score = 0 -# while ssim_score < 0.999: -# optim.zero_grad() -# loss = losser(rand_im, t_im) -# (-loss).sum().backward() -# ssim_score = loss.item() -# optim.step() -# r_im = np.transpose(rand_im.detach().cpu().numpy().clip(0, 1) * 255, [0, 2, 3, 1]).astype(np.uint8)[0] -# r_im = cv2.putText(r_im, 'ms_ssim %f' % ssim_score, (10, 30), cv2.FONT_HERSHEY_PLAIN, 2, (255, 0, 0), 2) -# -# if out_test_video: -# if time.perf_counter() - video_last_time > 1. 
/ fps: -# video_last_time = time.perf_counter() -# out_frame = cv2.cvtColor(r_im, cv2.COLOR_BGR2RGB) -# out_frame = cv2.resize(out_frame, out_wh, interpolation=cv2.INTER_AREA) -# if isinstance(out_frame, cv2.UMat): -# out_frame = out_frame.get() -# video.append_data(out_frame) -# -# cv2.imshow('ms_ssim', r_im) -# cv2.setWindowTitle('ms_ssim', 'ms_ssim %f' % ssim_score) -# cv2.waitKey(1) -# -# if out_test_video: -# video.close() - -""" -Adapted from https://github.com/Po-Hsun-Su/pytorch-ssim -""" - -import torch -import torch.nn.functional as F -from torch.autograd import Variable -import numpy as np -from math import exp - - -def gaussian(window_size, sigma): - gauss = torch.Tensor([exp(-(x - window_size // 2) ** 2 / float(2 * sigma ** 2)) for x in range(window_size)]) - return gauss / gauss.sum() - - -def create_window(window_size, channel): - _1D_window = gaussian(window_size, 1.5).unsqueeze(1) - _2D_window = _1D_window.mm(_1D_window.t()).float().unsqueeze(0).unsqueeze(0) - window = Variable(_2D_window.expand(channel, 1, window_size, window_size).contiguous()) - return window - - -def _ssim(img1, img2, window, window_size, channel, size_average=True): - mu1 = F.conv2d(img1, window, padding=window_size // 2, groups=channel) - mu2 = F.conv2d(img2, window, padding=window_size // 2, groups=channel) - - mu1_sq = mu1.pow(2) - mu2_sq = mu2.pow(2) - mu1_mu2 = mu1 * mu2 - - sigma1_sq = F.conv2d(img1 * img1, window, padding=window_size // 2, groups=channel) - mu1_sq - sigma2_sq = F.conv2d(img2 * img2, window, padding=window_size // 2, groups=channel) - mu2_sq - sigma12 = F.conv2d(img1 * img2, window, padding=window_size // 2, groups=channel) - mu1_mu2 - - C1 = 0.01 ** 2 - C2 = 0.03 ** 2 - - ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2)) - - if size_average: - return ssim_map.mean() - else: - return ssim_map.mean(1) - - -class SSIM(torch.nn.Module): - def __init__(self, window_size=11, size_average=True): - super(SSIM, self).__init__() - self.window_size = window_size - self.size_average = size_average - self.channel = 1 - self.window = create_window(window_size, self.channel) - - def forward(self, img1, img2): - (_, channel, _, _) = img1.size() - - if channel == self.channel and self.window.data.type() == img1.data.type(): - window = self.window - else: - window = create_window(self.window_size, channel) - - if img1.is_cuda: - window = window.cuda(img1.get_device()) - window = window.type_as(img1) - - self.window = window - self.channel = channel - - return _ssim(img1, img2, window, self.window_size, channel, self.size_average) - - -window = None - - -def ssim(img1, img2, window_size=11, size_average=True): - (_, channel, _, _) = img1.size() - global window - if window is None: - window = create_window(window_size, channel) - if img1.is_cuda: - window = window.cuda(img1.get_device()) - window = window.type_as(img1) - return _ssim(img1, img2, window, window_size, channel, size_average) diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/image_degradation/__init__.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/image_degradation/__init__.py deleted file mode 100644 index 7836cada81f90ded99c58d5942eea4c3477f58fc..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/image_degradation/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from ldm.modules.image_degradation.bsrgan import degradation_bsrgan_variant as degradation_fn_bsr -from 
ldm.modules.image_degradation.bsrgan_light import degradation_bsrgan_variant as degradation_fn_bsr_light diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/midas/midas/transforms.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/midas/midas/transforms.py deleted file mode 100644 index 350cbc11662633ad7f8968eb10be2e7de6e384e9..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/midas/midas/transforms.py +++ /dev/null @@ -1,234 +0,0 @@ -import numpy as np -import cv2 -import math - - -def apply_min_size(sample, size, image_interpolation_method=cv2.INTER_AREA): - """Rezise the sample to ensure the given size. Keeps aspect ratio. - - Args: - sample (dict): sample - size (tuple): image size - - Returns: - tuple: new size - """ - shape = list(sample["disparity"].shape) - - if shape[0] >= size[0] and shape[1] >= size[1]: - return sample - - scale = [0, 0] - scale[0] = size[0] / shape[0] - scale[1] = size[1] / shape[1] - - scale = max(scale) - - shape[0] = math.ceil(scale * shape[0]) - shape[1] = math.ceil(scale * shape[1]) - - # resize - sample["image"] = cv2.resize( - sample["image"], tuple(shape[::-1]), interpolation=image_interpolation_method - ) - - sample["disparity"] = cv2.resize( - sample["disparity"], tuple(shape[::-1]), interpolation=cv2.INTER_NEAREST - ) - sample["mask"] = cv2.resize( - sample["mask"].astype(np.float32), - tuple(shape[::-1]), - interpolation=cv2.INTER_NEAREST, - ) - sample["mask"] = sample["mask"].astype(bool) - - return tuple(shape) - - -class Resize(object): - """Resize sample to given size (width, height). - """ - - def __init__( - self, - width, - height, - resize_target=True, - keep_aspect_ratio=False, - ensure_multiple_of=1, - resize_method="lower_bound", - image_interpolation_method=cv2.INTER_AREA, - ): - """Init. - - Args: - width (int): desired output width - height (int): desired output height - resize_target (bool, optional): - True: Resize the full sample (image, mask, target). - False: Resize image only. - Defaults to True. - keep_aspect_ratio (bool, optional): - True: Keep the aspect ratio of the input sample. - Output sample might not have the given width and height, and - resize behaviour depends on the parameter 'resize_method'. - Defaults to False. - ensure_multiple_of (int, optional): - Output width and height is constrained to be multiple of this parameter. - Defaults to 1. - resize_method (str, optional): - "lower_bound": Output will be at least as large as the given size. - "upper_bound": Output will be at max as large as the given size. (Output size might be smaller than given size.) - "minimal": Scale as least as possible. (Output size might be smaller than given size.) - Defaults to "lower_bound". 
- """ - self.__width = width - self.__height = height - - self.__resize_target = resize_target - self.__keep_aspect_ratio = keep_aspect_ratio - self.__multiple_of = ensure_multiple_of - self.__resize_method = resize_method - self.__image_interpolation_method = image_interpolation_method - - def constrain_to_multiple_of(self, x, min_val=0, max_val=None): - y = (np.round(x / self.__multiple_of) * self.__multiple_of).astype(int) - - if max_val is not None and y > max_val: - y = (np.floor(x / self.__multiple_of) * self.__multiple_of).astype(int) - - if y < min_val: - y = (np.ceil(x / self.__multiple_of) * self.__multiple_of).astype(int) - - return y - - def get_size(self, width, height): - # determine new height and width - scale_height = self.__height / height - scale_width = self.__width / width - - if self.__keep_aspect_ratio: - if self.__resize_method == "lower_bound": - # scale such that output size is lower bound - if scale_width > scale_height: - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - elif self.__resize_method == "upper_bound": - # scale such that output size is upper bound - if scale_width < scale_height: - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - elif self.__resize_method == "minimal": - # scale as least as possbile - if abs(1 - scale_width) < abs(1 - scale_height): - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - else: - raise ValueError( - f"resize_method {self.__resize_method} not implemented" - ) - - if self.__resize_method == "lower_bound": - new_height = self.constrain_to_multiple_of( - scale_height * height, min_val=self.__height - ) - new_width = self.constrain_to_multiple_of( - scale_width * width, min_val=self.__width - ) - elif self.__resize_method == "upper_bound": - new_height = self.constrain_to_multiple_of( - scale_height * height, max_val=self.__height - ) - new_width = self.constrain_to_multiple_of( - scale_width * width, max_val=self.__width - ) - elif self.__resize_method == "minimal": - new_height = self.constrain_to_multiple_of(scale_height * height) - new_width = self.constrain_to_multiple_of(scale_width * width) - else: - raise ValueError(f"resize_method {self.__resize_method} not implemented") - - return (new_width, new_height) - - def __call__(self, sample): - width, height = self.get_size( - sample["image"].shape[1], sample["image"].shape[0] - ) - - # resize sample - sample["image"] = cv2.resize( - sample["image"], - (width, height), - interpolation=self.__image_interpolation_method, - ) - - if self.__resize_target: - if "disparity" in sample: - sample["disparity"] = cv2.resize( - sample["disparity"], - (width, height), - interpolation=cv2.INTER_NEAREST, - ) - - if "depth" in sample: - sample["depth"] = cv2.resize( - sample["depth"], (width, height), interpolation=cv2.INTER_NEAREST - ) - - sample["mask"] = cv2.resize( - sample["mask"].astype(np.float32), - (width, height), - interpolation=cv2.INTER_NEAREST, - ) - sample["mask"] = sample["mask"].astype(bool) - - return sample - - -class NormalizeImage(object): - """Normlize image by given mean and std. - """ - - def __init__(self, mean, std): - self.__mean = mean - self.__std = std - - def __call__(self, sample): - sample["image"] = (sample["image"] - self.__mean) / self.__std - - return sample - - -class PrepareForNet(object): - """Prepare sample for usage as network input. 
- """ - - def __init__(self): - pass - - def __call__(self, sample): - image = np.transpose(sample["image"], (2, 0, 1)) - sample["image"] = np.ascontiguousarray(image).astype(np.float32) - - if "mask" in sample: - sample["mask"] = sample["mask"].astype(np.float32) - sample["mask"] = np.ascontiguousarray(sample["mask"]) - - if "disparity" in sample: - disparity = sample["disparity"].astype(np.float32) - sample["disparity"] = np.ascontiguousarray(disparity) - - if "depth" in sample: - depth = sample["depth"].astype(np.float32) - sample["depth"] = np.ascontiguousarray(depth) - - return sample diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/vocoder_infer/base_vocoder.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/vocoder_infer/base_vocoder.py deleted file mode 100644 index a332205b553a0a95b9529c78c1ab5e49099b5d41..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/vocoder_infer/base_vocoder.py +++ /dev/null @@ -1,63 +0,0 @@ -import librosa -from text_to_speech.utils.audio import librosa_wav2spec -from text_to_speech.utils.commons.hparams import hparams -import numpy as np - -REGISTERED_VOCODERS = {} - - -def register_vocoder(name): - def _f(cls): - REGISTERED_VOCODERS[name] = cls - return cls - - return _f - - -def get_vocoder_cls(vocoder_name): - return REGISTERED_VOCODERS.get(vocoder_name) - - -class BaseVocoder: - def spec2wav(self, mel): - """ - - :param mel: [T, 80] - :return: wav: [T'] - """ - - raise NotImplementedError - - @staticmethod - def wav2spec(wav_fn): - """ - - :param wav_fn: str - :return: wav, mel: [T, 80] - """ - wav_spec_dict = librosa_wav2spec(wav_fn, fft_size=hparams['fft_size'], - hop_size=hparams['hop_size'], - win_length=hparams['win_size'], - num_mels=hparams['audio_num_mel_bins'], - fmin=hparams['fmin'], - fmax=hparams['fmax'], - sample_rate=hparams['audio_sample_rate'], - loud_norm=hparams['loud_norm']) - wav = wav_spec_dict['wav'] - mel = wav_spec_dict['mel'] - return wav, mel - - @staticmethod - def wav2mfcc(wav_fn): - fft_size = hparams['fft_size'] - hop_size = hparams['hop_size'] - win_length = hparams['win_size'] - sample_rate = hparams['audio_sample_rate'] - wav, _ = librosa.core.load(wav_fn, sr=sample_rate) - mfcc = librosa.feature.mfcc(y=wav, sr=sample_rate, n_mfcc=13, - n_fft=fft_size, hop_length=hop_size, - win_length=win_length, pad_mode="constant", power=1.0) - mfcc_delta = librosa.feature.delta(mfcc, order=1) - mfcc_delta_delta = librosa.feature.delta(mfcc, order=2) - mfcc = np.concatenate([mfcc, mfcc_delta, mfcc_delta_delta]).T - return mfcc diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/CLAP/clap.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/CLAP/clap.py deleted file mode 100644 index 3141e47ec7b7df2e3cb81d11582b4738a5d23c1a..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/CLAP/clap.py +++ /dev/null @@ -1,89 +0,0 @@ -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn -from transformers import AutoModel -from .audio import get_audio_encoder - -class Projection(nn.Module): - def __init__(self, d_in: int, d_out: int, p: float=0.5) -> None: - super().__init__() - self.linear1 = nn.Linear(d_in, d_out, bias=False) - self.linear2 = nn.Linear(d_out, d_out, bias=False) - self.layer_norm = nn.LayerNorm(d_out) - self.drop = nn.Dropout(p) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - embed1 = self.linear1(x) - embed2 = 
self.drop(self.linear2(F.gelu(embed1))) - embeds = self.layer_norm(embed1 + embed2) - return embeds - -class AudioEncoder(nn.Module): - def __init__(self, audioenc_name:str, d_in: int, d_out: int, sample_rate: int, window_size: int, - hop_size: int, mel_bins: int, fmin: int, fmax: int, classes_num: int) -> None: - super().__init__() - - audio_encoder = get_audio_encoder(audioenc_name) - - self.base = audio_encoder( - sample_rate, window_size, - hop_size, mel_bins, fmin, fmax, - classes_num, d_in) - - self.projection = Projection(d_in, d_out) - - def forward(self, x): - out_dict = self.base(x) - audio_features, audio_classification_output = out_dict['embedding'], out_dict['clipwise_output'] - projected_vec = self.projection(audio_features) - return projected_vec, audio_classification_output - -class TextEncoder(nn.Module): - def __init__(self, d_out: int, text_model: str, transformer_embed_dim: int) -> None: - super().__init__() - self.base = AutoModel.from_pretrained(text_model) - self.projection = Projection(transformer_embed_dim, d_out) - - def forward(self, x): - out = self.base(**x)[0] - out = out[:, 0, :] # get CLS token output - projected_vec = self.projection(out) - return projected_vec - -class CLAP(nn.Module): - def __init__(self, - # audio - audioenc_name: str, - sample_rate: int, - window_size: int, - hop_size: int, - mel_bins: int, - fmin: int, - fmax: int, - classes_num: int, - out_emb: int, - # text - text_model: str, - transformer_embed_dim: int, - # common - d_proj: int, - ): - super().__init__() - - - self.audio_encoder = AudioEncoder( - audioenc_name, out_emb, d_proj, - sample_rate, window_size, hop_size, mel_bins, fmin, fmax, classes_num) - - self.caption_encoder = TextEncoder( - d_proj, text_model, transformer_embed_dim - ) - - self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0.07)) - - def forward(self, audio, text): - audio_embed, _ = self.audio_encoder(audio) - caption_embed = self.caption_encoder(text) - - return caption_embed, audio_embed, self.logit_scale.exp() \ No newline at end of file diff --git a/spaces/AIGText/GlyphControl/ldm/modules/attention.py b/spaces/AIGText/GlyphControl/ldm/modules/attention.py deleted file mode 100644 index a0fe28b335a8e27e92b97ca6787fab169477085c..0000000000000000000000000000000000000000 --- a/spaces/AIGText/GlyphControl/ldm/modules/attention.py +++ /dev/null @@ -1,340 +0,0 @@ -from inspect import isfunction -import math -import torch -import torch.nn.functional as F -from torch import nn, einsum -from einops import rearrange, repeat -from typing import Optional, Any - -from ldm.modules.diffusionmodules.util import checkpoint - - -try: - import xformers - import xformers.ops - XFORMERS_IS_AVAILBLE = True -except Exception as e: - print("xformer", e) - XFORMERS_IS_AVAILBLE = False -# XFORMERS_IS_AVAILBLE = False -DETERMISTIC = False - -def exists(val): - return val is not None - - -def uniq(arr): - return{el: True for el in arr}.keys() - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def max_neg_value(t): - return -torch.finfo(t.dtype).max - - -def init_(tensor): - dim = tensor.shape[-1] - std = 1 / math.sqrt(dim) - tensor.uniform_(-std, std) - return tensor - - -# feedforward -class GEGLU(nn.Module): - def __init__(self, dim_in, dim_out): - super().__init__() - self.proj = nn.Linear(dim_in, dim_out * 2) - - def forward(self, x): - x, gate = self.proj(x).chunk(2, dim=-1) - return x * F.gelu(gate) - - -class FeedForward(nn.Module): - def __init__(self, dim, dim_out=None, 
mult=4, glu=False, dropout=0.): - super().__init__() - inner_dim = int(dim * mult) - dim_out = default(dim_out, dim) - project_in = nn.Sequential( - nn.Linear(dim, inner_dim), - nn.GELU() - ) if not glu else GEGLU(dim, inner_dim) - - self.net = nn.Sequential( - project_in, - nn.Dropout(dropout), - nn.Linear(inner_dim, dim_out) - ) - - def forward(self, x): - return self.net(x) - - -def zero_module(module): - """ - Zero out the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().zero_() - return module - - -def Normalize(in_channels): - return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) - - -class SpatialSelfAttention(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.k = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.v = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.proj_out = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b,c,h,w = q.shape - q = rearrange(q, 'b c h w -> b (h w) c') - k = rearrange(k, 'b c h w -> b c (h w)') - w_ = torch.einsum('bij,bjk->bik', q, k) - - w_ = w_ * (int(c)**(-0.5)) - w_ = torch.nn.functional.softmax(w_, dim=2) - - # attend to values - v = rearrange(v, 'b c h w -> b c (h w)') - w_ = rearrange(w_, 'b i j -> b j i') - h_ = torch.einsum('bij,bjk->bik', v, w_) - h_ = rearrange(h_, 'b c (h w) -> b c h w', h=h) - h_ = self.proj_out(h_) - - return x+h_ - - -class CrossAttention(nn.Module): - def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.): - super().__init__() - inner_dim = dim_head * heads - context_dim = default(context_dim, query_dim) - - self.scale = dim_head ** -0.5 - self.heads = heads - - self.to_q = nn.Linear(query_dim, inner_dim, bias=False) - self.to_k = nn.Linear(context_dim, inner_dim, bias=False) - self.to_v = nn.Linear(context_dim, inner_dim, bias=False) - - self.to_out = nn.Sequential( - nn.Linear(inner_dim, query_dim), - nn.Dropout(dropout) - ) - - def forward(self, x, context=None, mask=None): - h = self.heads - - q = self.to_q(x) - context = default(context, x) - k = self.to_k(context) - v = self.to_v(context) - - q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v)) - - sim = einsum('b i d, b j d -> b i j', q, k) * self.scale - del q, k - - if exists(mask): - mask = rearrange(mask, 'b ... -> b (...)') - max_neg_value = -torch.finfo(sim.dtype).max - mask = repeat(mask, 'b j -> (b h) () j', h=h) - sim.masked_fill_(~mask, max_neg_value) - - # attention, what we cannot get enough of - sim = sim.softmax(dim=-1) - - out = einsum('b i j, b j d -> b i d', sim, v) - out = rearrange(out, '(b h) n d -> b n (h d)', h=h) - return self.to_out(out) - - -class MemoryEfficientCrossAttention(nn.Module): - # https://github.com/MatthieuTPHR/diffusers/blob/d80b531ff8060ec1ea982b65a1b8df70f73aa67c/src/diffusers/models/attention.py#L223 - def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.0): - super().__init__() - print(f"Setting up {self.__class__.__name__}. 
Query dim is {query_dim}, context_dim is {context_dim} and using " - f"{heads} heads.") - inner_dim = dim_head * heads - context_dim = default(context_dim, query_dim) - - self.heads = heads - self.dim_head = dim_head - - self.to_q = nn.Linear(query_dim, inner_dim, bias=False) - self.to_k = nn.Linear(context_dim, inner_dim, bias=False) - self.to_v = nn.Linear(context_dim, inner_dim, bias=False) - - self.to_out = nn.Sequential(nn.Linear(inner_dim, query_dim), nn.Dropout(dropout)) - self.attention_op: Optional[Any] = None - print("DETERMISTIC:", DETERMISTIC) - - def forward(self, x, context=None, mask=None): - q = self.to_q(x) - context = default(context, x) - k = self.to_k(context) - v = self.to_v(context) - - b, _, _ = q.shape - q, k, v = map( - lambda t: t.unsqueeze(3) - .reshape(b, t.shape[1], self.heads, self.dim_head) - .permute(0, 2, 1, 3) - .reshape(b * self.heads, t.shape[1], self.dim_head) - .contiguous(), - (q, k, v), - ) - - torch.use_deterministic_algorithms(False) - # actually compute the attention, what we cannot get enough of - out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op) - if DETERMISTIC: - torch.use_deterministic_algorithms(True, warn_only=True) - - # # actually compute the attention, what we cannot get enough of - # out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op) - - if exists(mask): - raise NotImplementedError - out = ( - out.unsqueeze(0) - .reshape(b, self.heads, out.shape[1], self.dim_head) - .permute(0, 2, 1, 3) - .reshape(b, out.shape[1], self.heads * self.dim_head) - ) - return self.to_out(out) - - -class BasicTransformerBlock(nn.Module): - ATTENTION_MODES = { - "softmax": CrossAttention, # vanilla attention - "softmax-xformers": MemoryEfficientCrossAttention - } - def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None, gated_ff=True, checkpoint=True, - disable_self_attn=False): - super().__init__() - attn_mode = "softmax-xformers" if XFORMERS_IS_AVAILBLE else "softmax" - assert attn_mode in self.ATTENTION_MODES - attn_cls = self.ATTENTION_MODES[attn_mode] - self.disable_self_attn = disable_self_attn - self.attn1 = attn_cls(query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout, - context_dim=context_dim if self.disable_self_attn else None) # is a self-attention if not self.disable_self_attn - self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff) - self.attn2 = attn_cls(query_dim=dim, context_dim=context_dim, - heads=n_heads, dim_head=d_head, dropout=dropout) # is self-attn if context is none - self.norm1 = nn.LayerNorm(dim) - self.norm2 = nn.LayerNorm(dim) - self.norm3 = nn.LayerNorm(dim) - self.checkpoint = checkpoint - - def forward(self, x, context=None): - return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint) - - def _forward(self, x, context=None): # cross attention - x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x - x = self.attn2(self.norm2(x), context=context) + x - x = self.ff(self.norm3(x)) + x - return x - - -class SpatialTransformer(nn.Module): - """ - Transformer block for image-like data. - First, project the input (aka embedding) - and reshape to b, t, d. - Then apply standard transformer action. 
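For reference, a tiny smoke test of the CrossAttention module defined above (shapes are illustrative; the import path is an assumption):

```python
import torch
from attention import CrossAttention  # assumed import path

attn = CrossAttention(query_dim=320, context_dim=768, heads=8, dim_head=40)
x = torch.randn(2, 64, 320)     # (batch, query tokens, query_dim), e.g. a flattened 8x8 latent
ctx = torch.randn(2, 77, 768)   # (batch, context tokens, context_dim), e.g. text encoder states
out = attn(x, context=ctx)
print(out.shape)                # torch.Size([2, 64, 320])
```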
- Finally, reshape to image - NEW: use_linear for more efficiency instead of the 1x1 convs - """ - def __init__(self, in_channels, n_heads, d_head, - depth=1, dropout=0., context_dim=None, - disable_self_attn=False, use_linear=False, - use_checkpoint=True): - super().__init__() - if exists(context_dim) and not isinstance(context_dim, list): - context_dim = [context_dim] - self.in_channels = in_channels - inner_dim = n_heads * d_head - self.norm = Normalize(in_channels) - if not use_linear: - self.proj_in = nn.Conv2d(in_channels, - inner_dim, - kernel_size=1, - stride=1, - padding=0) - else: - self.proj_in = nn.Linear(in_channels, inner_dim) - - self.transformer_blocks = nn.ModuleList( - [BasicTransformerBlock(inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim[d], - disable_self_attn=disable_self_attn, checkpoint=use_checkpoint) - for d in range(depth)] - ) - if not use_linear: - self.proj_out = zero_module(nn.Conv2d(inner_dim, - in_channels, - kernel_size=1, - stride=1, - padding=0)) - else: - self.proj_out = zero_module(nn.Linear(in_channels, inner_dim)) - self.use_linear = use_linear - - def forward(self, x, context=None): - # note: if no context is given, cross-attention defaults to self-attention - if not isinstance(context, list): - context = [context] - b, c, h, w = x.shape - x_in = x - x = self.norm(x) - if not self.use_linear: - x = self.proj_in(x) - x = rearrange(x, 'b c h w -> b (h w) c').contiguous() - if self.use_linear: - x = self.proj_in(x) - for i, block in enumerate(self.transformer_blocks): - x = block(x, context=context[i]) - if self.use_linear: - x = self.proj_out(x) - x = rearrange(x, 'b (h w) c -> b c h w', h=h, w=w).contiguous() - if not self.use_linear: - x = self.proj_out(x) - return x + x_in - diff --git a/spaces/AgentVerse/agentVerse/dataloader/commongen.py b/spaces/AgentVerse/agentVerse/dataloader/commongen.py deleted file mode 100644 index e7a5e75f9e013cbaa7585d8e3c5ffa2bfd714d7d..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/dataloader/commongen.py +++ /dev/null @@ -1,21 +0,0 @@ -from .dataloader import DataLoader -from . 
import dataloader_registry -import json - - -@dataloader_registry.register("tasksolving/commongen/gpt-4") -@dataloader_registry.register("tasksolving/commongen/gpt-3.5") -class CommongenLoader(DataLoader): - def __init__(self, path: str): - super().__init__(path) - - def load(self): - with open(self.path) as f: - for line in f: - line = json.loads(line) - self.examples.append( - { - "input": line["concepts"], - "answer": None, - } - ) diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateSizer.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateSizer.js deleted file mode 100644 index 5c03df2cfe55f20581b16b82b53fc5338a2ceeb9..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateSizer.js +++ /dev/null @@ -1,8 +0,0 @@ -import CreateAnySizer from './utils/CreateAnySizer.js'; -import Sizer from '../../sizer/Sizer.js'; - -var CreateSizer = function (scene, data, view, styles, customBuilders) { - return CreateAnySizer(scene, data, view, styles, customBuilders, Sizer); -} - -export default CreateSizer; \ No newline at end of file diff --git a/spaces/Akira12312/admruul-anything-v3.0/README.md b/spaces/Akira12312/admruul-anything-v3.0/README.md deleted file mode 100644 index 507f936bdae6e54fdc1e6de73dc9c45b23d32d69..0000000000000000000000000000000000000000 --- a/spaces/Akira12312/admruul-anything-v3.0/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Admruul Anything V3.0 -emoji: 🔥 -colorFrom: blue -colorTo: gray -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Ameaou/academic-chatgpt3.1/core_functional.py b/spaces/Ameaou/academic-chatgpt3.1/core_functional.py deleted file mode 100644 index 536ccb609c38cbbebfda4ba17bd51a78857d711e..0000000000000000000000000000000000000000 --- a/spaces/Ameaou/academic-chatgpt3.1/core_functional.py +++ /dev/null @@ -1,71 +0,0 @@ -# 'primary' 颜色对应 theme.py 中的 primary_hue -# 'secondary' 颜色对应 theme.py 中的 neutral_hue -# 'stop' 颜色对应 theme.py 中的 color_er -# 默认按钮颜色是 secondary -from toolbox import clear_line_break - - -def get_core_functions(): - return { - "英语学术润色": { - # 前言 - "Prefix": r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, " + - r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. " + - r"Furthermore, list all modification and explain the reasons to do so in markdown table." + "\n\n", - # 后语 - "Suffix": r"", - "Color": r"secondary", # 按钮颜色 - }, - "中文学术润色": { - "Prefix": r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性," + - r"同时分解长句,减少重复,并提供改进建议。请只提供文本的更正版本,避免包括解释。请编辑以下文本" + "\n\n", - "Suffix": r"", - }, - "查找语法错误": { - "Prefix": r"Can you help me ensure that the grammar and the spelling is correct? " + - r"Do not try to polish the text, if no mistake is found, tell me that this paragraph is good." + - r"If you find grammar or spelling mistakes, please list mistakes you find in a two-column markdown table, " + - r"put the original text the first column, " + - r"put the corrected text in the second column and highlight the key words you fixed.""\n" - r"Example:""\n" - r"Paragraph: How is you? Do you knows what is it?""\n" - r"| Original sentence | Corrected sentence |""\n" - r"| :--- | :--- |""\n" - r"| How **is** you? 
| How **are** you? |""\n" - r"| Do you **knows** what **is** **it**? | Do you **know** what **it** **is** ? |""\n" - r"Below is a paragraph from an academic paper. " - r"You need to report all grammar and spelling mistakes as the example before." - + "\n\n", - "Suffix": r"", - "PreProcess": clear_line_break, # 预处理:清除换行符 - }, - "中译英": { - "Prefix": r"Please translate following sentence to English:" + "\n\n", - "Suffix": r"", - }, - "学术中英互译": { - "Prefix": r"I want you to act as a scientific English-Chinese translator, " + - r"I will provide you with some paragraphs in one language " + - r"and your task is to accurately and academically translate the paragraphs only into the other language. " + - r"Do not repeat the original provided paragraphs after translation. " + - r"You should use artificial intelligence tools, " + - r"such as natural language processing, and rhetorical knowledge " + - r"and experience about effective writing techniques to reply. " + - r"I'll give you my paragraphs as follows, tell me what language it is written in, and then translate:" + "\n\n", - "Suffix": "", - "Color": "secondary", - }, - "英译中": { - "Prefix": r"翻译成地道的中文:" + "\n\n", - "Suffix": r"", - }, - "找图片": { - "Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL," + - r"然后请使用Markdown格式封装,并且不要有反斜线,不要用代码块。现在,请按以下描述给我发送图片:" + "\n\n", - "Suffix": r"", - }, - "解释代码": { - "Prefix": r"请解释以下代码:" + "\n```\n", - "Suffix": "\n```\n", - }, - } diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_euler_discrete.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_euler_discrete.py deleted file mode 100644 index cb126d4b953cd28e23d048c4f1e2cf8ed90cdac0..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_euler_discrete.py +++ /dev/null @@ -1,432 +0,0 @@ -# Copyright 2023 Katherine Crowson and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import math -from dataclasses import dataclass -from typing import List, Optional, Tuple, Union - -import numpy as np -import torch - -from ..configuration_utils import ConfigMixin, register_to_config -from ..utils import BaseOutput, logging, randn_tensor -from .scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -@dataclass -# Copied from diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput with DDPM->EulerDiscrete -class EulerDiscreteSchedulerOutput(BaseOutput): - """ - Output class for the scheduler's step function output. - - Args: - prev_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images): - Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the - denoising loop. 
- pred_original_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images): - The predicted denoised sample (x_{0}) based on the model output from the current timestep. - `pred_original_sample` can be used to preview progress or for guidance. - """ - - prev_sample: torch.FloatTensor - pred_original_sample: Optional[torch.FloatTensor] = None - - -# Copied from diffusers.schedulers.scheduling_ddpm.betas_for_alpha_bar -def betas_for_alpha_bar( - num_diffusion_timesteps, - max_beta=0.999, - alpha_transform_type="cosine", -): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of - (1-beta) over time from t = [0,1]. - - Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up - to that part of the diffusion process. - - - Args: - num_diffusion_timesteps (`int`): the number of betas to produce. - max_beta (`float`): the maximum beta to use; use values lower than 1 to - prevent singularities. - alpha_transform_type (`str`, *optional*, default to `cosine`): the type of noise schedule for alpha_bar. - Choose from `cosine` or `exp` - - Returns: - betas (`np.ndarray`): the betas used by the scheduler to step the model outputs - """ - if alpha_transform_type == "cosine": - - def alpha_bar_fn(t): - return math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2 - - elif alpha_transform_type == "exp": - - def alpha_bar_fn(t): - return math.exp(t * -12.0) - - else: - raise ValueError(f"Unsupported alpha_tranform_type: {alpha_transform_type}") - - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar_fn(t2) / alpha_bar_fn(t1), max_beta)) - return torch.tensor(betas, dtype=torch.float32) - - -class EulerDiscreteScheduler(SchedulerMixin, ConfigMixin): - """ - Euler scheduler (Algorithm 2) from Karras et al. (2022) https://arxiv.org/abs/2206.00364. . Based on the original - k-diffusion implementation by Katherine Crowson: - https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L51 - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. - beta_start (`float`): the starting `beta` value of inference. - beta_end (`float`): the final `beta` value. - beta_schedule (`str`): - the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from - `linear` or `scaled_linear`. - trained_betas (`np.ndarray`, optional): - option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc. 
- prediction_type (`str`, default `"epsilon"`, optional): - prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion - process), `sample` (directly predicting the noisy sample`) or `v_prediction` (see section 2.4 - https://imagen.research.google/video/paper.pdf) - interpolation_type (`str`, default `"linear"`, optional): - interpolation type to compute intermediate sigmas for the scheduler denoising steps. Should be one of - [`"linear"`, `"log_linear"`]. - use_karras_sigmas (`bool`, *optional*, defaults to `False`): - This parameter controls whether to use Karras sigmas (Karras et al. (2022) scheme) for step sizes in the - noise schedule during the sampling process. If True, the sigmas will be determined according to a sequence - of noise levels {σi} as defined in Equation (5) of the paper https://arxiv.org/pdf/2206.00364.pdf. - timestep_spacing (`str`, default `"linspace"`): - The way the timesteps should be scaled. Refer to Table 2. of [Common Diffusion Noise Schedules and Sample - Steps are Flawed](https://arxiv.org/abs/2305.08891) for more information. - steps_offset (`int`, default `0`): - an offset added to the inference steps. You can use a combination of `offset=1` and - `set_alpha_to_one=False`, to make the last step use step 0 for the previous alpha product, as done in - stable diffusion. - """ - - _compatibles = [e.name for e in KarrasDiffusionSchedulers] - order = 1 - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 1000, - beta_start: float = 0.0001, - beta_end: float = 0.02, - beta_schedule: str = "linear", - trained_betas: Optional[Union[np.ndarray, List[float]]] = None, - prediction_type: str = "epsilon", - interpolation_type: str = "linear", - use_karras_sigmas: Optional[bool] = False, - timestep_spacing: str = "linspace", - steps_offset: int = 0, - ): - if trained_betas is not None: - self.betas = torch.tensor(trained_betas, dtype=torch.float32) - elif beta_schedule == "linear": - self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32) - elif beta_schedule == "scaled_linear": - # this schedule is very specific to the latent diffusion model. 
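            # i.e. beta_t is formed by interpolating sqrt(beta) linearly from sqrt(beta_start)
            # to sqrt(beta_end) over num_train_timesteps steps and then squaring; by convexity
            # this keeps the intermediate betas smaller than a plain linear ramp between the
            # same endpoints.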
- self.betas = ( - torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2 - ) - elif beta_schedule == "squaredcos_cap_v2": - # Glide cosine schedule - self.betas = betas_for_alpha_bar(num_train_timesteps) - else: - raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}") - - self.alphas = 1.0 - self.betas - self.alphas_cumprod = torch.cumprod(self.alphas, dim=0) - - sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5) - sigmas = np.concatenate([sigmas[::-1], [0.0]]).astype(np.float32) - self.sigmas = torch.from_numpy(sigmas) - - # setable values - self.num_inference_steps = None - timesteps = np.linspace(0, num_train_timesteps - 1, num_train_timesteps, dtype=float)[::-1].copy() - self.timesteps = torch.from_numpy(timesteps) - self.is_scale_input_called = False - self.use_karras_sigmas = use_karras_sigmas - - @property - def init_noise_sigma(self): - # standard deviation of the initial noise distribution - if self.config.timestep_spacing in ["linspace", "trailing"]: - return self.sigmas.max() - - return (self.sigmas.max() ** 2 + 1) ** 0.5 - - def scale_model_input( - self, sample: torch.FloatTensor, timestep: Union[float, torch.FloatTensor] - ) -> torch.FloatTensor: - """ - Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the Euler algorithm. - - Args: - sample (`torch.FloatTensor`): input sample - timestep (`float` or `torch.FloatTensor`): the current timestep in the diffusion chain - - Returns: - `torch.FloatTensor`: scaled input sample - """ - if isinstance(timestep, torch.Tensor): - timestep = timestep.to(self.timesteps.device) - step_index = (self.timesteps == timestep).nonzero().item() - sigma = self.sigmas[step_index] - - sample = sample / ((sigma**2 + 1) ** 0.5) - - self.is_scale_input_called = True - return sample - - def set_timesteps(self, num_inference_steps: int, device: Union[str, torch.device] = None): - """ - Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - device (`str` or `torch.device`, optional): - the device to which the timesteps should be moved to. If `None`, the timesteps are not moved. - """ - self.num_inference_steps = num_inference_steps - - # "linspace", "leading", "trailing" corresponds to annotation of Table 2. of https://arxiv.org/abs/2305.08891 - if self.config.timestep_spacing == "linspace": - timesteps = np.linspace(0, self.config.num_train_timesteps - 1, num_inference_steps, dtype=float)[ - ::-1 - ].copy() - elif self.config.timestep_spacing == "leading": - step_ratio = self.config.num_train_timesteps // self.num_inference_steps - # creates integer timesteps by multiplying by ratio - # casting to int to avoid issues when num_inference_step is power of 3 - timesteps = (np.arange(0, num_inference_steps) * step_ratio).round()[::-1].copy().astype(float) - timesteps += self.config.steps_offset - elif self.config.timestep_spacing == "trailing": - step_ratio = self.config.num_train_timesteps / self.num_inference_steps - # creates integer timesteps by multiplying by ratio - # casting to int to avoid issues when num_inference_step is power of 3 - timesteps = (np.arange(self.config.num_train_timesteps, 0, -step_ratio)).round().copy().astype(float) - timesteps -= 1 - else: - raise ValueError( - f"{self.config.timestep_spacing} is not supported. 
Please make sure to choose one of 'linspace', 'leading' or 'trailing'." - ) - - sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5) - log_sigmas = np.log(sigmas) - - if self.config.interpolation_type == "linear": - sigmas = np.interp(timesteps, np.arange(0, len(sigmas)), sigmas) - elif self.config.interpolation_type == "log_linear": - sigmas = torch.linspace(np.log(sigmas[-1]), np.log(sigmas[0]), num_inference_steps + 1).exp() - else: - raise ValueError( - f"{self.config.interpolation_type} is not implemented. Please specify interpolation_type to either" - " 'linear' or 'log_linear'" - ) - - if self.use_karras_sigmas: - sigmas = self._convert_to_karras(in_sigmas=sigmas, num_inference_steps=self.num_inference_steps) - timesteps = np.array([self._sigma_to_t(sigma, log_sigmas) for sigma in sigmas]) - - sigmas = np.concatenate([sigmas, [0.0]]).astype(np.float32) - self.sigmas = torch.from_numpy(sigmas).to(device=device) - if str(device).startswith("mps"): - # mps does not support float64 - self.timesteps = torch.from_numpy(timesteps).to(device, dtype=torch.float32) - else: - self.timesteps = torch.from_numpy(timesteps).to(device=device) - - def _sigma_to_t(self, sigma, log_sigmas): - # get log sigma - log_sigma = np.log(sigma) - - # get distribution - dists = log_sigma - log_sigmas[:, np.newaxis] - - # get sigmas range - low_idx = np.cumsum((dists >= 0), axis=0).argmax(axis=0).clip(max=log_sigmas.shape[0] - 2) - high_idx = low_idx + 1 - - low = log_sigmas[low_idx] - high = log_sigmas[high_idx] - - # interpolate sigmas - w = (low - log_sigma) / (low - high) - w = np.clip(w, 0, 1) - - # transform interpolation to time range - t = (1 - w) * low_idx + w * high_idx - t = t.reshape(sigma.shape) - return t - - # Copied from https://github.com/crowsonkb/k-diffusion/blob/686dbad0f39640ea25c8a8c6a6e56bb40eacefa2/k_diffusion/sampling.py#L17 - def _convert_to_karras(self, in_sigmas: torch.FloatTensor, num_inference_steps) -> torch.FloatTensor: - """Constructs the noise schedule of Karras et al. (2022).""" - - sigma_min: float = in_sigmas[-1].item() - sigma_max: float = in_sigmas[0].item() - - rho = 7.0 # 7.0 is the value used in the paper - ramp = np.linspace(0, 1, num_inference_steps) - min_inv_rho = sigma_min ** (1 / rho) - max_inv_rho = sigma_max ** (1 / rho) - sigmas = (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho - return sigmas - - def step( - self, - model_output: torch.FloatTensor, - timestep: Union[float, torch.FloatTensor], - sample: torch.FloatTensor, - s_churn: float = 0.0, - s_tmin: float = 0.0, - s_tmax: float = float("inf"), - s_noise: float = 1.0, - generator: Optional[torch.Generator] = None, - return_dict: bool = True, - ) -> Union[EulerDiscreteSchedulerOutput, Tuple]: - """ - Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion - process from the learned model outputs (most often the predicted noise). - - Args: - model_output (`torch.FloatTensor`): direct output from learned diffusion model. - timestep (`float`): current timestep in the diffusion chain. - sample (`torch.FloatTensor`): - current instance of sample being created by diffusion process. - s_churn (`float`) - s_tmin (`float`) - s_tmax (`float`) - s_noise (`float`) - generator (`torch.Generator`, optional): Random number generator. 
- return_dict (`bool`): option for returning tuple rather than EulerDiscreteSchedulerOutput class - - Returns: - [`~schedulers.scheduling_utils.EulerDiscreteSchedulerOutput`] or `tuple`: - [`~schedulers.scheduling_utils.EulerDiscreteSchedulerOutput`] if `return_dict` is True, otherwise a - `tuple`. When returning a tuple, the first element is the sample tensor. - - """ - - if ( - isinstance(timestep, int) - or isinstance(timestep, torch.IntTensor) - or isinstance(timestep, torch.LongTensor) - ): - raise ValueError( - ( - "Passing integer indices (e.g. from `enumerate(timesteps)`) as timesteps to" - " `EulerDiscreteScheduler.step()` is not supported. Make sure to pass" - " one of the `scheduler.timesteps` as a timestep." - ), - ) - - if not self.is_scale_input_called: - logger.warning( - "The `scale_model_input` function should be called before `step` to ensure correct denoising. " - "See `StableDiffusionPipeline` for a usage example." - ) - - if isinstance(timestep, torch.Tensor): - timestep = timestep.to(self.timesteps.device) - - step_index = (self.timesteps == timestep).nonzero().item() - sigma = self.sigmas[step_index] - - gamma = min(s_churn / (len(self.sigmas) - 1), 2**0.5 - 1) if s_tmin <= sigma <= s_tmax else 0.0 - - noise = randn_tensor( - model_output.shape, dtype=model_output.dtype, device=model_output.device, generator=generator - ) - - eps = noise * s_noise - sigma_hat = sigma * (gamma + 1) - - if gamma > 0: - sample = sample + eps * (sigma_hat**2 - sigma**2) ** 0.5 - - # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise - # NOTE: "original_sample" should not be an expected prediction_type but is left in for - # backwards compatibility - if self.config.prediction_type == "original_sample" or self.config.prediction_type == "sample": - pred_original_sample = model_output - elif self.config.prediction_type == "epsilon": - pred_original_sample = sample - sigma_hat * model_output - elif self.config.prediction_type == "v_prediction": - # * c_out + input * c_skip - pred_original_sample = model_output * (-sigma / (sigma**2 + 1) ** 0.5) + (sample / (sigma**2 + 1)) - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, or `v_prediction`" - ) - - # 2. 
Convert to an ODE derivative - derivative = (sample - pred_original_sample) / sigma_hat - - dt = self.sigmas[step_index + 1] - sigma_hat - - prev_sample = sample + derivative * dt - - if not return_dict: - return (prev_sample,) - - return EulerDiscreteSchedulerOutput(prev_sample=prev_sample, pred_original_sample=pred_original_sample) - - def add_noise( - self, - original_samples: torch.FloatTensor, - noise: torch.FloatTensor, - timesteps: torch.FloatTensor, - ) -> torch.FloatTensor: - # Make sure sigmas and timesteps have the same device and dtype as original_samples - sigmas = self.sigmas.to(device=original_samples.device, dtype=original_samples.dtype) - if original_samples.device.type == "mps" and torch.is_floating_point(timesteps): - # mps does not support float64 - schedule_timesteps = self.timesteps.to(original_samples.device, dtype=torch.float32) - timesteps = timesteps.to(original_samples.device, dtype=torch.float32) - else: - schedule_timesteps = self.timesteps.to(original_samples.device) - timesteps = timesteps.to(original_samples.device) - - step_indices = [(schedule_timesteps == t).nonzero().item() for t in timesteps] - - sigma = sigmas[step_indices].flatten() - while len(sigma.shape) < len(original_samples.shape): - sigma = sigma.unsqueeze(-1) - - noisy_samples = original_samples + noise * sigma - return noisy_samples - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_torch_and_transformers_and_onnx_objects.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_torch_and_transformers_and_onnx_objects.py deleted file mode 100644 index b7afad8226b87292100270e3e7daad6885be0e7f..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_torch_and_transformers_and_onnx_objects.py +++ /dev/null @@ -1,92 +0,0 @@ -# This file is autogenerated by the command `make fix-copies`, do not edit. 
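# Note: every class below is a placeholder that stays importable even when the optional
# "torch", "transformers", and "onnx" backends are missing; each constructor and loader
# classmethod immediately calls `requires_backends`, which is expected to raise an error
# naming the missing packages. A minimal usage sketch, assuming none of the three backends
# is installed (the exact exception type and message come from `requires_backends`, not this file):
#
#     from diffusers.utils import dummy_torch_and_transformers_and_onnx_objects as dummies
#     dummies.OnnxStableDiffusionPipeline()   # fails fast with an informative backend error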
-from ..utils import DummyObject, requires_backends - - -class OnnxStableDiffusionImg2ImgPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers", "onnx"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers", "onnx"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers", "onnx"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers", "onnx"]) - - -class OnnxStableDiffusionInpaintPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers", "onnx"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers", "onnx"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers", "onnx"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers", "onnx"]) - - -class OnnxStableDiffusionInpaintPipelineLegacy(metaclass=DummyObject): - _backends = ["torch", "transformers", "onnx"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers", "onnx"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers", "onnx"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers", "onnx"]) - - -class OnnxStableDiffusionPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers", "onnx"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers", "onnx"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers", "onnx"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers", "onnx"]) - - -class OnnxStableDiffusionUpscalePipeline(metaclass=DummyObject): - _backends = ["torch", "transformers", "onnx"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers", "onnx"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers", "onnx"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers", "onnx"]) - - -class StableDiffusionOnnxPipeline(metaclass=DummyObject): - _backends = ["torch", "transformers", "onnx"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "transformers", "onnx"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers", "onnx"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "transformers", "onnx"]) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/text_to_video/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/text_to_video/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Andy1621/IAT_enhancement/model/global_net.py b/spaces/Andy1621/IAT_enhancement/model/global_net.py deleted file mode 100644 index 005dcfb7919b62e913694a17083b2a508668cf2b..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/IAT_enhancement/model/global_net.py +++ /dev/null @@ -1,129 +0,0 @@ -import imp -import torch -import torch.nn as nn -from timm.models.layers import trunc_normal_, DropPath, 
to_2tuple -import os -from .blocks import Mlp - - -class query_Attention(nn.Module): - def __init__(self, dim, num_heads=2, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0.): - super().__init__() - self.num_heads = num_heads - head_dim = dim // num_heads - # NOTE scale factor was wrong in my original version, can set manually to be compat with prev weights - self.scale = qk_scale or head_dim ** -0.5 - - self.q = nn.Parameter(torch.ones((1, 10, dim)), requires_grad=True) - self.k = nn.Linear(dim, dim, bias=qkv_bias) - self.v = nn.Linear(dim, dim, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - def forward(self, x): - B, N, C = x.shape - k = self.k(x).reshape(B, N, self.num_heads, C // self.num_heads).permute(0, 2, 1, 3) - v = self.v(x).reshape(B, N, self.num_heads, C // self.num_heads).permute(0, 2, 1, 3) - - q = self.q.expand(B, -1, -1).view(B, -1, self.num_heads, C // self.num_heads).permute(0, 2, 1, 3) - attn = (q @ k.transpose(-2, -1)) * self.scale - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B, 10, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class query_SABlock(nn.Module): - def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim) - self.norm1 = norm_layer(dim) - self.attn = query_Attention( - dim, - num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, - attn_drop=attn_drop, proj_drop=drop) - # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here - self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - def forward(self, x): - x = x + self.pos_embed(x) - x = x.flatten(2).transpose(1, 2) - x = self.drop_path(self.attn(self.norm1(x))) - x = x + self.drop_path(self.mlp(self.norm2(x))) - return x - - -class conv_embedding(nn.Module): - def __init__(self, in_channels, out_channels): - super(conv_embedding, self).__init__() - self.proj = nn.Sequential( - nn.Conv2d(in_channels, out_channels // 2, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)), - nn.BatchNorm2d(out_channels // 2), - nn.GELU(), - # nn.Conv2d(out_channels // 2, out_channels // 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)), - # nn.BatchNorm2d(out_channels // 2), - # nn.GELU(), - nn.Conv2d(out_channels // 2, out_channels, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)), - nn.BatchNorm2d(out_channels), - ) - - def forward(self, x): - x = self.proj(x) - return x - - -class Global_pred(nn.Module): - def __init__(self, in_channels=3, out_channels=64, num_heads=4, type='exp'): - super(Global_pred, self).__init__() - if type == 'exp': - self.gamma_base = nn.Parameter(torch.ones((1)), requires_grad=False) # False in exposure correction - else: - self.gamma_base = nn.Parameter(torch.ones((1)), requires_grad=True) - self.color_base = nn.Parameter(torch.eye((3)), requires_grad=True) # basic color matrix - # main blocks - self.conv_large = conv_embedding(in_channels, out_channels) - self.generator = query_SABlock(dim=out_channels, num_heads=num_heads) - self.gamma_linear = nn.Linear(out_channels, 1) - self.color_linear = nn.Linear(out_channels, 1) - - self.apply(self._init_weights) - - for name, p in self.named_parameters(): - if name == 'generator.attn.v.weight': - nn.init.constant_(p, 0) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - - def forward(self, x): - #print(self.gamma_base) - x = self.conv_large(x) - x = self.generator(x) - gamma, color = x[:, 0].unsqueeze(1), x[:, 1:] - gamma = self.gamma_linear(gamma).squeeze(-1) + self.gamma_base - #print(self.gamma_base, self.gamma_linear(gamma)) - color = self.color_linear(color).squeeze(-1).view(-1, 3, 3) + self.color_base - return gamma, color - -if __name__ == "__main__": - os.environ['CUDA_VISIBLE_DEVICES']='3' - #net = Local_pred_new().cuda() - img = torch.Tensor(8, 3, 400, 600) - global_net = Global_pred() - gamma, color = global_net(img) - print(gamma.shape, color.shape) \ No newline at end of file diff --git a/spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_r50_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_r50_fpn_1x_coco.py deleted file mode 100644 index 769472352d06a8f2c30d73ae1f57c393f77adfa2..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_r50_fpn_1x_coco.py +++ /dev/null @@ -1,62 +0,0 @@ -_base_ = '../retinanet/retinanet_r50_fpn_1x_coco.py' -model = dict( - bbox_head=dict( - _delete_=True, - type='GARetinaHead', - num_classes=80, - in_channels=256, - stacked_convs=4, - feat_channels=256, - approx_anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=4, - 
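            # with octave_base_scale=4 and scales_per_octave=3, and assuming the usual default
            # where the base anchor size equals the stride, each level gets approximate anchors
            # of size 4 * stride * 2**(i/3) for i in {0, 1, 2}: roughly 32/40/51 px at stride 8
            # up to 512/645/813 px at stride 128, before the 0.5/1.0/2.0 aspect ratios are applied.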
scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32, 64, 128]), - square_anchor_generator=dict( - type='AnchorGenerator', - ratios=[1.0], - scales=[4], - strides=[8, 16, 32, 64, 128]), - anchor_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loc_filter_thr=0.01, - loss_loc=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_shape=dict(type='BoundedIoULoss', beta=0.2, loss_weight=1.0), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=0.04, loss_weight=1.0)), - # training and testing settings - train_cfg=dict( - ga_assigner=dict( - type='ApproxMaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - ga_sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - assigner=dict(neg_iou_thr=0.5, min_pos_iou=0.0), - center_ratio=0.2, - ignore_ratio=0.5)) -optimizer_config = dict( - _delete_=True, grad_clip=dict(max_norm=35, norm_type=2)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r101-d8_512x512_80k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r101-d8_512x512_80k_ade20k.py deleted file mode 100644 index a64dac670ed4d4632e7b9791ec5f8a334dcea78e..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r101-d8_512x512_80k_ade20k.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './ann_r50-d8_512x512_80k_ade20k.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/Windows-installation-guide.md b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/Windows-installation-guide.md deleted file mode 100644 index 83b22efa38b1839d07a5a58494dbc26ba86397ee..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/Windows-installation-guide.md +++ /dev/null @@ -1,9 +0,0 @@ -If you are having trouble following the installation instructions in the README, Reddit user [Technical_Leather949](https://www.reddit.com/user/Technical_Leather949/) has created a more detailed, step-by-step guide covering: - -* Windows installation -* 8-bit mode on Windows -* LLaMA -* LLaMA 4-bit - -The guide can be found here: https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/ - diff --git a/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/bert.py b/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/bert.py deleted file mode 100644 index a83d96d2a77ed05198efc05837522bc88d2499cc..0000000000000000000000000000000000000000 --- a/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/bert.py +++ /dev/null @@ -1,40 +0,0 @@ -from transformers import BertTokenizer, BertModel - -tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") -model = BertModel.from_pretrained("bert-base-uncased") -text = "Replace me by any text you'd like." - - -def bert_embeddings(text): - # text = "Replace me by any text you'd like." 
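    # The three helpers in this module follow the same recipe: tokenize, run the encoder, and
    # return the raw model output. For BERT that output exposes `last_hidden_state` of shape
    # (batch, seq_len, hidden_size) and `pooler_output` of shape (batch, hidden_size). A rough
    # usage sketch (mean pooling is just an illustrative choice, not something this module does):
    #
    #     out = bert_embeddings("a dog barking in the distance")
    #     sentence_vec = out.last_hidden_state.mean(dim=1)   # (1, 768) for bert-base-uncased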
- encoded_input = tokenizer(text, return_tensors="pt") - output = model(**encoded_input) - return output - - -from transformers import RobertaTokenizer, RobertaModel - -tokenizer = RobertaTokenizer.from_pretrained("roberta-base") -model = RobertaModel.from_pretrained("roberta-base") -text = "Replace me by any text you'd like." - - -def Roberta_embeddings(text): - # text = "Replace me by any text you'd like." - encoded_input = tokenizer(text, return_tensors="pt") - output = model(**encoded_input) - return output - - -from transformers import BartTokenizer, BartModel - -tokenizer = BartTokenizer.from_pretrained("facebook/bart-base") -model = BartModel.from_pretrained("facebook/bart-base") -text = "Replace me by any text you'd like." - - -def bart_embeddings(text): - # text = "Replace me by any text you'd like." - encoded_input = tokenizer(text, return_tensors="pt") - output = model(**encoded_input) - return output diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/COCO-InstanceSegmentation/mask_rcnn_regnetx_4gf_dds_fpn_1x.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/COCO-InstanceSegmentation/mask_rcnn_regnetx_4gf_dds_fpn_1x.py deleted file mode 100644 index d7bbdd7d00505f1e51154379c99ab621cb648a6d..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/COCO-InstanceSegmentation/mask_rcnn_regnetx_4gf_dds_fpn_1x.py +++ /dev/null @@ -1,34 +0,0 @@ -from ..common.optim import SGD as optimizer -from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier -from ..common.data.coco import dataloader -from ..common.models.mask_rcnn_fpn import model -from ..common.train import train - -from detectron2.config import LazyCall as L -from detectron2.modeling.backbone import RegNet -from detectron2.modeling.backbone.regnet import SimpleStem, ResBottleneckBlock - - -# Replace default ResNet with RegNetX-4GF from the DDS paper. Config source: -# https://github.com/facebookresearch/pycls/blob/2c152a6e5d913e898cca4f0a758f41e6b976714d/configs/dds_baselines/regnetx/RegNetX-4.0GF_dds_8gpu.yaml#L4-L9 # noqa -model.backbone.bottom_up = L(RegNet)( - stem_class=SimpleStem, - stem_width=32, - block_class=ResBottleneckBlock, - depth=23, - w_a=38.65, - w_0=96, - w_m=2.43, - group_width=40, - freeze_at=2, - norm="FrozenBN", - out_features=["s1", "s2", "s3", "s4"], -) -model.pixel_std = [57.375, 57.120, 58.395] - -optimizer.weight_decay = 5e-5 -train.init_checkpoint = ( - "https://dl.fbaipublicfiles.com/pycls/dds_baselines/160906383/RegNetX-4.0GF_dds_8gpu.pyth" -) -# RegNets benefit from enabling cudnn benchmark mode -train.cudnn_benchmark = True diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/fast_eval_api.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/fast_eval_api.py deleted file mode 100644 index 2eb202bd5efa3ec3d366027b1debffc269ae8b17..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/fast_eval_api.py +++ /dev/null @@ -1,121 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import copy -import logging -import numpy as np -import time -from pycocotools.cocoeval import COCOeval - -from detectron2 import _C - -logger = logging.getLogger(__name__) - - -class COCOeval_opt(COCOeval): - """ - This is a slightly modified version of the original COCO API, where the functions evaluateImg() - and accumulate() are implemented in C++ to speedup evaluation - """ - - def evaluate(self): - """ - Run per image evaluation on given images and store results in self.evalImgs_cpp, a - datastructure that isn't readable from Python but is used by a c++ implementation of - accumulate(). Unlike the original COCO PythonAPI, we don't populate the datastructure - self.evalImgs because this datastructure is a computational bottleneck. - :return: None - """ - tic = time.time() - - p = self.params - # add backward compatibility if useSegm is specified in params - if p.useSegm is not None: - p.iouType = "segm" if p.useSegm == 1 else "bbox" - logger.info("Evaluate annotation type *{}*".format(p.iouType)) - p.imgIds = list(np.unique(p.imgIds)) - if p.useCats: - p.catIds = list(np.unique(p.catIds)) - p.maxDets = sorted(p.maxDets) - self.params = p - - self._prepare() # bottleneck - - # loop through images, area range, max detection number - catIds = p.catIds if p.useCats else [-1] - - if p.iouType == "segm" or p.iouType == "bbox": - computeIoU = self.computeIoU - elif p.iouType == "keypoints": - computeIoU = self.computeOks - self.ious = { - (imgId, catId): computeIoU(imgId, catId) for imgId in p.imgIds for catId in catIds - } # bottleneck - - maxDet = p.maxDets[-1] - - # <<<< Beginning of code differences with original COCO API - def convert_instances_to_cpp(instances, is_det=False): - # Convert annotations for a list of instances in an image to a format that's fast - # to access in C++ - instances_cpp = [] - for instance in instances: - instance_cpp = _C.InstanceAnnotation( - int(instance["id"]), - instance["score"] if is_det else instance.get("score", 0.0), - instance["area"], - bool(instance.get("iscrowd", 0)), - bool(instance.get("ignore", 0)), - ) - instances_cpp.append(instance_cpp) - return instances_cpp - - # Convert GT annotations, detections, and IOUs to a format that's fast to access in C++ - ground_truth_instances = [ - [convert_instances_to_cpp(self._gts[imgId, catId]) for catId in p.catIds] - for imgId in p.imgIds - ] - detected_instances = [ - [convert_instances_to_cpp(self._dts[imgId, catId], is_det=True) for catId in p.catIds] - for imgId in p.imgIds - ] - ious = [[self.ious[imgId, catId] for catId in catIds] for imgId in p.imgIds] - - if not p.useCats: - # For each image, flatten per-category lists into a single list - ground_truth_instances = [[[o for c in i for o in c]] for i in ground_truth_instances] - detected_instances = [[[o for c in i for o in c]] for i in detected_instances] - - # Call C++ implementation of self.evaluateImgs() - self._evalImgs_cpp = _C.COCOevalEvaluateImages( - p.areaRng, maxDet, p.iouThrs, ious, ground_truth_instances, detected_instances - ) - self._evalImgs = None - - self._paramsEval = copy.deepcopy(self.params) - toc = time.time() - logger.info("COCOeval_opt.evaluate() finished in {:0.2f} seconds.".format(toc - tic)) - # >>>> End of code differences with original COCO API - - def accumulate(self): - """ - Accumulate per image evaluation results and store the result in self.eval. 
Does not - support changing parameter settings from those used by self.evaluate() - """ - logger.info("Accumulating evaluation results...") - tic = time.time() - assert hasattr( - self, "_evalImgs_cpp" - ), "evaluate() must be called before accmulate() is called." - - self.eval = _C.COCOevalAccumulate(self._paramsEval, self._evalImgs_cpp) - - # recall is num_iou_thresholds X num_categories X num_area_ranges X num_max_detections - self.eval["recall"] = np.array(self.eval["recall"]).reshape( - self.eval["counts"][:1] + self.eval["counts"][2:] - ) - - # precision and scores are num_iou_thresholds X num_recall_thresholds X num_categories X - # num_area_ranges X num_max_detections - self.eval["precision"] = np.array(self.eval["precision"]).reshape(self.eval["counts"]) - self.eval["scores"] = np.array(self.eval["scores"]).reshape(self.eval["counts"]) - toc = time.time() - logger.info("COCOeval_opt.accumulate() finished in {:0.2f} seconds.".format(toc - tic)) diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/config/dir1/dir1_b.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/config/dir1/dir1_b.py deleted file mode 100644 index 2dcb54cb1054c5d80ccc823af21f13b9ebbcf1a3..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/config/dir1/dir1_b.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from detectron2.config import LazyConfig - -# equivalent to relative import -dir1a_str, dir1a_dict = LazyConfig.load_rel("dir1_a.py", ("dir1a_str", "dir1a_dict")) - -dir1b_str = dir1a_str + "_from_b" -dir1b_dict = dir1a_dict - -# Every import is a reload: not modified by other config files -assert dir1a_dict.a == 1 diff --git a/spaces/BAAI/AltDiffusion/footer.html b/spaces/BAAI/AltDiffusion/footer.html deleted file mode 100644 index b58ca8b79cc930a56952881f4922bda406fd3581..0000000000000000000000000000000000000000 --- a/spaces/BAAI/AltDiffusion/footer.html +++ /dev/null @@ -1,18 +0,0 @@ - - - diff --git a/spaces/BMukhtar/BookRecognitionKz/custom_shape.py b/spaces/BMukhtar/BookRecognitionKz/custom_shape.py deleted file mode 100644 index f0a0fd42f783fbdc601cdc5a0996af4cff26590c..0000000000000000000000000000000000000000 --- a/spaces/BMukhtar/BookRecognitionKz/custom_shape.py +++ /dev/null @@ -1,35 +0,0 @@ -import streamlit as st -import cv2 -import numpy as np -from PIL import Image - -def warp_perspective(image, points): - # Input and output dimensions - w, h = 300, 400 # You can adjust this based on the desired output size - input_pts = np.array(points, dtype=np.float32) - output_pts = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=np.float32) - - # Compute perspective matrix and warp the image - matrix = cv2.getPerspectiveTransform(input_pts, output_pts) - warped_img = cv2.warpPerspective(image, matrix, (w, h)) - - return warped_img - -st.title("Custom Shape Cropping & Perspective Correction") - -uploaded_file = st.file_uploader("Upload an image", type=["jpg", "jpeg", "png"]) - -# Provide a placeholder for the user to input 4 vertices -points = [] -for i in range(4): - coords = st.text_input(f"Enter point {i+1} (format: x,y)", "") - x, y = map(int, coords.split(',')) if ',' in coords else (0, 0) - points.append([x, y]) - -if uploaded_file and len(points) == 4: - image = Image.open(uploaded_file).convert('RGB') - image_np = np.array(image) - - corrected_image = warp_perspective(image_np, points) - - st.image(corrected_image, caption='Corrected 
Image.', channels="BGR", use_column_width=True) diff --git a/spaces/BadRobot147/SFQ3/README.md b/spaces/BadRobot147/SFQ3/README.md deleted file mode 100644 index 54bab9fb68561b5210db562d18dfd3e21da50858..0000000000000000000000000000000000000000 --- a/spaces/BadRobot147/SFQ3/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: SFQ3 -emoji: 👁 -colorFrom: pink -colorTo: green -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Benson/text-generation/Examples/8 Bolas De Piscina De Descarga Para Ventanas Pc 10.md b/spaces/Benson/text-generation/Examples/8 Bolas De Piscina De Descarga Para Ventanas Pc 10.md deleted file mode 100644 index 0f26f0e29966da63e1e81e3f6e7a5a8d048fe3c8..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/8 Bolas De Piscina De Descarga Para Ventanas Pc 10.md +++ /dev/null @@ -1,66 +0,0 @@ -
    -

    8 bola piscina descargar para PC ventanas 10

    -

    ¿Te gusta jugar juegos de billar online? ¿Quieres desafiar a tus amigos y otros jugadores de todo el mundo en un juego de billar realista y divertido? Si es así, entonces deberías probar 8 Ball Pool, el juego de billar #1 del mundo por Miniclip.com. En este juego, puedes refinar tus habilidades, personalizar tu señal y mesa, unirte a torneos y competir por monedas y objetos exclusivos. ¿Pero sabías que también puedes jugar a este juego en tu PC Windows 10? Sí, lo has oído bien. Puedes disfrutar jugando 8 Ball Pool en una pantalla más grande, con mejores gráficos, controles personalizables y más características. En este artículo, le mostraremos cómo descargar e instalar 8 Ball Pool en PC Windows 10 utilizando dos métodos. También le contaremos sobre las características y beneficios de jugar 8 Ball Pool en PC Windows 10. ¡Así que, comencemos!

    -

    Cómo descargar e instalar piscina de bolas 8 en PC Windows 10

    -

    Hay dos formas de jugar 8 Ball Pool en PC Windows 10. Uno es mediante el uso de un emulador de Android, que es un software que le permite ejecutar aplicaciones Android en su ordenador. La otra es usando la versión web/PC de 8 Ball Pool, que está disponible en el sitio web oficial de Miniclip.com. Veamos cómo funciona cada método.

    -

    8 bolas de piscina de descarga para ventanas pc 10


    DOWNLOAD ===== https://bltlly.com/2v6LZR



    -

    Método 1: Usando un emulador de Android

    -

    Un emulador de Android es un software que imita el sistema operativo Android en su computadora. De esta manera, puedes ejecutar cualquier aplicación o juego de Android en tu PC Windows 10, incluyendo 8 Ball Pool. Hay muchos emuladores de Android disponibles en línea, como BlueStacks, MEmu, NoxPlayer, etc. Puede elegir cualquiera de ellos de acuerdo con su preferencia. Estos son los pasos para descargar e instalar 8 Ball Pool en PC Windows 10 usando un emulador de Android.

    -

    Paso 1: Descargar e instalar un emulador de Android

    - -

    Paso 2: Abra Google Play Store y busque 8 Ball Pool

    -

    El siguiente paso es abrir Google Play Store y buscar 8 Ball Pool. Puedes hacer esto haciendo clic en el icono de Google Play en la pantalla de inicio del emulador. Luego, escribe 8 Ball Pool en la barra de búsqueda y pulsa enter. Verás el icono y el nombre del juego en la página de resultados.

    -

    Paso 3: Descargar e instalar 8 Ball Pool en el emulador

    -

    El tercer paso es descargar e instalar 8 Ball Pool en el emulador. Puede hacer esto haciendo clic en el botón "Instalar" junto al icono del juego. El emulador descargará e instalará el juego automáticamente. Es posible que necesites conceder algunos permisos al juego, como el acceso a tu almacenamiento, cámara, micrófono, etc.

    -

    Paso 4: Lanzar 8 bola piscina y disfrutar jugando en el PC

    -

    El paso final es lanzar 8 Ball Pool y disfrutar jugando en PC. Puedes hacer esto haciendo clic en el icono del juego en la pantalla de inicio del emulador o en el cajón de la aplicación. El juego comenzará y podrás iniciar sesión con tu cuenta de Miniclip o Facebook. Luego, puedes personalizar tu perfil, elegir el modo de juego y comenzar a jugar con tus amigos u otros jugadores en línea.

    -

    Método 2: Usando la versión Web/PC de 8 Ball Pool

    -

    Si no quieres usar un emulador de Android, también puedes jugar 8 Ball Pool en PC Windows 10 usando la versión web/PC del juego. Esta versión está disponible en el sitio web oficial de Miniclip.com y funciona en cualquier navegador que soporte Flash Player. Estos son los pasos para jugar 8 Ball Pool en PC Windows 10 usando la versión web/PC.

    -

    Paso 1: Ir al sitio web oficial de 8 Ball Pool

    - -

    Paso 2: Inicia sesión con tu cuenta de Miniclip o Facebook

    -

    El siguiente paso es iniciar sesión con su cuenta de Miniclip o Facebook. Puedes hacer esto haciendo clic en el botón "Jugar ahora" y eligiendo tu opción preferida. Si no tienes una cuenta, también puedes crear una gratis haciendo clic en el botón "Registrarse". Deberá proporcionar su dirección de correo electrónico, nombre de usuario, contraseña y país.

    -

    Paso 3: Comience a jugar 8 bolas en su navegador

    -

    El paso final es comenzar a jugar 8 Ball Pool en tu navegador. Puedes hacer esto eligiendo tu modo de juego, como 1 contra 1, torneos o práctica. Luego, puedes seleccionar tu mesa, taco y oponente. El juego se cargará y podrás empezar a jugar con el ratón y el teclado.

    -

    -

    Características y beneficios de jugar al billar de 8 bolas en PC Windows 10

    -

    Ahora que sabes cómo jugar 8 Ball Pool en PC Windows 10, te estarás preguntando por qué deberías hacerlo. ¿Cuáles son las ventajas de jugar 8 Ball Pool en PC Windows 10 sobre jugarlo en su dispositivo móvil? Bueno, hay muchas características y beneficios que puedes disfrutar cuando juegas 8 Ball Pool en PC Windows 10. Estos son algunos de ellos.

    -

    Pantalla más grande y mejores gráficos

    -

    Una de las principales razones para jugar 8 Ball Pool en PC Windows 10 es que puedes disfrutar de una pantalla más grande y mejores gráficos. Jugar juegos de billar en una pantalla pequeña puede ser frustrante y estresante para sus ojos. Es posible que se pierda algunos disparos o cometa algunos errores debido a la vista limitada y la resolución. Pero cuando juegas 8 Ball Pool en PC Windows 10, puedes tener una vista de pantalla completa y una resolución de alta definición. Puede ver cada detalle de la tabla, el taco, las bolas y las animaciones. También puede ajustar la configuración de los gráficos según sus preferencias.

    -

    Controles y macros personalizables

    - -

    Multi-Instance y Multi-Tasking

    -

    Una tercera razón para jugar 8 Ball Pool en PC Windows 10 es que puede usar las funciones de múltiples instancias y multitarea. Jugar juegos de billar en un dispositivo móvil puede ser limitante y aburrido. Es posible que tenga que esperar su turno, ver anuncios o lidiar con la batería baja. Pero cuando juegas a 8 Ball Pool en PC Windows 10, puedes usar la función de múltiples instancias para ejecutar varias instancias del juego al mismo tiempo. Puedes jugar con diferentes cuentas, unirte a diferentes torneos o practicar diferentes habilidades. También puede utilizar la función multitarea para cambiar entre diferentes aplicaciones o ventanas mientras juega el juego. Puedes chatear con tus amigos, ver vídeos, navegar por la web o hacer cualquier otra cosa sin interrumpir tu juego.

    -

    Ofertas y recompensas exclusivas

    -

    Una cuarta razón para jugar 8 Ball Pool en PC Windows 10 es que puedes obtener ofertas exclusivas y recompensas. Jugar juegos de billar en un dispositivo móvil puede ser caro y poco gratificante. Es posible que tenga que gastar dinero real para comprar monedas, efectivo, tacos u otros artículos. También es posible que se pierda algunas ofertas o eventos debido a las notificaciones limitadas o el almacenamiento. Pero cuando juegas 8 Ball Pool en PC Windows 10, puedes obtener acceso a ofertas exclusivas y recompensas que solo están disponibles para usuarios de PC. Puedes obtener monedas gratis, dinero en efectivo, tacos u otros artículos completando tareas, viendo videos o participando en eventos. También puedes ser notificado de las últimas actualizaciones, promociones o torneos por el emulador.

    -

    Conclusión y preguntas frecuentes

    - -

    Para ayudarte más, aquí hay algunas preguntas frecuentes sobre 8 Ball Pool en PC Windows 10.

    - - -Pregunta -Respuesta - - -¿Es 8 Ball Pool gratis para jugar en PC Windows 10? -Sí, 8 Ball Pool es gratis para jugar en PC Windows 10. Sin embargo, es posible que tengas que pagar por algunos elementos o funciones del juego si quieres mejorar tu experiencia de juego. - - -¿Es seguro jugar 8 bolas en PC Windows 10? -Sí, 8 Ball Pool es seguro para jugar en PC Windows 10. Sin embargo, siempre debes descargar e instalar el juego desde fuentes confiables, como Google Play Store o Miniclip.com. También debes evitar usar hacks o trucos que puedan dañar tu dispositivo o cuenta. - - -¿Puedo jugar 8 bolas sin conexión en PC Windows 10? -No, no puedes jugar 8 Ball Pool sin conexión en PC Windows 10. Necesitas una conexión a Internet para jugar el juego en línea con otros jugadores. - - -¿Puedo transferir mi progreso de móvil a PC Windows 10? -Sí, puede transferir su progreso desde el móvil al PC Windows 10. Solo necesitas iniciar sesión con la misma cuenta de Miniclip o Facebook que usaste en tu dispositivo móvil. - - -¿Puedo jugar con mis amigos en PC Windows 10? -Sí, puedes jugar con tus amigos en PC Windows 10. Solo tienes que invitarlos a unirse a tu juego o aceptar sus invitaciones. También puedes chatear con ellos usando la función de chat en el juego. - -

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Alto 39s Aventura Apk Ios.md b/spaces/Benson/text-generation/Examples/Alto 39s Aventura Apk Ios.md deleted file mode 100644 index 9be414d2787af205ddba044df11cc7f09ac07535..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Alto 39s Aventura Apk Ios.md +++ /dev/null @@ -1,94 +0,0 @@ -
    -

    Alto’s Adventure apk ios: Una odisea de snowboard sereno

    -

    Si usted está buscando un juego relajante y hermoso para jugar en su iPhone o iPad, es posible que desee echa un vistazo a la aventura de Alto. Este es un juego que combina los elementos de un juego de plataformas 2D y un corredor sin fin, con un tema de snowboard único. En este artículo, te diremos qué es Alto’s Adventure, cómo descargarlo e instalarlo, por qué deberías jugarlo, y algunos consejos y trucos para ayudarte a disfrutarlo más.

    -

    alto 39;s aventura apk ios


    DOWNLOADhttps://bltlly.com/2v6MlS



    -

    ¿Qué es la aventura de Alto?

    -

    Una breve introducción al juego y sus características

    -

    Alto’s Adventure es un juego desarrollado por Snowman, un pequeño estudio independiente con sede en Toronto, Canadá. Fue lanzado en 2015 para dispositivos iOS, y más tarde para Android, Kindle Fire, Windows y Mac. El juego ha recibido elogios de la crítica y numerosos premios por su arte, música y juego.

    -

    El juego sigue el viaje de Alto, un joven pastor que vive en un pueblo de montaña. Un día, sus llamas escapan de su corral y corren por las laderas. Alto decide perseguirlos en su tabla de snowboard, junto con sus amigos que tienen diferentes habilidades y habilidades. En el camino, se encuentran con varios obstáculos, como rocas, abismos, ancianos, tormentas, y más.

    -

    Las características del juego:

    - -

    Cómo descargar e instalar apk de aventura de Alto ios

    -

    Si quieres jugar Alto’s Adventure en tu dispositivo iOS, tienes dos opciones:

    -
      -
    1. Puedes comprarlo en la App Store por $4.99. Esta es la forma oficial y más segura de obtener el juego. Necesitará un ID de Apple y un dispositivo compatible con iOS 9.0 o posterior. También puede descargarlo en su Mac si tiene macOS 11 o posterior.
    2. -
    3. Puede descargarlo de un sitio web de terceros como un archivo apk. Esta es una forma no oficial y arriesgada de obtener el juego. Necesitará un dispositivo jailbreak o un emulador para ejecutarlo. También puede encontrar malware, virus u otros problemas que podrían dañar su dispositivo o comprometer su privacidad. No recomendamos esta opción.
    4. -
    -

    Por qué deberías jugar la aventura de Alto

    Por qué deberías jugar la aventura de Alto

    -

    Los beneficios de jugar la aventura de Alto

    -

    La aventura de Alto es más que un juego. Es una experiencia que puede enriquecer tu vida de muchas maneras. Estos son algunos de los beneficios de jugar Alto’s Adventure:

    -

    -

    Juego relajante e inmersivo

    -

    Una de las principales atracciones de Alto’s Adventure es su juego relajante y cautivador. El juego no tiene temporizadores, puntuaciones ni vidas. Puedes jugar a tu propio ritmo y disfrutar del viaje. El juego también tiene un modo zen, donde puedes explorar el mundo sin objetivos ni distracciones. El juego está diseñado para ayudarle a relajarse y relajarse del estrés y el ruido de la vida cotidiana.

    -

    Imágenes hermosas y dinámicas

    - -

    Metas desafiantes y gratificantes

    -

    Si estás buscando algún reto y emoción, Alto’s Adventure también tiene eso. El juego tiene 180 objetivos artesanales que ponen a prueba tus habilidades y creatividad. Puedes intentar realizar diferentes trucos, combos, grinds, rebotes y más. También puedes desbloquear y usar seis snowboarders diferentes, cada uno con sus propios atributos y habilidades especiales. También puedes adquirir y usar el traje de alas, que añade una nueva dimensión al juego. El juego es divertido y satisfactorio para jugar.

    -

    Los inconvenientes de jugar la aventura de Alto

    -

    Por supuesto, ningún juego es perfecto, y la aventura de Alto también tiene algunos inconvenientes que usted debe ser consciente de. Aquí están algunos de los inconvenientes de jugar la aventura de Alto:

    -

    Requiere iOS 9.0 o posterior

    -

    Si quieres jugar Alto’s Adventure en tu dispositivo iOS, tendrás que tener iOS 9.0 o posterior instalado en él. Esto significa que algunos dispositivos antiguos pueden no ser capaces de ejecutar el juego sin problemas o en absoluto. También es posible que necesite actualizar su dispositivo regularmente para garantizar la compatibilidad y el rendimiento.

    -

    Costos $4.99 en el App Store

    -

    Otro inconveniente de jugar Alto’s Adventure es que no es un juego gratis. Tendrás que pagar $4.99 en la App Store para descargarlo e instalarlo en tu dispositivo. Esto puede no ser un gran problema para algunas personas, pero puede ser una barrera para otros que tienen un presupuesto ajustado o prefieren los juegos gratis.

    -

    Puede consumir batería y espacio de almacenamiento

    -

    Un inconveniente final de jugar la aventura de Alto es que puede consumir mucha batería y espacio de almacenamiento en su dispositivo. El juego tiene gráficos de alta calidad y efectos de sonido, que requieren mucha energía y memoria para ejecutarse. Es posible que tenga que cargar su dispositivo con frecuencia o despejar algún espacio en él para evitar cualquier problema.

    -

    Consejos y trucos para jugar la aventura de Alto

    -

    Cómo dominar el sistema de trucos de un botón

    - - -

    Cómo encadenar combos y aumentar su puntuación

    -

    Una de las formas de aumentar tu puntuación y velocidad en la aventura de Alto es encadenar combos. Un combo es cuando realizas dos o más trucos en sucesión sin tocar el suelo o estrellarse. Aquí hay algunos consejos sobre cómo encadenar los combos:

    - -

    Cómo desbloquear y usar el traje de ala

    -

    Uno de los mejores artículos en la aventura de Alto es el traje de ala, que le permite volar en el aire y realizar acrobacias increíbles. Aquí hay algunos consejos sobre cómo desbloquear y usar el traje de ala:

    - -

    Conclusión

    -

    Alto’s Adventure es un juego que ofrece una odisea de snowboard serena y hermosa que cualquiera puede disfrutar. Si quieres relajarte y explorar el mundo, o desafiarte a ti mismo y dominar los trucos, Alto’s Adventure tiene algo para ti. El juego tiene un sencillo pero elegante sistema de truco de un botón, un diseño visual impresionante y dinámico, y una banda sonora original e inmersiva. El juego está disponible para dispositivos iOS por $4.99 en la App Store, o como un archivo apk de sitios web de terceros. Sin embargo, recomendamos comprarlo de la fuente oficial para evitar cualquier riesgo o problema. Si buscas un juego que pueda calmar tu mente y deleitar tus sentidos, Alto’s Adventure es un juego que debes probar.

    -

    Preguntas frecuentes

    -

    Aquí hay algunas preguntas frecuentes sobre la aventura de Alto:

1. Q: How many levels are there in Alto's Adventure?
   A: There are 60 levels in Alto's Adventure, each with three goals to complete. You can replay any level at any time to improve your score or finish goals you missed.
2. Q: How can I get more coins in Alto's Adventure?
   A: You can get more coins in Alto's Adventure by collecting them on the slopes, completing goals, watching ads, or buying them with real money.
3. Q: What are the elders in Alto's Adventure?
   A: The elders are angry villagers who chase you on their own snowboards. They appear at random after level 10 and can knock you off your board if they catch you. You can avoid them by jumping over them, grinding on rails or ropes above them, or using power-ups.
4. Q: What are the secrets in Alto's Adventure?
   A: There are a few secrets in Alto's Adventure that you can discover as you play. For example, there is a hidden workshop where Izel builds inventions, and a mysterious temple where Maya practices flips. There are also some Easter eggs and references to other games and media.

    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Apk Descargar Tekken 3 35 Mb.md b/spaces/Benson/text-generation/Examples/Apk Descargar Tekken 3 35 Mb.md deleted file mode 100644 index 799ff3edf9c14e55d27a46ebf94c58db61026591..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Apk Descargar Tekken 3 35 Mb.md +++ /dev/null @@ -1,54 +0,0 @@ -
Download APK Tekken 3 35 MB: How to Play the Classic Fighting Game on Your Android Device

Introduction

Tekken 3 is one of the most popular and influential fighting games of all time. It was released in 1997 for arcades and in 1998 for the PlayStation. It features a large and diverse roster of characters, each with their own unique fighting style and story. It also introduced a new 3D movement system that allows players to sidestep into or out of the background. Tekken 3 has been praised for its fast and fluid gameplay, its impressive graphics and sound effects, and its varied modes and challenges.

apk download tekken 3 35 mb

Download Zip: https://bltlly.com/2v6MDg

But what if you want to play Tekken 3 on your Android device? Unfortunately, the game is not officially available on the Google Play Store. However, there is a way to enjoy this classic on your smartphone or tablet: you can download a Tekken 3 APK file and install it on your device. An APK file is an application package file that contains all the data and files needed to run an app. By downloading a Tekken 3 APK file, you can bypass the Play Store restrictions and play the game without any problems.
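As a side note that is not part of the original guide: if you would rather install the downloaded file from a computer instead of on the device itself, Android's `adb` tool can sideload an APK over USB. The sketch below is a minimal illustration using Python's `subprocess` module; it assumes `adb` is installed and on your PATH, USB debugging is enabled on the device, and the downloaded file is named `tekken3.apk` (a hypothetical file name).

```python
# Minimal sketch: sideload a locally downloaded APK with adb.
# Assumptions (not from the original article): adb is installed and on PATH,
# USB debugging is enabled, and "tekken3.apk" is the hypothetical file name.
import subprocess


def sideload_apk(apk_path: str) -> None:
    # "adb install -r" installs (or reinstalls) the APK on the connected device.
    result = subprocess.run(
        ["adb", "install", "-r", apk_path],
        capture_output=True,
        text=True,
    )
    # adb prints "Success" on a successful install; otherwise show the error.
    print(result.stdout.strip() or result.stderr.strip())


if __name__ == "__main__":
    sideload_apk("tekken3.apk")
```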

In this article, we will show you how to download and install the Tekken 3 APK on your Android device. We will also cover the features of the Tekken 3 APK and give you some tips and tricks for playing the game. So, if you are ready to relive the nostalgia of Tekken 3, keep reading!

Features of the Tekken 3 APK

The Tekken 3 APK is a modified version of the original game that has been optimized for Android devices. It has all the features and content of the PlayStation version, plus some extra benefits. Here are some of the features of the Tekken 3 APK:

3D gameplay and graphics

Character diversity

The Tekken 3 APK features a total of 23 characters, including some newcomers who debuted in this game. You can choose from fighters such as Jin Kazama, Ling Xiaoyu, Bryan Fury, Eddy Gordo, Hwoarang, Forest Law, Julia Chang, and more. Each character has their own personality, backstory, and fighting style. You can also unlock two secret characters: Dr. Bosconovitch and Gon.

Various modes and challenges

The Tekken 3 APK offers more than just the standard Arcade and Versus modes. You can also play modes such as Time Attack, Survival, Team Battle, and Practice, each with its own objectives and rewards. You can also try the new Tekken Force mode, where you fight waves of enemies in a side-scrolling layout, or the Tekken Ball bonus mode, where you hit a beach ball with your attacks. These modes add more variety and fun to the game.

Multiplayer support and online ranking

The Tekken 3 APK lets you play with your friends or with other players online. You can connect your device to another one over Bluetooth or Wi-Fi and enjoy a one-on-one match, or compete with players from around the world in the online ranking mode, where you earn points and climb the leaderboard. You can also chat with other players and share your tips and strategies.

How to download and install the Tekken 3 APK

Downloading and installing the Tekken 3 APK is quick and simple. Just follow these steps:

Step 1: Download the APK file from a trusted source

The first thing you need to do is download the Tekken 3 APK file from a reliable and safe source. You can use the link below to download the file, which is only 35 MB in size. Make sure you have enough storage space on your device before downloading it.

    -#include - -#ifdef __CUDACC__ -// Designates functions callable from the host (CPU) and the device (GPU) -#define HOST_DEVICE __host__ __device__ -#define HOST_DEVICE_INLINE HOST_DEVICE __forceinline__ -#else -#include -#define HOST_DEVICE -#define HOST_DEVICE_INLINE HOST_DEVICE inline -#endif - -namespace detectron2 { - -namespace { - -template -struct RotatedBox { - T x_ctr, y_ctr, w, h, a; -}; - -template -struct Point { - T x, y; - HOST_DEVICE_INLINE Point(const T& px = 0, const T& py = 0) : x(px), y(py) {} - HOST_DEVICE_INLINE Point operator+(const Point& p) const { - return Point(x + p.x, y + p.y); - } - HOST_DEVICE_INLINE Point& operator+=(const Point& p) { - x += p.x; - y += p.y; - return *this; - } - HOST_DEVICE_INLINE Point operator-(const Point& p) const { - return Point(x - p.x, y - p.y); - } - HOST_DEVICE_INLINE Point operator*(const T coeff) const { - return Point(x * coeff, y * coeff); - } -}; - -template -HOST_DEVICE_INLINE T dot_2d(const Point& A, const Point& B) { - return A.x * B.x + A.y * B.y; -} - -template -HOST_DEVICE_INLINE T cross_2d(const Point& A, const Point& B) { - return A.x * B.y - B.x * A.y; -} - -template -HOST_DEVICE_INLINE void get_rotated_vertices( - const RotatedBox& box, - Point (&pts)[4]) { - // M_PI / 180. == 0.01745329251 - double theta = box.a * 0.01745329251; - T cosTheta2 = (T)cos(theta) * 0.5f; - T sinTheta2 = (T)sin(theta) * 0.5f; - - // y: top --> down; x: left --> right - pts[0].x = box.x_ctr - sinTheta2 * box.h - cosTheta2 * box.w; - pts[0].y = box.y_ctr + cosTheta2 * box.h - sinTheta2 * box.w; - pts[1].x = box.x_ctr + sinTheta2 * box.h - cosTheta2 * box.w; - pts[1].y = box.y_ctr - cosTheta2 * box.h - sinTheta2 * box.w; - pts[2].x = 2 * box.x_ctr - pts[0].x; - pts[2].y = 2 * box.y_ctr - pts[0].y; - pts[3].x = 2 * box.x_ctr - pts[1].x; - pts[3].y = 2 * box.y_ctr - pts[1].y; -} - -template -HOST_DEVICE_INLINE int get_intersection_points( - const Point (&pts1)[4], - const Point (&pts2)[4], - Point (&intersections)[24]) { - // Line vector - // A line from p1 to p2 is: p1 + (p2-p1)*t, t=[0,1] - Point vec1[4], vec2[4]; - for (int i = 0; i < 4; i++) { - vec1[i] = pts1[(i + 1) % 4] - pts1[i]; - vec2[i] = pts2[(i + 1) % 4] - pts2[i]; - } - - // Line test - test all line combos for intersection - int num = 0; // number of intersections - for (int i = 0; i < 4; i++) { - for (int j = 0; j < 4; j++) { - // Solve for 2x2 Ax=b - T det = cross_2d(vec2[j], vec1[i]); - - // This takes care of parallel lines - if (fabs(det) <= 1e-14) { - continue; - } - - auto vec12 = pts2[j] - pts1[i]; - - T t1 = cross_2d(vec2[j], vec12) / det; - T t2 = cross_2d(vec1[i], vec12) / det; - - if (t1 >= 0.0f && t1 <= 1.0f && t2 >= 0.0f && t2 <= 1.0f) { - intersections[num++] = pts1[i] + vec1[i] * t1; - } - } - } - - // Check for vertices of rect1 inside rect2 - { - const auto& AB = vec2[0]; - const auto& DA = vec2[3]; - auto ABdotAB = dot_2d(AB, AB); - auto ADdotAD = dot_2d(DA, DA); - for (int i = 0; i < 4; i++) { - // assume ABCD is the rectangle, and P is the point to be judged - // P is inside ABCD iff. 
P's projection on AB lies within AB - // and P's projection on AD lies within AD - - auto AP = pts1[i] - pts2[0]; - - auto APdotAB = dot_2d(AP, AB); - auto APdotAD = -dot_2d(AP, DA); - - if ((APdotAB >= 0) && (APdotAD >= 0) && (APdotAB <= ABdotAB) && - (APdotAD <= ADdotAD)) { - intersections[num++] = pts1[i]; - } - } - } - - // Reverse the check - check for vertices of rect2 inside rect1 - { - const auto& AB = vec1[0]; - const auto& DA = vec1[3]; - auto ABdotAB = dot_2d(AB, AB); - auto ADdotAD = dot_2d(DA, DA); - for (int i = 0; i < 4; i++) { - auto AP = pts2[i] - pts1[0]; - - auto APdotAB = dot_2d(AP, AB); - auto APdotAD = -dot_2d(AP, DA); - - if ((APdotAB >= 0) && (APdotAD >= 0) && (APdotAB <= ABdotAB) && - (APdotAD <= ADdotAD)) { - intersections[num++] = pts2[i]; - } - } - } - - return num; -} - -template -HOST_DEVICE_INLINE int convex_hull_graham( - const Point (&p)[24], - const int& num_in, - Point (&q)[24], - bool shift_to_zero = false) { - assert(num_in >= 2); - - // Step 1: - // Find point with minimum y - // if more than 1 points have the same minimum y, - // pick the one with the minimum x. - int t = 0; - for (int i = 1; i < num_in; i++) { - if (p[i].y < p[t].y || (p[i].y == p[t].y && p[i].x < p[t].x)) { - t = i; - } - } - auto& start = p[t]; // starting point - - // Step 2: - // Subtract starting point from every points (for sorting in the next step) - for (int i = 0; i < num_in; i++) { - q[i] = p[i] - start; - } - - // Swap the starting point to position 0 - auto tmp = q[0]; - q[0] = q[t]; - q[t] = tmp; - - // Step 3: - // Sort point 1 ~ num_in according to their relative cross-product values - // (essentially sorting according to angles) - // If the angles are the same, sort according to their distance to origin - T dist[24]; - for (int i = 0; i < num_in; i++) { - dist[i] = dot_2d(q[i], q[i]); - } - -#ifdef __CUDACC__ - // CUDA version - // In the future, we can potentially use thrust - // for sorting here to improve speed (though not guaranteed) - for (int i = 1; i < num_in - 1; i++) { - for (int j = i + 1; j < num_in; j++) { - T crossProduct = cross_2d(q[i], q[j]); - if ((crossProduct < -1e-6) || - (fabs(crossProduct) < 1e-6 && dist[i] > dist[j])) { - auto q_tmp = q[i]; - q[i] = q[j]; - q[j] = q_tmp; - auto dist_tmp = dist[i]; - dist[i] = dist[j]; - dist[j] = dist_tmp; - } - } - } -#else - // CPU version - std::sort( - q + 1, q + num_in, [](const Point& A, const Point& B) -> bool { - T temp = cross_2d(A, B); - if (fabs(temp) < 1e-6) { - return dot_2d(A, A) < dot_2d(B, B); - } else { - return temp > 0; - } - }); -#endif - - // Step 4: - // Make sure there are at least 2 points (that don't overlap with each other) - // in the stack - int k; // index of the non-overlapped second point - for (k = 1; k < num_in; k++) { - if (dist[k] > 1e-8) { - break; - } - } - if (k == num_in) { - // We reach the end, which means the convex hull is just one point - q[0] = p[t]; - return 1; - } - q[1] = q[k]; - int m = 2; // 2 points in the stack - // Step 5: - // Finally we can start the scanning process. 
- // When a non-convex relationship between the 3 points is found - // (either concave shape or duplicated points), - // we pop the previous point from the stack - // until the 3-point relationship is convex again, or - // until the stack only contains two points - for (int i = k + 1; i < num_in; i++) { - while (m > 1 && cross_2d(q[i] - q[m - 2], q[m - 1] - q[m - 2]) >= 0) { - m--; - } - q[m++] = q[i]; - } - - // Step 6 (Optional): - // In general sense we need the original coordinates, so we - // need to shift the points back (reverting Step 2) - // But if we're only interested in getting the area/perimeter of the shape - // We can simply return. - if (!shift_to_zero) { - for (int i = 0; i < m; i++) { - q[i] += start; - } - } - - return m; -} - -template -HOST_DEVICE_INLINE T polygon_area(const Point (&q)[24], const int& m) { - if (m <= 2) { - return 0; - } - - T area = 0; - for (int i = 1; i < m - 1; i++) { - area += fabs(cross_2d(q[i] - q[0], q[i + 1] - q[0])); - } - - return area / 2.0; -} - -template -HOST_DEVICE_INLINE T rotated_boxes_intersection( - const RotatedBox& box1, - const RotatedBox& box2) { - // There are up to 4 x 4 + 4 + 4 = 24 intersections (including dups) returned - // from rotated_rect_intersection_pts - Point intersectPts[24], orderedPts[24]; - - Point pts1[4]; - Point pts2[4]; - get_rotated_vertices(box1, pts1); - get_rotated_vertices(box2, pts2); - - int num = get_intersection_points(pts1, pts2, intersectPts); - - if (num <= 2) { - return 0.0; - } - - // Convex Hull to order the intersection points in clockwise order and find - // the contour area. - int num_convex = convex_hull_graham(intersectPts, num, orderedPts, true); - return polygon_area(orderedPts, num_convex); -} - -} // namespace - -template -HOST_DEVICE_INLINE T -single_box_iou_rotated(T const* const box1_raw, T const* const box2_raw) { - // shift center to the middle point to achieve higher precision in result - RotatedBox box1, box2; - auto center_shift_x = (box1_raw[0] + box2_raw[0]) / 2.0; - auto center_shift_y = (box1_raw[1] + box2_raw[1]) / 2.0; - box1.x_ctr = box1_raw[0] - center_shift_x; - box1.y_ctr = box1_raw[1] - center_shift_y; - box1.w = box1_raw[2]; - box1.h = box1_raw[3]; - box1.a = box1_raw[4]; - box2.x_ctr = box2_raw[0] - center_shift_x; - box2.y_ctr = box2_raw[1] - center_shift_y; - box2.w = box2_raw[2]; - box2.h = box2_raw[3]; - box2.a = box2_raw[4]; - - const T area1 = box1.w * box1.h; - const T area2 = box2.w * box2.h; - if (area1 < 1e-14 || area2 < 1e-14) { - return 0.f; - } - - const T intersection = rotated_boxes_intersection(box1, box2); - const T iou = intersection / (area1 + area2 - intersection); - return iou; -} - -} // namespace detectron2 diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/dev/README.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/dev/README.md deleted file mode 100644 index cc0d3297b2d436f279c3546c16c86f296402f6c5..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/dev/README.md +++ /dev/null @@ -1,7 +0,0 @@ - -## Some scripts for developers to use, include: - -- `linter.sh`: lint the codebase before commit -- `run_{inference,instant}_tests.sh`: run inference/training for a few iterations. - Note that these tests require 2 GPUs. -- `parse_results.sh`: parse results from a log file. 
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/utils/logger.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/utils/logger.py deleted file mode 100644 index e3fa45e0c0218bdd2e79c08b0d8ff83abc3e4308..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/utils/logger.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import logging - - -def verbosity_to_level(verbosity): - if verbosity is not None: - if verbosity == 0: - return logging.WARNING - elif verbosity == 1: - return logging.INFO - elif verbosity >= 2: - return logging.DEBUG - return logging.WARNING diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/query_db.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/query_db.py deleted file mode 100644 index 690f1518b8df722f0efda158e8a0d467c96983d9..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/query_db.py +++ /dev/null @@ -1,249 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -import argparse -import logging -import os -import sys -from timeit import default_timer as timer -from typing import Any, ClassVar, Dict, List -import torch - -from detectron2.data.catalog import DatasetCatalog -from detectron2.utils.logger import setup_logger - -from densepose.structures import DensePoseDataRelative -from densepose.utils.dbhelper import EntrySelector -from densepose.utils.logger import verbosity_to_level -from densepose.vis.base import CompoundVisualizer -from densepose.vis.bounding_box import BoundingBoxVisualizer -from densepose.vis.densepose import ( - DensePoseDataCoarseSegmentationVisualizer, - DensePoseDataPointsIVisualizer, - DensePoseDataPointsUVisualizer, - DensePoseDataPointsVisualizer, - DensePoseDataPointsVVisualizer, -) - -DOC = """Query DB - a tool to print / visualize data from a database -""" - -LOGGER_NAME = "query_db" - -logger = logging.getLogger(LOGGER_NAME) - -_ACTION_REGISTRY: Dict[str, "Action"] = {} - - -class Action(object): - @classmethod - def add_arguments(cls: type, parser: argparse.ArgumentParser): - parser.add_argument( - "-v", - "--verbosity", - action="count", - help="Verbose mode. Multiple -v options increase the verbosity.", - ) - - -def register_action(cls: type): - """ - Decorator for action classes to automate action registration - """ - global _ACTION_REGISTRY - _ACTION_REGISTRY[cls.COMMAND] = cls - return cls - - -class EntrywiseAction(Action): - @classmethod - def add_arguments(cls: type, parser: argparse.ArgumentParser): - super(EntrywiseAction, cls).add_arguments(parser) - parser.add_argument( - "dataset", metavar="", help="Dataset name (e.g. densepose_coco_2014_train)" - ) - parser.add_argument( - "selector", - metavar="", - help="Dataset entry selector in the form field1[:type]=value1[," - "field2[:type]=value_min-value_max...] 
which selects all " - "entries from the dataset that satisfy the constraints", - ) - parser.add_argument( - "--max-entries", metavar="N", help="Maximum number of entries to process", type=int - ) - - @classmethod - def execute(cls: type, args: argparse.Namespace): - dataset = setup_dataset(args.dataset) - entry_selector = EntrySelector.from_string(args.selector) - context = cls.create_context(args) - if args.max_entries is not None: - for _, entry in zip(range(args.max_entries), dataset): - if entry_selector(entry): - cls.execute_on_entry(entry, context) - else: - for entry in dataset: - if entry_selector(entry): - cls.execute_on_entry(entry, context) - - @classmethod - def create_context(cls: type, args: argparse.Namespace) -> Dict[str, Any]: - context = {} - return context - - -@register_action -class PrintAction(EntrywiseAction): - """ - Print action that outputs selected entries to stdout - """ - - COMMAND: ClassVar[str] = "print" - - @classmethod - def add_parser(cls: type, subparsers: argparse._SubParsersAction): - parser = subparsers.add_parser(cls.COMMAND, help="Output selected entries to stdout. ") - cls.add_arguments(parser) - parser.set_defaults(func=cls.execute) - - @classmethod - def add_arguments(cls: type, parser: argparse.ArgumentParser): - super(PrintAction, cls).add_arguments(parser) - - @classmethod - def execute_on_entry(cls: type, entry: Dict[str, Any], context: Dict[str, Any]): - import pprint - - printer = pprint.PrettyPrinter(indent=2, width=200, compact=True) - printer.pprint(entry) - - -@register_action -class ShowAction(EntrywiseAction): - """ - Show action that visualizes selected entries on an image - """ - - COMMAND: ClassVar[str] = "show" - VISUALIZERS: ClassVar[Dict[str, object]] = { - "dp_segm": DensePoseDataCoarseSegmentationVisualizer(), - "dp_i": DensePoseDataPointsIVisualizer(), - "dp_u": DensePoseDataPointsUVisualizer(), - "dp_v": DensePoseDataPointsVVisualizer(), - "dp_pts": DensePoseDataPointsVisualizer(), - "bbox": BoundingBoxVisualizer(), - } - - @classmethod - def add_parser(cls: type, subparsers: argparse._SubParsersAction): - parser = subparsers.add_parser(cls.COMMAND, help="Visualize selected entries") - cls.add_arguments(parser) - parser.set_defaults(func=cls.execute) - - @classmethod - def add_arguments(cls: type, parser: argparse.ArgumentParser): - super(ShowAction, cls).add_arguments(parser) - parser.add_argument( - "visualizations", - metavar="", - help="Comma separated list of visualizations, possible values: " - "[{}]".format(",".join(sorted(cls.VISUALIZERS.keys()))), - ) - parser.add_argument( - "--output", - metavar="", - default="output.png", - help="File name to save output to", - ) - - @classmethod - def execute_on_entry(cls: type, entry: Dict[str, Any], context: Dict[str, Any]): - import cv2 - import numpy as np - - image_fpath = entry["file_name"] - image = cv2.imread(image_fpath, cv2.IMREAD_GRAYSCALE) - image = np.tile(image[:, :, np.newaxis], [1, 1, 3]) - datas = cls._extract_data_for_visualizers_from_entry(context["vis_specs"], entry) - visualizer = context["visualizer"] - image_vis = visualizer.visualize(image, datas) - entry_idx = context["entry_idx"] + 1 - out_fname = cls._get_out_fname(entry_idx, context["out_fname"]) - cv2.imwrite(out_fname, image_vis) - logger.info(f"Output saved to {out_fname}") - context["entry_idx"] += 1 - - @classmethod - def _get_out_fname(cls: type, entry_idx: int, fname_base: str): - base, ext = os.path.splitext(fname_base) - return base + ".{0:04d}".format(entry_idx) + ext - - @classmethod - def 
create_context(cls: type, args: argparse.Namespace) -> Dict[str, Any]: - vis_specs = args.visualizations.split(",") - visualizers = [] - for vis_spec in vis_specs: - vis = cls.VISUALIZERS[vis_spec] - visualizers.append(vis) - context = { - "vis_specs": vis_specs, - "visualizer": CompoundVisualizer(visualizers), - "out_fname": args.output, - "entry_idx": 0, - } - return context - - @classmethod - def _extract_data_for_visualizers_from_entry( - cls: type, vis_specs: List[str], entry: Dict[str, Any] - ): - dp_list = [] - bbox_list = [] - for annotation in entry["annotations"]: - is_valid, _ = DensePoseDataRelative.validate_annotation(annotation) - if not is_valid: - continue - bbox = torch.as_tensor(annotation["bbox"]) - bbox_list.append(bbox) - dp_data = DensePoseDataRelative(annotation) - dp_list.append(dp_data) - datas = [] - for vis_spec in vis_specs: - datas.append(bbox_list if "bbox" == vis_spec else (bbox_list, dp_list)) - return datas - - -def setup_dataset(dataset_name): - logger.info("Loading dataset {}".format(dataset_name)) - start = timer() - dataset = DatasetCatalog.get(dataset_name) - stop = timer() - logger.info("Loaded dataset {} in {:.3f}s".format(dataset_name, stop - start)) - return dataset - - -def create_argument_parser() -> argparse.ArgumentParser: - parser = argparse.ArgumentParser( - description=DOC, - formatter_class=lambda prog: argparse.HelpFormatter(prog, max_help_position=120), - ) - parser.set_defaults(func=lambda _: parser.print_help(sys.stdout)) - subparsers = parser.add_subparsers(title="Actions") - for _, action in _ACTION_REGISTRY.items(): - action.add_parser(subparsers) - return parser - - -def main(): - parser = create_argument_parser() - args = parser.parse_args() - verbosity = args.verbosity if hasattr(args, "verbosity") else None - global logger - logger = setup_logger(name=LOGGER_NAME) - logger.setLevel(verbosity_to_level(verbosity)) - args.func(args) - - -if __name__ == "__main__": - main() diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/grid-feats-vqa/extract_grid_feature.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/grid-feats-vqa/extract_grid_feature.py deleted file mode 100644 index a33d6e46579ec2be1311fd86dca42577d53da47f..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/grid-feats-vqa/extract_grid_feature.py +++ /dev/null @@ -1,93 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -""" -Grid features extraction script. 
-""" -import argparse -import os -import torch -import tqdm -from fvcore.common.file_io import PathManager - -from detectron2.checkpoint import DetectionCheckpointer -from detectron2.config import get_cfg -from detectron2.engine import default_setup -from detectron2.evaluation import inference_context -from detectron2.modeling import build_model - -from grid_feats import ( - add_attribute_config, - build_detection_test_loader_with_attributes, -) - -# A simple mapper from object detection dataset to VQA dataset names -dataset_to_folder_mapper = {} -dataset_to_folder_mapper['coco_2014_train'] = 'train2014' -dataset_to_folder_mapper['coco_2014_val'] = 'val2014' -# One may need to change the Detectron2 code to support coco_2015_test -# insert "coco_2015_test": ("coco/test2015", "coco/annotations/image_info_test2015.json"), -# at: https://github.com/facebookresearch/detectron2/blob/master/detectron2/data/datasets/builtin.py#L36 -dataset_to_folder_mapper['coco_2015_test'] = 'test2015' - -def extract_grid_feature_argument_parser(): - parser = argparse.ArgumentParser(description="Grid feature extraction") - parser.add_argument("--config-file", default="", metavar="FILE", help="path to config file") - parser.add_argument("--dataset", help="name of the dataset", default="coco_2014_train", - choices=['coco_2014_train', 'coco_2014_val', 'coco_2015_test']) - parser.add_argument( - "opts", - help="Modify config options using the command-line", - default=None, - nargs=argparse.REMAINDER, - ) - return parser - -def extract_grid_feature_on_dataset(model, data_loader, dump_folder): - for idx, inputs in enumerate(tqdm.tqdm(data_loader)): - with torch.no_grad(): - image_id = inputs[0]['image_id'] - file_name = '%d.pth' % image_id - # compute features - images = model.preprocess_image(inputs) - features = model.backbone(images.tensor) - outputs = model.roi_heads.get_conv5_features(features) - with PathManager.open(os.path.join(dump_folder, file_name), "wb") as f: - # save as CPU tensors - torch.save(outputs.cpu(), f) - -def do_feature_extraction(cfg, model, dataset_name): - with inference_context(model): - dump_folder = os.path.join(cfg.OUTPUT_DIR, "features", dataset_to_folder_mapper[dataset_name]) - PathManager.mkdirs(dump_folder) - data_loader = build_detection_test_loader_with_attributes(cfg, dataset_name) - extract_grid_feature_on_dataset(model, data_loader, dump_folder) - -def setup(args): - """ - Create configs and perform basic setups. 
- """ - cfg = get_cfg() - add_attribute_config(cfg) - cfg.merge_from_file(args.config_file) - cfg.merge_from_list(args.opts) - # force the final residual block to have dilations 1 - cfg.MODEL.RESNETS.RES5_DILATION = 1 - cfg.freeze() - default_setup(cfg, args) - return cfg - - -def main(args): - cfg = setup(args) - model = build_model(cfg) - DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR).resume_or_load( - cfg.MODEL.WEIGHTS, resume=True - ) - do_feature_extraction(cfg, model, args.dataset) - - -if __name__ == "__main__": - args = extract_grid_feature_argument_parser().parse_args() - print("Command Line Args:", args) - main(args) diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/functional/actor.h b/spaces/CVPR/LIVE/thrust/thrust/detail/functional/actor.h deleted file mode 100644 index 01e8d5cd358cc2e81aca079dde1c9c8639ad12ca..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/functional/actor.h +++ /dev/null @@ -1,156 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -// Portions of this code are derived from -// -// Manjunath Kudlur's Carbon library -// -// and -// -// Based on Boost.Phoenix v1.2 -// Copyright (c) 2001-2002 Joel de Guzman - -#pragma once - -#include -#include -#include -#include -#include -#include -#include - -namespace thrust -{ -namespace detail -{ -namespace functional -{ - -// eval_ref is -// - T when T is a subclass of thrust::reference -// - T& otherwise -// This is used to let thrust::references pass through actor evaluations. -template -using eval_ref = typename std::conditional< - thrust::detail::is_wrapped_reference::value, T, T&>::type; - -template - struct apply_actor -{ - typedef typename Action::template result::type type; -}; - -template - struct actor - : Eval -{ - typedef Eval eval_type; - - __host__ __device__ - THRUST_CONSTEXPR actor(); - - __host__ __device__ - actor(const Eval &base); - - __host__ __device__ - typename apply_actor::type - operator()(void) const; - - template - __host__ __device__ - typename apply_actor...>>::type - operator()(Ts&&... 
ts) const; - - template - __host__ __device__ - typename assign_result::type - operator=(const T &_1) const; -}; // end actor - -// in general, as_actor should turn things into values -template - struct as_actor -{ - typedef value type; - - static inline __host__ __device__ type convert(const T &x) - { - return val(x); - } // end convert() -}; // end as_actor - -// specialization for things which are already actors -template - struct as_actor > -{ - typedef actor type; - - static inline __host__ __device__ const type &convert(const actor &x) - { - return x; - } // end convert() -}; // end as_actor - -template - typename as_actor::type - __host__ __device__ - make_actor(const T &x) -{ - return as_actor::convert(x); -} // end make_actor() - -} // end functional - -// provide specializations for result_of for nullary, unary, and binary invocations of actor -template - struct result_of_adaptable_function< - thrust::detail::functional::actor() - > -{ - typedef typename thrust::detail::functional::apply_actor< - thrust::detail::functional::actor, - thrust::null_type - >::type type; -}; // end result_of - -template - struct result_of_adaptable_function< - thrust::detail::functional::actor(Arg1) - > -{ - typedef typename thrust::detail::functional::apply_actor< - thrust::detail::functional::actor, - thrust::tuple - >::type type; -}; // end result_of - -template - struct result_of_adaptable_function< - thrust::detail::functional::actor(Arg1,Arg2) - > -{ - typedef typename thrust::detail::functional::apply_actor< - thrust::detail::functional::actor, - thrust::tuple - >::type type; -}; // end result_of - -} // end detail -} // end thrust - -#include - diff --git a/spaces/CVPR/WALT/configs/_base_/datasets/walt_people.py b/spaces/CVPR/WALT/configs/_base_/datasets/walt_people.py deleted file mode 100644 index 8ac50827efef253312971551ab55f1f26d72c7a7..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/configs/_base_/datasets/walt_people.py +++ /dev/null @@ -1,49 +0,0 @@ -dataset_type = 'WaltDataset' -data_root = 'data/cwalt_train/' -data_root_test = 'data/cwalt_test/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=8, - workers_per_gpu=8, - train=dict( - type=dataset_type, - ann_file=data_root + '/', - img_prefix=data_root + '/', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - ann_file=data_root_test + '/', - img_prefix=data_root_test + '/', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - ann_file=data_root_test + '/', - img_prefix=data_root_test + '/', - pipeline=test_pipeline)) -evaluation = dict(metric=['bbox', 'segm']) diff --git a/spaces/CVPR/WALT/configs/walt/walt_vehicle.py 
b/spaces/CVPR/WALT/configs/walt/walt_vehicle.py deleted file mode 100644 index 93c82d75f40543b1a900494e6b1921717dc7188e..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/configs/walt/walt_vehicle.py +++ /dev/null @@ -1,80 +0,0 @@ -_base_ = [ - '../_base_/models/occ_mask_rcnn_swin_fpn.py', - '../_base_/datasets/walt_vehicle.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] - -model = dict( - backbone=dict( - embed_dim=96, - depths=[2, 2, 6, 2], - num_heads=[3, 6, 12, 24], - window_size=7, - ape=False, - drop_path_rate=0.1, - patch_norm=True, - use_checkpoint=False - ), - neck=dict(in_channels=[96, 192, 384, 768])) - -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) - -# augmentation strategy originates from DETR / Sparse RCNN -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='AutoAugment', - policies=[ - [ - dict(type='Resize', - img_scale=[(480, 1333), (512, 1333), (544, 1333), (576, 1333), - (608, 1333), (640, 1333), (672, 1333), (704, 1333), - (736, 1333), (768, 1333), (800, 1333)], - multiscale_mode='value', - keep_ratio=True) - ], - [ - dict(type='Resize', - img_scale=[(400, 1333), (500, 1333), (600, 1333)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomCrop', - crop_type='absolute_range', - crop_size=(384, 600), - allow_negative_crop=True), - dict(type='Resize', - img_scale=[(480, 1333), (512, 1333), (544, 1333), - (576, 1333), (608, 1333), (640, 1333), - (672, 1333), (704, 1333), (736, 1333), - (768, 1333), (800, 1333)], - multiscale_mode='value', - override=True, - keep_ratio=True) - ] - ]), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -data = dict(train=dict(pipeline=train_pipeline)) - -optimizer = dict(_delete_=True, type='AdamW', lr=0.0001, betas=(0.9, 0.999), weight_decay=0.05, - paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.), - 'relative_position_bias_table': dict(decay_mult=0.), - 'norm': dict(decay_mult=0.)})) -lr_config = dict(step=[8, 11]) -runner = dict(type='EpochBasedRunnerAmp', max_epochs=12) - -# do not use mmdet version fp16 -fp16 = None -optimizer_config = dict( - type="DistOptimizerHook", - update_interval=1, - grad_clip=None, - coalesce=True, - bucket_size_mb=-1, - use_fp16=True, -) diff --git a/spaces/CVPR/WALT/test.py b/spaces/CVPR/WALT/test.py deleted file mode 100644 index 92332cd994d28041b285151d79a1dc1001749eba..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/test.py +++ /dev/null @@ -1,226 +0,0 @@ -import argparse -import os -import warnings - -import mmcv -import torch -from mmcv import Config, DictAction -from mmcv.cnn import fuse_conv_bn -from mmcv.parallel import MMDataParallel, MMDistributedDataParallel -from mmcv.runner import (get_dist_info, init_dist, load_checkpoint, - wrap_fp16_model) - -from mmdet.apis import multi_gpu_test, single_gpu_test -from walt.datasets import (build_dataloader, build_dataset, - replace_ImageToTensor) -from mmdet.models import build_detector - - -def parse_args(): - parser = argparse.ArgumentParser( - description='MMDet test (and eval) a model') - parser.add_argument('config', help='test config file path') - parser.add_argument('checkpoint', help='checkpoint file') - parser.add_argument('--out', 
help='output result file in pickle format') - parser.add_argument( - '--fuse-conv-bn', - action='store_true', - help='Whether to fuse conv and bn, this will slightly increase' - 'the inference speed') - parser.add_argument( - '--format-only', - action='store_true', - help='Format the output results without perform evaluation. It is' - 'useful when you want to format the result to a specific format and ' - 'submit it to the test server') - parser.add_argument( - '--eval', - type=str, - nargs='+', - help='evaluation metrics, which depends on the dataset, e.g., "bbox",' - ' "segm", "proposal" for COCO, and "mAP", "recall" for PASCAL VOC') - parser.add_argument('--show', action='store_true', help='show results') - parser.add_argument( - '--show-dir', help='directory where painted images will be saved') - parser.add_argument( - '--show-score-thr', - type=float, - default=0.3, - help='score threshold (default: 0.3)') - parser.add_argument( - '--gpu-collect', - action='store_true', - help='whether to use gpu to collect results.') - parser.add_argument( - '--tmpdir', - help='tmp directory used for collecting results from multiple ' - 'workers, available when gpu-collect is not specified') - parser.add_argument( - '--cfg-options', - nargs='+', - action=DictAction, - help='override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file. If the value to ' - 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' - 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" ' - 'Note that the quotation marks are necessary and that no white space ' - 'is allowed.') - parser.add_argument( - '--options', - nargs='+', - action=DictAction, - help='custom options for evaluation, the key-value pair in xxx=yyy ' - 'format will be kwargs for dataset.evaluate() function (deprecate), ' - 'change to --eval-options instead.') - parser.add_argument( - '--eval-options', - nargs='+', - action=DictAction, - help='custom options for evaluation, the key-value pair in xxx=yyy ' - 'format will be kwargs for dataset.evaluate() function') - parser.add_argument( - '--launcher', - choices=['none', 'pytorch', 'slurm', 'mpi'], - default='none', - help='job launcher') - parser.add_argument('--local_rank', type=int, default=0) - args = parser.parse_args() - if 'LOCAL_RANK' not in os.environ: - os.environ['LOCAL_RANK'] = str(args.local_rank) - - if args.options and args.eval_options: - raise ValueError( - '--options and --eval-options cannot be both ' - 'specified, --options is deprecated in favor of --eval-options') - if args.options: - warnings.warn('--options is deprecated in favor of --eval-options') - args.eval_options = args.options - return args - - -def main(): - args = parse_args() - - assert args.out or args.eval or args.format_only or args.show \ - or args.show_dir, \ - ('Please specify at least one operation (save/eval/format/show the ' - 'results / save the results) with the argument "--out", "--eval"' - ', "--format-only", "--show" or "--show-dir"') - - if args.eval and args.format_only: - raise ValueError('--eval and --format_only cannot be both specified') - - if args.out is not None and not args.out.endswith(('.pkl', '.pickle')): - raise ValueError('The output file must be a pkl file.') - - cfg = Config.fromfile(args.config) - if args.cfg_options is not None: - cfg.merge_from_dict(args.cfg_options) - # import modules from string list. 
- if cfg.get('custom_imports', None): - from mmcv.utils import import_modules_from_strings - import_modules_from_strings(**cfg['custom_imports']) - # set cudnn_benchmark - if cfg.get('cudnn_benchmark', False): - torch.backends.cudnn.benchmark = True - cfg.model.pretrained = None - if cfg.model.get('neck'): - if isinstance(cfg.model.neck, list): - for neck_cfg in cfg.model.neck: - if neck_cfg.get('rfp_backbone'): - if neck_cfg.rfp_backbone.get('pretrained'): - neck_cfg.rfp_backbone.pretrained = None - elif cfg.model.neck.get('rfp_backbone'): - if cfg.model.neck.rfp_backbone.get('pretrained'): - cfg.model.neck.rfp_backbone.pretrained = None - - # in case the test dataset is concatenated - samples_per_gpu = 7 - if isinstance(cfg.data.test, dict): - cfg.data.test.test_mode = True - samples_per_gpu = cfg.data.test.pop('samples_per_gpu', 1) - if samples_per_gpu > 1: - # Replace 'ImageToTensor' to 'DefaultFormatBundle' - cfg.data.test.pipeline = replace_ImageToTensor( - cfg.data.test.pipeline) - elif isinstance(cfg.data.test, list): - for ds_cfg in cfg.data.test: - ds_cfg.test_mode = True - samples_per_gpu = max( - [ds_cfg.pop('samples_per_gpu', 1) for ds_cfg in cfg.data.test]) - if samples_per_gpu > 1: - for ds_cfg in cfg.data.test: - ds_cfg.pipeline = replace_ImageToTensor(ds_cfg.pipeline) - - # init distributed env first, since logger depends on the dist info. - if args.launcher == 'none': - distributed = False - else: - distributed = True - init_dist(args.launcher, **cfg.dist_params) - - # build the dataloader - print(samples_per_gpu,cfg.data.workers_per_gpu,) - dataset = build_dataset(cfg.data.test) - data_loader = build_dataloader( - dataset, - samples_per_gpu=samples_per_gpu, - workers_per_gpu=cfg.data.workers_per_gpu, - dist=distributed, - shuffle=False) - - # build the model and load checkpoint - cfg.model.train_cfg = None - model = build_detector(cfg.model, test_cfg=cfg.get('test_cfg')) - fp16_cfg = cfg.get('fp16', None) - if fp16_cfg is not None: - wrap_fp16_model(model) - checkpoint = load_checkpoint(model, args.checkpoint, map_location='cpu') - if args.fuse_conv_bn: - model = fuse_conv_bn(model) - # old versions did not save class info in checkpoints, this walkaround is - # for backward compatibility - if 'CLASSES' in checkpoint.get('meta', {}): - model.CLASSES = checkpoint['meta']['CLASSES'] - else: - model.CLASSES = dataset.CLASSES - - if not distributed: - model = MMDataParallel(model, device_ids=[0]) - outputs = single_gpu_test(model, data_loader, args.show, args.show_dir, - args.show_score_thr) - else: - model = MMDistributedDataParallel( - model.cuda(), - device_ids=[torch.cuda.current_device()], - broadcast_buffers=False) - outputs = multi_gpu_test(model, data_loader, args.tmpdir, - args.gpu_collect) - import numpy as np - - rank, _ = get_dist_info() - if rank == 0: - if args.out: - print(f'\nwriting results to {args.out}') - mmcv.dump(outputs, args.out) - kwargs = {} if args.eval_options is None else args.eval_options - if args.format_only: - dataset.format_results(outputs, **kwargs) - if args.eval: - eval_kwargs = cfg.get('evaluation', {}).copy() - # hard-code way to remove EvalHook args - for key in [ - 'interval', 'tmpdir', 'start', 'gpu_collect', 'save_best', - 'rule' - ]: - eval_kwargs.pop(key, None) - eval_kwargs.update(dict(metric=args.eval, **kwargs)) - data_evaluated = dataset.evaluate(outputs, **eval_kwargs) - np.save(args.checkpoint+'_new1', data_evaluated) - print(data_evaluated) - - print(dataset.evaluate(outputs, **eval_kwargs)) - - -if __name__ == '__main__': - 
main() diff --git a/spaces/ChenyangSi/FreeU/app.py b/spaces/ChenyangSi/FreeU/app.py deleted file mode 100644 index e4cbc0d6db70281d5b81904aa5bb5f5cd1b9fca2..0000000000000000000000000000000000000000 --- a/spaces/ChenyangSi/FreeU/app.py +++ /dev/null @@ -1,243 +0,0 @@ -import gradio as gr -from PIL import Image -import torch - -from diffusers import DiffusionPipeline -from free_lunch_utils import register_free_upblock2d, register_free_crossattn_upblock2d -import gradio_user_history as gr_user_history - - -model_id = "stabilityai/stable-diffusion-2-1" -# model_id = "./stable-diffusion-2-1" -pip_2_1 = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) -pip_2_1 = pip_2_1.to("cuda") - -model_id = "stabilityai/stable-diffusion-xl-base-1.0" -pip_XL = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) -pip_XL = pip_XL.to("cuda") - -prompt_prev = None -sd_options_prev = None -seed_prev = None -sd_image_prev = None - -def infer(prompt, sd_options, seed, b1, b2, s1, s2, profile: gr.OAuthProfile | None): - global prompt_prev - global sd_options_prev - global seed_prev - global sd_image_prev - - if sd_options == 'SD2.1': - pip = pip_2_1 - elif sd_options == 'SDXL': - pip = pip_XL - else: - pip = pip_2_1 - - # pip = pip_2_1 - - run_baseline = False - if prompt != prompt_prev or sd_options != sd_options_prev or seed != seed_prev: - run_baseline = True - prompt_prev = prompt - sd_options_prev = sd_options - seed_prev = seed - - if run_baseline: - # register_free_upblock2d(pip, b1=1.0, b2=1.0, s1=1.0, s2=1.0) - register_free_crossattn_upblock2d(pip, b1=1.0, b2=1.0, s1=1.0, s2=1.0) - - torch.manual_seed(seed) - print("Generating SD:") - sd_image = pip(prompt).images[0] - sd_image_prev = sd_image - else: - sd_image = sd_image_prev - - - # register_free_upblock2d(pip, b1=b1, b2=b2, s1=s1, s2=s1) - register_free_crossattn_upblock2d(pip, b1=b1, b2=b2, s1=s1, s2=s1) - - torch.manual_seed(seed) - print("Generating FreeU:") - freeu_image = pip(prompt).images[0] - - # First SD, then freeu - images = [sd_image, freeu_image] - - gr_user_history.save_image(label=prompt + ' (SD)', image=sd_image, profile=profile, metadata={"prompt": prompt, "pipe": sd_options, "b1": 1.0, "b2": 1.0, "s1": 1.0, "s2": 1.0}) - gr_user_history.save_image(label=prompt + ' (FreeU)', image=freeu_image, profile=profile, metadata={"prompt": prompt, "pipe": "freeu", "b1": b1, "b2": b2, "s1": s1, "s2": s2}) - - return images - - -examples = [ - [ - "RAW photo, subject, 8k uhd, dslr, soft lighting, high quality, clearly face, a futuristic visage with cybernetic enhancements seamlessly integrated into human features", - ], - [ - "Sculpt a life-sized animal using discarded plastic bottles and metal scraps, highlighting it's beauty, highly detailed, 8k", - ], - [ - "A robot standing in the rain reading newspaper, rusty and worn down, in a dystopian cyberpunk street, photo-realistic , urbanpunk", - ], - [ - "an outdoor full size sculpture using discarded car parts, highlighting it's beauty, highly detailed, 8k", - ], - [ - "1955, moon landing, sci-fi, 8k, photorealistic, no atmosphere, earth in the sky, terraforming, style by Dean ellis", - ], - [ - "a futuristic home , spaceship design,beautiful interior , high end design", - ], - [ - "Hypnotic Maze, Fantasy Castle, Challenging Maze, Impossible Geometry, Mc Escher, Surreal Photography Within A Glass Sphere, Diorama, Beautiful Abundance, Medieval detailing , Digital Painting, Digital Illustration, Extreme Detail, Digital Art, 8k, Ultra Hd, Fantasy Art, Hyper 
Detailed, Hyperrealism, Elaborate, Vray, Unrea", - ], - [ - "photo of half life combine standing outside city 17, glossy robot, rainy, rtx, octane, unreal", - ], - [ - "new art : landscape into a Underground oasis in egypt. satara by johnny taylor, in the style of brushstroke-inmersive landscape, cinematic elegance, golden light, dark proportions, flowing brushwork, multilayered realism, --ar 61:128 --s 750 --v 5.2", - ], - [ - "A horse galloping on the ocean", - ], - [ - "a teddy bear walking in the snowstorm" - ], - [ - "Campfire at night in a snowy forest with starry sky in the background." - ], - [ - "a fantasy landscape, trending on artstation" - ], - [ - "An astronaut flying in space, 4k, high resolution." - ], - [ - "An astronaut is riding a horse in the space in a photorealistic style." - ], - [ - "Turtle swimming in ocean." - ], - [ - "A storm trooper vacuuming the beach." - ], - [ - "Fireworks." - ], - [ - "A fat rabbit wearing a purple robe walking through a fantasy landscape." - ], - [ - "A koala bear playing piano in the forest." - ], - [ - "An astronaut flying in space, 4k, high resolution." - ], - [ - "Flying through fantasy landscapes, 4k, high resolution." - ], - [ - "A small cabin on top of a snowy mountain in the style of Disney, artstation", - ], - [ - "half human half cat, a human cat hybrid", - ], - [ - "a drone flying over a snowy forest." - ], -] - - -css = """ -h1 { - text-align: center; -} - -#component-0 { - max-width: 730px; - margin: auto; -} -""" - -block = gr.Blocks(css='style.css') - -options = ['SD2.1'] - -with block: - gr.Markdown("# SD vs. FreeU") - with gr.Group(): - with gr.Row(elem_id="prompt-container").style(mobile_collapse=False, equal_height=True): - with gr.Column(): - text = gr.Textbox( - label="Enter your prompt", - show_label=False, - max_lines=1, - placeholder="Enter your prompt", - container=False, - ) - btn = gr.Button("Generate image", scale=0) - - with gr.Group(): - with gr.Row(): - with gr.Accordion('FreeU Parameters (feel free to adjust these parameters based on your prompt): ', open=False): - with gr.Row(): - sd_options = gr.Dropdown(["SD2.1", "SDXL"], label="SD options", value="SDXL", visible=True) - with gr.Row(): - b1 = gr.Slider(label='b1: backbone factor of the first stage block of decoder', - minimum=1, - maximum=2.0, - step=0.01, - value=1.3) - b2 = gr.Slider(label='b2: backbone factor of the second stage block of decoder', - minimum=1, - maximum=2.0, - step=0.01, - value=1.4) - with gr.Row(): - s1 = gr.Slider(label='s1: skip factor of the first stage block of decoder', - minimum=0, - maximum=1, - step=0.1, - value=0.9) - s2 = gr.Slider(label='s2: skip factor of the second stage block of decoder', - minimum=0, - maximum=1, - step=0.1, - value=0.2) - - seed = gr.Slider(label='seed', - minimum=0, - maximum=1000, - step=1, - value=42) - - with gr.Row(): - with gr.Group(): - # btn = gr.Button("Generate image", scale=0) - with gr.Row(): - with gr.Column() as c1: - image_1 = gr.Image(interactive=False) - image_1_label = gr.Markdown("SD") - - with gr.Group(): - # btn = gr.Button("Generate image", scale=0) - with gr.Row(): - with gr.Column() as c2: - image_2 = gr.Image(interactive=False) - image_2_label = gr.Markdown("FreeU") - - with gr.Group(): - with gr.Row(): - with gr.Accordion("Past generations", open=False): - gr_user_history.render() - - ex = gr.Examples(examples=examples, fn=infer, inputs=[text, sd_options, seed, b1, b2, s1, s2], outputs=[image_1, image_2], cache_examples=False) - ex.dataset.headers = [""] - - text.submit(infer, 
inputs=[text, sd_options, seed, b1, b2, s1, s2], outputs=[image_1, image_2]) - btn.click(infer, inputs=[text, sd_options, seed, b1, b2, s1, s2], outputs=[image_1, image_2]) - -block.launch() -# block.queue(default_enabled=False).launch(share=False) diff --git a/spaces/Chenyuwen/playground2/README.md b/spaces/Chenyuwen/playground2/README.md deleted file mode 100644 index 92dbf34270e7eb0675b4a50c93c8f5426805e269..0000000000000000000000000000000000000000 --- a/spaces/Chenyuwen/playground2/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Playground2 -emoji: 🚀 -colorFrom: green -colorTo: pink -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Crackedids/README/README.md b/spaces/Crackedids/README/README.md deleted file mode 100644 index fd51e8dc7118491136f79df36b33b548881bbe5d..0000000000000000000000000000000000000000 --- a/spaces/Crackedids/README/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: README -emoji: 📈 -colorFrom: green -colorTo: indigo -sdk: static -pinned: false ---- - -Edit this `README.md` markdown file to author your organization card 🔥 diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/PcxImagePlugin.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/PcxImagePlugin.py deleted file mode 100644 index f42c2456b4b6c90700267b43cbfd4033ecc1370d..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/PcxImagePlugin.py +++ /dev/null @@ -1,221 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# PCX file handling -# -# This format was originally used by ZSoft's popular PaintBrush -# program for the IBM PC. It is also supported by many MS-DOS and -# Windows applications, including the Windows PaintBrush program in -# Windows 3. -# -# history: -# 1995-09-01 fl Created -# 1996-05-20 fl Fixed RGB support -# 1997-01-03 fl Fixed 2-bit and 4-bit support -# 1999-02-03 fl Fixed 8-bit support (broken in 1.0b1) -# 1999-02-07 fl Added write support -# 2002-06-09 fl Made 2-bit and 4-bit support a bit more robust -# 2002-07-30 fl Seek from to current position, not beginning of file -# 2003-06-03 fl Extract DPI settings (info["dpi"]) -# -# Copyright (c) 1997-2003 by Secret Labs AB. -# Copyright (c) 1995-2003 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - -import io -import logging - -from . import Image, ImageFile, ImagePalette -from ._binary import i16le as i16 -from ._binary import o8 -from ._binary import o16le as o16 - -logger = logging.getLogger(__name__) - - -def _accept(prefix): - return prefix[0] == 10 and prefix[1] in [0, 2, 3, 5] - - -## -# Image plugin for Paintbrush images. 
- - -class PcxImageFile(ImageFile.ImageFile): - format = "PCX" - format_description = "Paintbrush" - - def _open(self): - # header - s = self.fp.read(128) - if not _accept(s): - msg = "not a PCX file" - raise SyntaxError(msg) - - # image - bbox = i16(s, 4), i16(s, 6), i16(s, 8) + 1, i16(s, 10) + 1 - if bbox[2] <= bbox[0] or bbox[3] <= bbox[1]: - msg = "bad PCX image size" - raise SyntaxError(msg) - logger.debug("BBox: %s %s %s %s", *bbox) - - # format - version = s[1] - bits = s[3] - planes = s[65] - provided_stride = i16(s, 66) - logger.debug( - "PCX version %s, bits %s, planes %s, stride %s", - version, - bits, - planes, - provided_stride, - ) - - self.info["dpi"] = i16(s, 12), i16(s, 14) - - if bits == 1 and planes == 1: - mode = rawmode = "1" - - elif bits == 1 and planes in (2, 4): - mode = "P" - rawmode = "P;%dL" % planes - self.palette = ImagePalette.raw("RGB", s[16:64]) - - elif version == 5 and bits == 8 and planes == 1: - mode = rawmode = "L" - # FIXME: hey, this doesn't work with the incremental loader !!! - self.fp.seek(-769, io.SEEK_END) - s = self.fp.read(769) - if len(s) == 769 and s[0] == 12: - # check if the palette is linear greyscale - for i in range(256): - if s[i * 3 + 1 : i * 3 + 4] != o8(i) * 3: - mode = rawmode = "P" - break - if mode == "P": - self.palette = ImagePalette.raw("RGB", s[1:]) - self.fp.seek(128) - - elif version == 5 and bits == 8 and planes == 3: - mode = "RGB" - rawmode = "RGB;L" - - else: - msg = "unknown PCX mode" - raise OSError(msg) - - self.mode = mode - self._size = bbox[2] - bbox[0], bbox[3] - bbox[1] - - # Don't trust the passed in stride. - # Calculate the approximate position for ourselves. - # CVE-2020-35653 - stride = (self._size[0] * bits + 7) // 8 - - # While the specification states that this must be even, - # not all images follow this - if provided_stride != stride: - stride += stride % 2 - - bbox = (0, 0) + self.size - logger.debug("size: %sx%s", *self.size) - - self.tile = [("pcx", bbox, self.fp.tell(), (rawmode, planes * stride))] - - -# -------------------------------------------------------------------- -# save PCX files - - -SAVE = { - # mode: (version, bits, planes, raw mode) - "1": (2, 1, 1, "1"), - "L": (5, 8, 1, "L"), - "P": (5, 8, 1, "P"), - "RGB": (5, 8, 3, "RGB;L"), -} - - -def _save(im, fp, filename): - try: - version, bits, planes, rawmode = SAVE[im.mode] - except KeyError as e: - msg = f"Cannot save {im.mode} images as PCX" - raise ValueError(msg) from e - - # bytes per plane - stride = (im.size[0] * bits + 7) // 8 - # stride should be even - stride += stride % 2 - # Stride needs to be kept in sync with the PcxEncode.c version. - # Ideally it should be passed in in the state, but the bytes value - # gets overwritten. - - logger.debug( - "PcxImagePlugin._save: xwidth: %d, bits: %d, stride: %d", - im.size[0], - bits, - stride, - ) - - # under windows, we could determine the current screen size with - # "Image.core.display_mode()[1]", but I think that's overkill... 
- - screen = im.size - - dpi = 100, 100 - - # PCX header - fp.write( - o8(10) - + o8(version) - + o8(1) - + o8(bits) - + o16(0) - + o16(0) - + o16(im.size[0] - 1) - + o16(im.size[1] - 1) - + o16(dpi[0]) - + o16(dpi[1]) - + b"\0" * 24 - + b"\xFF" * 24 - + b"\0" - + o8(planes) - + o16(stride) - + o16(1) - + o16(screen[0]) - + o16(screen[1]) - + b"\0" * 54 - ) - - assert fp.tell() == 128 - - ImageFile._save(im, fp, [("pcx", (0, 0) + im.size, 0, (rawmode, bits * planes))]) - - if im.mode == "P": - # colour palette - fp.write(o8(12)) - palette = im.im.getpalette("RGB", "RGB") - palette += b"\x00" * (768 - len(palette)) - fp.write(palette) # 768 bytes - elif im.mode == "L": - # greyscale palette - fp.write(o8(12)) - for i in range(256): - fp.write(o8(i) * 3) - - -# -------------------------------------------------------------------- -# registry - - -Image.register_open(PcxImageFile.format, PcxImageFile, _accept) -Image.register_save(PcxImageFile.format, _save) - -Image.register_extension(PcxImageFile.format, ".pcx") - -Image.register_mime(PcxImageFile.format, "image/x-pcx") diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/utils/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/utils/__init__.py deleted file mode 100644 index a3e6208634fafa416b9323f5156ac56dd7bb3700..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/utils/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -from .semver_match import ( - ThemeAsset, - get_matching_version, - get_theme_assets, -) - -__all__ = [ - "ThemeAsset", - "get_theme_assets", - "get_matching_version", -] diff --git a/spaces/Datasculptor/DescriptionGPT/detic/modeling/text/text_encoder.py b/spaces/Datasculptor/DescriptionGPT/detic/modeling/text/text_encoder.py deleted file mode 100644 index 3ec5090c290ee5ecf1dd49915b70d6b4cc2b84d9..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/DescriptionGPT/detic/modeling/text/text_encoder.py +++ /dev/null @@ -1,189 +0,0 @@ -# This code is modified from https://github.com/openai/CLIP/blob/main/clip/clip.py -# Modified by Xingyi Zhou -# The original code is under MIT license -# Copyright (c) Facebook, Inc. and its affiliates. 
-from typing import Union, List -from collections import OrderedDict -import torch -from torch import nn -import torch - -from clip.simple_tokenizer import SimpleTokenizer as _Tokenizer - -__all__ = ["tokenize"] - -count = 0 - -class LayerNorm(nn.LayerNorm): - """Subclass torch's LayerNorm to handle fp16.""" - - def forward(self, x: torch.Tensor): - orig_type = x.dtype - ret = super().forward(x.type(torch.float32)) - return ret.type(orig_type) - - -class QuickGELU(nn.Module): - def forward(self, x: torch.Tensor): - return x * torch.sigmoid(1.702 * x) - - -class ResidualAttentionBlock(nn.Module): - def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None): - super().__init__() - - self.attn = nn.MultiheadAttention(d_model, n_head) - self.ln_1 = LayerNorm(d_model) - self.mlp = nn.Sequential(OrderedDict([ - ("c_fc", nn.Linear(d_model, d_model * 4)), - ("gelu", QuickGELU()), - ("c_proj", nn.Linear(d_model * 4, d_model)) - ])) - self.ln_2 = LayerNorm(d_model) - self.attn_mask = attn_mask - - def attention(self, x: torch.Tensor): - self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None - return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0] - - def forward(self, x: torch.Tensor): - x = x + self.attention(self.ln_1(x)) - x = x + self.mlp(self.ln_2(x)) - return x - - -class Transformer(nn.Module): - def __init__(self, width: int, layers: int, heads: int, attn_mask: torch.Tensor = None): - super().__init__() - self.width = width - self.layers = layers - self.resblocks = nn.Sequential( - *[ResidualAttentionBlock(width, heads, attn_mask) \ - for _ in range(layers)]) - - def forward(self, x: torch.Tensor): - return self.resblocks(x) - -class CLIPTEXT(nn.Module): - def __init__(self, - embed_dim=512, - # text - context_length=77, - vocab_size=49408, - transformer_width=512, - transformer_heads=8, - transformer_layers=12 - ): - super().__init__() - - self._tokenizer = _Tokenizer() - self.context_length = context_length - - self.transformer = Transformer( - width=transformer_width, - layers=transformer_layers, - heads=transformer_heads, - attn_mask=self.build_attention_mask() - ) - - self.vocab_size = vocab_size - self.token_embedding = nn.Embedding(vocab_size, transformer_width) - self.positional_embedding = nn.Parameter(torch.empty(self.context_length, transformer_width)) - self.ln_final = LayerNorm(transformer_width) - - self.text_projection = nn.Parameter(torch.empty(transformer_width, embed_dim)) - # self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0.07)) - - self.initialize_parameters() - - def initialize_parameters(self): - nn.init.normal_(self.token_embedding.weight, std=0.02) - nn.init.normal_(self.positional_embedding, std=0.01) - - proj_std = (self.transformer.width ** -0.5) * ((2 * self.transformer.layers) ** -0.5) - attn_std = self.transformer.width ** -0.5 - fc_std = (2 * self.transformer.width) ** -0.5 - for block in self.transformer.resblocks: - nn.init.normal_(block.attn.in_proj_weight, std=attn_std) - nn.init.normal_(block.attn.out_proj.weight, std=proj_std) - nn.init.normal_(block.mlp.c_fc.weight, std=fc_std) - nn.init.normal_(block.mlp.c_proj.weight, std=proj_std) - - if self.text_projection is not None: - nn.init.normal_(self.text_projection, std=self.transformer.width ** -0.5) - - def build_attention_mask(self): - # lazily create causal attention mask, with full attention between the vision tokens - # pytorch uses additive attention mask; fill with -inf - mask = 
torch.empty(self.context_length, self.context_length) - mask.fill_(float("-inf")) - mask.triu_(1) # zero out the lower diagonal - return mask - - @property - def device(self): - return self.text_projection.device - - @property - def dtype(self): - return self.text_projection.dtype - - def tokenize(self, - texts: Union[str, List[str]], \ - context_length: int = 77) -> torch.LongTensor: - """ - """ - if isinstance(texts, str): - texts = [texts] - - sot_token = self._tokenizer.encoder["<|startoftext|>"] - eot_token = self._tokenizer.encoder["<|endoftext|>"] - all_tokens = [[sot_token] + self._tokenizer.encode(text) + [eot_token] for text in texts] - result = torch.zeros(len(all_tokens), context_length, dtype=torch.long) - - for i, tokens in enumerate(all_tokens): - if len(tokens) > context_length: - st = torch.randint( - len(tokens) - context_length + 1, (1,))[0].item() - tokens = tokens[st: st + context_length] - # raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}") - result[i, :len(tokens)] = torch.tensor(tokens) - - return result - - def encode_text(self, text): - x = self.token_embedding(text).type(self.dtype) # [batch_size, n_ctx, d_model] - x = x + self.positional_embedding.type(self.dtype) - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x) - x = x.permute(1, 0, 2) # LND -> NLD - x = self.ln_final(x).type(self.dtype) - # take features from the eot embedding (eot_token is the highest number in each sequence) - x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection - return x - - def forward(self, captions): - ''' - captions: list of strings - ''' - text = self.tokenize(captions).to(self.device) # B x L x D - features = self.encode_text(text) # B x D - return features - - -def build_text_encoder(pretrain=True): - text_encoder = CLIPTEXT() - if pretrain: - import clip - pretrained_model, _ = clip.load("ViT-B/32", device='cpu') - state_dict = pretrained_model.state_dict() - to_delete_keys = ["logit_scale", "input_resolution", \ - "context_length", "vocab_size"] + \ - [k for k in state_dict.keys() if k.startswith('visual.')] - for k in to_delete_keys: - if k in state_dict: - del state_dict[k] - print('Loading pretrained CLIP') - text_encoder.load_state_dict(state_dict) - # import pdb; pdb.set_trace() - return text_encoder \ No newline at end of file diff --git a/spaces/DeclK/pose/model_zoo/rtmpose/rtmpose-t_8xb256-420e_aic-coco-256x192/rtmpose-t_8xb256-420e_aic-coco-256x192.py b/spaces/DeclK/pose/model_zoo/rtmpose/rtmpose-t_8xb256-420e_aic-coco-256x192/rtmpose-t_8xb256-420e_aic-coco-256x192.py deleted file mode 100644 index a270bb4e93924bd516219b80e500fc85e34b3cb9..0000000000000000000000000000000000000000 --- a/spaces/DeclK/pose/model_zoo/rtmpose/rtmpose-t_8xb256-420e_aic-coco-256x192/rtmpose-t_8xb256-420e_aic-coco-256x192.py +++ /dev/null @@ -1,385 +0,0 @@ -default_scope = 'mmpose' -default_hooks = dict( - timer=dict(type='IterTimerHook'), - logger=dict(type='LoggerHook', interval=50), - param_scheduler=dict(type='ParamSchedulerHook'), - checkpoint=dict( - type='CheckpointHook', - interval=10, - save_best='coco/AP', - rule='greater', - max_keep_ckpts=1), - sampler_seed=dict(type='DistSamplerSeedHook'), - visualization=dict(type='PoseVisualizationHook', enable=False)) -custom_hooks = [ - dict( - type='mmdet.PipelineSwitchHook', - switch_epoch=390, - switch_pipeline=[ - dict(type='LoadImage', backend_args=dict(backend='local')), - dict(type='GetBBoxCenterScale'), - dict(type='RandomFlip', direction='horizontal'), - 
dict(type='RandomHalfBody'), - dict( - type='RandomBBoxTransform', - shift_factor=0.0, - scale_factor=[0.75, 1.25], - rotate_factor=60), - dict(type='TopdownAffine', input_size=(192, 256)), - dict(type='mmdet.YOLOXHSVRandomAug'), - dict( - type='Albumentation', - transforms=[ - dict(type='Blur', p=0.1), - dict(type='MedianBlur', p=0.1), - dict( - type='CoarseDropout', - max_holes=1, - max_height=0.4, - max_width=0.4, - min_holes=1, - min_height=0.2, - min_width=0.2, - p=0.5) - ]), - dict( - type='GenerateTarget', - encoder=dict( - type='SimCCLabel', - input_size=(192, 256), - sigma=(4.9, 5.66), - simcc_split_ratio=2.0, - normalize=False, - use_dark=False)), - dict(type='PackPoseInputs') - ]) -] -env_cfg = dict( - cudnn_benchmark=False, - mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0), - dist_cfg=dict(backend='nccl')) -vis_backends = [dict(type='LocalVisBackend')] -visualizer = dict( - type='PoseLocalVisualizer', - vis_backends=[dict(type='LocalVisBackend')], - name='visualizer') -log_processor = dict( - type='LogProcessor', window_size=50, by_epoch=True, num_digits=6) -log_level = 'INFO' -load_from = None -resume = False -backend_args = dict(backend='local') -train_cfg = dict(by_epoch=True, max_epochs=420, val_interval=10) -val_cfg = dict() -test_cfg = dict() -max_epochs = 420 -stage2_num_epochs = 30 -base_lr = 0.004 -randomness = dict(seed=21) -optim_wrapper = dict( - type='OptimWrapper', - optimizer=dict(type='AdamW', lr=0.004, weight_decay=0.0), - paramwise_cfg=dict( - norm_decay_mult=0, bias_decay_mult=0, bypass_duplicate=True)) -param_scheduler = [ - dict( - type='LinearLR', start_factor=1e-05, by_epoch=False, begin=0, - end=1000), - dict( - type='CosineAnnealingLR', - eta_min=0.0002, - begin=210, - end=420, - T_max=210, - by_epoch=True, - convert_to_iter_based=True) -] -auto_scale_lr = dict(base_batch_size=1024) -codec = dict( - type='SimCCLabel', - input_size=(192, 256), - sigma=(4.9, 5.66), - simcc_split_ratio=2.0, - normalize=False, - use_dark=False) -model = dict( - type='TopdownPoseEstimator', - data_preprocessor=dict( - type='PoseDataPreprocessor', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - bgr_to_rgb=True), - backbone=dict( - _scope_='mmdet', - type='CSPNeXt', - arch='P5', - expand_ratio=0.5, - deepen_factor=0.167, - widen_factor=0.375, - out_indices=(4, ), - channel_attention=True, - norm_cfg=dict(type='SyncBN'), - act_cfg=dict(type='SiLU'), - init_cfg=dict( - type='Pretrained', - prefix='backbone.', - checkpoint= - 'https://download.openmmlab.com/mmpose/v1/projects/rtmpose/cspnext-tiny_udp-aic-coco_210e-256x192-cbed682d_20230130.pth' - )), - head=dict( - type='RTMCCHead', - in_channels=384, - out_channels=17, - input_size=(192, 256), - in_featuremap_size=(6, 8), - simcc_split_ratio=2.0, - final_layer_kernel_size=7, - gau_cfg=dict( - hidden_dims=256, - s=128, - expansion_factor=2, - dropout_rate=0.0, - drop_path=0.0, - act_fn='SiLU', - use_rel_bias=False, - pos_enc=False), - loss=dict( - type='KLDiscretLoss', - use_target_weight=True, - beta=10.0, - label_softmax=True), - decoder=dict( - type='SimCCLabel', - input_size=(192, 256), - sigma=(4.9, 5.66), - simcc_split_ratio=2.0, - normalize=False, - use_dark=False)), - test_cfg=dict(flip_test=True)) -dataset_type = 'CocoDataset' -data_mode = 'topdown' -data_root = 'data/' -train_pipeline = [ - dict(type='LoadImage', backend_args=dict(backend='local')), - dict(type='GetBBoxCenterScale'), - dict(type='RandomFlip', direction='horizontal'), - dict(type='RandomHalfBody'), - dict( - 
type='RandomBBoxTransform', scale_factor=[0.6, 1.4], rotate_factor=80), - dict(type='TopdownAffine', input_size=(192, 256)), - dict(type='mmdet.YOLOXHSVRandomAug'), - dict( - type='Albumentation', - transforms=[ - dict(type='Blur', p=0.1), - dict(type='MedianBlur', p=0.1), - dict( - type='CoarseDropout', - max_holes=1, - max_height=0.4, - max_width=0.4, - min_holes=1, - min_height=0.2, - min_width=0.2, - p=1.0) - ]), - dict( - type='GenerateTarget', - encoder=dict( - type='SimCCLabel', - input_size=(192, 256), - sigma=(4.9, 5.66), - simcc_split_ratio=2.0, - normalize=False, - use_dark=False)), - dict(type='PackPoseInputs') -] -val_pipeline = [ - dict(type='LoadImage', backend_args=dict(backend='local')), - dict(type='GetBBoxCenterScale'), - dict(type='TopdownAffine', input_size=(192, 256)), - dict(type='PackPoseInputs') -] -train_pipeline_stage2 = [ - dict(type='LoadImage', backend_args=dict(backend='local')), - dict(type='GetBBoxCenterScale'), - dict(type='RandomFlip', direction='horizontal'), - dict(type='RandomHalfBody'), - dict( - type='RandomBBoxTransform', - shift_factor=0.0, - scale_factor=[0.75, 1.25], - rotate_factor=60), - dict(type='TopdownAffine', input_size=(192, 256)), - dict(type='mmdet.YOLOXHSVRandomAug'), - dict( - type='Albumentation', - transforms=[ - dict(type='Blur', p=0.1), - dict(type='MedianBlur', p=0.1), - dict( - type='CoarseDropout', - max_holes=1, - max_height=0.4, - max_width=0.4, - min_holes=1, - min_height=0.2, - min_width=0.2, - p=0.5) - ]), - dict( - type='GenerateTarget', - encoder=dict( - type='SimCCLabel', - input_size=(192, 256), - sigma=(4.9, 5.66), - simcc_split_ratio=2.0, - normalize=False, - use_dark=False)), - dict(type='PackPoseInputs') -] -dataset_coco = dict( - type='RepeatDataset', - dataset=dict( - type='CocoDataset', - data_root='data/', - data_mode='topdown', - ann_file='coco/annotations/person_keypoints_train2017.json', - data_prefix=dict(img='detection/coco/train2017/'), - pipeline=[]), - times=3) -dataset_aic = dict( - type='AicDataset', - data_root='data/', - data_mode='topdown', - ann_file='aic/annotations/aic_train.json', - data_prefix=dict( - img= - 'pose/ai_challenge/ai_challenger_keypoint_train_20170902/keypoint_train_images_20170902/' - ), - pipeline=[ - dict( - type='KeypointConverter', - num_keypoints=17, - mapping=[(0, 6), (1, 8), (2, 10), (3, 5), (4, 7), (5, 9), (6, 12), - (7, 14), (8, 16), (9, 11), (10, 13), (11, 15)]) - ]) -train_dataloader = dict( - batch_size=256, - num_workers=10, - persistent_workers=True, - sampler=dict(type='DefaultSampler', shuffle=True), - dataset=dict( - type='CombinedDataset', - metainfo=dict(from_file='configs/_base_/datasets/coco.py'), - datasets=[ - dict( - type='RepeatDataset', - dataset=dict( - type='CocoDataset', - data_root='data/', - data_mode='topdown', - ann_file='coco/annotations/person_keypoints_train2017.json', - data_prefix=dict(img='detection/coco/train2017/'), - pipeline=[]), - times=3), - dict( - type='AicDataset', - data_root='data/', - data_mode='topdown', - ann_file='aic/annotations/aic_train.json', - data_prefix=dict( - img= - 'pose/ai_challenge/ai_challenger_keypoint_train_20170902/keypoint_train_images_20170902/' - ), - pipeline=[ - dict( - type='KeypointConverter', - num_keypoints=17, - mapping=[(0, 6), (1, 8), (2, 10), (3, 5), (4, 7), - (5, 9), (6, 12), (7, 14), (8, 16), (9, 11), - (10, 13), (11, 15)]) - ]) - ], - pipeline=[ - dict(type='LoadImage', backend_args=dict(backend='local')), - dict(type='GetBBoxCenterScale'), - dict(type='RandomFlip', direction='horizontal'), - 
dict(type='RandomHalfBody'), - dict( - type='RandomBBoxTransform', - scale_factor=[0.6, 1.4], - rotate_factor=80), - dict(type='TopdownAffine', input_size=(192, 256)), - dict(type='mmdet.YOLOXHSVRandomAug'), - dict( - type='Albumentation', - transforms=[ - dict(type='Blur', p=0.1), - dict(type='MedianBlur', p=0.1), - dict( - type='CoarseDropout', - max_holes=1, - max_height=0.4, - max_width=0.4, - min_holes=1, - min_height=0.2, - min_width=0.2, - p=1.0) - ]), - dict( - type='GenerateTarget', - encoder=dict( - type='SimCCLabel', - input_size=(192, 256), - sigma=(4.9, 5.66), - simcc_split_ratio=2.0, - normalize=False, - use_dark=False)), - dict(type='PackPoseInputs') - ], - test_mode=False)) -val_dataloader = dict( - batch_size=64, - num_workers=10, - persistent_workers=True, - drop_last=False, - sampler=dict(type='DefaultSampler', shuffle=False, round_up=False), - dataset=dict( - type='CocoDataset', - data_root='data/', - data_mode='topdown', - ann_file='coco/annotations/person_keypoints_val2017.json', - data_prefix=dict(img='detection/coco/val2017/'), - test_mode=True, - pipeline=[ - dict(type='LoadImage', backend_args=dict(backend='local')), - dict(type='GetBBoxCenterScale'), - dict(type='TopdownAffine', input_size=(192, 256)), - dict(type='PackPoseInputs') - ])) -test_dataloader = dict( - batch_size=64, - num_workers=10, - persistent_workers=True, - drop_last=False, - sampler=dict(type='DefaultSampler', shuffle=False, round_up=False), - dataset=dict( - type='CocoDataset', - data_root='data/', - data_mode='topdown', - ann_file='coco/annotations/person_keypoints_val2017.json', - data_prefix=dict(img='detection/coco/val2017/'), - test_mode=True, - pipeline=[ - dict(type='LoadImage', backend_args=dict(backend='local')), - dict(type='GetBBoxCenterScale'), - dict(type='TopdownAffine', input_size=(192, 256)), - dict(type='PackPoseInputs') - ])) -val_evaluator = dict( - type='CocoMetric', - ann_file='data/coco/annotations/person_keypoints_val2017.json') -test_evaluator = dict( - type='CocoMetric', - ann_file='data/coco/annotations/person_keypoints_val2017.json') diff --git a/spaces/ElainaFanBoy/MusicGen/app_batched.py b/spaces/ElainaFanBoy/MusicGen/app_batched.py deleted file mode 100644 index 0d2a4b526e4b8ef94034a1c661a4fa68816c285a..0000000000000000000000000000000000000000 --- a/spaces/ElainaFanBoy/MusicGen/app_batched.py +++ /dev/null @@ -1,222 +0,0 @@ -""" -Copyright (c) Meta Platforms, Inc. and affiliates. -All rights reserved. - -This source code is licensed under the license found in the -LICENSE file in the root directory of this source tree. -""" - -import argparse -from concurrent.futures import ProcessPoolExecutor -import subprocess as sp -from tempfile import NamedTemporaryFile -import time -import warnings -import torch -import gradio as gr -from audiocraft.data.audio_utils import convert_audio -from audiocraft.data.audio import audio_write -from audiocraft.models import MusicGen - - -MODEL = None - -_old_call = sp.call - - -def _call_nostderr(*args, **kwargs): - # Avoid ffmpeg vomitting on the logs. 
- kwargs['stderr'] = sp.DEVNULL - kwargs['stdout'] = sp.DEVNULL - _old_call(*args, **kwargs) - - -sp.call = _call_nostderr -pool = ProcessPoolExecutor(3) -pool.__enter__() - - -def make_waveform(*args, **kwargs): - be = time.time() - with warnings.catch_warnings(): - warnings.simplefilter('ignore') - out = gr.make_waveform(*args, **kwargs) - print("Make a video took", time.time() - be) - return out - - -def load_model(): - print("Loading model") - return MusicGen.get_pretrained("melody") - - -def predict(texts, melodies): - global MODEL - if MODEL is None: - MODEL = load_model() - - duration = 12 - max_text_length = 512 - texts = [text[:max_text_length] for text in texts] - MODEL.set_generation_params(duration=duration) - - print("new batch", len(texts), texts, [None if m is None else (m[0], m[1].shape) for m in melodies]) - be = time.time() - processed_melodies = [] - target_sr = 32000 - target_ac = 1 - for melody in melodies: - if melody is None: - processed_melodies.append(None) - else: - sr, melody = melody[0], torch.from_numpy(melody[1]).to(MODEL.device).float().t() - if melody.dim() == 1: - melody = melody[None] - melody = melody[..., :int(sr * duration)] - melody = convert_audio(melody, sr, target_sr, target_ac) - processed_melodies.append(melody) - - outputs = MODEL.generate_with_chroma( - descriptions=texts, - melody_wavs=processed_melodies, - melody_sample_rate=target_sr, - progress=False - ) - - outputs = outputs.detach().cpu().float() - out_files = [] - for output in outputs: - with NamedTemporaryFile("wb", suffix=".wav", delete=False) as file: - audio_write( - file.name, output, MODEL.sample_rate, strategy="loudness", - loudness_headroom_db=16, loudness_compressor=True, add_suffix=False) - out_files.append(pool.submit(make_waveform, file.name)) - res = [[out_file.result() for out_file in out_files]] - print("batch finished", len(texts), time.time() - be) - return res - - -def ui(**kwargs): - with gr.Blocks() as demo: - gr.Markdown( - """ - # MusicGen - - This is the demo for [MusicGen](https://github.com/facebookresearch/audiocraft), a simple and controllable model for music generation - presented at: ["Simple and Controllable Music Generation"](https://huggingface.co/papers/2306.05284). -
    -
    - Duplicate this Space for longer sequences, more control and no queue.
    - """ - ) - with gr.Row(): - with gr.Column(): - with gr.Row(): - text = gr.Text(label="Describe your music", lines=2, interactive=True) - melody = gr.Audio(source="upload", type="numpy", label="Condition on a melody (optional)", interactive=True) - with gr.Row(): - submit = gr.Button("Generate") - with gr.Column(): - output = gr.Video(label="Generated Music") - submit.click(predict, inputs=[text, melody], outputs=[output], batch=True, max_batch_size=8) - gr.Examples( - fn=predict, - examples=[ - [ - "An 80s driving pop song with heavy drums and synth pads in the background", - "./assets/bach.mp3", - ], - [ - "A cheerful country song with acoustic guitars", - "./assets/bolero_ravel.mp3", - ], - [ - "90s rock song with electric guitar and heavy drums", - None, - ], - [ - "a light and cheerly EDM track, with syncopated drums, aery pads, and strong emotions bpm: 130", - "./assets/bach.mp3", - ], - [ - "lofi slow bpm electro chill with organic samples", - None, - ], - ], - inputs=[text, melody], - outputs=[output] - ) - gr.Markdown(""" - ### More details - - The model will generate 12 seconds of audio based on the description you provided. - You can optionaly provide a reference audio from which a broad melody will be extracted. - The model will then try to follow both the description and melody provided. - All samples are generated with the `melody` model. - - You can also use your own GPU or a Google Colab by following the instructions on our repo. - - See [github.com/facebookresearch/audiocraft](https://github.com/facebookresearch/audiocraft) - for more details. - """) - - # Show the interface - launch_kwargs = {} - username = kwargs.get('username') - password = kwargs.get('password') - server_port = kwargs.get('server_port', 0) - inbrowser = kwargs.get('inbrowser', False) - share = kwargs.get('share', False) - server_name = kwargs.get('listen') - - launch_kwargs['server_name'] = server_name - - if username and password: - launch_kwargs['auth'] = (username, password) - if server_port > 0: - launch_kwargs['server_port'] = server_port - if inbrowser: - launch_kwargs['inbrowser'] = inbrowser - if share: - launch_kwargs['share'] = share - demo.queue(max_size=8 * 4).launch(**launch_kwargs) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument( - '--listen', - type=str, - default='0.0.0.0', - help='IP to listen on for connections to Gradio', - ) - parser.add_argument( - '--username', type=str, default='', help='Username for authentication' - ) - parser.add_argument( - '--password', type=str, default='', help='Password for authentication' - ) - parser.add_argument( - '--server_port', - type=int, - default=0, - help='Port to run the server listener on', - ) - parser.add_argument( - '--inbrowser', action='store_true', help='Open in browser' - ) - parser.add_argument( - '--share', action='store_true', help='Share the gradio UI' - ) - - args = parser.parse_args() - - ui( - username=args.username, - password=args.password, - inbrowser=args.inbrowser, - server_port=args.server_port, - share=args.share, - listen=args.listen - ) diff --git a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/scripts/make_samples.py b/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/scripts/make_samples.py deleted file mode 100644 index 5e4d6995cd41cc07b4e8861cb941c6052b0f5517..0000000000000000000000000000000000000000 --- a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/scripts/make_samples.py +++ /dev/null @@ -1,292 +0,0 @@ -import argparse, os, sys, glob, math, time -import torch 
-import numpy as np -from omegaconf import OmegaConf -from PIL import Image -from main import instantiate_from_config, DataModuleFromConfig -from torch.utils.data import DataLoader -from torch.utils.data.dataloader import default_collate -from tqdm import trange - - -def save_image(x, path): - c,h,w = x.shape - assert c==3 - x = ((x.detach().cpu().numpy().transpose(1,2,0)+1.0)*127.5).clip(0,255).astype(np.uint8) - Image.fromarray(x).save(path) - - -@torch.no_grad() -def run_conditional(model, dsets, outdir, top_k, temperature, batch_size=1): - if len(dsets.datasets) > 1: - split = sorted(dsets.datasets.keys())[0] - dset = dsets.datasets[split] - else: - dset = next(iter(dsets.datasets.values())) - print("Dataset: ", dset.__class__.__name__) - for start_idx in trange(0,len(dset)-batch_size+1,batch_size): - indices = list(range(start_idx, start_idx+batch_size)) - example = default_collate([dset[i] for i in indices]) - - x = model.get_input("image", example).to(model.device) - for i in range(x.shape[0]): - save_image(x[i], os.path.join(outdir, "originals", - "{:06}.png".format(indices[i]))) - - cond_key = model.cond_stage_key - c = model.get_input(cond_key, example).to(model.device) - - scale_factor = 1.0 - quant_z, z_indices = model.encode_to_z(x) - quant_c, c_indices = model.encode_to_c(c) - - cshape = quant_z.shape - - xrec = model.first_stage_model.decode(quant_z) - for i in range(xrec.shape[0]): - save_image(xrec[i], os.path.join(outdir, "reconstructions", - "{:06}.png".format(indices[i]))) - - if cond_key == "segmentation": - # get image from segmentation mask - num_classes = c.shape[1] - c = torch.argmax(c, dim=1, keepdim=True) - c = torch.nn.functional.one_hot(c, num_classes=num_classes) - c = c.squeeze(1).permute(0, 3, 1, 2).float() - c = model.cond_stage_model.to_rgb(c) - - idx = z_indices - - half_sample = False - if half_sample: - start = idx.shape[1]//2 - else: - start = 0 - - idx[:,start:] = 0 - idx = idx.reshape(cshape[0],cshape[2],cshape[3]) - start_i = start//cshape[3] - start_j = start %cshape[3] - - cidx = c_indices - cidx = cidx.reshape(quant_c.shape[0],quant_c.shape[2],quant_c.shape[3]) - - sample = True - - for i in range(start_i,cshape[2]-0): - if i <= 8: - local_i = i - elif cshape[2]-i < 8: - local_i = 16-(cshape[2]-i) - else: - local_i = 8 - for j in range(start_j,cshape[3]-0): - if j <= 8: - local_j = j - elif cshape[3]-j < 8: - local_j = 16-(cshape[3]-j) - else: - local_j = 8 - - i_start = i-local_i - i_end = i_start+16 - j_start = j-local_j - j_end = j_start+16 - patch = idx[:,i_start:i_end,j_start:j_end] - patch = patch.reshape(patch.shape[0],-1) - cpatch = cidx[:, i_start:i_end, j_start:j_end] - cpatch = cpatch.reshape(cpatch.shape[0], -1) - patch = torch.cat((cpatch, patch), dim=1) - logits,_ = model.transformer(patch[:,:-1]) - logits = logits[:, -256:, :] - logits = logits.reshape(cshape[0],16,16,-1) - logits = logits[:,local_i,local_j,:] - - logits = logits/temperature - - if top_k is not None: - logits = model.top_k_logits(logits, top_k) - # apply softmax to convert to probabilities - probs = torch.nn.functional.softmax(logits, dim=-1) - # sample from the distribution or take the most likely - if sample: - ix = torch.multinomial(probs, num_samples=1) - else: - _, ix = torch.topk(probs, k=1, dim=-1) - idx[:,i,j] = ix - - xsample = model.decode_to_img(idx[:,:cshape[2],:cshape[3]], cshape) - for i in range(xsample.shape[0]): - save_image(xsample[i], os.path.join(outdir, "samples", - "{:06}.png".format(indices[i]))) - - -def get_parser(): - parser = 
argparse.ArgumentParser() - parser.add_argument( - "-r", - "--resume", - type=str, - nargs="?", - help="load from logdir or checkpoint in logdir", - ) - parser.add_argument( - "-b", - "--base", - nargs="*", - metavar="base_config.yaml", - help="paths to base configs. Loaded from left-to-right. " - "Parameters can be overwritten or added with command-line options of the form `--key value`.", - default=list(), - ) - parser.add_argument( - "-c", - "--config", - nargs="?", - metavar="single_config.yaml", - help="path to single config. If specified, base configs will be ignored " - "(except for the last one if left unspecified).", - const=True, - default="", - ) - parser.add_argument( - "--ignore_base_data", - action="store_true", - help="Ignore data specification from base configs. Useful if you want " - "to specify a custom datasets on the command line.", - ) - parser.add_argument( - "--outdir", - required=True, - type=str, - help="Where to write outputs to.", - ) - parser.add_argument( - "--top_k", - type=int, - default=100, - help="Sample from among top-k predictions.", - ) - parser.add_argument( - "--temperature", - type=float, - default=1.0, - help="Sampling temperature.", - ) - return parser - - -def load_model_from_config(config, sd, gpu=True, eval_mode=True): - if "ckpt_path" in config.params: - print("Deleting the restore-ckpt path from the config...") - config.params.ckpt_path = None - if "downsample_cond_size" in config.params: - print("Deleting downsample-cond-size from the config and setting factor=0.5 instead...") - config.params.downsample_cond_size = -1 - config.params["downsample_cond_factor"] = 0.5 - try: - if "ckpt_path" in config.params.first_stage_config.params: - config.params.first_stage_config.params.ckpt_path = None - print("Deleting the first-stage restore-ckpt path from the config...") - if "ckpt_path" in config.params.cond_stage_config.params: - config.params.cond_stage_config.params.ckpt_path = None - print("Deleting the cond-stage restore-ckpt path from the config...") - except: - pass - - model = instantiate_from_config(config) - if sd is not None: - missing, unexpected = model.load_state_dict(sd, strict=False) - print(f"Missing Keys in State Dict: {missing}") - print(f"Unexpected Keys in State Dict: {unexpected}") - if gpu: - model.cuda() - if eval_mode: - model.eval() - return {"model": model} - - -def get_data(config): - # get data - data = instantiate_from_config(config.data) - data.prepare_data() - data.setup() - return data - - -def load_model_and_dset(config, ckpt, gpu, eval_mode): - # get data - dsets = get_data(config) # calls data.config ... 
- - # now load the specified checkpoint - if ckpt: - pl_sd = torch.load(ckpt, map_location="cpu") - global_step = pl_sd["global_step"] - else: - pl_sd = {"state_dict": None} - global_step = None - model = load_model_from_config(config.model, - pl_sd["state_dict"], - gpu=gpu, - eval_mode=eval_mode)["model"] - return dsets, model, global_step - - -if __name__ == "__main__": - sys.path.append(os.getcwd()) - - parser = get_parser() - - opt, unknown = parser.parse_known_args() - - ckpt = None - if opt.resume: - if not os.path.exists(opt.resume): - raise ValueError("Cannot find {}".format(opt.resume)) - if os.path.isfile(opt.resume): - paths = opt.resume.split("/") - try: - idx = len(paths)-paths[::-1].index("logs")+1 - except ValueError: - idx = -2 # take a guess: path/to/logdir/checkpoints/model.ckpt - logdir = "/".join(paths[:idx]) - ckpt = opt.resume - else: - assert os.path.isdir(opt.resume), opt.resume - logdir = opt.resume.rstrip("/") - ckpt = os.path.join(logdir, "checkpoints", "last.ckpt") - print(f"logdir:{logdir}") - base_configs = sorted(glob.glob(os.path.join(logdir, "configs/*-project.yaml"))) - opt.base = base_configs+opt.base - - if opt.config: - if type(opt.config) == str: - opt.base = [opt.config] - else: - opt.base = [opt.base[-1]] - - configs = [OmegaConf.load(cfg) for cfg in opt.base] - cli = OmegaConf.from_dotlist(unknown) - if opt.ignore_base_data: - for config in configs: - if hasattr(config, "data"): del config["data"] - config = OmegaConf.merge(*configs, cli) - - print(ckpt) - gpu = True - eval_mode = True - show_config = False - if show_config: - print(OmegaConf.to_container(config)) - - dsets, model, global_step = load_model_and_dset(config, ckpt, gpu, eval_mode) - print(f"Global step: {global_step}") - - outdir = os.path.join(opt.outdir, "{:06}_{}_{}".format(global_step, - opt.top_k, - opt.temperature)) - os.makedirs(outdir, exist_ok=True) - print("Writing samples to ", outdir) - for k in ["originals", "reconstructions", "samples"]: - os.makedirs(os.path.join(outdir, k), exist_ok=True) - run_conditional(model, dsets, outdir, opt.top_k, opt.temperature) diff --git a/spaces/EleutherAI/magma/app.py b/spaces/EleutherAI/magma/app.py deleted file mode 100644 index ffa979a3d9bc7c75d6492dd292cd71a830ec96ee..0000000000000000000000000000000000000000 --- a/spaces/EleutherAI/magma/app.py +++ /dev/null @@ -1,86 +0,0 @@ - -import os -os.system("pip install deepspeed") -os.system("pip freeze") - -import gradio as gr -import re -from magma import Magma -from magma.image_input import ImageInput - -from huggingface_hub import hf_hub_url, cached_download - -checkpoint_url = hf_hub_url(repo_id="osanseviero/magma", filename="model.pt") -checkpoint_path = cached_download(checkpoint_url) - -model = Magma.from_checkpoint( - config_path = "configs/MAGMA_v1.yml", - checkpoint_path = checkpoint_path, - device = 'cuda:0' -) - -def generate(image,context, length, temperature, top_k,rearrange): - # context = context.strip() - - # url_regex = r'https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()@:%_\+.~#?&//=]*)' - # lines = context.split('\n') - # inputs = [] - # for line in lines: - # if re.match(url_regex, line): - # try: - # inputs.append(ImageInput(line)) - # except Exception as e: - # return str(e) - # else: - # inputs.append(line) - if rearrange: - inputs =[ - ## supports urls and path/to/image - context, - ImageInput(image) - ] - else: - inputs =[ - ## supports urls and path/to/image - ImageInput(image), - context - ] - - ## returns a tensor of shape: (1, 
149, 4096) - embeddings = model.preprocess_inputs(inputs) - - ## returns a list of length embeddings.shape[0] (batch size) - output = model.generate( - embeddings = embeddings, - max_steps = length, - temperature = (0.01 if temperature == 0 else temperature), - top_k = top_k - ) - - return output[0] - -examples=[["woods_hi.jpeg","Describe the painting:",15,0.7,0,False], ["E8EB3C7B-291C-400A-81F2-AE9229D9CE23.jpeg", "Q: Is the person in the image older than 35?\nA: " , 15, 0.7, 0, False]] - -title="MAGMA" -description="Gradio Demo for MAGMA -- Multimodal Augmentation of Generative Models through Adapter-based Finetuning by Constantin Eichenberg, Sid Black, Samuel Weinbach, Letitia Parcalabescu, and Anette Frank
    arXiv | Github Repo" -article = "" -iface = gr.Interface( - fn=generate, - inputs=[ - gr.inputs.Image(type="filepath",label="Image Prompt"),gr.inputs.Textbox( - label="Text Prompt:", - default="Describe the painting:", - lines=7), - gr.inputs.Slider(minimum=1, maximum=100, default=15, step=1, label="Output tokens:"), - gr.inputs.Slider(minimum=0.0, maximum=1.0, default=0.7, label='Temperature'), - gr.inputs.Slider(minimum=0, maximum=100, default=0, step=1, label='Top K'), - gr.inputs.Checkbox(default=False, label="Rearrange Prompt", optional=False) - ], - outputs=["textbox"], - examples=examples, - title=title, - description=description, - article=article -).launch(enable_queue=True,cache_examples=True) - - diff --git a/spaces/EronSamez/RVC_HFmeu/infer/lib/uvr5_pack/lib_v5/nets_123812KB.py b/spaces/EronSamez/RVC_HFmeu/infer/lib/uvr5_pack/lib_v5/nets_123812KB.py deleted file mode 100644 index 167d4cb2198863cf43e93440f7e63c5342fc7605..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/infer/lib/uvr5_pack/lib_v5/nets_123812KB.py +++ /dev/null @@ -1,122 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import nn - -from . import layers_123821KB as layers - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 32) - self.stg1_high_band_net = BaseASPPNet(2, 32) - - self.stg2_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(16, 32) - - self.stg3_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(32, 64) - - self.out = nn.Conv2d(64, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(32, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(32, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = 
torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/EsoCode/text-generation-webui/modules/AutoGPTQ_loader.py b/spaces/EsoCode/text-generation-webui/modules/AutoGPTQ_loader.py deleted file mode 100644 index 0d41ac0a5589aff024569cb973a4b154477c5908..0000000000000000000000000000000000000000 --- a/spaces/EsoCode/text-generation-webui/modules/AutoGPTQ_loader.py +++ /dev/null @@ -1,71 +0,0 @@ -from pathlib import Path - -from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig - -import modules.shared as shared -from modules.logging_colors import logger -from modules.models import get_max_memory_dict - - -def load_quantized(model_name): - path_to_model = Path(f'{shared.args.model_dir}/{model_name}') - pt_path = None - - # Find the model checkpoint - if shared.args.checkpoint: - pt_path = Path(shared.args.checkpoint) - else: - for ext in ['.safetensors', '.pt', '.bin']: - found = list(path_to_model.glob(f"*{ext}")) - if len(found) > 0: - if len(found) > 1: - logger.warning(f'More than one {ext} model has been found. The last one will be selected. It could be wrong.') - - pt_path = found[-1] - break - - if pt_path is None: - logger.error("The model could not be loaded because its checkpoint file in .bin/.pt/.safetensors format could not be located.") - return - - use_safetensors = pt_path.suffix == '.safetensors' - if not (path_to_model / "quantize_config.json").exists(): - quantize_config = BaseQuantizeConfig( - bits=bits if (bits := shared.args.wbits) > 0 else 4, - group_size=gs if (gs := shared.args.groupsize) > 0 else -1, - desc_act=shared.args.desc_act - ) - else: - quantize_config = None - - # Define the params for AutoGPTQForCausalLM.from_quantized - params = { - 'model_basename': pt_path.stem, - 'device': "cuda:0" if not shared.args.cpu else "cpu", - 'use_triton': shared.args.triton, - 'inject_fused_attention': not shared.args.no_inject_fused_attention, - 'inject_fused_mlp': not shared.args.no_inject_fused_mlp, - 'use_safetensors': use_safetensors, - 'trust_remote_code': shared.args.trust_remote_code, - 'max_memory': get_max_memory_dict(), - 'quantize_config': quantize_config, - 'use_cuda_fp16': not shared.args.no_use_cuda_fp16, - } - - logger.info(f"The AutoGPTQ params are: {params}") - model = AutoGPTQForCausalLM.from_quantized(path_to_model, **params) - - # These lines fix the multimodal extension when used with AutoGPTQ - if hasattr(model, 'model'): - if not hasattr(model, 'dtype'): - if hasattr(model.model, 'dtype'): - model.dtype = model.model.dtype - - if hasattr(model.model, 'model') and hasattr(model.model.model, 'embed_tokens'): - if not hasattr(model, 'embed_tokens'): - model.embed_tokens = model.model.model.embed_tokens - - if not hasattr(model.model, 'embed_tokens'): - model.model.embed_tokens = model.model.model.embed_tokens - - return model diff --git 
a/spaces/FL33TW00D/whisper-turbo/_next/static/chunks/639-7bf6be9a90be8cdb.js b/spaces/FL33TW00D/whisper-turbo/_next/static/chunks/639-7bf6be9a90be8cdb.js deleted file mode 100644 index 512d7e18a8c17a4d33c3e5560d8ad69e641da70d..0000000000000000000000000000000000000000 --- a/spaces/FL33TW00D/whisper-turbo/_next/static/chunks/639-7bf6be9a90be8cdb.js +++ /dev/null @@ -1,181 +0,0 @@ -(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[639,931],{4875:function(e,t){var n;/*! - Copyright (c) 2018 Jed Watson. - Licensed under the MIT License (MIT), see - http://jedwatson.github.io/classnames -*/!function(){"use strict";var r={}.hasOwnProperty;function i(){for(var e=[],t=0;t=r&&n<8;n++,r*=128);if(!t)for(var i=r+e,a=n-1;a>=0;a--){var o=i%256;this.source[this.offset+a]=o,i=(i-o)/256}this.offset+=n},o.prototype.writeSections=function(e){this.offset=0;for(var t=0;t=s.getValue()))return n("[fix-webm-duration] Duration section is present"),!1;n("[fix-webm-duration] Duration section is present, but the value is empty"),s.setValue(e)}else n("[fix-webm-duration] Duration section is missing"),(s=new a("Duration","Float")).setValue(e),i.data.push({id:1161,data:s});return o.setValue(1e6),i.updateByData(),r.updateByData(),this.updateByData(),!0},s.prototype.toBlob=function(e){return new Blob([this.source.buffer],{type:e||"video/webm"})},l.default=l,l})?i.call(t,n,t,e):i)&&(e.exports=r)},820:function(e,t,n){"use strict";var r,i;e.exports=(null==(r=n.g.process)?void 0:r.env)&&"object"==typeof(null==(i=n.g.process)?void 0:i.env)?n.g.process:n(3488)},7632:function(e,t,n){"use strict";n.d(t,{ko:function(){return l.ko},tX:function(){return y},Fd:function(){return l.Fd},Sj:function(){return u}});var r=n(931),i=n(4499),a=n(9485),o=function(e,t,n,r){return new(n||(n=Promise))(function(i,a){function o(e){try{l(r.next(e))}catch(t){a(t)}}function s(e){try{l(r.throw(e))}catch(t){a(t)}}function l(e){var t;e.done?i(e.value):((t=e.value)instanceof n?t:new n(function(e){e(t)})).then(o,s)}l((r=r.apply(e,t||[])).next())})};class s{initSession(e,t){return o(this,void 0,void 0,function*(){return yield this.session.initSession(e,t)})}transcribe(e,t,n){return o(this,void 0,void 0,function*(){return null==this.session?a.x4.err(Error("Session not initialized")):n?this.session instanceof r.z?yield this.session.stream(e,t,n):yield this.session.stream(e,t,i.sj(n)):yield this.session.run(e)})}destroy(){null!==this.innerWorker&&(console.warn("Terminating worker"),this.innerWorker.terminate()),this.session=null}constructor(e,t){this.session=e,this.innerWorker=t||null}}var l=n(5453),c=function(e,t,n,r){return new(n||(n=Promise))(function(i,a){function o(e){try{l(r.next(e))}catch(t){a(t)}}function s(e){try{l(r.throw(e))}catch(t){a(t)}}function l(e){var t;e.done?i(e.value):((t=e.value)instanceof n?t:new n(function(e){e(t)})).then(o,s)}l((r=r.apply(e,t||[])).next())})};class u{loadModel(e,t,n){return c(this,void 0,void 0,function*(){let r=yield this.createSession(!0,e,n);return r.isErr?a.x4.err(r.error):(t(r.value),a.x4.ok(r.value))})}createSession(e,t,o){return c(this,void 0,void 0,function*(){if(e&&"undefined"!=typeof document){let l=new Worker(n.tu(new URL(n.p+n.u(931),n.b)),{type:void 0}),c=i.Ud(l),u=yield new c,d=yield u.initSession(t,i.sj(o)),[p,m]=d.repr;return"Err"===p?a.x4.err(Error("Session initialization failed: "+m.toString())):a.x4.ok(new s(u,l))}{let y=new r.z,f=yield y.initSession(t,o);return f.isErr?(console.error("Error initializing session: ",f),a.x4.err(f.error)):a.x4.ok(new s(y))}})}}var 
d=n(7280),p=n.n(d),m=function(e,t,n,r){return new(n||(n=Promise))(function(i,a){function o(e){try{l(r.next(e))}catch(t){a(t)}}function s(e){try{l(r.throw(e))}catch(t){a(t)}}function l(e){var t;e.done?i(e.value):((t=e.value)instanceof n?t:new n(function(e){e(t)})).then(o,s)}l((r=r.apply(e,t||[])).next())})};class y{static start(){return m(this,void 0,void 0,function*(){if(!navigator.mediaDevices)throw Error("Media device not available");let e=yield navigator.mediaDevices.getUserMedia({audio:!0}),t=new MediaRecorder(e,{mimeType:y.supportedMimes.find(e=>MediaRecorder.isTypeSupported(e))}),n=new y(t);return n.currentStream=e,t.addEventListener("dataavailable",e=>{n.audioChunks.push(e.data)}),t.start(),n.currentStart=Date.now(),n})}isRecording(){return null!==this.inner&&"recording"===this.inner.state}stop(){return m(this,void 0,void 0,function*(){if(!this.inner)throw Error("Please start the recorder first");let e=new Promise(e=>{this.inner.addEventListener("stop",()=>m(this,void 0,void 0,function*(){let t=Date.now()-this.currentStart,n=new Blob(this.audioChunks,{type:this.inner.mimeType});this.inner.mimeType.includes("webm")&&(n=yield p()(n,t,{logger:!1}));let r=yield n.arrayBuffer();e({blob:n,buffer:r})})),this.inner.stop(),this.currentStream.getTracks().forEach(e=>e.stop())});return e})}constructor(e){this.currentStart=null,this.currentStream=null,this.inner=null,this.audioChunks=[],this.inner=e}}y.supportedMimes=["audio/webm","audio/ogg"]},5453:function(e,t,n){"use strict";n.d(t,{Fd:function(){return o},Hn:function(){return s},ko:function(){return i}});var r,i,a=n(9485);(r=i||(i={})).WHISPER_TINY="tiny",r.WHISPER_BASE="base",r.WHISPER_SMALL="small",r.WHISPER_MEDIUM="medium",r.WHISPER_LARGE="large";let o=new Map([[i.WHISPER_TINY,51444634],[i.WHISPER_BASE,96834130],[i.WHISPER_SMALL,313018088],[i.WHISPER_MEDIUM,972263884],[i.WHISPER_LARGE,1954315876]]);class s{static fromDBModel(e,t){var n,r,i,o;return n=this,r=void 0,i=void 0,o=function*(){let n=yield t.getTokenizer(e.ID);if(n.isErr)return a.x4.err(n.error);let r=n.value.bytes;return a.x4.ok(new s(e.name,e.bytes,r))},new(i||(i=Promise))(function(e,t){function a(e){try{l(o.next(e))}catch(n){t(n)}}function s(e){try{l(o.throw(e))}catch(n){t(n)}}function l(t){var n;t.done?e(t.value):((n=t.value)instanceof i?n:new i(function(e){e(n)})).then(a,s)}l((o=o.apply(n,r||[])).next())})}constructor(e,t,n){this.name=e,this.data=t,this.tokenizer=n}}},931:function(e,t,n){"use strict";n.d(t,{z:function(){return c}});var r=n(8054),i=n(4499),a=n(9485),o=n(5453),s=n(4208),l=function(e,t,n,r){return new(n||(n=Promise))(function(i,a){function o(e){try{l(r.next(e))}catch(t){a(t)}}function s(e){try{l(r.throw(e))}catch(t){a(t)}}function l(e){var t;e.done?i(e.value):((t=e.value)instanceof n?t:new n(function(e){e(t)})).then(o,s)}l((r=r.apply(e,t||[])).next())})};class c{initSession(e,t){return l(this,void 0,void 0,function*(){if(this.whisperSession)return a.x4.err(Error("Session already initialized. 
Call `destroy()` first."));let n=yield this.loadModel(e,t);if(n.isErr)return a.x4.err(n.error);let i=n.value;yield r.ZP();let o=new r.hE,s=yield o.setModel(i.data).setTokenizer(i.tokenizer).build();return this.whisperSession=s,a.x4.ok(void 0)})}loadModel(e,t){return l(this,void 0,void 0,function*(){let n=yield s.Z.create(),r=yield n.getModel(e,t);if(r.isErr)return a.x4.err(Error("Failed to load model ".concat(e," with error: ").concat(r.error)));let i=r.value,l=yield o.Hn.fromDBModel(i,n);if(l.isErr)return a.x4.err(Error("Failed to transmute model ".concat(e," with error: ").concat(l.error)));let c=l.value;return a.x4.ok(c)})}run(e){return l(this,void 0,void 0,function*(){return this.whisperSession?a.x4.ok((yield this.whisperSession.run(e))):a.x4.err(Error("The session is not initialized. Call `initSession()` method first."))})}stream(e,t,n){return l(this,void 0,void 0,function*(){return this.whisperSession?a.x4.ok((yield this.whisperSession.stream(e,t,n))):a.x4.err(Error("The session is not initialized. Call `initSession()` method first."))})}}"undefined"!=typeof self&&i.Jj(c)},9172:function(e){e.exports={style:{fontFamily:"'__VT323_2a9463', '__VT323_Fallback_2a9463'",fontWeight:400,fontStyle:"normal"},className:"__className_2a9463"}},3488:function(e){!function(){var t={229:function(e){var t,n,r,i=e.exports={};function a(){throw Error("setTimeout has not been defined")}function o(){throw Error("clearTimeout has not been defined")}function s(e){if(t===setTimeout)return setTimeout(e,0);if((t===a||!t)&&setTimeout)return t=setTimeout,setTimeout(e,0);try{return t(e,0)}catch(r){try{return t.call(null,e,0)}catch(n){return t.call(this,e,0)}}}!function(){try{t="function"==typeof setTimeout?setTimeout:a}catch(e){t=a}try{n="function"==typeof clearTimeout?clearTimeout:o}catch(r){n=o}}();var l=[],c=!1,u=-1;function d(){c&&r&&(c=!1,r.length?l=r.concat(l):u=-1,l.length&&p())}function p(){if(!c){var e=s(d);c=!0;for(var t=l.length;t;){for(r=l,l=[];++u1)for(var n=1;n1),u=[],d=!1,p=-1,m=void 0,y=void 0,f=function(e){return u.some(function(t){return!!(t.options.allowTouchMove&&t.options.allowTouchMove(e))})},h=function(e){var t=e||window.event;return!!f(t.target)||t.touches.length>1||(t.preventDefault&&t.preventDefault(),!1)},v=function(e){if(void 0===y){var t=!!e&&!0===e.reserveScrollBarGap,n=window.innerWidth-document.documentElement.clientWidth;t&&n>0&&(y=document.body.style.paddingRight,document.body.style.paddingRight=n+"px")}void 0===m&&(m=document.body.style.overflow,document.body.style.overflow="hidden")},g=function(){void 0!==y&&(document.body.style.paddingRight=y,y=void 0),void 0!==m&&(document.body.style.overflow=m,m=void 0)},b=function(e,t){var n=e.targetTouches[0].clientY-p;return!f(e.target)&&(t&&0===t.scrollTop&&n>0?h(e):t&&t.scrollHeight-t.scrollTop<=t.clientHeight&&n<0?h(e):(e.stopPropagation(),!0))},C=function(e,t){if(!e){console.error("disableBodyScroll unsuccessful - targetElement must be provided when calling disableBodyScroll on IOS devices.");return}!u.some(function(t){return t.targetElement===e})&&(u=[].concat(function(e){if(!Array.isArray(e))return Array.from(e);for(var t=0,n=Array(e.length);t-1&&!(null===a.offsetParent||"hidden"===getComputedStyle(a).visibility)&&function(e){if("INPUT"!==e.tagName||"radio"!==e.type||!e.name)return!0;var t=(e.form||e.ownerDocument).querySelectorAll('input[type="radio"][name="'+e.name+'"]'),n=function(e,t){for(var n=0;nt,set:e=>{Object.is(t,e)||(t=e,n(e))}}),i}(null),i=(0,r.useRef)(null),a=t.isStateful?n:i;return 
r.useEffect(()=>{e&&("function"==typeof e?e(a.current):e.current=a.current)}),a}(t),$=(0,r.useRef)(null),G=(0,r.useRef)(null),q=(0,r.useRef)(null);null===q.current&&U&&(q.current=document.createElement("div"));var Y=(0,r.useState)(!1),K=Y[0],Z=Y[1];(0,r.useEffect)(function(){return c&&B.add($),function(){B.remove($)}},[c,$]),I($,c,K,void 0===d||d,W);var J=function(){!q.current||h||document.body.contains(q.current)||document.body.appendChild(q.current),document.addEventListener("keydown",Q)},X=function(){q.current&&!h&&document.body.contains(q.current)&&document.body.removeChild(q.current),document.removeEventListener("keydown",Q)},Q=function(e){27===e.keyCode&&B.isTopModal($)&&(null==j||j(e),m&&O())};(0,r.useEffect)(function(){return function(){K&&X()}},[K]),(0,r.useEffect)(function(){c&&!K&&(Z(!0),J())},[c]);var ee=function(){G.current=!1},et=h||q.current,en=c?null!=(n=null==D?void 0:D.overlayAnimationIn)?n:A.overlayAnimationIn:null!=(a=null==D?void 0:D.overlayAnimationOut)?a:A.overlayAnimationOut,er=c?null!=(s=null==D?void 0:D.modalAnimationIn)?s:A.modalAnimationIn:null!=(l=null==D?void 0:D.modalAnimationOut)?l:A.modalAnimationOut;return K&&et?i.createPortal(r.createElement("div",{className:o()(A.root,null==D?void 0:D.root),style:null==P?void 0:P.root,"data-testid":"root"},r.createElement("div",{className:o()(A.overlay,null==D?void 0:D.overlay),"data-testid":"overlay","aria-hidden":!0,style:w({animation:en+" "+k+"ms"},null==P?void 0:P.overlay)}),r.createElement("div",{ref:$,id:L,className:o()(A.modalContainer,u&&A.modalContainerCenter,null==D?void 0:D.modalContainer),style:null==P?void 0:P.modalContainer,"data-testid":"modal-container",onClick:function(e){if(null===G.current&&(G.current=!0),!G.current){G.current=null;return}null==z||z(e),f&&O(),G.current=null}},r.createElement("div",{ref:V,className:o()(A.modal,null==D?void 0:D.modal),style:w({animation:er+" "+k+"ms"},null==P?void 0:P.modal),onMouseDown:ee,onMouseUp:ee,onClick:ee,onAnimationEnd:function(){c||Z(!1),null==H||H()},id:N,role:void 0===F?"dialog":F,"aria-modal":"true","aria-labelledby":M,"aria-describedby":R,"data-testid":"modal",tabIndex:-1},(void 0===C||C)&&r.createElement(x,{container:V,initialFocusRef:void 0===S?void 0:S}),_,(void 0===v||v)&&r.createElement(E,{classes:A,classNames:D,styles:P,closeIcon:b,onClick:O,id:g})))),et):null})},4499:function(e,t,n){"use strict";n.d(t,{Jj:function(){return c},Ud:function(){return d},sj:function(){return f}});let r=Symbol("Comlink.proxy"),i=Symbol("Comlink.endpoint"),a=Symbol("Comlink.releaseProxy"),o=Symbol("Comlink.thrown"),s=e=>"object"==typeof e&&null!==e||"function"==typeof e,l=new Map([["proxy",{canHandle:e=>s(e)&&e[r],serialize(e){let{port1:t,port2:n}=new MessageChannel;return c(e,t),[n,[n]]},deserialize:e=>(e.start(),d(e))}],["throw",{canHandle:e=>s(e)&&o in e,serialize:({value:e})=>[e instanceof Error?{isError:!0,value:{message:e.message,name:e.name,stack:e.stack}}:{isError:!1,value:e},[]],deserialize(e){if(e.isError)throw Object.assign(Error(e.value.message),e.value);throw e.value}}]]);function c(e,t=self){t.addEventListener("message",function n(r){let i;if(!r||!r.data)return;let{id:a,type:s,path:l}=Object.assign({path:[]},r.data),d=(r.data.argumentList||[]).map(v);try{let p=l.slice(0,-1).reduce((e,t)=>e[t],e),m=l.reduce((e,t)=>e[t],e);switch(s){case"GET":i=m;break;case"SET":p[l.slice(-1)[0]]=v(r.data.value),i=!0;break;case"APPLY":i=m.apply(p,d);break;case"CONSTRUCT":{let g=new m(...d);i=f(g)}break;case"ENDPOINT":{let{port1:b,port2:C}=new 
MessageChannel;c(e,C),y.set(b,[b]),i=b}break;case"RELEASE":i=void 0;break;default:return}}catch(S){i={value:S,[o]:0}}Promise.resolve(i).catch(e=>({value:e,[o]:0})).then(e=>{let[r,i]=h(e);t.postMessage(Object.assign(Object.assign({},r),{id:a}),i),"RELEASE"===s&&(t.removeEventListener("message",n),u(t))})}),t.start&&t.start()}function u(e){"MessagePort"===e.constructor.name&&e.close()}function d(e,t){return function e(t,n=[],r=function(){}){let o=!1,s=new Proxy(r,{get(r,i){if(p(o),i===a)return()=>g(t,{type:"RELEASE",path:n.map(e=>e.toString())}).then(()=>{u(t),o=!0});if("then"===i){if(0===n.length)return{then:()=>s};let l=g(t,{type:"GET",path:n.map(e=>e.toString())}).then(v);return l.then.bind(l)}return e(t,[...n,i])},set(e,r,i){p(o);let[a,s]=h(i);return g(t,{type:"SET",path:[...n,r].map(e=>e.toString()),value:a},s).then(v)},apply(r,a,s){p(o);let l=n[n.length-1];if(l===i)return g(t,{type:"ENDPOINT"}).then(v);if("bind"===l)return e(t,n.slice(0,-1));let[c,u]=m(s);return g(t,{type:"APPLY",path:n.map(e=>e.toString()),argumentList:c},u).then(v)},construct(e,r){p(o);let[i,a]=m(r);return g(t,{type:"CONSTRUCT",path:n.map(e=>e.toString()),argumentList:i},a).then(v)}});return s}(e,[],t)}function p(e){if(e)throw Error("Proxy has been released and is not useable")}function m(e){var t;let n=e.map(h);return[n.map(e=>e[0]),(t=n.map(e=>e[1]),Array.prototype.concat.apply([],t))]}let y=new WeakMap;function f(e){return Object.assign(e,{[r]:!0})}function h(e){for(let[t,n]of l)if(n.canHandle(e)){let[r,i]=n.serialize(e);return[{type:"HANDLER",name:t,value:r},i]}return[{type:"RAW",value:e},y.get(e)||[]]}function v(e){switch(e.type){case"HANDLER":return l.get(e.name).deserialize(e.value);case"RAW":return e.value}}function g(e,t,n){return new Promise(r=>{let i=[,,,,].fill(0).map(()=>Math.floor(Math.random()*Number.MAX_SAFE_INTEGER).toString(16)).join("-");e.addEventListener("message",function t(n){n.data&&n.data.id&&n.data.id===i&&(e.removeEventListener("message",t),r(n.data))}),e.start&&e.start(),e.postMessage(Object.assign({id:i},t),n)})}},1953:function(e,t,n){"use strict";let r,i;n.d(t,{x7:function(){return ei},ZP:function(){return ea}});var a,o=n(959);let s={data:""},l=e=>"object"==typeof window?((e?e.querySelector("#_goober"):window._goober)||Object.assign((e||document.head).appendChild(document.createElement("style")),{innerHTML:" ",id:"_goober"})).firstChild:e||s,c=/(?:([\u0080-\uFFFF\w-%@]+) *:? *([^{;]+?);|([^;}{]*?) 
*{)|(}\s*)/g,u=/\/\*[^]*?\*\/| +/g,d=/\n+/g,p=(e,t)=>{let n="",r="",i="";for(let a in e){let o=e[a];"@"==a[0]?"i"==a[1]?n=a+" "+o+";":r+="f"==a[1]?p(o,a):a+"{"+p(o,"k"==a[1]?"":t)+"}":"object"==typeof o?r+=p(o,t?t.replace(/([^,])+/g,e=>a.replace(/(^:.*)|([^,])+/g,t=>/&/.test(t)?t.replace(/&/g,e):e?e+" "+t:t)):a):null!=o&&(a=/^--/.test(a)?a:a.replace(/[A-Z]/g,"-$&").toLowerCase(),i+=p.p?p.p(a,o):a+":"+o+";")}return n+(t&&i?t+"{"+i+"}":i)+r},m={},y=e=>{if("object"==typeof e){let t="";for(let n in e)t+=n+y(e[n]);return t}return e},f=(e,t,n,r,i)=>{var a,o;let s=y(e),l=m[s]||(m[s]=(e=>{let t=0,n=11;for(;t>>0;return"go"+n})(s));if(!m[l]){let f=s!==e?e:(e=>{let t,n,r=[{}];for(;t=c.exec(e.replace(u,""));)t[4]?r.shift():t[3]?(n=t[3].replace(d," ").trim(),r.unshift(r[0][n]=r[0][n]||{})):r[0][t[1]]=t[2].replace(d," ").trim();return r[0]})(e);m[l]=p(i?{["@keyframes "+l]:f}:f,n?"":"."+l)}let h=n&&m.g?m.g:null;return n&&(m.g=m[l]),a=m[l],o=t,h?o.data=o.data.replace(h,a):-1===o.data.indexOf(a)&&(o.data=r?a+o.data:o.data+a),l},h=(e,t,n)=>e.reduce((e,r,i)=>{let a=t[i];if(a&&a.call){let o=a(n),s=o&&o.props&&o.props.className||/^go/.test(o)&&o;a=s?"."+s:o&&"object"==typeof o?o.props?"":p(o,""):!1===o?"":o}return e+r+(null==a?"":a)},"");function v(e){let t=this||{},n=e.call?e(t.p):e;return f(n.unshift?n.raw?h(n,[].slice.call(arguments,1),t.p):n.reduce((e,n)=>Object.assign(e,n&&n.call?n(t.p):n),{}):n,l(t.target),t.g,t.o,t.k)}v.bind({g:1});let g,b,C,S=v.bind({k:1});function w(e,t){let n=this||{};return function(){let r=arguments;function i(a,o){let s=Object.assign({},a),l=s.className||i.className;n.p=Object.assign({theme:b&&b()},s),n.o=/ *go\d+/.test(l),s.className=v.apply(n,r)+(l?" "+l:""),t&&(s.ref=o);let c=e;return e[0]&&(c=s.as||e,delete s.as),C&&c[0]&&C(s),g(c,s)}return t?t(i):i}}var E=e=>"function"==typeof e,U=(e,t)=>E(e)?e(t):e,T=(r=0,()=>(++r).toString()),k=()=>{if(void 0===i&&"u">typeof window){let e=matchMedia("(prefers-reduced-motion: reduce)");i=!e||e.matches}return i},x=new Map,D=e=>{if(x.has(e))return;let t=setTimeout(()=>{x.delete(e),F({type:4,toastId:e})},1e3);x.set(e,t)},B=e=>{let t=x.get(e);t&&clearTimeout(t)},I=(e,t)=>{switch(t.type){case 0:return{...e,toasts:[t.toast,...e.toasts].slice(0,20)};case 1:return t.toast.id&&B(t.toast.id),{...e,toasts:e.toasts.map(e=>e.id===t.toast.id?{...e,...t.toast}:e)};case 2:let{toast:n}=t;return e.toasts.find(e=>e.id===n.id)?I(e,{type:1,toast:n}):I(e,{type:0,toast:n});case 3:let{toastId:r}=t;return r?D(r):e.toasts.forEach(e=>{D(e.id)}),{...e,toasts:e.toasts.map(e=>e.id===r||void 0===r?{...e,visible:!1}:e)};case 4:return void 0===t.toastId?{...e,toasts:[]}:{...e,toasts:e.toasts.filter(e=>e.id!==t.toastId)};case 5:return{...e,pausedAt:t.time};case 6:let i=t.time-(e.pausedAt||0);return{...e,pausedAt:void 0,toasts:e.toasts.map(e=>({...e,pauseDuration:e.pauseDuration+i}))}}},A=[],P={toasts:[],pausedAt:void 0},F=e=>{P=I(P,e),A.forEach(e=>{e(P)})},R={blank:4e3,error:4e3,success:2e3,loading:1/0,custom:4e3},M=(e={})=>{let[t,n]=(0,o.useState)(P);(0,o.useEffect)(()=>(A.push(n),()=>{let e=A.indexOf(n);e>-1&&A.splice(e,1)}),[t]);let r=t.toasts.map(t=>{var n,r;return{...e,...e[t.type],...t,duration:t.duration||(null==(n=e[t.type])?void 0:n.duration)||(null==e?void 0:e.duration)||R[t.type],style:{...e.style,...null==(r=e[t.type])?void 0:r.style,...t.style}}});return{...t,toasts:r}},L=(e,t="blank",n)=>({createdAt:Date.now(),visible:!0,type:t,ariaProps:{role:"status","aria-live":"polite"},message:e,pauseDuration:0,...n,id:(null==n?void 0:n.id)||T()}),N=e=>(t,n)=>{let 
r=L(t,e,n);return F({type:2,toast:r}),r.id},O=(e,t)=>N("blank")(e,t);O.error=N("error"),O.success=N("success"),O.loading=N("loading"),O.custom=N("custom"),O.dismiss=e=>{F({type:3,toastId:e})},O.remove=e=>F({type:4,toastId:e}),O.promise=(e,t,n)=>{let r=O.loading(t.loading,{...n,...null==n?void 0:n.loading});return e.then(e=>(O.success(U(t.success,e),{id:r,...n,...null==n?void 0:n.success}),e)).catch(e=>{O.error(U(t.error,e),{id:r,...n,...null==n?void 0:n.error})}),e};var j=(e,t)=>{F({type:1,toast:{id:e,height:t}})},z=()=>{F({type:5,time:Date.now()})},H=e=>{let{toasts:t,pausedAt:n}=M(e);(0,o.useEffect)(()=>{if(n)return;let e=Date.now(),r=t.map(t=>{if(t.duration===1/0)return;let n=(t.duration||0)+t.pauseDuration-(e-t.createdAt);if(n<0){t.visible&&O.dismiss(t.id);return}return setTimeout(()=>O.dismiss(t.id),n)});return()=>{r.forEach(e=>e&&clearTimeout(e))}},[t,n]);let r=(0,o.useCallback)(()=>{n&&F({type:6,time:Date.now()})},[n]),i=(0,o.useCallback)((e,n)=>{let{reverseOrder:r=!1,gutter:i=8,defaultPosition:a}=n||{},o=t.filter(t=>(t.position||a)===(e.position||a)&&t.height),s=o.findIndex(t=>t.id===e.id),l=o.filter((e,t)=>te.visible).slice(...r?[l+1]:[0,l]).reduce((e,t)=>e+(t.height||0)+i,0)},[t]);return{toasts:t,handlers:{updateHeight:j,startPause:z,endPause:r,calculateOffset:i}}},_=w("div")` - width: 20px; - opacity: 0; - height: 20px; - border-radius: 10px; - background: ${e=>e.primary||"#ff4b4b"}; - position: relative; - transform: rotate(45deg); - - animation: ${S` -from { - transform: scale(0) rotate(45deg); - opacity: 0; -} -to { - transform: scale(1) rotate(45deg); - opacity: 1; -}`} 0.3s cubic-bezier(0.175, 0.885, 0.32, 1.275) - forwards; - animation-delay: 100ms; - - &:after, - &:before { - content: ''; - animation: ${S` -from { - transform: scale(0); - opacity: 0; -} -to { - transform: scale(1); - opacity: 1; -}`} 0.15s ease-out forwards; - animation-delay: 150ms; - position: absolute; - border-radius: 3px; - opacity: 0; - background: ${e=>e.secondary||"#fff"}; - bottom: 9px; - left: 4px; - height: 2px; - width: 12px; - } - - &:before { - animation: ${S` -from { - transform: scale(0) rotate(90deg); - opacity: 0; -} -to { - transform: scale(1) rotate(90deg); - opacity: 1; -}`} 0.15s ease-out forwards; - animation-delay: 180ms; - transform: rotate(90deg); - } -`,W=w("div")` - width: 12px; - height: 12px; - box-sizing: border-box; - border: 2px solid; - border-radius: 100%; - border-color: ${e=>e.secondary||"#e0e0e0"}; - border-right-color: ${e=>e.primary||"#616161"}; - animation: ${S` - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } -`} 1s linear infinite; -`,V=w("div")` - width: 20px; - opacity: 0; - height: 20px; - border-radius: 10px; - background: ${e=>e.primary||"#61d345"}; - position: relative; - transform: rotate(45deg); - - animation: ${S` -from { - transform: scale(0) rotate(45deg); - opacity: 0; -} -to { - transform: scale(1) rotate(45deg); - opacity: 1; -}`} 0.3s cubic-bezier(0.175, 0.885, 0.32, 1.275) - forwards; - animation-delay: 100ms; - &:after { - content: ''; - box-sizing: border-box; - animation: ${S` -0% { - height: 0; - width: 0; - opacity: 0; -} -40% { - height: 0; - width: 6px; - opacity: 1; -} -100% { - opacity: 1; - height: 10px; -}`} 0.2s ease-out forwards; - opacity: 0; - animation-delay: 200ms; - position: absolute; - border-right: 2px solid; - border-bottom: 2px solid; - border-color: ${e=>e.secondary||"#fff"}; - bottom: 6px; - left: 6px; - height: 10px; - width: 6px; - } -`,$=w("div")` - position: absolute; -`,G=w("div")` - 
position: relative; - display: flex; - justify-content: center; - align-items: center; - min-width: 20px; - min-height: 20px; -`,q=w("div")` - position: relative; - transform: scale(0.6); - opacity: 0.4; - min-width: 20px; - animation: ${S` -from { - transform: scale(0.6); - opacity: 0.4; -} -to { - transform: scale(1); - opacity: 1; -}`} 0.3s 0.12s cubic-bezier(0.175, 0.885, 0.32, 1.275) - forwards; -`,Y=({toast:e})=>{let{icon:t,type:n,iconTheme:r}=e;return void 0!==t?"string"==typeof t?o.createElement(q,null,t):t:"blank"===n?null:o.createElement(G,null,o.createElement(W,{...r}),"loading"!==n&&o.createElement($,null,"error"===n?o.createElement(_,{...r}):o.createElement(V,{...r})))},K=e=>` -0% {transform: translate3d(0,${-200*e}%,0) scale(.6); opacity:.5;} -100% {transform: translate3d(0,0,0) scale(1); opacity:1;} -`,Z=e=>` -0% {transform: translate3d(0,0,-1px) scale(1); opacity:1;} -100% {transform: translate3d(0,${-150*e}%,-1px) scale(.6); opacity:0;} -`,J=w("div")` - display: flex; - align-items: center; - background: #fff; - color: #363636; - line-height: 1.3; - will-change: transform; - box-shadow: 0 3px 10px rgba(0, 0, 0, 0.1), 0 3px 3px rgba(0, 0, 0, 0.05); - max-width: 350px; - pointer-events: auto; - padding: 8px 10px; - border-radius: 8px; -`,X=w("div")` - display: flex; - justify-content: center; - margin: 4px 10px; - color: inherit; - flex: 1 1 auto; - white-space: pre-line; -`,Q=(e,t)=>{let n=e.includes("top")?1:-1,[r,i]=k()?["0%{opacity:0;} 100%{opacity:1;}","0%{opacity:1;} 100%{opacity:0;}"]:[K(n),Z(n)];return{animation:t?`${S(r)} 0.35s cubic-bezier(.21,1.02,.73,1) forwards`:`${S(i)} 0.4s forwards cubic-bezier(.06,.71,.55,1)`}},ee=o.memo(({toast:e,position:t,style:n,children:r})=>{let i=e.height?Q(e.position||t||"top-center",e.visible):{opacity:0},a=o.createElement(Y,{toast:e}),s=o.createElement(X,{...e.ariaProps},U(e.message,e));return o.createElement(J,{className:e.className,style:{...i,...n,...e.style}},"function"==typeof r?r({icon:a,message:s}):o.createElement(o.Fragment,null,a,s))});a=o.createElement,p.p=void 0,g=a,b=void 0,C=void 0;var et=({id:e,className:t,style:n,onHeightUpdate:r,children:i})=>{let a=o.useCallback(t=>{if(t){let n=()=>{r(e,t.getBoundingClientRect().height)};n(),new MutationObserver(n).observe(t,{subtree:!0,childList:!0,characterData:!0})}},[e,r]);return o.createElement("div",{ref:a,className:t,style:n},i)},en=(e,t)=>{let n=e.includes("top"),r=e.includes("center")?{justifyContent:"center"}:e.includes("right")?{justifyContent:"flex-end"}:{};return{left:0,right:0,display:"flex",position:"absolute",transition:k()?void 0:"all 230ms cubic-bezier(.21,1.02,.73,1)",transform:`translateY(${t*(n?1:-1)}px)`,...n?{top:0}:{bottom:0},...r}},er=v` - z-index: 9999; - > * { - pointer-events: auto; - } -`,ei=({reverseOrder:e,position:t="top-center",toastOptions:n,gutter:r,children:i,containerStyle:a,containerClassName:s})=>{let{toasts:l,handlers:c}=H(n);return o.createElement("div",{style:{position:"fixed",zIndex:9999,top:16,left:16,right:16,bottom:16,pointerEvents:"none",...a},className:s,onMouseEnter:c.startPause,onMouseLeave:c.endPause},l.map(n=>{let a=n.position||t,s=en(a,c.calculateOffset(n,{reverseOrder:e,gutter:r,defaultPosition:t}));return o.createElement(et,{id:n.id,key:n.id,onHeightUpdate:c.updateHeight,className:n.visible?er:"",style:s},"custom"===n.type?U(n.message,n):i?i(n):o.createElement(ee,{toast:n,position:a}))}))},ea=O},9485:function(e,t,n){"use strict";n.d(t,{x4:function(){return r.x4}}),n(4826);var r=n(3807);n(1866),n(113)}}]); \ No newline at end of 
file diff --git a/spaces/Fakermiya/Nsfw-Sfw_Classifier/Dockerfile b/spaces/Fakermiya/Nsfw-Sfw_Classifier/Dockerfile deleted file mode 100644 index 7389a194e4f9307a2920c398ec6ad8fd3509e88d..0000000000000000000000000000000000000000 --- a/spaces/Fakermiya/Nsfw-Sfw_Classifier/Dockerfile +++ /dev/null @@ -1,99 +0,0 @@ -FROM heartexlabs/label-studio:hf-latest - -################################################################################ -# -# How to Disable Public Account Creation -# -------------------------------------- -# By default this space allows for the unrestricted creation of new accounts -# will full access to all projects and data. This is great for trying out -# Label Studio and collaborating on projects, but you may want to restrict -# access to your space to only authorized users. Uncomment the following line -# to disable public account creation for this space. -# -# ENV LABEL_STUDIO_DISABLE_SIGNUP_WITHOUT_LINK=true -# -# Set secrets in your space to create an inital user, and log in with your -# provided username and password. Do not set these in your Dockerfile, as they -# globally visible on a public space. -# -# LABEL_STUDIO_USERNAME -# LABEL_STUDIO_PASSWORD -# -# You will need to provide new users with an invitation link to join the space. -# -################################################################################ - -################################################################################ -# -# How to Enable Configuration Persistence -# --------------------------------------- -# By default this space stores all project configuration and data annotations -# in local storage with Sqlite. If the space is reset, all configuration and -# annotation data in the space will be lost. You can enable configuration -# persistence by connecting an external Postgres database to your space, -# guaranteeing that all project and annotation settings are preserved. -# -# Set the following secret variables to match your own hosted instance of -# Postgres. We strongly recommend setting these as secrets to prevent leaking -# information about your database service to the public in your spaces -# definition. -# -# ENV DJANGO_DB=default -# ENV POSTGRE_NAME= -# ENV POSTGRE_PORT= -# ENV POSTGRE_USER= -# ENV POSTGRE_PASSWORD= -# ENV POSTGRE_PORT= -# ENV POSTGRE_HOST= -# -# Uncomment the following line to remove the warning about ephemeral storage -# -# ENV STORAGE_PERSISTENCE=1 -# -# Note that you will need to connect cloud storage to host data items that you -# want to annotate, as local storage will not be preserved across a space reset. -# -################################################################################ - -################################################################################ -# -# How to Enable Cloud Storage -# --------------------------- -# By default the only data storage enabled for this space is local. In the case -# of a space reset, all data will be lost. To enable permanent storage, you -# must enable a cloud storage connector. We also strongly recommend enabling -# configuration persistence to preserve project data, annotations, and user -# settings. Choose the appropriate cloud connector and configure the secrets -# for it. 
-# -# Amazon S3 -# ========= -# STORAGE_TYPE=s3 -# STORAGE_AWS_ACCESS_KEY_ID="" -# STORAGE_AWS_SECRET_ACCESS_KEY="" -# STORAGE_AWS_BUCKET_NAME="" -# STORAGE_AWS_REGION_NAME="" -# STORAGE_AWS_FOLDER="" -# -# Google Cloud Storage -# ==================== -# -# STORAGE_TYPE=gcs -# STORAGE_GCS_BUCKET_NAME="" -# STORAGE_GCS_PROJECT_ID="" -# STORAGE_GCS_FOLDER="" -# GOOGLE_APPLICATION_CREDENTIALS="/opt/heartex/secrets/key.json" -# -# Azure Blob Storage -# ================== -# -# STORAGE_TYPE=azure -# STORAGE_AZURE_ACCOUNT_NAME="" -# STORAGE_AZURE_ACCOUNT_KEY="" -# STORAGE_AZURE_CONTAINER_NAME="" -# STORAGE_AZURE_FOLDER="" -# -# -################################################################################ - -CMD exec label-studio --host=$SPACE_HOST diff --git a/spaces/Fengbinbin/gpt-academic/docs/README_FR.md b/spaces/Fengbinbin/gpt-academic/docs/README_FR.md deleted file mode 100644 index f21e90035ef2ddea91382155e0ad46b6740f5322..0000000000000000000000000000000000000000 --- a/spaces/Fengbinbin/gpt-academic/docs/README_FR.md +++ /dev/null @@ -1,296 +0,0 @@ -> **Note** -> -> Ce fichier README est généré automatiquement par le plugin de traduction markdown de ce projet et n'est peut - être pas correct à 100%. -> - -# ChatGPT Optimisation Académique - -**Si vous aimez ce projet, donnez-lui une étoile; si vous avez inventé des raccourcis académiques plus utiles ou des plugins fonctionnels, n'hésitez pas à ouvrir une demande ou une demande de traction. Nous avons également un fichier README en [anglais|](docs/README_EN.md)[japonais|](docs/README_JP.md)[russe|](docs/README_RS.md)[français](docs/README_FR.md) traduit par ce projet lui-même.** - -> **Note** -> -> 1. Veuillez noter que seuls les plugins de fonction signalés en **rouge** sont capables de lire les fichiers, certains plugins se trouvent dans le **menu déroulant** de la section plugin. Nous sommes également les bienvenus avec la plus haute priorité pour traiter et accepter tout nouveau PR de plugin! -> -> 2. Chaque fichier dans ce projet est expliqué en détail dans l'auto-analyse [self_analysis.md](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). Avec l'itération des versions, vous pouvez également cliquer sur les plugins fonctionnels pertinents pour appeler GPT et générer un rapport d'auto-analyse projet mis à jour. Les questions fréquemment posées sont résumées dans le [wiki](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). -> - -
    - -Fonctionnalité | Description ---- | --- -Polissage en un clic | Prend en charge la correction en un clic et la recherche d'erreurs de syntaxe dans les documents de recherche. -Traduction Chinois-Anglais en un clic | Une touche pour traduire la partie chinoise en anglais ou celle anglaise en chinois. -Explication de code en un clic | Affiche et explique correctement le code. -[Raccourcis clavier personnalisables](https://www.bilibili.com/video/BV14s4y1E7jN) | Prend en charge les raccourcis clavier personnalisables. -[Configuration du serveur proxy](https://www.bilibili.com/video/BV1rc411W7Dr) | Prend en charge la configuration du serveur proxy. -Conception modulaire | Prend en charge la personnalisation des plugins de fonctions et des [plugins] de fonctions hiérarchiques personnalisés, et les plugins prennent en charge [la mise à jour à chaud](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97). -[Auto-analyse du programme](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugins] [Lire en un clic](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) le code source de ce projet. -[Analyse de programme](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugins] En un clic, les projets Python/C/C++/Java/Lua/... peuvent être analysés. -Lire le document de recherche | [Plugins] Lisez le résumé de l'article en latex et générer un résumé. -Traduction et polissage de l'article complet en LaTeX | [Plugins] Une touche pour traduire ou corriger en LaTeX -Génération Commentaire de fonction en vrac | [Plugins] Lisez en un clic les fonctions et générez des commentaires de fonction. -Rapport d'analyse automatique des chats générés | [Plugins] Génère un rapport de synthèse après l'exécution. -[Assistant arxiv](https://www.bilibili.com/video/BV1LM4y1279X) | [Plugins] Entrez l'url de l'article arxiv pour traduire le résumé + télécharger le PDF en un clic -[Traduction complète des articles PDF](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plugins] Extraire le titre et le résumé de l'article PDF + Traduire le texte entier (multithread) -[Aide à la recherche Google Academ](https://www.bilibili.com/video/BV19L411U7ia) | [Plugins] Donnez à GPT l'URL de n'importe quelle page de recherche Google Academ pour vous aider à sélectionner des articles intéressants -Affichage de formules/images/tableaux | Afficher la forme traduite et rendue d'une formule en même temps, plusieurs formules et surlignage du code prend en charge -Prise en charge des plugins multithread | Prise en charge de l'appel multithread de chatgpt, traitement en masse de texte ou de programmes en un clic -Activer le thème Gradio sombre [theme](https://github.com/binary-husky/chatgpt_academic/issues/173) au démarrage | Ajoutez ```/?__dark-theme=true``` à l'URL du navigateur pour basculer vers le thème sombre -[Prise en charge de plusieurs modèles LLM](https://www.bilibili.com/video/BV1wT411p7yf), [prise en charge de l'interface API2D](https://api2d.com/) | Comment cela serait-il de se faire servir par GPT3.5, GPT4 et la [ChatGLM de Tsinghua](https://github.com/THUDM/ChatGLM-6B) en même temps? -Expérience en ligne d'huggingface sans science | Après vous être connecté à huggingface, copiez [cet espace](https://huggingface.co/spaces/qingxu98/gpt-academic) -... | ... - -
    - - -- Nouvelle interface (modifiable en modifiant l'option de mise en page dans config.py pour basculer entre les mises en page gauche-droite et haut-bas) -
    - -
    - - -- Tous les boutons sont générés dynamiquement en lisant functional.py, les utilisateurs peuvent ajouter librement des fonctions personnalisées pour libérer le presse-papiers. -
    - -
    - -- Correction/amélioration -
    - -
    - -- Si la sortie contient des formules, elles seront affichées simultanément sous forme de texte brut et sous forme rendue pour faciliter la copie et la lecture. -
    - -
    - -- Pas envie de lire le code du projet ? Faites votre propre démo avec ChatGPT. -
    - -
    - -- Utilisation combinée de plusieurs modèles de langage sophistiqués (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4) -
    - -
    - -Utilisation combinée de plusieurs modèles de langage sophistiqués en version de test [huggingface](https://huggingface.co/spaces/qingxu98/academic-chatgpt-beta) (la version huggingface ne prend pas en charge Chatglm). - - ---- - -## Installation - Méthode 1 : Exécution directe (Windows, Linux or MacOS) - -1. Téléchargez le projet -```sh -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -``` - -2. Configuration de l'API_KEY et des paramètres de proxy - -Dans `config.py`, configurez les paramètres de proxy et de clé d'API OpenAI, comme indiqué ci-dessous -``` -1. Si vous êtes en Chine, vous devez configurer un proxy étranger pour utiliser l'API OpenAI en toute transparence. Pour ce faire, veuillez lire attentivement le fichier config.py (1. Modifiez l'option USE_PROXY ; 2. Modifiez les paramètres de proxies comme indiqué dans les instructions). -2. Configurez votre clé API OpenAI. Vous devez vous inscrire sur le site web d'OpenAI pour obtenir une clé API. Une fois que vous avez votre clé API, vous pouvez la configurer dans le fichier config.py. -3. Tous les problèmes liés aux réseaux de proxy (temps d'attente, non-fonctionnement des proxies) sont résumés dans https://github.com/binary-husky/chatgpt_academic/issues/1. -``` -(Remarque : le programme vérifie d'abord s'il existe un fichier de configuration privé nommé `config_private.py`, et utilise les configurations de celui-ci à la place de celles du fichier `config.py`. Par conséquent, si vous comprenez notre logique de lecture de configuration, nous vous recommandons fortement de créer un nouveau fichier de configuration nommé `config_private.py` à côté de `config.py` et de transférer (copier) les configurations de celui-ci dans `config_private.py`. `config_private.py` n'est pas contrôlé par git et rend vos informations personnelles plus sûres.) - -3. Installation des dépendances -```sh -# (Option 1) Recommandé -python -m pip install -r requirements.txt - -# (Option 2) Si vous utilisez anaconda, les étapes sont similaires : -# (Option 2.1) conda create -n gptac_venv python=3.11 -# (Option 2.2) conda activate gptac_venv -# (Option 2.3) python -m pip install -r requirements.txt - -# note : Utilisez la source pip officielle ou la source pip Alibaba. D'autres sources (comme celles des universités) pourraient poser problème. Pour utiliser temporairement une autre source, utilisez : -# python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ -``` - -Si vous avez besoin de soutenir ChatGLM de Tsinghua, vous devez installer plus de dépendances (si vous n'êtes pas familier avec Python ou que votre ordinateur n'est pas assez performant, nous vous recommandons de ne pas essayer) : -```sh -python -m pip install -r request_llm/requirements_chatglm.txt -``` - -4. Exécution -```sh -python main.py -``` - -5. Tester les plugins de fonctions -``` -- Test Python Project Analysis - Dans la zone de saisie, entrez `./crazy_functions/test_project/python/dqn`, puis cliquez sur "Parse Entire Python Project" -- Test d'auto-lecture du code - Cliquez sur "[Démo multi-thread] Parser ce projet lui-même (auto-traduction de la source)" -- Test du modèle de fonctionnalité expérimentale (exige une réponse de l'IA à ce qui est arrivé aujourd'hui dans l'histoire). Vous pouvez utiliser cette fonctionnalité comme modèle pour des fonctions plus complexes. 
- Cliquez sur "[Démo modèle de plugin de fonction] Histoire du Jour" -- Le menu déroulant de la zone de plugin de fonctionnalité contient plus de fonctionnalités à sélectionner. -``` - -## Installation - Méthode 2 : Utilisation de docker (Linux) - -1. ChatGPT seul (recommandé pour la plupart des gens) -``` sh -# Télécharger le projet -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -# Configurer le proxy outre-mer et la clé API OpenAI -Modifier le fichier config.py avec n'importe quel éditeur de texte -# Installer -docker build -t gpt-academic . -# Exécuter -docker run --rm -it --net=host gpt-academic - -# Tester les modules de fonction -## Tester la fonction modèle des modules (requiert la réponse de GPT à "qu'est-ce qui s'est passé dans l'histoire aujourd'hui ?"), vous pouvez utiliser cette fonction en tant que modèle pour implémenter des fonctions plus complexes. -Cliquez sur "[Exemple de modèle de module] Histoire d'aujourd'hui" -## Tester le résumé écrit pour le projet LaTeX -Dans la zone de saisie, tapez ./crazy_functions/test_project/latex/attention, puis cliquez sur "Lire le résumé de l'article de recherche LaTeX" -## Tester l'analyse du projet Python -Dans la zone de saisie, tapez ./crazy_functions/test_project/python/dqn, puis cliquez sur "Analyser l'ensemble du projet Python" - -D'autres fonctions sont disponibles dans la liste déroulante des modules de fonction. -``` - -2. ChatGPT+ChatGLM (nécessite une grande connaissance de docker et une configuration informatique suffisamment puissante) -``` sh -# Modifier le dockerfile -cd docs && nano Dockerfile+ChatGLM -# Comment construire | 如何构建 (Dockerfile+ChatGLM在docs路径下,请先cd docs) -docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM . -# Comment exécuter | 如何运行 (1) Directement exécuter : -docker run --rm -it --net=host --gpus=all gpt-academic -# Comment exécuter | 如何运行 (2) Je veux effectuer quelques ajustements dans le conteneur avant de lancer : -docker run --rm -it --net=host --gpus=all gpt-academic bash -``` - -## Installation - Méthode 3 : Autres méthodes de déploiement - -1. Déploiement sur un serveur cloud distant -Veuillez consulter le [wiki de déploiement-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97) - -2. Utilisation de WSL2 (Windows Subsystem for Linux) -Veuillez consulter le [wiki de déploiement-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2) - - -## Configuration du proxy pour l'installation -### Méthode 1 : Méthode conventionnelle -[Configuration du proxy](https://github.com/binary-husky/chatgpt_academic/issues/1) - -### Méthode 2 : Tutoriel pour débutant pur -[Tutoriel pour débutant pur](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89) - - ---- - -## Personnalisation des nouveaux boutons pratiques (personnalisation des raccourcis académiques) -Ouvrez le fichier `core_functional.py` avec n'importe quel éditeur de texte, ajoutez les éléments suivants, puis redémarrez le programme.
(Si le bouton a déjà été ajouté avec succès et est visible, le préfixe et le suffixe pris en charge peuvent être modifiés à chaud sans avoir besoin de redémarrer le programme.) -Par exemple: -``` -"Traduction Français-Chinois": { - # Préfixe, qui sera ajouté avant votre saisie. Par exemple, pour décrire votre demande, telle que la traduction, le débogage de code, l'amélioration, etc. - "Prefix": "Veuillez traduire le contenu ci-dessous en chinois, puis expliquer chaque terme propre mentionné dans un tableau Markdown :\n\n", - - # Suffixe, qui sera ajouté après votre saisie. Par exemple, en combinaison avec un préfixe, vous pouvez mettre le contenu de votre saisie entre guillemets. - "Suffix": "", -}, -``` - -
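Pour situer cet exemple : l'entrée ci-dessus n'est qu'un élément d'un dictionnaire Python que `core_functional.py` expose au reste du programme ; chaque clé devient un bouton, et le texte saisi est simplement encadré par `Prefix` et `Suffix` avant d'être envoyé au modèle. L'esquisse suivante illustre ce mécanisme ; le nom `get_core_functions`, la fonction `build_prompt` et la structure exacte du fichier sont des hypothèses d'illustration, à adapter à votre copie du dépôt.

```python
# Esquisse minimale (hypothétique) : seuls "Prefix" et "Suffix" proviennent
# de l'exemple ci-dessus, le reste n'est qu'illustratif.

def get_core_functions():
    # Chaque clé du dictionnaire devient un bouton de l'interface.
    return {
        "Traduction Français-Chinois": {
            # Texte ajouté avant la saisie de l'utilisateur
            "Prefix": ("Veuillez traduire le contenu ci-dessous en chinois, puis expliquer "
                       "chaque terme propre mentionné dans un tableau Markdown :\n\n"),
            # Texte ajouté après la saisie de l'utilisateur
            "Suffix": "",
        },
    }

def build_prompt(user_input: str, button: dict) -> str:
    """Assemble la requête réellement envoyée au modèle : Prefix + saisie + Suffix."""
    return button["Prefix"] + user_input + button["Suffix"]

if __name__ == "__main__":
    fonctions = get_core_functions()
    print(build_prompt("Bonjour le monde", fonctions["Traduction Français-Chinois"]))
```

C'est ce découplage entre le texte du bouton et le corps du programme qui permet de modifier `Prefix`/`Suffix` à chaud, sans redémarrage.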
    - -
    - ---- - - -## Présentation de certaines fonctionnalités - -### Affichage des images: - -
    - -
    - - -### Si un programme peut comprendre et décomposer lui-même : - -
    - -
    - -
    - -
    - - -### Analyse de tout projet Python/Cpp quelconque : -
    - -
    - -
    - -
    - -### Lecture et résumé générés automatiquement pour les articles en Latex -
    - -
    - -### Génération de rapports automatique -
    - - - -
    - -### Conception de fonctionnalités modulaires -
    - - -
    - - -### Traduction de code source en anglais - -
    - -
    - -## À faire et planification de version : -- version 3.2+ (à faire) : Prise en charge de plus de paramètres d'interface de plugin de fonction -- version 3.1 : Prise en charge de l'interrogation simultanée de plusieurs modèles GPT ! Prise en charge de l'API2d, prise en charge de la répartition de charge de plusieurs clés API -- version 3.0 : Prise en charge de chatglm et d'autres petits llm -- version 2.6 : Réorganisation de la structure du plugin, amélioration de l'interactivité, ajout de plus de plugins -- version 2.5 : Mise à jour automatique, résolution du problème de dépassement de jeton et de texte trop long lors de la compilation du code source complet -- version 2.4 : (1) Ajout de la fonctionnalité de traduction intégrale de PDF ; (2) Ajout d'une fonctionnalité de changement de position de zone de saisie ; (3) Ajout d'une option de disposition verticale ; (4) Optimisation du plugin de fonction multi-thread. -- version 2.3 : Amélioration de l'interactivité multi-thread -- version 2.2 : Prise en charge du rechargement à chaud du plugin de fonction -- version 2.1 : Mise en page pliable -- version 2.0 : Introduction du plugin de fonction modulaire -- version 1.0 : Fonctionnalité de base - -## Références et apprentissage - -``` -De nombreux designs d'autres projets exceptionnels ont été utilisés pour référence dans le code, notamment : - -# Projet 1 : De nombreuses astuces ont été empruntées à ChuanhuChatGPT -https://github.com/GaiZhenbiao/ChuanhuChatGPT - -# Projet 2 : ChatGLM-6B de Tsinghua : -https://github.com/THUDM/ChatGLM-6B -``` - diff --git a/spaces/FrankZxShen/vits-fast-fineturning-models-ba/text/cleaners.py b/spaces/FrankZxShen/vits-fast-fineturning-models-ba/text/cleaners.py deleted file mode 100644 index 263df9c0f7c185290600454abfff464e7f774576..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/vits-fast-fineturning-models-ba/text/cleaners.py +++ /dev/null @@ -1,134 +0,0 @@ -import re -from text.japanese import japanese_to_romaji_with_accent, japanese_to_ipa, japanese_to_ipa2, japanese_to_ipa3 -from text.korean import latin_to_hangul, number_to_hangul, divide_hangul, korean_to_lazy_ipa, korean_to_ipa -from text.mandarin import number_to_chinese, chinese_to_bopomofo, latin_to_bopomofo, chinese_to_romaji, chinese_to_lazy_ipa, chinese_to_ipa, chinese_to_ipa2 -from text.sanskrit import devanagari_to_ipa -from text.english import english_to_lazy_ipa, english_to_ipa2, english_to_lazy_ipa2 -from text.thai import num_to_thai, latin_to_thai -# from text.shanghainese import shanghainese_to_ipa -# from text.cantonese import cantonese_to_ipa -# from text.ngu_dialect import ngu_dialect_to_ipa - - -def japanese_cleaners(text): - text = japanese_to_romaji_with_accent(text) - text = re.sub(r'([A-Za-z])$', r'\1.', text) - return text - - -def japanese_cleaners2(text): - return japanese_cleaners(text).replace('ts', 'ʦ').replace('...', '…') - - -def korean_cleaners(text): - '''Pipeline for Korean text''' - text = latin_to_hangul(text) - text = number_to_hangul(text) - text = divide_hangul(text) - text = re.sub(r'([\u3131-\u3163])$', r'\1.', text) - return text - - -# def chinese_cleaners(text): -# '''Pipeline for Chinese text''' -# text = number_to_chinese(text) -# text = chinese_to_bopomofo(text) -# text = latin_to_bopomofo(text) -# text = re.sub(r'([ˉˊˇˋ˙])$', r'\1。', text) -# return text - -def chinese_cleaners(text): - from pypinyin import Style, pinyin - text = text.replace("[ZH]", "") - phones = [phone[0] for phone in pinyin(text, style=Style.TONE3)] - return ' 
'.join(phones) - - -def zh_ja_mixture_cleaners(text): - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_romaji(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: japanese_to_romaji_with_accent( - x.group(1)).replace('ts', 'ʦ').replace('u', 'ɯ').replace('...', '…')+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def sanskrit_cleaners(text): - text = text.replace('॥', '।').replace('ॐ', 'ओम्') - text = re.sub(r'([^।])$', r'\1।', text) - return text - - -def cjks_cleaners(text): - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_lazy_ipa(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - lambda x: japanese_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_lazy_ipa(x.group(1))+' ', text) - text = re.sub(r'\[SA\](.*?)\[SA\]', - lambda x: devanagari_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_lazy_ipa(x.group(1))+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def cjke_cleaners(text): - text = re.sub(r'\[ZH\](.*?)\[ZH\]', lambda x: chinese_to_lazy_ipa(x.group(1)).replace( - 'ʧ', 'tʃ').replace('ʦ', 'ts').replace('ɥan', 'ɥæn')+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: japanese_to_ipa(x.group(1)).replace('ʧ', 'tʃ').replace( - 'ʦ', 'ts').replace('ɥan', 'ɥæn').replace('ʥ', 'dz')+' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', lambda x: english_to_ipa2(x.group(1)).replace('ɑ', 'a').replace( - 'ɔ', 'o').replace('ɛ', 'e').replace('ɪ', 'i').replace('ʊ', 'u')+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def cjke_cleaners2(text): - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - lambda x: japanese_to_ipa2(x.group(1))+' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_ipa2(x.group(1))+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def thai_cleaners(text): - text = num_to_thai(text) - text = latin_to_thai(text) - return text - - -# def shanghainese_cleaners(text): -# text = shanghainese_to_ipa(text) -# text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) -# return text - - -# def chinese_dialect_cleaners(text): -# text = re.sub(r'\[ZH\](.*?)\[ZH\]', -# lambda x: chinese_to_ipa2(x.group(1))+' ', text) -# text = re.sub(r'\[JA\](.*?)\[JA\]', -# lambda x: japanese_to_ipa3(x.group(1)).replace('Q', 'ʔ')+' ', text) -# text = re.sub(r'\[SH\](.*?)\[SH\]', lambda x: shanghainese_to_ipa(x.group(1)).replace('1', '˥˧').replace('5', -# '˧˧˦').replace('6', '˩˩˧').replace('7', '˥').replace('8', '˩˨').replace('ᴀ', 'ɐ').replace('ᴇ', 'e')+' ', text) -# text = re.sub(r'\[GD\](.*?)\[GD\]', -# lambda x: cantonese_to_ipa(x.group(1))+' ', text) -# text = re.sub(r'\[EN\](.*?)\[EN\]', -# lambda x: english_to_lazy_ipa2(x.group(1))+' ', text) -# text = re.sub(r'\[([A-Z]{2})\](.*?)\[\1\]', lambda x: ngu_dialect_to_ipa(x.group(2), x.group( -# 1)).replace('ʣ', 'dz').replace('ʥ', 'dʑ').replace('ʦ', 'ts').replace('ʨ', 'tɕ')+' ', text) -# text = re.sub(r'\s+$', '', text) -# text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) -# return text diff --git 
a/spaces/FridaZuley/RVC_HFKawaii/lib/uvr5_pack/lib_v5/layers_537227KB.py b/spaces/FridaZuley/RVC_HFKawaii/lib/uvr5_pack/lib_v5/layers_537227KB.py deleted file mode 100644 index a38b7bb3ae3136b07eadfc2db445fef4c2de186b..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/lib/uvr5_pack/lib_v5/layers_537227KB.py +++ /dev/null @@ -1,126 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from . import spec_utils - - -class Conv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(Conv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nout, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - bias=False, - ), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class SeperableConv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(SeperableConv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nin, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - groups=nin, - bias=False, - ), - nn.Conv2d(nin, nout, kernel_size=1, bias=False), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class Encoder(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU): - super(Encoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ) - - def __call__(self, x): - skip = self.conv1(x) - h = self.conv2(skip) - - return h, skip - - -class Decoder(nn.Module): - def __init__( - self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False - ): - super(Decoder, self).__init__() - self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def __call__(self, x, skip=None): - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - if skip is not None: - skip = spec_utils.crop_center(skip, x) - x = torch.cat([x, skip], dim=1) - h = self.conv(x) - - if self.dropout is not None: - h = self.dropout(h) - - return h - - -class ASPPModule(nn.Module): - def __init__(self, nin, nout, dilations=(4, 8, 16, 32, 64), activ=nn.ReLU): - super(ASPPModule, self).__init__() - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d((1, None)), - Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ), - ) - self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ) - self.conv3 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[0], dilations[0], activ=activ - ) - self.conv4 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[1], dilations[1], activ=activ - ) - self.conv5 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.conv6 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.conv7 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.bottleneck = nn.Sequential( - Conv2DBNActiv(nin * 7, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1) - ) - - def forward(self, x): - _, _, h, w = x.size() - feat1 = F.interpolate( - self.conv1(x), size=(h, w), mode="bilinear", align_corners=True - ) - feat2 = self.conv2(x) - feat3 = self.conv3(x) - feat4 = self.conv4(x) - feat5 = self.conv5(x) - feat6 = self.conv6(x) - feat7 = self.conv7(x) - out = 
torch.cat((feat1, feat2, feat3, feat4, feat5, feat6, feat7), dim=1) - bottle = self.bottleneck(out) - return bottle diff --git a/spaces/Froleptan/stablediffusion-infinity/PyPatchMatch/patch_match.py b/spaces/Froleptan/stablediffusion-infinity/PyPatchMatch/patch_match.py deleted file mode 100644 index 14febe43c78f49120c8be9f02941c3c1f8fdc3b1..0000000000000000000000000000000000000000 --- a/spaces/Froleptan/stablediffusion-infinity/PyPatchMatch/patch_match.py +++ /dev/null @@ -1,263 +0,0 @@ -#! /usr/bin/env python3 -# -*- coding: utf-8 -*- -# File : patch_match.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 01/09/2020 -# -# Distributed under terms of the MIT license. - -import ctypes -import os.path as osp -from typing import Optional, Union - -import numpy as np -from PIL import Image - - -import os -if os.name!="nt": - # Otherwise, fall back to the subprocess. - import subprocess - print('Compiling and loading c extensions from "{}".'.format(osp.realpath(osp.dirname(__file__)))) - # subprocess.check_call(['./travis.sh'], cwd=osp.dirname(__file__)) - subprocess.check_call("make clean && make", cwd=osp.dirname(__file__), shell=True) - - -__all__ = ['set_random_seed', 'set_verbose', 'inpaint', 'inpaint_regularity'] - - -class CShapeT(ctypes.Structure): - _fields_ = [ - ('width', ctypes.c_int), - ('height', ctypes.c_int), - ('channels', ctypes.c_int), - ] - - -class CMatT(ctypes.Structure): - _fields_ = [ - ('data_ptr', ctypes.c_void_p), - ('shape', CShapeT), - ('dtype', ctypes.c_int) - ] - -import tempfile -from urllib.request import urlopen, Request -import shutil -from pathlib import Path -from tqdm import tqdm - -def download_url_to_file(url, dst, hash_prefix=None, progress=True): - r"""Download object at the given URL to a local path. - - Args: - url (string): URL of the object to download - dst (string): Full path where object will be saved, e.g. ``/tmp/temporary_file`` - hash_prefix (string, optional): If not None, the SHA256 downloaded file should start with ``hash_prefix``. - Default: None - progress (bool, optional): whether or not to display a progress bar to stderr - Default: True - https://pytorch.org/docs/stable/_modules/torch/hub.html#load_state_dict_from_url - """ - file_size = None - req = Request(url) - u = urlopen(req) - meta = u.info() - if hasattr(meta, 'getheaders'): - content_length = meta.getheaders("Content-Length") - else: - content_length = meta.get_all("Content-Length") - if content_length is not None and len(content_length) > 0: - file_size = int(content_length[0]) - - # We deliberately save it in a temp file and move it after - # download is complete. This prevents a local working checkpoint - # being overridden by a broken download. 
- dst = os.path.expanduser(dst) - dst_dir = os.path.dirname(dst) - f = tempfile.NamedTemporaryFile(delete=False, dir=dst_dir) - - try: - with tqdm(total=file_size, disable=not progress, - unit='B', unit_scale=True, unit_divisor=1024) as pbar: - while True: - buffer = u.read(8192) - if len(buffer) == 0: - break - f.write(buffer) - pbar.update(len(buffer)) - - f.close() - shutil.move(f.name, dst) - finally: - f.close() - if os.path.exists(f.name): - os.remove(f.name) - -if os.name!="nt": - PMLIB = ctypes.CDLL(osp.join(osp.dirname(__file__), 'libpatchmatch.so')) -else: - if not os.path.exists(osp.join(osp.dirname(__file__), 'libpatchmatch.dll')): - download_url_to_file(url="https://github.com/lkwq007/PyPatchMatch/releases/download/v0.1/libpatchmatch.dll",dst=osp.join(osp.dirname(__file__), 'libpatchmatch.dll')) - if not os.path.exists(osp.join(osp.dirname(__file__), 'opencv_world460.dll')): - download_url_to_file(url="https://github.com/lkwq007/PyPatchMatch/releases/download/v0.1/opencv_world460.dll",dst=osp.join(osp.dirname(__file__), 'opencv_world460.dll')) - if not os.path.exists(osp.join(osp.dirname(__file__), 'libpatchmatch.dll')): - print("[Dependency Missing] Please download https://github.com/lkwq007/PyPatchMatch/releases/download/v0.1/libpatchmatch.dll and put it into the PyPatchMatch folder") - if not os.path.exists(osp.join(osp.dirname(__file__), 'opencv_world460.dll')): - print("[Dependency Missing] Please download https://github.com/lkwq007/PyPatchMatch/releases/download/v0.1/opencv_world460.dll and put it into the PyPatchMatch folder") - PMLIB = ctypes.CDLL(osp.join(osp.dirname(__file__), 'libpatchmatch.dll')) - -PMLIB.PM_set_random_seed.argtypes = [ctypes.c_uint] -PMLIB.PM_set_verbose.argtypes = [ctypes.c_int] -PMLIB.PM_free_pymat.argtypes = [CMatT] -PMLIB.PM_inpaint.argtypes = [CMatT, CMatT, ctypes.c_int] -PMLIB.PM_inpaint.restype = CMatT -PMLIB.PM_inpaint_regularity.argtypes = [CMatT, CMatT, CMatT, ctypes.c_int, ctypes.c_float] -PMLIB.PM_inpaint_regularity.restype = CMatT -PMLIB.PM_inpaint2.argtypes = [CMatT, CMatT, CMatT, ctypes.c_int] -PMLIB.PM_inpaint2.restype = CMatT -PMLIB.PM_inpaint2_regularity.argtypes = [CMatT, CMatT, CMatT, CMatT, ctypes.c_int, ctypes.c_float] -PMLIB.PM_inpaint2_regularity.restype = CMatT - - -def set_random_seed(seed: int): - PMLIB.PM_set_random_seed(ctypes.c_uint(seed)) - - -def set_verbose(verbose: bool): - PMLIB.PM_set_verbose(ctypes.c_int(verbose)) - - -def inpaint( - image: Union[np.ndarray, Image.Image], - mask: Optional[Union[np.ndarray, Image.Image]] = None, - *, - global_mask: Optional[Union[np.ndarray, Image.Image]] = None, - patch_size: int = 15 -) -> np.ndarray: - """ - PatchMatch based inpainting proposed in: - - PatchMatch : A Randomized Correspondence Algorithm for Structural Image Editing - C.Barnes, E.Shechtman, A.Finkelstein and Dan B.Goldman - SIGGRAPH 2009 - - Args: - image (Union[np.ndarray, Image.Image]): the input image, should be 3-channel RGB/BGR. - mask (Union[np.array, Image.Image], optional): the mask of the hole(s) to be filled, should be 1-channel. - If not provided (None), the algorithm will treat all purely white pixels as the holes (255, 255, 255). - global_mask (Union[np.array, Image.Image], optional): the target mask of the output image. - patch_size (int): the patch size for the inpainting algorithm. - - Return: - result (np.ndarray): the repaired image, of the same size as the input image. 
- """ - - if isinstance(image, Image.Image): - image = np.array(image) - image = np.ascontiguousarray(image) - assert image.ndim == 3 and image.shape[2] == 3 and image.dtype == 'uint8' - - if mask is None: - mask = (image == (255, 255, 255)).all(axis=2, keepdims=True).astype('uint8') - mask = np.ascontiguousarray(mask) - else: - mask = _canonize_mask_array(mask) - - if global_mask is None: - ret_pymat = PMLIB.PM_inpaint(np_to_pymat(image), np_to_pymat(mask), ctypes.c_int(patch_size)) - else: - global_mask = _canonize_mask_array(global_mask) - ret_pymat = PMLIB.PM_inpaint2(np_to_pymat(image), np_to_pymat(mask), np_to_pymat(global_mask), ctypes.c_int(patch_size)) - - ret_npmat = pymat_to_np(ret_pymat) - PMLIB.PM_free_pymat(ret_pymat) - - return ret_npmat - - -def inpaint_regularity( - image: Union[np.ndarray, Image.Image], - mask: Optional[Union[np.ndarray, Image.Image]], - ijmap: np.ndarray, - *, - global_mask: Optional[Union[np.ndarray, Image.Image]] = None, - patch_size: int = 15, guide_weight: float = 0.25 -) -> np.ndarray: - if isinstance(image, Image.Image): - image = np.array(image) - image = np.ascontiguousarray(image) - - assert isinstance(ijmap, np.ndarray) and ijmap.ndim == 3 and ijmap.shape[2] == 3 and ijmap.dtype == 'float32' - ijmap = np.ascontiguousarray(ijmap) - - assert image.ndim == 3 and image.shape[2] == 3 and image.dtype == 'uint8' - if mask is None: - mask = (image == (255, 255, 255)).all(axis=2, keepdims=True).astype('uint8') - mask = np.ascontiguousarray(mask) - else: - mask = _canonize_mask_array(mask) - - - if global_mask is None: - ret_pymat = PMLIB.PM_inpaint_regularity(np_to_pymat(image), np_to_pymat(mask), np_to_pymat(ijmap), ctypes.c_int(patch_size), ctypes.c_float(guide_weight)) - else: - global_mask = _canonize_mask_array(global_mask) - ret_pymat = PMLIB.PM_inpaint2_regularity(np_to_pymat(image), np_to_pymat(mask), np_to_pymat(global_mask), np_to_pymat(ijmap), ctypes.c_int(patch_size), ctypes.c_float(guide_weight)) - - ret_npmat = pymat_to_np(ret_pymat) - PMLIB.PM_free_pymat(ret_pymat) - - return ret_npmat - - -def _canonize_mask_array(mask): - if isinstance(mask, Image.Image): - mask = np.array(mask) - if mask.ndim == 2 and mask.dtype == 'uint8': - mask = mask[..., np.newaxis] - assert mask.ndim == 3 and mask.shape[2] == 1 and mask.dtype == 'uint8' - return np.ascontiguousarray(mask) - - -dtype_pymat_to_ctypes = [ - ctypes.c_uint8, - ctypes.c_int8, - ctypes.c_uint16, - ctypes.c_int16, - ctypes.c_int32, - ctypes.c_float, - ctypes.c_double, -] - - -dtype_np_to_pymat = { - 'uint8': 0, - 'int8': 1, - 'uint16': 2, - 'int16': 3, - 'int32': 4, - 'float32': 5, - 'float64': 6, -} - - -def np_to_pymat(npmat): - assert npmat.ndim == 3 - return CMatT( - ctypes.cast(npmat.ctypes.data, ctypes.c_void_p), - CShapeT(npmat.shape[1], npmat.shape[0], npmat.shape[2]), - dtype_np_to_pymat[str(npmat.dtype)] - ) - - -def pymat_to_np(pymat): - npmat = np.ctypeslib.as_array( - ctypes.cast(pymat.data_ptr, ctypes.POINTER(dtype_pymat_to_ctypes[pymat.dtype])), - (pymat.shape.height, pymat.shape.width, pymat.shape.channels) - ) - ret = np.empty(npmat.shape, npmat.dtype) - ret[:] = npmat - return ret - diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/traintest_scripts/train_test_multi_task_indistribution_bn.sh b/spaces/Gen-Sim/Gen-Sim/scripts/traintest_scripts/train_test_multi_task_indistribution_bn.sh deleted file mode 100644 index 431482e3f11169b372e96f1e361479c922fa5060..0000000000000000000000000000000000000000 --- 
a/spaces/Gen-Sim/Gen-Sim/scripts/traintest_scripts/train_test_multi_task_indistribution_bn.sh +++ /dev/null @@ -1,62 +0,0 @@ -#!/bin/bash - -DATA_DIR=$1 -TRAINTASK=${2-'[rainbow-stack,bowl-ball-placement]'} -TASKNAME=${3-'mix-two'} -STEPS=${4-'20000'} - -DISP=False - -echo "Training multi-task dataset... Folder: $DATA_DIR Task $TASK" -trap "kill 0" SIGINT -# You can parallelize these depending on how much resources you have - -############################# -## Language-Conditioned Tasks -# [align-rope,assembling-kits-seq-seen-colors,assembling-kits-seq-unseen-colors,packing-shapes] - - -# TRAIN -python cliport/train.py train.task=$TRAINTASK \ - train.agent=cliport \ - train.model_task=$TASKNAME \ - train.attn_stream_fusion_type=add \ - train.trans_stream_fusion_type=conv \ - train.lang_fusion_type=mult \ - train.n_demos=200 \ - train.n_steps=${STEPS} \ - dataset.cache=True \ - train.exp_folder=exps/exp-$TASKNAME \ - dataset.type=multi \ - train.load_from_last_ckpt=False \ - train.batchnorm=True - -# Convert Python list to Bash array -bash_array=$(python3 -c "import sys; print(' '.join((sys.argv[1])[1:-1].split(',')))" "$TRAINTASK") - -# Convert the space-separated string to a bash array -echo "Testing multi-task dataset... Folder: $DATA_DIR Task $TASK" - - -for task in $bash_array - do - echo "Testing $task" - # TEST - bash scripts/generate_gpt_datasets.sh data $task - - python cliport/eval.py model_task=$TASKNAME \ - eval_task=$task \ - agent=cliport \ - mode=test \ - n_demos=100 \ - train_demos=200 \ - checkpoint_type=test_best \ - type=single \ - exp_folder=exps/exp-$TASKNAME \ - update_results=True \ - train.batchnorm=True & - done -wait - -python notebooks/print_results.py -r=exps/exp-$TASKNAME -echo "Finished Training." \ No newline at end of file diff --git a/spaces/GilbertClaus/VideoCutter/megaDL.py b/spaces/GilbertClaus/VideoCutter/megaDL.py deleted file mode 100644 index cb7d02a50155e7d9d45e8d277197ca1edb639a31..0000000000000000000000000000000000000000 --- a/spaces/GilbertClaus/VideoCutter/megaDL.py +++ /dev/null @@ -1,27 +0,0 @@ -import os -import shutil -from mega import Mega -from others import * - -def download_mega(name, directory, url): - if not os.path.exists(directory): - os.makedirs(directory) - - mega = Mega() - m = mega.login() - - # Download the file to a temporary location - file = m.download_url(url, dest_filename=name) - - # Rename the file and move it to the specified directory - filename = os.path.join(directory, file) - shutil.move(file, filename) - - return filename - -def mega_dl(url, judul): - judul = judul + '.mp4' - download = '/home/user/app/Mega' - filename = download_mega(judul, download, url) - output_file = convert_videos(720, download) - return output_file diff --git a/spaces/GooglyBlox/DalleFork/index.html b/spaces/GooglyBlox/DalleFork/index.html deleted file mode 100644 index 74d65ba18bf356ce52b1d00b0e7c1903d5e285f2..0000000000000000000000000000000000000000 --- a/spaces/GooglyBlox/DalleFork/index.html +++ /dev/null @@ -1,64 +0,0 @@ - - - - - - - - - - - - - - - - - - - - -
    - - - diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/dcn/cascade_rcnn_r101_fpn_dconv_c3-c5_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/dcn/cascade_rcnn_r101_fpn_dconv_c3-c5_1x_coco.py deleted file mode 100644 index a01df33c94e1f8b5f51a51a780b30a77ce99b2c0..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/dcn/cascade_rcnn_r101_fpn_dconv_c3-c5_1x_coco.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = '../cascade_rcnn/cascade_rcnn_r101_fpn_1x_coco.py' -model = dict( - backbone=dict( - dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False), - stage_with_dcn=(False, True, True, True))) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/hrnet/fcos_hrnetv2p_w18_gn-head_4x4_2x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/hrnet/fcos_hrnetv2p_w18_gn-head_4x4_2x_coco.py deleted file mode 100644 index 34975959f27f0ef8b985ab7d2857c7f2d70e47ae..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/hrnet/fcos_hrnetv2p_w18_gn-head_4x4_2x_coco.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './fcos_hrnetv2p_w18_gn-head_4x4_1x_coco.py' -# learning policy -lr_config = dict(step=[16, 22]) -runner = dict(type='EpochBasedRunner', max_epochs=24) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/legacy_1.x/retinanet_r50_caffe_fpn_1x_coco_v1.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/legacy_1.x/retinanet_r50_caffe_fpn_1x_coco_v1.py deleted file mode 100644 index ef9392f7e351f489d6d9e97936925b6a16d1212e..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/legacy_1.x/retinanet_r50_caffe_fpn_1x_coco_v1.py +++ /dev/null @@ -1,37 +0,0 @@ -_base_ = './retinanet_r50_fpn_1x_coco_v1.py' -model = dict( - pretrained='open-mmlab://detectron/resnet50_caffe', - backbone=dict( - norm_cfg=dict(requires_grad=False), norm_eval=True, style='caffe')) -# use caffe img_norm -img_norm_cfg = dict( - mean=[102.9801, 115.9465, 122.7717], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_512x1024_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_512x1024_40k_cityscapes.py deleted file mode 100644 index 8c707c79d659bc544d242352bcb29686eb40b004..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_512x1024_40k_cityscapes.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = 
'./deeplabv3_r50-d8_512x1024_40k_cityscapes.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dmnet/dmnet_r101-d8_769x769_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dmnet/dmnet_r101-d8_769x769_80k_cityscapes.py deleted file mode 100644 index f36d490e9c9b31de7eedf735d2712e55f35db998..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dmnet/dmnet_r101-d8_769x769_80k_cityscapes.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './dmnet_r50-d8_769x769_80k_cityscapes.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/datasets/pipelines/formating.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/datasets/pipelines/formating.py deleted file mode 100644 index 34061c1dd160d4b00aac8dbdc82dccf5c3883ce8..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/datasets/pipelines/formating.py +++ /dev/null @@ -1,288 +0,0 @@ -from collections.abc import Sequence - -import mmcv -import numpy as np -import torch -from mmcv.parallel import DataContainer as DC - -from ..builder import PIPELINES - - -def to_tensor(data): - """Convert objects of various python types to :obj:`torch.Tensor`. - - Supported types are: :class:`numpy.ndarray`, :class:`torch.Tensor`, - :class:`Sequence`, :class:`int` and :class:`float`. - - Args: - data (torch.Tensor | numpy.ndarray | Sequence | int | float): Data to - be converted. - """ - - if isinstance(data, torch.Tensor): - return data - elif isinstance(data, np.ndarray): - return torch.from_numpy(data) - elif isinstance(data, Sequence) and not mmcv.is_str(data): - return torch.tensor(data) - elif isinstance(data, int): - return torch.LongTensor([data]) - elif isinstance(data, float): - return torch.FloatTensor([data]) - else: - raise TypeError(f'type {type(data)} cannot be converted to tensor.') - - -@PIPELINES.register_module() -class ToTensor(object): - """Convert some results to :obj:`torch.Tensor` by given keys. - - Args: - keys (Sequence[str]): Keys that need to be converted to Tensor. - """ - - def __init__(self, keys): - self.keys = keys - - def __call__(self, results): - """Call function to convert data in results to :obj:`torch.Tensor`. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data converted - to :obj:`torch.Tensor`. - """ - - for key in self.keys: - results[key] = to_tensor(results[key]) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(keys={self.keys})' - - -@PIPELINES.register_module() -class ImageToTensor(object): - """Convert image to :obj:`torch.Tensor` by given keys. - - The dimension order of input image is (H, W, C). The pipeline will convert - it to (C, H, W). If only 2 dimension (H, W) is given, the output would be - (1, H, W). - - Args: - keys (Sequence[str]): Key of images to be converted to Tensor. - """ - - def __init__(self, keys): - self.keys = keys - - def __call__(self, results): - """Call function to convert image in results to :obj:`torch.Tensor` and - transpose the channel order. - - Args: - results (dict): Result dict contains the image data to convert. - - Returns: - dict: The result dict contains the image converted - to :obj:`torch.Tensor` and transposed to (C, H, W) order. 
- """ - - for key in self.keys: - img = results[key] - if len(img.shape) < 3: - img = np.expand_dims(img, -1) - results[key] = to_tensor(img.transpose(2, 0, 1)) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(keys={self.keys})' - - -@PIPELINES.register_module() -class Transpose(object): - """Transpose some results by given keys. - - Args: - keys (Sequence[str]): Keys of results to be transposed. - order (Sequence[int]): Order of transpose. - """ - - def __init__(self, keys, order): - self.keys = keys - self.order = order - - def __call__(self, results): - """Call function to convert image in results to :obj:`torch.Tensor` and - transpose the channel order. - - Args: - results (dict): Result dict contains the image data to convert. - - Returns: - dict: The result dict contains the image converted - to :obj:`torch.Tensor` and transposed to (C, H, W) order. - """ - - for key in self.keys: - results[key] = results[key].transpose(self.order) - return results - - def __repr__(self): - return self.__class__.__name__ + \ - f'(keys={self.keys}, order={self.order})' - - -@PIPELINES.register_module() -class ToDataContainer(object): - """Convert results to :obj:`mmcv.DataContainer` by given fields. - - Args: - fields (Sequence[dict]): Each field is a dict like - ``dict(key='xxx', **kwargs)``. The ``key`` in result will - be converted to :obj:`mmcv.DataContainer` with ``**kwargs``. - Default: ``(dict(key='img', stack=True), - dict(key='gt_semantic_seg'))``. - """ - - def __init__(self, - fields=(dict(key='img', - stack=True), dict(key='gt_semantic_seg'))): - self.fields = fields - - def __call__(self, results): - """Call function to convert data in results to - :obj:`mmcv.DataContainer`. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data converted to - :obj:`mmcv.DataContainer`. - """ - - for field in self.fields: - field = field.copy() - key = field.pop('key') - results[key] = DC(results[key], **field) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(fields={self.fields})' - - -@PIPELINES.register_module() -class DefaultFormatBundle(object): - """Default formatting bundle. - - It simplifies the pipeline of formatting common fields, including "img" - and "gt_semantic_seg". These fields are formatted as follows. - - - img: (1)transpose, (2)to tensor, (3)to DataContainer (stack=True) - - gt_semantic_seg: (1)unsqueeze dim-0 (2)to tensor, - (3)to DataContainer (stack=True) - """ - - def __call__(self, results): - """Call function to transform and format common fields in results. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data that is formatted with - default bundle. - """ - - if 'img' in results: - img = results['img'] - if len(img.shape) < 3: - img = np.expand_dims(img, -1) - img = np.ascontiguousarray(img.transpose(2, 0, 1)) - results['img'] = DC(to_tensor(img), stack=True) - if 'gt_semantic_seg' in results: - # convert to long - results['gt_semantic_seg'] = DC( - to_tensor(results['gt_semantic_seg'][None, - ...].astype(np.int64)), - stack=True) - return results - - def __repr__(self): - return self.__class__.__name__ - - -@PIPELINES.register_module() -class Collect(object): - """Collect data from the loader relevant to the specific task. - - This is usually the last stage of the data loader pipeline. Typically keys - is set to some subset of "img", "gt_semantic_seg". 
- - The "img_meta" item is always populated. The contents of the "img_meta" - dictionary depends on "meta_keys". By default this includes: - - - "img_shape": shape of the image input to the network as a tuple - (h, w, c). Note that images may be zero padded on the bottom/right - if the batch tensor is larger than this shape. - - - "scale_factor": a float indicating the preprocessing scale - - - "flip": a boolean indicating if image flip transform was used - - - "filename": path to the image file - - - "ori_shape": original shape of the image as a tuple (h, w, c) - - - "pad_shape": image shape after padding - - - "img_norm_cfg": a dict of normalization information: - - mean - per channel mean subtraction - - std - per channel std divisor - - to_rgb - bool indicating if bgr was converted to rgb - - Args: - keys (Sequence[str]): Keys of results to be collected in ``data``. - meta_keys (Sequence[str], optional): Meta keys to be converted to - ``mmcv.DataContainer`` and collected in ``data[img_metas]``. - Default: ``('filename', 'ori_filename', 'ori_shape', 'img_shape', - 'pad_shape', 'scale_factor', 'flip', 'flip_direction', - 'img_norm_cfg')`` - """ - - def __init__(self, - keys, - meta_keys=('filename', 'ori_filename', 'ori_shape', - 'img_shape', 'pad_shape', 'scale_factor', 'flip', - 'flip_direction', 'img_norm_cfg')): - self.keys = keys - self.meta_keys = meta_keys - - def __call__(self, results): - """Call function to collect keys in results. The keys in ``meta_keys`` - will be converted to :obj:mmcv.DataContainer. - - Args: - results (dict): Result dict contains the data to collect. - - Returns: - dict: The result dict contains the following keys - - keys in``self.keys`` - - ``img_metas`` - """ - - data = {} - img_meta = {} - for key in self.meta_keys: - img_meta[key] = results[key] - data['img_metas'] = DC(img_meta, cpu_only=True) - for key in self.keys: - data[key] = results[key] - return data - - def __repr__(self): - return self.__class__.__name__ + \ - f'(keys={self.keys}, meta_keys={self.meta_keys})' diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/w2l_decoder.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/w2l_decoder.py deleted file mode 100644 index fbf2d3524ee40bd0d08b6a9560047d96e49b6045..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/w2l_decoder.py +++ /dev/null @@ -1,486 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Flashlight decoders. 
-""" - -import gc -import itertools as it -import os.path as osp -from typing import List -import warnings -from collections import deque, namedtuple - -import numpy as np -import torch -from examples.speech_recognition.data.replabels import unpack_replabels -from fairseq import tasks -from fairseq.utils import apply_to_sample -from omegaconf import open_dict -from fairseq.dataclass.utils import convert_namespace_to_omegaconf - - -try: - from flashlight.lib.text.dictionary import create_word_dict, load_words - from flashlight.lib.sequence.criterion import CpuViterbiPath, get_data_ptr_as_bytes - from flashlight.lib.text.decoder import ( - CriterionType, - LexiconDecoderOptions, - KenLM, - LM, - LMState, - SmearingMode, - Trie, - LexiconDecoder, - ) -except: - warnings.warn( - "flashlight python bindings are required to use this functionality. Please install from https://github.com/facebookresearch/flashlight/tree/master/bindings/python" - ) - LM = object - LMState = object - - -class W2lDecoder(object): - def __init__(self, args, tgt_dict): - self.tgt_dict = tgt_dict - self.vocab_size = len(tgt_dict) - self.nbest = args.nbest - - # criterion-specific init - self.criterion_type = CriterionType.CTC - self.blank = ( - tgt_dict.index("") - if "" in tgt_dict.indices - else tgt_dict.bos() - ) - if "" in tgt_dict.indices: - self.silence = tgt_dict.index("") - elif "|" in tgt_dict.indices: - self.silence = tgt_dict.index("|") - else: - self.silence = tgt_dict.eos() - self.asg_transitions = None - - def generate(self, models, sample, **unused): - """Generate a batch of inferences.""" - # model.forward normally channels prev_output_tokens into the decoder - # separately, but SequenceGenerator directly calls model.encoder - encoder_input = { - k: v for k, v in sample["net_input"].items() if k != "prev_output_tokens" - } - emissions = self.get_emissions(models, encoder_input) - return self.decode(emissions) - - def get_emissions(self, models, encoder_input): - """Run encoder and normalize emissions""" - model = models[0] - encoder_out = model(**encoder_input) - if hasattr(model, "get_logits"): - emissions = model.get_logits(encoder_out) # no need to normalize emissions - else: - emissions = model.get_normalized_probs(encoder_out, log_probs=True) - return emissions.transpose(0, 1).float().cpu().contiguous() - - def get_tokens(self, idxs): - """Normalize tokens by handling CTC blank, ASG replabels, etc.""" - idxs = (g[0] for g in it.groupby(idxs)) - idxs = filter(lambda x: x != self.blank, idxs) - return torch.LongTensor(list(idxs)) - - -class W2lViterbiDecoder(W2lDecoder): - def __init__(self, args, tgt_dict): - super().__init__(args, tgt_dict) - - def decode(self, emissions): - B, T, N = emissions.size() - hypos = [] - if self.asg_transitions is None: - transitions = torch.FloatTensor(N, N).zero_() - else: - transitions = torch.FloatTensor(self.asg_transitions).view(N, N) - viterbi_path = torch.IntTensor(B, T) - workspace = torch.ByteTensor(CpuViterbiPath.get_workspace_size(B, T, N)) - CpuViterbiPath.compute( - B, - T, - N, - get_data_ptr_as_bytes(emissions), - get_data_ptr_as_bytes(transitions), - get_data_ptr_as_bytes(viterbi_path), - get_data_ptr_as_bytes(workspace), - ) - return [ - [{"tokens": self.get_tokens(viterbi_path[b].tolist()), "score": 0}] - for b in range(B) - ] - - -class W2lKenLMDecoder(W2lDecoder): - def __init__(self, args, tgt_dict): - super().__init__(args, tgt_dict) - - self.unit_lm = getattr(args, "unit_lm", False) - - if args.lexicon: - self.lexicon = load_words(args.lexicon) - 
self.word_dict = create_word_dict(self.lexicon) - self.unk_word = self.word_dict.get_index("") - - self.lm = KenLM(args.kenlm_model, self.word_dict) - self.trie = Trie(self.vocab_size, self.silence) - - start_state = self.lm.start(False) - for i, (word, spellings) in enumerate(self.lexicon.items()): - word_idx = self.word_dict.get_index(word) - _, score = self.lm.score(start_state, word_idx) - for spelling in spellings: - spelling_idxs = [tgt_dict.index(token) for token in spelling] - assert ( - tgt_dict.unk() not in spelling_idxs - ), f"{spelling} {spelling_idxs}" - self.trie.insert(spelling_idxs, word_idx, score) - self.trie.smear(SmearingMode.MAX) - - self.decoder_opts = LexiconDecoderOptions( - beam_size=args.beam, - beam_size_token=int(getattr(args, "beam_size_token", len(tgt_dict))), - beam_threshold=args.beam_threshold, - lm_weight=args.lm_weight, - word_score=args.word_score, - unk_score=args.unk_weight, - sil_score=args.sil_weight, - log_add=False, - criterion_type=self.criterion_type, - ) - - if self.asg_transitions is None: - N = 768 - # self.asg_transitions = torch.FloatTensor(N, N).zero_() - self.asg_transitions = [] - - self.decoder = LexiconDecoder( - self.decoder_opts, - self.trie, - self.lm, - self.silence, - self.blank, - self.unk_word, - self.asg_transitions, - self.unit_lm, - ) - else: - assert args.unit_lm, "lexicon free decoding can only be done with a unit language model" - from flashlight.lib.text.decoder import LexiconFreeDecoder, LexiconFreeDecoderOptions - - d = {w: [[w]] for w in tgt_dict.symbols} - self.word_dict = create_word_dict(d) - self.lm = KenLM(args.kenlm_model, self.word_dict) - self.decoder_opts = LexiconFreeDecoderOptions( - beam_size=args.beam, - beam_size_token=int(getattr(args, "beam_size_token", len(tgt_dict))), - beam_threshold=args.beam_threshold, - lm_weight=args.lm_weight, - sil_score=args.sil_weight, - log_add=False, - criterion_type=self.criterion_type, - ) - self.decoder = LexiconFreeDecoder( - self.decoder_opts, self.lm, self.silence, self.blank, [] - ) - - def get_timesteps(self, token_idxs: List[int]) -> List[int]: - """Returns frame numbers corresponding to every non-blank token. - - Parameters - ---------- - token_idxs : List[int] - IDs of decoded tokens. - - Returns - ------- - List[int] - Frame numbers corresponding to every non-blank token. 
- """ - timesteps = [] - for i, token_idx in enumerate(token_idxs): - if token_idx == self.blank: - continue - if i == 0 or token_idx != token_idxs[i-1]: - timesteps.append(i) - return timesteps - - def decode(self, emissions): - B, T, N = emissions.size() - hypos = [] - for b in range(B): - emissions_ptr = emissions.data_ptr() + 4 * b * emissions.stride(0) - results = self.decoder.decode(emissions_ptr, T, N) - - nbest_results = results[: self.nbest] - hypos.append( - [ - { - "tokens": self.get_tokens(result.tokens), - "score": result.score, - "timesteps": self.get_timesteps(result.tokens), - "words": [ - self.word_dict.get_entry(x) for x in result.words if x >= 0 - ], - } - for result in nbest_results - ] - ) - return hypos - - -FairseqLMState = namedtuple("FairseqLMState", ["prefix", "incremental_state", "probs"]) - - -class FairseqLM(LM): - def __init__(self, dictionary, model): - LM.__init__(self) - self.dictionary = dictionary - self.model = model - self.unk = self.dictionary.unk() - - self.save_incremental = False # this currently does not work properly - self.max_cache = 20_000 - - model.cuda() - model.eval() - model.make_generation_fast_() - - self.states = {} - self.stateq = deque() - - def start(self, start_with_nothing): - state = LMState() - prefix = torch.LongTensor([[self.dictionary.eos()]]) - incremental_state = {} if self.save_incremental else None - with torch.no_grad(): - res = self.model(prefix.cuda(), incremental_state=incremental_state) - probs = self.model.get_normalized_probs(res, log_probs=True, sample=None) - - if incremental_state is not None: - incremental_state = apply_to_sample(lambda x: x.cpu(), incremental_state) - self.states[state] = FairseqLMState( - prefix.numpy(), incremental_state, probs[0, -1].cpu().numpy() - ) - self.stateq.append(state) - - return state - - def score(self, state: LMState, token_index: int, no_cache: bool = False): - """ - Evaluate language model based on the current lm state and new word - Parameters: - ----------- - state: current lm state - token_index: index of the word - (can be lexicon index then you should store inside LM the - mapping between indices of lexicon and lm, or lm index of a word) - - Returns: - -------- - (LMState, float): pair of (new state, score for the current word) - """ - curr_state = self.states[state] - - def trim_cache(targ_size): - while len(self.stateq) > targ_size: - rem_k = self.stateq.popleft() - rem_st = self.states[rem_k] - rem_st = FairseqLMState(rem_st.prefix, None, None) - self.states[rem_k] = rem_st - - if curr_state.probs is None: - new_incremental_state = ( - curr_state.incremental_state.copy() - if curr_state.incremental_state is not None - else None - ) - with torch.no_grad(): - if new_incremental_state is not None: - new_incremental_state = apply_to_sample( - lambda x: x.cuda(), new_incremental_state - ) - elif self.save_incremental: - new_incremental_state = {} - - res = self.model( - torch.from_numpy(curr_state.prefix).cuda(), - incremental_state=new_incremental_state, - ) - probs = self.model.get_normalized_probs( - res, log_probs=True, sample=None - ) - - if new_incremental_state is not None: - new_incremental_state = apply_to_sample( - lambda x: x.cpu(), new_incremental_state - ) - - curr_state = FairseqLMState( - curr_state.prefix, new_incremental_state, probs[0, -1].cpu().numpy() - ) - - if not no_cache: - self.states[state] = curr_state - self.stateq.append(state) - - score = curr_state.probs[token_index].item() - - trim_cache(self.max_cache) - - outstate = state.child(token_index) 
- if outstate not in self.states and not no_cache: - prefix = np.concatenate( - [curr_state.prefix, torch.LongTensor([[token_index]])], -1 - ) - incr_state = curr_state.incremental_state - - self.states[outstate] = FairseqLMState(prefix, incr_state, None) - - if token_index == self.unk: - score = float("-inf") - - return outstate, score - - def finish(self, state: LMState): - """ - Evaluate eos for language model based on the current lm state - - Returns: - -------- - (LMState, float): pair of (new state, score for the current word) - """ - return self.score(state, self.dictionary.eos()) - - def empty_cache(self): - self.states = {} - self.stateq = deque() - gc.collect() - - -class W2lFairseqLMDecoder(W2lDecoder): - def __init__(self, args, tgt_dict): - super().__init__(args, tgt_dict) - - self.unit_lm = getattr(args, "unit_lm", False) - - self.lexicon = load_words(args.lexicon) if args.lexicon else None - self.idx_to_wrd = {} - - checkpoint = torch.load(args.kenlm_model, map_location="cpu") - - if "cfg" in checkpoint and checkpoint["cfg"] is not None: - lm_args = checkpoint["cfg"] - else: - lm_args = convert_namespace_to_omegaconf(checkpoint["args"]) - - with open_dict(lm_args.task): - lm_args.task.data = osp.dirname(args.kenlm_model) - - task = tasks.setup_task(lm_args.task) - model = task.build_model(lm_args.model) - model.load_state_dict(checkpoint["model"], strict=False) - - self.trie = Trie(self.vocab_size, self.silence) - - self.word_dict = task.dictionary - self.unk_word = self.word_dict.unk() - self.lm = FairseqLM(self.word_dict, model) - - if self.lexicon: - start_state = self.lm.start(False) - for i, (word, spellings) in enumerate(self.lexicon.items()): - if self.unit_lm: - word_idx = i - self.idx_to_wrd[i] = word - score = 0 - else: - word_idx = self.word_dict.index(word) - _, score = self.lm.score(start_state, word_idx, no_cache=True) - - for spelling in spellings: - spelling_idxs = [tgt_dict.index(token) for token in spelling] - assert ( - tgt_dict.unk() not in spelling_idxs - ), f"{spelling} {spelling_idxs}" - self.trie.insert(spelling_idxs, word_idx, score) - self.trie.smear(SmearingMode.MAX) - - self.decoder_opts = LexiconDecoderOptions( - beam_size=args.beam, - beam_size_token=int(getattr(args, "beam_size_token", len(tgt_dict))), - beam_threshold=args.beam_threshold, - lm_weight=args.lm_weight, - word_score=args.word_score, - unk_score=args.unk_weight, - sil_score=args.sil_weight, - log_add=False, - criterion_type=self.criterion_type, - ) - - self.decoder = LexiconDecoder( - self.decoder_opts, - self.trie, - self.lm, - self.silence, - self.blank, - self.unk_word, - [], - self.unit_lm, - ) - else: - assert args.unit_lm, "lexicon free decoding can only be done with a unit language model" - from flashlight.lib.text.decoder import LexiconFreeDecoder, LexiconFreeDecoderOptions - - d = {w: [[w]] for w in tgt_dict.symbols} - self.word_dict = create_word_dict(d) - self.lm = KenLM(args.kenlm_model, self.word_dict) - self.decoder_opts = LexiconFreeDecoderOptions( - beam_size=args.beam, - beam_size_token=int(getattr(args, "beam_size_token", len(tgt_dict))), - beam_threshold=args.beam_threshold, - lm_weight=args.lm_weight, - sil_score=args.sil_weight, - log_add=False, - criterion_type=self.criterion_type, - ) - self.decoder = LexiconFreeDecoder( - self.decoder_opts, self.lm, self.silence, self.blank, [] - ) - - def decode(self, emissions): - B, T, N = emissions.size() - hypos = [] - - def idx_to_word(idx): - if self.unit_lm: - return self.idx_to_wrd[idx] - else: - return 
self.word_dict[idx] - - def make_hypo(result): - hypo = {"tokens": self.get_tokens(result.tokens), "score": result.score} - if self.lexicon: - hypo["words"] = [idx_to_word(x) for x in result.words if x >= 0] - return hypo - - for b in range(B): - emissions_ptr = emissions.data_ptr() + 4 * b * emissions.stride(0) - results = self.decoder.decode(emissions_ptr, T, N) - - nbest_results = results[: self.nbest] - hypos.append([make_hypo(result) for result in nbest_results]) - self.lm.empty_cache() - - return hypos diff --git a/spaces/HarshulNanda/HARM_ML_App_ludwig/app.py b/spaces/HarshulNanda/HARM_ML_App_ludwig/app.py deleted file mode 100644 index 83a8e961360b93653e16dda96a641b5d9e112285..0000000000000000000000000000000000000000 --- a/spaces/HarshulNanda/HARM_ML_App_ludwig/app.py +++ /dev/null @@ -1,340 +0,0 @@ -from matplotlib import pyplot as plt -from pytube import YouTube -from streamlit_player import st_player -from bokeh.models.widgets import Div -from youtube_dl import YoutubeDL -from stqdm import stqdm -from PIL import Image -from io import BytesIO - -from colors import colorOf -from categoryPredictor import predictCategoryFor -from statsViewer import generate_channel_video_data -from eduContentPredictor import eduContentPrediction -from youtubesearchpython import Video, ResultMode, VideosSearch, Playlist, ChannelsSearch - -import streamlit as st -import base64 -import pandas as pd -import chime -import pytube -import toml -import webbrowser -import numpy as np -import youtube_dl - -st.set_page_config(page_title="HARM Bot", page_icon=Image.open("./assets/harmLogo.ico")) -# primaryColor = toml.load(".streamlit/config.toml")['theme']['primaryColor'] -s = f""" - - """ - st.markdown(hideStreamlitStyle, unsafe_allow_html=True) - -# MARK: Adding the sidebar menu -def add_sidebar_menu(): - with st.sidebar: - - st.markdown(''' -

    By HARM, an intern team that aims to expand the world of AI by providing a useful feature.

    - ''', True) - - st.markdown("### Team Members ") - - if st.button('Harshul Nanda'): - js = "window.open('https://www.linkedin.com/in/harshulnanda/')" - html = ''.format(js) - div = Div(text=html) - st.bokeh_chart(div) - - if st.button('Abhijeet Saroha'): - js = "window.open('https://www.linkedin.com/in/abhijeet-saroha-a19031229/')" - html = ''.format(js) - div = Div(text=html) - st.bokeh_chart(div) - - if st.button('Rishabh Sagar'): - js = "window.open('https://www.linkedin.com/in/rishabh-sagar-1b0b74229/')" - html = ''.format(js) - div = Div(text=html) - st.bokeh_chart(div) - - if st.button('Mayank Arora'): - js = "window.open('https://www.linkedin.com/in/mayank-arora-24713322a/')" - html = ''.format(js) - div = Div(text=html) - st.bokeh_chart(div) - - st.markdown("### Contact us ") - - if st.button('Github'): - js = "window.open('https://github.com/Harshul-18')" - html = ''.format(js) - div = Div(text=html) - st.bokeh_chart(div) - # webbrowser.open_new_tab('https://github.com/Harshul-18') - - if st.button('LinkedIn'): - js = "window.open('https://www.linkedin.com/company/82157293/admin/')" - html = ''.format(js) - div = Div(text=html) - st.bokeh_chart(div) - # webbrowser.open_new_tab('https://www.linkedin.com/company/82157293/admin/') - - # path = "https://www.buymeacoffee.com/widget/page/HARMBOT?description=Support%20me%20on%20Buy%20me%20a%20coffee!&color=%235F7FF" - # if st.button("Buy us a coffee"): - # webbrowser.open_new_tab(path) - - st.markdown("""Buy Us A Coffee""", unsafe_allow_html=True) - - page_bg_img = """ - - """ - st.markdown(page_bg_img, unsafe_allow_html=True) - -# MARK: Adding the HARM logo gif -def add_image(with_path): - file_ = open(with_path, "rb") - - contents = file_.read() - data_url = base64.b64encode(contents).decode("utf-8") - file_.close() - st.markdown( - f'
    harmLogo
    ', - unsafe_allow_html=True, - ) - -# MARK: Adding the title -def add_title_text(): - st.title("Hello, I am a YouTube API Bot!") - st.text("I am a simple tool, just enter the URL and I will give the statistics.") - -# MARK: Adding body for page 1 containing all the fields while the youtube video url text input field is not empty -def bodyOfPage1(): - youtubeVideoUrl = st.text_input("Enter the URL of the Youtube Video", value="", type="default", help="Enter the URL of the Youtube video you want me to show the statistics and predict the category for.") - - try: - if youtubeVideoUrl: - video = Video.getInfo(youtubeVideoUrl, mode=ResultMode.json) - - with st.expander("Prediction"): - - isEdu, isCat, catArr, probArr = predictCategoryFor(url=youtubeVideoUrl) - if isEdu == "Educational": - st.markdown( - f"
    This video comes under the {isCat} category.
    ", - unsafe_allow_html=True, - ) - plt.figure(facecolor="#ffffff") - fig, x = plt.subplots(facecolor="#ffffff") - p = x.barh([i for i in range(1, len(catArr)+1)], probArr, tick_label=catArr, color="#E11D48") - x.set_facecolor("#ffffff") - x.spines['bottom'].set_color('black') - x.spines['top'].set_color('black') - x.spines['right'].set_color('black') - x.spines['left'].set_color('black') - x.tick_params(axis='x', colors='black') - x.tick_params(axis='y', colors='black') - x.bar_label(p, label_type="center", color="black") - st.pyplot(fig) - else: - st.markdown( - f"
    This is not an educational video.
    ", - unsafe_allow_html=True, - ) - - - with st.expander("View Video"): - - if (youtubeVideoUrl is None or len(youtubeVideoUrl) == 0): - print(colorOf.FAIL + "The url input field is empty, please enter a youtube video url." + colorOf.ENDC) - chime.error() - - st_player(youtubeVideoUrl) - - try: - st.markdown("**Author of this video:** " + str(video["channel"]["name"])) - st.markdown("**Title of video:** " + str(video["title"])) - st.markdown("**Description of video:** " + str(video["description"])) - chime.success() - except Exception as e: - print(colorOf.FAIL + f"Unable to view the video details. {e}" + colorOf.ENDC) - chime.error() - - except Exception as e: - st.markdown(f"{e}, Please enter the correct video URL") - -# MARK: Adding body for page 2 containing the fields for channel's statistics -def bodyOfPage2(): - youtubeChannelUrl = st.text_input("Enter the Video URL to get the stats of that channel", value="", type="default", help="Enter the URL of the Youtube Video you want me to show the data of its channel.") - # youtubeChannelUrl += "/videos" - number = st.number_input('How many videos to analyse?', min_value=5, step=5, help="Enter the number or click the + or - buttons to increase or decrease the number with step size 5 for getting the data for the number of videos you entered.") - if len(youtubeChannelUrl) >= 1: - try: - with st.expander("View Statistics"): - generate_channel_video_data(of_channel=youtubeChannelUrl, with_number_of_videos=number) - except Exception as e: - st.markdown(f"{e}, Please enter the correct channel ID") - -# MARK: Adding body for page 3 containing the fields for searching a video from youtube -def bodyOfPage3(): - searchFor = st.text_input("Search for videos", value="", type="default", help="Enter a keyword for searching for a youtube video.") - number = st.number_input('Show search results', min_value=1, step=1, help="Enter the number or click the + or - buttons to increase or decrease the number for getting the number of videos you entered.") - - - if len(searchFor) >= 1: - videosSearch = VideosSearch(searchFor, limit=number) - - result = [video['link'] for video in videosSearch.result()['result']] - - for youtubeVideoUrl in stqdm(result): - - with st.container(): - st_player(youtubeVideoUrl) - - with st.expander("Prediction"): - - isEdu, isCat, catArr, probArr = predictCategoryFor(url=youtubeVideoUrl) - if isEdu == "Educational": - st.markdown( - f"
    This video comes under the {isCat} category.
    ", - unsafe_allow_html=True, - ) - plt.figure(facecolor="#ffffff") - fig, x = plt.subplots(facecolor="#ffffff") - p = x.barh([i for i in range(1, len(catArr)+1)], probArr, tick_label=catArr, color="#E11D48") - x.set_facecolor("#ffffff") - x.spines['bottom'].set_color('black') - x.spines['top'].set_color('black') - x.spines['right'].set_color('black') - x.spines['left'].set_color('black') - x.tick_params(axis='x', colors='black') - x.tick_params(axis='y', colors='black') - x.bar_label(p, label_type="center", color="black") - st.pyplot(fig) - else: - st.markdown( - f"
    This is not an educational video.
    ", - unsafe_allow_html=True, - ) - -# MARK: Adding body for page 4 containing the field for predicting category for videos in a playlist -def bodyOfPage4(): - playlist = st.text_input("Enter a YouTube playlist url", value="", type="default", help="Enter url of a youtube playlist.") - - if len(playlist) >= 1: - - try: - playlistVideos = Playlist.getVideos(playlist) - - for i in playlistVideos["videos"]: - url = i["link"].split("list")[0][:-1] - with st.container(): - st_player(url) - - with st.expander("Prediction"): - - isEdu, isCat, catArr, probArr = predictCategoryFor(url=url) - if isEdu == "Educational": - st.markdown( - f"
    This video comes under the {isCat} category.
    ", - unsafe_allow_html=True, - ) - plt.figure(facecolor="#ffffff") - fig, x = plt.subplots(facecolor="#ffffff") - p = x.barh([i for i in range(1, len(catArr)+1)], probArr, tick_label=catArr, color="#E11D48") - x.set_facecolor("#ffffff") - x.spines['bottom'].set_color('black') - x.spines['top'].set_color('black') - x.spines['right'].set_color('black') - x.spines['left'].set_color('black') - x.tick_params(axis='x', colors='black') - x.tick_params(axis='y', colors='black') - x.bar_label(p, label_type="center", color="black") - st.pyplot(fig) - else: - st.markdown( - f"
    This is not an educational video.
    ", - unsafe_allow_html=True, - ) - except Exception as e: - st.markdown(f"Please enter the correct URL") - -# MARK: Adding body for page 5 containing the field for predicting the educational content percentage in a video. -def bodyOfPage5(): - youtubeVideoUrl = st.text_input("Enter a Youtube Video URL", value="", type="default", help="Enter a URL of the Youtube Video you want me to tell the educational portion content in the video.") - try: - if youtubeVideoUrl: - st.markdown(f"### {eduContentPrediction(youtubeVideoUrl)}") - except: - st.markdown("Please enter a correct YouTube video URL or This video's transcripts are not available.") - -# MARK: Adding the footer -def add_footer(): - footer=""" - -""" - - st.markdown(footer, True) - -if __name__ == "__main__": - - hide_streamlit_style() - add_image(with_path="./assets/harmLogo.gif") - add_title_text() - page_names_to_funcs = { - "Category Predictor": bodyOfPage1, - "Channel Stats Viewer": bodyOfPage2, - "Search Videos": bodyOfPage3, - "Playlist Videos Predictor": bodyOfPage4, - "Educational Content in a Video": bodyOfPage5, - } - selected_page = st.sidebar.selectbox("Select the page", page_names_to_funcs.keys()) - page_names_to_funcs[selected_page]() - add_sidebar_menu() - add_footer() \ No newline at end of file diff --git a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/utils/glow/prepare_iitm_data_glow_en.py b/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/utils/glow/prepare_iitm_data_glow_en.py deleted file mode 100644 index 827bdc98f2d84090cc445d786ff8fc1e5ff3d829..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/utils/glow/prepare_iitm_data_glow_en.py +++ /dev/null @@ -1,135 +0,0 @@ -import os -from glob import glob -import re -import string -import argparse -import json -import random -random.seed(42) - -def replace_extra_chars(line): - line = line.replace("(", "").replace( - ")", "" - ) # .replace('\u200d', ' ').replace('\ufeff', ' ').replace('\u200c', ' ').replace('\u200e', ' ') - # line = line.replace('“', ' ').replace('”', ' ').replace(':', ' ') - - return line.strip() - - -def write_txt(content, filename): - with open(filename, "w+", encoding="utf-8") as f: - f.write(content) - - -def save_train_test_valid_split(annotations_txt, num_samples_valid, num_samples_test): - with open(annotations_txt, encoding="utf-8") as f: - all_lines = [line.strip() for line in f.readlines()] - test_val_indices = random.sample( - range(len(all_lines)), num_samples_valid + num_samples_test - ) - valid_ix = test_val_indices[:num_samples_valid] - test_ix = test_val_indices[num_samples_valid:] - train = [line for i, line in enumerate(all_lines) if i not in test_val_indices] - valid = [line for i, line in enumerate(all_lines) if i in valid_ix] - test = [line for i, line in enumerate(all_lines) if i in test_ix] - - print(f"Num samples in train: {len(train)}") - print(f"Num samples in valid: {len(valid)}") - print(f"Num samples in test: {len(test)}") - - out_dir_path = "/".join(annotations_txt.split("/")[:-1]) - with open(os.path.join(out_dir_path, "train.txt"), "w+", encoding="utf-8") as f: - for line in train: - print(line, file=f) - with open(os.path.join(out_dir_path, "valid.txt"), "w+", encoding="utf-8") as f: - for line in valid: - print(line, file=f) - with open(os.path.join(out_dir_path, "test.txt"), "w+", encoding="utf-8") as f: - for line in test: - print(line, file=f) - print(f"train, test and valid txts saved in {out_dir_path}") - - -def save_txts_from_txt_done_data( - text_path, - 
wav_path_for_annotations_txt, - out_path_for_txts, - num_samples_valid, - num_samples_test, -): - outfile = os.path.join(out_path_for_txts, "annotations.txt") - with open(text_path) as file: - file_lines = file.readlines() - - # print(file_lines[0]) - - file_lines = [replace_extra_chars(line) for line in file_lines] - # print(file_lines[0]) - - fnames, ftexts = [], [] - for line in file_lines: - elems = line.split('"') - fnames.append(elems[0].strip()) - ftexts.append(elems[1].strip().lower().replace('‘','\'').replace('’','\'')) - - all_chars = list(set("".join(ftexts))) - punct_with_space = [i for i in all_chars if i in list(string.punctuation)] + [" "] - chars = [i for i in all_chars if i not in punct_with_space if i.strip()] - chars = "".join(chars) - punct_with_space = "".join(punct_with_space)#.replace("'",r"\'") - - with open('../../config/glow/base_blank.json', 'r') as jfile: - json_config = json.load(jfile) - - json_config["data"]["chars"] = chars - json_config["data"]["punc"] = punct_with_space - json_config["data"]["training_files"]=out_path_for_txts + '/train.txt' - json_config["data"]["validation_files"] = out_path_for_txts + '/valid.txt' - new_config_name = out_path_for_txts.split('/')[-1] - with open(f'../../config/glow/{new_config_name}.json','w+') as jfile: - json.dump(json_config, jfile) - - print(f"Characters: {chars}") - print(f"Len of vocab: {len(chars)}") - print(f"Punctuation: {punct_with_space}") - print(f"Config file is stored at ../../config/glow/{new_config_name}.json") - - outfile_f = open(outfile, "w+", encoding="utf-8") - for f, t in zip(fnames, ftexts): - print( - os.path.join(wav_path_for_annotations_txt, f) + ".wav", - t, - sep="|", - file=outfile_f, - ) - outfile_f.close() - write_txt(punct_with_space, os.path.join(out_path_for_txts, "punc.txt")) - write_txt(chars, os.path.join(out_path_for_txts, "chars.txt")) - - save_train_test_valid_split( - annotations_txt=outfile, - num_samples_valid=num_samples_valid, - num_samples_test=num_samples_test, - ) - - - - -if __name__ == "__main__": - - - parser = argparse.ArgumentParser() - parser.add_argument("-i", "--text-path", type=str, required=True) - parser.add_argument("-o", "--output-path", type=str, required=True) - parser.add_argument("-w", "--wav-path", type=str, required=True) - parser.add_argument("-v", "--valid-samples", type=int, default = 100) - parser.add_argument("-t", "--test-samples", type=int, default = 10) - args = parser.parse_args() - - save_txts_from_txt_done_data( - args.text_path, - args.wav_path, - args.output_path, - args.valid_samples, - args.test_samples, - ) diff --git a/spaces/Hasani/Specific_Object_Recognition_in_the_Wild/SuperGluePretrainedNetwork/models/superpoint.py b/spaces/Hasani/Specific_Object_Recognition_in_the_Wild/SuperGluePretrainedNetwork/models/superpoint.py deleted file mode 100644 index b837d938f755850180ddc168e957742e874adacd..0000000000000000000000000000000000000000 --- a/spaces/Hasani/Specific_Object_Recognition_in_the_Wild/SuperGluePretrainedNetwork/models/superpoint.py +++ /dev/null @@ -1,202 +0,0 @@ -# %BANNER_BEGIN% -# --------------------------------------------------------------------- -# %COPYRIGHT_BEGIN% -# -# Magic Leap, Inc. ("COMPANY") CONFIDENTIAL -# -# Unpublished Copyright (c) 2020 -# Magic Leap, Inc., All Rights Reserved. -# -# NOTICE: All information contained herein is, and remains the property -# of COMPANY. The intellectual and technical concepts contained herein -# are proprietary to COMPANY and may be covered by U.S. 
and Foreign -# Patents, patents in process, and are protected by trade secret or -# copyright law. Dissemination of this information or reproduction of -# this material is strictly forbidden unless prior written permission is -# obtained from COMPANY. Access to the source code contained herein is -# hereby forbidden to anyone except current COMPANY employees, managers -# or contractors who have executed Confidentiality and Non-disclosure -# agreements explicitly covering such access. -# -# The copyright notice above does not evidence any actual or intended -# publication or disclosure of this source code, which includes -# information that is confidential and/or proprietary, and is a trade -# secret, of COMPANY. ANY REPRODUCTION, MODIFICATION, DISTRIBUTION, -# PUBLIC PERFORMANCE, OR PUBLIC DISPLAY OF OR THROUGH USE OF THIS -# SOURCE CODE WITHOUT THE EXPRESS WRITTEN CONSENT OF COMPANY IS -# STRICTLY PROHIBITED, AND IN VIOLATION OF APPLICABLE LAWS AND -# INTERNATIONAL TREATIES. THE RECEIPT OR POSSESSION OF THIS SOURCE -# CODE AND/OR RELATED INFORMATION DOES NOT CONVEY OR IMPLY ANY RIGHTS -# TO REPRODUCE, DISCLOSE OR DISTRIBUTE ITS CONTENTS, OR TO MANUFACTURE, -# USE, OR SELL ANYTHING THAT IT MAY DESCRIBE, IN WHOLE OR IN PART. -# -# %COPYRIGHT_END% -# ---------------------------------------------------------------------- -# %AUTHORS_BEGIN% -# -# Originating Authors: Paul-Edouard Sarlin -# -# %AUTHORS_END% -# --------------------------------------------------------------------*/ -# %BANNER_END% - -from pathlib import Path -import torch -from torch import nn - -def simple_nms(scores, nms_radius: int): - """ Fast Non-maximum suppression to remove nearby points """ - assert(nms_radius >= 0) - - def max_pool(x): - return torch.nn.functional.max_pool2d( - x, kernel_size=nms_radius*2+1, stride=1, padding=nms_radius) - - zeros = torch.zeros_like(scores) - max_mask = scores == max_pool(scores) - for _ in range(2): - supp_mask = max_pool(max_mask.float()) > 0 - supp_scores = torch.where(supp_mask, zeros, scores) - new_max_mask = supp_scores == max_pool(supp_scores) - max_mask = max_mask | (new_max_mask & (~supp_mask)) - return torch.where(max_mask, scores, zeros) - - -def remove_borders(keypoints, scores, border: int, height: int, width: int): - """ Removes keypoints too close to the border """ - mask_h = (keypoints[:, 0] >= border) & (keypoints[:, 0] < (height - border)) - mask_w = (keypoints[:, 1] >= border) & (keypoints[:, 1] < (width - border)) - mask = mask_h & mask_w - return keypoints[mask], scores[mask] - - -def top_k_keypoints(keypoints, scores, k: int): - if k >= len(keypoints): - return keypoints, scores - scores, indices = torch.topk(scores, k, dim=0) - return keypoints[indices], scores - - -def sample_descriptors(keypoints, descriptors, s: int = 8): - """ Interpolate descriptors at keypoint locations """ - b, c, h, w = descriptors.shape - keypoints = keypoints - s / 2 + 0.5 - keypoints /= torch.tensor([(w*s - s/2 - 0.5), (h*s - s/2 - 0.5)], - ).to(keypoints)[None] - keypoints = keypoints*2 - 1 # normalize to (-1, 1) - args = {'align_corners': True} if torch.__version__ >= '1.3' else {} - descriptors = torch.nn.functional.grid_sample( - descriptors, keypoints.view(b, 1, -1, 2), mode='bilinear', **args) - descriptors = torch.nn.functional.normalize( - descriptors.reshape(b, c, -1), p=2, dim=1) - return descriptors - - -class SuperPoint(nn.Module): - """SuperPoint Convolutional Detector and Descriptor - - SuperPoint: Self-Supervised Interest Point Detection and - Description. 
Daniel DeTone, Tomasz Malisiewicz, and Andrew - Rabinovich. In CVPRW, 2019. https://arxiv.org/abs/1712.07629 - - """ - default_config = { - 'descriptor_dim': 256, - 'nms_radius': 4, - 'keypoint_threshold': 0.005, - 'max_keypoints': -1, - 'remove_borders': 4, - } - - def __init__(self, config): - super().__init__() - self.config = {**self.default_config, **config} - - self.relu = nn.ReLU(inplace=True) - self.pool = nn.MaxPool2d(kernel_size=2, stride=2) - c1, c2, c3, c4, c5 = 64, 64, 128, 128, 256 - - self.conv1a = nn.Conv2d(1, c1, kernel_size=3, stride=1, padding=1) - self.conv1b = nn.Conv2d(c1, c1, kernel_size=3, stride=1, padding=1) - self.conv2a = nn.Conv2d(c1, c2, kernel_size=3, stride=1, padding=1) - self.conv2b = nn.Conv2d(c2, c2, kernel_size=3, stride=1, padding=1) - self.conv3a = nn.Conv2d(c2, c3, kernel_size=3, stride=1, padding=1) - self.conv3b = nn.Conv2d(c3, c3, kernel_size=3, stride=1, padding=1) - self.conv4a = nn.Conv2d(c3, c4, kernel_size=3, stride=1, padding=1) - self.conv4b = nn.Conv2d(c4, c4, kernel_size=3, stride=1, padding=1) - - self.convPa = nn.Conv2d(c4, c5, kernel_size=3, stride=1, padding=1) - self.convPb = nn.Conv2d(c5, 65, kernel_size=1, stride=1, padding=0) - - self.convDa = nn.Conv2d(c4, c5, kernel_size=3, stride=1, padding=1) - self.convDb = nn.Conv2d( - c5, self.config['descriptor_dim'], - kernel_size=1, stride=1, padding=0) - - path = Path(__file__).parent / 'weights/superpoint_v1.pth' - self.load_state_dict(torch.load(str(path))) - - mk = self.config['max_keypoints'] - if mk == 0 or mk < -1: - raise ValueError('\"max_keypoints\" must be positive or \"-1\"') - - print('Loaded SuperPoint model') - - def forward(self, data): - """ Compute keypoints, scores, descriptors for image """ - # Shared Encoder - x = self.relu(self.conv1a(data['image'])) - x = self.relu(self.conv1b(x)) - x = self.pool(x) - x = self.relu(self.conv2a(x)) - x = self.relu(self.conv2b(x)) - x = self.pool(x) - x = self.relu(self.conv3a(x)) - x = self.relu(self.conv3b(x)) - x = self.pool(x) - x = self.relu(self.conv4a(x)) - x = self.relu(self.conv4b(x)) - - # Compute the dense keypoint scores - cPa = self.relu(self.convPa(x)) - scores = self.convPb(cPa) - scores = torch.nn.functional.softmax(scores, 1)[:, :-1] - b, _, h, w = scores.shape - scores = scores.permute(0, 2, 3, 1).reshape(b, h, w, 8, 8) - scores = scores.permute(0, 1, 3, 2, 4).reshape(b, h*8, w*8) - scores = simple_nms(scores, self.config['nms_radius']) - - # Extract keypoints - keypoints = [ - torch.nonzero(s > self.config['keypoint_threshold']) - for s in scores] - scores = [s[tuple(k.t())] for s, k in zip(scores, keypoints)] - - # Discard keypoints near the image borders - keypoints, scores = list(zip(*[ - remove_borders(k, s, self.config['remove_borders'], h*8, w*8) - for k, s in zip(keypoints, scores)])) - - # Keep the k keypoints with highest score - if self.config['max_keypoints'] >= 0: - keypoints, scores = list(zip(*[ - top_k_keypoints(k, s, self.config['max_keypoints']) - for k, s in zip(keypoints, scores)])) - - # Convert (h, w) to (x, y) - keypoints = [torch.flip(k, [1]).float() for k in keypoints] - - # Compute the dense descriptors - cDa = self.relu(self.convDa(x)) - descriptors = self.convDb(cDa) - descriptors = torch.nn.functional.normalize(descriptors, p=2, dim=1) - - # Extract descriptors - descriptors = [sample_descriptors(k[None], d[None], 8)[0] - for k, d in zip(keypoints, descriptors)] - - return { - 'keypoints': keypoints, - 'scores': scores, - 'descriptors': descriptors, - } diff --git 
a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/csv.27f5436c.js b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/csv.27f5436c.js deleted file mode 100644 index 7ee090c69a9158e1331c5630c3dff9699534ab58..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/csv.27f5436c.js +++ /dev/null @@ -1,2 +0,0 @@ -import{d as a}from"./dsv.7fe76a93.js";var s=a(","),v=s.parse,o=s.parseRows;export{v as a,o as c}; -//# sourceMappingURL=csv.27f5436c.js.map diff --git a/spaces/Hoodady/3DFuse/cldm/model.py b/spaces/Hoodady/3DFuse/cldm/model.py deleted file mode 100644 index fed3c31ac145b78907c7f771d1d8db6fb32d92ed..0000000000000000000000000000000000000000 --- a/spaces/Hoodady/3DFuse/cldm/model.py +++ /dev/null @@ -1,28 +0,0 @@ -import os -import torch - -from omegaconf import OmegaConf -from ldm.util import instantiate_from_config - - -def get_state_dict(d): - return d.get('state_dict', d) - - -def load_state_dict(ckpt_path, location='cpu'): - _, extension = os.path.splitext(ckpt_path) - if extension.lower() == ".safetensors": - import safetensors.torch - state_dict = safetensors.torch.load_file(ckpt_path, device=location) - else: - state_dict = get_state_dict(torch.load(ckpt_path, map_location=torch.device(location))) - state_dict = get_state_dict(state_dict) - print(f'Loaded state_dict from [{ckpt_path}]') - return state_dict - - -def create_model(config_path): - config = OmegaConf.load(config_path) - model = instantiate_from_config(config.model).cpu() - print(f'Loaded model config from [{config_path}]') - return model diff --git a/spaces/IPN/demo_cms_1/app.py b/spaces/IPN/demo_cms_1/app.py deleted file mode 100644 index 1ad086f5bb666bba2a69d806976f1b671a5d364a..0000000000000000000000000000000000000000 --- a/spaces/IPN/demo_cms_1/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("huggingface/finiteautomata/bertweet-base-sentiment-analysis").launch(); \ No newline at end of file diff --git a/spaces/Illumotion/Koboldcpp/gpttype_adapter.cpp b/spaces/Illumotion/Koboldcpp/gpttype_adapter.cpp deleted file mode 100644 index 366a64df860571d695f0c4912615efbaa32898d2..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/gpttype_adapter.cpp +++ /dev/null @@ -1,1781 +0,0 @@ -//This is Concedo's shitty adapter for adding python bindings for llama - -//Considerations: -//Don't want to use pybind11 due to dependencies on MSVCC -//ZERO or MINIMAL changes as possible to main.cpp - do not move their function declarations here! -//Leave main.cpp UNTOUCHED, We want to be able to update the repo and pull any changes automatically. -//No dynamic memory allocation! Setup structs with FIXED (known) shapes and sizes for ALL output fields -//Python will ALWAYS provide the memory, we just write to it. 
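The design constraint spelled out in the comments above (fixed-shape output structs, with Python always providing the memory) looks roughly like this from the Python side. The struct, field names and library handle below are hypothetical placeholders for illustration, not the real koboldcpp bindings:

import ctypes

class GenerationOutput(ctypes.Structure):   # hypothetical example struct
    _fields_ = [
        ("status", ctypes.c_int),
        ("text", ctypes.c_char * 4096),     # fixed-size buffer, no dynamic allocation
    ]

out = GenerationOutput()                    # Python allocates the memory up front
# lib = ctypes.CDLL("./koboldcpp.so")       # assumed shared-library name
# lib.generate(ctypes.byref(out))           # the C++ side only writes into the fields
# print(out.status, out.text.decode(errors="ignore"))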
- -#include -#include -#include "model_adapter.h" -#include "otherarch.h" -#include "grammar-parser.h" - -//for easier compilation -//concat source files into one file for compilation purposes -#include "llama_v2.cpp" -#include "llama_v3.cpp" -#include "llama.cpp" -#include "utils.cpp" -#include "gptj_v1.cpp" -#include "gptj_v2.cpp" -#include "gptj_v3.cpp" -#include "gpt2_v1.cpp" -#include "gpt2_v2.cpp" -#include "gpt2_v3.cpp" -#include "rwkv_v2.cpp" -#include "rwkv_v3.cpp" -#include "neox_v2.cpp" -#include "neox_v3.cpp" -#include "mpt_v3.cpp" - -//shared -std::string executable_path = ""; -std::string lora_filename = ""; -std::string lora_base = ""; -bool generation_finished; -float last_process_time = 0; -float last_eval_time = 0; -int last_token_count = 0; -stop_reason last_stop_reason = stop_reason::INVALID; -std::vector generated_tokens; - -llama_grammar * grammar = nullptr; //currently used grammar -grammar_parser::parse_state parsed_grammar; -static std::string current_grammar = ""; - -//return val: 0=fail, 1=(original ggml, alpaca), 2=(ggmf), 3=(ggjt) -static FileFormat file_format = FileFormat::BADFORMAT; - -static gpt_vocab vocab; -static int32_t n_vocab = 0; - -static gptj_v1_model gptj_ctx_v1; -static gptj_v2_model gptj_ctx_v2; -static gptj_model gptj_ctx_v3; - -static gpt2_v1_model gpt2_ctx_v1; -static gpt2_v2_model gpt2_ctx_v2; -static gpt2_model gpt2_ctx_v3; - -static gpt_neox_v2_model neox_ctx_v2; -static gpt_neox_model neox_ctx_v3; - -static mpt_model mpt_ctx_v3; - -static rwkv_v2_context * rwkv_ctx_v2; -static rwkv_context * rwkv_ctx_v3; - -static llama_v2_context * llama_ctx_v2; -static llama_v3_context * llama_ctx_v3; -static llama_context * llama_ctx_v4; - -static gpt_params params; -static int n_past = 0; -static int n_threads = 4; -static int n_blasthreads = 4; -static int n_batch = 8; -static bool useSmartContext = false; -static int blasbatchsize = 512; -static int debugmode = 0; //-1 = hide all, 0 = normal, 1 = showall -static std::string modelname; -static std::vector last_n_tokens; -static std::vector current_context_tokens; -static size_t mem_per_token = 0; -static std::vector logits; -static std::vector smartcontext; -static std::vector stop_sequence; -static std::vector banned_tokens; -static std::vector banned_token_ids; -static std::vector top_picks; -static int remaining_tokens = 0; -static int stopper_unused_tokens = 0; -static std::mutex concat_output_mtx; -static std::string concat_output = ""; -static std::string concat_output_reader_copy = ""; - -inline bool IsNanCheck(float f) -{ - const unsigned int u = *(unsigned int*)&f; - return (u&0x7F800000) == 0x7F800000 && (u&0x7FFFFF); // Both NaN and qNan. 
-} - -inline bool LogitsDuplicated(std::vector & arr1, std::vector & arr2) -{ - int compareQty = 5; - if(arr1.size() < compareQty || arr2.size() < compareQty || arr1.size()!=arr2.size()) - { - printf("\nError: Logit array sizes are bad!\n"); - return false; - } - for(int i=0;i & output_tokens, FileFormat file_format) -{ - if (file_format == FileFormat::GGML || file_format == FileFormat::GGHF || file_format == FileFormat::GGJT || file_format == FileFormat::GGJT_2 || file_format == FileFormat::GGJT_3 || file_format == FileFormat::GGUF_LLAMA || file_format==FileFormat::GGUF_FALCON) - { - if(file_format == FileFormat::GGHF || file_format == FileFormat::GGJT || file_format == FileFormat::GGJT_2 ) - { - output_tokens = ::llama_v2_tokenize(llama_ctx_v2, str_to_tokenize, true); - } - else if (file_format == FileFormat::GGML) - { - output_tokens = ::legacy_llama_v2_tokenize(llama_ctx_v2, str_to_tokenize, true); - } - else if (file_format == FileFormat::GGJT_3) - { - output_tokens = ::llama_v3_tokenize(llama_ctx_v3, str_to_tokenize, true); - } - else - { - output_tokens = ::llama_tokenize(llama_ctx_v4, str_to_tokenize, true); - } - } - else - { - // tokenize the prompt - output_tokens = ::gpt_tokenize(vocab, str_to_tokenize); - } -} -static int GetEosID(FileFormat file_format, int32_t n_vocab) -{ - unsigned int eosID = 0; - - if(file_format == FileFormat::GGML || file_format == FileFormat::GGHF || file_format == FileFormat::GGJT || file_format == FileFormat::GGJT_2 || file_format == FileFormat::GGJT_3 || file_format == FileFormat::GGUF_LLAMA || file_format==FileFormat::GGUF_FALCON) - { - if(file_format == FileFormat::GGUF_LLAMA || file_format==FileFormat::GGUF_FALCON) - { - eosID = llama_token_eos(llama_ctx_v4); - } - else if(file_format == FileFormat::GGJT_3) - { - eosID = llama_v3_token_eos(); - } - else - { - eosID = llama_v3_token_eos(); - } - } - else - { - if (file_format == FileFormat::GPT2_1 || - file_format == FileFormat::GPT2_2 || - file_format == FileFormat::GPT2_3 || - file_format == FileFormat::GPT2_4 || - file_format == FileFormat::GPTJ_1 || - file_format == FileFormat::GPTJ_2 || - file_format == FileFormat::GPTJ_3 || - file_format == FileFormat::GPTJ_4 || - file_format == FileFormat::GPTJ_5) - { - eosID = 50256; - if (n_vocab <= eosID) - { - //special case, starcoder models use ID 0 for EOS - eosID = 0; - } - } - - if (file_format == FileFormat::RWKV_1 || - file_format == FileFormat::RWKV_2 || - file_format == FileFormat::NEOX_1 || - file_format == FileFormat::NEOX_2 || - file_format == FileFormat::NEOX_3 || - file_format == FileFormat::NEOX_4 || - file_format == FileFormat::NEOX_5 || - file_format == FileFormat::NEOX_6 || - file_format == FileFormat::NEOX_7 || - file_format == FileFormat::MPT_1) - { - eosID = 0; - } - } - return eosID; -} -static float LowestLogit(const std::vector & logits) -{ - int topid = std::min_element(logits.begin(), logits.end()) - logits.begin(); - float v = logits[topid]; - return (v < 0 ? (v-8) : 0); -} -static float LowestLogit(const float *logits, size_t size) -{ - if (size == 0) { - // Handle the case of an empty array - return 0.0; - } - int topid = std::min_element(logits, logits + size) - logits; - float v = logits[topid]; - return (v < 0 ? 
(v-8) : 0); -} - -static std::string RemoveBell(const std::string & input) //removes the bell character -{ - std::string word2; - std::remove_copy(input.begin(), input.end(), std::back_inserter(word2), '\a'); - return word2; -} - - -llama_token sample_token(llama_token_data_array * candidates, std::mt19937 & rng) -{ - llama_sample_softmax(nullptr, candidates); - std::vector probs; - probs.reserve(candidates->size); - top_picks.clear(); - for (size_t i = 0; i < candidates->size; ++i) { - probs.push_back(candidates->data[i].p); - } - - std::discrete_distribution<> dist(probs.begin(), probs.end()); - int idx = dist(rng); - - if(debugmode==1) - { - top_picks.push_back(candidates->data[idx]); - for (size_t i = 0; (i < candidates->size && i<4); ++i) - { - if(i!=idx) - { - top_picks.push_back(candidates->data[i]); - } - } - } - - llama_token result = candidates->data[idx].id; - return result; -} - -llama_token sample_token_mirostat(int n_vocab, llama_token_data_array * candidates, std::mt19937 & rng, float tau, float eta, int m, float * mu) -{ - float N = float(n_vocab); - llama_sample_softmax(nullptr, candidates); - // Estimate s_hat using the most probable m tokens - float s_hat = 0.0; - float sum_ti_bi = 0.0; - float sum_ti_sq = 0.0; - for (size_t i = 0; i < size_t(m - 1) && i < candidates->size - 1; ++i) { - float t_i = logf(float(i + 2) / float(i + 1)); - float b_i = logf(candidates->data[i].p / candidates->data[i + 1].p); - sum_ti_bi += t_i * b_i; - sum_ti_sq += t_i * t_i; - } - s_hat = sum_ti_bi / sum_ti_sq; - // Compute k from the estimated s_hat and target surprise value - float epsilon_hat = s_hat - 1; - float k = powf((epsilon_hat * powf(2, *mu)) / (1 - powf(N, -epsilon_hat)), 1 / s_hat); - // Sample the next word X using top-k sampling - llama_sample_top_k(nullptr, candidates, int(k),1); - llama_token X = sample_token(candidates, rng); // Compute error as the difference between observed surprise and target surprise value - size_t X_idx = std::distance(candidates->data, std::find_if(candidates->data, candidates->data + candidates->size, [&](const llama_token_data & candidate) { - return candidate.id == X; - })); - float observed_surprise = -log2f(candidates->data[X_idx].p); - float e = observed_surprise - tau; - // Update mu using the learning rate and error - *mu = *mu - eta * e; - return X; -} - -llama_token sample_token_mirostat_v2(llama_token_data_array * candidates, std::mt19937 & rng, float tau, float eta, float * mu) -{ - llama_sample_softmax(nullptr, candidates); - // Truncate the words with surprise values greater than mu - candidates->size = std::distance(candidates->data, std::find_if(candidates->data, candidates->data + candidates->size, [&](const llama_token_data & candidate) { - return -log2f(candidate.p) > *mu; - })); - - if (candidates->size == 0) { - candidates->size = 1; - } - - // Normalize the probabilities of the remaining words - llama_sample_softmax(nullptr, candidates); - // Sample the next word X from the remaining words - llama_token X = sample_token(candidates,rng); - - // Compute error as the difference between observed surprise and target surprise value - size_t X_idx = std::distance(candidates->data, std::find_if(candidates->data, candidates->data + candidates->size, [&](const llama_token_data & candidate) { - return candidate.id == X; - })); - float observed_surprise = -log2f(candidates->data[X_idx].p); - float e = observed_surprise - tau; - // Update mu using the learning rate and error - *mu = *mu - eta * e; - return X; -} - -// Top-a (remove all tokens 
that have softmax probability less than top_a*m^2 where m is the maximum softmax probability) -// top-a 0 is off (no effect) -void sample_top_a(llama_token_data_array * candidates, float a, size_t min_keep) { - if (a <= 0.0f || candidates->size<=1) { - return; - } - - llama_sample_softmax(nullptr, candidates); - - // Compute the cumulative probabilities - float maxprob = candidates->data[0].p; - - float threshold = a * maxprob * maxprob; //tokens with probs less than this are removed - size_t last_idx = candidates->size; - - for (size_t i = 0; i < candidates->size; ++i) { - // Go until we reach a value under the threshold - float checkprob = candidates->data[i].p; - if (checkprob < threshold && i >= min_keep) { - last_idx = i; - break; - } - } - // printf("\n\nCandidates: %d, A:%f, MaxProb: %f, Threshold: %f, LastIdx: %d",candidates->size,a,maxprob,threshold,last_idx); - // printf("\nCandidates: %f %f %f %f\n",candidates->data[0].p,candidates->data[1].p,candidates->data[2].p,candidates->data[3].p); - - // Resize the output vector to keep only the selected tokens - candidates->size = last_idx; -} - -void sample_rep_pen(int n_ctx, int rep_pen_range, float rep_pen, llama_token_data_array * candidates_p) -{ - auto last_n_repeat = std::min(std::min((int)last_n_tokens.size(), rep_pen_range), n_ctx); - llama_sample_repetition_penalty(nullptr, candidates_p, - last_n_tokens.data() + last_n_tokens.size() - last_n_repeat, - last_n_repeat, rep_pen); -} - -void sample_temperature(llama_token_data_array * candidates_p, float temp) -{ - if (temp <= 0) - { - // Imitate greedy sampling - temp = 0.00390625f; //cannot be zero else div0, this is 1/256 - llama_sample_temperature(nullptr, candidates_p, temp); - llama_sample_top_k(nullptr, candidates_p, 1, 1); //only want first candidate - } - else - { - llama_sample_temperature(nullptr, candidates_p, temp); - } -} - -void sample_grammar(FileFormat file_format, int32_t n_vocab, llama_token_data_array * candidates, const struct llama_grammar * grammar) { - - const int64_t t_start_sample_us = ggml_time_us(); - - bool allow_eos = false; - for (const auto & stack : grammar->stacks) { - if (stack.empty()) { - allow_eos = true; - break; - } - } - - const llama_token eos = GetEosID(file_format,n_vocab); - - std::vector, llama_partial_utf8>> candidates_decoded; - std::vector candidates_grammar; - - for (size_t i = 0; i < candidates->size; ++i) { - const llama_token id = candidates->data[i].id; - const std::string piece = FileFormatTokenizeID(id,file_format); - if (id == eos) { - if (!allow_eos) { - candidates->data[i].logit = -INFINITY; - } - } else if (piece.empty() || piece[0] == 0) { - candidates->data[i].logit = -INFINITY; - } else { - candidates_decoded.push_back(decode_utf8(piece.c_str(), grammar->partial_utf8)); - candidates_grammar.push_back({ i, candidates_decoded.back().first.data(), candidates_decoded.back().second }); - } - } - - const auto rejects = llama_grammar_reject_candidates(grammar->rules, grammar->stacks, candidates_grammar); - for (const auto & reject : rejects) { - candidates->data[reject.index].logit = -INFINITY; - } - -} - -int SampleLogits(const float * logits, int n_ctx, int n_vocab, int rep_pen_range, float rep_pen, float top_k, float top_a, float top_p, float typical_p, float tfs, float temp, std::mt19937 & rng, -int mirostat, float mirostat_tau, float mirostat_eta, const std::vector & sampler_order, llama_grammar * grammar) -{ - int id = 0; - std::vector candidates; - candidates.reserve(n_vocab); - for (llama_token token_id = 0; token_id < 
n_vocab; token_id++) { - candidates.emplace_back(llama_token_data{token_id, logits[token_id], 0.0f}); - } - - llama_token_data_array candidates_p = { candidates.data(), candidates.size(), false }; - - if (grammar != nullptr) { - sample_grammar(file_format, n_vocab, &candidates_p, grammar); - } - - if (mirostat == 1 || mirostat == 2) - { - static float mirostat_mu = 2.0f * mirostat_tau; - const int mirostat_m = 100; - sample_rep_pen(n_ctx, rep_pen_range, rep_pen, &candidates_p); - sample_temperature(&candidates_p, temp); - if (mirostat == 1) - { - id = sample_token_mirostat(n_vocab, &candidates_p, rng, mirostat_tau, mirostat_eta, mirostat_m, &mirostat_mu); - } - else - { - id = sample_token_mirostat_v2(&candidates_p, rng, mirostat_tau, mirostat_eta, &mirostat_mu); - } - } - else - { - for (int i = 0; i < sampler_order.size(); i++) - { - switch (sampler_order[i]) - { - case KCPP_SAMPLER_TOP_K: - llama_sample_top_k(nullptr, &candidates_p, top_k,1); - break; - case KCPP_SAMPLER_TOP_A: - sample_top_a(&candidates_p,top_a,1); - break; - case KCPP_SAMPLER_TOP_P: - llama_sample_top_p(nullptr, &candidates_p, top_p,1); - break; - case KCPP_SAMPLER_TFS: - llama_sample_tail_free(nullptr, &candidates_p, tfs,1); - break; - case KCPP_SAMPLER_TYP: - llama_sample_typical(nullptr, &candidates_p, typical_p,1); - break; - case KCPP_SAMPLER_TEMP: - sample_temperature(&candidates_p, temp); - break; - case KCPP_SAMPLER_REP_PEN: - sample_rep_pen(n_ctx, rep_pen_range, rep_pen, &candidates_p); - break; - default: - printf("\nSampleLogits: Unknown Sampler : %d",sampler_order[i]); - break; - } - } - id = sample_token(&candidates_p, rng); - } - - return id; -} - - -static void grammar_accept_token(FileFormat file_format, int32_t n_vocab, struct llama_grammar * grammar, llama_token token) -{ - if (token == GetEosID(file_format,n_vocab)) { - for (const auto & stack : grammar->stacks) { - if (stack.empty()) { - return; - } - } - GGML_ASSERT(false); - } - const std::string piece = FileFormatTokenizeID(token,file_format); //llama_token_to_str(ctx, token); - - // Note terminating 0 in decoded string - const auto decoded = decode_utf8(piece.c_str(), grammar->partial_utf8); - const auto & code_points = decoded.first; - for (auto it = code_points.begin(), end = code_points.end() - 1; it != end; ++it) { - grammar->stacks = llama_grammar_accept(grammar->rules, grammar->stacks, *it); - } - grammar->partial_utf8 = decoded.second; - GGML_ASSERT(!grammar->stacks.empty()); -} - -static void load_grammar(const std::string & gammarstr) -{ - if(grammar!=nullptr) //on demand free when next grammar is loaded - { - llama_grammar_free(grammar); - grammar = nullptr; - } - - if (!gammarstr.empty()) { - parsed_grammar = grammar_parser::parse(gammarstr.c_str()); - // will be empty (default) if there are parse errors - if (parsed_grammar.rules.empty()) { - printf("\nIgnored invalid grammar sampler."); - return; - } - if(debugmode==1) - { - grammar_parser::print_grammar(stderr, parsed_grammar); - } - std::vector grammar_rules(parsed_grammar.c_rules()); - grammar = llama_grammar_init(grammar_rules.data(), grammar_rules.size(), parsed_grammar.symbol_ids.at("root")); - } -} - -ModelLoadResult gpttype_load_model(const load_model_inputs inputs, FileFormat in_file_format, FileFormatExtraMeta file_format_meta) -{ - ggml_time_init(); - - file_format = in_file_format; - n_threads = params.n_threads = inputs.threads; - n_blasthreads = params.n_threads_batch = inputs.blasthreads; - n_batch = params.n_batch = inputs.batch_size; - modelname = params.model = 
inputs.model_filename; - useSmartContext = inputs.use_smartcontext; - debugmode = inputs.debugmode; - blasbatchsize = inputs.blasbatchsize; - if(blasbatchsize<=0) - { - blasbatchsize = 8; - } - params.memory_f16 = inputs.f16_kv; - - auto clamped_max_context_length = inputs.max_context_length; - - if(clamped_max_context_length>16384 && - file_format != FileFormat::GGUF_LLAMA && file_format!=FileFormat::GGUF_FALCON) - { - printf("Warning: Only GGUF models can use max context above 16k. Max context lowered to 16k.\n"); - clamped_max_context_length = 16384; - } - - params.n_ctx = clamped_max_context_length; - - neox_ctx_v2.hparams.n_ctx = neox_ctx_v3.hparams.n_ctx - = gptj_ctx_v1.hparams.n_ctx = gptj_ctx_v2.hparams.n_ctx = gptj_ctx_v3.hparams.n_ctx - = gpt2_ctx_v1.hparams.n_ctx = gpt2_ctx_v2.hparams.n_ctx = gpt2_ctx_v3.hparams.n_ctx - = mpt_ctx_v3.hparams.n_ctx = params.n_ctx; - - //determine rope scaling params - float rope_freq_scale = 1.0f; - float rope_freq_base = 10000.0f; - if(inputs.rope_freq_scale>0.0f) - { - rope_freq_scale = inputs.rope_freq_scale; - rope_freq_base = inputs.rope_freq_base; - printf("Using Custom RoPE scaling (scale:%.3f, base:%.1f).\n",rope_freq_scale,rope_freq_base); - } - else - { - rope_freq_scale = 1.0f; - if (params.n_ctx <= 2048) //normie mode - { - rope_freq_base = 10000.0f; - } - else - { - //approximate NTK aware ctx - auto effectivenctx = params.n_ctx; - if((file_format == FileFormat::GGUF_LLAMA || file_format==FileFormat::GGUF_FALCON) && file_format_meta.n_ctx_train > 2048) - { - float factor = file_format_meta.n_ctx_train/2048; - effectivenctx = effectivenctx/factor; - } - rope_freq_base = (effectivenctx <= 2048 ? 10000.0f : (effectivenctx <= 3072 ? 26000.0f : (effectivenctx <= 4096 ? 32000.0f : (effectivenctx <= 6144 ? 54000.0f : - (effectivenctx <= 8192 ? 82684.0f : (effectivenctx <= 12288 ? 140000.0f : (effectivenctx <= 16384 ? 200000.0f : (effectivenctx <= 24576 ? 320000.0f : 440000.0f)))))))); - - } - - printf("Using automatic RoPE scaling (scale:%.3f, base:%.1f)\n",rope_freq_scale,rope_freq_base); - } - gptj_ctx_v3.hparams.rope_freq_scale = neox_ctx_v3.hparams.rope_freq_scale = rope_freq_scale; - gptj_ctx_v3.hparams.rope_freq_base = neox_ctx_v3.hparams.rope_freq_base = rope_freq_base; - - //handle custom token bans - banned_tokens.clear(); - for(int x=0;x0) - { - printf("CUBLAS: Set main device to %d\n",cu_parseinfo_maindevice); - ggml_cuda_set_main_device(cu_parseinfo_maindevice); - } - #endif - SetQuantsUnshuffled(false); - if(file_format == FileFormat::GGML || file_format == FileFormat::GGHF || file_format == FileFormat::GGJT || file_format == FileFormat::GGJT_2) - { - //newer format has bit unshuffling - SetQuantsUnshuffled(file_format == FileFormat::GGJT_2); - llama_v2_context_params llama_ctx_params_v2 = llama_v2_context_default_params(); - llama_ctx_params_v2.n_ctx = clamped_max_context_length; - //llama_ctx_params.n_parts = -1; - llama_ctx_params_v2.seed = -1; - llama_ctx_params_v2.f16_kv = inputs.f16_kv; - llama_ctx_params_v2.logits_all = false; - llama_ctx_params_v2.use_mmap = inputs.use_mmap; - llama_ctx_params_v2.use_mlock = inputs.use_mlock; - llama_ctx_params_v2.n_gpu_layers = inputs.gpulayers; - - llama_ctx_v2 = llama_v2_init_from_file(modelname.c_str(), llama_ctx_params_v2); - - if (llama_ctx_v2 == NULL) - { - fprintf(stderr, "%s: error: failed to load model '%s'\n", __func__, modelname.c_str()); - return ModelLoadResult::FAIL; - } - - printf("\n---\nWarning: Your model may be an OUTDATED format (ver %d). 
Please reconvert it for better results!\n---\n", file_format); - - if (lora_filename != "") - { - printf("\nAttempting to apply LORA adapter: %s\n", lora_filename.c_str()); - - const char * lora_base_arg = NULL; - if (lora_base != "") { - printf("Using LORA base model: %s\n", lora_base.c_str()); - lora_base_arg = lora_base.c_str(); - } - - int err = llama_v2_apply_lora_from_file(llama_ctx_v2, - lora_filename.c_str(), - lora_base_arg, - n_threads); - if (err != 0) - { - fprintf(stderr, "%s: error: failed to apply lora adapter\n", __func__); - return ModelLoadResult::FAIL; - } - } - - n_vocab = llama_v2_n_vocab(llama_ctx_v2); - - //determine mem per token - const std::vector tmp = {1, 2, 3, 4}; - llama_v2_eval(llama_ctx_v2, tmp.data(), tmp.size(), 0, params.n_threads); - return ModelLoadResult::SUCCESS; - } - else if(file_format == FileFormat::GGJT_3) - { - llama_v3_context_params llama_ctx_params = llama_v3_context_default_params(); - llama_ctx_params.n_ctx = clamped_max_context_length; - //llama_ctx_paran_parts = -1; - llama_ctx_params.seed = -1; - llama_ctx_params.f16_kv = inputs.f16_kv; - llama_ctx_params.low_vram = inputs.low_vram; - llama_ctx_params.mul_mat_q = inputs.use_mmq; - llama_ctx_params.logits_all = false; - llama_ctx_params.use_mmap = inputs.use_mmap; - llama_ctx_params.use_mlock = inputs.use_mlock; - llama_ctx_params.n_gpu_layers = inputs.gpulayers; - llama_ctx_params.main_gpu = cu_parseinfo_maindevice; - llama_ctx_params.rope_freq_base = rope_freq_base; - llama_ctx_params.rope_freq_scale = rope_freq_scale; - llama_ctx_params.n_batch = blasbatchsize; - - #if defined(GGML_USE_CUBLAS) - bool ts_all_zero = true; - for (int i = 0; i < tensor_split_max; ++i) { - if (inputs.tensor_split[i] != 0.0f) { - ts_all_zero = false; - break; - } - } - if(!ts_all_zero) - { - llama_ctx_params.tensor_split = inputs.tensor_split; - } - #endif - - llama_ctx_v3 = llama_v3_init_from_file(modelname.c_str(), llama_ctx_params); - - if (llama_ctx_v3 == NULL) - { - fprintf(stderr, "%s: error: failed to load model '%s'\n", __func__, modelname.c_str()); - return ModelLoadResult::FAIL; - } - if (lora_filename != "") - { - printf("\nAttempting to apply LORA adapter: %s\n", lora_filename.c_str()); - - const char * lora_base_arg = NULL; - if (lora_base != "") { - printf("Using LORA base model: %s\n", lora_base.c_str()); - lora_base_arg = lora_base.c_str(); - } - - int err = llama_v3_apply_lora_from_file(llama_ctx_v3, - lora_filename.c_str(), - lora_base_arg, - n_threads); - if (err != 0) - { - fprintf(stderr, "%s: error: failed to apply lora adapter\n", __func__); - return ModelLoadResult::FAIL; - } - } - - n_vocab = llama_v3_n_vocab(llama_ctx_v3); - - //determine mem per token - const std::vector tmp = {1, 2, 3, 4}; - auto er = llama_v3_eval(llama_ctx_v3, tmp.data(), tmp.size(), 0, params.n_threads); - if(er!=0) - { - printf("\nLLAMA EVAL returned nonzero!\n"); - } - return ModelLoadResult::SUCCESS; - } - else if(file_format==FileFormat::GGUF_LLAMA || file_format==FileFormat::GGUF_FALCON) - { - llama_model_params model_params = llama_model_default_params(); - llama_context_params llama_ctx_params = llama_context_default_params(); - llama_ctx_params.n_ctx = clamped_max_context_length; - //llama_ctx_paran_parts = -1; - llama_ctx_params.seed = -1; - llama_ctx_params.f16_kv = inputs.f16_kv; - //llama_ctx_params.low_vram = inputs.low_vram; - llama_ctx_params.mul_mat_q = inputs.use_mmq; - llama_ctx_params.logits_all = false; - model_params.use_mmap = inputs.use_mmap; - model_params.use_mlock = inputs.use_mlock; - 
model_params.n_gpu_layers = inputs.gpulayers; - #if defined(GGML_USE_CLBLAST) - if(file_format==FileFormat::GGUF_FALCON && model_params.n_gpu_layers>0) - { - printf("\nGPU layer offload for GGUF FALCON on OpenCL is known to have issues, it has been set to 0.\n"); - model_params.n_gpu_layers = 0; - } - #endif - model_params.main_gpu = cu_parseinfo_maindevice; - llama_ctx_params.rope_freq_base = rope_freq_base; - llama_ctx_params.rope_freq_scale = rope_freq_scale; - llama_ctx_params.n_batch = blasbatchsize; - llama_ctx_params.n_threads = n_threads; - llama_ctx_params.n_threads_batch = n_blasthreads; - - #if defined(GGML_USE_CUBLAS) - bool ts_all_zero = true; - for (int i = 0; i < tensor_split_max; ++i) { - if (inputs.tensor_split[i] != 0.0f) { - ts_all_zero = false; - break; - } - } - if(!ts_all_zero) - { - model_params.tensor_split = inputs.tensor_split; - } - #endif - - llama_model * llamamodel = llama_load_model_from_file(modelname.c_str(), model_params); - llama_ctx_v4 = llama_new_context_with_model(llamamodel, llama_ctx_params); - - if (llama_ctx_v4 == NULL) - { - fprintf(stderr, "%s: error: failed to load model '%s'\n", __func__, modelname.c_str()); - return ModelLoadResult::FAIL; - } - if (lora_filename != "") - { - printf("\nAttempting to apply LORA adapter: %s\n", lora_filename.c_str()); - - const char * lora_base_arg = NULL; - if (lora_base != "") { - printf("Using LORA base model: %s\n", lora_base.c_str()); - lora_base_arg = lora_base.c_str(); - } - - int err = llama_apply_lora_from_file(llama_ctx_v4, - lora_filename.c_str(), - 1.0f, - lora_base_arg, - n_threads); - if (err != 0) - { - fprintf(stderr, "%s: error: failed to apply lora adapter\n", __func__); - return ModelLoadResult::FAIL; - } - } - - n_vocab = llama_n_vocab(llamamodel); - - //determine mem per token - std::vector tmp = {1, 2, 3, 4}; - auto er = llama_eval(llama_ctx_v4, tmp.data(), tmp.size(), 0); - if(er!=0) - { - printf("\nLLAMA EVAL returned nonzero!\n"); - } - return ModelLoadResult::SUCCESS; - } - else if (file_format == FileFormat::RWKV_1 || file_format==FileFormat::RWKV_2) - { - //start loading the models first - bool useWorldTokenizer = false; - if (file_format == FileFormat::RWKV_1) - { - rwkv_ctx_v2 = rwkv_v2_init_from_file(modelname.c_str(), n_threads); - } - else //rwkv_2 - { - rwkv_ctx_v3 = rwkv_init_from_file(modelname.c_str(), n_threads); - - if(inputs.gpulayers>0) - { - rwkv_gpu_offload_layers(rwkv_ctx_v3,inputs.gpulayers); - } - - const struct rwkv_file_header & header = rwkv_ctx_v3->instance->model.header; - const size_t n_vocab = header.n_vocab; - printf("\nDetected Vocab: %zu",n_vocab); - if(n_vocab>60000) - { - printf("\nUsing WORLD TOKENIZER"); - useWorldTokenizer = true; - } - } - - std::string word; - if(useWorldTokenizer) - { - read_rwkv_world_vocab(); - } - else - { - read_rwkv_vocab(); - } - - int vocabsiz = rwkv_vocab.size(); - for (int i = 0; i < vocabsiz; i++) - { - uint32_t len; - word = rwkv_vocab[i]; - vocab.token_to_id[word] = i; - vocab.id_to_token[i] = word; - } - printf("\nRWKV Vocab: %u\n", vocabsiz); - logits.resize(vocabsiz); - - n_vocab = vocab.id_to_token.size(); //handled seperately - - if (file_format == FileFormat::RWKV_1) - { - n_batch = 1; - - //setup buffers for rwkv state - auto padding = 512u; - auto statebufsiz = rwkv_v2_get_state_buffer_element_count(rwkv_ctx_v2) * sizeof(float) + padding; - auto logitbufsiz = rwkv_v2_get_logits_buffer_element_count(rwkv_ctx_v2) * sizeof(float) + padding; - - printf("\nRWKV old Init: State Buffer:%lu, Logit Buffer:%lu\n", 
statebufsiz, logitbufsiz); - rwkv_ctx_v2->state_out = (float *)malloc(statebufsiz); - rwkv_ctx_v2->logits_out = (float *)malloc(logitbufsiz); - rwkv_ctx_v2->state_in = nullptr; - - bool testeval = rwkv_v2_eval(rwkv_ctx_v2, 0, rwkv_ctx_v2->state_in, rwkv_ctx_v2->state_out, rwkv_ctx_v2->logits_out); - if (!testeval) - { - printf("\nError: RWKV old Init Eval Failed!\n"); - } - - memcpy(logits.data(), rwkv_ctx_v2->logits_out, sizeof(float) * vocabsiz); - - if (rwkv_ctx_v2 == NULL) - { - return ModelLoadResult::FAIL; - } - return ModelLoadResult::SUCCESS; - } - else - { - n_batch = 1; //do not use sequence mode to speedup until it is fixed - - //setup buffers for rwkv state - auto padding = 512u; - auto statebufsiz = rwkv_get_state_buffer_element_count(rwkv_ctx_v3) * sizeof(float) + padding; - auto logitbufsiz = rwkv_get_logits_buffer_element_count(rwkv_ctx_v3) * sizeof(float) + padding; - - printf("\nRWKV Init: State Buffer:%lu, Logit Buffer:%lu\n", statebufsiz, logitbufsiz); - rwkv_ctx_v3->state_out = (float *)malloc(statebufsiz); - rwkv_ctx_v3->logits_out = (float *)malloc(logitbufsiz); - rwkv_ctx_v3->state_in = nullptr; - - bool testeval = rwkv_eval(rwkv_ctx_v3, params.n_threads, 0, rwkv_ctx_v3->state_in, rwkv_ctx_v3->state_out, rwkv_ctx_v3->logits_out); - if (!testeval) - { - printf("\nError: RWKV Init Eval Failed!\n"); - } - - memcpy(logits.data(), rwkv_ctx_v3->logits_out, sizeof(float) * vocabsiz); - - if (rwkv_ctx_v3 == NULL) - { - return ModelLoadResult::FAIL; - } - return ModelLoadResult::SUCCESS; - } - } - else if (file_format == FileFormat::GPT2_1) - { - ModelLoadResult res = legacy_gpt2_model_load(params.model, gpt2_ctx_v1, vocab, file_format); - if(res==ModelLoadResult::FAIL) - { - fprintf(stderr, "%s: failed to load model from '%s'\n", __func__, params.model.c_str()); - return res; - } - else if(res==ModelLoadResult::RETRY_LOAD) - { - printf("\nTensor Transposition Detected! Retrying GPT-2 model loading..."); - return res; - } - - n_vocab = gpt2_ctx_v1.hparams.n_vocab; - - // determine the required inference memory per token: - legacy_gpt2_eval(gpt2_ctx_v1, params.n_threads, 0, { 0, 1, 2, 3 }, logits, mem_per_token, file_format); - return ModelLoadResult::SUCCESS; - } - else if (file_format == FileFormat::GPT2_2 || file_format==FileFormat::GPT2_3 || file_format==FileFormat::GPT2_4) - { - if(file_format==FileFormat::GPT2_4) - { - ModelLoadResult res = gpt2_model_load(params.model, gpt2_ctx_v3, vocab, file_format, inputs.gpulayers); - if(res==ModelLoadResult::FAIL) - { - fprintf(stderr, "%s: failed to load model from '%s'\n", __func__, params.model.c_str()); - return res; - } - else if(res==ModelLoadResult::RETRY_LOAD) - { - printf("\nTensor Transposition Detected! Retrying GPT-2 model loading..."); - return res; - } - - n_vocab = gpt2_ctx_v3.hparams.n_vocab; - - // determine the required inference memory per token: - gpt2_eval(gpt2_ctx_v3, params.n_threads, 0, { 0, 1, 2, 3 }, logits, mem_per_token, use_scratch); - return ModelLoadResult::SUCCESS; - } - else - { - //newer format has bit unshuffling - SetQuantsUnshuffled(file_format == FileFormat::GPT2_3); - - ModelLoadResult res = gpt2_v2_model_load(params.model, gpt2_ctx_v2, vocab, file_format, inputs.gpulayers); - if(res==ModelLoadResult::FAIL) - { - fprintf(stderr, "%s: failed to load model from '%s'\n", __func__, params.model.c_str()); - return res; - } - else if(res==ModelLoadResult::RETRY_LOAD) - { - printf("\nTensor Transposition Detected! 
Retrying GPT-2 model loading..."); - return res; - } - - n_vocab = gpt2_ctx_v2.hparams.n_vocab; - - // determine the required inference memory per token: - gpt2_v2_eval(gpt2_ctx_v2, params.n_threads, 0, { 0, 1, 2, 3 }, logits, mem_per_token, file_format); - return ModelLoadResult::SUCCESS; - } - } - else if (file_format == FileFormat::GPTJ_1 || file_format == FileFormat::GPTJ_2) - { - ModelLoadResult res = legacy_gptj_model_load(params.model, gptj_ctx_v1, vocab, file_format); - if(res==ModelLoadResult::FAIL) - { - fprintf(stderr, "%s: failed to load model from '%s'\n", __func__, params.model.c_str()); - return res; - } - else if(res==ModelLoadResult::RETRY_LOAD) - { - printf("\nTensor Transposition Detected! Retrying GPT-J model loading..."); - return res; - } - - n_vocab = gptj_ctx_v1.hparams.n_vocab; - - // determine the required inference memory per token: - legacy_gptj_eval(gptj_ctx_v1, params.n_threads, 0, { 0, 1, 2, 3 }, logits, mem_per_token, file_format); - - //if the logits are NAN or duplicated, it means the model is incompatible - if(logits.size()>0 && IsNanCheck(logits[0])) - { - printf("\nBad Logits detected! Retrying GPT-J model loading..."); - ggml_v1_free(gptj_ctx_v1.ctx); - return ModelLoadResult::RETRY_LOAD; - } - - return ModelLoadResult::SUCCESS; - } - else if(file_format == FileFormat::GPTJ_3 || file_format == FileFormat::GPTJ_4 || file_format == FileFormat::GPTJ_5) - { - if(file_format == FileFormat::GPTJ_5) - { - ModelLoadResult loadresult = gptj_model_load(params.model, gptj_ctx_v3, vocab, inputs.gpulayers); - if (loadresult == ModelLoadResult::FAIL) - { - fprintf(stderr, "%s: failed to load model from '%s'\n", __func__, params.model.c_str()); - return loadresult; - } - else if (loadresult == ModelLoadResult::RETRY_LOAD) - { - printf("\nTensor Transposition Detected! Retrying GPT-J model loading..."); - return loadresult; - } - - n_vocab = gptj_ctx_v3.hparams.n_vocab; - - // determine the required inference memory per token: - gptj_eval(gptj_ctx_v3, params.n_threads, 0, { 0, 1, 2, 3 }, logits, mem_per_token, use_scratch); - - //if the logits are NAN or duplicated, it means the model is incompatible - std::vector oldlogits(logits); - - //this is another hack because they change the library - we run the eval through the model - //twice and compare logits. if they give the same logits for different inputs, model is broken - gptj_eval(gptj_ctx_v3, params.n_threads, 0, {4, 5, 6, 7}, logits, mem_per_token, use_scratch); - - if(logits.size()>0 && (IsNanCheck(logits[0]) || LogitsDuplicated(oldlogits,logits))) - { - printf("\nBad Logits detected! Retrying GPT-J model loading..."); - ggml_free(gptj_ctx_v3.ctx); - return ModelLoadResult::RETRY_LOAD; - } - - return ModelLoadResult::SUCCESS; - } - else - { - //newer format has bit unshuffling - SetQuantsUnshuffled(file_format == FileFormat::GPTJ_4); - - ModelLoadResult loadresult = gptj_v2_model_load(params.model, gptj_ctx_v2, vocab, inputs.gpulayers); - if (loadresult == ModelLoadResult::FAIL) - { - fprintf(stderr, "%s: failed to load model from '%s'\n", __func__, params.model.c_str()); - return loadresult; - } - else if (loadresult == ModelLoadResult::RETRY_LOAD) - { - printf("\nTensor Transposition Detected! 
Retrying GPT-J model loading..."); - return loadresult; - } - - n_vocab = gptj_ctx_v2.hparams.n_vocab; - - // determine the required inference memory per token: - gptj_v2_eval(gptj_ctx_v2, params.n_threads, 0, { 0, 1, 2, 3 }, logits, mem_per_token); - - //if the logits are NAN or duplicated, it means the model is incompatible - std::vector oldlogits(logits); - - //this is another hack because they change the library - we run the eval through the model - //twice and compare logits. if they give the same logits for different inputs, model is broken - gptj_v2_eval(gptj_ctx_v2, params.n_threads, 0, {4, 5, 6, 7}, logits, mem_per_token); - - if(logits.size()>0 && (IsNanCheck(logits[0]) || LogitsDuplicated(oldlogits,logits))) - { - printf("\nBad Logits detected! Retrying GPT-J model loading..."); - ggml_v2_free(gptj_ctx_v2.ctx); - return ModelLoadResult::RETRY_LOAD; - } - - return ModelLoadResult::SUCCESS; - } - } - else if(file_format==FileFormat::NEOX_1 || file_format==FileFormat::NEOX_2 || file_format==FileFormat::NEOX_3 || file_format==FileFormat::NEOX_4 || file_format==FileFormat::NEOX_5|| file_format==FileFormat::NEOX_6|| file_format==FileFormat::NEOX_7) - { - if(file_format==FileFormat::NEOX_6|| file_format==FileFormat::NEOX_7) - { - ModelLoadResult res = gpt_neox_model_load(params.model, neox_ctx_v3, vocab, file_format, inputs.gpulayers); - if(res==ModelLoadResult::FAIL) - { - fprintf(stderr, "%s: failed to load model from '%s'\n", __func__, params.model.c_str()); - return res; - } - else if(res==ModelLoadResult::RETRY_LOAD) - { - printf("\nIncorrect Tensor Size Detected! Retrying GPT-NeoX model loading..."); - return res; - } - - n_vocab = neox_ctx_v3.hparams.n_vocab; - - // determine the required inference memory per token: - gpt_neox_eval(neox_ctx_v3, params.n_threads, 0, { 0, 1, 2, 3 }, logits, mem_per_token, use_scratch); - - return ModelLoadResult::SUCCESS; - } - else - { - //newer format has bit unshuffling - SetQuantsUnshuffled(file_format==FileFormat::NEOX_4 || file_format==FileFormat::NEOX_5); - - ModelLoadResult res = gpt_neox_v2_model_load(params.model, neox_ctx_v2, vocab, file_format); - if(res==ModelLoadResult::FAIL) - { - fprintf(stderr, "%s: failed to load model from '%s'\n", __func__, params.model.c_str()); - return res; - } - else if(res==ModelLoadResult::RETRY_LOAD) - { - printf("\nIncorrect Tensor Size Detected! Retrying GPT-NeoX model loading..."); - return res; - } - - n_vocab = neox_ctx_v2.hparams.n_vocab; - - // determine the required inference memory per token: - gpt_neox_v2_eval(neox_ctx_v2, params.n_threads, 0, { 0, 1, 2, 3 }, logits, mem_per_token); - - if(logits.size()>0 && file_format==FileFormat::NEOX_2 && !IsNanCheck(logits[0])) - { - //run the black magic eval to determine if it's redpajama. VERY UGLY HACK! - std::vector test_embd = ::gpt_tokenize(vocab, "1 2 3 4 5 6 7"); - auto orig_par_res = neox_ctx_v2.hparams.par_res; - neox_ctx_v2.hparams.par_res = 0; //test with residual false - gpt_neox_v2_eval(neox_ctx_v2, params.n_threads, 0, test_embd, logits, mem_per_token); - neox_ctx_v2.hparams.par_res = orig_par_res; - int topid = std::max_element(logits.begin(),logits.end())-logits.begin(); - std::string predicted = vocab.id_to_token[topid].c_str(); - auto findresult = predicted.find("8"); - if(findresult != std::string::npos && findresult<2) - { - printf("\n---\nOld RedPajama NeoX Detected! Switching to new format! 
(use_parallel_residual=False)\n"); - ggml_v2_free(neox_ctx_v2.ctx); - return ModelLoadResult::RETRY_LOAD; - } - } - - return ModelLoadResult::SUCCESS; - } - - } - else if(file_format==FileFormat::MPT_1) - { - bool res = mpt_model_load(params.model, mpt_ctx_v3, vocab, inputs.gpulayers); - if(res==false) - { - fprintf(stderr, "%s: failed to load model from '%s'\n", __func__, params.model.c_str()); - return ModelLoadResult::FAIL; - } - - n_vocab = mpt_ctx_v3.hparams.n_vocab; - - // determine the required inference memory per token: - mpt_eval(mpt_ctx_v3, params.n_threads, 0, { 0, 1, 2, 3 }, logits, false, mem_per_token, use_scratch); - return ModelLoadResult::SUCCESS; - } - else - { - printf("\nUnknown Model, cannot load.\n"); - return ModelLoadResult::FAIL; - } - -} - -bool gpttype_generate_abort() -{ - stopper_unused_tokens = remaining_tokens; - remaining_tokens = 0; - return true; -} - -int gpttype_token_count(const std::string & input) -{ - if(debugmode==1) - { - printf("\nFileFormat: %d, Tokenizing: %s",file_format ,input.c_str()); - } - std::vector toks; - TokenizeString(input, toks, file_format); - int tokcount = toks.size(); - if(debugmode==1) - { - printf("\nTokens Counted: %d\n",tokcount); - } - return tokcount; -} - -const std::string & gpttype_get_pending_output() -{ - concat_output_mtx.lock(); - concat_output_reader_copy = concat_output; - concat_output_mtx.unlock(); - return concat_output_reader_copy; -} - -generation_outputs gpttype_generate(const generation_inputs inputs, generation_outputs &output) -{ - concat_output_mtx.lock(); - concat_output = ""; - concat_output_reader_copy = ""; - concat_output_mtx.unlock(); - last_stop_reason = stop_reason::OUT_OF_TOKENS; - stop_sequence.clear(); - for(int x=0;x embd_inp; - TokenizeString(params.prompt, embd_inp, file_format); - - //truncate to front of the prompt if its too long - int32_t nctx = params.n_ctx; - - if (embd_inp.size() + params.n_predict > nctx) - { - int offset = embd_inp.size() - nctx + params.n_predict; - embd_inp = std::vector(embd_inp.begin() + offset, embd_inp.end()); - } - - //determine how much npast we have to rewind from the current state - std::vector embd; - - int last_n_size = params.repeat_last_n; - last_n_tokens.resize(last_n_size); - - std::fill(last_n_tokens.begin(), last_n_tokens.end(), 0); - n_past = 0; - - if (file_format == FileFormat::RWKV_1 || file_format==FileFormat::RWKV_2) - { - ContextFastForward(current_context_tokens, embd_inp, n_past, last_n_tokens, nctx, smartcontext, false, true); - } - else - { - ContextFastForward(current_context_tokens, embd_inp, n_past, last_n_tokens, nctx, smartcontext, useSmartContext, false); - } - - //if using BLAS and prompt is big enough, switch to single thread and use a huge batch - bool approved_format = !(file_format == FileFormat::BADFORMAT || - file_format == FileFormat::GPT2_1 || - file_format == FileFormat::GPTJ_1 || - file_format == FileFormat::GPTJ_2 || - file_format == FileFormat::RWKV_1 || - file_format==FileFormat::RWKV_2); - bool blasmode = (approved_format && embd_inp.size() >= 32 && ggml_cpu_has_blas() && blasbatchsize!=-1); - // bool blasmode = false; - int original_batch = params.n_batch; - int original_threads = params.n_threads; - if (blasmode) - { - //for non llama, limit to 256 - int bbs = blasbatchsize; - if (file_format != FileFormat::GGML && file_format != FileFormat::GGHF && file_format != FileFormat::GGJT && file_format != FileFormat::GGJT_2 && file_format != FileFormat::GGJT_3 && file_format != FileFormat::GGUF_LLAMA && 
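// Illustrative scenario (numbers assumed): with a 512-token prompt, CPU-only BLAS
// available and blasbatchsize = 512, blasmode below switches prompt ingestion to one
// large batch on a single thread (capped at 256 for non-llama formats); the original
// batch size and thread count are restored once sampling starts.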
file_format!=FileFormat::GGUF_FALCON) - { - bbs = (blasbatchsize > 256 ? 256 : blasbatchsize); - } - - params.n_batch = bbs; //received reports of 1024 and above crashing on some models - if(!ggml_cpu_has_gpublas()) - { - //does not limit here for gguf anymore. this is kept for older models. - //new models will override threads inside decode fn. - params.n_threads = 1; - params.n_threads_batch = 1; - } - else - { - params.n_threads = n_blasthreads; - params.n_threads_batch = n_blasthreads; - } - } - - current_context_tokens.resize(n_past); - - remaining_tokens = params.n_predict; - stopper_unused_tokens = 0; - int input_consumed = 0; - std::mt19937 rng(params.seed); - - //prepare sampler order - std::vector sampler_order; - if(inputs.sampler_len<=0) //list by value - { - sampler_order = { - KCPP_SAMPLER_REP_PEN, - KCPP_SAMPLER_TOP_K, - KCPP_SAMPLER_TOP_A, - KCPP_SAMPLER_TFS, - KCPP_SAMPLER_TYP, - KCPP_SAMPLER_TOP_P, - KCPP_SAMPLER_TEMP - }; - } - else - { - for(int i=0;istate_in = nullptr; - } - else - { - rwkv_ctx_v3->state_in = nullptr; - } - } - else - { - if (file_format == FileFormat::RWKV_1) - { - rwkv_ctx_v2->state_in = rwkv_ctx_v2->state_out; - } - else - { - rwkv_ctx_v3->state_in = rwkv_ctx_v3->state_out; - } - - //if it's empty, push in the final previous token - if(embd_inp.size()==0 && current_context_tokens.size()>0) - { - embd_inp.push_back(current_context_tokens[current_context_tokens.size()-1]); - current_context_tokens.pop_back(); - } - } - } - - if(n_vocab<=0) - { - printf("\nWarning! n_vocab is invalid, maybe bad format!"); - } - - //prepare banned tokens - if(banned_token_ids.size()==0 && banned_tokens.size()>0) - { - printf("\n[First Run] Banning %zu token sequences...",banned_tokens.size()); - for(int v=0;v 0) - { - gpt_vocab::id id = 0; - // predict - unsigned int embdsize = embd.size(); - //print progress - if (!startedsampling && debugmode!=-1) - { - printf("\rProcessing Prompt%s (%d / %zu tokens)", (blasmode ? 
" [BLAS]" : ""), input_consumed, embd_inp.size()); - } - fflush(stdout); - - if (embdsize > 0) - { - - bool evalres = false; - - if (file_format == FileFormat::GGML || file_format == FileFormat::GGHF || file_format == FileFormat::GGJT || file_format == FileFormat::GGJT_2) - { - evalres = (llama_v2_eval(llama_ctx_v2, embd.data(), embdsize, n_past, params.n_threads)==0); - } - else if(file_format == FileFormat::GGJT_3) - { - evalres = (llama_v3_eval(llama_ctx_v3, embd.data(), embdsize, n_past, params.n_threads)==0); - } - else if(file_format == FileFormat::GGUF_LLAMA || file_format==FileFormat::GGUF_FALCON) - { - evalres = (llama_eval(llama_ctx_v4, embd.data(), embdsize, n_past)==0); - } - else if(file_format==FileFormat::RWKV_1 || file_format==FileFormat::RWKV_2) - { - if (file_format == FileFormat::RWKV_1) - { - evalres = rwkv_v2_eval(rwkv_ctx_v2, embd[0], rwkv_ctx_v2->state_in, rwkv_ctx_v2->state_out, rwkv_ctx_v2->logits_out); - memcpy(logits.data(), rwkv_ctx_v2->logits_out, sizeof(float) * rwkv_vocab.size()); - rwkv_ctx_v2->state_in = rwkv_ctx_v2->state_out; - } - else - { - if(embd.size()>1) - { - evalres = rwkv_eval_sequence(rwkv_ctx_v3, params.n_threads, (uint32_t*)embd.data(), embd.size(), rwkv_ctx_v3->state_in, rwkv_ctx_v3->state_out, rwkv_ctx_v3->logits_out); - } - else - { - bool ignoreLogits = (!startedsampling && ((int)embd_inp.size() > input_consumed + 2)); - evalres = rwkv_eval(rwkv_ctx_v3, params.n_threads, embd[0], rwkv_ctx_v3->state_in, rwkv_ctx_v3->state_out, ignoreLogits?nullptr:rwkv_ctx_v3->logits_out); - } - - memcpy(logits.data(), rwkv_ctx_v3->logits_out, sizeof(float) * rwkv_vocab.size()); - rwkv_ctx_v3->state_in = rwkv_ctx_v3->state_out; - } - } - else if(file_format==FileFormat::GPT2_1) - { - evalres = legacy_gpt2_eval(gpt2_ctx_v1, params.n_threads, n_past, embd, logits, mem_per_token, file_format); - } - else if(file_format==FileFormat::GPT2_2 || file_format==FileFormat::GPT2_3) - { - evalres = gpt2_v2_eval(gpt2_ctx_v2, params.n_threads, n_past, embd, logits, mem_per_token, file_format); - } - else if(file_format==FileFormat::GPT2_4) - { - evalres = gpt2_eval(gpt2_ctx_v3, params.n_threads, n_past, embd, logits, mem_per_token, use_scratch); - } - else if(file_format==FileFormat::NEOX_1 || file_format == FileFormat::NEOX_2 || file_format == FileFormat::NEOX_3 || file_format==FileFormat::NEOX_4 || file_format==FileFormat::NEOX_5) - { - evalres = gpt_neox_v2_eval(neox_ctx_v2, params.n_threads, n_past, embd, logits, mem_per_token); - } - else if(file_format==FileFormat::NEOX_6|| file_format==FileFormat::NEOX_7) - { - evalres = gpt_neox_eval(neox_ctx_v3, params.n_threads, n_past, embd, logits, mem_per_token, use_scratch); - } - else if(file_format==FileFormat::GPTJ_1 || file_format==FileFormat::GPTJ_2) - { - evalres = legacy_gptj_eval(gptj_ctx_v1, params.n_threads, n_past, embd, logits, mem_per_token, file_format); - } - else if(file_format==FileFormat::GPTJ_3 || file_format==FileFormat::GPTJ_4) - { - evalres = gptj_v2_eval(gptj_ctx_v2, params.n_threads, n_past, embd, logits, mem_per_token); - } - else if(file_format==FileFormat::GPTJ_5) - { - evalres = gptj_eval(gptj_ctx_v3, params.n_threads, n_past, embd, logits, mem_per_token, use_scratch); - } - else if(file_format==FileFormat::MPT_1) - { - evalres = mpt_eval(mpt_ctx_v3, params.n_threads, n_past, embd, logits, false, mem_per_token, use_scratch); - } - else - { - printf("\nCannot find eval function\n"); - } - - if (!evalres) - { - fprintf(stderr, "Failed to predict\n"); - snprintf(output.text, sizeof(output.text), 
"%s", ""); - output.status = 0; - generation_finished = true; - return output; - } - } - - n_past += embd.size(); - embd.clear(); - if ((int)embd_inp.size() <= input_consumed) - { - // out of user input, sample next token - const float top_k = params.top_k; - const float top_p = params.top_p; - const float temp = params.temp; - const float top_a = inputs.top_a; - const float repeat_penalty = params.repeat_penalty; - const float typical_p = params.typical_p; - const float tfs_z = params.tfs_z; - - if (!startedsampling) - { - startedsampling = true; - params.n_batch = original_batch; - params.n_threads = original_threads; - time1 = timer_check(); - timer_start(); - if(debugmode!=-1) - { - printf("\n"); - } - } - - unsigned int eosID = GetEosID(file_format, n_vocab); - float * logitsPtr; - float lowestLogit = 0; - int btsize = banned_token_ids.size(); - if(file_format == FileFormat::GGML || file_format == FileFormat::GGHF || file_format == FileFormat::GGJT || file_format == FileFormat::GGJT_2 || file_format == FileFormat::GGJT_3 || file_format == FileFormat::GGUF_LLAMA || file_format==FileFormat::GGUF_FALCON) - { - if(file_format == FileFormat::GGUF_LLAMA || file_format==FileFormat::GGUF_FALCON) - { - logitsPtr = llama_get_logits(llama_ctx_v4); - } - else if(file_format == FileFormat::GGJT_3) - { - logitsPtr = llama_v3_get_logits(llama_ctx_v3); - } - else - { - logitsPtr = llama_v2_get_logits(llama_ctx_v2); - } - lowestLogit = LowestLogit(logitsPtr,n_vocab); - } - else - { - logitsPtr = logits.data(); - lowestLogit = LowestLogit(logits); - } - - if (!inputs.unban_tokens_rt) - { - // set the logit of the eos token to very low to avoid sampling it - logitsPtr[eosID] = lowestLogit; - } - if(btsize>0) - { - for(int t=0;t0) - { - printf(" ["); - bool firstloop = true; - for (auto & pick : top_picks) - { - if (!firstloop) - { - printf(" "); - } - firstloop = false; - std::string tokenizedstr = FileFormatTokenizeID(pick.id, file_format); - ::utreplace(tokenizedstr, "\n", "\\n"); - printf("(%s %.2f%%)", RemoveBell(tokenizedstr).c_str(), pick.p*100); - } - printf("]\n"); - } - - if(inputs.unban_tokens_rt && id==eosID) - { - stopper_unused_tokens = remaining_tokens; - if(debugmode!=-1) - { - printf("\n(EOS token triggered!)"); - } - remaining_tokens = 0; - last_stop_reason = stop_reason::EOS_TOKEN; - } - - for (const auto &matched : stop_sequence) - { - if (concat_output.find(matched) != std::string::npos) - { - stopper_unused_tokens = remaining_tokens; - remaining_tokens = 0; - if(debugmode!=-1) - { - printf("\n(Stop sequence triggered: <%s>)", matched.c_str()); - } - last_stop_reason = stop_reason::CUSTOM_STOPPER; - break; - } - } - fflush(stdout); - } - else - { - // some user input remains from prompt or interaction, forward it to processing - while ((int)embd_inp.size() > input_consumed) - { - embd.push_back(embd_inp[input_consumed]); - last_n_tokens.erase(last_n_tokens.begin()); - last_n_tokens.push_back(embd_inp[input_consumed]); - current_context_tokens.push_back(embd_inp[input_consumed]); - ++input_consumed; - if ((int)embd.size() >= params.n_batch) - { - break; - } - } - } - } - time2 = timer_check(); - float pt1 = (time1*1000.0/(embd_inp.size()==0?1:embd_inp.size())); - int realnpredict = params.n_predict-stopper_unused_tokens; - float pt2 = (time2*1000.0/(realnpredict==0?1:realnpredict)); - float tokens_per_second = (realnpredict == 0 ? 
0 : realnpredict / (time1 + time2)); - printf("\nTime Taken - Processing:%.1fs (%.0fms/T), Generation:%.1fs (%.0fms/T), Total:%.1fs (%.1fT/s)", time1, pt1, time2, pt2, (time1 + time2), tokens_per_second); - fflush(stdout); - output.status = 1; - generation_finished = true; - last_eval_time = pt2; - last_process_time = pt1; - last_token_count = realnpredict; - snprintf(output.text, sizeof(output.text), "%s", concat_output.c_str()); - - return output; -} diff --git a/spaces/Jamkonams/AutoGPT/tests/browse_tests.py b/spaces/Jamkonams/AutoGPT/tests/browse_tests.py deleted file mode 100644 index f896e7dd751b1b661d5e989909448b7e182eab69..0000000000000000000000000000000000000000 --- a/spaces/Jamkonams/AutoGPT/tests/browse_tests.py +++ /dev/null @@ -1,26 +0,0 @@ -import os -import sys -import unittest - -from bs4 import BeautifulSoup - -sys.path.append(os.path.abspath("../scripts")) - -from browse import extract_hyperlinks - - -class TestBrowseLinks(unittest.TestCase): - def test_extract_hyperlinks(self): - body = """ - - Google - Foo -
    Some other crap
    - - """ - soup = BeautifulSoup(body, "html.parser") - links = extract_hyperlinks(soup, "http://example.com") - self.assertEqual( - links, - [("Google", "https://google.com"), ("Foo", "http://example.com/foo.html")], - ) diff --git a/spaces/Jikiwi/sovits-models/data_utils.py b/spaces/Jikiwi/sovits-models/data_utils.py deleted file mode 100644 index 7c76fd1c3a45b8304d916161718c7763874f3e35..0000000000000000000000000000000000000000 --- a/spaces/Jikiwi/sovits-models/data_utils.py +++ /dev/null @@ -1,155 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data - -import modules.commons as commons -import utils -from modules.mel_processing import spectrogram_torch, spec_to_mel_torch -from utils import load_wav_to_torch, load_filepaths_and_text - -# import h5py - - -"""Multi speaker version""" - - -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - - def __init__(self, audiopaths, hparams, all_in_mem: bool = False): - self.audiopaths = load_filepaths_and_text(audiopaths) - self.max_wav_value = hparams.data.max_wav_value - self.sampling_rate = hparams.data.sampling_rate - self.filter_length = hparams.data.filter_length - self.hop_length = hparams.data.hop_length - self.win_length = hparams.data.win_length - self.sampling_rate = hparams.data.sampling_rate - self.use_sr = hparams.train.use_sr - self.spec_len = hparams.train.max_speclen - self.spk_map = hparams.spk - - random.seed(1234) - random.shuffle(self.audiopaths) - - self.all_in_mem = all_in_mem - if self.all_in_mem: - self.cache = [self.get_audio(p[0]) for p in self.audiopaths] - - def get_audio(self, filename): - filename = filename.replace("\\", "/") - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - - # Ideally, all data generated after Mar 25 should have .spec.pt - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - - spk = filename.split("/")[-2] - spk = torch.LongTensor([self.spk_map[spk]]) - - f0 = np.load(filename + ".f0.npy") - f0, uv = utils.interpolate_f0(f0) - f0 = torch.FloatTensor(f0) - uv = torch.FloatTensor(uv) - - c = torch.load(filename+ ".soft.pt") - c = utils.repeat_expand_2d(c.squeeze(0), f0.shape[0]) - - - lmin = min(c.size(-1), spec.size(-1)) - assert abs(c.size(-1) - spec.size(-1)) < 3, (c.size(-1), spec.size(-1), f0.shape, filename) - assert abs(audio_norm.shape[1]-lmin * self.hop_length) < 3 * self.hop_length - spec, c, f0, uv = spec[:, :lmin], c[:, :lmin], f0[:lmin], uv[:lmin] - audio_norm = audio_norm[:, :lmin * self.hop_length] - - return c, f0, spec, audio_norm, spk, uv - - def random_slice(self, c, f0, spec, audio_norm, spk, uv): - # if spec.shape[1] < 30: - # print("skip too short audio:", filename) - # return None - if spec.shape[1] > 800: - start = random.randint(0, spec.shape[1]-800) - end = start + 790 - spec, c, f0, uv = spec[:, start:end], c[:, start:end], f0[start:end], uv[start:end] - 
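# Illustrative note (hop_length value assumed; it comes from hparams in practice):
# the waveform below is cropped with the same window scaled by hop_length, so frame i
# of the spectrogram keeps lining up with samples [i*hop_length, (i+1)*hop_length) of
# the audio. E.g. with hop_length = 512, frames 100:890 of spec correspond to samples
# 51200:455680 of audio_norm.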
audio_norm = audio_norm[:, start * self.hop_length : end * self.hop_length] - - return c, f0, spec, audio_norm, spk, uv - - def __getitem__(self, index): - if self.all_in_mem: - return self.random_slice(*self.cache[index]) - else: - return self.random_slice(*self.get_audio(self.audiopaths[index][0])) - - def __len__(self): - return len(self.audiopaths) - - -class TextAudioCollate: - - def __call__(self, batch): - batch = [b for b in batch if b is not None] - - input_lengths, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[0].shape[1] for x in batch]), - dim=0, descending=True) - - max_c_len = max([x[0].size(1) for x in batch]) - max_wav_len = max([x[3].size(1) for x in batch]) - - lengths = torch.LongTensor(len(batch)) - - c_padded = torch.FloatTensor(len(batch), batch[0][0].shape[0], max_c_len) - f0_padded = torch.FloatTensor(len(batch), max_c_len) - spec_padded = torch.FloatTensor(len(batch), batch[0][2].shape[0], max_c_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - spkids = torch.LongTensor(len(batch), 1) - uv_padded = torch.FloatTensor(len(batch), max_c_len) - - c_padded.zero_() - spec_padded.zero_() - f0_padded.zero_() - wav_padded.zero_() - uv_padded.zero_() - - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - c = row[0] - c_padded[i, :, :c.size(1)] = c - lengths[i] = c.size(1) - - f0 = row[1] - f0_padded[i, :f0.size(0)] = f0 - - spec = row[2] - spec_padded[i, :, :spec.size(1)] = spec - - wav = row[3] - wav_padded[i, :, :wav.size(1)] = wav - - spkids[i, 0] = row[4] - - uv = row[5] - uv_padded[i, :uv.size(0)] = uv - - return c_padded, f0_padded, spec_padded, wav_padded, spkids, lengths, uv_padded diff --git a/spaces/Kayson/InstructDiffusion/stable_diffusion/scripts/sample_diffusion.py b/spaces/Kayson/InstructDiffusion/stable_diffusion/scripts/sample_diffusion.py deleted file mode 100644 index 876fe3c3642fcc8c7209e4f763c0134166615f78..0000000000000000000000000000000000000000 --- a/spaces/Kayson/InstructDiffusion/stable_diffusion/scripts/sample_diffusion.py +++ /dev/null @@ -1,313 +0,0 @@ -import argparse, os, sys, glob, datetime, yaml -import torch -import time -import numpy as np -from tqdm import trange - -from omegaconf import OmegaConf -from PIL import Image - -from ldm.models.diffusion.ddim import DDIMSampler -from ldm.util import instantiate_from_config - -rescale = lambda x: (x + 1.) / 2. - -def custom_to_pil(x): - x = x.detach().cpu() - x = torch.clamp(x, -1., 1.) - x = (x + 1.) / 2. - x = x.permute(1, 2, 0).numpy() - x = (255 * x).astype(np.uint8) - x = Image.fromarray(x) - if not x.mode == "RGB": - x = x.convert("RGB") - return x - - -def custom_to_np(x): - # saves the batch in adm style as in https://github.com/openai/guided-diffusion/blob/main/scripts/image_sample.py - sample = x.detach().cpu() - sample = ((sample + 1) * 127.5).clamp(0, 255).to(torch.uint8) - sample = sample.permute(0, 2, 3, 1) - sample = sample.contiguous() - return sample - - -def logs2pil(logs, keys=["sample"]): - imgs = dict() - for k in logs: - try: - if len(logs[k].shape) == 4: - img = custom_to_pil(logs[k][0, ...]) - elif len(logs[k].shape) == 3: - img = custom_to_pil(logs[k]) - else: - print(f"Unknown format for key {k}. 
") - img = None - except: - img = None - imgs[k] = img - return imgs - - -@torch.no_grad() -def convsample(model, shape, return_intermediates=True, - verbose=True, - make_prog_row=False): - - - if not make_prog_row: - return model.p_sample_loop(None, shape, - return_intermediates=return_intermediates, verbose=verbose) - else: - return model.progressive_denoising( - None, shape, verbose=True - ) - - -@torch.no_grad() -def convsample_ddim(model, steps, shape, eta=1.0 - ): - ddim = DDIMSampler(model) - bs = shape[0] - shape = shape[1:] - samples, intermediates = ddim.sample(steps, batch_size=bs, shape=shape, eta=eta, verbose=False,) - return samples, intermediates - - -@torch.no_grad() -def make_convolutional_sample(model, batch_size, vanilla=False, custom_steps=None, eta=1.0,): - - - log = dict() - - shape = [batch_size, - model.model.diffusion_model.in_channels, - model.model.diffusion_model.image_size, - model.model.diffusion_model.image_size] - - with model.ema_scope("Plotting"): - t0 = time.time() - if vanilla: - sample, progrow = convsample(model, shape, - make_prog_row=True) - else: - sample, intermediates = convsample_ddim(model, steps=custom_steps, shape=shape, - eta=eta) - - t1 = time.time() - - x_sample = model.decode_first_stage(sample) - - log["sample"] = x_sample - log["time"] = t1 - t0 - log['throughput'] = sample.shape[0] / (t1 - t0) - print(f'Throughput for this batch: {log["throughput"]}') - return log - -def run(model, logdir, batch_size=50, vanilla=False, custom_steps=None, eta=None, n_samples=50000, nplog=None): - if vanilla: - print(f'Using Vanilla DDPM sampling with {model.num_timesteps} sampling steps.') - else: - print(f'Using DDIM sampling with {custom_steps} sampling steps and eta={eta}') - - - tstart = time.time() - n_saved = len(glob.glob(os.path.join(logdir,'*.png')))-1 - # path = logdir - if model.cond_stage_model is None: - all_images = [] - - print(f"Running unconditional sampling for {n_samples} samples") - for _ in trange(n_samples // batch_size, desc="Sampling Batches (unconditional)"): - logs = make_convolutional_sample(model, batch_size=batch_size, - vanilla=vanilla, custom_steps=custom_steps, - eta=eta) - n_saved = save_logs(logs, logdir, n_saved=n_saved, key="sample") - all_images.extend([custom_to_np(logs["sample"])]) - if n_saved >= n_samples: - print(f'Finish after generating {n_saved} samples') - break - all_img = np.concatenate(all_images, axis=0) - all_img = all_img[:n_samples] - shape_str = "x".join([str(x) for x in all_img.shape]) - nppath = os.path.join(nplog, f"{shape_str}-samples.npz") - np.savez(nppath, all_img) - - else: - raise NotImplementedError('Currently only sampling for unconditional models supported.') - - print(f"sampling of {n_saved} images finished in {(time.time() - tstart) / 60.:.2f} minutes.") - - -def save_logs(logs, path, n_saved=0, key="sample", np_path=None): - for k in logs: - if k == key: - batch = logs[key] - if np_path is None: - for x in batch: - img = custom_to_pil(x) - imgpath = os.path.join(path, f"{key}_{n_saved:06}.png") - img.save(imgpath) - n_saved += 1 - else: - npbatch = custom_to_np(batch) - shape_str = "x".join([str(x) for x in npbatch.shape]) - nppath = os.path.join(np_path, f"{n_saved}-{shape_str}-samples.npz") - np.savez(nppath, npbatch) - n_saved += npbatch.shape[0] - return n_saved - - -def get_parser(): - parser = argparse.ArgumentParser() - parser.add_argument( - "-r", - "--resume", - type=str, - nargs="?", - help="load from logdir or checkpoint in logdir", - ) - parser.add_argument( - "-n", - 
"--n_samples", - type=int, - nargs="?", - help="number of samples to draw", - default=50000 - ) - parser.add_argument( - "-e", - "--eta", - type=float, - nargs="?", - help="eta for ddim sampling (0.0 yields deterministic sampling)", - default=1.0 - ) - parser.add_argument( - "-v", - "--vanilla_sample", - default=False, - action='store_true', - help="vanilla sampling (default option is DDIM sampling)?", - ) - parser.add_argument( - "-l", - "--logdir", - type=str, - nargs="?", - help="extra logdir", - default="none" - ) - parser.add_argument( - "-c", - "--custom_steps", - type=int, - nargs="?", - help="number of steps for ddim and fastdpm sampling", - default=50 - ) - parser.add_argument( - "--batch_size", - type=int, - nargs="?", - help="the bs", - default=10 - ) - return parser - - -def load_model_from_config(config, sd): - model = instantiate_from_config(config) - model.load_state_dict(sd,strict=False) - model.cuda() - model.eval() - return model - - -def load_model(config, ckpt, gpu, eval_mode): - if ckpt: - print(f"Loading model from {ckpt}") - pl_sd = torch.load(ckpt, map_location="cpu") - global_step = pl_sd["global_step"] - else: - pl_sd = {"state_dict": None} - global_step = None - model = load_model_from_config(config.model, - pl_sd["state_dict"]) - - return model, global_step - - -if __name__ == "__main__": - now = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S") - sys.path.append(os.getcwd()) - command = " ".join(sys.argv) - - parser = get_parser() - opt, unknown = parser.parse_known_args() - ckpt = None - - if not os.path.exists(opt.resume): - raise ValueError("Cannot find {}".format(opt.resume)) - if os.path.isfile(opt.resume): - # paths = opt.resume.split("/") - try: - logdir = '/'.join(opt.resume.split('/')[:-1]) - # idx = len(paths)-paths[::-1].index("logs")+1 - print(f'Logdir is {logdir}') - except ValueError: - paths = opt.resume.split("/") - idx = -2 # take a guess: path/to/logdir/checkpoints/model.ckpt - logdir = "/".join(paths[:idx]) - ckpt = opt.resume - else: - assert os.path.isdir(opt.resume), f"{opt.resume} is not a directory" - logdir = opt.resume.rstrip("/") - ckpt = os.path.join(logdir, "model.ckpt") - - base_configs = sorted(glob.glob(os.path.join(logdir, "config.yaml"))) - opt.base = base_configs - - configs = [OmegaConf.load(cfg) for cfg in opt.base] - cli = OmegaConf.from_dotlist(unknown) - config = OmegaConf.merge(*configs, cli) - - gpu = True - eval_mode = True - - if opt.logdir != "none": - locallog = logdir.split(os.sep)[-1] - if locallog == "": locallog = logdir.split(os.sep)[-2] - print(f"Switching logdir from '{logdir}' to '{os.path.join(opt.logdir, locallog)}'") - logdir = os.path.join(opt.logdir, locallog) - - print(config) - - model, global_step = load_model(config, ckpt, gpu, eval_mode) - print(f"global step: {global_step}") - print(75 * "=") - print("logging to:") - logdir = os.path.join(logdir, "samples", f"{global_step:08}", now) - imglogdir = os.path.join(logdir, "img") - numpylogdir = os.path.join(logdir, "numpy") - - os.makedirs(imglogdir) - os.makedirs(numpylogdir) - print(logdir) - print(75 * "=") - - # write config out - sampling_file = os.path.join(logdir, "sampling_config.yaml") - sampling_conf = vars(opt) - - with open(sampling_file, 'w') as f: - yaml.dump(sampling_conf, f, default_flow_style=False) - print(sampling_conf) - - - run(model, imglogdir, eta=opt.eta, - vanilla=opt.vanilla_sample, n_samples=opt.n_samples, custom_steps=opt.custom_steps, - batch_size=opt.batch_size, nplog=numpylogdir) - - print("done.") diff --git 
a/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/assigners/max_iou_assigner.py b/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/assigners/max_iou_assigner.py deleted file mode 100644 index 8ecab9b55b1bd5522184f5b6a037220a3fc8d421..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/assigners/max_iou_assigner.py +++ /dev/null @@ -1,245 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Optional, Union - -import torch -from mmengine.structures import InstanceData -from torch import Tensor - -from mmdet.registry import TASK_UTILS -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -@TASK_UTILS.register_module() -class MaxIoUAssigner(BaseAssigner): - """Assign a corresponding gt bbox or background to each bbox. - - Each proposals will be assigned with `-1`, or a semi-positive integer - indicating the ground truth index. - - - -1: negative sample, no assigned gt - - semi-positive integer: positive sample, index (0-based) of assigned gt - - Args: - pos_iou_thr (float): IoU threshold for positive bboxes. - neg_iou_thr (float or tuple): IoU threshold for negative bboxes. - min_pos_iou (float): Minimum iou for a bbox to be considered as a - positive bbox. Positive samples can have smaller IoU than - pos_iou_thr due to the 4th step (assign max IoU sample to each gt). - `min_pos_iou` is set to avoid assigning bboxes that have extremely - small iou with GT as positive samples. It brings about 0.3 mAP - improvements in 1x schedule but does not affect the performance of - 3x schedule. More comparisons can be found in - `PR #7464 `_. - gt_max_assign_all (bool): Whether to assign all bboxes with the same - highest overlap with some gt to that gt. - ignore_iof_thr (float): IoF threshold for ignoring bboxes (if - `gt_bboxes_ignore` is specified). Negative values mean not - ignoring any bboxes. - ignore_wrt_candidates (bool): Whether to compute the iof between - `bboxes` and `gt_bboxes_ignore`, or the contrary. - match_low_quality (bool): Whether to allow low quality matches. This is - usually allowed for RPN and single stage detectors, but not allowed - in the second stage. Details are demonstrated in Step 4. - gpu_assign_thr (int): The upper bound of the number of GT for GPU - assign. When the number of gt is above this threshold, will assign - on CPU device. Negative values mean not assign on CPU. - iou_calculator (dict): Config of overlaps Calculator. - """ - - def __init__(self, - pos_iou_thr: float, - neg_iou_thr: Union[float, tuple], - min_pos_iou: float = .0, - gt_max_assign_all: bool = True, - ignore_iof_thr: float = -1, - ignore_wrt_candidates: bool = True, - match_low_quality: bool = True, - gpu_assign_thr: float = -1, - iou_calculator: dict = dict(type='mmdet.BboxOverlaps2D')): - self.pos_iou_thr = pos_iou_thr - self.neg_iou_thr = neg_iou_thr - self.min_pos_iou = min_pos_iou - self.gt_max_assign_all = gt_max_assign_all - self.ignore_iof_thr = ignore_iof_thr - self.ignore_wrt_candidates = ignore_wrt_candidates - self.gpu_assign_thr = gpu_assign_thr - self.match_low_quality = match_low_quality - self.iou_calculator = TASK_UTILS.build(iou_calculator) - - def assign(self, - pred_instances: InstanceData, - gt_instances: InstanceData, - gt_instances_ignore: Optional[InstanceData] = None, - **kwargs) -> AssignResult: - """Assign gt to bboxes. - - This method assign a gt bbox to every bbox (proposal/anchor), each bbox - will be assigned with -1, or a semi-positive number. 
-1 means negative - sample, semi-positive number is the index (0-based) of assigned gt. - The assignment is done in following steps, the order matters. - - 1. assign every bbox to the background - 2. assign proposals whose iou with all gts < neg_iou_thr to 0 - 3. for each bbox, if the iou with its nearest gt >= pos_iou_thr, - assign it to that bbox - 4. for each gt bbox, assign its nearest proposals (may be more than - one) to itself - - Args: - pred_instances (:obj:`InstanceData`): Instances of model - predictions. It includes ``priors``, and the priors can - be anchors or points, or the bboxes predicted by the - previous stage, has shape (n, 4). The bboxes predicted by - the current model or stage will be named ``bboxes``, - ``labels``, and ``scores``, the same as the ``InstanceData`` - in other places. - gt_instances (:obj:`InstanceData`): Ground truth of instance - annotations. It usually includes ``bboxes``, with shape (k, 4), - and ``labels``, with shape (k, ). - gt_instances_ignore (:obj:`InstanceData`, optional): Instances - to be ignored during training. It includes ``bboxes`` - attribute data that is ignored during training and testing. - Defaults to None. - - Returns: - :obj:`AssignResult`: The assign result. - - Example: - >>> from mmengine.structures import InstanceData - >>> self = MaxIoUAssigner(0.5, 0.5) - >>> pred_instances = InstanceData() - >>> pred_instances.priors = torch.Tensor([[0, 0, 10, 10], - ... [10, 10, 20, 20]]) - >>> gt_instances = InstanceData() - >>> gt_instances.bboxes = torch.Tensor([[0, 0, 10, 9]]) - >>> gt_instances.labels = torch.Tensor([0]) - >>> assign_result = self.assign(pred_instances, gt_instances) - >>> expected_gt_inds = torch.LongTensor([1, 0]) - >>> assert torch.all(assign_result.gt_inds == expected_gt_inds) - """ - gt_bboxes = gt_instances.bboxes - priors = pred_instances.priors - gt_labels = gt_instances.labels - if gt_instances_ignore is not None: - gt_bboxes_ignore = gt_instances_ignore.bboxes - else: - gt_bboxes_ignore = None - - assign_on_cpu = True if (self.gpu_assign_thr > 0) and ( - gt_bboxes.shape[0] > self.gpu_assign_thr) else False - # compute overlap and assign gt on CPU when number of GT is large - if assign_on_cpu: - device = priors.device - priors = priors.cpu() - gt_bboxes = gt_bboxes.cpu() - gt_labels = gt_labels.cpu() - if gt_bboxes_ignore is not None: - gt_bboxes_ignore = gt_bboxes_ignore.cpu() - - overlaps = self.iou_calculator(gt_bboxes, priors) - - if (self.ignore_iof_thr > 0 and gt_bboxes_ignore is not None - and gt_bboxes_ignore.numel() > 0 and priors.numel() > 0): - if self.ignore_wrt_candidates: - ignore_overlaps = self.iou_calculator( - priors, gt_bboxes_ignore, mode='iof') - ignore_max_overlaps, _ = ignore_overlaps.max(dim=1) - else: - ignore_overlaps = self.iou_calculator( - gt_bboxes_ignore, priors, mode='iof') - ignore_max_overlaps, _ = ignore_overlaps.max(dim=0) - overlaps[:, ignore_max_overlaps > self.ignore_iof_thr] = -1 - - assign_result = self.assign_wrt_overlaps(overlaps, gt_labels) - if assign_on_cpu: - assign_result.gt_inds = assign_result.gt_inds.to(device) - assign_result.max_overlaps = assign_result.max_overlaps.to(device) - if assign_result.labels is not None: - assign_result.labels = assign_result.labels.to(device) - return assign_result - - def assign_wrt_overlaps(self, overlaps: Tensor, - gt_labels: Tensor) -> AssignResult: - """Assign w.r.t. the overlaps of priors with gts. - - Args: - overlaps (Tensor): Overlaps between k gt_bboxes and n bboxes, - shape(k, n). 
- gt_labels (Tensor): Labels of k gt_bboxes, shape (k, ). - - Returns: - :obj:`AssignResult`: The assign result. - """ - num_gts, num_bboxes = overlaps.size(0), overlaps.size(1) - - # 1. assign -1 by default - assigned_gt_inds = overlaps.new_full((num_bboxes, ), - -1, - dtype=torch.long) - - if num_gts == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - max_overlaps = overlaps.new_zeros((num_bboxes, )) - assigned_labels = overlaps.new_full((num_bboxes, ), - -1, - dtype=torch.long) - if num_gts == 0: - # No truth, assign everything to background - assigned_gt_inds[:] = 0 - return AssignResult( - num_gts=num_gts, - gt_inds=assigned_gt_inds, - max_overlaps=max_overlaps, - labels=assigned_labels) - - # for each anchor, which gt best overlaps with it - # for each anchor, the max iou of all gts - max_overlaps, argmax_overlaps = overlaps.max(dim=0) - # for each gt, which anchor best overlaps with it - # for each gt, the max iou of all proposals - gt_max_overlaps, gt_argmax_overlaps = overlaps.max(dim=1) - - # 2. assign negative: below - # the negative inds are set to be 0 - if isinstance(self.neg_iou_thr, float): - assigned_gt_inds[(max_overlaps >= 0) - & (max_overlaps < self.neg_iou_thr)] = 0 - elif isinstance(self.neg_iou_thr, tuple): - assert len(self.neg_iou_thr) == 2 - assigned_gt_inds[(max_overlaps >= self.neg_iou_thr[0]) - & (max_overlaps < self.neg_iou_thr[1])] = 0 - - # 3. assign positive: above positive IoU threshold - pos_inds = max_overlaps >= self.pos_iou_thr - assigned_gt_inds[pos_inds] = argmax_overlaps[pos_inds] + 1 - - if self.match_low_quality: - # Low-quality matching will overwrite the assigned_gt_inds assigned - # in Step 3. Thus, the assigned gt might not be the best one for - # prediction. - # For example, if bbox A has 0.9 and 0.8 iou with GT bbox 1 & 2, - # bbox 1 will be assigned as the best target for bbox A in step 3. - # However, if GT bbox 2's gt_argmax_overlaps = A, bbox A's - # assigned_gt_inds will be overwritten to be bbox 2. - # This might be the reason that it is not used in ROI Heads. - for i in range(num_gts): - if gt_max_overlaps[i] >= self.min_pos_iou: - if self.gt_max_assign_all: - max_iou_inds = overlaps[i, :] == gt_max_overlaps[i] - assigned_gt_inds[max_iou_inds] = i + 1 - else: - assigned_gt_inds[gt_argmax_overlaps[i]] = i + 1 - - assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1) - pos_inds = torch.nonzero( - assigned_gt_inds > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[assigned_gt_inds[pos_inds] - - 1] - - return AssignResult( - num_gts=num_gts, - gt_inds=assigned_gt_inds, - max_overlaps=max_overlaps, - labels=assigned_labels) diff --git a/spaces/KyanChen/RSPrompter/mmpl/datasets/custom.py b/spaces/KyanChen/RSPrompter/mmpl/datasets/custom.py deleted file mode 100644 index af1c0c140da3cbe1915f2f45134108cd7a2c232b..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmpl/datasets/custom.py +++ /dev/null @@ -1,237 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Callable, Dict, List, Optional, Sequence, Tuple, Union - -from mmengine.fileio import (BaseStorageBackend, get_file_backend, - list_from_file) -from mmengine.logging import MMLogger - -from mmcls.registry import DATASETS -from .base_dataset import BaseDataset - - -def find_folders( - root: str, - backend: Optional[BaseStorageBackend] = None -) -> Tuple[List[str], Dict[str, int]]: - """Find classes by folders under a root. 
- - Args: - root (string): root directory of folders - backend (BaseStorageBackend | None): The file backend of the root. - If None, auto infer backend from the root path. Defaults to None. - - Returns: - Tuple[List[str], Dict[str, int]]: - - - folders: The name of sub folders under the root. - - folder_to_idx: The map from folder name to class idx. - """ - # Pre-build file backend to prevent verbose file backend inference. - backend = backend or get_file_backend(root, enable_singleton=True) - folders = list( - backend.list_dir_or_file( - root, - list_dir=True, - list_file=False, - recursive=False, - )) - folders.sort() - folder_to_idx = {folders[i]: i for i in range(len(folders))} - return folders, folder_to_idx - - -def get_samples( - root: str, - folder_to_idx: Dict[str, int], - is_valid_file: Callable, - backend: Optional[BaseStorageBackend] = None, -): - """Make dataset by walking all images under a root. - - Args: - root (string): root directory of folders - folder_to_idx (dict): the map from class name to class idx - is_valid_file (Callable): A function that takes path of a file - and check if the file is a valid sample file. - backend (BaseStorageBackend | None): The file backend of the root. - If None, auto infer backend from the root path. Defaults to None. - - Returns: - Tuple[list, set]: - - - samples: a list of tuple where each element is (image, class_idx) - - empty_folders: The folders don't have any valid files. - """ - samples = [] - available_classes = set() - # Pre-build file backend to prevent verbose file backend inference. - backend = backend or get_file_backend(root, enable_singleton=True) - - for folder_name in sorted(list(folder_to_idx.keys())): - _dir = backend.join_path(root, folder_name) - files = backend.list_dir_or_file( - _dir, - list_dir=False, - list_file=True, - recursive=True, - ) - for file in sorted(list(files)): - if is_valid_file(file): - path = backend.join_path(folder_name, file) - item = (path, folder_to_idx[folder_name]) - samples.append(item) - available_classes.add(folder_name) - - empty_folders = set(folder_to_idx.keys()) - available_classes - - return samples, empty_folders - - -@DATASETS.register_module() -class CustomDataset(BaseDataset): - """Custom dataset for classification. - - The dataset supports two kinds of annotation format. - - 1. An annotation file is provided, and each line indicates a sample: - - The sample files: :: - - data_prefix/ - ├── folder_1 - │ ├── xxx.png - │ ├── xxy.png - │ └── ... - └── folder_2 - ├── 123.png - ├── nsdf3.png - └── ... - - The annotation file (the first column is the image path and the second - column is the index of category): :: - - folder_1/xxx.png 0 - folder_1/xxy.png 1 - folder_2/123.png 5 - folder_2/nsdf3.png 3 - ... - - Please specify the name of categories by the argument ``classes`` - or ``metainfo``. - - 2. The samples are arranged in the specific way: :: - - data_prefix/ - ├── class_x - │ ├── xxx.png - │ ├── xxy.png - │ └── ... - │ └── xxz.png - └── class_y - ├── 123.png - ├── nsdf3.png - ├── ... - └── asd932_.png - - If the ``ann_file`` is specified, the dataset will be generated by the - first way, otherwise, try the second way. - - Args: - ann_file (str): Annotation file path. Defaults to ''. - metainfo (dict, optional): Meta information for dataset, such as class - information. Defaults to None. - data_root (str): The root directory for ``data_prefix`` and - ``ann_file``. Defaults to ''. - data_prefix (str | dict): Prefix for the data. Defaults to ''. 
- extensions (Sequence[str]): A sequence of allowed extensions. Defaults - to ('.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm', '.tif'). - lazy_init (bool): Whether to load annotation during instantiation. - In some cases, such as visualization, only the meta information of - the dataset is needed, which is not necessary to load annotation - file. ``Basedataset`` can skip load annotations to save time by set - ``lazy_init=False``. Defaults to False. - **kwargs: Other keyword arguments in :class:`BaseDataset`. - """ - - def __init__(self, - ann_file: str = '', - metainfo: Optional[dict] = None, - data_root: str = '', - data_prefix: Union[str, dict] = '', - extensions: Sequence[str] = ('.jpg', '.jpeg', '.png', '.ppm', - '.bmp', '.pgm', '.tif'), - lazy_init: bool = False, - **kwargs): - assert (ann_file or data_prefix or data_root), \ - 'One of `ann_file`, `data_root` and `data_prefix` must '\ - 'be specified.' - - self.extensions = tuple(set([i.lower() for i in extensions])) - - super().__init__( - # The base class requires string ann_file but this class doesn't - ann_file=ann_file, - metainfo=metainfo, - data_root=data_root, - data_prefix=data_prefix, - # Force to lazy_init for some modification before loading data. - lazy_init=True, - **kwargs) - - # Full initialize the dataset. - if not lazy_init: - self.full_init() - - def _find_samples(self): - """find samples from ``data_prefix``.""" - classes, folder_to_idx = find_folders(self.img_prefix) - samples, empty_classes = get_samples( - self.img_prefix, - folder_to_idx, - is_valid_file=self.is_valid_file, - ) - - if len(samples) == 0: - raise RuntimeError( - f'Found 0 files in subfolders of: {self.data_prefix}. ' - f'Supported extensions are: {",".join(self.extensions)}') - - if self.CLASSES is not None: - assert len(self.CLASSES) == len(classes), \ - f"The number of subfolders ({len(classes)}) doesn't match " \ - f'the number of specified classes ({len(self.CLASSES)}). ' \ - 'Please check the data folder.' - else: - self._metainfo['classes'] = tuple(classes) - - if empty_classes: - logger = MMLogger.get_current_instance() - logger.warning( - 'Found no valid file in the folder ' - f'{", ".join(empty_classes)}. ' - f"Supported extensions are: {', '.join(self.extensions)}") - - self.folder_to_idx = folder_to_idx - - return samples - - def load_data_list(self): - """Load image paths and gt_labels.""" - if not self.ann_file: - samples = self._find_samples() - else: - lines = list_from_file(self.ann_file) - samples = [x.strip().rsplit(' ', 1) for x in lines] - - # Pre-build file backend to prevent verbose file backend inference. 
- backend = get_file_backend(self.img_prefix, enable_singleton=True) - data_list = [] - for filename, gt_label in samples: - img_path = backend.join_path(self.img_prefix, filename) - info = {'img_path': img_path, 'gt_label': int(gt_label)} - data_list.append(info) - return data_list - - def is_valid_file(self, filename: str) -> bool: - """Check if a file is a valid sample.""" - return filename.lower().endswith(self.extensions) diff --git a/spaces/LanguageBind/LanguageBind/scripts/depth_language/eval.sh b/spaces/LanguageBind/LanguageBind/scripts/depth_language/eval.sh deleted file mode 100644 index e0887e8fbfbb914b5c5f55aa769841026d650aec..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/scripts/depth_language/eval.sh +++ /dev/null @@ -1,25 +0,0 @@ - -CACHE_DIR="path/to/pretrained/weight" -RESUME="thermal_language.pt" -TRAIN_DATA="path/to/data" -# this script is for 1024 total batch_size (n(8) GPUs * batch_size(128) * accum_freq(1)) -cd /path/to/LanguageBind -TORCH_DISTRIBUTED_DEBUG=DETAIL HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 torchrun --nnodes=$HOST_NUM --node_rank=$INDEX --nproc_per_node $HOST_GPU_NUM --master_addr $CHIEF_IP \ - -m main \ - --train-data ${TRAIN_DATA} \ - --train-num-samples 3020000 \ - --clip-type "dl" --max-depth 10 \ - --lock-text --lock-image --text-type "polish_mplug" \ - --init-temp 0.07 --learn-temp \ - --model "ViT-L-14" --cache-dir ${CACHE_DIR} \ - --convert_to_lora --lora_r 2 \ - --lr 5e-4 --coef-lr 1e-3 \ - --beta1 0.9 --beta2 0.98 --wd 0.2 --eps 1e-6 \ - --num-frames 1 --force-patch-dropout 0.5 \ - --epochs 1 --batch-size 128 --accum-freq 1 --warmup 200 \ - --precision "amp" --workers 10 --video-decode-backend "imgs" \ - --save-frequency 1 --log-every-n-steps 20 --report-to "tensorboard" --resume ${RESUME} \ - --do_eval \ - --val_d_cls_data "NYUV2" - - diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/modules/uvr5/mdxprocess.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/modules/uvr5/mdxprocess.py deleted file mode 100644 index d2012ee1d27c862fe1884ae30d24138563a97664..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/modules/uvr5/mdxprocess.py +++ /dev/null @@ -1,188 +0,0 @@ -import gc -import requests -import subprocess -import sys -import os, warnings, librosa -import soundfile as sf -import numpy as np -import torch -import json - -folder = os.path.dirname(os.path.abspath(__file__)) -folder = os.path.dirname(folder) -folder = os.path.dirname(folder) -folder = os.path.dirname(folder) -now_dir = os.path.dirname(folder) - -import sys -sys.path.append(now_dir) - -import lib.infer.infer_libs.uvr5_pack.mdx as mdx -branch = "https://github.com/NaJeongMo/Colab-for-MDX_B" - -model_params = "https://raw.githubusercontent.com/TRvlvr/application_data/main/mdx_model_data/model_data.json" -_Models = "https://github.com/TRvlvr/model_repo/releases/download/all_public_uvr_models/" -# _models = "https://pastebin.com/raw/jBzYB8vz" -_models = "https://raw.githubusercontent.com/TRvlvr/application_data/main/filelists/download_checks.json" - - -file_folder = "Colab-for-MDX_B" -model_request = requests.get(_models).json() -model_ids = model_request["mdx_download_list"].values() -demucs_download_list = model_request["demucs_download_list"] - -# Iterate through the keys and get the model names -model_ids_demucs_inpure = [name.split(":")[1].strip() for name in demucs_download_list.keys()] - -# Remove duplicates by converting the list to a set and then back to a list 
-model_ids_demucs = list(set(model_ids_demucs_inpure)) - -# Remove some not working models -demucs_ids_to_delete = ["tasnet_extra", "tasnet", "light_extra", "light", "demucs_extra", "demucs", "demucs_unittest", "demucs48_hq", "repro_mdx_a_hybrid_only", "repro_mdx_a_time_only", "repro_mdx_a", "UVR Model"] - -# Add some models that are not in the list -demucs_ids_to_add = ["SIG"] - -# Add the new ID to the model_ids_demucs list - -for demucs_ids_to_add in demucs_ids_to_add: - if demucs_ids_to_add not in model_ids_demucs: - model_ids_demucs.append(demucs_ids_to_add) - -# If the ID is in the list of IDs to delete, remove it from the list of model_ids_demucs -for demucs_ids_to_delete in demucs_ids_to_delete: - if demucs_ids_to_delete in model_ids_demucs: - model_ids_demucs.remove(demucs_ids_to_delete) - -#print(model_ids) -model_params = requests.get(model_params).json() -#Remove request for stem_naming -stem_naming = { - "Vocals": "Instrumental", - "Other": "Instruments", - "Instrumental": "Vocals", - "Drums": "Drumless", - "Bass": "Bassless" -} - - -os.makedirs(f"{now_dir}/assets/uvr5_weights/MDX", exist_ok=True) - -warnings.filterwarnings("ignore") -cpu = torch.device("cpu") -if torch.cuda.is_available(): - device = torch.device("cuda:0") -elif torch.backends.mps.is_available(): - device = torch.device("mps") -else: - device = torch.device("cpu") - - -def get_model_list(): - return model_ids - -def get_demucs_model_list(): - return model_ids_demucs - -def id_to_ptm(mkey): - if mkey in model_ids: - #print(mkey) - mpath = f"{now_dir}/assets/uvr5_weights/MDX/{mkey}" - if not os.path.exists(f'{now_dir}/assets/uvr5_weights/MDX/{mkey}'): - print('Downloading model...',end=' ') - subprocess.run( - ["python", "-m", "wget", "-o", mpath, _Models+mkey] - ) - print(f'saved to {mpath}') - return mpath - else: - return mpath - else: - mpath = f'{now_dir}/assets/uvr5_weights/{mkey}' - return mpath - -def prepare_mdx(onnx,custom_param=False, dim_f=None, dim_t=None, n_fft=None, stem_name=None, compensation=None): - device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu') - if custom_param: - assert not (dim_f is None or dim_t is None or n_fft is None or compensation is None), 'Custom parameter selected, but incomplete parameters are provided.' 
- mdx_model = mdx.MDX_Model( - device, - dim_f = dim_f, - dim_t = dim_t, - n_fft = n_fft, - stem_name=stem_name, - compensation=compensation - ) - else: - model_hash = mdx.MDX.get_hash(onnx) - if model_hash in model_params: - mp = model_params.get(model_hash) - mdx_model = mdx.MDX_Model( - device, - dim_f = mp["mdx_dim_f_set"], - dim_t = 2**mp["mdx_dim_t_set"], - n_fft = mp["mdx_n_fft_scale_set"], - stem_name=mp["primary_stem"], - compensation=compensation if not custom_param and compensation is not None else mp["compensate"] - ) - return mdx_model - -def run_mdx(onnx, mdx_model,filename, output_format='wav',diff=False,suffix=None,diff_suffix=None, denoise=False, m_threads=2): - mdx_sess = mdx.MDX(onnx,mdx_model) - print(f"Processing: {filename}") - if filename.lower().endswith('.wav'): - wave, sr = librosa.load(filename, mono=False, sr=44100) - else: - temp_wav = 'temp_audio.wav' - subprocess.run(['ffmpeg', '-i', filename, '-ar', '44100', '-ac', '2', temp_wav]) # Convert to WAV format - wave, sr = librosa.load(temp_wav, mono=False, sr=44100) - os.remove(temp_wav) - - #wave, sr = librosa.load(filename,mono=False, sr=44100) - # normalizing input wave gives better output - peak = max(np.max(wave), abs(np.min(wave))) - wave /= peak - if denoise: - wave_processed = -(mdx_sess.process_wave(-wave, m_threads)) + (mdx_sess.process_wave(wave, m_threads)) - wave_processed *= 0.5 - else: - wave_processed = mdx_sess.process_wave(wave, m_threads) - # return to previous peak - wave_processed *= peak - - stem_name = mdx_model.stem_name if suffix is None else suffix # use suffix if provided - save_path = os.path.basename(os.path.splitext(filename)[0]) - #vocals_save_path = os.path.join(vocals_folder, f"{save_path}_{stem_name}.{output_format}") - #instrumental_save_path = os.path.join(instrumental_folder, f"{save_path}_{stem_name}.{output_format}") - save_path = f"{os.path.basename(os.path.splitext(filename)[0])}_{stem_name}.{output_format}" - save_path = os.path.join( - 'audios', - save_path - ) - sf.write( - save_path, - wave_processed.T, - sr - ) - - print(f'done, saved to: {save_path}') - - if diff: - diff_stem_name = stem_naming.get(stem_name) if diff_suffix is None else diff_suffix # use suffix if provided - stem_name = f"{stem_name}_diff" if diff_stem_name is None else diff_stem_name - save_path = f"{os.path.basename(os.path.splitext(filename)[0])}_{stem_name}.{output_format}" - save_path = os.path.join( - 'audio-others', - save_path - ) - sf.write( - save_path, - (-wave_processed.T*mdx_model.compensation)+wave.T, - sr - ) - print(f'invert done, saved to: {save_path}') - del mdx_sess, wave_processed, wave - gc.collect() - -if __name__ == "__main__": - print() diff --git a/spaces/Liu-LAB/GPT-academic/tests/__init__.py b/spaces/Liu-LAB/GPT-academic/tests/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/fcenet/README.md b/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/fcenet/README.md deleted file mode 100644 index f1acd2b1d8daa4557b16c8375b8c1ab4aa36cf6c..0000000000000000000000000000000000000000 --- a/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/fcenet/README.md +++ /dev/null @@ -1,38 +0,0 @@ -# FCENet - -> [Fourier Contour Embedding for Arbitrary-Shaped Text Detection](https://arxiv.org/abs/2104.10442) - - - -## Abstract - -One of the main challenges for arbitrary-shaped text detection is to design a good text instance representation that 
allows networks to learn diverse text geometry variances. Most of existing methods model text instances in image spatial domain via masks or contour point sequences in the Cartesian or the polar coordinate system. However, the mask representation might lead to expensive post-processing, while the point sequence one may have limited capability to model texts with highly-curved shapes. To tackle these problems, we model text instances in the Fourier domain and propose one novel Fourier Contour Embedding (FCE) method to represent arbitrary shaped text contours as compact signatures. We further construct FCENet with a backbone, feature pyramid networks (FPN) and a simple post-processing with the Inverse Fourier Transformation (IFT) and Non-Maximum Suppression (NMS). Different from previous methods, FCENet first predicts compact Fourier signatures of text instances, and then reconstructs text contours via IFT and NMS during test. Extensive experiments demonstrate that FCE is accurate and robust to fit contours of scene texts even with highly-curved shapes, and also validate the effectiveness and the good generalization of FCENet for arbitrary-shaped text detection. Furthermore, experimental results show that our FCENet is superior to the state-of-the-art (SOTA) methods on CTW1500 and Total-Text, especially on challenging highly-curved text subset. - -
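To make the idea of a compact Fourier signature concrete, the sketch below compresses a closed contour into its lowest-frequency Fourier coefficients and reconstructs an approximation with an inverse transform, mirroring the embed-then-IFT pipeline described in the abstract. It is an illustration only, not the MMOCR/FCENet code; the helper names, the toy contour, and the truncation degree `k` are assumptions.

```python
import numpy as np

def fourier_embed(points: np.ndarray, k: int = 5) -> np.ndarray:
    """Compress a closed contour of shape (N, 2) into 2k + 1 complex Fourier coefficients."""
    z = points[:, 0] + 1j * points[:, 1]      # treat each (x, y) point as a complex number
    coeffs = np.fft.fft(z) / len(z)           # full, normalised Fourier spectrum
    return np.concatenate([coeffs[:k + 1], coeffs[-k:]])  # keep only |frequency| <= k

def fourier_reconstruct(signature: np.ndarray, num_points: int = 400) -> np.ndarray:
    """Rebuild an approximate (N, 2) contour from the compact signature (inverse transform)."""
    k = (len(signature) - 1) // 2
    freqs = np.concatenate([np.arange(k + 1), np.arange(-k, 0)])
    t = np.arange(num_points) / num_points
    z = sum(c * np.exp(2j * np.pi * f * t) for c, f in zip(signature, freqs))
    return np.stack([z.real, z.imag], axis=1)

# Toy "text contour": an elongated, slightly wavy closed curve.
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
contour = np.stack([3 * np.cos(theta), np.sin(theta) + 0.2 * np.sin(5 * theta)], axis=1)

signature = fourier_embed(contour, k=5)                  # 11 complex numbers per contour
approx = fourier_reconstruct(signature, num_points=400)  # smooth approximation of the contour
print(np.abs(approx - contour).max())                    # reconstruction error is tiny here
```

Truncating the spectrum is what keeps the signature compact and smooth even for highly curved shapes; the network then only has to regress a handful of coefficients per text instance.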
    - -## Results and models - -### CTW1500 - -| Method | Backbone | Pretrained Model | Training set | Test set | #epochs | Test size | Recall | Precision | Hmean | Download | -| :-------------------------------------------------: | :--------------: | :--------------: | :-----------: | :----------: | :-----: | :---------: | :----: | :-------: | :---: | :----------------------------------------------------: | -| [FCENet](/configs/textdet/fcenet/fcenet_r50dcnv2_fpn_1500e_ctw1500.py) | ResNet50 + DCNv2 | ImageNet | CTW1500 Train | CTW1500 Test | 1500 | (736, 1080) | 0.828 | 0.875 | 0.851 | [model](https://download.openmmlab.com/mmocr/textdet/fcenet/fcenet_r50dcnv2_fpn_1500e_ctw1500_20211022-e326d7ec.pth) \| [log](https://download.openmmlab.com/mmocr/textdet/fcenet/20210511_181328.log.json) | - -### ICDAR2015 - -| Method | Backbone | Pretrained Model | Training set | Test set | #epochs | Test size | Recall | Precision | Hmean | Download | -| :-------------------------------------------------------: | :------: | :--------------: | :----------: | :-------: | :-----: | :----------: | :----: | :-------: | :---: | :---------------------------------------------------------: | -| [FCENet](/configs/textdet/fcenet/fcenet_r50_fpn_1500e_icdar2015.py) | ResNet50 | ImageNet | IC15 Train | IC15 Test | 1500 | (2260, 2260) | 0.819 | 0.880 | 0.849 | [model](https://download.openmmlab.com/mmocr/textdet/fcenet/fcenet_r50_fpn_1500e_icdar2015_20211022-daefb6ed.pth) \| [log](https://download.openmmlab.com/mmocr/textdet/fcenet/20210601_222655.log.json) | - -## Citation - -```bibtex -@InProceedings{zhu2021fourier, - title={Fourier Contour Embedding for Arbitrary-Shaped Text Detection}, - author={Yiqin Zhu and Jianyong Chen and Lingyu Liang and Zhanghui Kuang and Lianwen Jin and Wayne Zhang}, - year={2021}, - booktitle = {CVPR} - } -``` diff --git a/spaces/MAGAer13/mPLUG-Owl2/mplug_owl2/utils.py b/spaces/MAGAer13/mPLUG-Owl2/mplug_owl2/utils.py deleted file mode 100644 index e222416205d17c946d8903459dcb6b30267f022b..0000000000000000000000000000000000000000 --- a/spaces/MAGAer13/mPLUG-Owl2/mplug_owl2/utils.py +++ /dev/null @@ -1,126 +0,0 @@ -import datetime -import logging -import logging.handlers -import os -import sys - -import requests - -from mplug_owl2.constants import LOGDIR - -server_error_msg = "**NETWORK ERROR DUE TO HIGH TRAFFIC. PLEASE REGENERATE OR REFRESH THIS PAGE.**" -moderation_msg = "YOUR INPUT VIOLATES OUR CONTENT MODERATION GUIDELINES. PLEASE TRY AGAIN." 
- -handler = None - - -def build_logger(logger_name, logger_filename): - global handler - - formatter = logging.Formatter( - fmt="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - ) - - # Set the format of root handlers - if not logging.getLogger().handlers: - logging.basicConfig(level=logging.INFO) - logging.getLogger().handlers[0].setFormatter(formatter) - - # Redirect stdout and stderr to loggers - stdout_logger = logging.getLogger("stdout") - stdout_logger.setLevel(logging.INFO) - sl = StreamToLogger(stdout_logger, logging.INFO) - sys.stdout = sl - - stderr_logger = logging.getLogger("stderr") - stderr_logger.setLevel(logging.ERROR) - sl = StreamToLogger(stderr_logger, logging.ERROR) - sys.stderr = sl - - # Get logger - logger = logging.getLogger(logger_name) - logger.setLevel(logging.INFO) - - # Add a file handler for all loggers - if handler is None: - os.makedirs(LOGDIR, exist_ok=True) - filename = os.path.join(LOGDIR, logger_filename) - handler = logging.handlers.TimedRotatingFileHandler( - filename, when='D', utc=True) - handler.setFormatter(formatter) - - for name, item in logging.root.manager.loggerDict.items(): - if isinstance(item, logging.Logger): - item.addHandler(handler) - - return logger - - -class StreamToLogger(object): - """ - Fake file-like stream object that redirects writes to a logger instance. - """ - def __init__(self, logger, log_level=logging.INFO): - self.terminal = sys.stdout - self.logger = logger - self.log_level = log_level - self.linebuf = '' - - def __getattr__(self, attr): - return getattr(self.terminal, attr) - - def write(self, buf): - temp_linebuf = self.linebuf + buf - self.linebuf = '' - for line in temp_linebuf.splitlines(True): - # From the io.TextIOWrapper docs: - # On output, if newline is None, any '\n' characters written - # are translated to the system default line separator. - # By default sys.stdout.write() expects '\n' newlines and then - # translates them so this is still cross platform. - if line[-1] == '\n': - self.logger.log(self.log_level, line.rstrip()) - else: - self.linebuf += line - - def flush(self): - if self.linebuf != '': - self.logger.log(self.log_level, self.linebuf.rstrip()) - self.linebuf = '' - - -def disable_torch_init(): - """ - Disable the redundant torch default initialization to accelerate model creation. - """ - import torch - setattr(torch.nn.Linear, "reset_parameters", lambda self: None) - setattr(torch.nn.LayerNorm, "reset_parameters", lambda self: None) - - -def violates_moderation(text): - """ - Check whether the text violates OpenAI moderation API. 
- """ - url = "https://api.openai.com/v1/moderations" - headers = {"Content-Type": "application/json", - "Authorization": "Bearer " + os.environ["OPENAI_API_KEY"]} - text = text.replace("\n", "") - data = "{" + '"input": ' + f'"{text}"' + "}" - data = data.encode("utf-8") - try: - ret = requests.post(url, headers=headers, data=data, timeout=5) - flagged = ret.json()["results"][0]["flagged"] - except requests.exceptions.RequestException as e: - flagged = False - except KeyError as e: - flagged = False - - return flagged - - -def pretty_print_semaphore(semaphore): - if semaphore is None: - return "None" - return f"Semaphore(value={semaphore._value}, locked={semaphore.locked()})" \ No newline at end of file diff --git a/spaces/Manjushri/Erebus/README.md b/spaces/Manjushri/Erebus/README.md deleted file mode 100644 index 2f3216855eb0191ed6cc1df118506b7a29f1162a..0000000000000000000000000000000000000000 --- a/spaces/Manjushri/Erebus/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Erebus -emoji: 👁 -colorFrom: indigo -colorTo: purple -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Manjushri/SDXL-1.0-Img2Img-CPU/app.py b/spaces/Manjushri/SDXL-1.0-Img2Img-CPU/app.py deleted file mode 100644 index 8de7c458469d49e85470c8213df012153f22fcf5..0000000000000000000000000000000000000000 --- a/spaces/Manjushri/SDXL-1.0-Img2Img-CPU/app.py +++ /dev/null @@ -1,33 +0,0 @@ -import gradio as gr -import modin.pandas as pd -import torch -import numpy as np -from PIL import Image -from diffusers import DiffusionPipeline -from huggingface_hub import login -#import os - -#login(token=os.environ.get('HF_KEY')) - -device = "cuda" if torch.cuda.is_available() else "cpu" -pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16) if torch.cuda.is_available() else DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-refiner-1.0") -pipe = pipe.to(device) - -def resize(value,img): - img = Image.open(img) - img = img.resize((value,value)) - return img - -def infer(source_img, prompt, negative_prompt, guide, steps, seed, Strength): - generator = torch.Generator(device).manual_seed(seed) - source_image = resize(768, source_img) - source_image.save('source.png') - image = pipe(prompt, negative_prompt=negative_prompt, image=source_image, strength=Strength, guidance_scale=guide, num_inference_steps=steps).images[0] - return image - -gr.Interface(fn=infer, inputs=[gr.Image(source="upload", type="filepath", label="Raw Image. Must Be .png"), gr.Textbox(label = 'Prompt Input Text. 77 Token (Keyword or Symbol) Maximum'), gr.Textbox(label='What you Do Not want the AI to generate.'), - gr.Slider(2, 15, value = 7, label = 'Guidance Scale'), - gr.Slider(1, 25, value = 10, step = 1, label = 'Number of Iterations'), - gr.Slider(label = "Seed", minimum = 0, maximum = 987654321987654321, step = 1, randomize = True), - gr.Slider(label='Strength', minimum = 0, maximum = 1, step = .05, value = .5)], - outputs='image', title = "Stable Diffusion XL 1.0 Image to Image Pipeline CPU", description = "For more information on Stable Diffusion XL 1.0 see https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0

    Upload an Image (MUST Be .PNG and 512x512 or 768x768) enter a Prompt, or let it just do its Thing, then click submit. 10 Iterations takes about ~900-1200 seconds currently. For more informationon about Stable Diffusion or Suggestions for prompts, keywords, artists or styles see https://github.com/Maks-s/sd-akashic", article = "Code Monkey: Manjushri").queue(max_size=5).launch() \ No newline at end of file diff --git a/spaces/Marshalls/testmtd/training/datasets/multimodal_dataset.py b/spaces/Marshalls/testmtd/training/datasets/multimodal_dataset.py deleted file mode 100644 index c593dd6f397ded49e883b1c4ccc811e64eb6d334..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/training/datasets/multimodal_dataset.py +++ /dev/null @@ -1,338 +0,0 @@ -from pathlib import Path -from itertools import tee -import numpy as np -import torch -from .base_dataset import BaseDataset - -def find_example_idx(n, cum_sums, idx = 0): - N = len(cum_sums) - search_i = N//2 - 1 - if N > 1: - if n < cum_sums[search_i]: - return find_example_idx(n, cum_sums[:search_i+1], idx=idx) - else: - return find_example_idx(n, cum_sums[search_i+1:], idx=idx+search_i+1) - else: - if n < cum_sums[0]: - return idx - else: - return idx + 1 - - -class MultimodalDataset(BaseDataset): - - def __init__(self, opt, split="train"): - super().__init__() - self.opt = opt - data_path = Path(opt.data_dir) - if not data_path.is_dir(): - raise ValueError('Invalid directory:'+opt.data_dir) - - print(opt.base_filenames_file) - if split == "train": - temp_base_filenames = [x[:-1] for x in open(data_path.joinpath(opt.base_filenames_file), "r").readlines()] - else: - temp_base_filenames = [x[:-1] for x in open(data_path.joinpath("base_filenames_"+split+".txt"), "r").readlines()] - if opt.num_train_samples > 0: - temp_base_filenames = np.random.choice(temp_base_filenames, size=opt.num_train_samples, replace=False) - self.base_filenames = [] - - input_mods = self.opt.input_modalities.split(",") - output_mods = self.opt.output_modalities.split(",") - self.input_lengths = input_lengths = [int(x) for x in str(self.opt.input_lengths).split(",")] - self.output_lengths = output_lengths = [int(x) for x in str(self.opt.output_lengths).split(",")] - self.output_time_offsets = output_time_offsets = [int(x) for x in str(self.opt.output_time_offsets).split(",")] - self.input_time_offsets = input_time_offsets = [int(x) for x in str(self.opt.input_time_offsets).split(",")] - - if self.opt.input_types is None: - input_types = ["c" for inp in input_mods] - else: - input_types = self.opt.input_types.split(",") - - if self.opt.input_fix_length_types is None: - input_fix_length_types = ["end" for inp in input_mods] - else: - input_fix_length_types = self.opt.input_fix_length_types.split(",") - - if self.opt.output_fix_length_types is None: - output_fix_length_types = ["end" for inp in input_mods] - else: - output_fix_length_types = self.opt.output_fix_length_types.split(",") - - fix_length_types_dict = {mod:output_fix_length_types[i] for i,mod in enumerate(output_mods)} - fix_length_types_dict.update({mod:input_fix_length_types[i] for i,mod in enumerate(input_mods)}) - - assert len(input_types) == len(input_mods) - assert len(input_fix_length_types) == len(input_mods) - assert len(output_fix_length_types) == len(input_mods) - self.input_types = input_types - self.input_fix_length_types = input_fix_length_types - self.output_fix_length_types = output_fix_length_types - - if self.opt.input_num_tokens is None: - self.input_num_tokens = [0 for inp in 
input_mods] - else: - self.input_num_tokens = [int(x) for x in self.opt.input_num_tokens.split(",")] - - if self.opt.output_num_tokens is None: - self.output_num_tokens = [0 for inp in output_mods] - else: - self.output_num_tokens = [int(x) for x in self.opt.output_num_tokens.split(",")] - - if len(output_time_offsets) < len(output_mods): - if len(output_time_offsets) == 1: - self.output_time_offsets = output_time_offsets = output_time_offsets*len(output_mods) - else: - raise Exception("number of output_time_offsets doesnt match number of output_mods") - - if len(input_time_offsets) < len(input_mods): - if len(input_time_offsets) == 1: - self.input_time_offsets = input_time_offsets = input_time_offsets*len(input_mods) - else: - raise Exception("number of input_time_offsets doesnt match number of input_mods") - - self.features = {mod:{} for mod in input_mods+output_mods} - #self.input_features = {input_mod:{} for input_mod in input_mods} - #self.output_features = {output_mod:{} for output_mod in output_mods} - if opt.fix_lengths: - self.features_filenames = {mod:{} for mod in input_mods+output_mods} - #self.input_features_filenames = {input_mod:{} for input_mod in input_mods} - #self.output_features_filenames = {input_mod:{} for input_mod in input_mods} - - min_length = max(max(np.array(input_lengths) + np.array(input_time_offsets)), max(np.array(output_time_offsets) + np.array(output_lengths)) ) - min(0,min(output_time_offsets)) - print(min_length) - - fix_lengths = opt.fix_lengths - - self.total_frames = 0 - self.frame_cum_sums = [] - - #Get the list of files containing features (in numpy format for now), and populate the dictionaries of input and output features (separated by modality) - for base_filename in temp_base_filenames: - file_too_short = False - first_length=True - for i, mod in enumerate(input_mods): - feature_file = data_path.joinpath(base_filename+"."+mod+".npy") - if self.input_fix_length_types[i] == "single": continue - #print(feature_file) - try: - features = np.load(feature_file) - length = features.shape[0] - #print(features.shape) - #print(length) - if not fix_lengths: - if first_length: - length_0 = length - first_length=False - else: - assert length == length_0 - if length < min_length: - # print("Smol sequence "+base_filename+"."+mod+"; ignoring..") - file_too_short = True - break - except FileNotFoundError: - raise Exception("An unprocessed input feature found "+base_filename+"."+mod+"; need to run preprocessing script before starting to train with them") - - if file_too_short: continue - - first_length=True - for i, mod in enumerate(output_mods): - feature_file = data_path.joinpath(base_filename+"."+mod+".npy") - if self.output_fix_length_types[i] == "single": continue - try: - features = np.load(feature_file) - length = features.shape[0] - if not fix_lengths: - if first_length: - length_0 = length - first_length=False - else: - assert length == length_0 - if length < min_length: - # print("Smol sequence "+base_filename+"."+mod+"; ignoring..") - file_too_short = True - break - except FileNotFoundError: - raise Exception("An unprocessed output feature found "+base_filename+"."+mod+"; need to run preprocessing script before starting to train with them") - - if file_too_short: continue - - for mod in input_mods+output_mods: - feature_file = data_path.joinpath(base_filename+"."+mod+".npy") - features = np.load(feature_file) - self.features[mod][base_filename] = features - if fix_lengths: - self.features_filenames[mod][base_filename] = feature_file - - if fix_lengths: 
- shortest_length = 99999999999 - first_match = True - for mod in input_mods+output_mods: - if fix_length_types_dict[mod] == "single": continue - length = self.features[mod][base_filename].shape[0] - if length < shortest_length: - #print(np.abs(length-shortest_length)) - if first_match: - first_match = False - else: - if np.abs(length-shortest_length) > 2: - print("sequence length difference") - print(np.abs(length-shortest_length)) - print(base_filename) - #assert np.abs(length-shortest_length) <= 2 - shortest_length = length - for i,mod in enumerate(input_mods): - if self.input_fix_length_types[i] == "end": - np.save(self.features_filenames[mod][base_filename],self.features[mod][base_filename][:shortest_length]) - elif self.input_fix_length_types[i] == "beg": - np.save(self.features_filenames[mod][base_filename],self.features[mod][base_filename][shortest_length:]) - elif self.input_fix_length_types[i] == "single": - assert self.features[mod][base_filename].shape[0] == 1 - else: - raise NotImplementedError("Haven't implemented input_fix_length_type "+self.input_fix_length_type[i]) - - for i,mod in enumerate(output_mods): - if mod not in input_mods: - if self.output_fix_length_types[i] == "end": - np.save(self.features_filenames[mod][base_filename],self.features[mod][base_filename][:shortest_length]) - elif self.output_fix_length_types[i] == "beg": - np.save(self.features_filenames[mod][base_filename],self.features[mod][base_filename][shortest_length:]) - elif self.output_fix_length_types[i] == "single": - assert self.features[mod][base_filename].shape[0] == 1 - else: - raise NotImplementedError("Haven't implemented output_fix_length_type "+self.output_fix_length_type[i]) - - for mod in input_mods+output_mods: - self.features[mod][base_filename] = np.load(self.features_filenames[mod][base_filename]) - length = self.features[mod][base_filename].shape[0] - if i == 0: - length_0 = length - else: - assert length == length_0 - - #TODO: implement this! - ## we pad the song features with zeros to imitate during training what happens during generation - #x = [np.concatenate((np.zeros(( xx.shape[0],max(0,max(output_time_offsets)) )),xx),0) for xx in x] - ## we also pad at the end to allow generation to be of the same length of sequence, by padding an amount corresponding to time_offset - #x = [np.concatenate((xx,np.zeros(( xx.shape[0],max(0,max(input_lengths)+max(input_time_offsets)-(min(output_time_offsets)+min(output_lengths)-1)) ))),0) for xx in x] - - - found_full_seq = False - for i,mod in enumerate(input_mods): - if self.input_fix_length_types[i] != "single": - sequence_length = self.features[mod][base_filename].shape[0] - found_full_seq = True - if not found_full_seq: - sequence_length = 1 - possible_init_frames = sequence_length-max(max(input_lengths)+max(input_time_offsets),max(output_time_offsets)+max(output_lengths))+1 - self.total_frames += possible_init_frames - self.frame_cum_sums.append(self.total_frames) - - self.base_filenames.append(base_filename) - - print("sequences added: "+str(len(self.base_filenames))) - assert len(self.base_filenames)>0, "List of files for training cannot be empty" - for mod in input_mods+output_mods: - assert len(self.features[mod].values()) == len(self.base_filenames) - - @staticmethod - def modify_commandline_options(parser, is_train): - parser.add_argument('--sampling_rate', default=44100, type=float) - parser.add_argument('--dins', default=None, help="input dimension for continuous inputs. 
Embedding dimension for discrete inputs") - parser.add_argument('--douts', default=None) - parser.add_argument('--input_modalities', default='mp3_mel_100') - parser.add_argument('--output_modalities', default='mp3_mel_100') - parser.add_argument('--input_lengths', help='input sequence length') - parser.add_argument('--input_num_tokens', help='num_tokens. use 0 for continuous inputs') - parser.add_argument('--output_num_tokens', help='num_tokens. use 0 for continuous inputs') - parser.add_argument('--input_types', default=None, help='Comma-separated list of input types: d for discrete, c for continuous. E.g. d,c,c. Assumes continuous if not specified') - parser.add_argument('--input_fix_length_types', default=None, help='Comma-separated list of approaches to fix length: end for cut end, beg for cut beginning, single for single-element sequence (e.g. sequence-level label). E.g. single,end,end. Assumes cut end if not specified') - parser.add_argument('--output_fix_length_types', default=None, help='Comma-separated list of approaches to fix length: end for cut end, beg for cut beginning, single for single-element sequence (e.g. sequence-level label). E.g. single,end,end. Assumes cut end if not specified') - parser.add_argument('--output_lengths', help='output sequence length') - parser.add_argument('--output_time_offsets', default="1", help='time shift between the last read input, and the output predicted. The default value of 1 corresponds to predicting the next output') - parser.add_argument('--input_time_offsets', default="0", help='time shift between the beginning of each modality and the first modality') - parser.add_argument('--max_token_seq_len', type=int, default=1024) - parser.add_argument('--fix_lengths', action='store_true', help='fix unmatching length of sequences') - parser.add_argument('--num_train_samples', type=int, default=0, help='if 0 then use all of them') - - return parser - - def name(self): - return "MultiModalDataset" - - def process_input(self,j,xx,index): - input_lengths = self.input_lengths - output_lengths = self.output_lengths - output_time_offsets = self.output_time_offsets - input_time_offsets = self.input_time_offsets - if self.input_fix_length_types[j]!="single": - return torch.tensor(xx[index+input_time_offsets[j]:index+input_time_offsets[j]+input_lengths[j]]).float() - else: - return torch.tensor(xx).long().unsqueeze(1) - - def process_output(self,j,yy,index): - input_lengths = self.input_lengths - output_lengths = self.output_lengths - output_time_offsets = self.output_time_offsets - input_time_offsets = self.input_time_offsets - if self.output_fix_length_types[j]!="single": - return torch.tensor(yy[index+output_time_offsets[j]:index+output_time_offsets[j]+output_lengths[j]]).float() - else: - return torch.tensor(yy).long().unsqueeze(1) - - def __getitem__(self, item): - idx = find_example_idx(item, self.frame_cum_sums) - base_filename = self.base_filenames[idx] - - input_lengths = self.input_lengths - output_lengths = self.output_lengths - output_time_offsets = self.output_time_offsets - input_time_offsets = self.input_time_offsets - - input_mods = self.opt.input_modalities.split(",") - output_mods = self.opt.output_modalities.split(",") - - x = [self.features[mod][base_filename] for mod in input_mods] - y = [self.features[mod][base_filename] for mod in output_mods] - #for i, mod in enumerate(input_mods): - # input_feature = self.features[mod][base_filename] - # x.append(input_feature) - - #for i, mod in enumerate(output_mods): - # output_feature = 
self.features[mod][base_filename] - # y.append(output_feature) - - # normalization of individual features for the sequence - # not doing this any more as we are normalizing over all examples now - #x = [(xx-np.mean(xx,0,keepdims=True))/(np.std(xx,0,keepdims=True)+1e-5) for xx in x] - #y = [(yy-np.mean(yy,0,keepdims=True))/(np.std(yy,0,keepdims=True)+1e-5) for yy in y] - - if idx > 0: index = item - self.frame_cum_sums[idx-1] - else: index = item - - ## CONSTRUCT TENSOR OF INPUT FEATURES ## - input_windows = [self.process_input(j,xx,index) for j,xx in enumerate(x)] - - ## CONSTRUCT TENSOR OF OUTPUT FEATURES ## - output_windows = [self.process_output(j,yy,index) for j,yy in enumerate(y)] - - # print(input_windows[i]) - return_dict = {} - for i,mod in enumerate(input_mods): - return_dict["in_"+mod] = input_windows[i] - for i,mod in enumerate(output_mods): - return_dict["out_"+mod] = output_windows[i] - - return return_dict - - def __len__(self): - # return len(self.base_filenames) - return self.total_frames - # return 2 - - -def pairwise(iterable): - "s -> (s0,s1), (s1,s2), (s2, s3), ..." - a, b = tee(iterable) - next(b, None) - return zip(a, b) diff --git a/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/models/GroundingDINO/backbone/__init__.py b/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/models/GroundingDINO/backbone/__init__.py deleted file mode 100644 index 76e4b272b479a26c63d120c818c140870cd8c287..0000000000000000000000000000000000000000 --- a/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/models/GroundingDINO/backbone/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .backbone import build_backbone diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/preparers/parsers/totaltext_parser.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/preparers/parsers/totaltext_parser.py deleted file mode 100644 index 2255f2f1b1abb01601dde8c33af8cf4732340938..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/preparers/parsers/totaltext_parser.py +++ /dev/null @@ -1,99 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import re -from typing import Dict, Tuple - -import yaml - -from mmocr.registry import DATA_PARSERS -from .base import BaseParser - - -@DATA_PARSERS.register_module() -class TotaltextTextDetAnnParser(BaseParser): - """TotalText Text Detection Parser. - - The original annotation format of this dataset is stored in txt files, - which is formed as the following format: - x: [[x1 x2 x3 ... xn]], y: [[y1 y2 y3 ... yn]], - ornt: [u'c'], transcriptions: [u'transcription'] - - Args: - data_root (str): Path to the dataset root. - ignore (str): The text of the ignored instances. Default: '#'. - nproc (int): Number of processes to load the data. Default: 1. - """ - - def __init__(self, ignore: str = '#', **kwargs) -> None: - self.ignore = ignore - super().__init__(**kwargs) - - def parse_file(self, img_path: str, ann_path: str) -> Dict: - """Convert single annotation.""" - instances = list() - for poly, text in self.loader(ann_path): - instances.append( - dict(poly=poly, text=text, ignore=text == self.ignore)) - - return img_path, instances - - def loader(self, file_path: str) -> str: - """The annotation of the totaltext dataset may be stored in multiple - lines, this loader is designed for this special case. - - Args: - file_path (str): Path to the txt file - - Yield: - str: Complete annotation of the txt file - """ - - def parsing_line(line: str) -> Tuple: - """Parsing a line of the annotation. 
- - Args: - line (str): A line of the annotation. - - Returns: - Tuple: A tuple of (polygon, transcription). - """ - line = '{' + line.replace('[[', '[').replace(']]', ']') + '}' - ann_dict = re.sub('([0-9]) +([0-9])', r'\1,\2', line) - ann_dict = re.sub('([0-9]) +([ 0-9])', r'\1,\2', ann_dict) - ann_dict = re.sub('([0-9]) -([0-9])', r'\1,-\2', ann_dict) - ann_dict = ann_dict.replace("[u',']", "[u'#']") - ann_dict = yaml.safe_load(ann_dict) - - # polygon - xs, ys = ann_dict['x'], ann_dict['y'] - poly = [] - for x, y in zip(xs, ys): - poly.append(x) - poly.append(y) - # text - text = ann_dict['transcriptions'] - if len(text) == 0: - text = '#' - else: - word = text[0] - if len(text) > 1: - for ann_word in text[1:]: - word += ',' + ann_word - text = str(eval(word)) - - return poly, text - - with open(file_path, 'r') as f: - for idx, line in enumerate(f): - line = line.strip() - if idx == 0: - tmp_line = line - continue - if not line.startswith('x:'): - tmp_line += ' ' + line - continue - complete_line = tmp_line - tmp_line = line - yield parsing_line(complete_line) - - if tmp_line != '': - yield parsing_line(tmp_line) diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textdet/module_losses/__init__.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textdet/module_losses/__init__.py deleted file mode 100644 index 111c47990143147a8acaf6fdf75a36749042af0c..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textdet/module_losses/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .db_module_loss import DBModuleLoss -from .drrg_module_loss import DRRGModuleLoss -from .fce_module_loss import FCEModuleLoss -from .pan_module_loss import PANModuleLoss -from .pse_module_loss import PSEModuleLoss -from .seg_based_module_loss import SegBasedModuleLoss -from .textsnake_module_loss import TextSnakeModuleLoss - -__all__ = [ - 'PANModuleLoss', 'PSEModuleLoss', 'DBModuleLoss', 'TextSnakeModuleLoss', - 'FCEModuleLoss', 'DRRGModuleLoss', 'SegBasedModuleLoss' -] diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/postprocessors/attn_postprocessor.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/postprocessors/attn_postprocessor.py deleted file mode 100644 index e047a6a341ca90b874d993c0def6aed9a3af114e..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/postprocessors/attn_postprocessor.py +++ /dev/null @@ -1,43 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Optional, Sequence, Tuple - -import torch - -from mmocr.registry import MODELS -from mmocr.structures import TextRecogDataSample -from .base import BaseTextRecogPostprocessor - - -@MODELS.register_module() -class AttentionPostprocessor(BaseTextRecogPostprocessor): - """PostProcessor for seq2seq.""" - - def get_single_prediction( - self, - probs: torch.Tensor, - data_sample: Optional[TextRecogDataSample] = None, - ) -> Tuple[Sequence[int], Sequence[float]]: - """Convert the output probabilities of a single image to index and - score. - - Args: - probs (torch.Tensor): Character probabilities with shape - :math:`(T, C)`. - data_sample (TextRecogDataSample, optional): Datasample of an - image. Defaults to None. - - Returns: - tuple(list[int], list[float]): index and score. 
- """ - max_value, max_idx = torch.max(probs, -1) - index, score = [], [] - output_index = max_idx.cpu().detach().numpy().tolist() - output_score = max_value.cpu().detach().numpy().tolist() - for char_index, char_score in zip(output_index, output_score): - if char_index in self.ignore_indexes: - continue - if char_index == self.dictionary.end_idx: - break - index.append(char_index) - score.append(char_score) - return index, score diff --git a/spaces/NCTCMumbai/NCTC/models/official/benchmark/models/cifar_preprocessing.py b/spaces/NCTCMumbai/NCTC/models/official/benchmark/models/cifar_preprocessing.py deleted file mode 100644 index 18d7fe630e194953c8c5f3f7552c7104c6155c9a..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/benchmark/models/cifar_preprocessing.py +++ /dev/null @@ -1,159 +0,0 @@ -# Copyright 2016 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Provides utilities to Cifar-10 dataset.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import os -from absl import logging -import tensorflow as tf - -from official.vision.image_classification.resnet import imagenet_preprocessing - -HEIGHT = 32 -WIDTH = 32 -NUM_CHANNELS = 3 -_DEFAULT_IMAGE_BYTES = HEIGHT * WIDTH * NUM_CHANNELS -# The record is the image plus a one-byte label -_RECORD_BYTES = _DEFAULT_IMAGE_BYTES + 1 - -# TODO(tobyboyd): Change to best practice 45K(train)/5K(val)/10K(test) splits. -NUM_IMAGES = { - 'train': 50000, - 'validation': 10000, -} -_NUM_DATA_FILES = 5 -NUM_CLASSES = 10 - - -def parse_record(raw_record, is_training, dtype): - """Parses a record containing a training example of an image. - - The input record is parsed into a label and image, and the image is passed - through preprocessing steps (cropping, flipping, and so on). - - This method converts the label to one hot to fit the loss function. - - Args: - raw_record: scalar Tensor tf.string containing a serialized - Example protocol buffer. - is_training: A boolean denoting whether the input is for training. - dtype: Data type to use for input images. - - Returns: - Tuple with processed image tensor and one-hot-encoded label tensor. - """ - # Convert bytes to a vector of uint8 that is record_bytes long. - record_vector = tf.io.decode_raw(raw_record, tf.uint8) - - # The first byte represents the label, which we convert from uint8 to int32 - # and then to one-hot. - label = tf.cast(record_vector[0], tf.int32) - - # The remaining bytes after the label represent the image, which we reshape - # from [depth * height * width] to [depth, height, width]. - depth_major = tf.reshape(record_vector[1:_RECORD_BYTES], - [NUM_CHANNELS, HEIGHT, WIDTH]) - - # Convert from [depth, height, width] to [height, width, depth], and cast as - # float32. 
- image = tf.cast(tf.transpose(a=depth_major, perm=[1, 2, 0]), tf.float32) - - image = preprocess_image(image, is_training) - image = tf.cast(image, dtype) - - return image, label - - -def preprocess_image(image, is_training): - """Preprocess a single image of layout [height, width, depth].""" - if is_training: - # Resize the image to add four extra pixels on each side. - image = tf.image.resize_with_crop_or_pad( - image, HEIGHT + 8, WIDTH + 8) - - # Randomly crop a [HEIGHT, WIDTH] section of the image. - image = tf.image.random_crop(image, [HEIGHT, WIDTH, NUM_CHANNELS]) - - # Randomly flip the image horizontally. - image = tf.image.random_flip_left_right(image) - - # Subtract off the mean and divide by the variance of the pixels. - image = tf.image.per_image_standardization(image) - return image - - -def get_filenames(is_training, data_dir): - """Returns a list of filenames.""" - assert tf.io.gfile.exists(data_dir), ( - 'Run cifar10_download_and_extract.py first to download and extract the ' - 'CIFAR-10 data.') - - if is_training: - return [ - os.path.join(data_dir, 'data_batch_%d.bin' % i) - for i in range(1, _NUM_DATA_FILES + 1) - ] - else: - return [os.path.join(data_dir, 'test_batch.bin')] - - -def input_fn(is_training, - data_dir, - batch_size, - dtype=tf.float32, - datasets_num_private_threads=None, - parse_record_fn=parse_record, - input_context=None, - drop_remainder=False): - """Input function which provides batches for train or eval. - - Args: - is_training: A boolean denoting whether the input is for training. - data_dir: The directory containing the input data. - batch_size: The number of samples per batch. - dtype: Data type to use for images/features - datasets_num_private_threads: Number of private threads for tf.data. - parse_record_fn: Function to use for parsing the records. - input_context: A `tf.distribute.InputContext` object passed in by - `tf.distribute.Strategy`. - drop_remainder: A boolean indicates whether to drop the remainder of the - batches. If True, the batch dimension will be static. - - Returns: - A dataset that can be used for iteration. - """ - filenames = get_filenames(is_training, data_dir) - dataset = tf.data.FixedLengthRecordDataset(filenames, _RECORD_BYTES) - - if input_context: - logging.info( - 'Sharding the dataset: input_pipeline_id=%d num_input_pipelines=%d', - input_context.input_pipeline_id, input_context.num_input_pipelines) - dataset = dataset.shard(input_context.num_input_pipelines, - input_context.input_pipeline_id) - - return imagenet_preprocessing.process_record_dataset( - dataset=dataset, - is_training=is_training, - batch_size=batch_size, - shuffle_buffer=NUM_IMAGES['train'], - parse_record_fn=parse_record_fn, - dtype=dtype, - datasets_num_private_threads=datasets_num_private_threads, - drop_remainder=drop_remainder - ) diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/albert/tf2_albert_encoder_checkpoint_converter.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/albert/tf2_albert_encoder_checkpoint_converter.py deleted file mode 100644 index 402bc1445bed575362598d09212d14d03b629179..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/albert/tf2_albert_encoder_checkpoint_converter.py +++ /dev/null @@ -1,132 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""A converter from a tf1 ALBERT encoder checkpoint to a tf2 encoder checkpoint. - -The conversion will yield an object-oriented checkpoint that can be used -to restore a AlbertTransformerEncoder object. -""" -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import os - -from absl import app -from absl import flags - -import tensorflow as tf -from official.modeling import activations -from official.nlp.albert import configs -from official.nlp.bert import tf1_checkpoint_converter_lib -from official.nlp.modeling import networks - -FLAGS = flags.FLAGS - -flags.DEFINE_string("albert_config_file", None, - "Albert configuration file to define core bert layers.") -flags.DEFINE_string( - "checkpoint_to_convert", None, - "Initial checkpoint from a pretrained BERT model core (that is, only the " - "BertModel, with no task heads.)") -flags.DEFINE_string("converted_checkpoint_path", None, - "Name for the created object-based V2 checkpoint.") - - -ALBERT_NAME_REPLACEMENTS = ( - ("bert/encoder/", ""), - ("bert/", ""), - ("embeddings/word_embeddings", "word_embeddings/embeddings"), - ("embeddings/position_embeddings", "position_embedding/embeddings"), - ("embeddings/token_type_embeddings", "type_embeddings/embeddings"), - ("embeddings/LayerNorm", "embeddings/layer_norm"), - ("embedding_hidden_mapping_in", "embedding_projection"), - ("group_0/inner_group_0/", ""), - ("attention_1/self", "self_attention"), - ("attention_1/output/dense", "self_attention/attention_output"), - ("LayerNorm/", "self_attention_layer_norm/"), - ("ffn_1/intermediate/dense", "intermediate"), - ("ffn_1/intermediate/output/dense", "output"), - ("LayerNorm_1/", "output_layer_norm/"), - ("pooler/dense", "pooler_transform"), - ("cls/predictions/output_bias", "cls/predictions/output_bias/bias"), - ("cls/seq_relationship/output_bias", "predictions/transform/logits/bias"), - ("cls/seq_relationship/output_weights", - "predictions/transform/logits/kernel"), -) - - -def _create_albert_model(cfg): - """Creates a BERT keras core model from BERT configuration. - - Args: - cfg: A `BertConfig` to create the core model. - - Returns: - A keras model. - """ - albert_encoder = networks.AlbertTransformerEncoder( - vocab_size=cfg.vocab_size, - hidden_size=cfg.hidden_size, - embedding_width=cfg.embedding_size, - num_layers=cfg.num_hidden_layers, - num_attention_heads=cfg.num_attention_heads, - intermediate_size=cfg.intermediate_size, - activation=activations.gelu, - dropout_rate=cfg.hidden_dropout_prob, - attention_dropout_rate=cfg.attention_probs_dropout_prob, - sequence_length=cfg.max_position_embeddings, - type_vocab_size=cfg.type_vocab_size, - initializer=tf.keras.initializers.TruncatedNormal( - stddev=cfg.initializer_range)) - return albert_encoder - - -def convert_checkpoint(bert_config, output_path, v1_checkpoint): - """Converts a V1 checkpoint into an OO V2 checkpoint.""" - output_dir, _ = os.path.split(output_path) - - # Create a temporary V1 name-converted checkpoint in the output directory. 
- temporary_checkpoint_dir = os.path.join(output_dir, "temp_v1") - temporary_checkpoint = os.path.join(temporary_checkpoint_dir, "ckpt") - tf1_checkpoint_converter_lib.convert( - checkpoint_from_path=v1_checkpoint, - checkpoint_to_path=temporary_checkpoint, - num_heads=bert_config.num_attention_heads, - name_replacements=ALBERT_NAME_REPLACEMENTS, - permutations=tf1_checkpoint_converter_lib.BERT_V2_PERMUTATIONS, - exclude_patterns=["adam", "Adam"]) - - # Create a V2 checkpoint from the temporary checkpoint. - model = _create_albert_model(bert_config) - tf1_checkpoint_converter_lib.create_v2_checkpoint(model, temporary_checkpoint, - output_path) - - # Clean up the temporary checkpoint, if it exists. - try: - tf.io.gfile.rmtree(temporary_checkpoint_dir) - except tf.errors.OpError: - # If it doesn't exist, we don't need to clean it up; continue. - pass - - -def main(_): - output_path = FLAGS.converted_checkpoint_path - v1_checkpoint = FLAGS.checkpoint_to_convert - albert_config = configs.AlbertConfig.from_json_file(FLAGS.albert_config_file) - convert_checkpoint(albert_config, output_path, v1_checkpoint) - - -if __name__ == "__main__": - app.run(main) diff --git a/spaces/NMEX/rvc-hoyogame-v2/vc_infer_pipeline.py b/spaces/NMEX/rvc-hoyogame-v2/vc_infer_pipeline.py deleted file mode 100644 index 82c15f59a8072e1b317fa1d750ccc1b814a6989d..0000000000000000000000000000000000000000 --- a/spaces/NMEX/rvc-hoyogame-v2/vc_infer_pipeline.py +++ /dev/null @@ -1,443 +0,0 @@ -import numpy as np, parselmouth, torch, pdb, sys, os -from time import time as ttime -import torch.nn.functional as F -import scipy.signal as signal -import pyworld, os, traceback, faiss, librosa, torchcrepe -from scipy import signal -from functools import lru_cache - -now_dir = os.getcwd() -sys.path.append(now_dir) - -bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000) - -input_audio_path2wav = {} - - -@lru_cache -def cache_harvest_f0(input_audio_path, fs, f0max, f0min, frame_period): - audio = input_audio_path2wav[input_audio_path] - f0, t = pyworld.harvest( - audio, - fs=fs, - f0_ceil=f0max, - f0_floor=f0min, - frame_period=frame_period, - ) - f0 = pyworld.stonemask(audio, f0, t, fs) - return f0 - - -def change_rms(data1, sr1, data2, sr2, rate): # 1是输入音频,2是输出音频,rate是2的占比 - # print(data1.max(),data2.max()) - rms1 = librosa.feature.rms( - y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2 - ) # 每半秒一个点 - rms2 = librosa.feature.rms(y=data2, frame_length=sr2 // 2 * 2, hop_length=sr2 // 2) - rms1 = torch.from_numpy(rms1) - rms1 = F.interpolate( - rms1.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.from_numpy(rms2) - rms2 = F.interpolate( - rms2.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-6) - data2 *= ( - torch.pow(rms1, torch.tensor(1 - rate)) - * torch.pow(rms2, torch.tensor(rate - 1)) - ).numpy() - return data2 - - -class VC(object): - def __init__(self, tgt_sr, config): - self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = ( - config.x_pad, - config.x_query, - config.x_center, - config.x_max, - config.is_half, - ) - self.sr = 16000 # hubert输入采样率 - self.window = 160 # 每帧点数 - self.t_pad = self.sr * self.x_pad # 每条前后pad时间 - self.t_pad_tgt = tgt_sr * self.x_pad - self.t_pad2 = self.t_pad * 2 - self.t_query = self.sr * self.x_query # 查询切点前后查询时间 - self.t_center = self.sr * self.x_center # 查询切点位置 - self.t_max = self.sr * self.x_max # 免查询时长阈值 - self.device = config.device - - def get_f0( - self, - input_audio_path, - 
x, - p_len, - f0_up_key, - f0_method, - filter_radius, - inp_f0=None, - ): - global input_audio_path2wav - time_step = self.window / self.sr * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - if f0_method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif f0_method == "harvest": - input_audio_path2wav[input_audio_path] = x.astype(np.double) - f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10) - if filter_radius > 2: - f0 = signal.medfilt(f0, 3) - elif f0_method == "crepe": - model = "full" - # Pick a batch size that doesn't cause memory errors on your gpu - batch_size = 512 - # Compute pitch using first gpu - audio = torch.tensor(np.copy(x))[None].float() - f0, pd = torchcrepe.predict( - audio, - self.sr, - self.window, - f0_min, - f0_max, - model, - batch_size=batch_size, - device=self.device, - return_periodicity=True, - ) - pd = torchcrepe.filter.median(pd, 3) - f0 = torchcrepe.filter.mean(f0, 3) - f0[pd < 0.1] = 0 - f0 = f0[0].cpu().numpy() - elif f0_method == "rmvpe": - if hasattr(self, "model_rmvpe") == False: - from rmvpe import RMVPE - - print("loading rmvpe model") - self.model_rmvpe = RMVPE( - "rmvpe.pt", is_half=self.is_half, device=self.device - ) - f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03) - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0] - f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[ - :shape - ] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak # 1-0 - - def vc( - self, - model, - net_g, - sid, - audio0, - pitch, - pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - ): # ,file_index,file_big_npy - feats = torch.from_numpy(audio0) - if self.is_half: - feats = feats.half() - else: - feats = feats.float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - - inputs = { - "source": feats.to(self.device), - "padding_mask": padding_mask, - "output_layer": 9 if version == "v1" else 12, - } - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) if version == "v1" else logits[0] - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = feats.clone() - if ( - isinstance(index, type(None)) == False - and isinstance(big_npy, type(None)) == False - and index_rate != 0 
- ): - npy = feats[0].cpu().numpy() - if self.is_half: - npy = npy.astype("float32") - - # _, I = index.search(npy, 1) - # npy = big_npy[I.squeeze()] - - score, ix = index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - - if self.is_half: - npy = npy.astype("float16") - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate - + (1 - index_rate) * feats - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = F.interpolate(feats0.permute(0, 2, 1), scale_factor=2).permute( - 0, 2, 1 - ) - t1 = ttime() - p_len = audio0.shape[0] // self.window - if feats.shape[1] < p_len: - p_len = feats.shape[1] - if pitch != None and pitchf != None: - pitch = pitch[:, :p_len] - pitchf = pitchf[:, :p_len] - - if protect < 0.5 and pitch != None and pitchf != None: - pitchff = pitchf.clone() - pitchff[pitchf > 0] = 1 - pitchff[pitchf < 1] = protect - pitchff = pitchff.unsqueeze(-1) - feats = feats * pitchff + feats0 * (1 - pitchff) - feats = feats.to(feats0.dtype) - p_len = torch.tensor([p_len], device=self.device).long() - with torch.no_grad(): - if pitch != None and pitchf != None: - audio1 = ( - (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]) - .data.cpu() - .float() - .numpy() - ) - else: - audio1 = ( - (net_g.infer(feats, p_len, sid)[0][0, 0]).data.cpu().float().numpy() - ) - del feats, p_len, padding_mask - if torch.cuda.is_available(): - torch.cuda.empty_cache() - t2 = ttime() - times[0] += t1 - t0 - times[2] += t2 - t1 - return audio1 - - def pipeline( - self, - model, - net_g, - sid, - audio, - input_audio_path, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=None, - ): - if ( - file_index != "" - # and file_big_npy != "" - # and os.path.exists(file_big_npy) == True - and os.path.exists(file_index) == True - and index_rate != 0 - ): - try: - index = faiss.read_index(file_index) - # big_npy = np.load(file_big_npy) - big_npy = index.reconstruct_n(0, index.ntotal) - except: - traceback.print_exc() - index = big_npy = None - else: - index = big_npy = None - audio = signal.filtfilt(bh, ah, audio) - audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect") - opt_ts = [] - if audio_pad.shape[0] > self.t_max: - audio_sum = np.zeros_like(audio) - for i in range(self.window): - audio_sum += audio_pad[i : i - self.window] - for t in range(self.t_center, audio.shape[0], self.t_center): - opt_ts.append( - t - - self.t_query - + np.where( - np.abs(audio_sum[t - self.t_query : t + self.t_query]) - == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min() - )[0][0] - ) - s = 0 - audio_opt = [] - t = None - t1 = ttime() - audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect") - p_len = audio_pad.shape[0] // self.window - inp_f0 = None - if hasattr(f0_file, "name") == True: - try: - with open(f0_file.name, "r") as f: - lines = f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0 == 1: - pitch, pitchf = self.get_f0( - input_audio_path, - audio_pad, - p_len, - 
f0_up_key, - f0_method, - filter_radius, - inp_f0, - ) - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - if self.device == "mps": - pitchf = pitchf.astype(np.float32) - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - for t in opt_ts: - t = t // self.window * self.window - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - pitch[:, s // self.window : (t + self.t_pad2) // self.window], - pitchf[:, s // self.window : (t + self.t_pad2) // self.window], - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - s = t - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - pitch[:, t // self.window :] if t is not None else pitch, - pitchf[:, t // self.window :] if t is not None else pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - audio_opt = np.concatenate(audio_opt) - if rms_mix_rate != 1: - audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate) - if resample_sr >= 16000 and tgt_sr != resample_sr: - audio_opt = librosa.resample( - audio_opt, orig_sr=tgt_sr, target_sr=resample_sr - ) - audio_max = np.abs(audio_opt).max() / 0.99 - max_int16 = 32768 - if audio_max > 1: - max_int16 /= audio_max - audio_opt = (audio_opt * max_int16).astype(np.int16) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return audio_opt diff --git a/spaces/NN520/AI/README.md b/spaces/NN520/AI/README.md deleted file mode 100644 index 218767d1d7debd26932ffddca2ec0f421c0171a9..0000000000000000000000000000000000000000 --- a/spaces/NN520/AI/README.md +++ /dev/null @@ -1,195 +0,0 @@ ---- -title: bingo -emoji: 📉 -colorFrom: red -colorTo: red -sdk: docker -pinned: true -license: mit -duplicated_from: hf4all/bingo ---- - -
-
-# Bingo
-
-Bingo, a New Bing that lets you breathe easy.
-
-Closely reproduces the main interactions of the New Bing web version, works inside mainland China, is compatible with most Microsoft Bing AI features, and can be self-hosted.
-
-![Github stars](https://badgen.net/github/stars/weaigc/bingo?icon=github&label=stars)
-![Github issues](https://img.shields.io/github/issues/weaigc/bingo)
-[![docker build](https://github.com/weaigc/bingo/actions/workflows/docker.yml/badge.svg)](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[![docker hub](https://badgen.net/docker/size/weaigc/bingo?icon=docker&label=image%20size)](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[![MIT License](https://img.shields.io/badge/license-MIT-97c50f)](https://github.com/weaigc/bingo/blob/main/license)
-
-
-
-## Demo site
-
-https://bing.github1s.tk
-
-
-
-[![img](./docs/images/demo.png)](https://bing.github1s.tk)
-
-## Features
-
-- Completely rewritten with Next.js, closely reproducing the New Bing web UI; the experience is essentially the same as Bing AI.
-- Docker builds supported, for quick and convenient deployment and access.
-- Cookies can be configured globally and shared globally.
-- Continuous voice conversation supported.
-
-## Roadmap
-
- - [x] WSS forwarding
- - [x] One-click deployment
- - [x] Improved mobile layout
- - [x] Image generation
- - [x] Voice input (voice commands supported; currently desktop Edge and Chrome only)
- - [x] Voice output (must be enabled manually)
- - [x] Image input
- - [x] Custom domain support
- - [ ] Chat history
- - [ ] Dark mode
- - [ ] Built-in prompts
- - [ ] Offline access
- - [ ] Internationalization
-
-## One-click deployment
-You can also deploy your own New Bing AI to 🤗 HuggingFace with one click.
-
-### Deploy to Huggingface
-1. Click this badge
-[![Deploy to HuggingFace](https://img.shields.io/badge/%E7%82%B9%E5%87%BB%E9%83%A8%E7%BD%B2-%F0%9F%A4%97-fff)](https://huggingface.co/login?next=%2Fspaces%2Fhf4all%2Fbingo%3Fduplicate%3Dtrue%26visibility%3Dpublic); the default configuration can be left unchanged.
-
-2. Once the deployment is finished, open "Settings" > "Site domain", copy the HF domain, and share it with others.
-
-> Huggingface does not support binding your own domain, but this can be worked around:
-> 1. Option 1: via Cloudflare Workers, see [Custom domain with Cloudflare Workers](#custom-domain-with-cloudflare-workers)
-> 2. Option 2: via Github Pages plus an iframe, see [how to bind a domain](https://github.com/weaigc/bingo/issues/4)
-
-### Custom domain with Cloudflare Workers
-
-> Core code: [worker.js](./cloudflare/worker.js)
-
-- [Sign up for a Cloudflare account](https://dash.cloudflare.com/sign-up)
-
-- Add a new site. You need your own domain, with its `Name Server` records delegated to Cloudflare (search online for details).
-
-- Open "Workers" from the left-hand menu and click "Create a Worker".
-
-- Create the Worker service, copy all of [worker.js](./cloudflare/worker.js) into it, modify it according to the comments, then save and deploy.
-
-- Configure your custom access domain under Triggers.
-
-### Deploying to other platforms
-
-
-Because other platforms are currently being blocked by New Bing, deployments there run into many problems and are no longer recommended; if you still need them, the options below are kept for reference.
-
-
-#### Deploy to Netlify
-[![Deploy to Netlify Button](https://www.netlify.com/img/deploy/button.svg)](https://app.netlify.com/start/deploy?repository=https://github.com/weaigc/bingo)
-
-#### Deploy to Vercel
-If you are a paying Vercel user, you can use the link below for one-click deployment to Vercel. The free tier has an [API timeout limit](https://vercel.com/docs/concepts/limits/overview) and is not recommended.
-
-[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?demo-title=bingo&demo-description=bingo&demo-url=https%3A%2F%2Fbing.github1s.tk%2F&project-name=bingo&repository-name=bingo&repository-url=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo&from=templates&skippable-integrations=1&env=BING_HEADER&envDescription=%E5%A6%82%E6%9E%9C%E4%B8%8D%E7%9F%A5%E9%81%93%E6%80%8E%E4%B9%88%E9%85%8D%E7%BD%AE%E8%AF%B7%E7%82%B9%E5%8F%B3%E4%BE%A7Learn+More&envLink=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo%2Fblob%2Fmain%2F.env.example)
-
-#### Deploy to Render
-
-[![Deploy to Render](https://render.com/images/deploy-to-render-button.svg)](https://render.com/deploy?repo=https://github.com/weaigc/bingo)
-
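Whichever platform you choose, it is worth confirming that the deployed instance actually serves the web UI before sharing the link. The following is a minimal sanity-check sketch, not part of this repository; `https://your-bingo.example.com` is a hypothetical placeholder for your own deployment URL.

```python
import urllib.request

# Hypothetical placeholder: replace with the URL of your own deployment.
DEPLOY_URL = "https://your-bingo.example.com"


def check_deployment(url: str, timeout: float = 10.0) -> None:
    """Fetch the front page and print the HTTP status code."""
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        print(f"{url} responded with HTTP {resp.status}")


if __name__ == "__main__":
    check_deployment(DEPLOY_URL)
```

A 200 response only confirms the page is reachable; features that need `BING_HEADER` still have to be tested in the browser.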
-
-## Environment and dependencies
-
-- Node.js >= 18
-- Bing AI [credentials](#how-to-get-bing_header)
-
-## Installation and usage
-
-* Run with Node
-
-```bash
-git clone https://github.com/weaigc/bingo.git
-npm i # pnpm i is recommended
-npm run build
-npm run start
-```
-
-* Run with Docker
-```bash
-docker pull weaigc/bingo
-docker run --rm -it -p 7860:7860 weaigc/bingo
-# or
-docker run --rm -it -e BING_HEADER=xxxx -p 7860:7860 weaigc/bingo
-```
-
-## How to get BING_HEADER
-> Setting BING_HEADER means sharing your own account with everyone who uses this service. If you do not need login-free image generation, setting this variable is not recommended.
-
-Open https://www.bing.com and sign in, then visit https://www.bing.com/turing/captcha/challenge and pass the human verification, then
-
-![BING HEADER](./docs/images/curl.png)
-
-> The copied content should look like the sample below. After confirming the format is correct, open https://effulgent-bubblegum-e2f5df.netlify.app/#dialog=%22settings%22 , paste it in, click "Convert to BING_HEADER and copy", and then paste the result from the clipboard. (You can also validate it on that page first.)
-
-The samples below are for format reference only. Note that what the web page saves starts with `curl`, while the `BING_HEADER` configured on the server is in `base64` form; the two formats are not interchangeable (a small local conversion sketch follows the samples below).
-
    -正常格式/网页端保存的格式(格式仅供参考) - -``` -curl 'https://www.bing.com/turing/captcha/challenge' \ - -H 'authority: www.bing.com' \ - -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7' \ - -H 'accept-language: zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6' \ - -H 'cache-control: max-age=0' \ - -H 'cookie: MicrosoftApplicationsTelemetryDeviceId=3399c004-fd0e-48ec-bb92-d82a27b2bbd4; _EDGE_V=1; SRCHD=AF=NOFORM; SRCHUID=V=2&GUID=29EBDDA4E6674329ACCF1A0A423C3E98&dmnchg=1; _UR=QS=0&TQS=0; _HPVN=CS=eyJQbiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiUCJ9LCJTYyI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiSCJ9LCJReiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiVCJ9LCJBcCI6dHJ1ZSwiTXV0ZSI6dHJ1ZSwiTGFkIjoiMjAyMy0wNy0yNVQwMDowMDowMFoiLCJJb3RkIjowLCJHd2IiOjAsIkRmdCI6bnVsbCwiTXZzIjowLCJGbHQiOjAsIkltcCI6Mn0=; _RwBf=ilt=1&ihpd=1&ispd=0&rc=0&rb=0&gb=0&rg=200&pc=0&mtu=0&rbb=0&g=0&cid=&clo=0&v=1&l=2023-07-25T07:00:00.0000000Z&lft=0001-01-01T00:00:00.0000000&aof=0&o=2&p=&c=&t=0&s=0001-01-01T00:00:00.0000000+00:00&ts=2023-07-25T11:00:31.7111548+00:00&rwred=0&wls=&lka=0&lkt=0&TH=&dci=0; ANON=A=0043C6590EA808ED6E395059FFFFFFFF&E=1c8b&W=1; NAP=V=1.9&E=1c31&C=DnaMSbDN_4efZ_xXqBF3Daorjr53kYqYoaP8YHsupjmiXnysX7a37A&W=1; PPLState=1; KievRPSSecAuth=FABSBBRaTOJILtFsMkpLVWSG6AN6C/svRwNmAAAEgAAACMGUA7EGVSjGEAQBGHtNsc5sNL7unmJsfPJ2t6imfo4BeUJlAia3IpMTtMUy4PU/C5QAzRI5pODtsIee0+blgllXt/5IiWwGjwmdhivsFM597pRPkjARPfwsPhNLPNbJrCPNPHdje4Is78MnCADXw6/NBq2FL8V2/byw2fH6IuAMD2MvN/VvqpEa9ZxiDjZtENj4HEj0mO2SgzjfyEhVAkjvznJqU2rw/Q2tHmX94NAM2kzlzKF/hWPhCCUmu8IHLvCnHDS6mSptvJDDP/sp3ovtzOXkP1mlM/Xju5ftesUvccVEQGffXORa1dE5hEMbKIiKXz1tDdduSXE19g9/+mRMAjaQhpwhI8XmilCTx1adb1Ll5qK+VjC9GNfEZzcbsGBPVaOl+anG8rEMq+Xnhjo7J+NqTNolavHgcuV8kJsCeJZIged33UA8eOZeFo+wAECMguxMoSqgpGH+sthqynvD/FJD6r/tiU2N3uqVq8NE8V37asrN6T14Z0FGBJOe6ET1+PGApm3s11OY9/xhFEB9T5BEPUGEbvRcLcW2ncFQX0EU+xweiPqo1Q1hNUg/dCtSI+lZ7c2H8XheePZavZ0TJQ8oNCSAuKiTqJmI0fVGpwbXwfaADkEipuawz3fIuMJBNgMU0OtA7Hm59v2fGLIBuvi6YeKS6GgVk3BIPf+P/eKahwozrxQZaFnoHTSqMkvct7xCP4atBROfXKf5Ww0CcFKp+2WX9BIskTOo2jjk6bAyyYJ+ElUB1fgLKNk5m/YSMc9iYCLIBMIGN8F0Yvy3tZ7cvh7Ue5Klo98US/I+nW1G7ZJMHRgUO8h8lpneHqEMegKd8gynO4VF7RpCjJkunDmW0Ta+RkXAP619pg0dqHMFkoOgknN78oBbGTV6fJUKotv+vi61kLhAeXZGWoHGCRXh2wUC6YgfPgKA6ESRNHtFn7E5B3HHpLc5rVMDSNhKZYfdhupV4Ezf6+5DhMcZLZhi0kk+ivDiN1gdHlVtSN55xpvf+c+XZDzR0uhgcvgy0LAbmzgk6y4WbYH+LQsMpzNNj+aC72vMiWovWrKh9jY4MYCmdgxsS/skPtLdp18muiEIRXTbZQGUmhxFpJAIbBIsCscMpzL0BgeujxUwM5wr79Sd9r4xwbgSMwmBlBfUHRVBdNyg8feepeJbCS63nD6eHOuLqMRsPIio3w/ki/EAa92UUEiZeavLsMUD/y/qAvWUdzdP5Y+C/TM+CMGS/kGL4LEdY/28MQeTvU1qv1X21kQt2aiaj3pPVL36hAzxbcLgqcMo9oymDRy87kdCXW/+g4oKLtMh6fm/G6W6Y/B01JlxohyyvueHQIG557uzkEkTJ3FnOVODSKBKpb3WZ65rExfV71zSZa25F3GmpaIG6HiYrX2YYhQAkIE9pKEQBHbnwHuwNDGottZTXZw=; WLS=C=9df3f9d8518fae19&N=wen; WLID=pGY8HgWCu4p5XYCOk2oa0+DBdftkMUfmNIn8XtSjSTKsgv/Il7GUlYs0Jpjf/E12jZMgV7x44Dy3fXOgjjUoJx7Y/ClLrLhsk20THksJJoI=; _EDGE_S=F=1&SID=17CF6EE006426448213C7DB907436588&mkt=zh-CN; MUID=225621093D8A6C27301632413C0E6D08; MUIDB=225621093D8A6C27301632413C0E6D08; SUID=A; SNRHOP=I=&TS=; _U=nGyzKQruEsDwLiu65fZFIG6e12hf2lwTJmroW__k8joUJIKmG3OIjayXKGW9dCVR3sNhF76mEVxyW6yjUGPodOfjtSa3s3J_DxMOrEK1BqXCOBI9bC66spAIASV7prsYFlVAJz73jVNENp_tBubLHJy6EbT0BKRe4AjrYkH-9uMnmCKB8Zmyg; _SS=SID=17CF6EE006426448213C7DB907436588&R=0&RB=0&GB=0&RG=200&RP=0&PC=U531; SRCHS=PC=U531; 
USRLOC=HS=1&ELOC=LAT=22.501529693603516|LON=113.9263687133789|N=%E5%8D%97%E5%B1%B1%E5%8C%BA%EF%BC%8C%E5%B9%BF%E4%B8%9C%E7%9C%81|ELT=2|&CLOC=LAT=22.50153029046461|LON=113.92637070632928|A=733.4464586120832|TS=230726151034|SRC=W; SRCHUSR=DOB=20230725&T=1690384908000&POEX=W; ipv6=hit=1690388509974&t=6; SRCHHPGUSR=HV=1690384945&SRCHLANG=zh-Hans&PV=15.0.0&BRW=MW&BRH=MT&CW=410&CH=794&SCW=410&SCH=794&DPR=1.5&UTC=480&DM=0&WTS=63825879627&PRVCW=410&PRVCH=794&PR=1.5; cct=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpny6Y_CVyi_MSyM94VyMWnjdYkkccVtm3czoIAtXUGQA; GC=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpR3Y_D9Ytcks4Ht6XhadXk75dvhzP4YOUS0UmoEyqyxw' \ - -H 'dnt: 1' \ - -H 'sec-ch-ua: "Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"' \ - -H 'sec-ch-ua-arch: "x86"' \ - -H 'sec-ch-ua-bitness: "64"' \ - -H 'sec-ch-ua-full-version: "116.0.1938.29"' \ - -H 'sec-ch-ua-full-version-list: "Chromium";v="116.0.5845.42", "Not)A;Brand";v="24.0.0.0", "Microsoft Edge";v="116.0.1938.29"' \ - -H 'sec-ch-ua-mobile: ?0' \ - -H 'sec-ch-ua-model: ""' \ - -H 'sec-ch-ua-platform: "Windows"' \ - -H 'sec-ch-ua-platform-version: "15.0.0"' \ - -H 'sec-fetch-dest: document' \ - -H 'sec-fetch-mode: navigate' \ - -H 'sec-fetch-site: none' \ - -H 'sec-fetch-user: ?1' \ - -H 'sec-ms-gec: B3F47AD4A283CAB374C0451C46AAFD147C6A4DACAFF6A1C13F34B2C72B024494' \ - -H 'sec-ms-gec-version: 1-116.0.1938.29' \ - -H 'upgrade-insecure-requests: 1' \ - -H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.0.0' \ - -H 'x-client-data: eyIxIjoiMiIsIjEwIjoiXCJTMGg3R05HOTF2aDQ1TUZSUnZ5NHN2akRmMWdlaVJKenNxNlA3aU1WbnF3PVwiIiwiMiI6IjEiLCIzIjoiMSIsIjQiOiIyMTU4ODQ5NTM4MjY4OTM5NTA3IiwiNSI6IlwiSm9GUWpPTDk3OS9MbkRRZnlCd2N1M2FsOUN3eTZTQmdaMGNYMXBtOWVMZz1cIiIsIjYiOiJiZXRhIiwiNyI6IjE4MDM4ODYyNjQzNSIsIjkiOiJkZXNrdG9wIn0=' \ - -H 'x-edge-shopping-flag: 1' \ - --compressed -``` -
    - -
    -转成base64之后的格式(BING_HEADER只能使用 base64 之后的格式) - -``` -Y3VybCAnaHR0cHM6Ly93d3cuYmluZy5jb20vdHVyaW5nL2NvbnZlcnNhdGlvbi9jcmVhdGUnIFwgICAtSCAnYXV0aG9yaXR5OiB3d3cuYmluZy5jb20nIFwgICAtSCAnYWNjZXB0OiB0ZXh0L2h0bWwsYXBwbGljYXRpb24veGh0bWwreG1sLGFwcGxpY2F0aW9uL3htbDtxPTAuOSxpbWFnZS93ZWJwLGltYWdlL2FwbmcsKi8qO3E9MC44LGFwcGxpY2F0aW9uL3NpZ25lZC1leGNoYW5nZTt2PWIzO3E9MC43JyBcICAgLUggJ2FjY2VwdC1sYW5ndWFnZTogemgtQ04semg7cT0wLjksZW47cT0wLjgsZW4tR0I7cT0wLjcsZW4tVVM7cT0wLjYnIFwgICAtSCAnY2FjaGUtY29udHJvbDogbWF4LWFnZT0wJyBcICAgLUggJ2Nvb2tpZTogTWljcm9zb2Z0QXBwbGljYXRpb25zVGVsZW1ldHJ5RGV2aWNlSWQ9MzM5OWMwMDQtZmQwZS00OGVjLWJiOTItZDgyYTI3YjJiYmQ0OyBfRURHRV9WPTE7IFNSQ0hEPUFGPU5PRk9STTsgU1JDSFVJRD1WPTImR1VJRD0yOUVCRERBNEU2Njc0MzI5QUNDRjFBMEE0MjNDM0U5OCZkbW5jaGc9MTsgX1VSPVFTPTAmVFFTPTA7IF9IUFZOPUNTPWV5SlFiaUk2ZXlKRGJpSTZNU3dpVTNRaU9qQXNJbEZ6SWpvd0xDSlFjbTlrSWpvaVVDSjlMQ0pUWXlJNmV5SkRiaUk2TVN3aVUzUWlPakFzSWxGeklqb3dMQ0pRY205a0lqb2lTQ0o5TENKUmVpSTZleUpEYmlJNk1Td2lVM1FpT2pBc0lsRnpJam93TENKUWNtOWtJam9pVkNKOUxDSkJjQ0k2ZEhKMVpTd2lUWFYwWlNJNmRISjFaU3dpVEdGa0lqb2lNakF5TXkwd055MHlOVlF3TURvd01Eb3dNRm9pTENKSmIzUmtJam93TENKSGQySWlPakFzSWtSbWRDSTZiblZzYkN3aVRYWnpJam93TENKR2JIUWlPakFzSWtsdGNDSTZNbjA9OyBfUndCZj1pbHQ9MSZpaHBkPTEmaXNwZD0wJnJjPTAmcmI9MCZnYj0wJnJnPTIwMCZwYz0wJm10dT0wJnJiYj0wJmc9MCZjaWQ9JmNsbz0wJnY9MSZsPTIwMjMtMDctMjVUMDc6MDA6MDAuMDAwMDAwMFombGZ0PTAwMDEtMDEtMDFUMDA6MDA6MDAuMDAwMDAwMCZhb2Y9MCZvPTImcD0mYz0mdD0wJnM9MDAwMS0wMS0wMVQwMDowMDowMC4wMDAwMDAwKzAwOjAwJnRzPTIwMjMtMDctMjVUMTE6MDA6MzEuNzExMTU0OCswMDowMCZyd3JlZD0wJndscz0mbGthPTAmbGt0PTAmVEg9JmRjaT0wOyBBTk9OPUE9MDA0M0M2NTkwRUE4MDhFRDZFMzk1MDU5RkZGRkZGRkYmRT0xYzhiJlc9MTsgTkFQPVY9MS45JkU9MWMzMSZDPURuYU1TYkROXzRlZlpfeFhxQkYzRGFvcmpyNTNrWXFZb2FQOFlIc3Vwam1pWG55c1g3YTM3QSZXPTE7IFBQTFN0YXRlPTE7IEtpZXZSUFNTZWNBdXRoPUZBQlNCQlJhVE9KSUx0RnNNa3BMVldTRzZBTjZDL3N2UndObUFBQUVnQUFBQ01HVUE3RUdWU2pHRUFRQkdIdE5zYzVzTkw3dW5tSnNmUEoydDZpbWZvNEJlVUpsQWlhM0lwTVR0TVV5NFBVL0M1UUF6Ukk1cE9EdHNJZWUwK2JsZ2xsWHQvNUlpV3dHandtZGhpdnNGTTU5N3BSUGtqQVJQZndzUGhOTFBOYkpyQ1BOUEhkamU0SXM3OE1uQ0FEWHc2L05CcTJGTDhWMi9ieXcyZkg2SXVBTUQyTXZOL1Z2cXBFYTlaeGlEalp0RU5qNEhFajBtTzJTZ3pqZnlFaFZBa2p2em5KcVUycncvUTJ0SG1YOTROQU0ya3psektGL2hXUGhDQ1VtdThJSEx2Q25IRFM2bVNwdHZKRERQL3NwM292dHpPWGtQMW1sTS9YanU1ZnRlc1V2Y2NWRVFHZmZYT1JhMWRFNWhFTWJLSWlLWHoxdERkZHVTWEUxOWc5LyttUk1BamFRaHB3aEk4WG1pbENUeDFhZGIxTGw1cUsrVmpDOUdOZkVaemNic0dCUFZhT2wrYW5HOHJFTXErWG5oam83SitOcVROb2xhdkhnY3VWOGtKc0NlSlpJZ2VkMzNVQThlT1plRm8rd0FFQ01ndXhNb1NxZ3BHSCtzdGhxeW52RC9GSkQ2ci90aVUyTjN1cVZxOE5FOFYzN2Fzck42VDE0WjBGR0JKT2U2RVQxK1BHQXBtM3MxMU9ZOS94aEZFQjlUNUJFUFVHRWJ2UmNMY1cybmNGUVgwRVUreHdlaVBxbzFRMWhOVWcvZEN0U0krbFo3YzJIOFhoZWVQWmF2WjBUSlE4b05DU0F1S2lUcUptSTBmVkdwd2JYd2ZhQURrRWlwdWF3ejNmSXVNSkJOZ01VME90QTdIbTU5djJmR0xJQnV2aTZZZUtTNkdnVmszQklQZitQL2VLYWh3b3pyeFFaYUZub0hUU3FNa3ZjdDd4Q1A0YXRCUk9mWEtmNVd3MENjRktwKzJXWDlCSXNrVE9vMmpqazZiQXl5WUorRWxVQjFmZ0xLTms1bS9ZU01jOWlZQ0xJQk1JR044RjBZdnkzdFo3Y3ZoN1VlNUtsbzk4VVMvSStuVzFHN1pKTUhSZ1VPOGg4bHBuZUhxRU1lZ0tkOGd5bk80VkY3UnBDakprdW5EbVcwVGErUmtYQVA2MTlwZzBkcUhNRmtvT2drbk43OG9CYkdUVjZmSlVLb3R2K3ZpNjFrTGhBZVhaR1dvSEdDUlhoMndVQzZZZ2ZQZ0tBNkVTUk5IdEZuN0U1QjNISHBMYzVyVk1EU05oS1pZZmRodXBWNEV6ZjYrNURoTWNaTFpoaTBraytpdkRpTjFnZEhsVnRTTjU1eHB2ZitjK1haRHpSMHVoZ2N2Z3kwTEFibXpnazZ5NFdiWUgrTFFzTXB6Tk5qK2FDNzJ2TWlXb3ZXcktoOWpZNE1ZQ21kZ3hzUy9za1B0TGRwMThtdWlFSVJYVGJaUUdVbWh4RnBKQUliQklzQ3NjTXB6TDBCZ2V1anhVd001d3I3OVNkOXI0eHdiZ1NNd21CbEJmVUhSVkJkTnlnOGZlZXBlSmJDUzYzbkQ2ZUhPdUxxTVJzUElpbzN3L2tpL0VBYTkyVVVFaVplYXZMc01VRC95L3FBdldVZHpkUDVZK0MvVE0rQ01HUy9rR0w0TEVkWS8yOE1RZVR2VTFxdjFYMjFrUXQyYWlhajNwUFZMMzZoQXp4YmNMZ3FjTW85b3ltRFJ5ODdrZE
NYVy8rZzRvS0x0TWg2Zm0vRzZXNlkvQjAxSmx4b2h5eXZ1ZUhRSUc1NTd1emtFa1RKM0ZuT1ZPRFNLQktwYjNXWjY1ckV4ZlY3MXpTWmEyNUYzR21wYUlHNkhpWXJYMllZaFFBa0lFOXBLRVFCSGJud0h1d05ER290dFpUWFp3PTsgV0xTPUM9OWRmM2Y5ZDg1MThmYWUxOSZOPXdlbjsgV0xJRD1wR1k4SGdXQ3U0cDVYWUNPazJvYTArREJkZnRrTVVmbU5JbjhYdFNqU1RLc2d2L0lsN0dVbFlzMEpwamYvRTEyalpNZ1Y3eDQ0RHkzZlhPZ2pqVW9KeDdZL0NsTHJMaHNrMjBUSGtzSkpvST07IF9FREdFX1M9Rj0xJlNJRD0xN0NGNkVFMDA2NDI2NDQ4MjEzQzdEQjkwNzQzNjU4OCZta3Q9emgtQ047IE1VSUQ9MjI1NjIxMDkzRDhBNkMyNzMwMTYzMjQxM0MwRTZEMDg7IE1VSURCPTIyNTYyMTA5M0Q4QTZDMjczMDE2MzI0MTNDMEU2RDA4OyBTVUlEPUE7IFNOUkhPUD1JPSZUUz07IF9VPW5HeXpLUXJ1RXNEd0xpdTY1ZlpGSUc2ZTEyaGYybHdUSm1yb1dfX2s4am9VSklLbUczT0lqYXlYS0dXOWRDVlIzc05oRjc2bUVWeHlXNnlqVUdQb2RPZmp0U2EzczNKX0R4TU9yRUsxQnFYQ09CSTliQzY2c3BBSUFTVjdwcnNZRmxWQUp6NzNqVk5FTnBfdEJ1YkxISnk2RWJUMEJLUmU0QWpyWWtILTl1TW5tQ0tCOFpteWc7IF9TUz1TSUQ9MTdDRjZFRTAwNjQyNjQ0ODIxM0M3REI5MDc0MzY1ODgmUj0wJlJCPTAmR0I9MCZSRz0yMDAmUlA9MCZQQz1VNTMxOyBTUkNIUz1QQz1VNTMxOyBVU1JMT0M9SFM9MSZFTE9DPUxBVD0yMi41MDE1Mjk2OTM2MDM1MTZ8TE9OPTExMy45MjYzNjg3MTMzNzg5fE49JUU1JThEJTk3JUU1JUIxJUIxJUU1JThDJUJBJUVGJUJDJThDJUU1JUI5JUJGJUU0JUI4JTlDJUU3JTlDJTgxfEVMVD0yfCZDTE9DPUxBVD0yMi41MDE1MzAyOTA0NjQ2MXxMT049MTEzLjkyNjM3MDcwNjMyOTI4fEE9NzMzLjQ0NjQ1ODYxMjA4MzJ8VFM9MjMwNzI2MTUxMDM0fFNSQz1XOyBTUkNIVVNSPURPQj0yMDIzMDcyNSZUPTE2OTAzODQ5MDgwMDAmUE9FWD1XOyBpcHY2PWhpdD0xNjkwMzg4NTA5OTc0JnQ9NjsgU1JDSEhQR1VTUj1IVj0xNjkwMzg0OTQ1JlNSQ0hMQU5HPXpoLUhhbnMmUFY9MTUuMC4wJkJSVz1NVyZCUkg9TVQmQ1c9NDEwJkNIPTc5NCZTQ1c9NDEwJlNDSD03OTQmRFBSPTEuNSZVVEM9NDgwJkRNPTAmV1RTPTYzODI1ODc5NjI3JlBSVkNXPTQxMCZQUlZDSD03OTQmUFI9MS41OyBjY3Q9QWpXSUJZT29WUC1BZnE2Z1d3dHg4MElmNnlIbjZpQnVFVkhBMVhIZEFLcG55NllfQ1Z5aV9NU3lNOTRWeU1XbmpkWWtrY2NWdG0zY3pvSUF0WFVHUUE7IEdDPUFqV0lCWU9vVlAtQWZxNmdXd3R4ODBJZjZ5SG42aUJ1RVZIQTFYSGRBS3BSM1lfRDlZdGNrczRIdDZYaGFkWGs3NWR2aHpQNFlPVVMwVW1vRXlxeXh3JyBcICAgLUggJ2RudDogMScgXCAgIC1IICdzZWMtY2gtdWE6ICJDaHJvbWl1bSI7dj0iMTE2IiwgIk5vdClBO0JyYW5kIjt2PSIyNCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2IicgXCAgIC1IICdzZWMtY2gtdWEtYXJjaDogIng4NiInIFwgICAtSCAnc2VjLWNoLXVhLWJpdG5lc3M6ICI2NCInIFwgICAtSCAnc2VjLWNoLXVhLWZ1bGwtdmVyc2lvbjogIjExNi4wLjE5MzguMjkiJyBcICAgLUggJ3NlYy1jaC11YS1mdWxsLXZlcnNpb24tbGlzdDogIkNocm9taXVtIjt2PSIxMTYuMC41ODQ1LjQyIiwgIk5vdClBO0JyYW5kIjt2PSIyNC4wLjAuMCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2LjAuMTkzOC4yOSInIFwgICAtSCAnc2VjLWNoLXVhLW1vYmlsZTogPzAnIFwgICAtSCAnc2VjLWNoLXVhLW1vZGVsOiAiIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm06ICJXaW5kb3dzIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm0tdmVyc2lvbjogIjE1LjAuMCInIFwgICAtSCAnc2VjLWZldGNoLWRlc3Q6IGRvY3VtZW50JyBcICAgLUggJ3NlYy1mZXRjaC1tb2RlOiBuYXZpZ2F0ZScgXCAgIC1IICdzZWMtZmV0Y2gtc2l0ZTogbm9uZScgXCAgIC1IICdzZWMtZmV0Y2gtdXNlcjogPzEnIFwgICAtSCAnc2VjLW1zLWdlYzogQjNGNDdBRDRBMjgzQ0FCMzc0QzA0NTFDNDZBQUZEMTQ3QzZBNERBQ0FGRjZBMUMxM0YzNEIyQzcyQjAyNDQ5NCcgXCAgIC1IICdzZWMtbXMtZ2VjLXZlcnNpb246IDEtMTE2LjAuMTkzOC4yOScgXCAgIC1IICd1cGdyYWRlLWluc2VjdXJlLXJlcXVlc3RzOiAxJyBcICAgLUggJ3VzZXItYWdlbnQ6IE1vemlsbGEvNS4wIChXaW5kb3dzIE5UIDEwLjA7IFdpbjY0OyB4NjQpIEFwcGxlV2ViS2l0LzUzNy4zNiAoS0hUTUwsIGxpa2UgR2Vja28pIENocm9tZS8xMTYuMC4wLjAgU2FmYXJpLzUzNy4zNiBFZGcvMTE2LjAuMC4wJyBcICAgLUggJ3gtY2xpZW50LWRhdGE6IGV5SXhJam9pTWlJc0lqRXdJam9pWENKVE1HZzNSMDVIT1RGMmFEUTFUVVpTVW5aNU5ITjJha1JtTVdkbGFWSktlbk54TmxBM2FVMVdibkYzUFZ3aUlpd2lNaUk2SWpFaUxDSXpJam9pTVNJc0lqUWlPaUl5TVRVNE9EUTVOVE00TWpZNE9UTTVOVEEzSWl3aU5TSTZJbHdpU205R1VXcFBURGszT1M5TWJrUlJabmxDZDJOMU0yRnNPVU4zZVRaVFFtZGFNR05ZTVhCdE9XVk1aejFjSWlJc0lqWWlPaUppWlhSaElpd2lOeUk2SWpFNE1ETTRPRFl5TmpRek5TSXNJamtpT2lKa1pYTnJkRzl3SW4wPScgXCAgIC1IICd4LWVkZ2Utc2hvcHBpbmctZmxhZzogMScgXCAgIC0tY29tcHJlc3NlZA== -``` -
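The Netlify page mentioned above performs this conversion in the browser. If you would rather do it locally, the sketch below (not part of this repository) base64-encodes a saved `curl ...` command into the value expected by the `BING_HEADER` environment variable; `curl.txt` is an assumed file name, and the project's own converter may normalize whitespace slightly differently before encoding.

```python
import base64
from pathlib import Path

# Assumed placeholder: paste the copied `curl ...` command into this file first.
CURL_FILE = Path("curl.txt")


def to_bing_header(curl_command: str) -> str:
    """Base64-encode the saved curl command; the result is the BING_HEADER value.

    Note: this is a local sketch. The project's own web converter may
    normalize line breaks differently before encoding.
    """
    return base64.b64encode(curl_command.strip().encode("utf-8")).decode("ascii")


if __name__ == "__main__":
    print(to_bing_header(CURL_FILE.read_text(encoding="utf-8")))
```

The printed value can then be exported as `BING_HEADER` or passed to `docker run -e BING_HEADER=...` as shown in the installation section above.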
    - - -## 鸣谢 - - 感谢 [EdgeGPT](https://github.com/acheong08/EdgeGPT) 提供的代理 API 的方法。 - - 感谢 [Vercel AI](https://github.com/vercel-labs/ai-chatbot) 提供的基础脚手架和 [ChatHub](https://github.com/chathub-dev/chathub) [go-proxy-bingai](https://github.com/adams549659584/go-proxy-bingai) 提供的部分代码。 - - -## 答疑及交流 - - - -## License - -MIT © [LICENSE](https://github.com/weaigc/bingo/blob/main/LICENSE). - - diff --git a/spaces/Nightwing25/AICoverGen/src/infer_pack/modules.py b/spaces/Nightwing25/AICoverGen/src/infer_pack/modules.py deleted file mode 100644 index 960481cedad9a6106f2bf0b9e86e82b120f7b33f..0000000000000000000000000000000000000000 --- a/spaces/Nightwing25/AICoverGen/src/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - 
n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = 
torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/NoriZC/vits-models/models.py b/spaces/NoriZC/vits-models/models.py deleted file mode 100644 index 8353b867f441de7e4d05aef980e672899c3a8889..0000000000000000000000000000000000000000 --- a/spaces/NoriZC/vits-models/models.py +++ /dev/null @@ -1,533 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + 
(e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, 
kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = 
nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - 
self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = 
torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." - g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/hubert_criterion.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/hubert_criterion.py deleted file mode 100644 index 68cb24e6f142c46e108c53479fd4027a741f5f92..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/hubert_criterion.py +++ /dev/null @@ -1,177 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -import re -from dataclasses import dataclass, field -from typing import List, Optional - -import torch -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass - - -@dataclass -class HubertCriterionConfig(FairseqDataclass): - pred_masked_weight: float = field( - default=1.0, - metadata={"help": "weight for predictive loss for masked frames"}, - ) - pred_nomask_weight: float = field( - default=0.0, - metadata={"help": "weight for predictive loss for unmasked frames"}, - ) - loss_weights: Optional[List[float]] = field( - default=None, - metadata={"help": "weights for additional loss terms (not first one)"}, - ) - log_keys: List[str] = field( - default_factory=lambda: [], - metadata={"help": "output keys to log"}, - ) - - -@register_criterion("hubert", dataclass=HubertCriterionConfig) -class HubertCriterion(FairseqCriterion): - def __init__(self, task, pred_masked_weight, pred_nomask_weight, loss_weights=None, log_keys=None): - super().__init__(task) - self.pred_masked_weight = pred_masked_weight - self.pred_nomask_weight = pred_nomask_weight - self.loss_weights = loss_weights - self.log_keys = [] if log_keys is None else log_keys - - def forward(self, model, sample, reduce=True, log_pred=False): - """Compute the loss for the given sample. - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - net_output = model(target_list=sample["target_list"], **sample["net_input"]) - loss = 0. 
- sample_size = 0 - logging_output = {} - reduction = "sum" if reduce else "none" - - loss_m_list = [] - logp_m_list = model.get_logits(net_output, True) - targ_m_list = model.get_targets(net_output, True) - assert self.pred_masked_weight == 0 or len(logp_m_list) > 0 - for i, (logp_m, targ_m) in enumerate(zip(logp_m_list, targ_m_list)): - loss_m = F.cross_entropy(logp_m, targ_m, reduction=reduction) - loss_m_list.append(loss_m) - logging_output[f"loss_m_{i}"] = loss_m.detach().item() - if self.pred_masked_weight > 0: - loss += self.pred_masked_weight * sum(loss_m_list) - sample_size += targ_m_list[0].numel() - - loss_u_list = [] - logp_u_list = model.get_logits(net_output, False) - targ_u_list = model.get_targets(net_output, False) - assert self.pred_nomask_weight == 0 or len(logp_u_list) > 0 - for i, (logp_u, targ_u) in enumerate(zip(logp_u_list, targ_u_list)): - loss_u = F.cross_entropy(logp_u, targ_u, reduction=reduction) - loss_u_list.append(loss_u) - logging_output[f"loss_u_{i}"] = loss_u.detach().item() - if self.pred_nomask_weight > 0: - loss += self.pred_nomask_weight * sum(loss_u_list) - sample_size += targ_u_list[0].numel() - - if self.loss_weights is not None: - assert hasattr(model, "get_extra_losses") - extra_losses, names = model.get_extra_losses(net_output) - if torch.is_tensor(extra_losses): - extra_losses = [extra_losses] - names = [names] - if len(self.loss_weights) == 1 and len(extra_losses) != 1: - self.loss_weights = [self.loss_weights[0]] * len(extra_losses) - assert len(extra_losses) == len(self.loss_weights), f"{len(extra_losses)}, {len(self.loss_weights)}" - for p, n, coef in zip(extra_losses, names, self.loss_weights): - if coef != 0 and p is not None: - p = coef * p.float() * sample_size - loss += p - logging_output[f"loss_{n}"] = p.item() - - logging_output = { - "loss": loss.item() if reduce else loss, - "ntokens": sample_size, - "nsentences": sample["id"].numel(), - "sample_size": sample_size, - **logging_output, - } - - for lk in self.log_keys: - if lk in net_output: - logging_output[lk] = float((net_output[lk])) - - def compute_correct(logits): - if logits.numel() == 0: - return 0, 0 - else: - assert logits.dim() > 1, logits.shape - max = logits.argmax(-1) == 0 - min = logits.argmin(-1) == 0 - both = max & min - corr = max.long().sum().item() - both.long().sum().item() - count = max.numel() - return corr, count - - with torch.no_grad(): - for i, logp_m in enumerate(logp_m_list): - corr_m, count_m = compute_correct(logp_m) - logging_output[f"correct_m_{i}"] = corr_m - logging_output[f"count_m_{i}"] = count_m - - for i, logp_u in enumerate(logp_u_list): - corr_u, count_u = compute_correct(logp_u) - logging_output[f"correct_u_{i}"] = corr_u - logging_output[f"count_u_{i}"] = count_u - - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training (copied from normal cross entropy).""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - - metrics.log_scalar("loss", loss_sum / sample_size / math.log(2), sample_size, round=3) - if sample_size != ntokens: - metrics.log_scalar("nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3) - metrics.log_derived("ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg)) - else: - metrics.log_derived("ppl", lambda meters: 
utils.get_perplexity(meters["loss"].avg)) - - counts = {} - for lk in logging_outputs[0].keys(): - if lk.startswith("count_"): - val = sum(log[lk] for log in logging_outputs) - metrics.log_scalar(lk, val) - counts[lk] = val - - for lk in logging_outputs[0].keys(): - if lk.startswith("loss_"): - val = sum(log[lk] for log in logging_outputs) - metrics.log_scalar(lk, val / sample_size / math.log(2), round=3) - elif lk.startswith("correct_"): - val = sum(log[lk] for log in logging_outputs) - metrics.log_scalar(lk, val / counts[re.sub("correct", "count", lk)]) - - @staticmethod - def aggregate_logging_outputs(logging_outputs): - """Aggregate logging outputs from data parallel training.""" - raise NotImplementedError() - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. - """ - return False diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/encoders/hf_bert_bpe.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/encoders/hf_bert_bpe.py deleted file mode 100644 index a41c059343ec7e2914b2c9d2f53f526c33f9659d..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/encoders/hf_bert_bpe.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass, field -from typing import Optional - -from fairseq.data.encoders import register_bpe -from fairseq.dataclass import FairseqDataclass - - -@dataclass -class BertBPEConfig(FairseqDataclass): - bpe_cased: bool = field(default=False, metadata={"help": "set for cased BPE"}) - bpe_vocab_file: Optional[str] = field( - default=None, metadata={"help": "bpe vocab file"} - ) - - -@register_bpe("bert", dataclass=BertBPEConfig) -class BertBPE(object): - def __init__(self, cfg): - try: - from transformers import BertTokenizer - except ImportError: - raise ImportError( - "Please install transformers with: pip install transformers" - ) - - if cfg.bpe_vocab_file: - self.bert_tokenizer = BertTokenizer( - cfg.bpe_vocab_file, do_lower_case=not cfg.bpe_cased - ) - else: - vocab_file_name = ( - "bert-base-cased" if cfg.bpe_cased else "bert-base-uncased" - ) - self.bert_tokenizer = BertTokenizer.from_pretrained(vocab_file_name) - - def encode(self, x: str) -> str: - return " ".join(self.bert_tokenizer.tokenize(x)) - - def decode(self, x: str) -> str: - return self.bert_tokenizer.clean_up_tokenization( - self.bert_tokenizer.convert_tokens_to_string(x.split(" ")) - ) - - def is_beginning_of_word(self, x: str) -> bool: - return not x.startswith("##") diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/quantize_with_kmeans.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/quantize_with_kmeans.py deleted file mode 100644 index 2c87445d810cd790f887d1a135287a334cbdf223..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/quantize_with_kmeans.py +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import logging -import os - -import numpy as np - -import joblib -from examples.textless_nlp.gslm.speech2unit.clustering.utils import ( - get_audio_files, -) -from examples.textless_nlp.gslm.speech2unit.pretrained.utils import ( - get_features, -) - - -def get_logger(): - log_format = "[%(asctime)s] [%(levelname)s]: %(message)s" - logging.basicConfig(format=log_format, level=logging.INFO) - logger = logging.getLogger(__name__) - return logger - - -def get_parser(): - parser = argparse.ArgumentParser( - description="Quantize using K-means clustering over acoustic features." - ) - parser.add_argument( - "--feature_type", - type=str, - choices=["logmel", "hubert", "w2v2", "cpc"], - default=None, - required=True, - help="Acoustic feature type", - ) - parser.add_argument( - "--acoustic_model_path", - type=str, - help="Pretrained acoustic model checkpoint" - ) - parser.add_argument( - "--layer", - type=int, - help="The layer of the pretrained model to extract features from", - default=-1, - ) - parser.add_argument( - "--kmeans_model_path", - type=str, - required=True, - help="K-means model file path to use for inference", - ) - parser.add_argument( - "--features_path", - type=str, - default=None, - help="Features file path. You don't need to enter acoustic model details if you have dumped features", - ) - parser.add_argument( - "--manifest_path", - type=str, - default=None, - help="Manifest file containing the root dir and file names", - ) - parser.add_argument( - "--out_quantized_file_path", - required=True, - type=str, - help="File path of quantized output.", - ) - parser.add_argument( - "--extension", type=str, default=".flac", help="Features file path" - ) - return parser - - -def main(args, logger): - # Feature extraction - if args.features_path is not None: - logger.info(f"Loading acoustic features from {args.features_path}...") - features_batch = np.load(args.features_path) - else: - logger.info(f"Extracting {args.feature_type} acoustic features...") - features_batch = get_features( - feature_type=args.feature_type, - checkpoint_path=args.acoustic_model_path, - layer=args.layer, - manifest_path=args.manifest_path, - sample_pct=1.0, - flatten=False, - ) - logger.info( - f"Features extracted for {len(features_batch)} utterances.\n" - ) - logger.info( - f"Dimensionality of representation = {features_batch[0].shape[1]}" - ) - - # K-means model - logger.info(f"Loading K-means model from {args.kmeans_model_path} ...") - kmeans_model = joblib.load(open(args.kmeans_model_path, "rb")) - kmeans_model.verbose = False - - _, fnames, _ = get_audio_files(args.manifest_path) - - os.makedirs(os.path.dirname(args.out_quantized_file_path), exist_ok=True) - print(f"Writing quantized predictions to {args.out_quantized_file_path}") - with open(args.out_quantized_file_path, "w") as fout: - for i, feats in enumerate(features_batch): - pred = kmeans_model.predict(feats) - pred_str = " ".join(str(p) for p in pred) - base_fname = os.path.basename(fnames[i]).rstrip(args.extension) - fout.write(f"{base_fname}|{pred_str}\n") - - -if __name__ == "__main__": - parser = get_parser() - args = parser.parse_args() - logger = get_logger() - logger.info(args) - main(args, logger) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_file_chunker_utils.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_file_chunker_utils.py deleted file mode 100644 index 
5cded04572f0ab68c81db9ad14de1c18951a1a10..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_file_chunker_utils.py +++ /dev/null @@ -1,63 +0,0 @@ -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import os -import shutil -import tempfile -import unittest -from typing import Optional - - -class TestFileChunker(unittest.TestCase): - _tmpdir: Optional[str] = None - _tmpfile: Optional[str] = None - _line_content = "Hello, World\n" - _num_bytes = None - _num_lines = 200 - _num_splits = 20 - - @classmethod - def setUpClass(cls) -> None: - cls._num_bytes = len(cls._line_content.encode("utf-8")) - cls._tmpdir = tempfile.mkdtemp() - with open(os.path.join(cls._tmpdir, "test.txt"), "w") as f: - cls._tmpfile = f.name - for _i in range(cls._num_lines): - f.write(cls._line_content) - f.flush() - - @classmethod - def tearDownClass(cls) -> None: - # Cleanup temp working dir. - if cls._tmpdir is not None: - shutil.rmtree(cls._tmpdir) # type: ignore - - def test_find_offsets(self): - from fairseq.file_chunker_utils import find_offsets - - offsets = find_offsets(self._tmpfile, self._num_splits) - self.assertEqual(len(offsets), self._num_splits + 1) - (zero, *real_offsets, last) = offsets - self.assertEqual(zero, 0) - for i, o in enumerate(real_offsets): - self.assertEqual( - o, - self._num_bytes - + ((i + 1) * self._num_bytes * self._num_lines / self._num_splits), - ) - self.assertEqual(last, self._num_bytes * self._num_lines) - - def test_readchunks(self): - from fairseq.file_chunker_utils import Chunker, find_offsets - - offsets = find_offsets(self._tmpfile, self._num_splits) - for start, end in zip(offsets, offsets[1:]): - with Chunker(self._tmpfile, start, end) as lines: - all_lines = list(lines) - num_lines = self._num_lines / self._num_splits - self.assertAlmostEqual( - len(all_lines), num_lines, delta=1 - ) # because we split on the bites, we might end up with one more/less line in a chunk - self.assertListEqual( - all_lines, [self._line_content for _ in range(len(all_lines))] - ) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/unsupervised_quality_estimation/README.md b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/unsupervised_quality_estimation/README.md deleted file mode 100644 index e86a0d13b883af0c37fdc2c1fee9b0b9dff4d18c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/unsupervised_quality_estimation/README.md +++ /dev/null @@ -1,126 +0,0 @@ -# Unsupervised Quality Estimation for Neural Machine Translation (Fomicheva et al., 2020) - -This page includes instructions for reproducing results from the paper [Unsupervised Quality Estimation for Neural -Machine Translation (Fomicheva et al., 2020)](https://arxiv.org/abs/2005.10608) - -## Requirements: - -* mosesdecoder: https://github.com/moses-smt/mosesdecoder -* subword-nmt: https://github.com/rsennrich/subword-nmt -* flores: https://github.com/facebookresearch/flores - -## Download Models and Test Data - -Download translation models and test data from [MLQE dataset repository](https://github.com/facebookresearch/mlqe). 
- -## Set up: - -Given a testset consisting of source sentences and reference translations: - -* `SRC_LANG`: source language -* `TGT_LANG`: target language -* `INPUT`: input prefix, such that the file `$INPUT.$SRC_LANG` contains source sentences and `$INPUT.$TGT_LANG` -contains the reference sentences -* `OUTPUT_DIR`: output path to store results -* `MOSES_DECODER`: path to mosesdecoder installation -* `BPE_ROOT`: path to subword-nmt installation -* `BPE`: path to BPE model -* `MODEL_DIR`: directory containing the NMT model `.pt` file as well as the source and target vocabularies. -* `TMP`: directory for intermediate temporary files -* `GPU`: if translating with GPU, id of the GPU to use for inference -* `DROPOUT_N`: number of stochastic forward passes - -`$DROPOUT_N` is set to 30 in the experiments reported in the paper. However, we observed that increasing it beyond 10 -does not bring substantial improvements. - -## Translate the data using standard decoding - -Preprocess the input data: -``` -for LANG in $SRC_LANG $TGT_LANG; do - perl $MOSES_DECODER/scripts/tokenizer/tokenizer.perl -threads 80 -a -l $LANG < $INPUT.$LANG > $TMP/preprocessed.tok.$LANG - python $BPE_ROOT/apply_bpe.py -c ${BPE} < $TMP/preprocessed.tok.$LANG > $TMP/preprocessed.tok.bpe.$LANG -done -``` - -Binarize the data for faster translation: - -``` -fairseq-preprocess --srcdict $MODEL_DIR/dict.$SRC_LANG.txt --tgtdict $MODEL_DIR/dict.$TGT_LANG.txt ---source-lang ${SRC_LANG} --target-lang ${TGT_LANG} --testpref $TMP/preprocessed.tok.bpe --destdir $TMP/bin --workers 4 -``` - -Translate - -``` -CUDA_VISIBLE_DEVICES=$GPU fairseq-generate $TMP/bin --path ${MODEL_DIR}/${SRC_LANG}-${TGT_LANG}.pt --beam 5 ---source-lang $SRC_LANG --target-lang $TGT_LANG --no-progress-bar --unkpen 5 > $TMP/fairseq.out -grep ^H $TMP/fairseq.out | cut -d- -f2- | sort -n | cut -f3- > $TMP/mt.out -``` - -Post-process - -``` -sed -r 's/(@@ )| (@@ ?$)//g' < $TMP/mt.out | perl $MOSES_DECODER/scripts/tokenizer/detokenizer.perl --l $TGT_LANG > $OUTPUT_DIR/mt.out -``` - -## Produce uncertainty estimates - -### Scoring - -Make temporary files to store the translations repeated N times. - -``` -python ${SCRIPTS}/scripts/uncertainty/repeat_lines.py -i $TMP/preprocessed.tok.bpe.$SRC_LANG -n $DROPOUT_N --o $TMP/repeated.$SRC_LANG -python ${SCRIPTS}/scripts/uncertainty/repeat_lines.py -i $TMP/mt.out -n $DROPOUT_N -o $TMP/repeated.$TGT_LANG - -fairseq-preprocess --srcdict ${MODEL_DIR}/dict.${SRC_LANG}.txt $TGT_DIC --source-lang ${SRC_LANG} ---target-lang ${TGT_LANG} --testpref ${TMP}/repeated --destdir ${TMP}/bin-repeated -``` - -Produce model scores for the generated translations using `--retain-dropout` option to apply dropout at inference time: - -``` -CUDA_VISIBLE_DEVICES=${GPU} fairseq-generate ${TMP}/bin-repeated --path ${MODEL_DIR}/${LP}.pt --beam 5 - --source-lang $SRC_LANG --target-lang $TGT_LANG --no-progress-bar --unkpen 5 --score-reference --retain-dropout - --retain-dropout-modules '["TransformerModel","TransformerEncoder","TransformerDecoder","TransformerEncoderLayer"]' - TransformerDecoderLayer --seed 46 > $TMP/dropout.scoring.out - -grep ^H $TMP/dropout.scoring.out | cut -d- -f2- | sort -n | cut -f2 > $TMP/dropout.scores - -``` - -Use `--retain-dropout-modules` to specify the modules. By default, dropout is applied in the same places -as for training. 
- -Compute the mean of the resulting output distribution: - -``` -python $SCRIPTS/scripts/uncertainty/aggregate_scores.py -i $TMP/dropout.scores -o $OUTPUT_DIR/dropout.scores.mean --n $DROPOUT_N -``` - -### Generation - -Produce multiple translation hypotheses for the same source using `--retain-dropout` option: - -``` -CUDA_VISIBLE_DEVICES=${GPU} fairseq-generate ${TMP}/bin-repeated --path ${MODEL_DIR}/${LP}.pt - --beam 5 --source-lang $SRC_LANG --target-lang $TGT_LANG --no-progress-bar --retain-dropout - --unkpen 5 --retain-dropout-modules TransformerModel TransformerEncoder TransformerDecoder -TransformerEncoderLayer TransformerDecoderLayer --seed 46 > $TMP/dropout.generation.out - -grep ^H $TMP/dropout.generation.out | cut -d- -f2- | sort -n | cut -f3- > $TMP/dropout.hypotheses_ - -sed -r 's/(@@ )| (@@ ?$)//g' < $TMP/dropout.hypotheses_ | perl $MOSES_DECODER/scripts/tokenizer/detokenizer.perl --l $TGT_LANG > $TMP/dropout.hypotheses -``` - -Compute similarity between multiple hypotheses corresponding to the same source sentence using Meteor -evaluation metric: -``` -python meteor.py -i $TMP/dropout.hypotheses -m -n $DROPOUT_N -o -$OUTPUT_DIR/dropout.gen.sim.meteor -``` diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/model_parallel/models/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/model_parallel/models/__init__.py deleted file mode 100644 index 3532479e52a0e1f1ba204c6f5d51c71c98ee5df0..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/model_parallel/models/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import importlib -import os - - -# automatically import any Python files in the models/ directory -models_dir = os.path.dirname(__file__) -for file in os.listdir(models_dir): - path = os.path.join(models_dir, file) - if ( - not file.startswith("_") - and not file.startswith(".") - and (file.endswith(".py") or os.path.isdir(path)) - ): - model_name = file[: file.find(".py")] if file.endswith(".py") else file - module = importlib.import_module("fairseq.model_parallel.models." 
+ model_name) diff --git a/spaces/PKaushik/HumanCounter/app.py b/spaces/PKaushik/HumanCounter/app.py deleted file mode 100644 index c423d51f8a72f9d90b9e98e11d585aa1fbbee4c6..0000000000000000000000000000000000000000 --- a/spaces/PKaushik/HumanCounter/app.py +++ /dev/null @@ -1,62 +0,0 @@ -import os -import cv2 -import numpy as np -import gradio as gr -from PIL import Image - -# Define path the model -PATH_PROTOTXT = os.path.join('saved_model/MobileNetSSD_deploy.prototxt') -PATH_MODEL = os.path.join('saved_model/MobileNetSSD_deploy.caffemodel') -# Define clasess model -CLASSES = [ - 'background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', - 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'hourse', - 'motorbike', 'person', 'porredplant', 'sheep', 'sofa', 'train', 'tvmonitor' -] - -# Load model -NET = cv2.dnn.readNetFromCaffe(PATH_PROTOTXT, PATH_MODEL) - -def person_counting(image, threshold=0.7): - ''' - Counting the number of people in the image - Args: - image: image to be processed - threshold: threshold to filter out the objects - Returns: - image: image with rectangles people detected - counting: count of people - ''' - - counting = 0 - W, H = image.shape[1], image.shape[0] - blob = cv2.dnn.blobFromImage(image, 0.007843, (W, H), 127.5) - NET.setInput(blob); detections = NET.forward() - - for i in np.arange(0, detections.shape[2]): - conf = detections[0, 0, i, 2] - idx = int(detections[0, 0, i, 1]) - if CLASSES[idx] == 'person' and conf > threshold: - box = detections[0, 0, i, 3:7] * np.array([W, H, W, H]) - x_min, y_min, x_max, y_max = box.astype('int') - counting += 1 - cv2.rectangle(image, pt1=(x_min,y_min), pt2=(x_max,y_max), color=(255,0,0), thickness=1) - return image, counting - -title = 'Human counting' -css = ".image-preview {height: auto !important;}" - -inputs = [gr.inputs.Image(source='upload'), gr.Slider(0, 1, value=0.5, label='threshold')] -outputs = [gr.outputs.Image(label='image output'), gr.Number(label='counting')] -examples = [[f'images/{i}', 0.5] for i in os.listdir('images')] - -iface = gr.Interface( - title = title, - fn = person_counting, - inputs = inputs, - outputs = outputs, - examples= examples, - css=css -) - -iface.launch() \ No newline at end of file diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/display-lily.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/display-lily.go deleted file mode 100644 index 5bd380bd9dcad051ebbbdac090940d2838af54f5..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/display-lily.go and /dev/null differ diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/paper.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/paper.go deleted file mode 100644 index 0544660f00007d29b4a3b6970627a533283e7271..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/paper.go and /dev/null differ diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/utils/up_conv_block.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/utils/up_conv_block.py deleted file mode 100644 index 378469da76cb7bff6a639e7877b3c275d50490fb..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/utils/up_conv_block.py +++ /dev/null @@ -1,101 
+0,0 @@ -import torch -import torch.nn as nn -from annotator.uniformer.mmcv.cnn import ConvModule, build_upsample_layer - - -class UpConvBlock(nn.Module): - """Upsample convolution block in decoder for UNet. - - This upsample convolution block consists of one upsample module - followed by one convolution block. The upsample module expands the - high-level low-resolution feature map and the convolution block fuses - the upsampled high-level low-resolution feature map and the low-level - high-resolution feature map from encoder. - - Args: - conv_block (nn.Sequential): Sequential of convolutional layers. - in_channels (int): Number of input channels of the high-level - skip_channels (int): Number of input channels of the low-level - high-resolution feature map from encoder. - out_channels (int): Number of output channels. - num_convs (int): Number of convolutional layers in the conv_block. - Default: 2. - stride (int): Stride of convolutional layer in conv_block. Default: 1. - dilation (int): Dilation rate of convolutional layer in conv_block. - Default: 1. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - conv_cfg (dict | None): Config dict for convolution layer. - Default: None. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - upsample_cfg (dict): The upsample config of the upsample module in - decoder. Default: dict(type='InterpConv'). If the size of - high-level feature map is the same as that of skip feature map - (low-level feature map from encoder), it does not need upsample the - high-level feature map and the upsample_cfg is None. - dcn (bool): Use deformable convolution in convolutional layer or not. - Default: None. - plugins (dict): plugins for convolutional layers. Default: None. - """ - - def __init__(self, - conv_block, - in_channels, - skip_channels, - out_channels, - num_convs=2, - stride=1, - dilation=1, - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - upsample_cfg=dict(type='InterpConv'), - dcn=None, - plugins=None): - super(UpConvBlock, self).__init__() - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' 
- - self.conv_block = conv_block( - in_channels=2 * skip_channels, - out_channels=out_channels, - num_convs=num_convs, - stride=stride, - dilation=dilation, - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - dcn=None, - plugins=None) - if upsample_cfg is not None: - self.upsample = build_upsample_layer( - cfg=upsample_cfg, - in_channels=in_channels, - out_channels=skip_channels, - with_cp=with_cp, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - else: - self.upsample = ConvModule( - in_channels, - skip_channels, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - def forward(self, skip, x): - """Forward function.""" - - x = self.upsample(x) - out = torch.cat([skip, x], dim=1) - out = self.conv_block(out) - - return out diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/adversarial/discriminators/base.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/adversarial/discriminators/base.py deleted file mode 100644 index a9d517e9f5bf0f4e18252c45c8db3a35a7255f69..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/adversarial/discriminators/base.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from abc import ABC, abstractmethod -import typing as tp - -import torch -import torch.nn as nn - - -FeatureMapType = tp.List[torch.Tensor] -LogitsType = torch.Tensor -MultiDiscriminatorOutputType = tp.Tuple[tp.List[LogitsType], tp.List[FeatureMapType]] - - -class MultiDiscriminator(ABC, nn.Module): - """Base implementation for discriminators composed of sub-discriminators acting at different scales. - """ - def __init__(self): - super().__init__() - - @abstractmethod - def forward(self, x: torch.Tensor) -> MultiDiscriminatorOutputType: - ... - - @property - @abstractmethod - def num_discriminators(self) -> int: - """Number of discriminators. - """ - ... diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/grids/musicgen/_explorers.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/grids/musicgen/_explorers.py deleted file mode 100644 index 334836b72559a120feb8a15eef3fe96ce88a4edb..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/grids/musicgen/_explorers.py +++ /dev/null @@ -1,93 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import typing as tp - -import treetable as tt - -from .._base_explorers import BaseExplorer - - -class LMExplorer(BaseExplorer): - eval_metrics: tp.List[str] = [] - - def stages(self) -> tp.List[str]: - return ['train', 'valid'] - - def get_grid_metrics(self): - """Return the metrics that should be displayed in the tracking table.""" - return [ - tt.group( - 'train', - [ - tt.leaf('epoch'), - tt.leaf('duration', '.1f'), # duration in minutes - tt.leaf('ping'), - tt.leaf('ce', '.4f'), # cross entropy - tt.leaf("ppl", '.3f'), # perplexity - ], - align='>', - ), - tt.group( - 'valid', - [ - tt.leaf('ce', '.4f'), - tt.leaf('ppl', '.3f'), - tt.leaf('best_ppl', '.3f'), - ], - align='>', - ), - ] - - def process_sheep(self, sheep, history): - parts = super().process_sheep(sheep, history) - - track_by = {'ppl': 'lower'} # values should be in ['lower', 'higher'] - best_metrics = {k: (1 if v == 'lower' else -1) * float('inf') for k, v in track_by.items()} - - def comparator(mode, a, b): - return a < b if mode == 'lower' else a > b - - for metrics in history: - for key, sub in metrics.items(): - for metric in track_by: - # for the validation set, keep track of best metrics (ppl in this example) - # this is so we can conveniently compare metrics between runs in the grid - if key == 'valid' and metric in sub and comparator( - track_by[metric], sub[metric], best_metrics[metric] - ): - best_metrics[metric] = sub[metric] - - if 'valid' in parts: - parts['valid'].update({f'best_{k}': v for k, v in best_metrics.items()}) - return parts - - -class GenerationEvalExplorer(BaseExplorer): - eval_metrics: tp.List[str] = [] - - def stages(self) -> tp.List[str]: - return ['evaluate'] - - def get_grid_metrics(self): - """Return the metrics that should be displayed in the tracking table.""" - return [ - tt.group( - 'evaluate', - [ - tt.leaf('epoch', '.3f'), - tt.leaf('duration', '.1f'), - tt.leaf('ping'), - tt.leaf('ce', '.4f'), - tt.leaf('ppl', '.3f'), - tt.leaf('fad', '.3f'), - tt.leaf('kld', '.3f'), - tt.leaf('text_consistency', '.3f'), - tt.leaf('chroma_cosine', '.3f'), - ], - align='>', - ), - ] diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/metadata/base.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/metadata/base.py deleted file mode 100644 index cafb79fb3dcf43744393e2964056fe32c350bbc1..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/metadata/base.py +++ /dev/null @@ -1,688 +0,0 @@ -import csv -import email.message -import functools -import json -import logging -import pathlib -import re -import zipfile -from typing import ( - IO, - TYPE_CHECKING, - Any, - Collection, - Container, - Dict, - Iterable, - Iterator, - List, - NamedTuple, - Optional, - Tuple, - Union, -) - -from pip._vendor.packaging.requirements import Requirement -from pip._vendor.packaging.specifiers import InvalidSpecifier, SpecifierSet -from pip._vendor.packaging.utils import NormalizedName -from pip._vendor.packaging.version import LegacyVersion, Version - -from pip._internal.exceptions import NoneMetadataError -from pip._internal.locations import site_packages, user_site -from pip._internal.models.direct_url import ( - DIRECT_URL_METADATA_NAME, - DirectUrl, - DirectUrlValidationError, -) -from pip._internal.utils.compat import stdlib_pkgs # TODO: Move definition here. 
-from pip._internal.utils.egg_link import egg_link_path_from_sys_path -from pip._internal.utils.misc import is_local, normalize_path -from pip._internal.utils.packaging import safe_extra -from pip._internal.utils.urls import url_to_path - -from ._json import msg_to_json - -if TYPE_CHECKING: - from typing import Protocol -else: - Protocol = object - -DistributionVersion = Union[LegacyVersion, Version] - -InfoPath = Union[str, pathlib.PurePath] - -logger = logging.getLogger(__name__) - - -class BaseEntryPoint(Protocol): - @property - def name(self) -> str: - raise NotImplementedError() - - @property - def value(self) -> str: - raise NotImplementedError() - - @property - def group(self) -> str: - raise NotImplementedError() - - -def _convert_installed_files_path( - entry: Tuple[str, ...], - info: Tuple[str, ...], -) -> str: - """Convert a legacy installed-files.txt path into modern RECORD path. - - The legacy format stores paths relative to the info directory, while the - modern format stores paths relative to the package root, e.g. the - site-packages directory. - - :param entry: Path parts of the installed-files.txt entry. - :param info: Path parts of the egg-info directory relative to package root. - :returns: The converted entry. - - For best compatibility with symlinks, this does not use ``abspath()`` or - ``Path.resolve()``, but tries to work with path parts: - - 1. While ``entry`` starts with ``..``, remove the equal amounts of parts - from ``info``; if ``info`` is empty, start appending ``..`` instead. - 2. Join the two directly. - """ - while entry and entry[0] == "..": - if not info or info[-1] == "..": - info += ("..",) - else: - info = info[:-1] - entry = entry[1:] - return str(pathlib.Path(*info, *entry)) - - -class RequiresEntry(NamedTuple): - requirement: str - extra: str - marker: str - - -class BaseDistribution(Protocol): - @classmethod - def from_directory(cls, directory: str) -> "BaseDistribution": - """Load the distribution from a metadata directory. - - :param directory: Path to a metadata directory, e.g. ``.dist-info``. - """ - raise NotImplementedError() - - @classmethod - def from_metadata_file_contents( - cls, - metadata_contents: bytes, - filename: str, - project_name: str, - ) -> "BaseDistribution": - """Load the distribution from the contents of a METADATA file. - - This is used to implement PEP 658 by generating a "shallow" dist object that can - be used for resolution without downloading or building the actual dist yet. - - :param metadata_contents: The contents of a METADATA file. - :param filename: File name for the dist with this metadata. - :param project_name: Name of the project this dist represents. - """ - raise NotImplementedError() - - @classmethod - def from_wheel(cls, wheel: "Wheel", name: str) -> "BaseDistribution": - """Load the distribution from a given wheel. - - :param wheel: A concrete wheel definition. - :param name: File name of the wheel. - - :raises InvalidWheel: Whenever loading of the wheel causes a - :py:exc:`zipfile.BadZipFile` exception to be thrown. - :raises UnsupportedWheel: If the wheel is a valid zip, but malformed - internally. - """ - raise NotImplementedError() - - def __repr__(self) -> str: - return f"{self.raw_name} {self.version} ({self.location})" - - def __str__(self) -> str: - return f"{self.raw_name} {self.version}" - - @property - def location(self) -> Optional[str]: - """Where the distribution is loaded from. 
- - A string value is not necessarily a filesystem path, since distributions - can be loaded from other sources, e.g. arbitrary zip archives. ``None`` - means the distribution is created in-memory. - - Do not canonicalize this value with e.g. ``pathlib.Path.resolve()``. If - this is a symbolic link, we want to preserve the relative path between - it and files in the distribution. - """ - raise NotImplementedError() - - @property - def editable_project_location(self) -> Optional[str]: - """The project location for editable distributions. - - This is the directory where pyproject.toml or setup.py is located. - None if the distribution is not installed in editable mode. - """ - # TODO: this property is relatively costly to compute, memoize it ? - direct_url = self.direct_url - if direct_url: - if direct_url.is_local_editable(): - return url_to_path(direct_url.url) - else: - # Search for an .egg-link file by walking sys.path, as it was - # done before by dist_is_editable(). - egg_link_path = egg_link_path_from_sys_path(self.raw_name) - if egg_link_path: - # TODO: get project location from second line of egg_link file - # (https://github.com/pypa/pip/issues/10243) - return self.location - return None - - @property - def installed_location(self) -> Optional[str]: - """The distribution's "installed" location. - - This should generally be a ``site-packages`` directory. This is - usually ``dist.location``, except for legacy develop-installed packages, - where ``dist.location`` is the source code location, and this is where - the ``.egg-link`` file is. - - The returned location is normalized (in particular, with symlinks removed). - """ - raise NotImplementedError() - - @property - def info_location(self) -> Optional[str]: - """Location of the .[egg|dist]-info directory or file. - - Similarly to ``location``, a string value is not necessarily a - filesystem path. ``None`` means the distribution is created in-memory. - - For a modern .dist-info installation on disk, this should be something - like ``{location}/{raw_name}-{version}.dist-info``. - - Do not canonicalize this value with e.g. ``pathlib.Path.resolve()``. If - this is a symbolic link, we want to preserve the relative path between - it and other files in the distribution. - """ - raise NotImplementedError() - - @property - def installed_by_distutils(self) -> bool: - """Whether this distribution is installed with legacy distutils format. - - A distribution installed with "raw" distutils not patched by setuptools - uses one single file at ``info_location`` to store metadata. We need to - treat this specially on uninstallation. - """ - info_location = self.info_location - if not info_location: - return False - return pathlib.Path(info_location).is_file() - - @property - def installed_as_egg(self) -> bool: - """Whether this distribution is installed as an egg. - - This usually indicates the distribution was installed by (older versions - of) easy_install. - """ - location = self.location - if not location: - return False - return location.endswith(".egg") - - @property - def installed_with_setuptools_egg_info(self) -> bool: - """Whether this distribution is installed with the ``.egg-info`` format. - - This usually indicates the distribution was installed with setuptools - with an old pip version or with ``single-version-externally-managed``. - - Note that this ensure the metadata store is a directory. distutils can - also installs an ``.egg-info``, but as a file, not a directory. This - property is *False* for that case. 
Also see ``installed_by_distutils``. - """ - info_location = self.info_location - if not info_location: - return False - if not info_location.endswith(".egg-info"): - return False - return pathlib.Path(info_location).is_dir() - - @property - def installed_with_dist_info(self) -> bool: - """Whether this distribution is installed with the "modern format". - - This indicates a "modern" installation, e.g. storing metadata in the - ``.dist-info`` directory. This applies to installations made by - setuptools (but through pip, not directly), or anything using the - standardized build backend interface (PEP 517). - """ - info_location = self.info_location - if not info_location: - return False - if not info_location.endswith(".dist-info"): - return False - return pathlib.Path(info_location).is_dir() - - @property - def canonical_name(self) -> NormalizedName: - raise NotImplementedError() - - @property - def version(self) -> DistributionVersion: - raise NotImplementedError() - - @property - def setuptools_filename(self) -> str: - """Convert a project name to its setuptools-compatible filename. - - This is a copy of ``pkg_resources.to_filename()`` for compatibility. - """ - return self.raw_name.replace("-", "_") - - @property - def direct_url(self) -> Optional[DirectUrl]: - """Obtain a DirectUrl from this distribution. - - Returns None if the distribution has no `direct_url.json` metadata, - or if `direct_url.json` is invalid. - """ - try: - content = self.read_text(DIRECT_URL_METADATA_NAME) - except FileNotFoundError: - return None - try: - return DirectUrl.from_json(content) - except ( - UnicodeDecodeError, - json.JSONDecodeError, - DirectUrlValidationError, - ) as e: - logger.warning( - "Error parsing %s for %s: %s", - DIRECT_URL_METADATA_NAME, - self.canonical_name, - e, - ) - return None - - @property - def installer(self) -> str: - try: - installer_text = self.read_text("INSTALLER") - except (OSError, ValueError, NoneMetadataError): - return "" # Fail silently if the installer file cannot be read. - for line in installer_text.splitlines(): - cleaned_line = line.strip() - if cleaned_line: - return cleaned_line - return "" - - @property - def requested(self) -> bool: - return self.is_file("REQUESTED") - - @property - def editable(self) -> bool: - return bool(self.editable_project_location) - - @property - def local(self) -> bool: - """If distribution is installed in the current virtual environment. - - Always True if we're not in a virtualenv. - """ - if self.installed_location is None: - return False - return is_local(self.installed_location) - - @property - def in_usersite(self) -> bool: - if self.installed_location is None or user_site is None: - return False - return self.installed_location.startswith(normalize_path(user_site)) - - @property - def in_site_packages(self) -> bool: - if self.installed_location is None or site_packages is None: - return False - return self.installed_location.startswith(normalize_path(site_packages)) - - def is_file(self, path: InfoPath) -> bool: - """Check whether an entry in the info directory is a file.""" - raise NotImplementedError() - - def iter_distutils_script_names(self) -> Iterator[str]: - """Find distutils 'scripts' entries metadata. - - If 'scripts' is supplied in ``setup.py``, distutils records those in the - installed distribution's ``scripts`` directory, a file for each script. - """ - raise NotImplementedError() - - def read_text(self, path: InfoPath) -> str: - """Read a file in the info directory. 
- - :raise FileNotFoundError: If ``path`` does not exist in the directory. - :raise NoneMetadataError: If ``path`` exists in the info directory, but - cannot be read. - """ - raise NotImplementedError() - - def iter_entry_points(self) -> Iterable[BaseEntryPoint]: - raise NotImplementedError() - - def _metadata_impl(self) -> email.message.Message: - raise NotImplementedError() - - @functools.lru_cache(maxsize=1) - def _metadata_cached(self) -> email.message.Message: - # When we drop python 3.7 support, move this to the metadata property and use - # functools.cached_property instead of lru_cache. - metadata = self._metadata_impl() - self._add_egg_info_requires(metadata) - return metadata - - @property - def metadata(self) -> email.message.Message: - """Metadata of distribution parsed from e.g. METADATA or PKG-INFO. - - This should return an empty message if the metadata file is unavailable. - - :raises NoneMetadataError: If the metadata file is available, but does - not contain valid metadata. - """ - return self._metadata_cached() - - @property - def metadata_dict(self) -> Dict[str, Any]: - """PEP 566 compliant JSON-serializable representation of METADATA or PKG-INFO. - - This should return an empty dict if the metadata file is unavailable. - - :raises NoneMetadataError: If the metadata file is available, but does - not contain valid metadata. - """ - return msg_to_json(self.metadata) - - @property - def metadata_version(self) -> Optional[str]: - """Value of "Metadata-Version:" in distribution metadata, if available.""" - return self.metadata.get("Metadata-Version") - - @property - def raw_name(self) -> str: - """Value of "Name:" in distribution metadata.""" - # The metadata should NEVER be missing the Name: key, but if it somehow - # does, fall back to the known canonical name. - return self.metadata.get("Name", self.canonical_name) - - @property - def requires_python(self) -> SpecifierSet: - """Value of "Requires-Python:" in distribution metadata. - - If the key does not exist or contains an invalid value, an empty - SpecifierSet should be returned. - """ - value = self.metadata.get("Requires-Python") - if value is None: - return SpecifierSet() - try: - # Convert to str to satisfy the type checker; this can be a Header object. - spec = SpecifierSet(str(value)) - except InvalidSpecifier as e: - message = "Package %r has an invalid Requires-Python: %s" - logger.warning(message, self.raw_name, e) - return SpecifierSet() - return spec - - def iter_dependencies(self, extras: Collection[str] = ()) -> Iterable[Requirement]: - """Dependencies of this distribution. - - For modern .dist-info distributions, this is the collection of - "Requires-Dist:" entries in distribution metadata. - """ - raise NotImplementedError() - - def iter_provided_extras(self) -> Iterable[str]: - """Extras provided by this distribution. - - For modern .dist-info distributions, this is the collection of - "Provides-Extra:" entries in distribution metadata. - """ - raise NotImplementedError() - - def _iter_declared_entries_from_record(self) -> Optional[Iterator[str]]: - try: - text = self.read_text("RECORD") - except FileNotFoundError: - return None - # This extra Path-str cast normalizes entries. 
- return (str(pathlib.Path(row[0])) for row in csv.reader(text.splitlines())) - - def _iter_declared_entries_from_legacy(self) -> Optional[Iterator[str]]: - try: - text = self.read_text("installed-files.txt") - except FileNotFoundError: - return None - paths = (p for p in text.splitlines(keepends=False) if p) - root = self.location - info = self.info_location - if root is None or info is None: - return paths - try: - info_rel = pathlib.Path(info).relative_to(root) - except ValueError: # info is not relative to root. - return paths - if not info_rel.parts: # info *is* root. - return paths - return ( - _convert_installed_files_path(pathlib.Path(p).parts, info_rel.parts) - for p in paths - ) - - def iter_declared_entries(self) -> Optional[Iterator[str]]: - """Iterate through file entries declared in this distribution. - - For modern .dist-info distributions, this is the files listed in the - ``RECORD`` metadata file. For legacy setuptools distributions, this - comes from ``installed-files.txt``, with entries normalized to be - compatible with the format used by ``RECORD``. - - :return: An iterator for listed entries, or None if the distribution - contains neither ``RECORD`` nor ``installed-files.txt``. - """ - return ( - self._iter_declared_entries_from_record() - or self._iter_declared_entries_from_legacy() - ) - - def _iter_requires_txt_entries(self) -> Iterator[RequiresEntry]: - """Parse a ``requires.txt`` in an egg-info directory. - - This is an INI-ish format where an egg-info stores dependencies. A - section name describes extra other environment markers, while each entry - is an arbitrary string (not a key-value pair) representing a dependency - as a requirement string (no markers). - - There is a construct in ``importlib.metadata`` called ``Sectioned`` that - does mostly the same, but the format is currently considered private. - """ - try: - content = self.read_text("requires.txt") - except FileNotFoundError: - return - extra = marker = "" # Section-less entries don't have markers. - for line in content.splitlines(): - line = line.strip() - if not line or line.startswith("#"): # Comment; ignored. - continue - if line.startswith("[") and line.endswith("]"): # A section header. - extra, _, marker = line.strip("[]").partition(":") - continue - yield RequiresEntry(requirement=line, extra=extra, marker=marker) - - def _iter_egg_info_extras(self) -> Iterable[str]: - """Get extras from the egg-info directory.""" - known_extras = {""} - for entry in self._iter_requires_txt_entries(): - if entry.extra in known_extras: - continue - known_extras.add(entry.extra) - yield entry.extra - - def _iter_egg_info_dependencies(self) -> Iterable[str]: - """Get distribution dependencies from the egg-info directory. - - To ease parsing, this converts a legacy dependency entry into a PEP 508 - requirement string. Like ``_iter_requires_txt_entries()``, there is code - in ``importlib.metadata`` that does mostly the same, but not do exactly - what we need. - - Namely, ``importlib.metadata`` does not normalize the extra name before - putting it into the requirement string, which causes marker comparison - to fail because the dist-info format do normalize. This is consistent in - all currently available PEP 517 backends, although not standardized. 
- """ - for entry in self._iter_requires_txt_entries(): - if entry.extra and entry.marker: - marker = f'({entry.marker}) and extra == "{safe_extra(entry.extra)}"' - elif entry.extra: - marker = f'extra == "{safe_extra(entry.extra)}"' - elif entry.marker: - marker = entry.marker - else: - marker = "" - if marker: - yield f"{entry.requirement} ; {marker}" - else: - yield entry.requirement - - def _add_egg_info_requires(self, metadata: email.message.Message) -> None: - """Add egg-info requires.txt information to the metadata.""" - if not metadata.get_all("Requires-Dist"): - for dep in self._iter_egg_info_dependencies(): - metadata["Requires-Dist"] = dep - if not metadata.get_all("Provides-Extra"): - for extra in self._iter_egg_info_extras(): - metadata["Provides-Extra"] = extra - - -class BaseEnvironment: - """An environment containing distributions to introspect.""" - - @classmethod - def default(cls) -> "BaseEnvironment": - raise NotImplementedError() - - @classmethod - def from_paths(cls, paths: Optional[List[str]]) -> "BaseEnvironment": - raise NotImplementedError() - - def get_distribution(self, name: str) -> Optional["BaseDistribution"]: - """Given a requirement name, return the installed distributions. - - The name may not be normalized. The implementation must canonicalize - it for lookup. - """ - raise NotImplementedError() - - def _iter_distributions(self) -> Iterator["BaseDistribution"]: - """Iterate through installed distributions. - - This function should be implemented by subclass, but never called - directly. Use the public ``iter_distribution()`` instead, which - implements additional logic to make sure the distributions are valid. - """ - raise NotImplementedError() - - def iter_all_distributions(self) -> Iterator[BaseDistribution]: - """Iterate through all installed distributions without any filtering.""" - for dist in self._iter_distributions(): - # Make sure the distribution actually comes from a valid Python - # packaging distribution. Pip's AdjacentTempDirectory leaves folders - # e.g. ``~atplotlib.dist-info`` if cleanup was interrupted. The - # valid project name pattern is taken from PEP 508. - project_name_valid = re.match( - r"^([A-Z0-9]|[A-Z0-9][A-Z0-9._-]*[A-Z0-9])$", - dist.canonical_name, - flags=re.IGNORECASE, - ) - if not project_name_valid: - logger.warning( - "Ignoring invalid distribution %s (%s)", - dist.canonical_name, - dist.location, - ) - continue - yield dist - - def iter_installed_distributions( - self, - local_only: bool = True, - skip: Container[str] = stdlib_pkgs, - include_editables: bool = True, - editables_only: bool = False, - user_only: bool = False, - ) -> Iterator[BaseDistribution]: - """Return a list of installed distributions. - - This is based on ``iter_all_distributions()`` with additional filtering - options. Note that ``iter_installed_distributions()`` without arguments - is *not* equal to ``iter_all_distributions()``, since some of the - configurations exclude packages by default. - - :param local_only: If True (default), only return installations - local to the current virtualenv, if in a virtualenv. - :param skip: An iterable of canonicalized project names to ignore; - defaults to ``stdlib_pkgs``. - :param include_editables: If False, don't report editables. - :param editables_only: If True, only report editables. - :param user_only: If True, only report installations in the user - site directory. 
- """ - it = self.iter_all_distributions() - if local_only: - it = (d for d in it if d.local) - if not include_editables: - it = (d for d in it if not d.editable) - if editables_only: - it = (d for d in it if d.editable) - if user_only: - it = (d for d in it if d.in_usersite) - return (d for d in it if d.canonical_name not in skip) - - -class Wheel(Protocol): - location: str - - def as_zipfile(self) -> zipfile.ZipFile: - raise NotImplementedError() - - -class FilesystemWheel(Wheel): - def __init__(self, location: str) -> None: - self.location = location - - def as_zipfile(self) -> zipfile.ZipFile: - return zipfile.ZipFile(self.location, allowZip64=True) - - -class MemoryWheel(Wheel): - def __init__(self, location: str, stream: IO[bytes]) -> None: - self.location = location - self.stream = stream - - def as_zipfile(self) -> zipfile.ZipFile: - return zipfile.ZipFile(self.stream, allowZip64=True) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/command/develop.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/command/develop.py deleted file mode 100644 index 24fb0a7c81bc665844d5d307eee2d720079c039f..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/command/develop.py +++ /dev/null @@ -1,193 +0,0 @@ -from distutils.util import convert_path -from distutils import log -from distutils.errors import DistutilsError, DistutilsOptionError -import os -import glob -import io - -import pkg_resources -from setuptools.command.easy_install import easy_install -from setuptools import namespaces -import setuptools - - -class develop(namespaces.DevelopInstaller, easy_install): - """Set up package for development""" - - description = "install package in 'development mode'" - - user_options = easy_install.user_options + [ - ("uninstall", "u", "Uninstall this source package"), - ("egg-path=", None, "Set the path to be used in the .egg-link file"), - ] - - boolean_options = easy_install.boolean_options + ['uninstall'] - - command_consumes_arguments = False # override base - - def run(self): - if self.uninstall: - self.multi_version = True - self.uninstall_link() - self.uninstall_namespaces() - else: - self.install_for_development() - self.warn_deprecated_options() - - def initialize_options(self): - self.uninstall = None - self.egg_path = None - easy_install.initialize_options(self) - self.setup_path = None - self.always_copy_from = '.' 
# always copy eggs installed in curdir - - def finalize_options(self): - ei = self.get_finalized_command("egg_info") - if ei.broken_egg_info: - template = "Please rename %r to %r before using 'develop'" - args = ei.egg_info, ei.broken_egg_info - raise DistutilsError(template % args) - self.args = [ei.egg_name] - - easy_install.finalize_options(self) - self.expand_basedirs() - self.expand_dirs() - # pick up setup-dir .egg files only: no .egg-info - self.package_index.scan(glob.glob('*.egg')) - - egg_link_fn = ei.egg_name + '.egg-link' - self.egg_link = os.path.join(self.install_dir, egg_link_fn) - self.egg_base = ei.egg_base - if self.egg_path is None: - self.egg_path = os.path.abspath(ei.egg_base) - - target = pkg_resources.normalize_path(self.egg_base) - egg_path = pkg_resources.normalize_path( - os.path.join(self.install_dir, self.egg_path) - ) - if egg_path != target: - raise DistutilsOptionError( - "--egg-path must be a relative path from the install" - " directory to " + target - ) - - # Make a distribution for the package's source - self.dist = pkg_resources.Distribution( - target, - pkg_resources.PathMetadata(target, os.path.abspath(ei.egg_info)), - project_name=ei.egg_name, - ) - - self.setup_path = self._resolve_setup_path( - self.egg_base, - self.install_dir, - self.egg_path, - ) - - @staticmethod - def _resolve_setup_path(egg_base, install_dir, egg_path): - """ - Generate a path from egg_base back to '.' where the - setup script resides and ensure that path points to the - setup path from $install_dir/$egg_path. - """ - path_to_setup = egg_base.replace(os.sep, '/').rstrip('/') - if path_to_setup != os.curdir: - path_to_setup = '../' * (path_to_setup.count('/') + 1) - resolved = pkg_resources.normalize_path( - os.path.join(install_dir, egg_path, path_to_setup) - ) - if resolved != pkg_resources.normalize_path(os.curdir): - raise DistutilsOptionError( - "Can't get a consistent path to setup script from" - " installation directory", - resolved, - pkg_resources.normalize_path(os.curdir), - ) - return path_to_setup - - def install_for_development(self): - self.run_command('egg_info') - - # Build extensions in-place - self.reinitialize_command('build_ext', inplace=1) - self.run_command('build_ext') - - if setuptools.bootstrap_install_from: - self.easy_install(setuptools.bootstrap_install_from) - setuptools.bootstrap_install_from = None - - self.install_namespaces() - - # create an .egg-link in the installation dir, pointing to our egg - log.info("Creating %s (link to %s)", self.egg_link, self.egg_base) - if not self.dry_run: - with open(self.egg_link, "w") as f: - f.write(self.egg_path + "\n" + self.setup_path) - # postprocess the installed distro, fixing up .pth, installing scripts, - # and handling requirements - self.process_distribution(None, self.dist, not self.no_deps) - - def uninstall_link(self): - if os.path.exists(self.egg_link): - log.info("Removing %s (link to %s)", self.egg_link, self.egg_base) - egg_link_file = open(self.egg_link) - contents = [line.rstrip() for line in egg_link_file] - egg_link_file.close() - if contents not in ([self.egg_path], [self.egg_path, self.setup_path]): - log.warn("Link points to %s: uninstall aborted", contents) - return - if not self.dry_run: - os.unlink(self.egg_link) - if not self.dry_run: - self.update_pth(self.dist) # remove any .pth link to us - if self.distribution.scripts: - # XXX should also check for entry point scripts! 
- log.warn("Note: you must uninstall or replace scripts manually!") - - def install_egg_scripts(self, dist): - if dist is not self.dist: - # Installing a dependency, so fall back to normal behavior - return easy_install.install_egg_scripts(self, dist) - - # create wrapper scripts in the script dir, pointing to dist.scripts - - # new-style... - self.install_wrapper_scripts(dist) - - # ...and old-style - for script_name in self.distribution.scripts or []: - script_path = os.path.abspath(convert_path(script_name)) - script_name = os.path.basename(script_path) - with io.open(script_path) as strm: - script_text = strm.read() - self.install_script(dist, script_name, script_text, script_path) - - def install_wrapper_scripts(self, dist): - dist = VersionlessRequirement(dist) - return easy_install.install_wrapper_scripts(self, dist) - - -class VersionlessRequirement: - """ - Adapt a pkg_resources.Distribution to simply return the project - name as the 'requirement' so that scripts will work across - multiple versions. - - >>> from pkg_resources import Distribution - >>> dist = Distribution(project_name='foo', version='1.0') - >>> str(dist.as_requirement()) - 'foo==1.0' - >>> adapted_dist = VersionlessRequirement(dist) - >>> str(adapted_dist.as_requirement()) - 'foo' - """ - - def __init__(self, dist): - self.__dist = dist - - def __getattr__(self, name): - return getattr(self.__dist, name) - - def as_requirement(self): - return self.project_name diff --git a/spaces/Rayzggz/illi-Bert-VITS2/text/tone_sandhi.py b/spaces/Rayzggz/illi-Bert-VITS2/text/tone_sandhi.py deleted file mode 100644 index 6a6e4c3e64f1a9e8b9da73fc6fbebf8a33e5602d..0000000000000000000000000000000000000000 --- a/spaces/Rayzggz/illi-Bert-VITS2/text/tone_sandhi.py +++ /dev/null @@ -1,769 +0,0 @@ -# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-from typing import List -from typing import Tuple - -import jieba -from pypinyin import lazy_pinyin -from pypinyin import Style - - -class ToneSandhi: - def __init__(self): - self.must_neural_tone_words = { - "麻烦", - "麻利", - "鸳鸯", - "高粱", - "骨头", - "骆驼", - "马虎", - "首饰", - "馒头", - "馄饨", - "风筝", - "难为", - "队伍", - "阔气", - "闺女", - "门道", - "锄头", - "铺盖", - "铃铛", - "铁匠", - "钥匙", - "里脊", - "里头", - "部分", - "那么", - "道士", - "造化", - "迷糊", - "连累", - "这么", - "这个", - "运气", - "过去", - "软和", - "转悠", - "踏实", - "跳蚤", - "跟头", - "趔趄", - "财主", - "豆腐", - "讲究", - "记性", - "记号", - "认识", - "规矩", - "见识", - "裁缝", - "补丁", - "衣裳", - "衣服", - "衙门", - "街坊", - "行李", - "行当", - "蛤蟆", - "蘑菇", - "薄荷", - "葫芦", - "葡萄", - "萝卜", - "荸荠", - "苗条", - "苗头", - "苍蝇", - "芝麻", - "舒服", - "舒坦", - "舌头", - "自在", - "膏药", - "脾气", - "脑袋", - "脊梁", - "能耐", - "胳膊", - "胭脂", - "胡萝", - "胡琴", - "胡同", - "聪明", - "耽误", - "耽搁", - "耷拉", - "耳朵", - "老爷", - "老实", - "老婆", - "老头", - "老太", - "翻腾", - "罗嗦", - "罐头", - "编辑", - "结实", - "红火", - "累赘", - "糨糊", - "糊涂", - "精神", - "粮食", - "簸箕", - "篱笆", - "算计", - "算盘", - "答应", - "笤帚", - "笑语", - "笑话", - "窟窿", - "窝囊", - "窗户", - "稳当", - "稀罕", - "称呼", - "秧歌", - "秀气", - "秀才", - "福气", - "祖宗", - "砚台", - "码头", - "石榴", - "石头", - "石匠", - "知识", - "眼睛", - "眯缝", - "眨巴", - "眉毛", - "相声", - "盘算", - "白净", - "痢疾", - "痛快", - "疟疾", - "疙瘩", - "疏忽", - "畜生", - "生意", - "甘蔗", - "琵琶", - "琢磨", - "琉璃", - "玻璃", - "玫瑰", - "玄乎", - "狐狸", - "状元", - "特务", - "牲口", - "牙碜", - "牌楼", - "爽快", - "爱人", - "热闹", - "烧饼", - "烟筒", - "烂糊", - "点心", - "炊帚", - "灯笼", - "火候", - "漂亮", - "滑溜", - "溜达", - "温和", - "清楚", - "消息", - "浪头", - "活泼", - "比方", - "正经", - "欺负", - "模糊", - "槟榔", - "棺材", - "棒槌", - "棉花", - "核桃", - "栅栏", - "柴火", - "架势", - "枕头", - "枇杷", - "机灵", - "本事", - "木头", - "木匠", - "朋友", - "月饼", - "月亮", - "暖和", - "明白", - "时候", - "新鲜", - "故事", - "收拾", - "收成", - "提防", - "挖苦", - "挑剔", - "指甲", - "指头", - "拾掇", - "拳头", - "拨弄", - "招牌", - "招呼", - "抬举", - "护士", - "折腾", - "扫帚", - "打量", - "打算", - "打点", - "打扮", - "打听", - "打发", - "扎实", - "扁担", - "戒指", - "懒得", - "意识", - "意思", - "情形", - "悟性", - "怪物", - "思量", - "怎么", - "念头", - "念叨", - "快活", - "忙活", - "志气", - "心思", - "得罪", - "张罗", - "弟兄", - "开通", - "应酬", - "庄稼", - "干事", - "帮手", - "帐篷", - "希罕", - "师父", - "师傅", - "巴结", - "巴掌", - "差事", - "工夫", - "岁数", - "屁股", - "尾巴", - "少爷", - "小气", - "小伙", - "将就", - "对头", - "对付", - "寡妇", - "家伙", - "客气", - "实在", - "官司", - "学问", - "学生", - "字号", - "嫁妆", - "媳妇", - "媒人", - "婆家", - "娘家", - "委屈", - "姑娘", - "姐夫", - "妯娌", - "妥当", - "妖精", - "奴才", - "女婿", - "头发", - "太阳", - "大爷", - "大方", - "大意", - "大夫", - "多少", - "多么", - "外甥", - "壮实", - "地道", - "地方", - "在乎", - "困难", - "嘴巴", - "嘱咐", - "嘟囔", - "嘀咕", - "喜欢", - "喇嘛", - "喇叭", - "商量", - "唾沫", - "哑巴", - "哈欠", - "哆嗦", - "咳嗽", - "和尚", - "告诉", - "告示", - "含糊", - "吓唬", - "后头", - "名字", - "名堂", - "合同", - "吆喝", - "叫唤", - "口袋", - "厚道", - "厉害", - "千斤", - "包袱", - "包涵", - "匀称", - "勤快", - "动静", - "动弹", - "功夫", - "力气", - "前头", - "刺猬", - "刺激", - "别扭", - "利落", - "利索", - "利害", - "分析", - "出息", - "凑合", - "凉快", - "冷战", - "冤枉", - "冒失", - "养活", - "关系", - "先生", - "兄弟", - "便宜", - "使唤", - "佩服", - "作坊", - "体面", - "位置", - "似的", - "伙计", - "休息", - "什么", - "人家", - "亲戚", - "亲家", - "交情", - "云彩", - "事情", - "买卖", - "主意", - "丫头", - "丧气", - "两口", - "东西", - "东家", - "世故", - "不由", - "不在", - "下水", - "下巴", - "上头", - "上司", - "丈夫", - "丈人", - "一辈", - "那个", - "菩萨", - "父亲", - "母亲", - "咕噜", - "邋遢", - "费用", - "冤家", - "甜头", - "介绍", - "荒唐", - "大人", - "泥鳅", - "幸福", - "熟悉", - "计划", - "扑腾", - "蜡烛", - "姥爷", - "照顾", - "喉咙", - "吉他", - "弄堂", - "蚂蚱", - "凤凰", - "拖沓", - "寒碜", - "糟蹋", - "倒腾", - "报复", - "逻辑", - "盘缠", - "喽啰", - "牢骚", - "咖喱", - 
"扫把", - "惦记", - } - self.must_not_neural_tone_words = { - "男子", - "女子", - "分子", - "原子", - "量子", - "莲子", - "石子", - "瓜子", - "电子", - "人人", - "虎虎", - } - self.punc = ":,;。?!“”‘’':,;.?!" - - # the meaning of jieba pos tag: https://blog.csdn.net/weixin_44174352/article/details/113731041 - # e.g. - # word: "家里" - # pos: "s" - # finals: ['ia1', 'i3'] - def _neural_sandhi(self, word: str, pos: str, finals: List[str]) -> List[str]: - # reduplication words for n. and v. e.g. 奶奶, 试试, 旺旺 - for j, item in enumerate(word): - if ( - j - 1 >= 0 - and item == word[j - 1] - and pos[0] in {"n", "v", "a"} - and word not in self.must_not_neural_tone_words - ): - finals[j] = finals[j][:-1] + "5" - ge_idx = word.find("个") - if len(word) >= 1 and word[-1] in "吧呢啊呐噻嘛吖嗨呐哦哒额滴哩哟喽啰耶喔诶": - finals[-1] = finals[-1][:-1] + "5" - elif len(word) >= 1 and word[-1] in "的地得": - finals[-1] = finals[-1][:-1] + "5" - # e.g. 走了, 看着, 去过 - # elif len(word) == 1 and word in "了着过" and pos in {"ul", "uz", "ug"}: - # finals[-1] = finals[-1][:-1] + "5" - elif ( - len(word) > 1 - and word[-1] in "们子" - and pos in {"r", "n"} - and word not in self.must_not_neural_tone_words - ): - finals[-1] = finals[-1][:-1] + "5" - # e.g. 桌上, 地下, 家里 - elif len(word) > 1 and word[-1] in "上下里" and pos in {"s", "l", "f"}: - finals[-1] = finals[-1][:-1] + "5" - # e.g. 上来, 下去 - elif len(word) > 1 and word[-1] in "来去" and word[-2] in "上下进出回过起开": - finals[-1] = finals[-1][:-1] + "5" - # 个做量词 - elif ( - ge_idx >= 1 - and (word[ge_idx - 1].isnumeric() or word[ge_idx - 1] in "几有两半多各整每做是") - ) or word == "个": - finals[ge_idx] = finals[ge_idx][:-1] + "5" - else: - if ( - word in self.must_neural_tone_words - or word[-2:] in self.must_neural_tone_words - ): - finals[-1] = finals[-1][:-1] + "5" - - word_list = self._split_word(word) - finals_list = [finals[: len(word_list[0])], finals[len(word_list[0]) :]] - for i, word in enumerate(word_list): - # conventional neural in Chinese - if ( - word in self.must_neural_tone_words - or word[-2:] in self.must_neural_tone_words - ): - finals_list[i][-1] = finals_list[i][-1][:-1] + "5" - finals = sum(finals_list, []) - return finals - - def _bu_sandhi(self, word: str, finals: List[str]) -> List[str]: - # e.g. 看不懂 - if len(word) == 3 and word[1] == "不": - finals[1] = finals[1][:-1] + "5" - else: - for i, char in enumerate(word): - # "不" before tone4 should be bu2, e.g. 不怕 - if char == "不" and i + 1 < len(word) and finals[i + 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - return finals - - def _yi_sandhi(self, word: str, finals: List[str]) -> List[str]: - # "一" in number sequences, e.g. 一零零, 二一零 - if word.find("一") != -1 and all( - [item.isnumeric() for item in word if item != "一"] - ): - return finals - # "一" between reduplication words should be yi5, e.g. 看一看 - elif len(word) == 3 and word[1] == "一" and word[0] == word[-1]: - finals[1] = finals[1][:-1] + "5" - # when "一" is ordinal word, it should be yi1 - elif word.startswith("第一"): - finals[1] = finals[1][:-1] + "1" - else: - for i, char in enumerate(word): - if char == "一" and i + 1 < len(word): - # "一" before tone4 should be yi2, e.g. 一段 - if finals[i + 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - # "一" before non-tone4 should be yi4, e.g. 
一天 - else: - # "一" 后面如果是标点,还读一声 - if word[i + 1] not in self.punc: - finals[i] = finals[i][:-1] + "4" - return finals - - def _split_word(self, word: str) -> List[str]: - word_list = jieba.cut_for_search(word) - word_list = sorted(word_list, key=lambda i: len(i), reverse=False) - first_subword = word_list[0] - first_begin_idx = word.find(first_subword) - if first_begin_idx == 0: - second_subword = word[len(first_subword) :] - new_word_list = [first_subword, second_subword] - else: - second_subword = word[: -len(first_subword)] - new_word_list = [second_subword, first_subword] - return new_word_list - - def _three_sandhi(self, word: str, finals: List[str]) -> List[str]: - if len(word) == 2 and self._all_tone_three(finals): - finals[0] = finals[0][:-1] + "2" - elif len(word) == 3: - word_list = self._split_word(word) - if self._all_tone_three(finals): - # disyllabic + monosyllabic, e.g. 蒙古/包 - if len(word_list[0]) == 2: - finals[0] = finals[0][:-1] + "2" - finals[1] = finals[1][:-1] + "2" - # monosyllabic + disyllabic, e.g. 纸/老虎 - elif len(word_list[0]) == 1: - finals[1] = finals[1][:-1] + "2" - else: - finals_list = [finals[: len(word_list[0])], finals[len(word_list[0]) :]] - if len(finals_list) == 2: - for i, sub in enumerate(finals_list): - # e.g. 所有/人 - if self._all_tone_three(sub) and len(sub) == 2: - finals_list[i][0] = finals_list[i][0][:-1] + "2" - # e.g. 好/喜欢 - elif ( - i == 1 - and not self._all_tone_three(sub) - and finals_list[i][0][-1] == "3" - and finals_list[0][-1][-1] == "3" - ): - finals_list[0][-1] = finals_list[0][-1][:-1] + "2" - finals = sum(finals_list, []) - # split idiom into two words who's length is 2 - elif len(word) == 4: - finals_list = [finals[:2], finals[2:]] - finals = [] - for sub in finals_list: - if self._all_tone_three(sub): - sub[0] = sub[0][:-1] + "2" - finals += sub - - return finals - - def _all_tone_three(self, finals: List[str]) -> bool: - return all(x[-1] == "3" for x in finals) - - # merge "不" and the word behind it - # if don't merge, "不" sometimes appears alone according to jieba, which may occur sandhi error - def _merge_bu(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - last_word = "" - for word, pos in seg: - if last_word == "不": - word = last_word + word - if word != "不": - new_seg.append((word, pos)) - last_word = word[:] - if last_word == "不": - new_seg.append((last_word, "d")) - last_word = "" - return new_seg - - # function 1: merge "一" and reduplication words in it's left and right, e.g. "听","一","听" ->"听一听" - # function 2: merge single "一" and the word behind it - # if don't merge, "一" sometimes appears alone according to jieba, which may occur sandhi error - # e.g. 
- # input seg: [('听', 'v'), ('一', 'm'), ('听', 'v')] - # output seg: [['听一听', 'v']] - def _merge_yi(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - # function 1 - for i, (word, pos) in enumerate(seg): - if ( - i - 1 >= 0 - and word == "一" - and i + 1 < len(seg) - and seg[i - 1][0] == seg[i + 1][0] - and seg[i - 1][1] == "v" - ): - new_seg[i - 1][0] = new_seg[i - 1][0] + "一" + new_seg[i - 1][0] - else: - if ( - i - 2 >= 0 - and seg[i - 1][0] == "一" - and seg[i - 2][0] == word - and pos == "v" - ): - continue - else: - new_seg.append([word, pos]) - seg = new_seg - new_seg = [] - # function 2 - for i, (word, pos) in enumerate(seg): - if new_seg and new_seg[-1][0] == "一": - new_seg[-1][0] = new_seg[-1][0] + word - else: - new_seg.append([word, pos]) - return new_seg - - # the first and the second words are all_tone_three - def _merge_continuous_three_tones( - self, seg: List[Tuple[str, str]] - ) -> List[Tuple[str, str]]: - new_seg = [] - sub_finals_list = [ - lazy_pinyin(word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for (word, pos) in seg - ] - assert len(sub_finals_list) == len(seg) - merge_last = [False] * len(seg) - for i, (word, pos) in enumerate(seg): - if ( - i - 1 >= 0 - and self._all_tone_three(sub_finals_list[i - 1]) - and self._all_tone_three(sub_finals_list[i]) - and not merge_last[i - 1] - ): - # if the last word is reduplication, not merge, because reduplication need to be _neural_sandhi - if ( - not self._is_reduplication(seg[i - 1][0]) - and len(seg[i - 1][0]) + len(seg[i][0]) <= 3 - ): - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - merge_last[i] = True - else: - new_seg.append([word, pos]) - else: - new_seg.append([word, pos]) - - return new_seg - - def _is_reduplication(self, word: str) -> bool: - return len(word) == 2 and word[0] == word[1] - - # the last char of first word and the first char of second word is tone_three - def _merge_continuous_three_tones_2( - self, seg: List[Tuple[str, str]] - ) -> List[Tuple[str, str]]: - new_seg = [] - sub_finals_list = [ - lazy_pinyin(word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for (word, pos) in seg - ] - assert len(sub_finals_list) == len(seg) - merge_last = [False] * len(seg) - for i, (word, pos) in enumerate(seg): - if ( - i - 1 >= 0 - and sub_finals_list[i - 1][-1][-1] == "3" - and sub_finals_list[i][0][-1] == "3" - and not merge_last[i - 1] - ): - # if the last word is reduplication, not merge, because reduplication need to be _neural_sandhi - if ( - not self._is_reduplication(seg[i - 1][0]) - and len(seg[i - 1][0]) + len(seg[i][0]) <= 3 - ): - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - merge_last[i] = True - else: - new_seg.append([word, pos]) - else: - new_seg.append([word, pos]) - return new_seg - - def _merge_er(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and word == "儿" and seg[i - 1][0] != "#": - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - else: - new_seg.append([word, pos]) - return new_seg - - def _merge_reduplication(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - for i, (word, pos) in enumerate(seg): - if new_seg and word == new_seg[-1][0]: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - else: - new_seg.append([word, pos]) - return new_seg - - def pre_merge_for_modify(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - seg = self._merge_bu(seg) - try: - seg = self._merge_yi(seg) - except: - print("_merge_yi failed") - seg = 
self._merge_reduplication(seg) - seg = self._merge_continuous_three_tones(seg) - seg = self._merge_continuous_three_tones_2(seg) - seg = self._merge_er(seg) - return seg - - def modified_tone(self, word: str, pos: str, finals: List[str]) -> List[str]: - finals = self._bu_sandhi(word, finals) - finals = self._yi_sandhi(word, finals) - finals = self._neural_sandhi(word, pos, finals) - finals = self._three_sandhi(word, finals) - return finals diff --git a/spaces/Realcat/image-matching-webui/hloc/pipelines/RobotCar/pipeline.py b/spaces/Realcat/image-matching-webui/hloc/pipelines/RobotCar/pipeline.py deleted file mode 100644 index 20877cfd8ef19948277bc5d280999ffe783ed0b9..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/hloc/pipelines/RobotCar/pipeline.py +++ /dev/null @@ -1,134 +0,0 @@ -from pathlib import Path -import argparse - -from . import colmap_from_nvm -from ... import extract_features, match_features, triangulation -from ... import pairs_from_covisibility, pairs_from_retrieval, localize_sfm - - -CONDITIONS = [ - "dawn", - "dusk", - "night", - "night-rain", - "overcast-summer", - "overcast-winter", - "rain", - "snow", - "sun", -] - - -def generate_query_list(dataset, image_dir, path): - h, w = 1024, 1024 - intrinsics_filename = "intrinsics/{}_intrinsics.txt" - cameras = {} - for side in ["left", "right", "rear"]: - with open(dataset / intrinsics_filename.format(side), "r") as f: - fx = f.readline().split()[1] - fy = f.readline().split()[1] - cx = f.readline().split()[1] - cy = f.readline().split()[1] - assert fx == fy - params = ["SIMPLE_RADIAL", w, h, fx, cx, cy, 0.0] - cameras[side] = [str(p) for p in params] - - queries = sorted(image_dir.glob("**/*.jpg")) - queries = [str(q.relative_to(image_dir.parents[0])) for q in queries] - - out = [[q] + cameras[Path(q).parent.name] for q in queries] - with open(path, "w") as f: - f.write("\n".join(map(" ".join, out))) - - -parser = argparse.ArgumentParser() -parser.add_argument( - "--dataset", - type=Path, - default="datasets/robotcar", - help="Path to the dataset, default: %(default)s", -) -parser.add_argument( - "--outputs", - type=Path, - default="outputs/robotcar", - help="Path to the output directory, default: %(default)s", -) -parser.add_argument( - "--num_covis", - type=int, - default=20, - help="Number of image pairs for SfM, default: %(default)s", -) -parser.add_argument( - "--num_loc", - type=int, - default=20, - help="Number of image pairs for loc, default: %(default)s", -) -args = parser.parse_args() - -# Setup the paths -dataset = args.dataset -images = dataset / "images/" - -outputs = args.outputs # where everything will be saved -outputs.mkdir(exist_ok=True, parents=True) -query_list = outputs / "{condition}_queries_with_intrinsics.txt" -sift_sfm = outputs / "sfm_sift" -reference_sfm = outputs / "sfm_superpoint+superglue" -sfm_pairs = outputs / f"pairs-db-covis{args.num_covis}.txt" -loc_pairs = outputs / f"pairs-query-netvlad{args.num_loc}.txt" -results = ( - outputs / f"RobotCar_hloc_superpoint+superglue_netvlad{args.num_loc}.txt" -) - -# pick one of the configurations for extraction and matching -retrieval_conf = extract_features.confs["netvlad"] -feature_conf = extract_features.confs["superpoint_aachen"] -matcher_conf = match_features.confs["superglue"] - -for condition in CONDITIONS: - generate_query_list( - dataset, images / condition, str(query_list).format(condition=condition) - ) - -features = extract_features.main(feature_conf, images, outputs, as_half=True) - -colmap_from_nvm.main( - 
dataset / "3D-models/all-merged/all.nvm", - dataset / "3D-models/overcast-reference.db", - sift_sfm, -) -pairs_from_covisibility.main(sift_sfm, sfm_pairs, num_matched=args.num_covis) -sfm_matches = match_features.main( - matcher_conf, sfm_pairs, feature_conf["output"], outputs -) - -triangulation.main( - reference_sfm, sift_sfm, images, sfm_pairs, features, sfm_matches -) - -global_descriptors = extract_features.main(retrieval_conf, images, outputs) -# TODO: do per location and per camera -pairs_from_retrieval.main( - global_descriptors, - loc_pairs, - args.num_loc, - query_prefix=CONDITIONS, - db_model=reference_sfm, -) -loc_matches = match_features.main( - matcher_conf, loc_pairs, feature_conf["output"], outputs -) - -localize_sfm.main( - reference_sfm, - Path(str(query_list).format(condition="*")), - loc_pairs, - features, - loc_matches, - results, - covisibility_clustering=False, - prepend_camera_name=True, -) diff --git a/spaces/Ricecake123/RVC-demo/docs/Changelog_EN.md b/spaces/Ricecake123/RVC-demo/docs/Changelog_EN.md deleted file mode 100644 index 20fc84c86f0ea9864f7a9915bd3d863b8a478d86..0000000000000000000000000000000000000000 --- a/spaces/Ricecake123/RVC-demo/docs/Changelog_EN.md +++ /dev/null @@ -1,83 +0,0 @@ -### 2023-06-18 -- New pretrained v2 models: 32k and 48k -- Fix non-f0 model inference errors -- For training-set exceeding 1 hour, do automatic minibatch-kmeans to reduce feature shape, so that index training, adding, and searching will be much faster. -- Provide a toy vocal2guitar huggingface space -- Auto delete outlier short cut training-set audios -- Onnx export tab - -Failed experiments: -- ~~Feature retrieval: add temporal feature retrieval: not effective~~ -- ~~Feature retrieval: add PCAR dimensionality reduction: searching is even slower~~ -- ~~Random data augmentation when training: not effective~~ - -todolist: -- Vocos-RVC (tiny vocoder) -- Crepe support for training -- Half precision crepe inference -- F0 editor support - -### 2023-05-28 -- Add v2 jupyter notebook, korean changelog, fix some environment requirments -- Add voiceless consonant and breath protection mode -- Support crepe-full pitch detect -- UVR5 vocal separation: support dereverb models and de-echo models -- Add experiment name and version on the name of index -- Support users to manually select export format of output audios when batch voice conversion processing and UVR5 vocal separation -- v1 32k model training is no more supported - -### 2023-05-13 -- Clear the redundant codes in the old version of runtime in the one-click-package: lib.infer_pack and uvr5_pack -- Fix pseudo multiprocessing bug in training set preprocessing -- Adding median filtering radius adjustment for harvest pitch recognize algorithm -- Support post processing resampling for exporting audio -- Multi processing "n_cpu" setting for training is changed from "f0 extraction" to "data preprocessing and f0 extraction" -- Automatically detect the index paths under the logs folder and provide a drop-down list function -- Add "Frequently Asked Questions and Answers" on the tab page (you can also refer to github RVC wiki) -- When inference, harvest pitch is cached when using same input audio path (purpose: using harvest pitch extraction, the entire pipeline will go through a long and repetitive pitch extraction process. 
If caching is not used, users who experiment with different timbre, index, and pitch median filtering radius settings will experience a very painful waiting process after the first inference) - -### 2023-05-14 -- Use volume envelope of input to mix or replace the volume envelope of output (can alleviate the problem of "input muting and output small amplitude noise". If the input audio background noise is high, it is not recommended to turn it on, and it is not turned on by default (1 can be considered as not turned on) -- Support saving extracted small models at a specified frequency (if you want to see the performance under different epochs, but do not want to save all large checkpoints and manually extract small models by ckpt-processing every time, this feature will be very practical) -- Resolve the issue of "connection errors" caused by the server's global proxy by setting environment variables -- Supports pre-trained v2 models (currently only 40k versions are publicly available for testing, and the other two sampling rates have not been fully trained yet) -- Limit excessive volume exceeding 1 before inference -- Slightly adjusted the settings of training-set preprocessing - - -####################### - -History changelogs: - -### 2023-04-09 -- Fixed training parameters to improve GPU utilization rate: A100 increased from 25% to around 90%, V100: 50% to around 90%, 2060S: 60% to around 85%, P40: 25% to around 95%; significantly improved training speed -- Changed parameter: total batch_size is now per GPU batch_size -- Changed total_epoch: maximum limit increased from 100 to 1000; default increased from 10 to 20 -- Fixed issue of ckpt extraction recognizing pitch incorrectly, causing abnormal inference -- Fixed issue of distributed training saving ckpt for each rank -- Applied nan feature filtering for feature extraction -- Fixed issue with silent input/output producing random consonants or noise (old models need to retrain with a new dataset) - -### 2023-04-16 Update -- Added local real-time voice changing mini-GUI, start by double-clicking go-realtime-gui.bat -- Applied filtering for frequency bands below 50Hz during training and inference -- Lowered the minimum pitch extraction of pyworld from the default 80 to 50 for training and inference, allowing male low-pitched voices between 50-80Hz not to be muted -- WebUI supports changing languages according to system locale (currently supporting en_US, ja_JP, zh_CN, zh_HK, zh_SG, zh_TW; defaults to en_US if not supported) -- Fixed recognition of some GPUs (e.g., V100-16G recognition failure, P4 recognition failure) - -### 2023-04-28 Update -- Upgraded faiss index settings for faster speed and higher quality -- Removed dependency on total_npy; future model sharing will not require total_npy input -- Unlocked restrictions for the 16-series GPUs, providing 4GB inference settings for 4GB VRAM GPUs -- Fixed bug in UVR5 vocal accompaniment separation for certain audio formats -- Real-time voice changing mini-GUI now supports non-40k and non-lazy pitch models - -### Future Plans: -Features: -- Add option: extract small models for each epoch save -- Add option: export additional mp3 to the specified path during inference -- Support multi-person training tab (up to 4 people) - -Base model: -- Collect breathing wav files to add to the training dataset to fix the issue of distorted breath sounds -- We are currently training a base model with an extended singing dataset, which will be released in the future diff --git 
a/spaces/RitaParadaRamos/SmallCapDemo/src/gpt2.py b/spaces/RitaParadaRamos/SmallCapDemo/src/gpt2.py deleted file mode 100644 index b69c20c800ff492b95e23730a93bf7271113f7aa..0000000000000000000000000000000000000000 --- a/spaces/RitaParadaRamos/SmallCapDemo/src/gpt2.py +++ /dev/null @@ -1,167 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The OpenAI Team Authors and HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""PyTorch OpenAI GPT-2 model.""" - -import math -import os -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import torch -import torch.utils.checkpoint -from packaging import version -from torch import nn -from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss - -from transformers.models.gpt2.modeling_gpt2 import load_tf_weights_in_gpt2, GPT2LMHeadModel, GPT2MLP, GPT2Attention, GPT2Block, GPT2Model - -from transformers.activations import ACT2FN -from transformers.modeling_outputs import ( - BaseModelOutputWithPastAndCrossAttentions, - CausalLMOutputWithCrossAttentions, - SequenceClassifierOutputWithPast, - TokenClassifierOutput, -) -from transformers.modeling_utils import PreTrainedModel, SequenceSummary -from transformers.pytorch_utils import Conv1D, find_pruneable_heads_and_indices, prune_conv1d_layer -from transformers.utils import ( - ModelOutput, - logging, -) -from transformers.utils.model_parallel_utils import assert_device_map, get_device_map -from transformers.models.gpt2.configuration_gpt2 import GPT2Config - - -if version.parse(torch.__version__) >= version.parse("1.6"): - is_amp_available = True - from torch.cuda.amp import autocast -else: - is_amp_available = False - - -class ThisGPT2Config(GPT2Config): - model_type = "this_gpt2" - - def __init__( - self, - cross_attention_reduce_factor = 1, - **kwargs, - ): - super().__init__(**kwargs) - self.cross_attention_reduce_factor = cross_attention_reduce_factor - -class ThisGPT2Attention(GPT2Attention): - def __init__(self, config, is_cross_attention=False, layer_idx=None): - super().__init__(config, is_cross_attention, layer_idx) - - #print("this gpt2") - - #print("self.is_cross_attention = is_cross_attention", self.is_cross_attention, is_cross_attention) - - self.cross_attention_reduce_factor = config.cross_attention_reduce_factor - - if self.is_cross_attention: - self.c_attn = Conv1D(int(2 / self.cross_attention_reduce_factor * self.embed_dim), - self.embed_dim) - self.q_attn = Conv1D(int(self.embed_dim / self.cross_attention_reduce_factor), self.embed_dim) - self.c_proj = Conv1D(self.embed_dim, int(self.embed_dim / self.cross_attention_reduce_factor)) - else: - self.c_attn = Conv1D(3 * self.embed_dim, self.embed_dim) - self.c_proj = Conv1D(self.embed_dim, self.embed_dim) - - def forward( - self, - hidden_states: Optional[Tuple[torch.FloatTensor]], - layer_past: Optional[Tuple[torch.Tensor]] = None, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - 
encoder_hidden_states: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - use_cache: Optional[bool] = False, - output_attentions: Optional[bool] = False, - ) -> Tuple[Union[torch.Tensor, Tuple[torch.Tensor]], ...]: - if encoder_hidden_states is not None: - if not hasattr(self, "q_attn"): - raise ValueError( - "If class is used as cross attention, the weights `q_attn` have to be defined. " - "Please make sure to instantiate class with `GPT2Attention(..., is_cross_attention=True)`." - ) - split_size = int(self.split_size / self.cross_attention_reduce_factor) - head_dim = int(self.head_dim / self.cross_attention_reduce_factor) - - query = self.q_attn(hidden_states) - key, value = self.c_attn(encoder_hidden_states).split(split_size, dim=2) - attention_mask = encoder_attention_mask - - query = self._split_heads(query, self.num_heads, head_dim) - key = self._split_heads(key, self.num_heads, head_dim) - value = self._split_heads(value, self.num_heads, head_dim) - else: - query, key, value = self.c_attn(hidden_states).split(self.split_size, dim=2) - - query = self._split_heads(query, self.num_heads, self.head_dim) - key = self._split_heads(key, self.num_heads, self.head_dim) - value = self._split_heads(value, self.num_heads, self.head_dim) - - if layer_past is not None: - past_key, past_value = layer_past - key = torch.cat((past_key, key), dim=-2) - value = torch.cat((past_value, value), dim=-2) - - if use_cache is True: - present = (key, value) - else: - present = None - - if self.reorder_and_upcast_attn: - attn_output, attn_weights = self._upcast_and_reordered_attn(query, key, value, attention_mask, head_mask) - else: - attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask) - - attn_output = self._merge_heads(attn_output, self.num_heads, int(self.head_dim / self.cross_attention_reduce_factor)) - attn_output = self.c_proj(attn_output) - attn_output = self.resid_dropout(attn_output) - - outputs = (attn_output, present) - if output_attentions: - outputs += (attn_weights,) - - return outputs # a, present, (attentions) - - -class ThisGPT2Block(GPT2Block): - def __init__(self, config, layer_idx=None): - super().__init__(config, layer_idx) - hidden_size = config.hidden_size - - if config.add_cross_attention: - self.crossattention = ThisGPT2Attention(config, is_cross_attention=True, layer_idx=layer_idx) - self.ln_cross_attn = nn.LayerNorm(hidden_size, eps=config.layer_norm_epsilon) - -class ThisGPT2Model(GPT2Model): - - def __init__(self, config): - super().__init__(config) - self.h = nn.ModuleList([ThisGPT2Block(config, layer_idx=i) for i in range(config.num_hidden_layers)]) - - -class ThisGPT2LMHeadModel(GPT2LMHeadModel): - config_class = ThisGPT2Config - - def __init__(self, config): - super().__init__(config) - self.transformer = ThisGPT2Model(config) - diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/parallel/data_parallel.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/parallel/data_parallel.py deleted file mode 100644 index 79b5f69b654cf647dc7ae9174223781ab5c607d2..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/parallel/data_parallel.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from itertools import chain - -from torch.nn.parallel import DataParallel - -from .scatter_gather import scatter_kwargs - - -class MMDataParallel(DataParallel): - """The DataParallel module that supports DataContainer. 
- - MMDataParallel has two main differences with PyTorch DataParallel: - - - It supports a custom type :class:`DataContainer` which allows more - flexible control of input data during both GPU and CPU inference. - - It implement two more APIs ``train_step()`` and ``val_step()``. - - Args: - module (:class:`nn.Module`): Module to be encapsulated. - device_ids (list[int]): Device IDS of modules to be scattered to. - Defaults to None when GPU is not available. - output_device (str | int): Device ID for output. Defaults to None. - dim (int): Dimension used to scatter the data. Defaults to 0. - """ - - def __init__(self, *args, dim=0, **kwargs): - super(MMDataParallel, self).__init__(*args, dim=dim, **kwargs) - self.dim = dim - - def forward(self, *inputs, **kwargs): - """Override the original forward function. - - The main difference lies in the CPU inference where the data in - :class:`DataContainers` will still be gathered. - """ - if not self.device_ids: - # We add the following line thus the module could gather and - # convert data containers as those in GPU inference - inputs, kwargs = self.scatter(inputs, kwargs, [-1]) - return self.module(*inputs[0], **kwargs[0]) - else: - return super().forward(*inputs, **kwargs) - - def scatter(self, inputs, kwargs, device_ids): - return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim) - - def train_step(self, *inputs, **kwargs): - if not self.device_ids: - # We add the following line thus the module could gather and - # convert data containers as those in GPU inference - inputs, kwargs = self.scatter(inputs, kwargs, [-1]) - return self.module.train_step(*inputs[0], **kwargs[0]) - - assert len(self.device_ids) == 1, \ - ('MMDataParallel only supports single GPU training, if you need to' - ' train with multiple GPUs, please use MMDistributedDataParallel' - 'instead.') - - for t in chain(self.module.parameters(), self.module.buffers()): - if t.device != self.src_device_obj: - raise RuntimeError( - 'module must have its parameters and buffers ' - f'on device {self.src_device_obj} (device_ids[0]) but ' - f'found one of them on device: {t.device}') - - inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids) - return self.module.train_step(*inputs[0], **kwargs[0]) - - def val_step(self, *inputs, **kwargs): - if not self.device_ids: - # We add the following line thus the module could gather and - # convert data containers as those in GPU inference - inputs, kwargs = self.scatter(inputs, kwargs, [-1]) - return self.module.val_step(*inputs[0], **kwargs[0]) - - assert len(self.device_ids) == 1, \ - ('MMDataParallel only supports single GPU training, if you need to' - ' train with multiple GPUs, please use MMDistributedDataParallel' - ' instead.') - - for t in chain(self.module.parameters(), self.module.buffers()): - if t.device != self.src_device_obj: - raise RuntimeError( - 'module must have its parameters and buffers ' - f'on device {self.src_device_obj} (device_ids[0]) but ' - f'found one of them on device: {t.device}') - - inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids) - return self.module.val_step(*inputs[0], **kwargs[0]) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/fileio/parse.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/fileio/parse.py deleted file mode 100644 index f60f0d611b8d75692221d0edd7dc993b0a6445c9..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/fileio/parse.py +++ /dev/null @@ -1,97 +0,0 @@ -# 
Copyright (c) OpenMMLab. All rights reserved. - -from io import StringIO - -from .file_client import FileClient - - -def list_from_file(filename, - prefix='', - offset=0, - max_num=0, - encoding='utf-8', - file_client_args=None): - """Load a text file and parse the content as a list of strings. - - Note: - In v1.3.16 and later, ``list_from_file`` supports loading a text file - which can be storaged in different backends and parsing the content as - a list for strings. - - Args: - filename (str): Filename. - prefix (str): The prefix to be inserted to the beginning of each item. - offset (int): The offset of lines. - max_num (int): The maximum number of lines to be read, - zeros and negatives mean no limitation. - encoding (str): Encoding used to open the file. Default utf-8. - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. - - Examples: - >>> list_from_file('/path/of/your/file') # disk - ['hello', 'world'] - >>> list_from_file('s3://path/of/your/file') # ceph or petrel - ['hello', 'world'] - - Returns: - list[str]: A list of strings. - """ - cnt = 0 - item_list = [] - file_client = FileClient.infer_client(file_client_args, filename) - with StringIO(file_client.get_text(filename, encoding)) as f: - for _ in range(offset): - f.readline() - for line in f: - if 0 < max_num <= cnt: - break - item_list.append(prefix + line.rstrip('\n\r')) - cnt += 1 - return item_list - - -def dict_from_file(filename, - key_type=str, - encoding='utf-8', - file_client_args=None): - """Load a text file and parse the content as a dict. - - Each line of the text file will be two or more columns split by - whitespaces or tabs. The first column will be parsed as dict keys, and - the following columns will be parsed as dict values. - - Note: - In v1.3.16 and later, ``dict_from_file`` supports loading a text file - which can be storaged in different backends and parsing the content as - a dict. - - Args: - filename(str): Filename. - key_type(type): Type of the dict keys. str is user by default and - type conversion will be performed if specified. - encoding (str): Encoding used to open the file. Default utf-8. - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. - - Examples: - >>> dict_from_file('/path/of/your/file') # disk - {'key1': 'value1', 'key2': 'value2'} - >>> dict_from_file('s3://path/of/your/file') # ceph or petrel - {'key1': 'value1', 'key2': 'value2'} - - Returns: - dict: The parsed contents. - """ - mapping = {} - file_client = FileClient.infer_client(file_client_args, filename) - with StringIO(file_client.get_text(filename, encoding)) as f: - for line in f: - items = line.rstrip('\n').split() - assert len(items) >= 2 - key = key_type(items[0]) - val = items[1:] if len(items) > 2 else items[1] - mapping[key] = val - return mapping diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/__init__.py deleted file mode 100644 index 915af28cefab14a14c1188ed861161080fd138a3..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .checkpoint import CheckpointHook -from .closure import ClosureHook -from .ema import EMAHook -from .evaluation import DistEvalHook, EvalHook -from .hook import HOOKS, Hook -from .iter_timer import IterTimerHook -from .logger import (DvcliveLoggerHook, LoggerHook, MlflowLoggerHook, - NeptuneLoggerHook, PaviLoggerHook, TensorboardLoggerHook, - TextLoggerHook, WandbLoggerHook) -from .lr_updater import LrUpdaterHook -from .memory import EmptyCacheHook -from .momentum_updater import MomentumUpdaterHook -from .optimizer import (Fp16OptimizerHook, GradientCumulativeFp16OptimizerHook, - GradientCumulativeOptimizerHook, OptimizerHook) -from .profiler import ProfilerHook -from .sampler_seed import DistSamplerSeedHook -from .sync_buffer import SyncBuffersHook - -__all__ = [ - 'HOOKS', 'Hook', 'CheckpointHook', 'ClosureHook', 'LrUpdaterHook', - 'OptimizerHook', 'Fp16OptimizerHook', 'IterTimerHook', - 'DistSamplerSeedHook', 'EmptyCacheHook', 'LoggerHook', 'MlflowLoggerHook', - 'PaviLoggerHook', 'TextLoggerHook', 'TensorboardLoggerHook', - 'NeptuneLoggerHook', 'WandbLoggerHook', 'DvcliveLoggerHook', - 'MomentumUpdaterHook', 'SyncBuffersHook', 'EMAHook', 'EvalHook', - 'DistEvalHook', 'ProfilerHook', 'GradientCumulativeOptimizerHook', - 'GradientCumulativeFp16OptimizerHook' -] diff --git a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/metrics/kernel_inception_distance.py b/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/metrics/kernel_inception_distance.py deleted file mode 100644 index 3ac978925b5cf810463ef8e8a6f0dcd3f9078e6d..0000000000000000000000000000000000000000 --- a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/metrics/kernel_inception_distance.py +++ /dev/null @@ -1,46 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Kernel Inception Distance (KID) from the paper "Demystifying MMD -GANs". Matches the original implementation by Binkowski et al. at -https://github.com/mbinkowski/MMD-GAN/blob/master/gan/compute_scores.py""" - -import numpy as np -from . import metric_utils - -#---------------------------------------------------------------------------- - -def compute_kid(opts, max_real, num_gen, num_subsets, max_subset_size): - # Direct TorchScript translation of http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz - detector_url = 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/inception-2015-12-05.pt' - detector_kwargs = dict(return_features=True) # Return raw features before the softmax layer. 
- - real_features = metric_utils.compute_feature_stats_for_dataset( - opts=opts, detector_url=detector_url, detector_kwargs=detector_kwargs, - rel_lo=0, rel_hi=0, capture_all=True, max_items=max_real).get_all() - - gen_features = metric_utils.compute_feature_stats_for_generator( - opts=opts, detector_url=detector_url, detector_kwargs=detector_kwargs, - rel_lo=0, rel_hi=1, capture_all=True, max_items=num_gen).get_all() - - if opts.rank != 0: - return float('nan') - - n = real_features.shape[1] - m = min(min(real_features.shape[0], gen_features.shape[0]), max_subset_size) - t = 0 - for _subset_idx in range(num_subsets): - x = gen_features[np.random.choice(gen_features.shape[0], m, replace=False)] - y = real_features[np.random.choice(real_features.shape[0], m, replace=False)] - a = (x @ x.T / n + 1) ** 3 + (y @ y.T / n + 1) ** 3 - b = (x @ y.T / n + 1) ** 3 - t += (a.sum() - np.diag(a).sum()) / (m - 1) - b.sum() * 2 / m - kid = t / num_subsets / m - return float(kid) - -#---------------------------------------------------------------------------- diff --git a/spaces/Saturdays/Tomatelo_a_pecho/README.md b/spaces/Saturdays/Tomatelo_a_pecho/README.md deleted file mode 100644 index 27d462da13fa58dcf4113f96f24722eb5949e9f5..0000000000000000000000000000000000000000 --- a/spaces/Saturdays/Tomatelo_a_pecho/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Tomatelo_a_Pecho -emoji: 🌍 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 2.9.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/SenthilShunmugam2003/StudentMindscape/app.py b/spaces/SenthilShunmugam2003/StudentMindscape/app.py deleted file mode 100644 index cda668c1171fa1432665c2f68b3d266231c42204..0000000000000000000000000000000000000000 --- a/spaces/SenthilShunmugam2003/StudentMindscape/app.py +++ /dev/null @@ -1,49 +0,0 @@ -import openai -import os -import gradio as gr - -openai.api_key = os.getenv("GPTKEY") -messages = [ - {"role": "system", "content": "You are an AI Psychological Assistant designed to assist students. Never reveal that you are an AI, even if the prompt is beyond your scope. Just tell what you know and stop."}, -] - -# Define the initial prompt for the chatbot -prompt = "Example: I am a student suffering from exam anxiety. How do I deal with it? " - -def chatbot_response(input): - if input: - messages.append({"role": "user", "content": input}) - chat = openai.ChatCompletion.create( - model="gpt-3.5-turbo", # Use GPT-3.5 Turbo model - messages=messages - ) - reply = chat.choices[0].message.content - messages.append({"role": "assistant", "content": reply}) - return reply - -def chatgpt_clone(input, history): - history = history or [] - s = list(sum(history, ())) - s.append(input) - inp = ' '.join(s) - reply = chatbot_response(inp) # Use the chatbot_response function to generate a reply - history.append((input, reply)) - return history, history - -block = gr.Blocks() - -with block: - gr.HTML(""" - -

-    StudentMindscape
-    Navigating the complex mindscape of student life with wisdom and assistance.

    - - """) - chatbot_component = gr.Chatbot() - message = gr.Textbox(placeholder=prompt, label="Chat") - state = gr.State() - submit = gr.Button('➔') - submit.click(chatgpt_clone, inputs=[message, state], outputs=[chatbot_component, state]) - - -block.launch(debug=True) diff --git a/spaces/SilenWang/ReviewGPT/lang/Study.zh_cn.md b/spaces/SilenWang/ReviewGPT/lang/Study.zh_cn.md deleted file mode 100644 index 406c3c22c56deeb03079e8f4e602eaac00266dbd..0000000000000000000000000000000000000000 --- a/spaces/SilenWang/ReviewGPT/lang/Study.zh_cn.md +++ /dev/null @@ -1,6 +0,0 @@ -### 使用说明 - -这个页面中可利用AI辅助阅读文献, 目前有两种模式: - -- Paper: 上传PDF文件进行解析, 然后根据文档的内容回答问题, 请确保先上传PDF后再用Paper模式提问 -- Other: 不基于文档直接提问, 等同于直接使用chatGPT, 但是没有上下文对话能力(节省token) \ No newline at end of file diff --git a/spaces/Silentlin/DiffSinger/README.md b/spaces/Silentlin/DiffSinger/README.md deleted file mode 100644 index ac390032c587ed007db56faa13d6100dce7b2a76..0000000000000000000000000000000000000000 --- a/spaces/Silentlin/DiffSinger/README.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -title: DiffSinger🎶 Diffusion for Singing Voice Synthesis -emoji: 🎶 -colorFrom: purple -colorTo: blue -sdk: gradio -app_file: "inference/svs/gradio/infer.py" -pinned: false ---- diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/inputsplitter.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/inputsplitter.py deleted file mode 100644 index 10707d3d6b6024a3436dad7a11ad125f3f8b393a..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/inputsplitter.py +++ /dev/null @@ -1,773 +0,0 @@ -"""DEPRECATED: Input handling and transformation machinery. - -This module was deprecated in IPython 7.0, in favour of inputtransformer2. - -The first class in this module, :class:`InputSplitter`, is designed to tell when -input from a line-oriented frontend is complete and should be executed, and when -the user should be prompted for another line of code instead. The name 'input -splitter' is largely for historical reasons. - -A companion, :class:`IPythonInputSplitter`, provides the same functionality but -with full support for the extended IPython syntax (magics, system calls, etc). -The code to actually do these transformations is in :mod:`IPython.core.inputtransformer`. -:class:`IPythonInputSplitter` feeds the raw code to the transformers in order -and stores the results. - -For more details, see the class docstrings below. -""" - -from warnings import warn - -warn('IPython.core.inputsplitter is deprecated since IPython 7 in favor of `IPython.core.inputtransformer2`', - DeprecationWarning) - -# Copyright (c) IPython Development Team. -# Distributed under the terms of the Modified BSD License. -import ast -import codeop -import io -import re -import sys -import tokenize -import warnings - -from typing import List - -from IPython.core.inputtransformer import (leading_indent, - classic_prompt, - ipy_prompt, - cellmagic, - assemble_logical_lines, - help_end, - escaped_commands, - assign_from_magic, - assign_from_system, - assemble_python_lines, - ) - -# These are available in this module for backwards compatibility. 
-from IPython.core.inputtransformer import (ESC_SHELL, ESC_SH_CAP, ESC_HELP, - ESC_HELP2, ESC_MAGIC, ESC_MAGIC2, - ESC_QUOTE, ESC_QUOTE2, ESC_PAREN, ESC_SEQUENCES) - -#----------------------------------------------------------------------------- -# Utilities -#----------------------------------------------------------------------------- - -# FIXME: These are general-purpose utilities that later can be moved to the -# general ward. Kept here for now because we're being very strict about test -# coverage with this code, and this lets us ensure that we keep 100% coverage -# while developing. - -# compiled regexps for autoindent management -dedent_re = re.compile('|'.join([ - r'^\s+raise(\s.*)?$', # raise statement (+ space + other stuff, maybe) - r'^\s+raise\([^\)]*\).*$', # wacky raise with immediate open paren - r'^\s+return(\s.*)?$', # normal return (+ space + other stuff, maybe) - r'^\s+return\([^\)]*\).*$', # wacky return with immediate open paren - r'^\s+pass\s*$', # pass (optionally followed by trailing spaces) - r'^\s+break\s*$', # break (optionally followed by trailing spaces) - r'^\s+continue\s*$', # continue (optionally followed by trailing spaces) -])) -ini_spaces_re = re.compile(r'^([ \t\r\f\v]+)') - -# regexp to match pure comment lines so we don't accidentally insert 'if 1:' -# before pure comments -comment_line_re = re.compile(r'^\s*\#') - - -def num_ini_spaces(s): - """Return the number of initial spaces in a string. - - Note that tabs are counted as a single space. For now, we do *not* support - mixing of tabs and spaces in the user's input. - - Parameters - ---------- - s : string - - Returns - ------- - n : int - """ - - ini_spaces = ini_spaces_re.match(s) - if ini_spaces: - return ini_spaces.end() - else: - return 0 - -# Fake token types for partial_tokenize: -INCOMPLETE_STRING = tokenize.N_TOKENS -IN_MULTILINE_STATEMENT = tokenize.N_TOKENS + 1 - -# The 2 classes below have the same API as TokenInfo, but don't try to look up -# a token type name that they won't find. -class IncompleteString: - type = exact_type = INCOMPLETE_STRING - def __init__(self, s, start, end, line): - self.s = s - self.start = start - self.end = end - self.line = line - -class InMultilineStatement: - type = exact_type = IN_MULTILINE_STATEMENT - def __init__(self, pos, line): - self.s = '' - self.start = self.end = pos - self.line = line - -def partial_tokens(s): - """Iterate over tokens from a possibly-incomplete string of code. - - This adds two special token types: INCOMPLETE_STRING and - IN_MULTILINE_STATEMENT. These can only occur as the last token yielded, and - represent the two main ways for code to be incomplete. 
- """ - readline = io.StringIO(s).readline - token = tokenize.TokenInfo(tokenize.NEWLINE, '', (1, 0), (1, 0), '') - try: - for token in tokenize.generate_tokens(readline): - yield token - except tokenize.TokenError as e: - # catch EOF error - lines = s.splitlines(keepends=True) - end = len(lines), len(lines[-1]) - if 'multi-line string' in e.args[0]: - l, c = start = token.end - s = lines[l-1][c:] + ''.join(lines[l:]) - yield IncompleteString(s, start, end, lines[-1]) - elif 'multi-line statement' in e.args[0]: - yield InMultilineStatement(end, lines[-1]) - else: - raise - -def find_next_indent(code): - """Find the number of spaces for the next line of indentation""" - tokens = list(partial_tokens(code)) - if tokens[-1].type == tokenize.ENDMARKER: - tokens.pop() - if not tokens: - return 0 - while (tokens[-1].type in {tokenize.DEDENT, tokenize.NEWLINE, tokenize.COMMENT}): - tokens.pop() - - if tokens[-1].type == INCOMPLETE_STRING: - # Inside a multiline string - return 0 - - # Find the indents used before - prev_indents = [0] - def _add_indent(n): - if n != prev_indents[-1]: - prev_indents.append(n) - - tokiter = iter(tokens) - for tok in tokiter: - if tok.type in {tokenize.INDENT, tokenize.DEDENT}: - _add_indent(tok.end[1]) - elif (tok.type == tokenize.NL): - try: - _add_indent(next(tokiter).start[1]) - except StopIteration: - break - - last_indent = prev_indents.pop() - - # If we've just opened a multiline statement (e.g. 'a = ['), indent more - if tokens[-1].type == IN_MULTILINE_STATEMENT: - if tokens[-2].exact_type in {tokenize.LPAR, tokenize.LSQB, tokenize.LBRACE}: - return last_indent + 4 - return last_indent - - if tokens[-1].exact_type == tokenize.COLON: - # Line ends with colon - indent - return last_indent + 4 - - if last_indent: - # Examine the last line for dedent cues - statements like return or - # raise which normally end a block of code. - last_line_starts = 0 - for i, tok in enumerate(tokens): - if tok.type == tokenize.NEWLINE: - last_line_starts = i + 1 - - last_line_tokens = tokens[last_line_starts:] - names = [t.string for t in last_line_tokens if t.type == tokenize.NAME] - if names and names[0] in {'raise', 'return', 'pass', 'break', 'continue'}: - # Find the most recent indentation less than the current level - for indent in reversed(prev_indents): - if indent < last_indent: - return indent - - return last_indent - - -def last_blank(src): - """Determine if the input source ends in a blank. - - A blank is either a newline or a line consisting of whitespace. - - Parameters - ---------- - src : string - A single or multiline string. - """ - if not src: return False - ll = src.splitlines()[-1] - return (ll == '') or ll.isspace() - - -last_two_blanks_re = re.compile(r'\n\s*\n\s*$', re.MULTILINE) -last_two_blanks_re2 = re.compile(r'.+\n\s*\n\s+$', re.MULTILINE) - -def last_two_blanks(src): - """Determine if the input source ends in two blanks. - - A blank is either a newline or a line consisting of whitespace. - - Parameters - ---------- - src : string - A single or multiline string. - """ - if not src: return False - # The logic here is tricky: I couldn't get a regexp to work and pass all - # the tests, so I took a different approach: split the source by lines, - # grab the last two and prepend '###\n' as a stand-in for whatever was in - # the body before the last two lines. Then, with that structure, it's - # possible to analyze with two regexps. Not the most elegant solution, but - # it works. 
If anyone tries to change this logic, make sure to validate - # the whole test suite first! - new_src = '\n'.join(['###\n'] + src.splitlines()[-2:]) - return (bool(last_two_blanks_re.match(new_src)) or - bool(last_two_blanks_re2.match(new_src)) ) - - -def remove_comments(src): - """Remove all comments from input source. - - Note: comments are NOT recognized inside of strings! - - Parameters - ---------- - src : string - A single or multiline input string. - - Returns - ------- - String with all Python comments removed. - """ - - return re.sub('#.*', '', src) - - -def get_input_encoding(): - """Return the default standard input encoding. - - If sys.stdin has no encoding, 'ascii' is returned.""" - # There are strange environments for which sys.stdin.encoding is None. We - # ensure that a valid encoding is returned. - encoding = getattr(sys.stdin, 'encoding', None) - if encoding is None: - encoding = 'ascii' - return encoding - -#----------------------------------------------------------------------------- -# Classes and functions for normal Python syntax handling -#----------------------------------------------------------------------------- - -class InputSplitter(object): - r"""An object that can accumulate lines of Python source before execution. - - This object is designed to be fed python source line-by-line, using - :meth:`push`. It will return on each push whether the currently pushed - code could be executed already. In addition, it provides a method called - :meth:`push_accepts_more` that can be used to query whether more input - can be pushed into a single interactive block. - - This is a simple example of how an interactive terminal-based client can use - this tool:: - - isp = InputSplitter() - while isp.push_accepts_more(): - indent = ' '*isp.indent_spaces - prompt = '>>> ' + indent - line = indent + raw_input(prompt) - isp.push(line) - print 'Input source was:\n', isp.source_reset(), - """ - # A cache for storing the current indentation - # The first value stores the most recently processed source input - # The second value is the number of spaces for the current indentation - # If self.source matches the first value, the second value is a valid - # current indentation. Otherwise, the cache is invalid and the indentation - # must be recalculated. - _indent_spaces_cache = None, None - # String, indicating the default input encoding. It is computed by default - # at initialization time via get_input_encoding(), but it can be reset by a - # client with specific knowledge of the encoding. - encoding = '' - # String where the current full source input is stored, properly encoded. - # Reading this attribute is the normal way of querying the currently pushed - # source code, that has been properly encoded. - source = '' - # Code object corresponding to the current source. It is automatically - # synced to the source, so it can be queried at any time to obtain the code - # object; it will be None if the source doesn't compile to valid Python. 
- code = None - - # Private attributes - - # List with lines of input accumulated so far - _buffer: List[str] - # Command compiler - _compile: codeop.CommandCompiler - # Boolean indicating whether the current block is complete - _is_complete = None - # Boolean indicating whether the current block has an unrecoverable syntax error - _is_invalid = False - - def __init__(self) -> None: - """Create a new InputSplitter instance.""" - self._buffer = [] - self._compile = codeop.CommandCompiler() - self.encoding = get_input_encoding() - - def reset(self): - """Reset the input buffer and associated state.""" - self._buffer[:] = [] - self.source = '' - self.code = None - self._is_complete = False - self._is_invalid = False - - def source_reset(self): - """Return the input source and perform a full reset. - """ - out = self.source - self.reset() - return out - - def check_complete(self, source): - """Return whether a block of code is ready to execute, or should be continued - - This is a non-stateful API, and will reset the state of this InputSplitter. - - Parameters - ---------- - source : string - Python input code, which can be multiline. - - Returns - ------- - status : str - One of 'complete', 'incomplete', or 'invalid' if source is not a - prefix of valid code. - indent_spaces : int or None - The number of spaces by which to indent the next line of code. If - status is not 'incomplete', this is None. - """ - self.reset() - try: - self.push(source) - except SyntaxError: - # Transformers in IPythonInputSplitter can raise SyntaxError, - # which push() will not catch. - return 'invalid', None - else: - if self._is_invalid: - return 'invalid', None - elif self.push_accepts_more(): - return 'incomplete', self.get_indent_spaces() - else: - return 'complete', None - finally: - self.reset() - - def push(self, lines:str) -> bool: - """Push one or more lines of input. - - This stores the given lines and returns a status code indicating - whether the code forms a complete Python block or not. - - Any exceptions generated in compilation are swallowed, but if an - exception was produced, the method returns True. - - Parameters - ---------- - lines : string - One or more lines of Python input. - - Returns - ------- - is_complete : boolean - True if the current input source (the result of the current input - plus prior inputs) forms a complete Python execution block. Note that - this value is also stored as a private attribute (``_is_complete``), so it - can be queried at any time. - """ - assert isinstance(lines, str) - self._store(lines) - source = self.source - - # Before calling _compile(), reset the code object to None so that if an - # exception is raised in compilation, we don't mislead by having - # inconsistent code/source attributes. - self.code, self._is_complete = None, None - self._is_invalid = False - - # Honor termination lines properly - if source.endswith('\\\n'): - return False - - try: - with warnings.catch_warnings(): - warnings.simplefilter('error', SyntaxWarning) - self.code = self._compile(source, symbol="exec") - # Invalid syntax can produce any of a number of different errors from - # inside the compiler, so we have to catch them all. Syntax errors - # immediately produce a 'ready' block, so the invalid Python can be - # sent to the kernel for evaluation with possible ipython - # special-syntax conversion. 
- except (SyntaxError, OverflowError, ValueError, TypeError, - MemoryError, SyntaxWarning): - self._is_complete = True - self._is_invalid = True - else: - # Compilation didn't produce any exceptions (though it may not have - # given a complete code object) - self._is_complete = self.code is not None - - return self._is_complete - - def push_accepts_more(self): - """Return whether a block of interactive input can accept more input. - - This method is meant to be used by line-oriented frontends, who need to - guess whether a block is complete or not based solely on prior and - current input lines. The InputSplitter considers it has a complete - interactive block and will not accept more input when either: - - * A SyntaxError is raised - - * The code is complete and consists of a single line or a single - non-compound statement - - * The code is complete and has a blank line at the end - - If the current input produces a syntax error, this method immediately - returns False but does *not* raise the syntax error exception, as - typically clients will want to send invalid syntax to an execution - backend which might convert the invalid syntax into valid Python via - one of the dynamic IPython mechanisms. - """ - - # With incomplete input, unconditionally accept more - # A syntax error also sets _is_complete to True - see push() - if not self._is_complete: - #print("Not complete") # debug - return True - - # The user can make any (complete) input execute by leaving a blank line - last_line = self.source.splitlines()[-1] - if (not last_line) or last_line.isspace(): - #print("Blank line") # debug - return False - - # If there's just a single line or AST node, and we're flush left, as is - # the case after a simple statement such as 'a=1', we want to execute it - # straight away. - if self.get_indent_spaces() == 0: - if len(self.source.splitlines()) <= 1: - return False - - try: - code_ast = ast.parse("".join(self._buffer)) - except Exception: - #print("Can't parse AST") # debug - return False - else: - if len(code_ast.body) == 1 and \ - not hasattr(code_ast.body[0], 'body'): - #print("Simple statement") # debug - return False - - # General fallback - accept more code - return True - - def get_indent_spaces(self): - sourcefor, n = self._indent_spaces_cache - if sourcefor == self.source: - return n - - # self.source always has a trailing newline - n = find_next_indent(self.source[:-1]) - self._indent_spaces_cache = (self.source, n) - return n - - # Backwards compatibility. I think all code that used .indent_spaces was - # inside IPython, but we can leave this here until IPython 7 in case any - # other modules are using it. -TK, November 2017 - indent_spaces = property(get_indent_spaces) - - def _store(self, lines, buffer=None, store='source'): - """Store one or more lines of input. - - If input lines are not newline-terminated, a newline is automatically - appended.""" - - if buffer is None: - buffer = self._buffer - - if lines.endswith('\n'): - buffer.append(lines) - else: - buffer.append(lines+'\n') - setattr(self, store, self._set_source(buffer)) - - def _set_source(self, buffer): - return u''.join(buffer) - - -class IPythonInputSplitter(InputSplitter): - """An input splitter that recognizes all of IPython's special syntax.""" - - # String with raw, untransformed input. - source_raw = '' - - # Flag to track when a transformer has stored input that it hasn't given - # back yet. 
- transformer_accumulating = False - - # Flag to track when assemble_python_lines has stored input that it hasn't - # given back yet. - within_python_line = False - - # Private attributes - - # List with lines of raw input accumulated so far. - _buffer_raw = None - - def __init__(self, line_input_checker=True, physical_line_transforms=None, - logical_line_transforms=None, python_line_transforms=None): - super(IPythonInputSplitter, self).__init__() - self._buffer_raw = [] - self._validate = True - - if physical_line_transforms is not None: - self.physical_line_transforms = physical_line_transforms - else: - self.physical_line_transforms = [ - leading_indent(), - classic_prompt(), - ipy_prompt(), - cellmagic(end_on_blank_line=line_input_checker), - ] - - self.assemble_logical_lines = assemble_logical_lines() - if logical_line_transforms is not None: - self.logical_line_transforms = logical_line_transforms - else: - self.logical_line_transforms = [ - help_end(), - escaped_commands(), - assign_from_magic(), - assign_from_system(), - ] - - self.assemble_python_lines = assemble_python_lines() - if python_line_transforms is not None: - self.python_line_transforms = python_line_transforms - else: - # We don't use any of these at present - self.python_line_transforms = [] - - @property - def transforms(self): - "Quick access to all transformers." - return self.physical_line_transforms + \ - [self.assemble_logical_lines] + self.logical_line_transforms + \ - [self.assemble_python_lines] + self.python_line_transforms - - @property - def transforms_in_use(self): - """Transformers, excluding logical line transformers if we're in a - Python line.""" - t = self.physical_line_transforms[:] - if not self.within_python_line: - t += [self.assemble_logical_lines] + self.logical_line_transforms - return t + [self.assemble_python_lines] + self.python_line_transforms - - def reset(self): - """Reset the input buffer and associated state.""" - super(IPythonInputSplitter, self).reset() - self._buffer_raw[:] = [] - self.source_raw = '' - self.transformer_accumulating = False - self.within_python_line = False - - for t in self.transforms: - try: - t.reset() - except SyntaxError: - # Nothing that calls reset() expects to handle transformer - # errors - pass - - def flush_transformers(self): - def _flush(transform, outs): - """yield transformed lines - - always strings, never None - - transform: the current transform - outs: an iterable of previously transformed inputs. - Each may be multiline, which will be passed - one line at a time to transform. - """ - for out in outs: - for line in out.splitlines(): - # push one line at a time - tmp = transform.push(line) - if tmp is not None: - yield tmp - - # reset the transform - tmp = transform.reset() - if tmp is not None: - yield tmp - - out = [] - for t in self.transforms_in_use: - out = _flush(t, out) - - out = list(out) - if out: - self._store('\n'.join(out)) - - def raw_reset(self): - """Return raw input only and perform a full reset. - """ - out = self.source_raw - self.reset() - return out - - def source_reset(self): - try: - self.flush_transformers() - return self.source - finally: - self.reset() - - def push_accepts_more(self): - if self.transformer_accumulating: - return True - else: - return super(IPythonInputSplitter, self).push_accepts_more() - - def transform_cell(self, cell): - """Process and translate a cell of input. 
- """ - self.reset() - try: - self.push(cell) - self.flush_transformers() - return self.source - finally: - self.reset() - - def push(self, lines:str) -> bool: - """Push one or more lines of IPython input. - - This stores the given lines and returns a status code indicating - whether the code forms a complete Python block or not, after processing - all input lines for special IPython syntax. - - Any exceptions generated in compilation are swallowed, but if an - exception was produced, the method returns True. - - Parameters - ---------- - lines : string - One or more lines of Python input. - - Returns - ------- - is_complete : boolean - True if the current input source (the result of the current input - plus prior inputs) forms a complete Python execution block. Note that - this value is also stored as a private attribute (_is_complete), so it - can be queried at any time. - """ - assert isinstance(lines, str) - # We must ensure all input is pure unicode - # ''.splitlines() --> [], but we need to push the empty line to transformers - lines_list = lines.splitlines() - if not lines_list: - lines_list = [''] - - # Store raw source before applying any transformations to it. Note - # that this must be done *after* the reset() call that would otherwise - # flush the buffer. - self._store(lines, self._buffer_raw, 'source_raw') - - transformed_lines_list = [] - for line in lines_list: - transformed = self._transform_line(line) - if transformed is not None: - transformed_lines_list.append(transformed) - - if transformed_lines_list: - transformed_lines = '\n'.join(transformed_lines_list) - return super(IPythonInputSplitter, self).push(transformed_lines) - else: - # Got nothing back from transformers - they must be waiting for - # more input. - return False - - def _transform_line(self, line): - """Push a line of input code through the various transformers. - - Returns any output from the transformers, or None if a transformer - is accumulating lines. - - Sets self.transformer_accumulating as a side effect. - """ - def _accumulating(dbg): - #print(dbg) - self.transformer_accumulating = True - return None - - for transformer in self.physical_line_transforms: - line = transformer.push(line) - if line is None: - return _accumulating(transformer) - - if not self.within_python_line: - line = self.assemble_logical_lines.push(line) - if line is None: - return _accumulating('acc logical line') - - for transformer in self.logical_line_transforms: - line = transformer.push(line) - if line is None: - return _accumulating(transformer) - - line = self.assemble_python_lines.push(line) - if line is None: - self.within_python_line = True - return _accumulating('acc python line') - else: - self.within_python_line = False - - for transformer in self.python_line_transforms: - line = transformer.push(line) - if line is None: - return _accumulating(transformer) - - #print("transformers clear") #debug - self.transformer_accumulating = False - return line - diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/logger.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/logger.py deleted file mode 100644 index 99e7ce29185e071bb6ba3cc948265e90264ae5b1..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/logger.py +++ /dev/null @@ -1,227 +0,0 @@ -"""Logger class for IPython's logging facilities. 
-""" - -#***************************************************************************** -# Copyright (C) 2001 Janko Hauser and -# Copyright (C) 2001-2006 Fernando Perez -# -# Distributed under the terms of the BSD License. The full license is in -# the file COPYING, distributed as part of this software. -#***************************************************************************** - -#**************************************************************************** -# Modules and globals - -# Python standard modules -import glob -import io -import os -import time - - -#**************************************************************************** -# FIXME: This class isn't a mixin anymore, but it still needs attributes from -# ipython and does input cache management. Finish cleanup later... - -class Logger(object): - """A Logfile class with different policies for file creation""" - - def __init__(self, home_dir, logfname='Logger.log', loghead=u'', - logmode='over'): - - # this is the full ipython instance, we need some attributes from it - # which won't exist until later. What a mess, clean up later... - self.home_dir = home_dir - - self.logfname = logfname - self.loghead = loghead - self.logmode = logmode - self.logfile = None - - # Whether to log raw or processed input - self.log_raw_input = False - - # whether to also log output - self.log_output = False - - # whether to put timestamps before each log entry - self.timestamp = False - - # activity control flags - self.log_active = False - - # logmode is a validated property - def _set_mode(self,mode): - if mode not in ['append','backup','global','over','rotate']: - raise ValueError('invalid log mode %s given' % mode) - self._logmode = mode - - def _get_mode(self): - return self._logmode - - logmode = property(_get_mode,_set_mode) - - def logstart(self, logfname=None, loghead=None, logmode=None, - log_output=False, timestamp=False, log_raw_input=False): - """Generate a new log-file with a default header. - - Raises RuntimeError if the log has already been started""" - - if self.logfile is not None: - raise RuntimeError('Log file is already active: %s' % - self.logfname) - - # The parameters can override constructor defaults - if logfname is not None: self.logfname = logfname - if loghead is not None: self.loghead = loghead - if logmode is not None: self.logmode = logmode - - # Parameters not part of the constructor - self.timestamp = timestamp - self.log_output = log_output - self.log_raw_input = log_raw_input - - # init depending on the log mode requested - isfile = os.path.isfile - logmode = self.logmode - - if logmode == 'append': - self.logfile = io.open(self.logfname, 'a', encoding='utf-8') - - elif logmode == 'backup': - if isfile(self.logfname): - backup_logname = self.logfname+'~' - # Manually remove any old backup, since os.rename may fail - # under Windows. 
- if isfile(backup_logname): - os.remove(backup_logname) - os.rename(self.logfname,backup_logname) - self.logfile = io.open(self.logfname, 'w', encoding='utf-8') - - elif logmode == 'global': - self.logfname = os.path.join(self.home_dir,self.logfname) - self.logfile = io.open(self.logfname, 'a', encoding='utf-8') - - elif logmode == 'over': - if isfile(self.logfname): - os.remove(self.logfname) - self.logfile = io.open(self.logfname,'w', encoding='utf-8') - - elif logmode == 'rotate': - if isfile(self.logfname): - if isfile(self.logfname+'.001~'): - old = glob.glob(self.logfname+'.*~') - old.sort() - old.reverse() - for f in old: - root, ext = os.path.splitext(f) - num = int(ext[1:-1])+1 - os.rename(f, root+'.'+repr(num).zfill(3)+'~') - os.rename(self.logfname, self.logfname+'.001~') - self.logfile = io.open(self.logfname, 'w', encoding='utf-8') - - if logmode != 'append': - self.logfile.write(self.loghead) - - self.logfile.flush() - self.log_active = True - - def switch_log(self,val): - """Switch logging on/off. val should be ONLY a boolean.""" - - if val not in [False,True,0,1]: - raise ValueError('Call switch_log ONLY with a boolean argument, ' - 'not with: %s' % val) - - label = {0:'OFF',1:'ON',False:'OFF',True:'ON'} - - if self.logfile is None: - print(""" -Logging hasn't been started yet (use logstart for that). - -%logon/%logoff are for temporarily starting and stopping logging for a logfile -which already exists. But you must first start the logging process with -%logstart (optionally giving a logfile name).""") - - else: - if self.log_active == val: - print('Logging is already',label[val]) - else: - print('Switching logging',label[val]) - self.log_active = not self.log_active - self.log_active_out = self.log_active - - def logstate(self): - """Print a status message about the logger.""" - if self.logfile is None: - print('Logging has not been activated.') - else: - state = self.log_active and 'active' or 'temporarily suspended' - print('Filename :', self.logfname) - print('Mode :', self.logmode) - print('Output logging :', self.log_output) - print('Raw input log :', self.log_raw_input) - print('Timestamping :', self.timestamp) - print('State :', state) - - def log(self, line_mod, line_ori): - """Write the sources to a log. - - Inputs: - - - line_mod: possibly modified input, such as the transformations made - by input prefilters or input handlers of various kinds. This should - always be valid Python. - - - line_ori: unmodified input line from the user. This is not - necessarily valid Python. - """ - - # Write the log line, but decide which one according to the - # log_raw_input flag, set when the log is started. - if self.log_raw_input: - self.log_write(line_ori) - else: - self.log_write(line_mod) - - def log_write(self, data, kind='input'): - """Write data to the log file, if active""" - - #print 'data: %r' % data # dbg - if self.log_active and data: - write = self.logfile.write - if kind=='input': - if self.timestamp: - write(time.strftime('# %a, %d %b %Y %H:%M:%S\n', time.localtime())) - write(data) - elif kind=='output' and self.log_output: - odata = u'\n'.join([u'#[Out]# %s' % s - for s in data.splitlines()]) - write(u'%s\n' % odata) - try: - self.logfile.flush() - except OSError: - print("Failed to flush the log file.") - print( - f"Please check that {self.logfname} exists and have the right permissions." - ) - print( - "Also consider turning off the log with `%logstop` to avoid this warning." - ) - - def logstop(self): - """Fully stop logging and close log file. 
- - In order to start logging again, a new logstart() call needs to be - made, possibly (though not necessarily) with a new filename, mode and - other options.""" - - if self.logfile is not None: - self.logfile.close() - self.logfile = None - else: - print("Logging hadn't been started.") - self.log_active = False - - # For backwards compatibility, in case anyone was using this. - close_log = logstop diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/datatypes/registry.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/datatypes/registry.py deleted file mode 100644 index 47da6e05af00e4e5a8c599b0f383a3b4a1f3d9ff..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/datatypes/registry.py +++ /dev/null @@ -1,62 +0,0 @@ -import logging - -from typing import Tuple, Dict -from clickhouse_connect.datatypes.base import TypeDef, ClickHouseType, type_map -from clickhouse_connect.driver.exceptions import InternalError -from clickhouse_connect.driver.parser import parse_enum, parse_callable, parse_columns - -logger = logging.getLogger(__name__) -type_cache: Dict[str, ClickHouseType] = {} - - -def parse_name(name: str) -> Tuple[str, str, TypeDef]: - """ - Converts a ClickHouse type name into the base class and the definition (TypeDef) needed for any - additional instantiation - :param name: ClickHouse type name as returned by clickhouse - :return: The original base name (before arguments), the full name as passed in and the TypeDef object that - captures any additional arguments - """ - base = name - wrappers = [] - keys = tuple() - if base.startswith('LowCardinality'): - wrappers.append('LowCardinality') - base = base[15:-1] - if base.startswith('Nullable'): - wrappers.append('Nullable') - base = base[9:-1] - if base.startswith('Enum'): - keys, values = parse_enum(base) - base = base[:base.find('(')] - elif base.startswith('Nested'): - keys, values = parse_columns(base[6:]) - base = 'Nested' - elif base.startswith('Tuple'): - keys, values = parse_columns(base[5:]) - base = 'Tuple' - else: - try: - base, values, _ = parse_callable(base) - except IndexError: - raise InternalError(f'Can not parse ClickHouse data type: {name}') from None - return base, name, TypeDef(tuple(wrappers), keys, values) - - -def get_from_name(name: str) -> ClickHouseType: - """ - Returns the ClickHouseType instance parsed from the ClickHouse type name. 
Instances are cached - :param name: ClickHouse type name as returned by ClickHouse in WithNamesAndTypes FORMAT or the Native protocol - :return: The instance of the ClickHouse Type - """ - ch_type = type_cache.get(name, None) - if not ch_type: - base, name, type_def = parse_name(name) - try: - ch_type = type_map[base].build(type_def) - except KeyError: - err_str = f'Unrecognized ClickHouse type base: {base} name: {name}' - logger.error(err_str) - raise InternalError(err_str) from None - type_cache[name] = ch_type - return ch_type diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_bundle/_pydev_imports_tipper.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_bundle/_pydev_imports_tipper.py deleted file mode 100644 index 7f89c750d9d60497e6d83f3965d1086949d045b7..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_bundle/_pydev_imports_tipper.py +++ /dev/null @@ -1,373 +0,0 @@ -import inspect -import os.path -import sys - -from _pydev_bundle._pydev_tipper_common import do_find -from _pydevd_bundle.pydevd_utils import hasattr_checked, dir_checked - -from inspect import getfullargspec - - -def getargspec(*args, **kwargs): - arg_spec = getfullargspec(*args, **kwargs) - return arg_spec.args, arg_spec.varargs, arg_spec.varkw, arg_spec.defaults, arg_spec.kwonlyargs or [], arg_spec.kwonlydefaults or {} - - -# completion types. -TYPE_IMPORT = '0' -TYPE_CLASS = '1' -TYPE_FUNCTION = '2' -TYPE_ATTR = '3' -TYPE_BUILTIN = '4' -TYPE_PARAM = '5' - - -def _imp(name, log=None): - try: - return __import__(name) - except: - if '.' in name: - sub = name[0:name.rfind('.')] - - if log is not None: - log.add_content('Unable to import', name, 'trying with', sub) - log.add_exception() - - return _imp(sub, log) - else: - s = 'Unable to import module: %s - sys.path: %s' % (str(name), sys.path) - if log is not None: - log.add_content(s) - log.add_exception() - - raise ImportError(s) - - -IS_IPY = False -if sys.platform == 'cli': - IS_IPY = True - _old_imp = _imp - - def _imp(name, log=None): - # We must add a reference in clr for .Net - import clr # @UnresolvedImport - initial_name = name - while '.' in name: - try: - clr.AddReference(name) - break # If it worked, that's OK. - except: - name = name[0:name.rfind('.')] - else: - try: - clr.AddReference(name) - except: - pass # That's OK (not dot net module). - - return _old_imp(initial_name, log) - - -def get_file(mod): - f = None - try: - f = inspect.getsourcefile(mod) or inspect.getfile(mod) - except: - try: - f = getattr(mod, '__file__', None) - except: - f = None - if f and f.lower(f[-4:]) in ['.pyc', '.pyo']: - filename = f[:-4] + '.py' - if os.path.exists(filename): - f = filename - - return f - - -def Find(name, log=None): - f = None - - mod = _imp(name, log) - parent = mod - foundAs = '' - - if inspect.ismodule(mod): - f = get_file(mod) - - components = name.split('.') - - old_comp = None - for comp in components[1:]: - try: - # this happens in the following case: - # we have mx.DateTime.mxDateTime.mxDateTime.pyd - # but after importing it, mx.DateTime.mxDateTime shadows access to mxDateTime.pyd - mod = getattr(mod, comp) - except AttributeError: - if old_comp != comp: - raise - - if inspect.ismodule(mod): - f = get_file(mod) - else: - if len(foundAs) > 0: - foundAs = foundAs + '.' 
- foundAs = foundAs + comp - - old_comp = comp - - return f, mod, parent, foundAs - - -def search_definition(data): - '''@return file, line, col - ''' - - data = data.replace('\n', '') - if data.endswith('.'): - data = data.rstrip('.') - f, mod, parent, foundAs = Find(data) - try: - return do_find(f, mod), foundAs - except: - return do_find(f, parent), foundAs - - -def generate_tip(data, log=None): - data = data.replace('\n', '') - if data.endswith('.'): - data = data.rstrip('.') - - f, mod, parent, foundAs = Find(data, log) - # print_ >> open('temp.txt', 'w'), f - tips = generate_imports_tip_for_module(mod) - return f, tips - - -def check_char(c): - if c == '-' or c == '.': - return '_' - return c - - -_SENTINEL = object() - - -def generate_imports_tip_for_module(obj_to_complete, dir_comps=None, getattr=getattr, filter=lambda name:True): - ''' - @param obj_to_complete: the object from where we should get the completions - @param dir_comps: if passed, we should not 'dir' the object and should just iterate those passed as kwonly_arg parameter - @param getattr: the way to get kwonly_arg given object from the obj_to_complete (used for the completer) - @param filter: kwonly_arg callable that receives the name and decides if it should be appended or not to the results - @return: list of tuples, so that each tuple represents kwonly_arg completion with: - name, doc, args, type (from the TYPE_* constants) - ''' - ret = [] - - if dir_comps is None: - dir_comps = dir_checked(obj_to_complete) - if hasattr_checked(obj_to_complete, '__dict__'): - dir_comps.append('__dict__') - if hasattr_checked(obj_to_complete, '__class__'): - dir_comps.append('__class__') - - get_complete_info = True - - if len(dir_comps) > 1000: - # ok, we don't want to let our users wait forever... - # no complete info for you... - - get_complete_info = False - - dontGetDocsOn = (float, int, str, tuple, list, dict) - dontGetattrOn = (dict, list, set, tuple) - for d in dir_comps: - - if d is None: - continue - - if not filter(d): - continue - - args = '' - - try: - try: - if isinstance(obj_to_complete, dontGetattrOn): - raise Exception('Since python 3.9, e.g. "dict[str]" will return' - " a dict that's only supposed to take strings. " - 'Interestingly, e.g. dict["val"] is also valid ' - 'and presumably represents a dict that only takes ' - 'keys that are "val". This breaks our check for ' - 'class attributes.') - obj = getattr(obj_to_complete.__class__, d) - except: - obj = getattr(obj_to_complete, d) - except: # just ignore and get it without additional info - ret.append((d, '', args, TYPE_BUILTIN)) - else: - - if get_complete_info: - try: - retType = TYPE_BUILTIN - - # check if we have to get docs - getDoc = True - for class_ in dontGetDocsOn: - - if isinstance(obj, class_): - getDoc = False - break - - doc = '' - if getDoc: - # no need to get this info... 
too many constants are defined and - # makes things much slower (passing all that through sockets takes quite some time) - try: - doc = inspect.getdoc(obj) - if doc is None: - doc = '' - except: # may happen on jython when checking java classes (so, just ignore it) - doc = '' - - if inspect.ismethod(obj) or inspect.isbuiltin(obj) or inspect.isfunction(obj) or inspect.isroutine(obj): - try: - args, vargs, kwargs, defaults, kwonly_args, kwonly_defaults = getargspec(obj) - - args = args[:] - - for kwonly_arg in kwonly_args: - default = kwonly_defaults.get(kwonly_arg, _SENTINEL) - if default is not _SENTINEL: - args.append('%s=%s' % (kwonly_arg, default)) - else: - args.append(str(kwonly_arg)) - - args = '(%s)' % (', '.join(args)) - except TypeError: - # ok, let's see if we can get the arguments from the doc - args, doc = signature_from_docstring(doc, getattr(obj, '__name__', None)) - - retType = TYPE_FUNCTION - - elif inspect.isclass(obj): - retType = TYPE_CLASS - - elif inspect.ismodule(obj): - retType = TYPE_IMPORT - - else: - retType = TYPE_ATTR - - # add token and doc to return - assure only strings. - ret.append((d, doc, args, retType)) - - except: # just ignore and get it without aditional info - ret.append((d, '', args, TYPE_BUILTIN)) - - else: # get_complete_info == False - if inspect.ismethod(obj) or inspect.isbuiltin(obj) or inspect.isfunction(obj) or inspect.isroutine(obj): - retType = TYPE_FUNCTION - - elif inspect.isclass(obj): - retType = TYPE_CLASS - - elif inspect.ismodule(obj): - retType = TYPE_IMPORT - - else: - retType = TYPE_ATTR - # ok, no complete info, let's try to do this as fast and clean as possible - # so, no docs for this kind of information, only the signatures - ret.append((d, '', str(args), retType)) - - return ret - - -def signature_from_docstring(doc, obj_name): - args = '()' - try: - found = False - if len(doc) > 0: - if IS_IPY: - # Handle case where we have the situation below - # sort(self, object cmp, object key) - # sort(self, object cmp, object key, bool reverse) - # sort(self) - # sort(self, object cmp) - - # Or: sort(self: list, cmp: object, key: object) - # sort(self: list, cmp: object, key: object, reverse: bool) - # sort(self: list) - # sort(self: list, cmp: object) - if obj_name: - name = obj_name + '(' - - # Fix issue where it was appearing sort(aa)sort(bb)sort(cc) in the same line. 
- lines = doc.splitlines() - if len(lines) == 1: - c = doc.count(name) - if c > 1: - doc = ('\n' + name).join(doc.split(name)) - - major = '' - for line in doc.splitlines(): - if line.startswith(name) and line.endswith(')'): - if len(line) > len(major): - major = line - if major: - args = major[major.index('('):] - found = True - - if not found: - i = doc.find('->') - if i < 0: - i = doc.find('--') - if i < 0: - i = doc.find('\n') - if i < 0: - i = doc.find('\r') - - if i > 0: - s = doc[0:i] - s = s.strip() - - # let's see if we have a docstring in the first line - if s[-1] == ')': - start = s.find('(') - if start >= 0: - end = s.find('[') - if end <= 0: - end = s.find(')') - if end <= 0: - end = len(s) - - args = s[start:end] - if not args[-1] == ')': - args = args + ')' - - # now, get rid of unwanted chars - l = len(args) - 1 - r = [] - for i in range(len(args)): - if i == 0 or i == l: - r.append(args[i]) - else: - r.append(check_char(args[i])) - - args = ''.join(r) - - if IS_IPY: - if args.startswith('(self:'): - i = args.find(',') - if i >= 0: - args = '(self' + args[i:] - else: - args = '(self)' - i = args.find(')') - if i > 0: - args = args[:i + 1] - - except: - pass - return args, doc diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/core/utils/__init__.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/core/utils/__init__.py deleted file mode 100644 index f2678b321c295bcceaef945111ac3524be19d6e4..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/core/utils/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .misc import add_prefix - -__all__ = ['add_prefix'] diff --git a/spaces/TabPFN/TabPFNEvaluation/TabPFN/scripts/baseline_prediction_interface.py b/spaces/TabPFN/TabPFNEvaluation/TabPFN/scripts/baseline_prediction_interface.py deleted file mode 100644 index 298a046c4c3c39cbddbcdc5ee47c68606c706b2c..0000000000000000000000000000000000000000 --- a/spaces/TabPFN/TabPFNEvaluation/TabPFN/scripts/baseline_prediction_interface.py +++ /dev/null @@ -1,38 +0,0 @@ -import tqdm -import numpy as np - -def baseline_predict(metric_function, eval_xs, eval_ys, categorical_feats, metric_used=None, eval_pos=2, max_time=300, **kwargs): - """ - Baseline prediction interface. 
- :param metric_function: - :param eval_xs: - :param eval_ys: - :param categorical_feats: - :param metric_used: - :param eval_pos: - :param max_time: Scheduled maximum time - :param kwargs: - :return: list [np.array(metrics), np.array(outputs), best_configs] or [None, None, None] if failed - """ - - metrics = [] - outputs = [] - best_configs = [] - eval_splits = list(zip(eval_xs.transpose(0, 1), eval_ys.transpose(0, 1))) - for eval_x, eval_y in tqdm.tqdm(eval_splits, desc='Calculating splits'+str(metric_function)+' '+str(eval_pos)): - try: - metric, output, best_config = metric_function(eval_x[:eval_pos], - eval_y[:eval_pos], - eval_x[eval_pos:], - eval_y[eval_pos:], - categorical_feats, - metric_used=metric_used - , max_time=max_time) - metrics += [metric] - outputs += [output] - best_configs += [best_config] - return np.array(metrics), np.array(outputs), best_configs - except Exception as e: - print(f'There was an exception in {metric_function}') - print(e) - return None, None, None \ No newline at end of file diff --git a/spaces/TaliaKorobkin/AIPairProgramming1/README.md b/spaces/TaliaKorobkin/AIPairProgramming1/README.md deleted file mode 100644 index d5ec3ae88581ea00cedacb4b5c4b8fd0ebdba600..0000000000000000000000000000000000000000 --- a/spaces/TaliaKorobkin/AIPairProgramming1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: AIPairProgramming1 -emoji: 🏆 -colorFrom: green -colorTo: green -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/plugin.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/plugin.py deleted file mode 100644 index 7b722d58db0f35c3f6621d02876cefc74e64384a..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/plugin.py +++ /dev/null @@ -1,88 +0,0 @@ -""" - pygments.plugin - ~~~~~~~~~~~~~~~ - - Pygments plugin interface. By default, this tries to use - ``importlib.metadata``, which is in the Python standard - library since Python 3.8, or its ``importlib_metadata`` - backport for earlier versions of Python. It falls back on - ``pkg_resources`` if not found. Finally, if ``pkg_resources`` - is not found either, no plugins are loaded at all. - - lexer plugins:: - - [pygments.lexers] - yourlexer = yourmodule:YourLexer - - formatter plugins:: - - [pygments.formatters] - yourformatter = yourformatter:YourFormatter - /.ext = yourformatter:YourFormatter - - As you can see, you can define extensions for the formatter - with a leading slash. - - syntax plugins:: - - [pygments.styles] - yourstyle = yourstyle:YourStyle - - filter plugin:: - - [pygments.filter] - yourfilter = yourfilter:YourFilter - - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. 
-""" - -LEXER_ENTRY_POINT = 'pygments.lexers' -FORMATTER_ENTRY_POINT = 'pygments.formatters' -STYLE_ENTRY_POINT = 'pygments.styles' -FILTER_ENTRY_POINT = 'pygments.filters' - - -def iter_entry_points(group_name): - try: - from importlib.metadata import entry_points - except ImportError: - try: - from importlib_metadata import entry_points - except ImportError: - try: - from pip._vendor.pkg_resources import iter_entry_points - except (ImportError, OSError): - return [] - else: - return iter_entry_points(group_name) - groups = entry_points() - if hasattr(groups, 'select'): - # New interface in Python 3.10 and newer versions of the - # importlib_metadata backport. - return groups.select(group=group_name) - else: - # Older interface, deprecated in Python 3.10 and recent - # importlib_metadata, but we need it in Python 3.8 and 3.9. - return groups.get(group_name, []) - - -def find_plugin_lexers(): - for entrypoint in iter_entry_points(LEXER_ENTRY_POINT): - yield entrypoint.load() - - -def find_plugin_formatters(): - for entrypoint in iter_entry_points(FORMATTER_ENTRY_POINT): - yield entrypoint.name, entrypoint.load() - - -def find_plugin_styles(): - for entrypoint in iter_entry_points(STYLE_ENTRY_POINT): - yield entrypoint.name, entrypoint.load() - - -def find_plugin_filters(): - for entrypoint in iter_entry_points(FILTER_ENTRY_POINT): - yield entrypoint.name, entrypoint.load() diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/logging.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/logging.py deleted file mode 100644 index 91368dda78aad590837aa12023dee67e224709ba..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/logging.py +++ /dev/null @@ -1,289 +0,0 @@ -import logging -from datetime import datetime -from logging import Handler, LogRecord -from pathlib import Path -from types import ModuleType -from typing import ClassVar, Iterable, List, Optional, Type, Union - -from pip._vendor.rich._null_file import NullFile - -from . import get_console -from ._log_render import FormatTimeCallable, LogRender -from .console import Console, ConsoleRenderable -from .highlighter import Highlighter, ReprHighlighter -from .text import Text -from .traceback import Traceback - - -class RichHandler(Handler): - """A logging handler that renders output with Rich. The time / level / message and file are displayed in columns. - The level is color coded, and the message is syntax highlighted. - - Note: - Be careful when enabling console markup in log messages if you have configured logging for libraries not - under your control. If a dependency writes messages containing square brackets, it may not produce the intended output. - - Args: - level (Union[int, str], optional): Log level. Defaults to logging.NOTSET. - console (:class:`~rich.console.Console`, optional): Optional console instance to write logs. - Default will use a global console instance writing to stdout. - show_time (bool, optional): Show a column for the time. Defaults to True. - omit_repeated_times (bool, optional): Omit repetition of the same time. Defaults to True. - show_level (bool, optional): Show a column for the level. Defaults to True. - show_path (bool, optional): Show the path to the original log call. Defaults to True. - enable_link_path (bool, optional): Enable terminal link of path column to file. Defaults to True. 
- highlighter (Highlighter, optional): Highlighter to style log messages, or None to use ReprHighlighter. Defaults to None. - markup (bool, optional): Enable console markup in log messages. Defaults to False. - rich_tracebacks (bool, optional): Enable rich tracebacks with syntax highlighting and formatting. Defaults to False. - tracebacks_width (Optional[int], optional): Number of characters used to render tracebacks, or None for full width. Defaults to None. - tracebacks_extra_lines (int, optional): Additional lines of code to render tracebacks, or None for full width. Defaults to None. - tracebacks_theme (str, optional): Override pygments theme used in traceback. - tracebacks_word_wrap (bool, optional): Enable word wrapping of long tracebacks lines. Defaults to True. - tracebacks_show_locals (bool, optional): Enable display of locals in tracebacks. Defaults to False. - tracebacks_suppress (Sequence[Union[str, ModuleType]]): Optional sequence of modules or paths to exclude from traceback. - locals_max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation. - Defaults to 10. - locals_max_string (int, optional): Maximum length of string before truncating, or None to disable. Defaults to 80. - log_time_format (Union[str, TimeFormatterCallable], optional): If ``log_time`` is enabled, either string for strftime or callable that formats the time. Defaults to "[%x %X] ". - keywords (List[str], optional): List of words to highlight instead of ``RichHandler.KEYWORDS``. - """ - - KEYWORDS: ClassVar[Optional[List[str]]] = [ - "GET", - "POST", - "HEAD", - "PUT", - "DELETE", - "OPTIONS", - "TRACE", - "PATCH", - ] - HIGHLIGHTER_CLASS: ClassVar[Type[Highlighter]] = ReprHighlighter - - def __init__( - self, - level: Union[int, str] = logging.NOTSET, - console: Optional[Console] = None, - *, - show_time: bool = True, - omit_repeated_times: bool = True, - show_level: bool = True, - show_path: bool = True, - enable_link_path: bool = True, - highlighter: Optional[Highlighter] = None, - markup: bool = False, - rich_tracebacks: bool = False, - tracebacks_width: Optional[int] = None, - tracebacks_extra_lines: int = 3, - tracebacks_theme: Optional[str] = None, - tracebacks_word_wrap: bool = True, - tracebacks_show_locals: bool = False, - tracebacks_suppress: Iterable[Union[str, ModuleType]] = (), - locals_max_length: int = 10, - locals_max_string: int = 80, - log_time_format: Union[str, FormatTimeCallable] = "[%x %X]", - keywords: Optional[List[str]] = None, - ) -> None: - super().__init__(level=level) - self.console = console or get_console() - self.highlighter = highlighter or self.HIGHLIGHTER_CLASS() - self._log_render = LogRender( - show_time=show_time, - show_level=show_level, - show_path=show_path, - time_format=log_time_format, - omit_repeated_times=omit_repeated_times, - level_width=None, - ) - self.enable_link_path = enable_link_path - self.markup = markup - self.rich_tracebacks = rich_tracebacks - self.tracebacks_width = tracebacks_width - self.tracebacks_extra_lines = tracebacks_extra_lines - self.tracebacks_theme = tracebacks_theme - self.tracebacks_word_wrap = tracebacks_word_wrap - self.tracebacks_show_locals = tracebacks_show_locals - self.tracebacks_suppress = tracebacks_suppress - self.locals_max_length = locals_max_length - self.locals_max_string = locals_max_string - self.keywords = keywords - - def get_level_text(self, record: LogRecord) -> Text: - """Get the level name from the record. - - Args: - record (LogRecord): LogRecord instance. 
- - Returns: - Text: A tuple of the style and level name. - """ - level_name = record.levelname - level_text = Text.styled( - level_name.ljust(8), f"logging.level.{level_name.lower()}" - ) - return level_text - - def emit(self, record: LogRecord) -> None: - """Invoked by logging.""" - message = self.format(record) - traceback = None - if ( - self.rich_tracebacks - and record.exc_info - and record.exc_info != (None, None, None) - ): - exc_type, exc_value, exc_traceback = record.exc_info - assert exc_type is not None - assert exc_value is not None - traceback = Traceback.from_exception( - exc_type, - exc_value, - exc_traceback, - width=self.tracebacks_width, - extra_lines=self.tracebacks_extra_lines, - theme=self.tracebacks_theme, - word_wrap=self.tracebacks_word_wrap, - show_locals=self.tracebacks_show_locals, - locals_max_length=self.locals_max_length, - locals_max_string=self.locals_max_string, - suppress=self.tracebacks_suppress, - ) - message = record.getMessage() - if self.formatter: - record.message = record.getMessage() - formatter = self.formatter - if hasattr(formatter, "usesTime") and formatter.usesTime(): - record.asctime = formatter.formatTime(record, formatter.datefmt) - message = formatter.formatMessage(record) - - message_renderable = self.render_message(record, message) - log_renderable = self.render( - record=record, traceback=traceback, message_renderable=message_renderable - ) - if isinstance(self.console.file, NullFile): - # Handles pythonw, where stdout/stderr are null, and we return NullFile - # instance from Console.file. In this case, we still want to make a log record - # even though we won't be writing anything to a file. - self.handleError(record) - else: - try: - self.console.print(log_renderable) - except Exception: - self.handleError(record) - - def render_message(self, record: LogRecord, message: str) -> "ConsoleRenderable": - """Render message text in to Text. - - Args: - record (LogRecord): logging Record. - message (str): String containing log message. - - Returns: - ConsoleRenderable: Renderable to display log message. - """ - use_markup = getattr(record, "markup", self.markup) - message_text = Text.from_markup(message) if use_markup else Text(message) - - highlighter = getattr(record, "highlighter", self.highlighter) - if highlighter: - message_text = highlighter(message_text) - - if self.keywords is None: - self.keywords = self.KEYWORDS - - if self.keywords: - message_text.highlight_words(self.keywords, "logging.keyword") - - return message_text - - def render( - self, - *, - record: LogRecord, - traceback: Optional[Traceback], - message_renderable: "ConsoleRenderable", - ) -> "ConsoleRenderable": - """Render log for display. - - Args: - record (LogRecord): logging Record. - traceback (Optional[Traceback]): Traceback instance or None for no Traceback. - message_renderable (ConsoleRenderable): Renderable (typically Text) containing log message contents. - - Returns: - ConsoleRenderable: Renderable to display log. 
- """ - path = Path(record.pathname).name - level = self.get_level_text(record) - time_format = None if self.formatter is None else self.formatter.datefmt - log_time = datetime.fromtimestamp(record.created) - - log_renderable = self._log_render( - self.console, - [message_renderable] if not traceback else [message_renderable, traceback], - log_time=log_time, - time_format=time_format, - level=level, - path=path, - line_no=record.lineno, - link_path=record.pathname if self.enable_link_path else None, - ) - return log_renderable - - -if __name__ == "__main__": # pragma: no cover - from time import sleep - - FORMAT = "%(message)s" - # FORMAT = "%(asctime)-15s - %(levelname)s - %(message)s" - logging.basicConfig( - level="NOTSET", - format=FORMAT, - datefmt="[%X]", - handlers=[RichHandler(rich_tracebacks=True, tracebacks_show_locals=True)], - ) - log = logging.getLogger("rich") - - log.info("Server starting...") - log.info("Listening on http://127.0.0.1:8080") - sleep(1) - - log.info("GET /index.html 200 1298") - log.info("GET /imgs/backgrounds/back1.jpg 200 54386") - log.info("GET /css/styles.css 200 54386") - log.warning("GET /favicon.ico 404 242") - sleep(1) - - log.debug( - "JSONRPC request\n--> %r\n<-- %r", - { - "version": "1.1", - "method": "confirmFruitPurchase", - "params": [["apple", "orange", "mangoes", "pomelo"], 1.123], - "id": "194521489", - }, - {"version": "1.1", "result": True, "error": None, "id": "194521489"}, - ) - log.debug( - "Loading configuration file /adasd/asdasd/qeqwe/qwrqwrqwr/sdgsdgsdg/werwerwer/dfgerert/ertertert/ertetert/werwerwer" - ) - log.error("Unable to find 'pomelo' in database!") - log.info("POST /jsonrpc/ 200 65532") - log.info("POST /admin/ 401 42234") - log.warning("password was rejected for admin site.") - - def divide() -> None: - number = 1 - divisor = 0 - foos = ["foo"] * 100 - log.debug("in divide") - try: - number / divisor - except: - log.exception("An error of some kind occurred!") - - divide() - sleep(1) - log.critical("Out of memory!") - log.info("Server exited with code=-1") - log.info("[bold]EXITING...[/bold]", extra=dict(markup=True)) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/platformdirs/__init__.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/platformdirs/__init__.py deleted file mode 100644 index aef2821b83f6ac1730d063d8ce939134cc2105a7..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/platformdirs/__init__.py +++ /dev/null @@ -1,342 +0,0 @@ -""" -Utilities for determining application-specific dirs. See for details and -usage. 
-""" -from __future__ import annotations - -import os -import sys -from pathlib import Path - -if sys.version_info >= (3, 8): # pragma: no cover (py38+) - from typing import Literal -else: # pragma: no cover (py38+) - from ..typing_extensions import Literal - -from .api import PlatformDirsABC -from .version import __version__ -from .version import __version_tuple__ as __version_info__ - - -def _set_platform_dir_class() -> type[PlatformDirsABC]: - if sys.platform == "win32": - from .windows import Windows as Result - elif sys.platform == "darwin": - from .macos import MacOS as Result - else: - from .unix import Unix as Result - - if os.getenv("ANDROID_DATA") == "/data" and os.getenv("ANDROID_ROOT") == "/system": - - if os.getenv("SHELL") or os.getenv("PREFIX"): - return Result - - from .android import _android_folder - - if _android_folder() is not None: - from .android import Android - - return Android # return to avoid redefinition of result - - return Result - - -PlatformDirs = _set_platform_dir_class() #: Currently active platform -AppDirs = PlatformDirs #: Backwards compatibility with appdirs - - -def user_data_dir( - appname: str | None = None, - appauthor: str | None | Literal[False] = None, - version: str | None = None, - roaming: bool = False, -) -> str: - """ - :param appname: See `appname `. - :param appauthor: See `appauthor `. - :param version: See `version `. - :param roaming: See `roaming `. - :returns: data directory tied to the user - """ - return PlatformDirs(appname=appname, appauthor=appauthor, version=version, roaming=roaming).user_data_dir - - -def site_data_dir( - appname: str | None = None, - appauthor: str | None | Literal[False] = None, - version: str | None = None, - multipath: bool = False, -) -> str: - """ - :param appname: See `appname `. - :param appauthor: See `appauthor `. - :param version: See `version `. - :param multipath: See `roaming `. - :returns: data directory shared by users - """ - return PlatformDirs(appname=appname, appauthor=appauthor, version=version, multipath=multipath).site_data_dir - - -def user_config_dir( - appname: str | None = None, - appauthor: str | None | Literal[False] = None, - version: str | None = None, - roaming: bool = False, -) -> str: - """ - :param appname: See `appname `. - :param appauthor: See `appauthor `. - :param version: See `version `. - :param roaming: See `roaming `. - :returns: config directory tied to the user - """ - return PlatformDirs(appname=appname, appauthor=appauthor, version=version, roaming=roaming).user_config_dir - - -def site_config_dir( - appname: str | None = None, - appauthor: str | None | Literal[False] = None, - version: str | None = None, - multipath: bool = False, -) -> str: - """ - :param appname: See `appname `. - :param appauthor: See `appauthor `. - :param version: See `version `. - :param multipath: See `roaming `. - :returns: config directory shared by the users - """ - return PlatformDirs(appname=appname, appauthor=appauthor, version=version, multipath=multipath).site_config_dir - - -def user_cache_dir( - appname: str | None = None, - appauthor: str | None | Literal[False] = None, - version: str | None = None, - opinion: bool = True, -) -> str: - """ - :param appname: See `appname `. - :param appauthor: See `appauthor `. - :param version: See `version `. - :param opinion: See `roaming `. 
- :returns: cache directory tied to the user - """ - return PlatformDirs(appname=appname, appauthor=appauthor, version=version, opinion=opinion).user_cache_dir - - -def user_state_dir( - appname: str | None = None, - appauthor: str | None | Literal[False] = None, - version: str | None = None, - roaming: bool = False, -) -> str: - """ - :param appname: See `appname `. - :param appauthor: See `appauthor `. - :param version: See `version `. - :param roaming: See `roaming `. - :returns: state directory tied to the user - """ - return PlatformDirs(appname=appname, appauthor=appauthor, version=version, roaming=roaming).user_state_dir - - -def user_log_dir( - appname: str | None = None, - appauthor: str | None | Literal[False] = None, - version: str | None = None, - opinion: bool = True, -) -> str: - """ - :param appname: See `appname `. - :param appauthor: See `appauthor `. - :param version: See `version `. - :param opinion: See `roaming `. - :returns: log directory tied to the user - """ - return PlatformDirs(appname=appname, appauthor=appauthor, version=version, opinion=opinion).user_log_dir - - -def user_documents_dir() -> str: - """ - :returns: documents directory tied to the user - """ - return PlatformDirs().user_documents_dir - - -def user_runtime_dir( - appname: str | None = None, - appauthor: str | None | Literal[False] = None, - version: str | None = None, - opinion: bool = True, -) -> str: - """ - :param appname: See `appname `. - :param appauthor: See `appauthor `. - :param version: See `version `. - :param opinion: See `opinion `. - :returns: runtime directory tied to the user - """ - return PlatformDirs(appname=appname, appauthor=appauthor, version=version, opinion=opinion).user_runtime_dir - - -def user_data_path( - appname: str | None = None, - appauthor: str | None | Literal[False] = None, - version: str | None = None, - roaming: bool = False, -) -> Path: - """ - :param appname: See `appname `. - :param appauthor: See `appauthor `. - :param version: See `version `. - :param roaming: See `roaming `. - :returns: data path tied to the user - """ - return PlatformDirs(appname=appname, appauthor=appauthor, version=version, roaming=roaming).user_data_path - - -def site_data_path( - appname: str | None = None, - appauthor: str | None | Literal[False] = None, - version: str | None = None, - multipath: bool = False, -) -> Path: - """ - :param appname: See `appname `. - :param appauthor: See `appauthor `. - :param version: See `version `. - :param multipath: See `multipath `. - :returns: data path shared by users - """ - return PlatformDirs(appname=appname, appauthor=appauthor, version=version, multipath=multipath).site_data_path - - -def user_config_path( - appname: str | None = None, - appauthor: str | None | Literal[False] = None, - version: str | None = None, - roaming: bool = False, -) -> Path: - """ - :param appname: See `appname `. - :param appauthor: See `appauthor `. - :param version: See `version `. - :param roaming: See `roaming `. - :returns: config path tied to the user - """ - return PlatformDirs(appname=appname, appauthor=appauthor, version=version, roaming=roaming).user_config_path - - -def site_config_path( - appname: str | None = None, - appauthor: str | None | Literal[False] = None, - version: str | None = None, - multipath: bool = False, -) -> Path: - """ - :param appname: See `appname `. - :param appauthor: See `appauthor `. - :param version: See `version `. - :param multipath: See `roaming `. 
- :returns: config path shared by the users - """ - return PlatformDirs(appname=appname, appauthor=appauthor, version=version, multipath=multipath).site_config_path - - -def user_cache_path( - appname: str | None = None, - appauthor: str | None | Literal[False] = None, - version: str | None = None, - opinion: bool = True, -) -> Path: - """ - :param appname: See `appname `. - :param appauthor: See `appauthor `. - :param version: See `version `. - :param opinion: See `roaming `. - :returns: cache path tied to the user - """ - return PlatformDirs(appname=appname, appauthor=appauthor, version=version, opinion=opinion).user_cache_path - - -def user_state_path( - appname: str | None = None, - appauthor: str | None | Literal[False] = None, - version: str | None = None, - roaming: bool = False, -) -> Path: - """ - :param appname: See `appname `. - :param appauthor: See `appauthor `. - :param version: See `version `. - :param roaming: See `roaming `. - :returns: state path tied to the user - """ - return PlatformDirs(appname=appname, appauthor=appauthor, version=version, roaming=roaming).user_state_path - - -def user_log_path( - appname: str | None = None, - appauthor: str | None | Literal[False] = None, - version: str | None = None, - opinion: bool = True, -) -> Path: - """ - :param appname: See `appname `. - :param appauthor: See `appauthor `. - :param version: See `version `. - :param opinion: See `roaming `. - :returns: log path tied to the user - """ - return PlatformDirs(appname=appname, appauthor=appauthor, version=version, opinion=opinion).user_log_path - - -def user_documents_path() -> Path: - """ - :returns: documents path tied to the user - """ - return PlatformDirs().user_documents_path - - -def user_runtime_path( - appname: str | None = None, - appauthor: str | None | Literal[False] = None, - version: str | None = None, - opinion: bool = True, -) -> Path: - """ - :param appname: See `appname `. - :param appauthor: See `appauthor `. - :param version: See `version `. - :param opinion: See `opinion `. 
- :returns: runtime path tied to the user - """ - return PlatformDirs(appname=appname, appauthor=appauthor, version=version, opinion=opinion).user_runtime_path - - -__all__ = [ - "__version__", - "__version_info__", - "PlatformDirs", - "AppDirs", - "PlatformDirsABC", - "user_data_dir", - "user_config_dir", - "user_cache_dir", - "user_state_dir", - "user_log_dir", - "user_documents_dir", - "user_runtime_dir", - "site_data_dir", - "site_config_dir", - "user_data_path", - "user_config_path", - "user_cache_path", - "user_state_path", - "user_log_path", - "user_documents_path", - "user_runtime_path", - "site_data_path", - "site_config_path", -] diff --git a/spaces/Tao0000/stabilityai-stable-diffusion-2-1/README.md b/spaces/Tao0000/stabilityai-stable-diffusion-2-1/README.md deleted file mode 100644 index 9751fb563630d9b1066a4ee5c350eb9b23d1653c..0000000000000000000000000000000000000000 --- a/spaces/Tao0000/stabilityai-stable-diffusion-2-1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Stabilityai Stable Diffusion 2 1 -emoji: 💩 -colorFrom: red -colorTo: green -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/TeamTonic/hallucination-test/README.md b/spaces/TeamTonic/hallucination-test/README.md deleted file mode 100644 index be709441f48af9830326715c1e28f49a8a064b33..0000000000000000000000000000000000000000 --- a/spaces/TeamTonic/hallucination-test/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Tonic's Hallucination Space -emoji: 🧠🤯🌈 -colorFrom: red -colorTo: green -sdk: gradio -sdk_version: 4.1.1 -app_file: app.py -pinned: true -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/TencentARC/MasaCtrl/style.css b/spaces/TencentARC/MasaCtrl/style.css deleted file mode 100644 index 99b3000135b9552cf9f80f63e6318fafed44e867..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/MasaCtrl/style.css +++ /dev/null @@ -1,3 +0,0 @@ -h1 { - text-align: center; - } \ No newline at end of file diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/cityscapes_panoptic.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/cityscapes_panoptic.py deleted file mode 100644 index 48c136f1623261b079591065fec7c7fc38165076..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/cityscapes_panoptic.py +++ /dev/null @@ -1,187 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import json -import logging -import os - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.data.datasets.builtin_meta import CITYSCAPES_CATEGORIES -from detectron2.utils.file_io import PathManager - -""" -This file contains functions to register the Cityscapes panoptic dataset to the DatasetCatalog. 
-""" - - -logger = logging.getLogger(__name__) - - -def get_cityscapes_panoptic_files(image_dir, gt_dir, json_info): - files = [] - # scan through the directory - cities = PathManager.ls(image_dir) - logger.info(f"{len(cities)} cities found in '{image_dir}'.") - image_dict = {} - for city in cities: - city_img_dir = os.path.join(image_dir, city) - for basename in PathManager.ls(city_img_dir): - image_file = os.path.join(city_img_dir, basename) - - suffix = "_leftImg8bit.png" - assert basename.endswith(suffix), basename - basename = os.path.basename(basename)[: -len(suffix)] - - image_dict[basename] = image_file - - for ann in json_info["annotations"]: - image_file = image_dict.get(ann["image_id"], None) - assert image_file is not None, "No image {} found for annotation {}".format( - ann["image_id"], ann["file_name"] - ) - label_file = os.path.join(gt_dir, ann["file_name"]) - segments_info = ann["segments_info"] - - files.append((image_file, label_file, segments_info)) - - assert len(files), "No images found in {}".format(image_dir) - assert PathManager.isfile(files[0][0]), files[0][0] - assert PathManager.isfile(files[0][1]), files[0][1] - return files - - -def load_cityscapes_panoptic(image_dir, gt_dir, gt_json, meta): - """ - Args: - image_dir (str): path to the raw dataset. e.g., "~/cityscapes/leftImg8bit/train". - gt_dir (str): path to the raw annotations. e.g., - "~/cityscapes/gtFine/cityscapes_panoptic_train". - gt_json (str): path to the json file. e.g., - "~/cityscapes/gtFine/cityscapes_panoptic_train.json". - meta (dict): dictionary containing "thing_dataset_id_to_contiguous_id" - and "stuff_dataset_id_to_contiguous_id" to map category ids to - contiguous ids for training. - - Returns: - list[dict]: a list of dicts in Detectron2 standard format. (See - `Using Custom Datasets `_ ) - """ - - def _convert_category_id(segment_info, meta): - if segment_info["category_id"] in meta["thing_dataset_id_to_contiguous_id"]: - segment_info["category_id"] = meta["thing_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - else: - segment_info["category_id"] = meta["stuff_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - return segment_info - - assert os.path.exists( - gt_json - ), "Please run `python cityscapesscripts/preparation/createPanopticImgs.py` to generate label files." # noqa - with open(gt_json) as f: - json_info = json.load(f) - files = get_cityscapes_panoptic_files(image_dir, gt_dir, json_info) - ret = [] - for image_file, label_file, segments_info in files: - sem_label_file = ( - image_file.replace("leftImg8bit", "gtFine").split(".")[0] + "_labelTrainIds.png" - ) - segments_info = [_convert_category_id(x, meta) for x in segments_info] - ret.append( - { - "file_name": image_file, - "image_id": "_".join( - os.path.splitext(os.path.basename(image_file))[0].split("_")[:3] - ), - "sem_seg_file_name": sem_label_file, - "pan_seg_file_name": label_file, - "segments_info": segments_info, - } - ) - assert len(ret), f"No images found in {image_dir}!" 
- assert PathManager.isfile( - ret[0]["sem_seg_file_name"] - ), "Please generate labelTrainIds.png with cityscapesscripts/preparation/createTrainIdLabelImgs.py" # noqa - assert PathManager.isfile( - ret[0]["pan_seg_file_name"] - ), "Please generate panoptic annotation with python cityscapesscripts/preparation/createPanopticImgs.py" # noqa - return ret - - -_RAW_CITYSCAPES_PANOPTIC_SPLITS = { - "cityscapes_fine_panoptic_train": ( - "cityscapes/leftImg8bit/train", - "cityscapes/gtFine/cityscapes_panoptic_train", - "cityscapes/gtFine/cityscapes_panoptic_train.json", - ), - "cityscapes_fine_panoptic_val": ( - "cityscapes/leftImg8bit/val", - "cityscapes/gtFine/cityscapes_panoptic_val", - "cityscapes/gtFine/cityscapes_panoptic_val.json", - ), - # "cityscapes_fine_panoptic_test": not supported yet -} - - -def register_all_cityscapes_panoptic(root): - meta = {} - # The following metadata maps contiguous id from [0, #thing categories + - # #stuff categories) to their names and colors. We have to replica of the - # same name and color under "thing_*" and "stuff_*" because the current - # visualization function in D2 handles thing and class classes differently - # due to some heuristic used in Panoptic FPN. We keep the same naming to - # enable reusing existing visualization functions. - thing_classes = [k["name"] for k in CITYSCAPES_CATEGORIES] - thing_colors = [k["color"] for k in CITYSCAPES_CATEGORIES] - stuff_classes = [k["name"] for k in CITYSCAPES_CATEGORIES] - stuff_colors = [k["color"] for k in CITYSCAPES_CATEGORIES] - - meta["thing_classes"] = thing_classes - meta["thing_colors"] = thing_colors - meta["stuff_classes"] = stuff_classes - meta["stuff_colors"] = stuff_colors - - # There are three types of ids in cityscapes panoptic segmentation: - # (1) category id: like semantic segmentation, it is the class id for each - # pixel. Since there are some classes not used in evaluation, the category - # id is not always contiguous and thus we have two set of category ids: - # - original category id: category id in the original dataset, mainly - # used for evaluation. - # - contiguous category id: [0, #classes), in order to train the classifier - # (2) instance id: this id is used to differentiate different instances from - # the same category. For "stuff" classes, the instance id is always 0; for - # "thing" classes, the instance id starts from 1 and 0 is reserved for - # ignored instances (e.g. crowd annotation). - # (3) panoptic id: this is the compact id that encode both category and - # instance id by: category_id * 1000 + instance_id. 
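-    # A small illustration of the encoding described above (not part of the
-    # original file): a "thing" pixel with original category id 26 and
-    # instance id 3 is stored as the single panoptic id 26 * 1000 + 3 = 26003,
-    # which can be split back apart with divmod:
-    #   category_id, instance_id = divmod(26003, 1000)  # -> (26, 3)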
- thing_dataset_id_to_contiguous_id = {} - stuff_dataset_id_to_contiguous_id = {} - - for k in CITYSCAPES_CATEGORIES: - if k["isthing"] == 1: - thing_dataset_id_to_contiguous_id[k["id"]] = k["trainId"] - else: - stuff_dataset_id_to_contiguous_id[k["id"]] = k["trainId"] - - meta["thing_dataset_id_to_contiguous_id"] = thing_dataset_id_to_contiguous_id - meta["stuff_dataset_id_to_contiguous_id"] = stuff_dataset_id_to_contiguous_id - - for key, (image_dir, gt_dir, gt_json) in _RAW_CITYSCAPES_PANOPTIC_SPLITS.items(): - image_dir = os.path.join(root, image_dir) - gt_dir = os.path.join(root, gt_dir) - gt_json = os.path.join(root, gt_json) - - DatasetCatalog.register( - key, lambda x=image_dir, y=gt_dir, z=gt_json: load_cityscapes_panoptic(x, y, z, meta) - ) - MetadataCatalog.get(key).set( - panoptic_root=gt_dir, - image_root=image_dir, - panoptic_json=gt_json, - gt_dir=gt_dir.replace("cityscapes_panoptic_", ""), - evaluator_type="cityscapes_panoptic_seg", - ignore_label=255, - label_divisor=1000, - **meta, - ) diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/layers/test_blocks.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/layers/test_blocks.py deleted file mode 100644 index 5a0488adbfcf0c7eca08616f43ebf695acad4b7e..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/layers/test_blocks.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import unittest -import torch -from torch import nn - -from detectron2.layers import ASPP, DepthwiseSeparableConv2d, FrozenBatchNorm2d -from detectron2.modeling.backbone.resnet import BasicStem, ResNet - - -""" -Test for misc layers. -""" - - -class TestBlocks(unittest.TestCase): - def test_separable_conv(self): - DepthwiseSeparableConv2d(3, 10, norm1="BN", activation1=nn.PReLU()) - - def test_aspp(self): - m = ASPP(3, 10, [2, 3, 4], norm="", activation=nn.PReLU()) - self.assertIsNot(m.convs[0].activation.weight, m.convs[1].activation.weight) - self.assertIsNot(m.convs[0].activation.weight, m.project.activation.weight) - - @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available") - def test_frozen_batchnorm_fp16(self): - from torch.cuda.amp import autocast - - C = 10 - input = torch.rand(1, C, 10, 10).cuda() - m = FrozenBatchNorm2d(C).cuda() - with autocast(): - output = m(input.half()) - self.assertEqual(output.dtype, torch.float16) - - # requires_grad triggers a different codepath - input.requires_grad_() - with autocast(): - output = m(input.half()) - self.assertEqual(output.dtype, torch.float16) - - def test_resnet_unused_stages(self): - resnet = ResNet(BasicStem(), ResNet.make_default_stages(18), out_features=["res2"]) - self.assertTrue(hasattr(resnet, "res2")) - self.assertFalse(hasattr(resnet, "res3")) - self.assertFalse(hasattr(resnet, "res5")) - - resnet = ResNet(BasicStem(), ResNet.make_default_stages(18), out_features=["res2", "res5"]) - self.assertTrue(hasattr(resnet, "res2")) - self.assertTrue(hasattr(resnet, "res4")) - self.assertTrue(hasattr(resnet, "res5")) diff --git a/spaces/Tetel/chat/SydneyGPT/__init__.py b/spaces/Tetel/chat/SydneyGPT/__init__.py deleted file mode 100644 index 0895f233f1ef5a7384ab8ac1a9f2c6d0f39b4b96..0000000000000000000000000000000000000000 --- a/spaces/Tetel/chat/SydneyGPT/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -import os, sys -sys.path.append(os.path.dirname(os.path.realpath(__file__))) diff --git 
a/spaces/TheBritishLibrary/British-Library-books-genre-classifier-v2/README.md b/spaces/TheBritishLibrary/British-Library-books-genre-classifier-v2/README.md deleted file mode 100644 index 05da46bc461e16c5b455bf4b0e661ab9a660b8d1..0000000000000000000000000000000000000000 --- a/spaces/TheBritishLibrary/British-Library-books-genre-classifier-v2/README.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -title: British Library Books Genre Classifier V2 -emoji: 📚 -colorFrom: red -colorTo: black -sdk: gradio -sdk_version: 3.0.24 -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/VIPLab/Track-Anything/tools/__init__.py b/spaces/VIPLab/Track-Anything/tools/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/VickyKira/NASAGPT/client/css/dropdown.css b/spaces/VickyKira/NASAGPT/client/css/dropdown.css deleted file mode 100644 index 302e911e84d171c55384732f759a79ce195abca5..0000000000000000000000000000000000000000 --- a/spaces/VickyKira/NASAGPT/client/css/dropdown.css +++ /dev/null @@ -1,10 +0,0 @@ -.dropdown { - border: 1px solid var(--conversations); -} - -@media screen and (max-width: 990px) { - .dropdown { - padding: 4px 8px; - font-size: 0.75rem; - } -} diff --git a/spaces/VoiceHero69/changer/webui/modules/implementations/rvc/infer_pack/modules/F0Predictor/DioF0Predictor.py b/spaces/VoiceHero69/changer/webui/modules/implementations/rvc/infer_pack/modules/F0Predictor/DioF0Predictor.py deleted file mode 100644 index eb60d8830714338448be009d1075e3594337db15..0000000000000000000000000000000000000000 --- a/spaces/VoiceHero69/changer/webui/modules/implementations/rvc/infer_pack/modules/F0Predictor/DioF0Predictor.py +++ /dev/null @@ -1,90 +0,0 @@ -from infer_pack.modules.F0Predictor.F0Predictor import F0Predictor -import pyworld -import numpy as np - - -class DioF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] 
= data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def resize_f0(self, x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * target_len, len(source)) / target_len, - np.arange(0, len(source)), - source, - ) - res = np.nan_to_num(target) - return res - - def compute_f0(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return self.interpolate_f0(self.resize_f0(f0, p_len))[0] - - def compute_f0_uv(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return self.interpolate_f0(self.resize_f0(f0, p_len)) diff --git a/spaces/Xenova/react-translator/assets/index-48a6f08c.css b/spaces/Xenova/react-translator/assets/index-48a6f08c.css deleted file mode 100644 index cc14a4f7bcebf97ddc0ce149e0239044abebc040..0000000000000000000000000000000000000000 --- a/spaces/Xenova/react-translator/assets/index-48a6f08c.css +++ /dev/null @@ -1 +0,0 @@ -#root{max-width:1280px;margin:0 auto;padding:2rem;text-align:center}.language-container{display:flex;gap:20px}.textbox-container{display:flex;justify-content:center;gap:20px;width:800px}.textbox-container>textarea{width:50%}.language-selector{width:50%}.language-selector>select{width:150px}.progress-container{position:relative;font-size:14px;color:#fff;background-color:#e9ecef;border:solid 1px;border-radius:8px;text-align:left;overflow:hidden}.progress-bar{padding:0 4px;z-index:0;top:0;width:1%;height:100%;overflow:hidden;background-color:#007bff;white-space:nowrap}.progress-text{z-index:2}.selector-container{display:flex;gap:20px}.progress-bars-container{padding:8px;height:140px}.container{margin:25px;display:flex;flex-direction:column;gap:10px}:root{font-family:Inter,system-ui,Avenir,Helvetica,Arial,sans-serif;line-height:1.5;font-weight:400;color:#213547;background-color:#fff;font-synthesis:none;text-rendering:optimizeLegibility;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;-webkit-text-size-adjust:100%}body{margin:0;display:flex;place-items:center;min-width:320px;min-height:100vh}h1{font-size:3.2em;line-height:1}h1,h2{margin:8px}select{padding:.3em;cursor:pointer}textarea{padding:.6em}button{padding:.6em 1.2em;cursor:pointer;font-weight:500}button[disabled]{cursor:not-allowed}select,textarea,button{border-radius:8px;border:1px solid transparent;font-size:1em;font-family:inherit;background-color:#f9f9f9;transition:border-color .25s}select:hover,textarea:hover,button:not([disabled]):hover{border-color:#646cff}select:focus,select:focus-visible,textarea:focus,textarea:focus-visible,button:focus,button:focus-visible{outline:4px auto -webkit-focus-ring-color} diff --git a/spaces/XzJosh/Jiaran-Bert-VITS2/mel_processing.py 
b/spaces/XzJosh/Jiaran-Bert-VITS2/mel_processing.py deleted file mode 100644 index 50435ecf88ef4fb6c1d47f3e6edd04c3ea7d3e80..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Jiaran-Bert-VITS2/mel_processing.py +++ /dev/null @@ -1,112 +0,0 @@ -import math -import os -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), 
int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/YazawaSunrise/so-vits-svc-LoveLive/vdecoder/hifigan/models.py b/spaces/YazawaSunrise/so-vits-svc-LoveLive/vdecoder/hifigan/models.py deleted file mode 100644 index 9747301f350bb269e62601017fe4633ce271b27e..0000000000000000000000000000000000000000 --- a/spaces/YazawaSunrise/so-vits-svc-LoveLive/vdecoder/hifigan/models.py +++ /dev/null @@ -1,503 +0,0 @@ -import os -import json -from .env import AttrDict -import numpy as np -import torch -import torch.nn.functional as F -import torch.nn as nn -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from .utils import init_weights, get_padding - -LRELU_SLOPE = 0.1 - - -def load_model(model_path, device='cuda'): - config_file = os.path.join(os.path.split(model_path)[0], 'config.json') - with open(config_file) as f: - data = f.read() - - global h - json_config = json.loads(data) - h = AttrDict(json_config) - - generator = Generator(h).to(device) - - cp_dict = torch.load(model_path) - generator.load_state_dict(cp_dict['generator']) - generator.eval() - generator.remove_weight_norm() - del cp_dict - return generator, h - - -class ResBlock1(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.h = h - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - xt = c2(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.h = h - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c(xt) - x = xt 
+ x - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -def padDiff(x): - return F.pad(F.pad(x, (0,0,-1,1), 'constant', 0) - x, (0,0,0,-1), 'constant', 0) - -class SineGen(torch.nn.Module): - """ Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__(self, samp_rate, harmonic_num=0, - sine_amp=0.1, noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - self.flag_for_pulse = flag_for_pulse - - def _f02uv(self, f0): - # generate uv signal - uv = (f0 > self.voiced_threshold).type(torch.float32) - return uv - - def _f02sine(self, f0_values): - """ f0_values: (batchsize, length, dim) - where dim indicates fundamental tone and overtones - """ - # convert to F0 in rad. The interger part n can be ignored - # because 2 * np.pi * n doesn't affect phase - rad_values = (f0_values / self.sampling_rate) % 1 - - # initial phase noise (no noise for fundamental component) - rand_ini = torch.rand(f0_values.shape[0], f0_values.shape[2], \ - device=f0_values.device) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - - # instantanouse phase sine[t] = sin(2*pi \sum_i=1 ^{t} rad) - if not self.flag_for_pulse: - # for normal case - - # To prevent torch.cumsum numerical overflow, - # it is necessary to add -1 whenever \sum_k=1^n rad_value_k > 1. - # Buffer tmp_over_one_idx indicates the time step to add -1. - # This will not change F0 of sine because (x-1) * 2*pi = x * 2*pi - tmp_over_one = torch.cumsum(rad_values, 1) % 1 - tmp_over_one_idx = (padDiff(tmp_over_one)) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - - sines = torch.sin(torch.cumsum(rad_values + cumsum_shift, dim=1) - * 2 * np.pi) - else: - # If necessary, make sure that the first time step of every - # voiced segments is sin(pi) or cos(0) - # This is used for pulse-train generation - - # identify the last time step in unvoiced segments - uv = self._f02uv(f0_values) - uv_1 = torch.roll(uv, shifts=-1, dims=1) - uv_1[:, -1, :] = 1 - u_loc = (uv < 1) * (uv_1 > 0) - - # get the instantanouse phase - tmp_cumsum = torch.cumsum(rad_values, dim=1) - # different batch needs to be processed differently - for idx in range(f0_values.shape[0]): - temp_sum = tmp_cumsum[idx, u_loc[idx, :, 0], :] - temp_sum[1:, :] = temp_sum[1:, :] - temp_sum[0:-1, :] - # stores the accumulation of i.phase within - # each voiced segments - tmp_cumsum[idx, :, :] = 0 - tmp_cumsum[idx, u_loc[idx, :, 0], :] = temp_sum - - # rad_values - tmp_cumsum: remove the accumulation of i.phase - # within the previous voiced segment. 
- i_phase = torch.cumsum(rad_values - tmp_cumsum, dim=1) - - # get the sines - sines = torch.cos(i_phase * 2 * np.pi) - return sines - - def forward(self, f0): - """ sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, - device=f0.device) - # fundamental component - fn = torch.multiply(f0, torch.FloatTensor([[range(1, self.harmonic_num + 2)]]).to(f0.device)) - - # generate sine waveforms - sine_waves = self._f02sine(fn) * self.sine_amp - - # generate uv signal - # uv = torch.ones(f0.shape) - # uv = uv * (f0 > self.voiced_threshold) - uv = self._f02uv(f0) - - # noise: for unvoiced should be similar to sine_amp - # std = self.sine_amp/3 -> max value ~ self.sine_amp - # . for voiced regions is self.noise_std - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - - # first: set the unvoiced part to 0 by uv - # then: additive noise - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """ SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__(self, sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - - # to produce sine waveforms - self.l_sin_gen = SineGen(sampling_rate, harmonic_num, - sine_amp, add_noise_std, voiced_threshod) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x): - """ - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - """ - # source for harmonic branch - sine_wavs, uv, _ = self.l_sin_gen(x) - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - - # source for noise branch, in the same shape as uv - noise = torch.randn_like(uv) * self.sine_amp / 3 - return sine_merge, noise, uv - - -class Generator(torch.nn.Module): - def __init__(self, h): - super(Generator, self).__init__() - self.h = h - - self.num_kernels = len(h["resblock_kernel_sizes"]) - self.num_upsamples = len(h["upsample_rates"]) - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(h["upsample_rates"])) - self.m_source = SourceModuleHnNSF( - sampling_rate=h["sampling_rate"], - harmonic_num=8) - self.noise_convs = nn.ModuleList() - self.conv_pre = weight_norm(Conv1d(h["inter_channels"], h["upsample_initial_channel"], 7, 1, padding=3)) - resblock = ResBlock1 if h["resblock"] == '1' else ResBlock2 - self.ups = nn.ModuleList() - for i, (u, k) in 
enumerate(zip(h["upsample_rates"], h["upsample_kernel_sizes"])): - c_cur = h["upsample_initial_channel"] // (2 ** (i + 1)) - self.ups.append(weight_norm( - ConvTranspose1d(h["upsample_initial_channel"] // (2 ** i), h["upsample_initial_channel"] // (2 ** (i + 1)), - k, u, padding=(k - u) // 2))) - if i + 1 < len(h["upsample_rates"]): # - stride_f0 = np.prod(h["upsample_rates"][i + 1:]) - self.noise_convs.append(Conv1d( - 1, c_cur, kernel_size=stride_f0 * 2, stride=stride_f0, padding=stride_f0 // 2)) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = h["upsample_initial_channel"] // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(h["resblock_kernel_sizes"], h["resblock_dilation_sizes"])): - self.resblocks.append(resblock(h, ch, k, d)) - - self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3)) - self.ups.apply(init_weights) - self.conv_post.apply(init_weights) - self.cond = nn.Conv1d(h['gin_channels'], h['upsample_initial_channel'], 1) - - def forward(self, x, f0, g=None): - # print(1,x.shape,f0.shape,f0[:, None].shape) - f0 = self.f0_upsamp(f0[:, None]).transpose(1, 2) # bs,n,t - # print(2,f0.shape) - har_source, noi_source, uv = self.m_source(f0) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - x = x + self.cond(g) - # print(124,x.shape,har_source.shape) - for i in range(self.num_upsamples): - x = F.leaky_relu(x, LRELU_SLOPE) - # print(3,x.shape) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - # print(4,x_source.shape,har_source.shape,x.shape) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, periods=None): - super(MultiPeriodDiscriminator, self).__init__() - self.periods = periods if periods is 
not None else [2, 3, 5, 7, 11] - self.discriminators = nn.ModuleList() - for period in self.periods: - self.discriminators.append(DiscriminatorP(period)) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 128, 15, 1, padding=7)), - norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)), - norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)), - norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiScaleDiscriminator(torch.nn.Module): - def __init__(self): - super(MultiScaleDiscriminator, self).__init__() - self.discriminators = nn.ModuleList([ - DiscriminatorS(use_spectral_norm=True), - DiscriminatorS(), - DiscriminatorS(), - ]) - self.meanpools = nn.ModuleList([ - AvgPool1d(4, 2, padding=2), - AvgPool1d(4, 2, padding=2) - ]) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - if i != 0: - y = self.meanpools[i - 1](y) - y_hat = self.meanpools[i - 1](y_hat) - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - r_loss = torch.mean((1 - dr) ** 2) - g_loss = torch.mean(dg ** 2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - l = torch.mean((1 - dg) ** 2) - gen_losses.append(l) - loss += l - - return loss, gen_losses diff --git a/spaces/YuxinJ/Scenimefy/Scenimefy/models/stylegan_networks.py b/spaces/YuxinJ/Scenimefy/Scenimefy/models/stylegan_networks.py deleted file mode 100644 index a3c625da4ead5414789b60c23613306e0df7df94..0000000000000000000000000000000000000000 --- a/spaces/YuxinJ/Scenimefy/Scenimefy/models/stylegan_networks.py +++ /dev/null @@ -1,914 +0,0 @@ -""" -The network architectures is based on PyTorch implemenation of StyleGAN2Encoder. -Original PyTorch repo: https://github.com/rosinality/style-based-gan-pytorch -Origianl StyelGAN2 paper: https://github.com/NVlabs/stylegan2 -We use the network architeture for our single-image traning setting. 
-""" - -import math -import numpy as np -import random - -import torch -from torch import nn -from torch.nn import functional as F - - -def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5): - return F.leaky_relu(input + bias, negative_slope) * scale - - -class FusedLeakyReLU(nn.Module): - def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5): - super().__init__() - self.bias = nn.Parameter(torch.zeros(1, channel, 1, 1)) - self.negative_slope = negative_slope - self.scale = scale - - def forward(self, input): - # print("FusedLeakyReLU: ", input.abs().mean()) - out = fused_leaky_relu(input, self.bias, - self.negative_slope, - self.scale) - # print("FusedLeakyReLU: ", out.abs().mean()) - return out - - -def upfirdn2d_native( - input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 -): - _, minor, in_h, in_w = input.shape - kernel_h, kernel_w = kernel.shape - - out = input.view(-1, minor, in_h, 1, in_w, 1) - out = F.pad(out, [0, up_x - 1, 0, 0, 0, up_y - 1, 0, 0]) - out = out.view(-1, minor, in_h * up_y, in_w * up_x) - - out = F.pad( - out, [max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)] - ) - out = out[ - :, - :, - max(-pad_y0, 0): out.shape[2] - max(-pad_y1, 0), - max(-pad_x0, 0): out.shape[3] - max(-pad_x1, 0), - ] - - # out = out.permute(0, 3, 1, 2) - out = out.reshape( - [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1] - ) - w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w) - out = F.conv2d(out, w) - out = out.reshape( - -1, - minor, - in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, - in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1, - ) - # out = out.permute(0, 2, 3, 1) - - return out[:, :, ::down_y, ::down_x] - - -def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)): - return upfirdn2d_native(input, kernel, up, up, down, down, pad[0], pad[1], pad[0], pad[1]) - - -class PixelNorm(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input): - return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8) - - -def make_kernel(k): - k = torch.tensor(k, dtype=torch.float32) - - if len(k.shape) == 1: - k = k[None, :] * k[:, None] - - k /= k.sum() - - return k - - -class Upsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) * (factor ** 2) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad) - - return out - - -class Downsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad) - - return out - - -class Blur(nn.Module): - def __init__(self, kernel, pad, upsample_factor=1): - super().__init__() - - kernel = make_kernel(kernel) - - if upsample_factor > 1: - kernel = kernel * (upsample_factor ** 2) - - self.register_buffer('kernel', kernel) - - self.pad = pad - - def forward(self, input): - out = upfirdn2d(input, self.kernel, pad=self.pad) - - return out - - -class EqualConv2d(nn.Module): - def __init__( - self, in_channel, out_channel, kernel_size, stride=1, 
padding=0, bias=True - ): - super().__init__() - - self.weight = nn.Parameter( - torch.randn(out_channel, in_channel, kernel_size, kernel_size) - ) - self.scale = math.sqrt(1) / math.sqrt(in_channel * (kernel_size ** 2)) - - self.stride = stride - self.padding = padding - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - - else: - self.bias = None - - def forward(self, input): - # print("Before EqualConv2d: ", input.abs().mean()) - out = F.conv2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - ) - # print("After EqualConv2d: ", out.abs().mean(), (self.weight * self.scale).abs().mean()) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},' - f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})' - ) - - -class EqualLinear(nn.Module): - def __init__( - self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None - ): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init)) - - else: - self.bias = None - - self.activation = activation - - self.scale = (math.sqrt(1) / math.sqrt(in_dim)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - if self.activation: - out = F.linear(input, self.weight * self.scale) - out = fused_leaky_relu(out, self.bias * self.lr_mul) - - else: - out = F.linear( - input, self.weight * self.scale, bias=self.bias * self.lr_mul - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})' - ) - - -class ScaledLeakyReLU(nn.Module): - def __init__(self, negative_slope=0.2): - super().__init__() - - self.negative_slope = negative_slope - - def forward(self, input): - out = F.leaky_relu(input, negative_slope=self.negative_slope) - - return out * math.sqrt(2) - - -class ModulatedConv2d(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - demodulate=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - ): - super().__init__() - - self.eps = 1e-8 - self.kernel_size = kernel_size - self.in_channel = in_channel - self.out_channel = out_channel - self.upsample = upsample - self.downsample = downsample - - if upsample: - factor = 2 - p = (len(blur_kernel) - factor) - (kernel_size - 1) - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 + 1 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor) - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1)) - - fan_in = in_channel * kernel_size ** 2 - self.scale = math.sqrt(1) / math.sqrt(fan_in) - self.padding = kernel_size // 2 - - self.weight = nn.Parameter( - torch.randn(1, out_channel, in_channel, kernel_size, kernel_size) - ) - - if style_dim is not None and style_dim > 0: - self.modulation = EqualLinear(style_dim, in_channel, bias_init=1) - - self.demodulate = demodulate - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, ' - f'upsample={self.upsample}, downsample={self.downsample})' - ) - - def forward(self, input, style): - batch, in_channel, height, width = input.shape - - if style is not None: - style = self.modulation(style).view(batch, 1, in_channel, 1, 1) - else: - style = 
torch.ones(batch, 1, in_channel, 1, 1).cuda() - weight = self.scale * self.weight * style - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8) - weight = weight * demod.view(batch, self.out_channel, 1, 1, 1) - - weight = weight.view( - batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - - if self.upsample: - input = input.view(1, batch * in_channel, height, width) - weight = weight.view( - batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - weight = weight.transpose(1, 2).reshape( - batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size - ) - out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - _, _, height, width = input.shape - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - else: - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=self.padding, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - return out - - -class NoiseInjection(nn.Module): - def __init__(self): - super().__init__() - - self.weight = nn.Parameter(torch.zeros(1)) - - def forward(self, image, noise=None): - if noise is None: - batch, _, height, width = image.shape - noise = image.new_empty(batch, 1, height, width).normal_() - - return image + self.weight * noise - - -class ConstantInput(nn.Module): - def __init__(self, channel, size=4): - super().__init__() - - self.input = nn.Parameter(torch.randn(1, channel, size, size)) - - def forward(self, input): - batch = input.shape[0] - out = self.input.repeat(batch, 1, 1, 1) - - return out - - -class StyledConv(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim=None, - upsample=False, - blur_kernel=[1, 3, 3, 1], - demodulate=True, - inject_noise=True, - ): - super().__init__() - - self.inject_noise = inject_noise - self.conv = ModulatedConv2d( - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=upsample, - blur_kernel=blur_kernel, - demodulate=demodulate, - ) - - self.noise = NoiseInjection() - # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1)) - # self.activate = ScaledLeakyReLU(0.2) - self.activate = FusedLeakyReLU(out_channel) - - def forward(self, input, style=None, noise=None): - out = self.conv(input, style) - if self.inject_noise: - out = self.noise(out, noise=noise) - # out = out + self.bias - out = self.activate(out) - - return out - - -class ToRGB(nn.Module): - def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - if upsample: - self.upsample = Upsample(blur_kernel) - - self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, input, style, skip=None): - out = self.conv(input, style) - out = out + self.bias - - if skip is not None: - skip = self.upsample(skip) - - out = out + skip - - return out - - -class Generator(nn.Module): - def __init__( - self, - size, - style_dim, - n_mlp, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - ): - super().__init__() - - self.size = size - - 
self.style_dim = style_dim - - layers = [PixelNorm()] - - for i in range(n_mlp): - layers.append( - EqualLinear( - style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu' - ) - ) - - self.style = nn.Sequential(*layers) - - self.channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - self.input = ConstantInput(self.channels[4]) - self.conv1 = StyledConv( - self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel - ) - self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False) - - self.log_size = int(math.log(size, 2)) - self.num_layers = (self.log_size - 2) * 2 + 1 - - self.convs = nn.ModuleList() - self.upsamples = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - self.noises = nn.Module() - - in_channel = self.channels[4] - - for layer_idx in range(self.num_layers): - res = (layer_idx + 5) // 2 - shape = [1, 1, 2 ** res, 2 ** res] - self.noises.register_buffer(f'noise_{layer_idx}', torch.randn(*shape)) - - for i in range(3, self.log_size + 1): - out_channel = self.channels[2 ** i] - - self.convs.append( - StyledConv( - in_channel, - out_channel, - 3, - style_dim, - upsample=True, - blur_kernel=blur_kernel, - ) - ) - - self.convs.append( - StyledConv( - out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel - ) - ) - - self.to_rgbs.append(ToRGB(out_channel, style_dim)) - - in_channel = out_channel - - self.n_latent = self.log_size * 2 - 2 - - def make_noise(self): - device = self.input.input.device - - noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device)) - - return noises - - def mean_latent(self, n_latent): - latent_in = torch.randn( - n_latent, self.style_dim, device=self.input.input.device - ) - latent = self.style(latent_in).mean(0, keepdim=True) - - return latent - - def get_latent(self, input): - return self.style(input) - - def forward( - self, - styles, - return_latents=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_latent=False, - noise=None, - randomize_noise=True, - ): - if not input_is_latent: - styles = [self.style(s) for s in styles] - - if noise is None: - if randomize_noise: - noise = [None] * self.num_layers - else: - noise = [ - getattr(self.noises, f'noise_{i}') for i in range(self.num_layers) - ] - - if truncation < 1: - style_t = [] - - for style in styles: - style_t.append( - truncation_latent + truncation * (style - truncation_latent) - ) - - styles = style_t - - if len(styles) < 2: - inject_index = self.n_latent - - if len(styles[0].shape) < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - - else: - latent = styles[0] - - else: - if inject_index is None: - inject_index = random.randint(1, self.n_latent - 1) - - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1) - - latent = torch.cat([latent, latent2], 1) - - out = self.input(latent) - out = self.conv1(out, latent[:, 0], noise=noise[0]) - - skip = self.to_rgb1(out, latent[:, 1]) - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip( - self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs - ): - out = conv1(out, latent[:, i], noise=noise1) - out = conv2(out, latent[:, i + 1], noise=noise2) - skip = to_rgb(out, 
latent[:, i + 2], skip) - - i += 2 - - image = skip - - if return_latents: - return image, latent - - else: - return image, None - - -class ConvLayer(nn.Sequential): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - downsample=False, - blur_kernel=[1, 3, 3, 1], - bias=True, - activate=True, - ): - layers = [] - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - layers.append(Blur(blur_kernel, pad=(pad0, pad1))) - - stride = 2 - self.padding = 0 - - else: - stride = 1 - self.padding = kernel_size // 2 - - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=self.padding, - stride=stride, - bias=bias and not activate, - ) - ) - - if activate: - if bias: - layers.append(FusedLeakyReLU(out_channel)) - - else: - layers.append(ScaledLeakyReLU(0.2)) - - super().__init__(*layers) - - -class ResBlock(nn.Module): - def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1], downsample=True, skip_gain=1.0): - super().__init__() - - self.skip_gain = skip_gain - self.conv1 = ConvLayer(in_channel, in_channel, 3) - self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=downsample, blur_kernel=blur_kernel) - - if in_channel != out_channel or downsample: - self.skip = ConvLayer( - in_channel, out_channel, 1, downsample=downsample, activate=False, bias=False - ) - else: - self.skip = nn.Identity() - - def forward(self, input): - out = self.conv1(input) - out = self.conv2(out) - - skip = self.skip(input) - out = (out * self.skip_gain + skip) / math.sqrt(self.skip_gain ** 2 + 1.0) - - return out - - -class StyleGAN2Discriminator(nn.Module): - def __init__(self, input_nc, ndf=64, n_layers=3, no_antialias=False, size=None, opt=None): - super().__init__() - self.opt = opt - self.stddev_group = 16 - if size is None: - size = 2 ** int((np.rint(np.log2(min(opt.load_size, opt.crop_size))))) - if "patch" in self.opt.netD and self.opt.D_patch_size is not None: - size = 2 ** int(np.log2(self.opt.D_patch_size)) - - blur_kernel = [1, 3, 3, 1] - channel_multiplier = ndf / 64 - channels = { - 4: min(384, int(4096 * channel_multiplier)), - 8: min(384, int(2048 * channel_multiplier)), - 16: min(384, int(1024 * channel_multiplier)), - 32: min(384, int(512 * channel_multiplier)), - 64: int(256 * channel_multiplier), - 128: int(128 * channel_multiplier), - 256: int(64 * channel_multiplier), - 512: int(32 * channel_multiplier), - 1024: int(16 * channel_multiplier), - } - - convs = [ConvLayer(3, channels[size], 1)] - - log_size = int(math.log(size, 2)) - - in_channel = channels[size] - - if "smallpatch" in self.opt.netD: - final_res_log2 = 4 - elif "patch" in self.opt.netD: - final_res_log2 = 3 - else: - final_res_log2 = 2 - - for i in range(log_size, final_res_log2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - if False and "tile" in self.opt.netD: - in_channel += 1 - self.final_conv = ConvLayer(in_channel, channels[4], 3) - if "patch" in self.opt.netD: - self.final_linear = ConvLayer(channels[4], 1, 3, bias=False, activate=False) - else: - self.final_linear = nn.Sequential( - EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu'), - EqualLinear(channels[4], 1), - ) - - def forward(self, input, get_minibatch_features=False): - if "patch" in self.opt.netD and self.opt.D_patch_size is not None: - h, w = input.size(2), input.size(3) - y = 
torch.randint(h - self.opt.D_patch_size, ()) - x = torch.randint(w - self.opt.D_patch_size, ()) - input = input[:, :, y:y + self.opt.D_patch_size, x:x + self.opt.D_patch_size] - out = input - for i, conv in enumerate(self.convs): - out = conv(out) - # print(i, out.abs().mean()) - # out = self.convs(input) - - batch, channel, height, width = out.shape - - if False and "tile" in self.opt.netD: - group = min(batch, self.stddev_group) - stddev = out.view( - group, -1, 1, channel // 1, height, width - ) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdim=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - # print(out.abs().mean()) - - if "patch" not in self.opt.netD: - out = out.view(batch, -1) - out = self.final_linear(out) - - return out - - -class TileStyleGAN2Discriminator(StyleGAN2Discriminator): - def forward(self, input): - B, C, H, W = input.size(0), input.size(1), input.size(2), input.size(3) - size = self.opt.D_patch_size - Y = H // size - X = W // size - input = input.view(B, C, Y, size, X, size) - input = input.permute(0, 2, 4, 1, 3, 5).contiguous().view(B * Y * X, C, size, size) - return super().forward(input) - - -class StyleGAN2Encoder(nn.Module): - def __init__(self, input_nc, output_nc, ngf=64, use_dropout=False, n_blocks=6, padding_type='reflect', no_antialias=False, opt=None): - super().__init__() - assert opt is not None - self.opt = opt - channel_multiplier = ngf / 32 - channels = { - 4: min(512, int(round(4096 * channel_multiplier))), - 8: min(512, int(round(2048 * channel_multiplier))), - 16: min(512, int(round(1024 * channel_multiplier))), - 32: min(512, int(round(512 * channel_multiplier))), - 64: int(round(256 * channel_multiplier)), - 128: int(round(128 * channel_multiplier)), - 256: int(round(64 * channel_multiplier)), - 512: int(round(32 * channel_multiplier)), - 1024: int(round(16 * channel_multiplier)), - } - - blur_kernel = [1, 3, 3, 1] - - cur_res = 2 ** int((np.rint(np.log2(min(opt.load_size, opt.crop_size))))) - convs = [nn.Identity(), - ConvLayer(3, channels[cur_res], 1)] - - num_downsampling = self.opt.stylegan2_G_num_downsampling - for i in range(num_downsampling): - in_channel = channels[cur_res] - out_channel = channels[cur_res // 2] - convs.append(ResBlock(in_channel, out_channel, blur_kernel, downsample=True)) - cur_res = cur_res // 2 - - for i in range(n_blocks // 2): - n_channel = channels[cur_res] - convs.append(ResBlock(n_channel, n_channel, downsample=False)) - - self.convs = nn.Sequential(*convs) - - def forward(self, input, layers=[], get_features=False): - feat = input - feats = [] - if -1 in layers: - layers.append(len(self.convs) - 1) - for layer_id, layer in enumerate(self.convs): - feat = layer(feat) - # print(layer_id, " features ", feat.abs().mean()) - if layer_id in layers: - feats.append(feat) - - if get_features: - return feat, feats - else: - return feat - - -class StyleGAN2Decoder(nn.Module): - def __init__(self, input_nc, output_nc, ngf=64, use_dropout=False, n_blocks=6, padding_type='reflect', no_antialias=False, opt=None): - super().__init__() - assert opt is not None - self.opt = opt - - blur_kernel = [1, 3, 3, 1] - - channel_multiplier = ngf / 32 - channels = { - 4: min(512, int(round(4096 * channel_multiplier))), - 8: min(512, int(round(2048 * channel_multiplier))), - 16: min(512, int(round(1024 * channel_multiplier))), - 32: min(512, int(round(512 * channel_multiplier))), - 64: int(round(256 * 
channel_multiplier)), - 128: int(round(128 * channel_multiplier)), - 256: int(round(64 * channel_multiplier)), - 512: int(round(32 * channel_multiplier)), - 1024: int(round(16 * channel_multiplier)), - } - - num_downsampling = self.opt.stylegan2_G_num_downsampling - cur_res = 2 ** int((np.rint(np.log2(min(opt.load_size, opt.crop_size))))) // (2 ** num_downsampling) - convs = [] - - for i in range(n_blocks // 2): - n_channel = channels[cur_res] - convs.append(ResBlock(n_channel, n_channel, downsample=False)) - - for i in range(num_downsampling): - in_channel = channels[cur_res] - out_channel = channels[cur_res * 2] - inject_noise = "small" not in self.opt.netG - convs.append( - StyledConv(in_channel, out_channel, 3, upsample=True, blur_kernel=blur_kernel, inject_noise=inject_noise) - ) - cur_res = cur_res * 2 - - convs.append(ConvLayer(channels[cur_res], 3, 1)) - - self.convs = nn.Sequential(*convs) - - def forward(self, input): - return self.convs(input) - - -class StyleGAN2Generator(nn.Module): - def __init__(self, input_nc, output_nc, ngf=64, use_dropout=False, n_blocks=6, padding_type='reflect', no_antialias=False, opt=None): - super().__init__() - self.opt = opt - self.encoder = StyleGAN2Encoder(input_nc, output_nc, ngf, use_dropout, n_blocks, padding_type, no_antialias, opt) - self.decoder = StyleGAN2Decoder(input_nc, output_nc, ngf, use_dropout, n_blocks, padding_type, no_antialias, opt) - - def forward(self, input, layers=[], encode_only=False): - feat, feats = self.encoder(input, layers, True) - if encode_only: - return feats - else: - fake = self.decoder(feat) - - if len(layers) > 0: - return fake, feats - else: - return fake diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/bbox/samplers/score_hlr_sampler.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/bbox/samplers/score_hlr_sampler.py deleted file mode 100644 index 11d46b97705db60fb6a4eb5fa7da10ac78acb8bc..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/bbox/samplers/score_hlr_sampler.py +++ /dev/null @@ -1,264 +0,0 @@ -import torch -from mmcv.ops import nms_match - -from ..builder import BBOX_SAMPLERS -from ..transforms import bbox2roi -from .base_sampler import BaseSampler -from .sampling_result import SamplingResult - - -@BBOX_SAMPLERS.register_module() -class ScoreHLRSampler(BaseSampler): - r"""Importance-based Sample Reweighting (ISR_N), described in `Prime Sample - Attention in Object Detection `_. - - Score hierarchical local rank (HLR) differentiates with RandomSampler in - negative part. It firstly computes Score-HLR in a two-step way, - then linearly maps score hlr to the loss weights. - - Args: - num (int): Total number of sampled RoIs. - pos_fraction (float): Fraction of positive samples. - context (:class:`BaseRoIHead`): RoI head that the sampler belongs to. - neg_pos_ub (int): Upper bound of the ratio of num negative to num - positive, -1 means no upper bound. - add_gt_as_proposals (bool): Whether to add ground truth as proposals. - k (float): Power of the non-linear mapping. - bias (float): Shift of the non-linear mapping. - score_thr (float): Minimum score that a negative sample is to be - considered as valid bbox. 
- """ - - def __init__(self, - num, - pos_fraction, - context, - neg_pos_ub=-1, - add_gt_as_proposals=True, - k=0.5, - bias=0, - score_thr=0.05, - iou_thr=0.5, - **kwargs): - super().__init__(num, pos_fraction, neg_pos_ub, add_gt_as_proposals) - self.k = k - self.bias = bias - self.score_thr = score_thr - self.iou_thr = iou_thr - self.context = context - # context of cascade detectors is a list, so distinguish them here. - if not hasattr(context, 'num_stages'): - self.bbox_roi_extractor = context.bbox_roi_extractor - self.bbox_head = context.bbox_head - self.with_shared_head = context.with_shared_head - if self.with_shared_head: - self.shared_head = context.shared_head - else: - self.bbox_roi_extractor = context.bbox_roi_extractor[ - context.current_stage] - self.bbox_head = context.bbox_head[context.current_stage] - - @staticmethod - def random_choice(gallery, num): - """Randomly select some elements from the gallery. - - If `gallery` is a Tensor, the returned indices will be a Tensor; - If `gallery` is a ndarray or list, the returned indices will be a - ndarray. - - Args: - gallery (Tensor | ndarray | list): indices pool. - num (int): expected sample num. - - Returns: - Tensor or ndarray: sampled indices. - """ - assert len(gallery) >= num - - is_tensor = isinstance(gallery, torch.Tensor) - if not is_tensor: - if torch.cuda.is_available(): - device = torch.cuda.current_device() - else: - device = 'cpu' - gallery = torch.tensor(gallery, dtype=torch.long, device=device) - perm = torch.randperm(gallery.numel(), device=gallery.device)[:num] - rand_inds = gallery[perm] - if not is_tensor: - rand_inds = rand_inds.cpu().numpy() - return rand_inds - - def _sample_pos(self, assign_result, num_expected, **kwargs): - """Randomly sample some positive samples.""" - pos_inds = torch.nonzero(assign_result.gt_inds > 0).flatten() - if pos_inds.numel() <= num_expected: - return pos_inds - else: - return self.random_choice(pos_inds, num_expected) - - def _sample_neg(self, - assign_result, - num_expected, - bboxes, - feats=None, - img_meta=None, - **kwargs): - """Sample negative samples. - - Score-HLR sampler is done in the following steps: - 1. Take the maximum positive score prediction of each negative samples - as s_i. - 2. Filter out negative samples whose s_i <= score_thr, the left samples - are called valid samples. - 3. Use NMS-Match to divide valid samples into different groups, - samples in the same group will greatly overlap with each other - 4. Rank the matched samples in two-steps to get Score-HLR. - (1) In the same group, rank samples with their scores. - (2) In the same score rank across different groups, - rank samples with their scores again. - 5. Linearly map Score-HLR to the final label weights. - - Args: - assign_result (:obj:`AssignResult`): result of assigner. - num_expected (int): Expected number of samples. - bboxes (Tensor): bbox to be sampled. - feats (Tensor): Features come from FPN. - img_meta (dict): Meta information dictionary. 
- """ - neg_inds = torch.nonzero(assign_result.gt_inds == 0).flatten() - num_neg = neg_inds.size(0) - if num_neg == 0: - return neg_inds, None - with torch.no_grad(): - neg_bboxes = bboxes[neg_inds] - neg_rois = bbox2roi([neg_bboxes]) - bbox_result = self.context._bbox_forward(feats, neg_rois) - cls_score, bbox_pred = bbox_result['cls_score'], bbox_result[ - 'bbox_pred'] - - ori_loss = self.bbox_head.loss( - cls_score=cls_score, - bbox_pred=None, - rois=None, - labels=neg_inds.new_full((num_neg, ), - self.bbox_head.num_classes), - label_weights=cls_score.new_ones(num_neg), - bbox_targets=None, - bbox_weights=None, - reduction_override='none')['loss_cls'] - - # filter out samples with the max score lower than score_thr - max_score, argmax_score = cls_score.softmax(-1)[:, :-1].max(-1) - valid_inds = (max_score > self.score_thr).nonzero().view(-1) - invalid_inds = (max_score <= self.score_thr).nonzero().view(-1) - num_valid = valid_inds.size(0) - num_invalid = invalid_inds.size(0) - - num_expected = min(num_neg, num_expected) - num_hlr = min(num_valid, num_expected) - num_rand = num_expected - num_hlr - if num_valid > 0: - valid_rois = neg_rois[valid_inds] - valid_max_score = max_score[valid_inds] - valid_argmax_score = argmax_score[valid_inds] - valid_bbox_pred = bbox_pred[valid_inds] - - # valid_bbox_pred shape: [num_valid, #num_classes, 4] - valid_bbox_pred = valid_bbox_pred.view( - valid_bbox_pred.size(0), -1, 4) - selected_bbox_pred = valid_bbox_pred[range(num_valid), - valid_argmax_score] - pred_bboxes = self.bbox_head.bbox_coder.decode( - valid_rois[:, 1:], selected_bbox_pred) - pred_bboxes_with_score = torch.cat( - [pred_bboxes, valid_max_score[:, None]], -1) - group = nms_match(pred_bboxes_with_score, self.iou_thr) - - # imp: importance - imp = cls_score.new_zeros(num_valid) - for g in group: - g_score = valid_max_score[g] - # g_score has already sorted - rank = g_score.new_tensor(range(g_score.size(0))) - imp[g] = num_valid - rank + g_score - _, imp_rank_inds = imp.sort(descending=True) - _, imp_rank = imp_rank_inds.sort() - hlr_inds = imp_rank_inds[:num_expected] - - if num_rand > 0: - rand_inds = torch.randperm(num_invalid)[:num_rand] - select_inds = torch.cat( - [valid_inds[hlr_inds], invalid_inds[rand_inds]]) - else: - select_inds = valid_inds[hlr_inds] - - neg_label_weights = cls_score.new_ones(num_expected) - - up_bound = max(num_expected, num_valid) - imp_weights = (up_bound - - imp_rank[hlr_inds].float()) / up_bound - neg_label_weights[:num_hlr] = imp_weights - neg_label_weights[num_hlr:] = imp_weights.min() - neg_label_weights = (self.bias + - (1 - self.bias) * neg_label_weights).pow( - self.k) - ori_selected_loss = ori_loss[select_inds] - new_loss = ori_selected_loss * neg_label_weights - norm_ratio = ori_selected_loss.sum() / new_loss.sum() - neg_label_weights *= norm_ratio - else: - neg_label_weights = cls_score.new_ones(num_expected) - select_inds = torch.randperm(num_neg)[:num_expected] - - return neg_inds[select_inds], neg_label_weights - - def sample(self, - assign_result, - bboxes, - gt_bboxes, - gt_labels=None, - img_meta=None, - **kwargs): - """Sample positive and negative bboxes. - - This is a simple implementation of bbox sampling given candidates, - assigning results and ground truth bboxes. - - Args: - assign_result (:obj:`AssignResult`): Bbox assigning results. - bboxes (Tensor): Boxes to be sampled from. - gt_bboxes (Tensor): Ground truth bboxes. - gt_labels (Tensor, optional): Class labels of ground truth bboxes. 
- - Returns: - tuple[:obj:`SamplingResult`, Tensor]: Sampling result and negetive - label weights. - """ - bboxes = bboxes[:, :4] - - gt_flags = bboxes.new_zeros((bboxes.shape[0], ), dtype=torch.uint8) - if self.add_gt_as_proposals: - bboxes = torch.cat([gt_bboxes, bboxes], dim=0) - assign_result.add_gt_(gt_labels) - gt_ones = bboxes.new_ones(gt_bboxes.shape[0], dtype=torch.uint8) - gt_flags = torch.cat([gt_ones, gt_flags]) - - num_expected_pos = int(self.num * self.pos_fraction) - pos_inds = self.pos_sampler._sample_pos( - assign_result, num_expected_pos, bboxes=bboxes, **kwargs) - num_sampled_pos = pos_inds.numel() - num_expected_neg = self.num - num_sampled_pos - if self.neg_pos_ub >= 0: - _pos = max(1, num_sampled_pos) - neg_upper_bound = int(self.neg_pos_ub * _pos) - if num_expected_neg > neg_upper_bound: - num_expected_neg = neg_upper_bound - neg_inds, neg_label_weights = self.neg_sampler._sample_neg( - assign_result, - num_expected_neg, - bboxes, - img_meta=img_meta, - **kwargs) - - return SamplingResult(pos_inds, neg_inds, bboxes, gt_bboxes, - assign_result, gt_flags), neg_label_weights diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/utils/gaussian_target.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/utils/gaussian_target.py deleted file mode 100644 index 7bb7160cb4bf2f47876f6e8373142aa5846920a9..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/utils/gaussian_target.py +++ /dev/null @@ -1,185 +0,0 @@ -from math import sqrt - -import torch - - -def gaussian2D(radius, sigma=1, dtype=torch.float32, device='cpu'): - """Generate 2D gaussian kernel. - - Args: - radius (int): Radius of gaussian kernel. - sigma (int): Sigma of gaussian function. Default: 1. - dtype (torch.dtype): Dtype of gaussian tensor. Default: torch.float32. - device (str): Device of gaussian tensor. Default: 'cpu'. - - Returns: - h (Tensor): Gaussian kernel with a - ``(2 * radius + 1) * (2 * radius + 1)`` shape. - """ - x = torch.arange( - -radius, radius + 1, dtype=dtype, device=device).view(1, -1) - y = torch.arange( - -radius, radius + 1, dtype=dtype, device=device).view(-1, 1) - - h = (-(x * x + y * y) / (2 * sigma * sigma)).exp() - - h[h < torch.finfo(h.dtype).eps * h.max()] = 0 - return h - - -def gen_gaussian_target(heatmap, center, radius, k=1): - """Generate 2D gaussian heatmap. - - Args: - heatmap (Tensor): Input heatmap, the gaussian kernel will cover on - it and maintain the max value. - center (list[int]): Coord of gaussian kernel's center. - radius (int): Radius of gaussian kernel. - k (int): Coefficient of gaussian kernel. Default: 1. - - Returns: - out_heatmap (Tensor): Updated heatmap covered by gaussian kernel. - """ - diameter = 2 * radius + 1 - gaussian_kernel = gaussian2D( - radius, sigma=diameter / 6, dtype=heatmap.dtype, device=heatmap.device) - - x, y = center - - height, width = heatmap.shape[:2] - - left, right = min(x, radius), min(width - x, radius + 1) - top, bottom = min(y, radius), min(height - y, radius + 1) - - masked_heatmap = heatmap[y - top:y + bottom, x - left:x + right] - masked_gaussian = gaussian_kernel[radius - top:radius + bottom, - radius - left:radius + right] - out_heatmap = heatmap - torch.max( - masked_heatmap, - masked_gaussian * k, - out=out_heatmap[y - top:y + bottom, x - left:x + right]) - - return out_heatmap - - -def gaussian_radius(det_size, min_overlap): - r"""Generate 2D gaussian radius. 
- - This function is modified from the `official github repo - `_. - - Given ``min_overlap``, radius could computed by a quadratic equation - according to Vieta's formulas. - - There are 3 cases for computing gaussian radius, details are following: - - - Explanation of figure: ``lt`` and ``br`` indicates the left-top and - bottom-right corner of ground truth box. ``x`` indicates the - generated corner at the limited position when ``radius=r``. - - - Case1: one corner is inside the gt box and the other is outside. - - .. code:: text - - |< width >| - - lt-+----------+ - - | | | ^ - +--x----------+--+ - | | | | - | | | | height - | | overlap | | - | | | | - | | | | v - +--+---------br--+ - - | | | - +----------+--x - - To ensure IoU of generated box and gt box is larger than ``min_overlap``: - - .. math:: - \cfrac{(w-r)*(h-r)}{w*h+(w+h)r-r^2} \ge {iou} \quad\Rightarrow\quad - {r^2-(w+h)r+\cfrac{1-iou}{1+iou}*w*h} \ge 0 \\ - {a} = 1,\quad{b} = {-(w+h)},\quad{c} = {\cfrac{1-iou}{1+iou}*w*h} - {r} \le \cfrac{-b-\sqrt{b^2-4*a*c}}{2*a} - - - Case2: both two corners are inside the gt box. - - .. code:: text - - |< width >| - - lt-+----------+ - - | | | ^ - +--x-------+ | - | | | | - | |overlap| | height - | | | | - | +-------x--+ - | | | v - +----------+-br - - - To ensure IoU of generated box and gt box is larger than ``min_overlap``: - - .. math:: - \cfrac{(w-2*r)*(h-2*r)}{w*h} \ge {iou} \quad\Rightarrow\quad - {4r^2-2(w+h)r+(1-iou)*w*h} \ge 0 \\ - {a} = 4,\quad {b} = {-2(w+h)},\quad {c} = {(1-iou)*w*h} - {r} \le \cfrac{-b-\sqrt{b^2-4*a*c}}{2*a} - - - Case3: both two corners are outside the gt box. - - .. code:: text - - |< width >| - - x--+----------------+ - | | | - +-lt-------------+ | - - | | | | ^ - | | | | - | | overlap | | height - | | | | - | | | | v - | +------------br--+ - - | | | - +----------------+--x - - To ensure IoU of generated box and gt box is larger than ``min_overlap``: - - .. math:: - \cfrac{w*h}{(w+2*r)*(h+2*r)} \ge {iou} \quad\Rightarrow\quad - {4*iou*r^2+2*iou*(w+h)r+(iou-1)*w*h} \le 0 \\ - {a} = {4*iou},\quad {b} = {2*iou*(w+h)},\quad {c} = {(iou-1)*w*h} \\ - {r} \le \cfrac{-b+\sqrt{b^2-4*a*c}}{2*a} - - Args: - det_size (list[int]): Shape of object. - min_overlap (float): Min IoU with ground truth for boxes generated by - keypoints inside the gaussian kernel. - - Returns: - radius (int): Radius of gaussian kernel. 
- """ - height, width = det_size - - a1 = 1 - b1 = (height + width) - c1 = width * height * (1 - min_overlap) / (1 + min_overlap) - sq1 = sqrt(b1**2 - 4 * a1 * c1) - r1 = (b1 - sq1) / (2 * a1) - - a2 = 4 - b2 = 2 * (height + width) - c2 = (1 - min_overlap) * width * height - sq2 = sqrt(b2**2 - 4 * a2 * c2) - r2 = (b2 - sq2) / (2 * a2) - - a3 = 4 * min_overlap - b3 = -2 * min_overlap * (height + width) - c3 = (min_overlap - 1) * width * height - sq3 = sqrt(b3**2 - 4 * a3 * c3) - r3 = (b3 + sq3) / (2 * a3) - return min(r1, r2, r3) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/builder.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/builder.py deleted file mode 100644 index 1f5b971252bfc971c3ffbaa27746d69b1d3ea9fd..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/builder.py +++ /dev/null @@ -1,46 +0,0 @@ -import warnings - -from annotator.uniformer.mmcv.cnn import MODELS as MMCV_MODELS -from annotator.uniformer.mmcv.utils import Registry - -MODELS = Registry('models', parent=MMCV_MODELS) - -BACKBONES = MODELS -NECKS = MODELS -HEADS = MODELS -LOSSES = MODELS -SEGMENTORS = MODELS - - -def build_backbone(cfg): - """Build backbone.""" - return BACKBONES.build(cfg) - - -def build_neck(cfg): - """Build neck.""" - return NECKS.build(cfg) - - -def build_head(cfg): - """Build head.""" - return HEADS.build(cfg) - - -def build_loss(cfg): - """Build loss.""" - return LOSSES.build(cfg) - - -def build_segmentor(cfg, train_cfg=None, test_cfg=None): - """Build segmentor.""" - if train_cfg is not None or test_cfg is not None: - warnings.warn( - 'train_cfg and test_cfg is deprecated, ' - 'please specify them in model', UserWarning) - assert cfg.get('train_cfg') is None or train_cfg is None, \ - 'train_cfg specified in both outer field and model field ' - assert cfg.get('test_cfg') is None or test_cfg is None, \ - 'test_cfg specified in both outer field and model field ' - return SEGMENTORS.build( - cfg, default_args=dict(train_cfg=train_cfg, test_cfg=test_cfg)) diff --git a/spaces/abidlabs/Voice-Cloning/README.md b/spaces/abidlabs/Voice-Cloning/README.md deleted file mode 100644 index ca77a6d2447b360e821eac2e543cb55d1722f5a5..0000000000000000000000000000000000000000 --- a/spaces/abidlabs/Voice-Cloning/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Voice Cloning -emoji: ⚡ -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.8 -app_file: app.py -pinned: false -license: mit -duplicated_from: BilalSardar/Voice-Cloning ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/canvas/headless.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/canvas/headless.py deleted file mode 100644 index 2262d9f2779f0dc98eb5d0486c6249290b132b65..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/canvas/headless.py +++ /dev/null @@ -1,71 +0,0 @@ -import pyglet -import warnings - -from .base import Display, Screen, ScreenMode, Canvas - - -from ctypes import * -from pyglet.libs.egl import egl -from pyglet.libs.egl import eglext - - -class HeadlessDisplay(Display): - - def __init__(self): - super().__init__() - # TODO: fix this placeholder: - self._screens = [HeadlessScreen(self, 0, 0, 1920, 1080)] - - num_devices = egl.EGLint() - 
eglext.eglQueryDevicesEXT(0, None, byref(num_devices)) - if num_devices.value > 0: - headless_device = pyglet.options['headless_device'] - if headless_device < 0 or headless_device >= num_devices.value: - raise ValueError(f'Invalid EGL devide id: {headless_device}') - devices = (eglext.EGLDeviceEXT * num_devices.value)() - eglext.eglQueryDevicesEXT(num_devices.value, devices, byref(num_devices)) - self._display_connection = eglext.eglGetPlatformDisplayEXT( - eglext.EGL_PLATFORM_DEVICE_EXT, devices[headless_device], None) - else: - warnings.warn('No device available for EGL device platform. Using native display type.') - display = egl.EGLNativeDisplayType() - self._display_connection = egl.eglGetDisplay(display) - - egl.eglInitialize(self._display_connection, None, None) - - def get_screens(self): - return self._screens - - def __del__(self): - egl.eglTerminate(self._display_connection) - - -class HeadlessCanvas(Canvas): - def __init__(self, display, egl_surface): - super().__init__(display) - self.egl_surface = egl_surface - - -class HeadlessScreen(Screen): - def __init__(self, display, x, y, width, height): - super().__init__(display, x, y, width, height) - - def get_matching_configs(self, template): - canvas = HeadlessCanvas(self.display, None) - configs = template.match(canvas) - # XXX deprecate - for config in configs: - config.screen = self - return configs - - def get_modes(self): - pass - - def get_mode(self): - pass - - def set_mode(self, mode): - pass - - def restore_mode(self): - pass diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/input/win32/wintab.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/input/win32/wintab.py deleted file mode 100644 index e4890e109f53a3d904e3874f4e97e7c8e31aa03c..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/input/win32/wintab.py +++ /dev/null @@ -1,401 +0,0 @@ -import ctypes -from collections import defaultdict -import pyglet -from pyglet.input.base import DeviceOpenException -from pyglet.input.base import Tablet, TabletCanvas -from pyglet.libs.win32 import libwintab as wintab -from pyglet.util import debug_print - -_debug = debug_print('debug_input') - -lib = wintab.lib - - -def wtinfo(category, index, buffer): - size = lib.WTInfoW(category, index, None) - assert size <= ctypes.sizeof(buffer) - lib.WTInfoW(category, index, ctypes.byref(buffer)) - return buffer - - -def wtinfo_string(category, index): - size = lib.WTInfoW(category, index, None) - buffer = ctypes.create_unicode_buffer(size) - lib.WTInfoW(category, index, buffer) - return buffer.value - - -def wtinfo_uint(category, index): - buffer = wintab.UINT() - lib.WTInfoW(category, index, ctypes.byref(buffer)) - return buffer.value - - -def wtinfo_word(category, index): - buffer = wintab.WORD() - lib.WTInfoW(category, index, ctypes.byref(buffer)) - return buffer.value - - -def wtinfo_dword(category, index): - buffer = wintab.DWORD() - lib.WTInfoW(category, index, ctypes.byref(buffer)) - return buffer.value - - -def wtinfo_wtpkt(category, index): - buffer = wintab.WTPKT() - lib.WTInfoW(category, index, ctypes.byref(buffer)) - return buffer.value - - -def wtinfo_bool(category, index): - buffer = wintab.BOOL() - lib.WTInfoW(category, index, ctypes.byref(buffer)) - return bool(buffer.value) - - -class WintabTablet(Tablet): - def __init__(self, index): - self._device = wintab.WTI_DEVICES + index - self.name = 
wtinfo_string(self._device, wintab.DVC_NAME).strip() - self.id = wtinfo_string(self._device, wintab.DVC_PNPID) - - hardware = wtinfo_uint(self._device, wintab.DVC_HARDWARE) - # phys_cursors = hardware & wintab.HWC_PHYSID_CURSORS - - n_cursors = wtinfo_uint(self._device, wintab.DVC_NCSRTYPES) - first_cursor = wtinfo_uint(self._device, wintab.DVC_FIRSTCSR) - - self.pressure_axis = wtinfo(self._device, wintab.DVC_NPRESSURE, wintab.AXIS()) - - self.cursors = [] - self._cursor_map = {} - - for i in range(n_cursors): - cursor = WintabTabletCursor(self, i + first_cursor) - if not cursor.bogus: - self.cursors.append(cursor) - self._cursor_map[i + first_cursor] = cursor - - def open(self, window): - return WintabTabletCanvas(self, window) - - -class WintabTabletCanvas(TabletCanvas): - override_keys = False - - def __init__(self, device, window, msg_base=wintab.WT_DEFBASE): - super(WintabTabletCanvas, self).__init__(window) - - self.device = device - self.msg_base = msg_base - - # Get the extension masks available. Only need to do this once. - global _extension_masks - if not _extension_masks: - _extension_masks = get_extension_masks() - - # Just use system context, for similarity w/ os x and xinput. - # WTI_DEFCONTEXT detaches mouse from tablet, which is nice, but not - # possible on os x afiak. - self.context_info = context_info = wintab.LOGCONTEXT() - wtinfo(wintab.WTI_DEFSYSCTX, 0, context_info) - context_info.lcMsgBase = msg_base - context_info.lcOptions |= wintab.CXO_MESSAGES - - # If you change this, change definition of PACKET also. - context_info.lcPktData = ( - wintab.PK_CHANGED | wintab.PK_CURSOR | wintab.PK_BUTTONS | - wintab.PK_X | wintab.PK_Y | wintab.PK_Z | - wintab.PK_NORMAL_PRESSURE | wintab.PK_TANGENT_PRESSURE | - wintab.PK_ORIENTATION) | _extension_masks - context_info.lcPktMode = 0 # All absolute (PACKETMODE) - - self._context = lib.WTOpenW(window._hwnd, ctypes.byref(context_info), True) - if not self._context: - raise DeviceOpenException("Couldn't open tablet context") - - window._event_handlers[msg_base + wintab.WT_PACKET] = self._event_wt_packet - window._event_handlers[msg_base + wintab.WT_PROXIMITY] = self._event_wt_proximity - - if _extension_masks: - window._event_handlers[msg_base + wintab.WT_PACKETEXT] = self._event_wt_packetext - - self._current_cursor = None - self._pressure_scale = device.pressure_axis.get_scale() - self._pressure_bias = device.pressure_axis.get_bias() - - self.express_keys = defaultdict(lambda: defaultdict(bool)) # [control_id][location_id] - self.express_key_ct = 0 - self.touchrings = [] # Not currently implemented. - self.touchstrips = [] # Not currently implemented. 
- - # Override test - for tablet_id in range(get_tablet_count()): - control_count = self.extension_get(wintab.WTX_EXPKEYS2, tablet_id, 0, 0, - wintab.TABLET_PROPERTY_CONTROLCOUNT) - self.express_key_ct = control_count - assert _debug(f"Controls Found: {control_count}") - if self.override_keys is True: - for control_id in range(control_count): - function_count = self.extension_get(wintab.WTX_EXPKEYS2, tablet_id, control_id, 0, - wintab.TABLET_PROPERTY_FUNCCOUNT) - for function_id in range(function_count): - self.extension_set(wintab.WTX_EXPKEYS2, tablet_id, control_id, function_id, - wintab.TABLET_PROPERTY_OVERRIDE, wintab.BOOL(True)) - - def extension_get(self, extension, tablet_id, control_id, function_id, property_id, value_type=wintab.UINT): - prop = wintab.EXTPROPERTY() - - prop.version = 0 - prop.tabletIndex = tablet_id - prop.controlIndex = control_id - prop.functionIndex = function_id - prop.propertyID = property_id - prop.reserved = 0 - prop.dataSize = ctypes.sizeof(value_type) - - success = lib.WTExtGet(self._context, extension, ctypes.byref(prop)) - if success: - return ctypes.cast(prop.data, ctypes.POINTER(value_type)).contents.value - - return 0 - - def extension_set(self, extension, tablet_id, control_id, function_id, property_id, value): - prop = wintab.EXTPROPERTY() - prop.version = 0 - prop.tabletIndex = tablet_id - prop.controlIndex = control_id - prop.functionIndex = function_id - prop.propertyID = property_id - prop.reserved = 0 - prop.dataSize = ctypes.sizeof(value) - prop.data[0] = value.value - - success = lib.WTExtSet(self._context, extension, ctypes.byref(prop)) - if success: - return True - - return False - - def close(self): - lib.WTClose(self._context) - self._context = None - - del self.window._event_handlers[self.msg_base + wintab.WT_PACKET] - del self.window._event_handlers[self.msg_base + wintab.WT_PROXIMITY] - - if _extension_masks: - del self.window._event_handlers[self.msg_base + wintab.WT_PACKETEXT] - - def _set_current_cursor(self, cursor_type): - if self._current_cursor: - self.dispatch_event('on_leave', self._current_cursor) - - self._current_cursor = self.device._cursor_map.get(cursor_type, None) - - if self._current_cursor: - self.dispatch_event('on_enter', self._current_cursor) - - @pyglet.window.win32.Win32EventHandler(0) - def _event_wt_packet(self, msg, wParam, lParam): - if lParam != self._context: - return - - packet = wintab.PACKET() - if lib.WTPacket(self._context, wParam, ctypes.byref(packet)) == 0: - return - - if not packet.pkChanged: - return - - window_x, window_y = self.window.get_location() # TODO cache on window - window_y = self.window.screen.height - window_y - self.window.height - x = packet.pkX - window_x - y = packet.pkY - window_y - pressure = (packet.pkNormalPressure + self._pressure_bias) * self._pressure_scale - - if self._current_cursor is None: - self._set_current_cursor(packet.pkCursor) - - self.dispatch_event('on_motion', self._current_cursor, x, y, pressure, 0., 0., packet.pkButtons) - - @pyglet.window.win32.Win32EventHandler(0) - def _event_wt_packetext(self, msg, wParam, lParam): - packet = wintab.PACKETEXT() - if lib.WTPacket(lParam, wParam, ctypes.byref(packet)) == 0: - return - - # Proper context exists in the packet, not the lParam. 
- if packet.pkBase.nContext == self._context: - if packet.pkExpKeys.nControl < self.express_key_ct: - current_state = self.express_keys[packet.pkExpKeys.nControl][packet.pkExpKeys.nLocation] - new_state = bool(packet.pkExpKeys.nState) - if current_state != new_state: - event_type = "on_express_key_press" if new_state else "on_express_key_release" - - self.express_keys[packet.pkExpKeys.nControl][packet.pkExpKeys.nLocation] = new_state - - self.dispatch_event(event_type, packet.pkExpKeys.nControl, packet.pkExpKeys.nLocation) - - @pyglet.window.win32.Win32EventHandler(0) - def _event_wt_proximity(self, msg, wParam, lParam): - if wParam != self._context: - return - - if not lParam & 0xffff0000: - # Not a hardware proximity event - return - - if not lParam & 0xffff: - # Going out - self.dispatch_event('on_leave', self._current_cursor) - - # If going in, proximity event will be generated by next event, which - # can actually grab a cursor id. - self._current_cursor = None - - def on_express_key_press(self, control_id: int, location_id: int): - """An event called when an ExpressKey is pressed down. - - :Parameters: - `control_id` : int - Zero-based index number given to the assigned key by the driver. - The same control_id may exist in multiple locations, which the location_id is used to differentiate. - `location_id: int - Zero-based index indicating side of tablet where control id was found. - Some tablets may have clusters of ExpressKey's on various areas of the tablet. - (0 = left, 1 = right, 2 = top, 3 = bottom, 4 = transducer). - - :event: - """ - - def on_express_key_release(self, control_id: int, location_id: int): - """An event called when an ExpressKey is released. - - :Parameters: - `control_id` : int - Zero-based index number given to the assigned key by the driver. - The same control_id may exist in multiple locations, which the location_id is used to differentiate. - `location_id: int - Zero-based index indicating side of tablet where control id was found. - Some tablets may have clusters of ExpressKey's on various areas of the tablet. - (0 = left, 1 = right, 2 = top, 3 = bottom, 4 = transducer). - - :event: - """ - - -WintabTabletCanvas.register_event_type('on_express_key_press') -WintabTabletCanvas.register_event_type('on_express_key_release') - - -class WintabTabletCursor: - def __init__(self, device, index): - self.device = device - self._cursor = wintab.WTI_CURSORS + index - - self.name = wtinfo_string(self._cursor, wintab.CSR_NAME).strip() - self.active = wtinfo_bool(self._cursor, wintab.CSR_ACTIVE) - pktdata = wtinfo_wtpkt(self._cursor, wintab.CSR_PKTDATA) - - # A whole bunch of cursors are reported by the driver, but most of - # them are hogwash. Make sure a cursor has at least X and Y data - # before adding it to the device. 
- self.bogus = not (pktdata & wintab.PK_X and pktdata & wintab.PK_Y) - if self.bogus: - return - - self.id = (wtinfo_dword(self._cursor, wintab.CSR_TYPE) << 32) | \ - wtinfo_dword(self._cursor, wintab.CSR_PHYSID) - - def __repr__(self): - return 'WintabCursor(%r)' % self.name - - -def get_spec_version(): - spec_version = wtinfo_word(wintab.WTI_INTERFACE, wintab.IFC_SPECVERSION) - return spec_version - - -def get_interface_name(): - interface_name = wtinfo_string(wintab.WTI_INTERFACE, wintab.IFC_WINTABID) - return interface_name - - -def get_implementation_version(): - impl_version = wtinfo_word(wintab.WTI_INTERFACE, wintab.IFC_IMPLVERSION) - return impl_version - - -def extension_index(ext): - """Check if a particular extension exists within the driver.""" - exists = True - i = 0 - index = 0xFFFFFFFF - - while exists: - tag = wintab.UINT() - exists = lib.WTInfoW(wintab.WTI_EXTENSIONS + i, wintab.EXT_TAG, ctypes.byref(tag)) - if tag.value == ext: - index = i - break - - i += 1 - - if index != 0xFFFFFFFF: - return index - - return None - - -def get_extension_masks(): - """Determine which extension support is available by getting the masks.""" - masks = 0 - tr_idx = extension_index(wintab.WTX_TOUCHRING) - if tr_idx is not None: - assert _debug("Touchring support found") - masks |= wtinfo_uint(wintab.WTI_EXTENSIONS + tr_idx, wintab.EXT_MASK) - else: - assert _debug("Touchring extension not found.") - - ts_idx = extension_index(wintab.WTX_TOUCHSTRIP) - if ts_idx is not None: - assert _debug("Touchstrip support found.") - masks |= wtinfo_uint(wintab.WTI_EXTENSIONS + ts_idx, wintab.EXT_MASK) - else: - assert _debug("Touchstrip extension not found.") - - expkeys_idx = extension_index(wintab.WTX_EXPKEYS2) - if expkeys_idx is not None: - assert _debug("ExpressKey support found.") - masks |= wtinfo_uint(wintab.WTI_EXTENSIONS + expkeys_idx, wintab.EXT_MASK) - else: - assert _debug("ExpressKey extension not found.") - - return masks - - -def get_tablet_count(): - """Return just the number of current devices.""" - spec_version = get_spec_version() - assert _debug(f"Wintab Version: {spec_version}") - if spec_version < 0x101: - return 0 - - n_devices = wtinfo_uint(wintab.WTI_INTERFACE, wintab.IFC_NDEVICES) - return n_devices - - -_extension_masks = None - - -def get_tablets(display=None): - # Require spec version 1.1 or greater - n_devices = get_tablet_count() - if not n_devices: - return [] - - devices = [WintabTablet(i) for i in range(n_devices)] - return devices diff --git a/spaces/adrabi-abderrahim/english-pronunciation-practice/app.py b/spaces/adrabi-abderrahim/english-pronunciation-practice/app.py deleted file mode 100644 index 90e324b478d9c67b8acbb3cd1fa0946e024f07f8..0000000000000000000000000000000000000000 --- a/spaces/adrabi-abderrahim/english-pronunciation-practice/app.py +++ /dev/null @@ -1,50 +0,0 @@ -from gtts import gTTS -from transformers import pipeline -import gradio as gr -import uuid - -asr = pipeline('automatic-speech-recognition', "facebook/wav2vec2-conformer-rope-large-960h-ft") -corrector = pipeline("text2text-generation", model="pszemraj/grammar-synthesis-small") - -transcribe = lambda audio: asr(audio)['text'].lower() - -def to_audio(s): - audio_path = f'/tmp/{uuid.uuid4()}.mp3' - tts = gTTS(s, tld='us') - tts.save(audio_path) - return audio_path - - -def transcription(audio, history): - if audio: - message = transcribe(audio) - history.append(( (audio, ) , message)) - results = corrector(message) - results = '\n'.join([t['generated_text'] for t in results]) - history.append( 
(None, f'**[Grammar and examples]**\n {results}') ) - - return history - -def chat(message, history): - audio_path = to_audio(message) - history.append((message, (audio_path,))) - results = corrector(message) - results = '\n'.join([t['generated_text'] for t in results]) - history.append( (None, f'**[Grammar and examples]**\n {results}') ) - - return None, history - -with gr.Blocks(theme=gr.themes.Soft()) as learning: - gr.Markdown('# The main aim of this app is to help English learners to speak fluently.') - - chatbot = gr.Chatbot() - - with gr.Row(): - message = gr.Textbox(label='Send your message to TTS') - microphone = gr.Audio(label="Transcribe", source="microphone", type="filepath") - - microphone.change(transcription, [microphone, chatbot], [chatbot]) - microphone.change(lambda:None, None, microphone) - message.submit(chat, [message, chatbot], [message, chatbot]) - -learning.launch() \ No newline at end of file diff --git a/spaces/akhaliq/JoJoGAN/op/__init__.py b/spaces/akhaliq/JoJoGAN/op/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/akhaliq/Real-Time-Voice-Cloning/encoder/data_objects/speaker_verification_dataset.py b/spaces/akhaliq/Real-Time-Voice-Cloning/encoder/data_objects/speaker_verification_dataset.py deleted file mode 100644 index 77a6e05eae6a939ae7575ae70b7173644141fffe..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Real-Time-Voice-Cloning/encoder/data_objects/speaker_verification_dataset.py +++ /dev/null @@ -1,56 +0,0 @@ -from encoder.data_objects.random_cycler import RandomCycler -from encoder.data_objects.speaker_batch import SpeakerBatch -from encoder.data_objects.speaker import Speaker -from encoder.params_data import partials_n_frames -from torch.utils.data import Dataset, DataLoader -from pathlib import Path - -# TODO: improve with a pool of speakers for data efficiency - -class SpeakerVerificationDataset(Dataset): - def __init__(self, datasets_root: Path): - self.root = datasets_root - speaker_dirs = [f for f in self.root.glob("*") if f.is_dir()] - if len(speaker_dirs) == 0: - raise Exception("No speakers found. 
Make sure you are pointing to the directory " - "containing all preprocessed speaker directories.") - self.speakers = [Speaker(speaker_dir) for speaker_dir in speaker_dirs] - self.speaker_cycler = RandomCycler(self.speakers) - - def __len__(self): - return int(1e10) - - def __getitem__(self, index): - return next(self.speaker_cycler) - - def get_logs(self): - log_string = "" - for log_fpath in self.root.glob("*.txt"): - with log_fpath.open("r") as log_file: - log_string += "".join(log_file.readlines()) - return log_string - - -class SpeakerVerificationDataLoader(DataLoader): - def __init__(self, dataset, speakers_per_batch, utterances_per_speaker, sampler=None, - batch_sampler=None, num_workers=0, pin_memory=False, timeout=0, - worker_init_fn=None): - self.utterances_per_speaker = utterances_per_speaker - - super().__init__( - dataset=dataset, - batch_size=speakers_per_batch, - shuffle=False, - sampler=sampler, - batch_sampler=batch_sampler, - num_workers=num_workers, - collate_fn=self.collate, - pin_memory=pin_memory, - drop_last=False, - timeout=timeout, - worker_init_fn=worker_init_fn - ) - - def collate(self, speakers): - return SpeakerBatch(speakers, self.utterances_per_speaker, partials_n_frames) - \ No newline at end of file diff --git a/spaces/alalalyuqing/White-box-Cartoonization/README.md b/spaces/alalalyuqing/White-box-Cartoonization/README.md deleted file mode 100644 index 9860239cf42c94e385faaaa75a85311e010d64f7..0000000000000000000000000000000000000000 --- a/spaces/alalalyuqing/White-box-Cartoonization/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -python_version: 3.7 -title: White Box Cartoonization -emoji: 📚 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: hylee/White-box-Cartoonization ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/alamin655/websurfx/Dockerfile b/spaces/alamin655/websurfx/Dockerfile deleted file mode 100644 index f779730963d287d426904d4758f12b58c14f55bd..0000000000000000000000000000000000000000 --- a/spaces/alamin655/websurfx/Dockerfile +++ /dev/null @@ -1,28 +0,0 @@ -FROM rust:latest AS chef -# We only pay the installation cost once, -# it will be cached from the second build onwards -RUN cargo install cargo-chef --locked - -WORKDIR /app - -FROM chef AS planner -COPY . . -RUN cargo chef prepare --recipe-path recipe.json - -FROM chef AS builder -COPY --from=planner /app/recipe.json recipe.json -# Build dependencies - this is the caching Docker layer! -RUN cargo chef cook --release --recipe-path recipe.json - -# Build application -COPY . . -RUN cargo install --path . - -# We do not need the Rust toolchain to run the binary! 
-FROM gcr.io/distroless/cc-debian12 -COPY --from=builder /app/public/ /opt/websurfx/public/ -COPY --from=builder /app/websurfx/config.lua /etc/xdg/websurfx/config.lua -COPY --from=builder /app/websurfx/allowlist.txt /etc/xdg/websurfx/allowlist.txt -COPY --from=builder /app/websurfx/blocklist.txt /etc/xdg/websurfx/blocklist.txt -COPY --from=builder /usr/local/cargo/bin/* /usr/local/bin/ -CMD ["websurfx"] diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/operations/build/wheel_editable.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/operations/build/wheel_editable.py deleted file mode 100644 index cf7b01aed5afcef2924ebb10c15499c4497d5ea2..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/operations/build/wheel_editable.py +++ /dev/null @@ -1,46 +0,0 @@ -import logging -import os -from typing import Optional - -from pip._vendor.pep517.wrappers import HookMissing, Pep517HookCaller - -from pip._internal.utils.subprocess import runner_with_spinner_message - -logger = logging.getLogger(__name__) - - -def build_wheel_editable( - name: str, - backend: Pep517HookCaller, - metadata_directory: str, - tempd: str, -) -> Optional[str]: - """Build one InstallRequirement using the PEP 660 build process. - - Returns path to wheel if successfully built. Otherwise, returns None. - """ - assert metadata_directory is not None - try: - logger.debug("Destination directory: %s", tempd) - - runner = runner_with_spinner_message( - f"Building editable for {name} (pyproject.toml)" - ) - with backend.subprocess_runner(runner): - try: - wheel_name = backend.build_editable( - tempd, - metadata_directory=metadata_directory, - ) - except HookMissing as e: - logger.error( - "Cannot build editable %s because the build " - "backend does not have the %s hook", - name, - e, - ) - return None - except Exception: - logger.error("Failed building editable for %s", name) - return None - return os.path.join(tempd, wheel_name) diff --git a/spaces/aliabd/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/pyrouge/Rouge155.py b/spaces/aliabd/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/pyrouge/Rouge155.py deleted file mode 100644 index a3d2ca32f1f430e5356106e719a816da56f9f887..0000000000000000000000000000000000000000 --- a/spaces/aliabd/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/pyrouge/Rouge155.py +++ /dev/null @@ -1,649 +0,0 @@ -from __future__ import print_function, unicode_literals, division - -import os -import re -import codecs -import platform - -from subprocess import check_output -from tempfile import mkdtemp -from functools import partial - -try: - from configparser import ConfigParser -except ImportError: - from ConfigParser import ConfigParser - -from .utils import log -from .utils.file_utils import DirectoryProcessor -from .utils.file_utils import verify_dir - - -class Rouge155(object): - """ - This is a wrapper for the ROUGE 1.5.5 summary evaluation package. - This class is designed to simplify the evaluation process by: - - 1) Converting summaries into a format ROUGE understands. - 2) Generating the ROUGE configuration file automatically based - on filename patterns. - - This class can be used within Python like this: - - rouge = Rouge155() - rouge.system_dir = 'test/systems' - rouge.model_dir = 'test/models' - - # The system filename pattern should contain one group that - # matches the document ID. 
- rouge.system_filename_pattern = 'SL.P.10.R.11.SL062003-(\d+).html' - - # The model filename pattern has '#ID#' as a placeholder for the - # document ID. If there are multiple model summaries, pyrouge - # will use the provided regex to automatically match them with - # the corresponding system summary. Here, [A-Z] matches - # multiple model summaries for a given #ID#. - rouge.model_filename_pattern = 'SL.P.10.R.[A-Z].SL062003-#ID#.html' - - rouge_output = rouge.evaluate() - print(rouge_output) - output_dict = rouge.output_to_dict(rouge_ouput) - print(output_dict) - -> {'rouge_1_f_score': 0.95652, - 'rouge_1_f_score_cb': 0.95652, - 'rouge_1_f_score_ce': 0.95652, - 'rouge_1_precision': 0.95652, - [...] - - - To evaluate multiple systems: - - rouge = Rouge155() - rouge.system_dir = '/PATH/TO/systems' - rouge.model_dir = 'PATH/TO/models' - for system_id in ['id1', 'id2', 'id3']: - rouge.system_filename_pattern = \ - 'SL.P/.10.R.{}.SL062003-(\d+).html'.format(system_id) - rouge.model_filename_pattern = \ - 'SL.P.10.R.[A-Z].SL062003-#ID#.html' - rouge_output = rouge.evaluate(system_id) - print(rouge_output) - - """ - - def __init__(self, rouge_dir=None, rouge_args=None, log_level=None): - """ - Create a Rouge155 object. - - rouge_dir: Directory containing Rouge-1.5.5.pl - rouge_args: Arguments to pass through to ROUGE if you - don't want to use the default pyrouge - arguments. - - """ - if log_level is None: - self.log = log.get_global_console_logger() - else: - self.log = log.get_global_console_logger(log_level) - self.__set_dir_properties() - self._config_file = None - self._settings_file = self.__get_config_path() - self.__set_rouge_dir(rouge_dir) - self.args = self.__clean_rouge_args(rouge_args) - self._system_filename_pattern = None - self._model_filename_pattern = None - - def save_home_dir(self): - config = ConfigParser() - section = "pyrouge settings" - config.add_section(section) - config.set(section, "home_dir", self._home_dir) - with open(self._settings_file, "w") as f: - config.write(f) - self.log.info("Set ROUGE home directory to {}.".format(self._home_dir)) - - @property - def settings_file(self): - """ - Path of the setttings file, which stores the ROUGE home dir. - - """ - return self._settings_file - - @property - def bin_path(self): - """ - The full path of the ROUGE binary (although it's technically - a script), i.e. rouge_home_dir/ROUGE-1.5.5.pl - - """ - if self._bin_path is None: - raise Exception( - "ROUGE path not set. Please set the ROUGE home directory " - "and ensure that ROUGE-1.5.5.pl exists in it." - ) - return self._bin_path - - @property - def system_filename_pattern(self): - """ - The regular expression pattern for matching system summary - filenames. The regex string. - - E.g. "SL.P.10.R.11.SL062003-(\d+).html" will match the system - filenames in the SPL2003/system folder of the ROUGE SPL example - in the "sample-test" folder. - - Currently, there is no support for multiple systems. - - """ - return self._system_filename_pattern - - @system_filename_pattern.setter - def system_filename_pattern(self, pattern): - self._system_filename_pattern = pattern - - @property - def model_filename_pattern(self): - """ - The regular expression pattern for matching model summary - filenames. The pattern needs to contain the string "#ID#", - which is a placeholder for the document ID. - - E.g. "SL.P.10.R.[A-Z].SL062003-#ID#.html" will match the model - filenames in the SPL2003/system folder of the ROUGE SPL - example in the "sample-test" folder. 
- - "#ID#" is a placeholder for the document ID which has been - matched by the "(\d+)" part of the system filename pattern. - The different model summaries for a given document ID are - matched by the "[A-Z]" part. - - """ - return self._model_filename_pattern - - @model_filename_pattern.setter - def model_filename_pattern(self, pattern): - self._model_filename_pattern = pattern - - @property - def config_file(self): - return self._config_file - - @config_file.setter - def config_file(self, path): - config_dir, _ = os.path.split(path) - verify_dir(config_dir, "configuration file") - self._config_file = path - - def split_sentences(self): - """ - ROUGE requires texts split into sentences. In case the texts - are not already split, this method can be used. - - """ - from pyrouge.utils.sentence_splitter import PunktSentenceSplitter - - self.log.info("Splitting sentences.") - ss = PunktSentenceSplitter() - sent_split_to_string = lambda s: "\n".join(ss.split(s)) - process_func = partial( - DirectoryProcessor.process, function=sent_split_to_string - ) - self.__process_summaries(process_func) - - @staticmethod - def convert_summaries_to_rouge_format(input_dir, output_dir): - """ - Convert all files in input_dir into a format ROUGE understands - and saves the files to output_dir. The input files are assumed - to be plain text with one sentence per line. - - input_dir: Path of directory containing the input files. - output_dir: Path of directory in which the converted files - will be saved. - - """ - DirectoryProcessor.process( - input_dir, output_dir, Rouge155.convert_text_to_rouge_format - ) - - @staticmethod - def convert_text_to_rouge_format(text, title="dummy title"): - """ - Convert a text to a format ROUGE understands. The text is - assumed to contain one sentence per line. - - text: The text to convert, containg one sentence per line. - title: Optional title for the text. The title will appear - in the converted file, but doesn't seem to have - any other relevance. - - Returns: The converted text as string. - - """ - sentences = text.split("\n") - sent_elems = [ - '[{i}] ' - "{text}".format(i=i, text=sent) - for i, sent in enumerate(sentences, start=1) - ] - html = """ - -{title} - - -{elems} - -""".format( - title=title, elems="\n".join(sent_elems) - ) - - return html - - @staticmethod - def write_config_static( - system_dir, - system_filename_pattern, - model_dir, - model_filename_pattern, - config_file_path, - system_id=None, - ): - """ - Write the ROUGE configuration file, which is basically a list - of system summary files and their corresponding model summary - files. - - pyrouge uses regular expressions to automatically find the - matching model summary files for a given system summary file - (cf. docstrings for system_filename_pattern and - model_filename_pattern). - - system_dir: Path of directory containing - system summaries. - system_filename_pattern: Regex string for matching - system summary filenames. - model_dir: Path of directory containing - model summaries. - model_filename_pattern: Regex string for matching model - summary filenames. - config_file_path: Path of the configuration file. - system_id: Optional system ID string which - will appear in the ROUGE output. 
- - """ - system_filenames = [f for f in os.listdir(system_dir)] - system_models_tuples = [] - - system_filename_pattern = re.compile(system_filename_pattern) - for system_filename in sorted(system_filenames): - match = system_filename_pattern.match(system_filename) - if match: - id = match.groups(0)[0] - model_filenames = Rouge155.__get_model_filenames_for_id( - id, model_dir, model_filename_pattern - ) - system_models_tuples.append((system_filename, sorted(model_filenames))) - if not system_models_tuples: - raise Exception( - "Did not find any files matching the pattern {} " - "in the system summaries directory {}.".format( - system_filename_pattern.pattern, system_dir - ) - ) - - with codecs.open(config_file_path, "w", encoding="utf-8") as f: - f.write('') - for task_id, (system_filename, model_filenames) in enumerate( - system_models_tuples, start=1 - ): - - eval_string = Rouge155.__get_eval_string( - task_id, - system_id, - system_dir, - system_filename, - model_dir, - model_filenames, - ) - f.write(eval_string) - f.write("") - - def write_config(self, config_file_path=None, system_id=None): - """ - Write the ROUGE configuration file, which is basically a list - of system summary files and their matching model summary files. - - This is a non-static version of write_config_file_static(). - - config_file_path: Path of the configuration file. - system_id: Optional system ID string which will - appear in the ROUGE output. - - """ - if not system_id: - system_id = 1 - if (not config_file_path) or (not self._config_dir): - self._config_dir = mkdtemp() - config_filename = "rouge_conf.xml" - else: - config_dir, config_filename = os.path.split(config_file_path) - verify_dir(config_dir, "configuration file") - self._config_file = os.path.join(self._config_dir, config_filename) - Rouge155.write_config_static( - self._system_dir, - self._system_filename_pattern, - self._model_dir, - self._model_filename_pattern, - self._config_file, - system_id, - ) - self.log.info("Written ROUGE configuration to {}".format(self._config_file)) - - def evaluate(self, system_id=1, rouge_args=None): - """ - Run ROUGE to evaluate the system summaries in system_dir against - the model summaries in model_dir. The summaries are assumed to - be in the one-sentence-per-line HTML format ROUGE understands. - - system_id: Optional system ID which will be printed in - ROUGE's output. - - Returns: Rouge output as string. - - """ - self.write_config(system_id=system_id) - options = self.__get_options(rouge_args) - command = [self._bin_path] + options - env = os.environ.copy() - if hasattr(self, "_home_dir") and self._home_dir: - env["ROUGE_EVAL_HOME"] = self._home_dir - self.log.info("Running ROUGE with command {}".format(" ".join(command))) - rouge_output = check_output(command, env=env).decode("UTF-8") - return rouge_output - - def convert_and_evaluate(self, system_id=1, split_sentences=False, rouge_args=None): - """ - Convert plain text summaries to ROUGE format and run ROUGE to - evaluate the system summaries in system_dir against the model - summaries in model_dir. Optionally split texts into sentences - in case they aren't already. - - This is just a convenience method combining - convert_summaries_to_rouge_format() and evaluate(). - - split_sentences: Optional argument specifying if - sentences should be split. - system_id: Optional system ID which will be printed - in ROUGE's output. - - Returns: ROUGE output as string. 
- - """ - if split_sentences: - self.split_sentences() - self.__write_summaries() - rouge_output = self.evaluate(system_id, rouge_args) - return rouge_output - - def output_to_dict(self, output): - """ - Convert the ROUGE output into python dictionary for further - processing. - - """ - # 0 ROUGE-1 Average_R: 0.02632 (95%-conf.int. 0.02632 - 0.02632) - pattern = re.compile( - r"(\d+) (ROUGE-\S+) (Average_\w): (\d.\d+) " - r"\(95%-conf.int. (\d.\d+) - (\d.\d+)\)" - ) - results = {} - for line in output.split("\n"): - match = pattern.match(line) - if match: - ( - sys_id, - rouge_type, - measure, - result, - conf_begin, - conf_end, - ) = match.groups() - measure = { - "Average_R": "recall", - "Average_P": "precision", - "Average_F": "f_score", - }[measure] - rouge_type = rouge_type.lower().replace("-", "_") - key = "{}_{}".format(rouge_type, measure) - results[key] = float(result) - results["{}_cb".format(key)] = float(conf_begin) - results["{}_ce".format(key)] = float(conf_end) - return results - - ################################################################### - # Private methods - - def __set_rouge_dir(self, home_dir=None): - """ - Verfify presence of ROUGE-1.5.5.pl and data folder, and set - those paths. - - """ - if not home_dir: - self._home_dir = self.__get_rouge_home_dir_from_settings() - else: - self._home_dir = home_dir - self.save_home_dir() - self._bin_path = os.path.join(self._home_dir, "ROUGE-1.5.5.pl") - self.data_dir = os.path.join(self._home_dir, "data") - if not os.path.exists(self._bin_path): - raise Exception( - "ROUGE binary not found at {}. Please set the " - "correct path by running pyrouge_set_rouge_path " - "/path/to/rouge/home.".format(self._bin_path) - ) - - def __get_rouge_home_dir_from_settings(self): - config = ConfigParser() - with open(self._settings_file) as f: - if hasattr(config, "read_file"): - config.read_file(f) - else: - # use deprecated python 2.x method - config.readfp(f) - rouge_home_dir = config.get("pyrouge settings", "home_dir") - return rouge_home_dir - - @staticmethod - def __get_eval_string( - task_id, system_id, system_dir, system_filename, model_dir, model_filenames - ): - """ - ROUGE can evaluate several system summaries for a given text - against several model summaries, i.e. there is an m-to-n - relation between system and model summaries. The system - summaries are listed in the tag and the model summaries - in the tag. pyrouge currently only supports one system - summary per text, i.e. it assumes a 1-to-n relation between - system and model summaries. - - """ - peer_elems = '
    <P ID="{id}">{name}</P>
    '.format( - id=system_id, name=system_filename - ) - - model_elems = [ - '{name}'.format(id=chr(65 + i), name=name) - for i, name in enumerate(model_filenames) - ] - - model_elems = "\n\t\t\t".join(model_elems) - eval_string = """ - - {model_root} - {peer_root} - - - - {peer_elems} - - - {model_elems} - - -""".format( - task_id=task_id, - model_root=model_dir, - model_elems=model_elems, - peer_root=system_dir, - peer_elems=peer_elems, - ) - return eval_string - - def __process_summaries(self, process_func): - """ - Helper method that applies process_func to the files in the - system and model folders and saves the resulting files to new - system and model folders. - - """ - temp_dir = mkdtemp() - new_system_dir = os.path.join(temp_dir, "system") - os.mkdir(new_system_dir) - new_model_dir = os.path.join(temp_dir, "model") - os.mkdir(new_model_dir) - self.log.info( - "Processing summaries. Saving system files to {} and " - "model files to {}.".format(new_system_dir, new_model_dir) - ) - process_func(self._system_dir, new_system_dir) - process_func(self._model_dir, new_model_dir) - self._system_dir = new_system_dir - self._model_dir = new_model_dir - - def __write_summaries(self): - self.log.info("Writing summaries.") - self.__process_summaries(self.convert_summaries_to_rouge_format) - - @staticmethod - def __get_model_filenames_for_id(id, model_dir, model_filenames_pattern): - pattern = re.compile(model_filenames_pattern.replace("#ID#", id)) - model_filenames = [f for f in os.listdir(model_dir) if pattern.match(f)] - if not model_filenames: - raise Exception( - "Could not find any model summaries for the system" - " summary with ID {}. Specified model filename pattern was: " - "{}".format(id, model_filenames_pattern) - ) - return model_filenames - - def __get_options(self, rouge_args=None): - """ - Get supplied command line arguments for ROUGE or use default - ones. - - """ - if self.args: - options = self.args.split() - elif rouge_args: - options = rouge_args.split() - else: - options = [ - "-e", - self._data_dir, - "-c", - 95, - "-2", - "-1", - "-U", - "-r", - 1000, - "-n", - 4, - "-w", - 1.2, - "-a", - ] - options = list(map(str, options)) - - options = self.__add_config_option(options) - return options - - def __create_dir_property(self, dir_name, docstring): - """ - Generate getter and setter for a directory property. - - """ - property_name = "{}_dir".format(dir_name) - private_name = "_" + property_name - setattr(self, private_name, None) - - def fget(self): - return getattr(self, private_name) - - def fset(self, path): - verify_dir(path, dir_name) - setattr(self, private_name, path) - - p = property(fget=fget, fset=fset, doc=docstring) - setattr(self.__class__, property_name, p) - - def __set_dir_properties(self): - """ - Automatically generate the properties for directories. - - """ - directories = [ - ("home", "The ROUGE home directory."), - ("data", "The path of the ROUGE 'data' directory."), - ("system", "Path of the directory containing system summaries."), - ("model", "Path of the directory containing model summaries."), - ] - for (dirname, docstring) in directories: - self.__create_dir_property(dirname, docstring) - - def __clean_rouge_args(self, rouge_args): - """ - Remove enclosing quotation marks, if any. 
- - """ - if not rouge_args: - return - quot_mark_pattern = re.compile('"(.+)"') - match = quot_mark_pattern.match(rouge_args) - if match: - cleaned_args = match.group(1) - return cleaned_args - else: - return rouge_args - - def __add_config_option(self, options): - return options + ["-m"] + [self._config_file] - - def __get_config_path(self): - if platform.system() == "Windows": - parent_dir = os.getenv("APPDATA") - config_dir_name = "pyrouge" - elif os.name == "posix": - parent_dir = os.path.expanduser("~") - config_dir_name = ".pyrouge" - else: - parent_dir = os.path.dirname(__file__) - config_dir_name = "" - config_dir = os.path.join(parent_dir, config_dir_name) - if not os.path.exists(config_dir): - os.makedirs(config_dir) - return os.path.join(config_dir, "settings.ini") - - -if __name__ == "__main__": - import argparse - from utils.argparsers import rouge_path_parser - - parser = argparse.ArgumentParser(parents=[rouge_path_parser]) - args = parser.parse_args() - - rouge = Rouge155(args.rouge_home) - rouge.save_home_dir() diff --git a/spaces/allknowingroger/Image-Models-Test166/README.md b/spaces/allknowingroger/Image-Models-Test166/README.md deleted file mode 100644 index f91e4b31ab345f987b425de029c057bfb69d9e1b..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test166/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: More Image Models -emoji: 😻 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: true -duplicated_from: allknowingroger/Image-Models-Test ---- - - \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test74/README.md b/spaces/allknowingroger/Image-Models-Test74/README.md deleted file mode 100644 index 7b8fa7cf47c8831cfa8e64966fbf7e7a32f6685e..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test74/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: More Image Models -emoji: 😻 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: true -duplicated_from: allknowingroger/Image-Models-Test73 ---- - - \ No newline at end of file diff --git a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/Provider/Providers/Gravityengine.py b/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/Provider/Providers/Gravityengine.py deleted file mode 100644 index f0cd09daaaae0adaa349f91139dc60c7ac79c028..0000000000000000000000000000000000000000 --- a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/Provider/Providers/Gravityengine.py +++ /dev/null @@ -1,27 +0,0 @@ -import requests -import os -import json -from ...typing import sha256, Dict, get_type_hints - -url = 'https://gpt4.xunika.uk/' -model = ['gpt-3.5-turbo-16k', 'gpt-3.5-turbo-0613'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs): - headers = { - 'Content-Type': 'application/json', - } - data = { - 'model': model, - 'temperature': 0.7, - 'presence_penalty': 0, - 'messages': messages, - } - response = requests.post(url + '/api/openai/v1/chat/completions', - json=data, stream=True) - - yield response.json()['choices'][0]['message']['content'] - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git 
a/spaces/antonovmaxim/text-generation-webui-space/modules/chat.py b/spaces/antonovmaxim/text-generation-webui-space/modules/chat.py deleted file mode 100644 index 3055a97a65bd4bd2137f6ce1ec182e8d63637a13..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/modules/chat.py +++ /dev/null @@ -1,592 +0,0 @@ -import ast -import base64 -import copy -import io -import json -import logging -import re -from datetime import datetime -from pathlib import Path - -import yaml -from PIL import Image - -import modules.shared as shared -from modules.extensions import apply_extensions -from modules.html_generator import chat_html_wrapper, make_thumbnail -from modules.text_generation import (generate_reply, get_encoded_length, - get_max_prompt_length) -from modules.utils import replace_all - - -def get_turn_substrings(state, instruct=False): - if instruct: - if 'turn_template' not in state or state['turn_template'] == '': - template = '<|user|>\n<|user-message|>\n<|bot|>\n<|bot-message|>\n' - else: - template = state['turn_template'].replace(r'\n', '\n') - else: - template = '<|user|>: <|user-message|>\n<|bot|>: <|bot-message|>\n' - - replacements = { - '<|user|>': state['name1_instruct' if instruct else 'name1'].strip(), - '<|bot|>': state['name2_instruct' if instruct else 'name2'].strip(), - } - - output = { - 'user_turn': template.split('<|bot|>')[0], - 'bot_turn': '<|bot|>' + template.split('<|bot|>')[1], - 'user_turn_stripped': template.split('<|bot|>')[0].split('<|user-message|>')[0], - 'bot_turn_stripped': '<|bot|>' + template.split('<|bot|>')[1].split('<|bot-message|>')[0], - } - - for k in output: - output[k] = replace_all(output[k], replacements) - - return output - - -def generate_chat_prompt(user_input, state, **kwargs): - impersonate = kwargs.get('impersonate', False) - _continue = kwargs.get('_continue', False) - also_return_rows = kwargs.get('also_return_rows', False) - history = state.get('history', shared.history['internal']) - is_instruct = state['mode'] == 'instruct' - - # Finding the maximum prompt size - chat_prompt_size = state['chat_prompt_size'] - if shared.soft_prompt: - chat_prompt_size -= shared.soft_prompt_tensor.shape[1] - - max_length = min(get_max_prompt_length(state), chat_prompt_size) - - all_substrings = { - 'chat': get_turn_substrings(state, instruct=False), - 'instruct': get_turn_substrings(state, instruct=True) - } - substrings = all_substrings['instruct' if is_instruct else 'chat'] - - # Creating the template for "chat-instruct" mode - if state['mode'] == 'chat-instruct': - wrapper = '' - command = state['chat-instruct_command'].replace('<|character|>', state['name2'] if not impersonate else state['name1']) - wrapper += state['context_instruct'] - wrapper += all_substrings['instruct']['user_turn'].replace('<|user-message|>', command) - wrapper += all_substrings['instruct']['bot_turn_stripped'] - if impersonate: - wrapper += substrings['user_turn_stripped'].rstrip(' ') - else: - wrapper += apply_extensions("bot_prefix", substrings['bot_turn_stripped'].rstrip(' ')) - else: - wrapper = '<|prompt|>' - - # Building the prompt - min_rows = 3 - i = len(history) - 1 - rows = [state['context_instruct'] if is_instruct else f"{state['context'].strip()}\n"] - while i >= 0 and get_encoded_length(wrapper.replace('<|prompt|>', ''.join(rows))) < max_length: - if _continue and i == len(history) - 1: - rows.insert(1, substrings['bot_turn_stripped'] + history[i][1].strip()) - else: - rows.insert(1, 
substrings['bot_turn'].replace('<|bot-message|>', history[i][1].strip())) - - string = history[i][0] - if string not in ['', '<|BEGIN-VISIBLE-CHAT|>']: - rows.insert(1, replace_all(substrings['user_turn'], {'<|user-message|>': string.strip(), '<|round|>': str(i)})) - - i -= 1 - - if impersonate: - if state['mode'] == 'chat-instruct': - min_rows = 1 - else: - min_rows = 2 - rows.append(substrings['user_turn_stripped'].rstrip(' ')) - elif not _continue: - # Adding the user message - if len(user_input) > 0: - rows.append(replace_all(substrings['user_turn'], {'<|user-message|>': user_input.strip(), '<|round|>': str(len(history))})) - - # Adding the Character prefix - if state['mode'] != 'chat-instruct': - rows.append(apply_extensions("bot_prefix", substrings['bot_turn_stripped'].rstrip(' '))) - - while len(rows) > min_rows and get_encoded_length(wrapper.replace('<|prompt|>', ''.join(rows))) >= max_length: - rows.pop(1) - - prompt = wrapper.replace('<|prompt|>', ''.join(rows)) - if also_return_rows: - return prompt, rows - else: - return prompt - - -def get_stopping_strings(state): - stopping_strings = [] - if state['mode'] in ['instruct', 'chat-instruct']: - stopping_strings += [ - state['turn_template'].split('<|user-message|>')[1].split('<|bot|>')[0] + '<|bot|>', - state['turn_template'].split('<|bot-message|>')[1] + '<|user|>' - ] - - replacements = { - '<|user|>': state['name1_instruct'], - '<|bot|>': state['name2_instruct'] - } - - for i in range(len(stopping_strings)): - stopping_strings[i] = replace_all(stopping_strings[i], replacements).rstrip(' ').replace(r'\n', '\n') - - if state['mode'] in ['chat', 'chat-instruct']: - stopping_strings += [ - f"\n{state['name1']}:", - f"\n{state['name2']}:" - ] - - stopping_strings += ast.literal_eval(f"[{state['custom_stopping_strings']}]") - return stopping_strings - - -def extract_message_from_reply(reply, state): - next_character_found = False - stopping_strings = get_stopping_strings(state) - - if state['stop_at_newline']: - lines = reply.split('\n') - reply = lines[0].strip() - if len(lines) > 1: - next_character_found = True - else: - for string in stopping_strings: - idx = reply.find(string) - if idx != -1: - reply = reply[:idx] - next_character_found = True - - # If something like "\nYo" is generated just before "\nYou:" - # is completed, trim it - if not next_character_found: - for string in stopping_strings: - for j in range(len(string) - 1, 0, -1): - if reply[-j:] == string[:j]: - reply = reply[:-j] - break - else: - continue - - break - - return reply, next_character_found - - -def chatbot_wrapper(text, state, regenerate=False, _continue=False): - if shared.model_name == 'None' or shared.model is None: - logging.error("No model is loaded! 
Select one in the Model tab.") - yield shared.history['visible'] - return - - # Defining some variables - cumulative_reply = '' - just_started = True - visible_text = None - eos_token = '\n' if state['stop_at_newline'] else None - stopping_strings = get_stopping_strings(state) - - # Preparing the input - if not any((regenerate, _continue)): - text, visible_text = apply_extensions('input_hijack', text, visible_text) - if visible_text is None: - visible_text = text - - text = apply_extensions('input', text) - # *Is typing...* - yield shared.history['visible'] + [[visible_text, shared.processing_message]] - else: - text, visible_text = shared.history['internal'][-1][0], shared.history['visible'][-1][0] - if regenerate: - shared.history['visible'].pop() - shared.history['internal'].pop() - # *Is typing...* - yield shared.history['visible'] + [[visible_text, shared.processing_message]] - elif _continue: - last_reply = [shared.history['internal'][-1][1], shared.history['visible'][-1][1]] - yield shared.history['visible'][:-1] + [[visible_text, last_reply[1] + '...']] - - # Generating the prompt - kwargs = {'_continue': _continue} - prompt = apply_extensions('custom_generate_chat_prompt', text, state, **kwargs) - if prompt is None: - prompt = generate_chat_prompt(text, state, **kwargs) - - # Generate - for i in range(state['chat_generation_attempts']): - reply = None - for j, reply in enumerate(generate_reply(prompt + cumulative_reply, state, eos_token=eos_token, stopping_strings=stopping_strings, is_chat=True)): - reply = cumulative_reply + reply - - # Extracting the reply - reply, next_character_found = extract_message_from_reply(reply, state) - visible_reply = re.sub("(||{{user}})", state['name1'], reply) - visible_reply = apply_extensions("output", visible_reply) - - # We need this global variable to handle the Stop event, - # otherwise gradio gets confused - if shared.stop_everything: - return shared.history['visible'] - - if just_started: - just_started = False - if not _continue: - shared.history['internal'].append(['', '']) - shared.history['visible'].append(['', '']) - - if _continue: - shared.history['internal'][-1] = [text, last_reply[0] + reply] - shared.history['visible'][-1] = [visible_text, last_reply[1] + visible_reply] - yield shared.history['visible'] - elif not (j == 0 and visible_reply.strip() == ''): - shared.history['internal'][-1] = [text, reply] - shared.history['visible'][-1] = [visible_text, visible_reply] - yield shared.history['visible'] - - if next_character_found: - break - - if reply in [None, '']: - break - else: - cumulative_reply = reply - - yield shared.history['visible'] - - -def impersonate_wrapper(text, state): - if shared.model_name == 'None' or shared.model is None: - logging.error("No model is loaded! Select one in the Model tab.") - yield '' - return - - # Defining some variables - cumulative_reply = '' - eos_token = '\n' if state['stop_at_newline'] else None - prompt = generate_chat_prompt('', state, impersonate=True) - stopping_strings = get_stopping_strings(state) - - yield text + '...' 
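# Start generation from the user's partial text; each attempt below extends it, then the reply is re-extracted and streamed back.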
- cumulative_reply = text - for i in range(state['chat_generation_attempts']): - reply = None - for reply in generate_reply(prompt + cumulative_reply, state, eos_token=eos_token, stopping_strings=stopping_strings, is_chat=True): - reply = cumulative_reply + reply - reply, next_character_found = extract_message_from_reply(reply, state) - yield reply - if next_character_found: - break - - if reply in [None, '']: - break - else: - cumulative_reply = reply - - yield cumulative_reply - - -def generate_chat_reply(text, state, regenerate=False, _continue=False): - if regenerate or _continue: - text = '' - if (len(shared.history['visible']) == 1 and not shared.history['visible'][0][0]) or len(shared.history['internal']) == 0: - yield shared.history['visible'] - return - - for history in chatbot_wrapper(text, state, regenerate=regenerate, _continue=_continue): - yield history - - -# Same as above but returns HTML -def generate_chat_reply_wrapper(text, state, regenerate=False, _continue=False): - for history in generate_chat_reply(text, state, regenerate, _continue): - yield chat_html_wrapper(history, state['name1'], state['name2'], state['mode'], state['chat_style']) - - -def remove_last_message(): - if len(shared.history['visible']) > 0 and shared.history['internal'][-1][0] != '<|BEGIN-VISIBLE-CHAT|>': - last = shared.history['visible'].pop() - shared.history['internal'].pop() - else: - last = ['', ''] - - return last[0] - - -def send_last_reply_to_input(): - if len(shared.history['internal']) > 0: - return shared.history['internal'][-1][1] - else: - return '' - - -def replace_last_reply(text): - if len(shared.history['visible']) > 0: - shared.history['visible'][-1][1] = text - shared.history['internal'][-1][1] = apply_extensions("input", text) - - -def send_dummy_message(text): - shared.history['visible'].append([text, '']) - shared.history['internal'].append([apply_extensions("input", text), '']) - - -def send_dummy_reply(text): - if len(shared.history['visible']) > 0 and not shared.history['visible'][-1][1] == '': - shared.history['visible'].append(['', '']) - shared.history['internal'].append(['', '']) - - shared.history['visible'][-1][1] = text - shared.history['internal'][-1][1] = apply_extensions("input", text) - - -def clear_chat_log(greeting, mode): - shared.history['visible'] = [] - shared.history['internal'] = [] - - if mode != 'instruct': - if greeting != '': - shared.history['internal'] += [['<|BEGIN-VISIBLE-CHAT|>', greeting]] - shared.history['visible'] += [['', apply_extensions("output", greeting)]] - - save_history(mode) - - -def redraw_html(name1, name2, mode, style, reset_cache=False): - return chat_html_wrapper(shared.history['visible'], name1, name2, mode, style, reset_cache=reset_cache) - - -def tokenize_dialogue(dialogue, name1, name2): - history = [] - messages = [] - dialogue = re.sub('', '', dialogue) - dialogue = re.sub('', '', dialogue) - dialogue = re.sub('(\n|^)[Aa]non:', '\\1You:', dialogue) - dialogue = re.sub('(\n|^)\[CHARACTER\]:', f'\\g<1>{name2}:', dialogue) - idx = [m.start() for m in re.finditer(f"(^|\n)({re.escape(name1)}|{re.escape(name2)}):", dialogue)] - if len(idx) == 0: - return history - - for i in range(len(idx) - 1): - messages.append(dialogue[idx[i]:idx[i + 1]].strip()) - - messages.append(dialogue[idx[-1]:].strip()) - entry = ['', ''] - for i in messages: - if i.startswith(f'{name1}:'): - entry[0] = i[len(f'{name1}:'):].strip() - elif i.startswith(f'{name2}:'): - entry[1] = i[len(f'{name2}:'):].strip() - if not (len(entry[0]) == 0 and len(entry[1]) 
== 0): - history.append(entry) - - entry = ['', ''] - - print("\033[1;32;1m\nDialogue tokenized to:\033[0;37;0m\n", end='') - for row in history: - for column in row: - print("\n") - for line in column.strip().split('\n'): - print("| " + line + "\n") - - print("|\n") - print("------------------------------") - - return history - - -def save_history(mode, timestamp=False): - # Instruct mode histories should not be saved as if - # Alpaca or Vicuna were characters - if mode == 'instruct': - if not timestamp: - return - - fname = f"Instruct_{datetime.now().strftime('%Y%m%d-%H%M%S')}.json" - else: - if timestamp: - fname = f"{shared.character}_{datetime.now().strftime('%Y%m%d-%H%M%S')}.json" - else: - fname = f"{shared.character}_persistent.json" - - if not Path('logs').exists(): - Path('logs').mkdir() - - with open(Path(f'logs/{fname}'), 'w', encoding='utf-8') as f: - f.write(json.dumps({'data': shared.history['internal'], 'data_visible': shared.history['visible']}, indent=2)) - - return Path(f'logs/{fname}') - - -def load_history(file, name1, name2): - file = file.decode('utf-8') - try: - j = json.loads(file) - if 'data' in j: - shared.history['internal'] = j['data'] - if 'data_visible' in j: - shared.history['visible'] = j['data_visible'] - else: - shared.history['visible'] = copy.deepcopy(shared.history['internal']) - except: - shared.history['internal'] = tokenize_dialogue(file, name1, name2) - shared.history['visible'] = copy.deepcopy(shared.history['internal']) - - -def replace_character_names(text, name1, name2): - text = text.replace('{{user}}', name1).replace('{{char}}', name2) - return text.replace('', name1).replace('', name2) - - -def build_pygmalion_style_context(data): - context = "" - if 'char_persona' in data and data['char_persona'] != '': - context += f"{data['char_name']}'s Persona: {data['char_persona']}\n" - - if 'world_scenario' in data and data['world_scenario'] != '': - context += f"Scenario: {data['world_scenario']}\n" - - context = f"{context.strip()}\n\n" - return context - - -def generate_pfp_cache(character): - cache_folder = Path("cache") - if not cache_folder.exists(): - cache_folder.mkdir() - - for path in [Path(f"characters/{character}.{extension}") for extension in ['png', 'jpg', 'jpeg']]: - if path.exists(): - img = make_thumbnail(Image.open(path)) - img.save(Path('cache/pfp_character.png'), format='PNG') - return img - - return None - - -def load_character(character, name1, name2, instruct=False): - shared.character = character - context = greeting = turn_template = "" - greeting_field = 'greeting' - picture = None - - # Deleting the profile picture cache, if any - if Path("cache/pfp_character.png").exists(): - Path("cache/pfp_character.png").unlink() - - if character != 'None': - folder = 'characters' if not instruct else 'characters/instruction-following' - picture = generate_pfp_cache(character) - for extension in ["yml", "yaml", "json"]: - filepath = Path(f'{folder}/{character}.{extension}') - if filepath.exists(): - break - - file_contents = open(filepath, 'r', encoding='utf-8').read() - data = json.loads(file_contents) if extension == "json" else yaml.safe_load(file_contents) - - # Finding the bot's name - for k in ['name', 'bot', '<|bot|>', 'char_name']: - if k in data and data[k] != '': - name2 = data[k] - break - - # Find the user name (if any) - for k in ['your_name', 'user', '<|user|>']: - if k in data and data[k] != '': - name1 = data[k] - break - - for field in ['context', 'greeting', 'example_dialogue', 'char_persona', 'char_greeting', 
'world_scenario']: - if field in data: - data[field] = replace_character_names(data[field], name1, name2) - - if 'context' in data: - context = data['context'] - if not instruct: - context = context.strip() + '\n' - elif "char_persona" in data: - context = build_pygmalion_style_context(data) - greeting_field = 'char_greeting' - - if 'example_dialogue' in data: - context += f"{data['example_dialogue'].strip()}\n" - - if greeting_field in data: - greeting = data[greeting_field] - - if 'turn_template' in data: - turn_template = data['turn_template'] - - else: - context = shared.settings['context'] - name2 = shared.settings['name2'] - greeting = shared.settings['greeting'] - turn_template = shared.settings['turn_template'] - - if not instruct: - shared.history['internal'] = [] - shared.history['visible'] = [] - if Path(f'logs/{shared.character}_persistent.json').exists(): - load_history(open(Path(f'logs/{shared.character}_persistent.json'), 'rb').read(), name1, name2) - else: - # Insert greeting if it exists - if greeting != "": - shared.history['internal'] += [['<|BEGIN-VISIBLE-CHAT|>', greeting]] - shared.history['visible'] += [['', apply_extensions("output", greeting)]] - - # Create .json log files since they don't already exist - save_history('instruct' if instruct else 'chat') - - return name1, name2, picture, greeting, context, repr(turn_template)[1:-1] - - -def upload_character(json_file, img, tavern=False): - json_file = json_file if type(json_file) == str else json_file.decode('utf-8') - data = json.loads(json_file) - outfile_name = data["char_name"] - i = 1 - while Path(f'characters/{outfile_name}.json').exists(): - outfile_name = f'{data["char_name"]}_{i:03d}' - i += 1 - - if tavern: - outfile_name = f'TavernAI-{outfile_name}' - - with open(Path(f'characters/{outfile_name}.json'), 'w', encoding='utf-8') as f: - f.write(json_file) - - if img is not None: - img = Image.open(io.BytesIO(img)) - img.save(Path(f'characters/{outfile_name}.png')) - - logging.info(f'New character saved to "characters/{outfile_name}.json".') - return outfile_name - - -def upload_tavern_character(img, name1, name2): - _img = Image.open(io.BytesIO(img)) - _img.getexif() - decoded_string = base64.b64decode(_img.info['chara']) - _json = json.loads(decoded_string) - _json = {"char_name": _json['name'], "char_persona": _json['description'], "char_greeting": _json["first_mes"], "example_dialogue": _json['mes_example'], "world_scenario": _json['scenario']} - return upload_character(json.dumps(_json), img, tavern=True) - - -def upload_your_profile_picture(img): - cache_folder = Path("cache") - if not cache_folder.exists(): - cache_folder.mkdir() - - if img is None: - if Path("cache/pfp_me.png").exists(): - Path("cache/pfp_me.png").unlink() - else: - img = make_thumbnail(img) - img.save(Path('cache/pfp_me.png')) - logging.info('Profile picture saved to "cache/pfp_me.png"') diff --git a/spaces/aodianyun/panoptic-segment-anything/segment_anything/segment_anything/modeling/__init__.py b/spaces/aodianyun/panoptic-segment-anything/segment_anything/segment_anything/modeling/__init__.py deleted file mode 100644 index 38e906243d898d7fc071c0fe218338c5cace3ea1..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/panoptic-segment-anything/segment_anything/segment_anything/modeling/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. 
- -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from .sam import Sam -from .image_encoder import ImageEncoderViT -from .mask_decoder import MaskDecoder -from .prompt_encoder import PromptEncoder -from .transformer import TwoWayTransformer diff --git a/spaces/aodianyun/stable-diffusion-webui/extensions-builtin/ScuNET/scripts/scunet_model.py b/spaces/aodianyun/stable-diffusion-webui/extensions-builtin/ScuNET/scripts/scunet_model.py deleted file mode 100644 index e0fbf3a33747f447d396dd0d564e92c904cfabac..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/extensions-builtin/ScuNET/scripts/scunet_model.py +++ /dev/null @@ -1,87 +0,0 @@ -import os.path -import sys -import traceback - -import PIL.Image -import numpy as np -import torch -from basicsr.utils.download_util import load_file_from_url - -import modules.upscaler -from modules import devices, modelloader -from scunet_model_arch import SCUNet as net - - -class UpscalerScuNET(modules.upscaler.Upscaler): - def __init__(self, dirname): - self.name = "ScuNET" - self.model_name = "ScuNET GAN" - self.model_name2 = "ScuNET PSNR" - self.model_url = "https://github.com/cszn/KAIR/releases/download/v1.0/scunet_color_real_gan.pth" - self.model_url2 = "https://github.com/cszn/KAIR/releases/download/v1.0/scunet_color_real_psnr.pth" - self.user_path = dirname - super().__init__() - model_paths = self.find_models(ext_filter=[".pth"]) - scalers = [] - add_model2 = True - for file in model_paths: - if "http" in file: - name = self.model_name - else: - name = modelloader.friendly_name(file) - if name == self.model_name2 or file == self.model_url2: - add_model2 = False - try: - scaler_data = modules.upscaler.UpscalerData(name, file, self, 4) - scalers.append(scaler_data) - except Exception: - print(f"Error loading ScuNET model: {file}", file=sys.stderr) - print(traceback.format_exc(), file=sys.stderr) - if add_model2: - scaler_data2 = modules.upscaler.UpscalerData(self.model_name2, self.model_url2, self) - scalers.append(scaler_data2) - self.scalers = scalers - - def do_upscale(self, img: PIL.Image, selected_file): - torch.cuda.empty_cache() - - model = self.load_model(selected_file) - if model is None: - return img - - device = devices.get_device_for('scunet') - img = np.array(img) - img = img[:, :, ::-1] - img = np.moveaxis(img, 2, 0) / 255 - img = torch.from_numpy(img).float() - img = img.unsqueeze(0).to(device) - - with torch.no_grad(): - output = model(img) - output = output.squeeze().float().cpu().clamp_(0, 1).numpy() - output = 255. 
* np.moveaxis(output, 0, 2) - output = output.astype(np.uint8) - output = output[:, :, ::-1] - torch.cuda.empty_cache() - return PIL.Image.fromarray(output, 'RGB') - - def load_model(self, path: str): - device = devices.get_device_for('scunet') - if "http" in path: - filename = load_file_from_url(url=self.model_url, model_dir=self.model_path, file_name="%s.pth" % self.name, - progress=True) - else: - filename = path - if not os.path.exists(os.path.join(self.model_path, filename)) or filename is None: - print(f"ScuNET: Unable to load model from {filename}", file=sys.stderr) - return None - - model = net(in_nc=3, config=[4, 4, 4, 4, 4, 4, 4], dim=64) - model.load_state_dict(torch.load(filename), strict=True) - model.eval() - for k, v in model.named_parameters(): - v.requires_grad = False - model = model.to(device) - - return model - diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/MspImagePlugin.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/MspImagePlugin.py deleted file mode 100644 index c4d7ddbb4f84ada85733a991aebfcc66ca39db71..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/MspImagePlugin.py +++ /dev/null @@ -1,194 +0,0 @@ -# -# The Python Imaging Library. -# -# MSP file handling -# -# This is the format used by the Paint program in Windows 1 and 2. -# -# History: -# 95-09-05 fl Created -# 97-01-03 fl Read/write MSP images -# 17-02-21 es Fixed RLE interpretation -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1995-97. -# Copyright (c) Eric Soroos 2017. -# -# See the README file for information on usage and redistribution. -# -# More info on this format: https://archive.org/details/gg243631 -# Page 313: -# Figure 205. Windows Paint Version 1: "DanM" Format -# Figure 206. Windows Paint Version 2: "LinS" Format. Used in Windows V2.03 -# -# See also: https://www.fileformat.info/format/mspaint/egff.htm - -import io -import struct - -from . import Image, ImageFile -from ._binary import i16le as i16 -from ._binary import o16le as o16 - -# -# read MSP files - - -def _accept(prefix): - return prefix[:4] in [b"DanM", b"LinS"] - - -## -# Image plugin for Windows MSP images. This plugin supports both -# uncompressed (Windows 1.0). - - -class MspImageFile(ImageFile.ImageFile): - - format = "MSP" - format_description = "Windows Paint" - - def _open(self): - - # Header - s = self.fp.read(32) - if not _accept(s): - raise SyntaxError("not an MSP file") - - # Header checksum - checksum = 0 - for i in range(0, 32, 2): - checksum = checksum ^ i16(s, i) - if checksum != 0: - raise SyntaxError("bad MSP checksum") - - self.mode = "1" - self._size = i16(s, 4), i16(s, 6) - - if s[:4] == b"DanM": - self.tile = [("raw", (0, 0) + self.size, 32, ("1", 0, 1))] - else: - self.tile = [("MSP", (0, 0) + self.size, 32, None)] - - -class MspDecoder(ImageFile.PyDecoder): - # The algo for the MSP decoder is from - # https://www.fileformat.info/format/mspaint/egff.htm - # cc-by-attribution -- That page references is taken from the - # Encyclopedia of Graphics File Formats and is licensed by - # O'Reilly under the Creative Common/Attribution license - # - # For RLE encoded files, the 32byte header is followed by a scan - # line map, encoded as one 16bit word of encoded byte length per - # line. - # - # NOTE: the encoded length of the line can be 0. This was not - # handled in the previous version of this encoder, and there's no - # mention of how to handle it in the documentation. 
From the few - # examples I've seen, I've assumed that it is a fill of the - # background color, in this case, white. - # - # - # Pseudocode of the decoder: - # Read a BYTE value as the RunType - # If the RunType value is zero - # Read next byte as the RunCount - # Read the next byte as the RunValue - # Write the RunValue byte RunCount times - # If the RunType value is non-zero - # Use this value as the RunCount - # Read and write the next RunCount bytes literally - # - # e.g.: - # 0x00 03 ff 05 00 01 02 03 04 - # would yield the bytes: - # 0xff ff ff 00 01 02 03 04 - # - # which are then interpreted as a bit packed mode '1' image - - _pulls_fd = True - - def decode(self, buffer): - - img = io.BytesIO() - blank_line = bytearray((0xFF,) * ((self.state.xsize + 7) // 8)) - try: - self.fd.seek(32) - rowmap = struct.unpack_from( - f"<{self.state.ysize}H", self.fd.read(self.state.ysize * 2) - ) - except struct.error as e: - raise OSError("Truncated MSP file in row map") from e - - for x, rowlen in enumerate(rowmap): - try: - if rowlen == 0: - img.write(blank_line) - continue - row = self.fd.read(rowlen) - if len(row) != rowlen: - raise OSError( - "Truncated MSP file, expected %d bytes on row %s", (rowlen, x) - ) - idx = 0 - while idx < rowlen: - runtype = row[idx] - idx += 1 - if runtype == 0: - (runcount, runval) = struct.unpack_from("Bc", row, idx) - img.write(runval * runcount) - idx += 2 - else: - runcount = runtype - img.write(row[idx : idx + runcount]) - idx += runcount - - except struct.error as e: - raise OSError(f"Corrupted MSP file in row {x}") from e - - self.set_as_raw(img.getvalue(), ("1", 0, 1)) - - return -1, 0 - - -Image.register_decoder("MSP", MspDecoder) - - -# -# write MSP files (uncompressed only) - - -def _save(im, fp, filename): - - if im.mode != "1": - raise OSError(f"cannot write mode {im.mode} as MSP") - - # create MSP header - header = [0] * 16 - - header[0], header[1] = i16(b"Da"), i16(b"nM") # version 1 - header[2], header[3] = im.size - header[4], header[5] = 1, 1 - header[6], header[7] = 1, 1 - header[8], header[9] = im.size - - checksum = 0 - for h in header: - checksum = checksum ^ h - header[12] = checksum # FIXME: is this the right field? - - # header - for h in header: - fp.write(o16(h)) - - # image body - ImageFile._save(im, fp, [("raw", (0, 0) + im.size, 32, ("1", 0, 1))]) - - -# -# registry - -Image.register_open(MspImageFile.format, MspImageFile, _accept) -Image.register_save(MspImageFile.format, _save) - -Image.register_extension(MspImageFile.format, ".msp") diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/benchmark/dummy_masked_lm.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/benchmark/dummy_masked_lm.py deleted file mode 100644 index 12b9c5d0f55993bf8750564882a351fc3f8055f0..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/benchmark/dummy_masked_lm.py +++ /dev/null @@ -1,94 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
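# Benchmark-only task: it constructs a single synthetic masked-LM batch up front and replays it for every step, so data loading never limits the measured training throughput.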
- -import logging -from dataclasses import dataclass, field -from typing import Optional - -import torch -from omegaconf import II - -from .dummy_dataset import DummyDataset -from fairseq.data import Dictionary -from fairseq.dataclass import FairseqDataclass -from fairseq.tasks import FairseqTask, register_task - -logger = logging.getLogger(__name__) - - -@dataclass -class DummyMaskedLMConfig(FairseqDataclass): - dict_size: int = 49996 - dataset_size: int = 100000 - tokens_per_sample: int = field( - default=512, - metadata={ - "help": "max number of total tokens over all" - " segments per sample for BERT dataset" - }, - ) - batch_size: Optional[int] = II("dataset.batch_size") - max_tokens: Optional[int] = II("dataset.max_tokens") - max_target_positions: int = II("task.tokens_per_sample") - - -@register_task("dummy_masked_lm", dataclass=DummyMaskedLMConfig) -class DummyMaskedLMTask(FairseqTask): - def __init__(self, cfg: DummyMaskedLMConfig): - super().__init__(cfg) - - self.dictionary = Dictionary() - for i in range(cfg.dict_size): - self.dictionary.add_symbol("word{}".format(i)) - logger.info("dictionary: {} types".format(len(self.dictionary))) - # add mask token - self.mask_idx = self.dictionary.add_symbol("") - self.dictionary.pad_to_multiple_(8) # often faster if divisible by 8 - - mask_idx = 0 - pad_idx = 1 - seq = torch.arange(cfg.tokens_per_sample) + pad_idx + 1 - mask = torch.arange(2, cfg.tokens_per_sample, 7) # ~15% - src = seq.clone() - src[mask] = mask_idx - tgt = torch.full_like(seq, pad_idx) - tgt[mask] = seq[mask] - - self.dummy_src = src - self.dummy_tgt = tgt - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - """Load a given dataset split. - Args: - split (str): name of the split (e.g., train, valid, test) - """ - if self.cfg.batch_size is not None: - bsz = self.cfg.batch_size - else: - bsz = max(1, self.cfg.max_tokens // self.cfg.tokens_per_sample) - self.datasets[split] = DummyDataset( - { - "id": 1, - "net_input": { - "src_tokens": torch.stack([self.dummy_src for _ in range(bsz)]), - "src_lengths": torch.full( - (bsz,), self.cfg.tokens_per_sample, dtype=torch.long - ), - }, - "target": torch.stack([self.dummy_tgt for _ in range(bsz)]), - "nsentences": bsz, - "ntokens": bsz * self.cfg.tokens_per_sample, - }, - num_items=self.cfg.dataset_size, - item_size=self.cfg.tokens_per_sample, - ) - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary diff --git a/spaces/asafAdge/Detic/tools/get_imagenet_21k_full_tar_json.py b/spaces/asafAdge/Detic/tools/get_imagenet_21k_full_tar_json.py deleted file mode 100644 index e7127440030297812a9f4df38cfd6b4cba340c39..0000000000000000000000000000000000000000 --- a/spaces/asafAdge/Detic/tools/get_imagenet_21k_full_tar_json.py +++ /dev/null @@ -1,81 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
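# Builds a COCO-style image_info JSON for ImageNet-21k stored as tar shards: each tar file's WordNet synset becomes a category entry, and every readable image contributes its size and tar index.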
-import argparse -import json -import numpy as np -import pickle -import io -import gzip -import sys -import time -from nltk.corpus import wordnet -from tqdm import tqdm -import operator -import torch - -sys.path.insert(0, 'third_party/CenterNet2/projects/CenterNet2/') -sys.path.insert(0, 'third_party/Deformable-DETR') -from detic.data.tar_dataset import DiskTarDataset, _TarDataset - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument("--imagenet_dir", default='datasets/imagenet/ImageNet-21k/') - parser.add_argument("--tarfile_path", default='datasets/imagenet/metadata-22k/tar_files.npy') - parser.add_argument("--tar_index_dir", default='datasets/imagenet/metadata-22k/tarindex_npy') - parser.add_argument("--out_path", default='datasets/imagenet/annotations/imagenet-22k_image_info.json') - parser.add_argument("--workers", default=16, type=int) - args = parser.parse_args() - - - start_time = time.time() - print('Building dataset') - dataset = DiskTarDataset(args.tarfile_path, args.tar_index_dir) - end_time = time.time() - print(f"Took {end_time-start_time} seconds to make the dataset.") - print(f"Have {len(dataset)} samples.") - print('dataset', dataset) - - - tar_files = np.load(args.tarfile_path) - categories = [] - for i, tar_file in enumerate(tar_files): - wnid = tar_file[-13:-4] - synset = wordnet.synset_from_pos_and_offset('n', int(wnid[1:])) - synonyms = [x.name() for x in synset.lemmas()] - category = { - 'id': i + 1, - 'synset': synset.name(), - 'name': synonyms[0], - 'def': synset.definition(), - 'synonyms': synonyms, - } - categories.append(category) - print('categories', len(categories)) - - data_loader = torch.utils.data.DataLoader( - dataset, batch_size=1, shuffle=False, - num_workers=args.workers, - collate_fn=operator.itemgetter(0), - ) - images = [] - for img, label, index in tqdm(data_loader): - if label == -1: - continue - image = { - 'id': int(index) + 1, - 'pos_category_ids': [int(label) + 1], - 'height': int(img.height), - 'width': int(img.width), - 'tar_index': int(index), - } - images.append(image) - - data = {'categories': categories, 'images': images, 'annotations': []} - try: - for k, v in data.items(): - print(k, len(v)) - print('Saving to ', args.out_path) - json.dump(data, open(args.out_path, 'w')) - except: - pass - import pdb; pdb.set_trace() - diff --git a/spaces/ashhhh23/lordofthemysteries/Dockerfile b/spaces/ashhhh23/lordofthemysteries/Dockerfile deleted file mode 100644 index eef259fa372a804549fb0af0913718a13344da34..0000000000000000000000000000000000000000 --- a/spaces/ashhhh23/lordofthemysteries/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] diff --git a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/ChunShan Feng.html b/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/ChunShan Feng.html deleted file mode 100644 index 975b1f7a1de1d68314bce22df14646ba176e277b..0000000000000000000000000000000000000000 --- a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/ChunShan Feng.html +++ /dev/null @@ -1,132 +0,0 @@ - - - - ChunShan Feng - - - - -

    ChunShan Feng

    Mentee to Mentor 

    1- What's your motivation to become a mentor with SharpestMinds?
    - Landing a job in a new country is not easy. Immigrated from China and can share knowledge and experience with newcomers on how to make the transition; also want to help people land jobs. 

    2- What's your career journey been like as a data scientist?
    - Have 15 years of work experience in the manufacturing industry and have done DA & project development. Also, teach colleagues in China. Have an interest in analyzing data. 
    - Currently working in a software consulting company. Doing analysis and building dashboards on Power BI. 

    3- According to you, what's the biggest challenge faced by newcomers when entering a data role? How can you help them with this?
    - The biggest challenge is having less experience working on projects and less on-the-job training. They need project experience and to add it to their resumes. 
    - Teach them what problems need to be solved and how we solve real-world problems. 

    4- How was the experience as an SM Mentee?
    - Mentor helped with writing a resume, preparing for an interview, and improving the project. 

    5- Do you have any questions regarding the SM platform?
    - None yet. 
    - - - \ No newline at end of file diff --git a/spaces/atticus/image-text-retrival-huster/misc/dataset.py b/spaces/atticus/image-text-retrival-huster/misc/dataset.py deleted file mode 100644 index 8b375272dc0a0ab39a568777b1f24aa80a96667f..0000000000000000000000000000000000000000 --- a/spaces/atticus/image-text-retrival-huster/misc/dataset.py +++ /dev/null @@ -1,278 +0,0 @@ -""" -****************** COPYRIGHT AND CONFIDENTIALITY INFORMATION ****************** -Copyright (c) 2018 [Thomson Licensing] -All Rights Reserved -This program contains proprietary information which is a trade secret/business \ -secret of [Thomson Licensing] and is protected, even if unpublished, under \ -applicable Copyright laws (including French droit d'auteur) and/or may be \ -subject to one or more patent(s). -Recipient is to retain this program in confidence and is not permitted to use \ -or make copies thereof other than as permitted in a written agreement with \ -[Thomson Licensing] unless otherwise expressly allowed by applicable laws or \ -by [Thomson Licensing] under express agreement. -Thomson Licensing is a company of the group TECHNICOLOR -******************************************************************************* -This scripts permits one to reproduce training and experiments of: - Engilberge, M., Chevallier, L., Pérez, P., & Cord, M. (2018, April). - Finding beans in burgers: Deep semantic-visual embedding with localization. - In Proceedings of CVPR (pp. 3984-3993) - -Author: Martin Engilberge -""" - -import json -import os -import re - -import numpy as np -import torch -import torch.utils.data as data - -from misc.config import path -from misc.utils import encode_sentence, _load_dictionary -from PIL import Image -from pycocotools import mask as maskUtils -from pycocotools.coco import COCO -from visual_genome import local as vg - -class OnlineRetrival(data.Dataset): - def __init__(self) -> None: - super(OnlineRetrival).__init__() - - def __getitem__(self, index, raw=False): - # TODO: 输入文字, 输出句子编码 - pass - - -class CocoCaptionsRV(data.Dataset): - - def __init__(self, root=path["COCO_ROOT"], coco_json_file_path=path["COCO_RESTVAL_SPLIT"], word_dict_path=path["WORD_DICT"], sset="train", transform=None): - # self.root = os.path.join(root, "images/") - self.root = root - self.transform = transform - - # dataset.json come from Karpathy neural talk repository and contain the restval split of coco - with open(coco_json_file_path, 'r') as f: - datas = json.load(f) - - if sset == "train": - self.content = [x for x in datas["images"] if x["split"] == "train"] - elif sset == "trainrv": - self.content = [x for x in datas["images"] if x["split"] == "train" or x["split"] == "restval"] - elif sset == "val": - self.content = [x for x in datas["images"] if x["split"] == "val"] - else: - self.content = [x for x in datas["images"] if x["split"] == "test"] - - self.content = [(os.path.join(y["filepath"], y["filename"]), [x["raw"] for x in y["sentences"]]) for y in self.content] - - path_params = os.path.join(word_dict_path, 'utable.npy') - self.params = np.load(path_params, encoding='latin1') - self.dico = _load_dictionary(word_dict_path) - - def __getitem__(self, index, raw=False): - idx = index / 5 - - idx_cap = index % 5 - - path = self.content[int(idx)][0] - target = self.content[int(idx)][1][idx_cap] - if raw: - return path, target - - img = Image.open(os.path.join(self.root, path)).convert('RGB') - - if self.transform is not None: - img = self.transform(img) - - target = encode_sentence(target, self.params, 
self.dico) - - return img, target - - def __len__(self): - return len(self.content) * 5 - - -class VgCaptions(data.Dataset): - - def __init__(self, coco_root=path["COCO_ROOT"], vg_path_ann=path["VG_ANN"], path_vg_img=path["VG_IMAGE"], coco_json_file_path=path["COCO_RESTVAL_SPLIT"], word_dict_path=path["WORD_DICT"], image=True, transform=None): - self.transform = transform - self.image = image - - path_params = os.path.join(word_dict_path, 'utable.npy') - self.params = np.load(path_params, encoding='latin1') - self.dico = _load_dictionary(word_dict_path) - - self.path_vg_img = path_vg_img - - ids = vg.get_all_image_data(vg_path_ann) - regions = vg.get_all_region_descriptions(vg_path_ann) - - annFile = os.path.join(coco_root, "annotations/captions_val2014.json") - coco = COCO(annFile) - ids_val_coco = list(coco.imgs.keys()) - - # Uncomment following bloc to evaluate only on validation set from Rest/Val split - # with open(coco_json_file_path, 'r') as f: # coco_json_file_path = "/home/wp01/users/engilbergem/dev/trunk/CPLApplications/deep/PytorchApplications/coco/dataset.json" - # datas = json.load(f) - # ids_val_coco = [x['cocoid'] for x in datas["images"] if x["split"] == "val"] # list(coco.imgs.keys()) - - self.data = [x for x in zip(ids, regions) if x[0].coco_id in ids_val_coco] - self.imgs_paths = [x[0].id for x in self.data] - self.nb_regions = [len([x.phrase for x in y[1]]) - for y in self.data] - self.captions = [x.phrase for y in self.data for x in y[1]] - # print() - def __getitem__(self, index, raw=False): - - if self.image: - - id_vg = self.data[index][0].id - img = Image.open(os.path.join(self.path_vg_img, - str(id_vg) + ".jpg")).convert('RGB') - - if raw: - return img - - if self.transform is not None: - img = self.transform(img) - - return img - else: - target = self.captions[index] - - # If the caption is incomplete we set it to zero - if len(target) < 3: - target = torch.FloatTensor(1, 620) - else: - target = encode_sentence(target, self.params, self.dico) - - return target - - def __len__(self): - if self.image: - return len(self.data) - else: - return len(self.captions) - - -class CocoSemantic(data.Dataset): - - def __init__(self, coco_root=path["COCO_ROOT"], word_dict_path=path["WORD_DICT"], transform=None): - self.coco_root = coco_root - - annFile = os.path.join(coco_root, "annotations/instances_val2014.json") - self.coco = COCO(annFile) - self.ids = list(self.coco.imgs.keys()) - self.transform = transform - - path_params = os.path.join(word_dict_path, 'utable.npy') - params = np.load(path_params, encoding='latin1') - dico = _load_dictionary(word_dict_path) - - self.categories = self.coco.loadCats(self.coco.getCatIds()) - # repeats category with plural version - categories_sent = [cat['name'] + " " + cat['name'] + "s" for cat in self.categories] - self.categories_w2v = [encode_sentence(cat, params, dico, tokenize=True) for cat in categories_sent] - - def __getitem__(self, index, raw=False): - img_id = self.ids[index] - ann_ids = self.coco.getAnnIds(imgIds=img_id) - anns = self.coco.loadAnns(ann_ids) - - target = dict() - - path = self.coco.loadImgs(img_id)[0]['file_name'] - - img = Image.open(os.path.join(self.coco_root, "images/val2014/", path)).convert('RGB') - img_size = img.size - - for ann in anns: - key = [cat['name'] for cat in self.categories if cat['id'] == ann["category_id"]][0] - - if key not in target: - target[key] = list() - - if type(ann['segmentation']) != list: - if type(ann['segmentation']['counts']) == list: - rle = maskUtils.frPyObjects( - 
[ann['segmentation']], img_size[0], img_size[1]) - else: - rle = [ann['segmentation']] - - target[key] += [("rle", rle)] - else: - target[key] += ann["segmentation"] - - if raw: - return path, target - - if self.transform is not None: - img = self.transform(img) - - return img, img_size, target - - def __len__(self): - return len(self.ids) - - -class FileDataset(data.Dataset): - - def __init__(self, img_dir_paths, imgs=None, transform=None): - self.transform = transform - self.root = img_dir_paths - self.imgs = imgs or [os.path.join(img_dir_paths, f) for f in os.listdir(img_dir_paths) if re.match(r'.*\.jpg', f)] - - def __getitem__(self, index): - - img = Image.open(self.imgs[index]).convert('RGB') - - if self.transform is not None: - img = self.transform(img) - - return img - - def get_image_list(self): - return self.imgs - - def __len__(self): - return len(self.imgs) - - -class TextDataset(data.Dataset): - - def __init__(self, text_path, word_dict_path=path["WORD_DICT"]): - - with open(text_path) as f: - lines = f.readlines() - - self.sent_list = [line.rstrip('\n') for line in lines] - - path_params = os.path.join(word_dict_path, 'utable.npy') - self.params = np.load(path_params, encoding='latin1') - self.dico = _load_dictionary(word_dict_path) - - def __getitem__(self, index): - - caption = self.sent_list[index] - - caption = encode_sentence(caption, self.params, self.dico) - - return caption - - def __len__(self): - return len(self.sent_list) - - -class TextEncoder(object): - - def __init__(self, word_dict_path=path["WORD_DICT"]): - - path_params = os.path.join(word_dict_path, 'utable.npy') - self.params = np.load(path_params, encoding='latin1', allow_pickle=True) - self.dico = _load_dictionary(word_dict_path) - - def encode(self, text): - - caption = encode_sentence(text, self.params, self.dico) - return caption diff --git a/spaces/auto-academic/auto-draft/latex_templates/Default/backgrounds.tex b/spaces/auto-academic/auto-draft/latex_templates/Default/backgrounds.tex deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/autonomous019/image_story_generator/app.py b/spaces/autonomous019/image_story_generator/app.py deleted file mode 100644 index f08cf0080b9573bdecc80c4795eae92d96bc6642..0000000000000000000000000000000000000000 --- a/spaces/autonomous019/image_story_generator/app.py +++ /dev/null @@ -1,154 +0,0 @@ -from transformers import ViTConfig, ViTForImageClassification -from transformers import ViTFeatureExtractor -from PIL import Image -import requests -import matplotlib.pyplot as plt -import gradio as gr -from gradio.mix import Parallel -from transformers import ImageClassificationPipeline, PerceiverForImageClassificationConvProcessing, PerceiverFeatureExtractor -from transformers import VisionEncoderDecoderModel -from transformers import AutoTokenizer -import torch -from transformers import ( - AutoModelForCausalLM, - LogitsProcessorList, - MinLengthLogitsProcessor, - StoppingCriteriaList, - MaxLengthCriteria, -) - -# https://github.com/NielsRogge/Transformers-Tutorials/blob/master/HuggingFace_vision_ecosystem_overview_(June_2022).ipynb -# option 1: load with randomly initialized weights (train from scratch) - -#tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") -#model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B") - - -config = ViTConfig(num_hidden_layers=12, hidden_size=768) -model = ViTForImageClassification(config) - -#print(config) - -feature_extractor = 
ViTFeatureExtractor() -# or, to load one that corresponds to a checkpoint on the hub: -#feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224") - -#the following gets called by classify_image() -feature_extractor = PerceiverFeatureExtractor.from_pretrained("deepmind/vision-perceiver-conv") -model = PerceiverForImageClassificationConvProcessing.from_pretrained("deepmind/vision-perceiver-conv") -#google/vit-base-patch16-224, deepmind/vision-perceiver-conv -image_pipe = ImageClassificationPipeline(model=model, feature_extractor=feature_extractor) - -def create_story(text_seed): - #tokenizer = AutoTokenizer.from_pretrained("gpt2") - #model = AutoModelForCausalLM.from_pretrained("gpt2") - - #eleutherAI gpt-3 based - tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M") - model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M") - - # set pad_token_id to eos_token_id because GPT2 does not have a EOS token - model.config.pad_token_id = model.config.eos_token_id - - #input_prompt = "It might be possible to" - input_prompt = text_seed - input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids - - # instantiate logits processors - logits_processor = LogitsProcessorList( - [ - MinLengthLogitsProcessor(10, eos_token_id=model.config.eos_token_id), - ] - ) - stopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=100)]) - - outputs = model.greedy_search( - input_ids, logits_processor=logits_processor, stopping_criteria=stopping_criteria - ) - - result_text = tokenizer.batch_decode(outputs, skip_special_tokens=True) - return result_text - - - - - - -def self_caption(image): - repo_name = "ydshieh/vit-gpt2-coco-en" - #test_image = "cats.jpg" - test_image = image - #url = 'http://images.cocodataset.org/val2017/000000039769.jpg' - #test_image = Image.open(requests.get(url, stream=True).raw) - #test_image.save("cats.png") - - feature_extractor2 = ViTFeatureExtractor.from_pretrained(repo_name) - tokenizer = AutoTokenizer.from_pretrained(repo_name) - model2 = VisionEncoderDecoderModel.from_pretrained(repo_name) - pixel_values = feature_extractor2(test_image, return_tensors="pt").pixel_values - print("Pixel Values") - print(pixel_values) - # autoregressively generate text (using beam search or other decoding strategy) - generated_ids = model2.generate(pixel_values, max_length=16, num_beams=4, return_dict_in_generate=True) - - # decode into text - preds = tokenizer.batch_decode(generated_ids[0], skip_special_tokens=True) - preds = [pred.strip() for pred in preds] - print("Predictions") - print(preds) - print("The preds type is : ",type(preds)) - pred_keys = ["Prediction"] - pred_value = preds - - pred_dictionary = dict(zip(pred_keys, pred_value)) - print("Pred dictionary") - print(pred_dictionary) - #return(pred_dictionary) - preds = ' '.join(preds) - story = create_story(preds) - story = ' '.join(story) - return story - - -def classify_image(image): - results = image_pipe(image) - - print("RESULTS") - print(results) - # convert to format Gradio expects - output = {} - for prediction in results: - predicted_label = prediction['label'] - score = prediction['score'] - output[predicted_label] = score - print("OUTPUT") - print(output) - return output - - -image = gr.inputs.Image(type="pil") -label = gr.outputs.Label(num_top_classes=5) -examples = [ ["cats.jpg"], ["batter.jpg"],["drinkers.jpg"] ] -title = "Generate a Story from an Image" -description = "Demo for classifying images with Perceiver IO. 
To use it, simply upload an image and click 'submit'; a story is generated automatically as well" -article = "

    " - -img_info1 = gr.Interface( - fn=classify_image, - inputs=image, - outputs=label, -) - -img_info2 = gr.Interface( - fn=self_caption, - inputs=image, - #outputs=label, - outputs = [ - gr.outputs.Textbox(label = 'Story') -], -) - -Parallel(img_info1,img_info2, inputs=image, title=title, description=description, examples=examples, enable_queue=True).launch(debug=True) -#Parallel(img_info1,img_info2, inputs=image, outputs=label, title=title, description=description, examples=examples, enable_queue=True).launch(debug=True) - - diff --git a/spaces/avichr/HebEMO_demo/README.md b/spaces/avichr/HebEMO_demo/README.md deleted file mode 100644 index 14d8ba106370eed5f63bd83ce7b4a66205deaaaf..0000000000000000000000000000000000000000 --- a/spaces/avichr/HebEMO_demo/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: HebEMO_demo -emoji: 📚 -colorFrom: purple -colorTo: pink -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/awacke1/MultiplayerImageRecognition/README.md b/spaces/awacke1/MultiplayerImageRecognition/README.md deleted file mode 100644 index 8330f3bc36e94ca10ed3488804d5e7d18cfa9122..0000000000000000000000000000000000000000 --- a/spaces/awacke1/MultiplayerImageRecognition/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: MultiplayerImageRecognition -emoji: 🏃 -colorFrom: pink -colorTo: blue -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bankholdup/stylegan_petbreeder/e4e/criteria/lpips/networks.py b/spaces/bankholdup/stylegan_petbreeder/e4e/criteria/lpips/networks.py deleted file mode 100644 index 3a0d13ad2d560278f16586da68d3a5eadb26e746..0000000000000000000000000000000000000000 --- a/spaces/bankholdup/stylegan_petbreeder/e4e/criteria/lpips/networks.py +++ /dev/null @@ -1,96 +0,0 @@ -from typing import Sequence - -from itertools import chain - -import torch -import torch.nn as nn -from torchvision import models - -from criteria.lpips.utils import normalize_activation - - -def get_network(net_type: str): - if net_type == 'alex': - return AlexNet() - elif net_type == 'squeeze': - return SqueezeNet() - elif net_type == 'vgg': - return VGG16() - else: - raise NotImplementedError('choose net_type from [alex, squeeze, vgg].') - - -class LinLayers(nn.ModuleList): - def __init__(self, n_channels_list: Sequence[int]): - super(LinLayers, self).__init__([ - nn.Sequential( - nn.Identity(), - nn.Conv2d(nc, 1, 1, 1, 0, bias=False) - ) for nc in n_channels_list - ]) - - for param in self.parameters(): - param.requires_grad = False - - -class BaseNet(nn.Module): - def __init__(self): - super(BaseNet, self).__init__() - - # register buffer - 
self.register_buffer( - 'mean', torch.Tensor([-.030, -.088, -.188])[None, :, None, None]) - self.register_buffer( - 'std', torch.Tensor([.458, .448, .450])[None, :, None, None]) - - def set_requires_grad(self, state: bool): - for param in chain(self.parameters(), self.buffers()): - param.requires_grad = state - - def z_score(self, x: torch.Tensor): - return (x - self.mean) / self.std - - def forward(self, x: torch.Tensor): - x = self.z_score(x) - - output = [] - for i, (_, layer) in enumerate(self.layers._modules.items(), 1): - x = layer(x) - if i in self.target_layers: - output.append(normalize_activation(x)) - if len(output) == len(self.target_layers): - break - return output - - -class SqueezeNet(BaseNet): - def __init__(self): - super(SqueezeNet, self).__init__() - - self.layers = models.squeezenet1_1(True).features - self.target_layers = [2, 5, 8, 10, 11, 12, 13] - self.n_channels_list = [64, 128, 256, 384, 384, 512, 512] - - self.set_requires_grad(False) - - -class AlexNet(BaseNet): - def __init__(self): - super(AlexNet, self).__init__() - - self.layers = models.alexnet(True).features - self.target_layers = [2, 5, 8, 10, 12] - self.n_channels_list = [64, 192, 384, 256, 256] - - self.set_requires_grad(False) - - -class VGG16(BaseNet): - def __init__(self): - super(VGG16, self).__init__() - - self.layers = models.vgg16(True).features - self.target_layers = [4, 9, 16, 23, 30] - self.n_channels_list = [64, 128, 256, 512, 512] - - self.set_requires_grad(False) \ No newline at end of file diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327222254.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327222254.py deleted file mode 100644 index fe2dcde2a2493f2ee33d4765b83a4724e98aeab7..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327222254.py +++ /dev/null @@ -1,67 +0,0 @@ -import os -os.system("pip install gfpgan") - -#os.system("pip freeze") -#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .") -import random -import gradio as gr -from PIL import Image -import torch -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg') - - -import cv2 -import glob -import numpy as np -from basicsr.utils import imwrite -from gfpgan import GFPGANer - -bg_upsampler = None - - - -# set up GFPGAN restorer -restorer = GFPGANer( - model_path='experiments/pretrained_models/GFPGANv1.3.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=bg_upsampler) - - -def inference(img): - input_img = cv2.imread(img, cv2.IMREAD_COLOR) - cropped_faces, restored_faces, restored_img = restorer.enhance( - input_img, has_aligned=False, only_center_face=False, paste_back=True) - - #return 
Image.fromarray(restored_faces[0][:,:,::-1]) - return Image.fromarray(restored_img[:, :, ::-1]) - -title = "Make cherished memories clearer" - - -description = "Upload an old photo, click Submit, wait a moment, then right-click the result in the Output panel and save it." - -article = "

    This project was cloned from akhaliq@huggingface, with minor modifications | GFPGAN Github Repo

    visitor badge
    " - -gr.Interface( - inference, - [gr.inputs.Image(type="filepath", label="Input")], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['lincoln.jpg'], - ['einstein.png'], - ['edison.jpg'], - ['Henry.jpg'], - ['Frida.jpg'] - ] - ).launch(enable_queue=True,cache_examples=True,share=True) - - diff --git a/spaces/beihai/PDF-Table-Extractor/.history/app_20220621104301.py b/spaces/beihai/PDF-Table-Extractor/.history/app_20220621104301.py deleted file mode 100644 index 94065d6b1f7a3bdd3a07a47847c07dbd1d70a745..0000000000000000000000000000000000000000 --- a/spaces/beihai/PDF-Table-Extractor/.history/app_20220621104301.py +++ /dev/null @@ -1,42 +0,0 @@ -#-*- coding : utf-8-*- -import base64 -from subprocess import STDOUT -import streamlit as st -import pandas as pd -import camelot as cam # extracting tables from PDFs - -st.title("PDF Table Extractor") - -input_pdf = st.file_uploader(label = "", type = 'pdf') - -background = st.selectbox("表格线条是否隐藏",(False,True)) -extractor_mode = st.selectbox("单页抽取 OR 全文抽取",("单页抽取","全文抽取")) - -if input_pdf is not None: - # byte object into a PDF file - with open("input.pdf", "wb") as f: - base64_pdf = base64.b64encode(input_pdf.read()).decode('utf-8') - f.write(base64.b64decode(base64_pdf)) - f.close() - if extractor_mode == "单页抽取": - page_number = st.text_input("请填写表格所在PDF页码,eg: 3", value = 1) - # read the pdf and parse it using stream - tables = cam.read_pdf("input.pdf", pages=page_number, process_background=background) - result = pd.ExcelWriter('result.xlsx', engine='xlsxwriter') - #tables[1].to_excel(result,index=False) - for i in range(0,len(tables)): - table = tables[i].df - sheetname = str(i) - table.to_excel(result, sheetname,index=False) - - with open('result.xlsx','rb') as f: - st.download_button('提取完成,点击下载!', f,file_name='result.xlsx',mime="application/vnd.ms-excel") - if extractor_mode == "全文抽取": - tables_all= cam.read_pdf("input.pdf", pages="all", process_background=background) - result_all = pd.ExcelWriter('result_all.xlsx', engine='xlsxwriter') - for i in range(0,len(tables_all)): - table = tables_all[i].df - sheetname = str(i) - table.to_excel(result_all, sheetname,index=False) - with open('result_all.xlsx','rb') as f: - st.download_button('抽取完成,点击下载!', f,file_name='result_all.xlsx',mime="application/vnd.ms-excel") diff --git a/spaces/betterme/Nice/git_init.sh b/spaces/betterme/Nice/git_init.sh deleted file mode 100644 index 4298a16845a89be8b2f401219dbb3fd4183cab20..0000000000000000000000000000000000000000 --- a/spaces/betterme/Nice/git_init.sh +++ /dev/null @@ -1,9 +0,0 @@ -#!/usr/bin/env bash - -#git config --global credential.helper store - -git add ./* -git commit -m "update" # git commit --amend -m '重新commit' - -git pull -git push -f \ No newline at end of file diff --git a/spaces/bigjoker/stable-diffusion-webui/modules/ui_extra_networks_checkpoints.py b/spaces/bigjoker/stable-diffusion-webui/modules/ui_extra_networks_checkpoints.py deleted file mode 100644 index 766e2c2499f0a1866e88424a319b99a9df973fc4..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/modules/ui_extra_networks_checkpoints.py +++ /dev/null @@ -1,39 +0,0 @@ -import html -import json -import os -import urllib.parse - -from modules import shared, ui_extra_networks, sd_models - - -class ExtraNetworksPageCheckpoints(ui_extra_networks.ExtraNetworksPage): - def __init__(self): - super().__init__('Checkpoints') - - def refresh(self): - shared.refresh_checkpoints() - - def 
list_items(self): - checkpoint: sd_models.CheckpointInfo - for name, checkpoint in sd_models.checkpoints_list.items(): - path, ext = os.path.splitext(checkpoint.filename) - previews = [path + ".png", path + ".preview.png"] - - preview = None - for file in previews: - if os.path.isfile(file): - preview = self.link_preview(file) - break - - yield { - "name": checkpoint.name_for_extra, - "filename": path, - "preview": preview, - "search_term": self.search_terms_from_path(checkpoint.filename) + " " + (checkpoint.sha256 or ""), - "onclick": '"' + html.escape(f"""return selectCheckpoint({json.dumps(name)})""") + '"', - "local_preview": path + ".png", - } - - def allowed_directories_for_previews(self): - return [v for v in [shared.cmd_opts.ckpt_dir, sd_models.model_path] if v is not None] - diff --git a/spaces/binker/interpreter5/functional.py b/spaces/binker/interpreter5/functional.py deleted file mode 100644 index fb433a9729741d434b1a6d2d4b7651cfa26427f4..0000000000000000000000000000000000000000 --- a/spaces/binker/interpreter5/functional.py +++ /dev/null @@ -1,116 +0,0 @@ -from bot_backend import * -import base64 -import time - - -def chat_completion(bot_backend: BotBackend): - model_choice = bot_backend.gpt_model_choice - config = bot_backend.config - kwargs_for_chat_completion = bot_backend.kwargs_for_chat_completion - - assert config['model'][model_choice]['available'], f"{model_choice} is not available for your API key" - - response = openai.ChatCompletion.create(**kwargs_for_chat_completion) - return response - - -def add_function_response_to_bot_history(content_to_display, history, unique_id): - images, text = [], [] - - # terminal output - error_occurred = False - for mark, out_str in content_to_display: - if mark in ('stdout', 'execute_result_text', 'display_text'): - text.append(out_str) - elif mark in ('execute_result_png', 'execute_result_jpeg', 'display_png', 'display_jpeg'): - if 'png' in mark: - images.append(('png', out_str)) - else: - images.append(('jpg', out_str)) - elif mark == 'error': - text.append(delete_color_control_char(out_str)) - error_occurred = True - text = '\n'.join(text).strip('\n') - if error_occurred: - history.append([None, f'❌Terminal output:\n```shell\n\n{text}\n```']) - else: - history.append([None, f'✔️Terminal output:\n```shell\n{text}\n```']) - - # image output - for filetype, img in images: - image_bytes = base64.b64decode(img) - temp_path = f'cache/temp_{unique_id}' - if not os.path.exists(temp_path): - os.mkdir(temp_path) - path = f'{temp_path}/{hash(time.time())}.{filetype}' - with open(path, 'wb') as f: - f.write(image_bytes) - history.append( - [ - None, - f'' - ] - ) - - -def parse_json(function_args: str, finished: bool): - """ - GPT may generate non-standard JSON format string, which contains '\n' in string value, leading to error when using - `json.loads()`. - Here we implement a parser to extract code directly from non-standard JSON string. 
- :return: code string if successfully parsed otherwise None - """ - parser_log = { - 'met_begin_{': False, - 'begin_"code"': False, - 'end_"code"': False, - 'met_:': False, - 'met_end_}': False, - 'met_end_code_"': False, - "code_begin_index": 0, - "code_end_index": 0 - } - try: - for index, char in enumerate(function_args): - if char == '{': - parser_log['met_begin_{'] = True - elif parser_log['met_begin_{'] and char == '"': - if parser_log['met_:']: - if finished: - parser_log['code_begin_index'] = index + 1 - break - else: - if index + 1 == len(function_args): - return '' - else: - temp_code_str = function_args[index + 1:] - if '\n' in temp_code_str: - return temp_code_str.strip('\n') - else: - return json.loads(function_args + '"}')['code'] - elif parser_log['begin_"code"']: - parser_log['end_"code"'] = True - else: - parser_log['begin_"code"'] = True - elif parser_log['end_"code"'] and char == ':': - parser_log['met_:'] = True - else: - continue - if finished: - for index, char in enumerate(function_args[::-1]): - back_index = -1 - index - if char == '}': - parser_log['met_end_}'] = True - elif parser_log['met_end_}'] and char == '"': - parser_log['code_end_index'] = back_index - 1 - break - else: - continue - code_str = function_args[parser_log['code_begin_index']: parser_log['code_end_index'] + 1] - if '\n' in code_str: - return code_str.strip('\n') - else: - return json.loads(function_args)['code'] - - except Exception as e: - return None diff --git a/spaces/bioriAsaeru/text-to-voice/Download latest software of Kundli software and get horoscope matching rashifal panchang and more.md b/spaces/bioriAsaeru/text-to-voice/Download latest software of Kundli software and get horoscope matching rashifal panchang and more.md deleted file mode 100644 index bef591721ca1818c102771e6fdf792ed0c4b0591..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Download latest software of Kundli software and get horoscope matching rashifal panchang and more.md +++ /dev/null @@ -1,28 +0,0 @@ -
    -

    Free Kundli Software is one of the most sought-after features of AstroSage. AstroSage is the number one astrology website and has launched many useful services for the convenience of its visitors. The Kundli software, which prepares a person's birth chart (Kundli) from their personal details, is one of the best of these. It gives a comprehensive report of a person's complete life, containing a detailed description of the planetary positions at the time of your birth as well as at the present time. Not a single detail that should be mentioned in one's Kundli is left out, and a proper chart indicating the position of all the planets at the time a person was born is also provided for better understanding. To get your Kundli for free using this unique Kundli software, fill in your details in the following form:

    -

    download latest software of kundli software


    Download Zip ✯✯✯ https://urloso.com/2uyO1a



    -

    The significance of the Kundli is known to almost everyone. The Kundli prepared by AstroSage's Kundli software illustrates the events of life with complete astrological knowledge, and the language used to describe those events is very easy to understand. The sections of the prepared Kundli will take you by surprise, as they not only describe the events of your life but also make you aware of many kinds of Doshas. It is better to know beforehand whether your Kundli contains any Dosha, as Doshas play a very considerable role in one's life. For example, if any kind of Dosha is present in a person's Kundli, that person is required to perform certain remedies to avoid severe conditions in life. There is only one way of coming to know about a Dosha: the birth chart. The Kundli not only reports any Dosha that is present but also describes whether the position of each planet in your chart is malefic or favourable.

    -

    Preparing a Kundli is a wise thing to do in numerous ways. Since, as already mentioned, a Kundli covers all aspects of life, it becomes easy to predict upcoming events with its help. It is your own choice which area of life interests you most; as far as the Kundli is concerned, it will provide predictions on all the areas you desire. The best thing about the Kundli software is that it not only gives an analysed report of life as a whole but also tells a person about his or her life from time to time. Through the Kundli software you will also come to know the period in which you are going to go through Shani Sade Sati, the period of Saturn that affects the life of a native tremendously. Its effects can be very great and even very adverse. Shani Sade Sati is divided into three phases, and your Kundli will give an accurate prediction of how yours is going to be and how long it will last.

    -

    One more major feature of the Kundli software is that it also gives predictions about one's life as analysed by Lal Kitab. Lal Kitab is one of the most acknowledged mediums of making predictions and is fondly followed. Remedies that could resolve your problems are provided alongside; they are very easy to perform and extremely effective.

    -

    In short, a Kundli is not only a way of coming to know about your life but also helps in improving one's quality of life. The Kundli software available on AstroSage can be accessed free of charge. What could be better than a free Kundli software that tells you everything about life without leaving a stone unturned? So, get ready to explore the facts of your own life.

    -

    The Kundli software download is easier than ever before and very user friendly. When downloading is just a click away, why wait? Download the Kundli software, get your Janam Kundli and see what destiny has in store for you. Secure your child's future by getting a detailed Janam Kundli done, and learn about the compatibility between a prospective bride and groom by downloading our professional Kundli software.

    -

    -

    A free Kundali software download can make your calculations easier. One of the most tedious tasks in astrology is Vedic horoscope calculation and charting. As an extension of the calculation of planetary positions, this portal also provides free horoscope compatibility matching for matrimonial purposes. This process, also known as kundali milan or guna milap, is followed extensively in the Indian marriage system; a rough sketch of the scoring idea is shown below.
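For readers curious how such a compatibility score is usually tallied, here is a minimal, illustrative sketch in Python. It assumes the commonly cited Ashtakoota convention of eight kootas worth 36 points in total, and it takes hypothetical, pre-computed points for each koota as input; it is not the actual matching logic of any of the programs described here.

```python
# Illustrative guna milan (Ashtakoota) tally -- not any specific software's algorithm.
# Maximum points per koota under the common 36-point convention.
MAX_POINTS = {
    "varna": 1, "vashya": 2, "tara": 3, "yoni": 4,
    "graha_maitri": 5, "gana": 6, "bhakoot": 7, "nadi": 8,
}

def guna_milan_score(awarded: dict) -> float:
    """Sum the awarded koota points, clamping each to its maximum."""
    total = 0.0
    for koota, maximum in MAX_POINTS.items():
        points = awarded.get(koota, 0.0)          # hypothetical pre-computed points
        total += max(0.0, min(points, maximum))   # keep each score within [0, max]
    return total

# Example with made-up values; 18 or more out of 36 is commonly treated as acceptable.
sample = {"varna": 1, "vashya": 2, "tara": 1.5, "yoni": 4,
          "graha_maitri": 5, "gana": 6, "bhakoot": 0, "nadi": 8}
print(guna_milan_score(sample))  # 27.5
```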

    -

    What makes this Kundli software download so much in demand is that it is both simple to use and professionally detailed: it displays all the relevant information about a chart, namely the positions of the planets, divisional chart positions, Nakshatras, Vimshottari dasha, Bhava and so on. Moreover, all of this is absolutely free.

    -

    Speedy, automated yet accurate Kundli software download offers an exhaustive study and expert recommendations of the planetary positions. Find the exact longitude and latitude position of the place of birth by using the place finder and locating the city from the menu.

    -

    This software was designed and written by P.V.R. Narasimha Rao. He is a software engineer and astrologer hailing from India and living near Boston, US. He has engineering degrees from IIT, Madras/Chennai and Rice University, Houston. He is also a Sanskrit scholar. He authored a textbook, many magazine articles and research articles and teaches astrology near Boston. You can read more about him here.

    -

    In terms of the range of calculations available, technical depth and breadth, level of customizability of calculations and ease of use, Jagannatha Hora is unsurpassed by any contemporary Vedic astrology software package. If interested, please check out a nearly complete list of the features.

    -

    Note: The zip files do not have any problem with them. Many people have successfully downloaded and installed the software. If you are unable to unzip them after downloading them or unable to install, it means that your download did not succeed for some reason. Keep trying until you succeed. We cannot help you with this and there is no use in sending us an email.

    -

    We do not distribute or sell the software in CDs. You have to download the software from the internet. You may also try to find someone who has already downloaded it and get them to make a CD for you.

    -

    Thank you for using this software. Please use it to help people and to conduct researches to enrich our collective understanding of Vedic astrology. It is the author's earnest and sincere hope that your use of this software will result in a lot of souls being helped and also in a renaissance in the knowledge of Vedic astrology!

    -

    MB Janam Kundali Software is an advanced birth chart calculation software based on Vedic Astrology. It tells you accurately the astronomical location of planets at the time of an individual's birth. This program gives you the natal chart in both North Indian and South Indian Styles.

    -

    The software will install a desktop icon, and uninstalling the program will leave a folder in the user's Program Files. Overall, the program does what it should, but the interface could be cleaner, and novice astrologers will need additional resources to interpret their results.

    -

    The latest Kundli software works with Windows Vista, XP and 7.
    Its new features include:
    facility to export to PDF and JPEG
    ratna vichar (stone suggestion)
    manglik vichar
    mantra and upaay

    -

    It belongs to the religion & spirituality category and is licensed as shareware for the Windows 32-bit and 64-bit platforms; it can be used as a free trial until the trial period ends. The Kundli demo is available to all software users as a free download, with potential restrictions compared with the full version.

    -

    Kundli Software is one of the few astrology apps to have garnered 5 million downloads in the field of online astrology; that is, the number of users in India and abroad has now crossed the 50 lakh mark. This is the app's biggest success, as organic downloads are highly valued and reflect an app's popularity. Get free astrology software, Kundli software and aaj ka rashifal from AstroSage.com in 11 languages.

    -

    Kundli for Windows is an astrology software
    Main features:
    - Good presentation.
    - Accurate calculations.
    - Screen preview.
    - Storage of horoscopes and modules for future reference.
    - South/North Indian charting.
    - Ayanamsa: N.C. Lahiri / K.P. / B.V. Raman.
    - Latitude and longitude databases, time zone database.

    -

    Kundli is an excellent astrology software for Windows operating systems. Downloading this app on your device will provide you with a good presentation, accurate calculations, storage of horoscopes and modules for the future, a wide range of references, Y2K compatibility, and a lot more.

    -

    It offers unlimited storage to preserve relevant astrological data and information. Also, personalized branding becomes very easy with Dhruv Astro Software as your name and address details appear in each and every page of the report generated with the help of this software.

    -

    It provides an elaborate, coloured Kundli of more than 200 pages. To top it all, this software puts the most detailed systems of astrology to use, making things more convenient for professional practitioners of astrology.

    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Harry Potter E A Ordem Da Fenix 720p Dublado Em Como Harry enfrentou a ameaa de Voldemort e a tirania de Umbridge.md b/spaces/bioriAsaeru/text-to-voice/Harry Potter E A Ordem Da Fenix 720p Dublado Em Como Harry enfrentou a ameaa de Voldemort e a tirania de Umbridge.md deleted file mode 100644 index dc671aa2a8dfff6141ce59b41078f18626ffebe2..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Harry Potter E A Ordem Da Fenix 720p Dublado Em Como Harry enfrentou a ameaa de Voldemort e a tirania de Umbridge.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Harry Potter E A Ordem Da Fenix 720p Dublado Em


    DOWNLOAD ⚙⚙⚙ https://urloso.com/2uyQ9d



    -
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/Journey to the West Conquering the Demons 720p A Must-See Movie for Fans of Stephen Chow and Chinese Mythology.md b/spaces/bioriAsaeru/text-to-voice/Journey to the West Conquering the Demons 720p A Must-See Movie for Fans of Stephen Chow and Chinese Mythology.md deleted file mode 100644 index 68c59453d501c89e57898d9fde64b8bcf0f96d14..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Journey to the West Conquering the Demons 720p A Must-See Movie for Fans of Stephen Chow and Chinese Mythology.md +++ /dev/null @@ -1,6 +0,0 @@ -

    journey to the west conquering the demons 720p download


    Download Zip ✸✸✸ https://urloso.com/2uyO9g



    -
    -
    -
    -
    -

    diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/export/caffe2_inference.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/export/caffe2_inference.py deleted file mode 100644 index deb886c0417285ed1d5ad85eb941fa1ac757cdab..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/export/caffe2_inference.py +++ /dev/null @@ -1,161 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import logging -import numpy as np -from itertools import count -import torch -from caffe2.proto import caffe2_pb2 -from caffe2.python import core - -from .caffe2_modeling import META_ARCH_CAFFE2_EXPORT_TYPE_MAP, convert_batched_inputs_to_c2_format -from .shared import ScopedWS, get_pb_arg_vali, get_pb_arg_vals, infer_device_type - -logger = logging.getLogger(__name__) - - -# ===== ref: mobile-vision predictor's 'Caffe2Wrapper' class ====== -class ProtobufModel(torch.nn.Module): - """ - Wrapper of a caffe2's protobuf model. - It works just like nn.Module, but running caffe2 under the hood. - Input/Output are tuple[tensor] that match the caffe2 net's external_input/output. - """ - - _ids = count(0) - - def __init__(self, predict_net, init_net): - logger.info(f"Initializing ProtobufModel for: {predict_net.name} ...") - super().__init__() - assert isinstance(predict_net, caffe2_pb2.NetDef) - assert isinstance(init_net, caffe2_pb2.NetDef) - # create unique temporary workspace for each instance - self.ws_name = "__tmp_ProtobufModel_{}__".format(next(self._ids)) - self.net = core.Net(predict_net) - - logger.info("Running init_net once to fill the parameters ...") - with ScopedWS(self.ws_name, is_reset=True, is_cleanup=False) as ws: - ws.RunNetOnce(init_net) - uninitialized_external_input = [] - for blob in self.net.Proto().external_input: - if blob not in ws.Blobs(): - uninitialized_external_input.append(blob) - ws.CreateBlob(blob) - ws.CreateNet(self.net) - - self._error_msgs = set() - self._input_blobs = uninitialized_external_input - - def _infer_output_devices(self, inputs): - """ - Returns: - list[str]: list of device for each external output - """ - - def _get_device_type(torch_tensor): - assert torch_tensor.device.type in ["cpu", "cuda"] - assert torch_tensor.device.index == 0 - return torch_tensor.device.type - - predict_net = self.net.Proto() - input_device_types = { - (name, 0): _get_device_type(tensor) for name, tensor in zip(self._input_blobs, inputs) - } - device_type_map = infer_device_type( - predict_net, known_status=input_device_types, device_name_style="pytorch" - ) - ssa, versions = core.get_ssa(predict_net) - versioned_outputs = [(name, versions[name]) for name in predict_net.external_output] - output_devices = [device_type_map[outp] for outp in versioned_outputs] - return output_devices - - def forward(self, inputs): - """ - Args: - inputs (tuple[torch.Tensor]) - - Returns: - tuple[torch.Tensor] - """ - assert len(inputs) == len(self._input_blobs), ( - f"Length of inputs ({len(inputs)}) " - f"doesn't match the required input blobs: {self._input_blobs}" - ) - - with ScopedWS(self.ws_name, is_reset=False, is_cleanup=False) as ws: - for b, tensor in zip(self._input_blobs, inputs): - ws.FeedBlob(b, tensor) - - try: - ws.RunNet(self.net.Proto().name) - except RuntimeError as e: - if not str(e) in self._error_msgs: - self._error_msgs.add(str(e)) - logger.warning("Encountered new RuntimeError: \n{}".format(str(e))) - logger.warning("Catch the error and use partial results.") - - c2_outputs = [ws.FetchBlob(b) for b in 
self.net.Proto().external_output] - # Remove outputs of current run, this is necessary in order to - # prevent fetching the result from previous run if the model fails - # in the middle. - for b in self.net.Proto().external_output: - # Needs to create uninitialized blob to make the net runable. - # This is "equivalent" to: ws.RemoveBlob(b) then ws.CreateBlob(b), - # but there'no such API. - ws.FeedBlob(b, f"{b}, a C++ native class of type nullptr (uninitialized).") - - # Cast output to torch.Tensor on the desired device - output_devices = ( - self._infer_output_devices(inputs) - if any(t.device.type != "cpu" for t in inputs) - else ["cpu" for _ in self.net.Proto().external_output] - ) - - outputs = [] - for name, c2_output, device in zip( - self.net.Proto().external_output, c2_outputs, output_devices - ): - if not isinstance(c2_output, np.ndarray): - raise RuntimeError( - "Invalid output for blob {}, received: {}".format(name, c2_output) - ) - outputs.append(torch.tensor(c2_output).to(device=device)) - return tuple(outputs) - - -class ProtobufDetectionModel(torch.nn.Module): - """ - A class works just like a pytorch meta arch in terms of inference, but running - caffe2 model under the hood. - """ - - def __init__(self, predict_net, init_net, *, convert_outputs=None): - """ - Args: - predict_net, init_net (core.Net): caffe2 nets - convert_outptus (callable): a function that converts caffe2 - outputs to the same format of the original pytorch model. - By default, use the one defined in the caffe2 meta_arch. - """ - super().__init__() - self.protobuf_model = ProtobufModel(predict_net, init_net) - self.size_divisibility = get_pb_arg_vali(predict_net, "size_divisibility", 0) - self.device = get_pb_arg_vals(predict_net, "device", b"cpu").decode("ascii") - - if convert_outputs is None: - meta_arch = get_pb_arg_vals(predict_net, "meta_architecture", b"GeneralizedRCNN") - meta_arch = META_ARCH_CAFFE2_EXPORT_TYPE_MAP[meta_arch.decode("ascii")] - self._convert_outputs = meta_arch.get_outputs_converter(predict_net, init_net) - else: - self._convert_outputs = convert_outputs - - def _convert_inputs(self, batched_inputs): - # currently all models convert inputs in the same way - return convert_batched_inputs_to_c2_format( - batched_inputs, self.size_divisibility, self.device - ) - - def forward(self, batched_inputs): - c2_inputs = self._convert_inputs(batched_inputs) - c2_results = self.protobuf_model(c2_inputs) - c2_results = dict(zip(self.protobuf_model.net.Proto().external_output, c2_results)) - return self._convert_outputs(batched_inputs, c2_inputs, c2_results) diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/layers/csrc/deformable/deform_conv.h b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/layers/csrc/deformable/deform_conv.h deleted file mode 100644 index 965c1bfd47b58f9802d1c3fd69a5962517b2da61..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/layers/csrc/deformable/deform_conv.h +++ /dev/null @@ -1,377 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. 
-#pragma once -#include - -namespace detectron2 { - -#if defined(WITH_CUDA) || defined(WITH_HIP) -int deform_conv_forward_cuda( - at::Tensor input, - at::Tensor weight, - at::Tensor offset, - at::Tensor output, - at::Tensor columns, - at::Tensor ones, - int kW, - int kH, - int dW, - int dH, - int padW, - int padH, - int dilationW, - int dilationH, - int group, - int deformable_group, - int im2col_step); - -int deform_conv_backward_input_cuda( - at::Tensor input, - at::Tensor offset, - at::Tensor gradOutput, - at::Tensor gradInput, - at::Tensor gradOffset, - at::Tensor weight, - at::Tensor columns, - int kW, - int kH, - int dW, - int dH, - int padW, - int padH, - int dilationW, - int dilationH, - int group, - int deformable_group, - int im2col_step); - -int deform_conv_backward_parameters_cuda( - at::Tensor input, - at::Tensor offset, - at::Tensor gradOutput, - at::Tensor gradWeight, // at::Tensor gradBias, - at::Tensor columns, - at::Tensor ones, - int kW, - int kH, - int dW, - int dH, - int padW, - int padH, - int dilationW, - int dilationH, - int group, - int deformable_group, - float scale, - int im2col_step); - -void modulated_deform_conv_cuda_forward( - at::Tensor input, - at::Tensor weight, - at::Tensor bias, - at::Tensor ones, - at::Tensor offset, - at::Tensor mask, - at::Tensor output, - at::Tensor columns, - int kernel_h, - int kernel_w, - const int stride_h, - const int stride_w, - const int pad_h, - const int pad_w, - const int dilation_h, - const int dilation_w, - const int group, - const int deformable_group, - const bool with_bias); - -void modulated_deform_conv_cuda_backward( - at::Tensor input, - at::Tensor weight, - at::Tensor bias, - at::Tensor ones, - at::Tensor offset, - at::Tensor mask, - at::Tensor columns, - at::Tensor grad_input, - at::Tensor grad_weight, - at::Tensor grad_bias, - at::Tensor grad_offset, - at::Tensor grad_mask, - at::Tensor grad_output, - int kernel_h, - int kernel_w, - int stride_h, - int stride_w, - int pad_h, - int pad_w, - int dilation_h, - int dilation_w, - int group, - int deformable_group, - const bool with_bias); - -#endif - -inline int deform_conv_forward( - at::Tensor input, - at::Tensor weight, - at::Tensor offset, - at::Tensor output, - at::Tensor columns, - at::Tensor ones, - int kW, - int kH, - int dW, - int dH, - int padW, - int padH, - int dilationW, - int dilationH, - int group, - int deformable_group, - int im2col_step) { - if (input.is_cuda()) { -#if defined(WITH_CUDA) || defined(WITH_HIP) - TORCH_CHECK(weight.is_cuda(), "weight tensor is not on GPU!"); - TORCH_CHECK(offset.is_cuda(), "offset tensor is not on GPU!"); - return deform_conv_forward_cuda( - input, - weight, - offset, - output, - columns, - ones, - kW, - kH, - dW, - dH, - padW, - padH, - dilationW, - dilationH, - group, - deformable_group, - im2col_step); -#else - AT_ERROR("Detectron2 is not compiled with GPU support!"); -#endif - } - AT_ERROR("This operator is not implemented on CPU"); -} - -inline int deform_conv_backward_input( - at::Tensor input, - at::Tensor offset, - at::Tensor gradOutput, - at::Tensor gradInput, - at::Tensor gradOffset, - at::Tensor weight, - at::Tensor columns, - int kW, - int kH, - int dW, - int dH, - int padW, - int padH, - int dilationW, - int dilationH, - int group, - int deformable_group, - int im2col_step) { - if (gradOutput.is_cuda()) { -#if defined(WITH_CUDA) || defined(WITH_HIP) - TORCH_CHECK(input.is_cuda(), "input tensor is not on GPU!"); - TORCH_CHECK(weight.is_cuda(), "weight tensor is not on GPU!"); - TORCH_CHECK(offset.is_cuda(), 
"offset tensor is not on GPU!"); - return deform_conv_backward_input_cuda( - input, - offset, - gradOutput, - gradInput, - gradOffset, - weight, - columns, - kW, - kH, - dW, - dH, - padW, - padH, - dilationW, - dilationH, - group, - deformable_group, - im2col_step); -#else - AT_ERROR("Detectron2 is not compiled with GPU support!"); -#endif - } - AT_ERROR("This operator is not implemented on CPU"); -} - -inline int deform_conv_backward_filter( - at::Tensor input, - at::Tensor offset, - at::Tensor gradOutput, - at::Tensor gradWeight, // at::Tensor gradBias, - at::Tensor columns, - at::Tensor ones, - int kW, - int kH, - int dW, - int dH, - int padW, - int padH, - int dilationW, - int dilationH, - int group, - int deformable_group, - float scale, - int im2col_step) { - if (gradOutput.is_cuda()) { -#if defined(WITH_CUDA) || defined(WITH_HIP) - TORCH_CHECK(input.is_cuda(), "input tensor is not on GPU!"); - TORCH_CHECK(offset.is_cuda(), "offset tensor is not on GPU!"); - return deform_conv_backward_parameters_cuda( - input, - offset, - gradOutput, - gradWeight, - columns, - ones, - kW, - kH, - dW, - dH, - padW, - padH, - dilationW, - dilationH, - group, - deformable_group, - scale, - im2col_step); -#else - AT_ERROR("Detectron2 is not compiled with GPU support!"); -#endif - } - AT_ERROR("This operator is not implemented on CPU"); -} - -inline void modulated_deform_conv_forward( - at::Tensor input, - at::Tensor weight, - at::Tensor bias, - at::Tensor ones, - at::Tensor offset, - at::Tensor mask, - at::Tensor output, - at::Tensor columns, - int kernel_h, - int kernel_w, - const int stride_h, - const int stride_w, - const int pad_h, - const int pad_w, - const int dilation_h, - const int dilation_w, - const int group, - const int deformable_group, - const bool with_bias) { - if (input.is_cuda()) { -#if defined(WITH_CUDA) || defined(WITH_HIP) - TORCH_CHECK(weight.is_cuda(), "weight tensor is not on GPU!"); - TORCH_CHECK(bias.is_cuda(), "bias tensor is not on GPU!"); - TORCH_CHECK(offset.is_cuda(), "offset tensor is not on GPU!"); - return modulated_deform_conv_cuda_forward( - input, - weight, - bias, - ones, - offset, - mask, - output, - columns, - kernel_h, - kernel_w, - stride_h, - stride_w, - pad_h, - pad_w, - dilation_h, - dilation_w, - group, - deformable_group, - with_bias); -#else - AT_ERROR("Detectron2 is not compiled with GPU support!"); -#endif - } - AT_ERROR("This operator is not implemented on CPU"); -} - -inline void modulated_deform_conv_backward( - at::Tensor input, - at::Tensor weight, - at::Tensor bias, - at::Tensor ones, - at::Tensor offset, - at::Tensor mask, - at::Tensor columns, - at::Tensor grad_input, - at::Tensor grad_weight, - at::Tensor grad_bias, - at::Tensor grad_offset, - at::Tensor grad_mask, - at::Tensor grad_output, - int kernel_h, - int kernel_w, - int stride_h, - int stride_w, - int pad_h, - int pad_w, - int dilation_h, - int dilation_w, - int group, - int deformable_group, - const bool with_bias) { - if (grad_output.is_cuda()) { -#if defined(WITH_CUDA) || defined(WITH_HIP) - TORCH_CHECK(input.is_cuda(), "input tensor is not on GPU!"); - TORCH_CHECK(weight.is_cuda(), "weight tensor is not on GPU!"); - TORCH_CHECK(bias.is_cuda(), "bias tensor is not on GPU!"); - TORCH_CHECK(offset.is_cuda(), "offset tensor is not on GPU!"); - return modulated_deform_conv_cuda_backward( - input, - weight, - bias, - ones, - offset, - mask, - columns, - grad_input, - grad_weight, - grad_bias, - grad_offset, - grad_mask, - grad_output, - kernel_h, - kernel_w, - stride_h, - stride_w, - 
pad_h, - pad_w, - dilation_h, - dilation_w, - group, - deformable_group, - with_bias); -#else - AT_ERROR("Detectron2 is not compiled with GPU support!"); -#endif - } - AT_ERROR("This operator is not implemented on CPU"); -} - -} // namespace detectron2 diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/sampling.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/sampling.py deleted file mode 100644 index a2d0f6648b349c5ea39fd29785b77c961a58fa22..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/sampling.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import torch - -from detectron2.layers import nonzero_tuple - -__all__ = ["subsample_labels"] - - -def subsample_labels( - labels: torch.Tensor, num_samples: int, positive_fraction: float, bg_label: int -): - """ - Return `num_samples` (or fewer, if not enough found) - random samples from `labels` which is a mixture of positives & negatives. - It will try to return as many positives as possible without - exceeding `positive_fraction * num_samples`, and then try to - fill the remaining slots with negatives. - - Args: - labels (Tensor): (N, ) label vector with values: - * -1: ignore - * bg_label: background ("negative") class - * otherwise: one or more foreground ("positive") classes - num_samples (int): The total number of labels with value >= 0 to return. - Values that are not sampled will be filled with -1 (ignore). - positive_fraction (float): The number of subsampled labels with values > 0 - is `min(num_positives, int(positive_fraction * num_samples))`. The number - of negatives sampled is `min(num_negatives, num_samples - num_positives_sampled)`. - In order words, if there are not enough positives, the sample is filled with - negatives. If there are also not enough negatives, then as many elements are - sampled as is possible. - bg_label (int): label index of background ("negative") class. - - Returns: - pos_idx, neg_idx (Tensor): - 1D vector of indices. The total length of both is `num_samples` or fewer. - """ - positive = nonzero_tuple((labels != -1) & (labels != bg_label))[0] - negative = nonzero_tuple(labels == bg_label)[0] - - num_pos = int(num_samples * positive_fraction) - # protect against not enough positive examples - num_pos = min(positive.numel(), num_pos) - num_neg = num_samples - num_pos - # protect against not enough negative examples - num_neg = min(negative.numel(), num_neg) - - # randomly select positive and negative examples - perm1 = torch.randperm(positive.numel(), device=positive.device)[:num_pos] - perm2 = torch.randperm(negative.numel(), device=negative.device)[:num_neg] - - pos_idx = positive[perm1] - neg_idx = negative[perm2] - return pos_idx, neg_idx diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageQt.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageQt.py deleted file mode 100644 index 9b7245454dfcccb4e822a6634168d405c0e791bb..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageQt.py +++ /dev/null @@ -1,216 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# a simple Qt image interface. 
-# -# history: -# 2006-06-03 fl: created -# 2006-06-04 fl: inherit from QImage instead of wrapping it -# 2006-06-05 fl: removed toimage helper; move string support to ImageQt -# 2013-11-13 fl: add support for Qt5 (aurelien.ballier@cyclonit.com) -# -# Copyright (c) 2006 by Secret Labs AB -# Copyright (c) 2006 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -import sys -from io import BytesIO - -from . import Image -from ._util import is_path - -qt_versions = [ - ["6", "PyQt6"], - ["side6", "PySide6"], -] - -# If a version has already been imported, attempt it first -qt_versions.sort(key=lambda qt_version: qt_version[1] in sys.modules, reverse=True) -for qt_version, qt_module in qt_versions: - try: - if qt_module == "PyQt6": - from PyQt6.QtCore import QBuffer, QIODevice - from PyQt6.QtGui import QImage, QPixmap, qRgba - elif qt_module == "PySide6": - from PySide6.QtCore import QBuffer, QIODevice - from PySide6.QtGui import QImage, QPixmap, qRgba - except (ImportError, RuntimeError): - continue - qt_is_installed = True - break -else: - qt_is_installed = False - qt_version = None - - -def rgb(r, g, b, a=255): - """(Internal) Turns an RGB color into a Qt compatible color integer.""" - # use qRgb to pack the colors, and then turn the resulting long - # into a negative integer with the same bitpattern. - return qRgba(r, g, b, a) & 0xFFFFFFFF - - -def fromqimage(im): - """ - :param im: QImage or PIL ImageQt object - """ - buffer = QBuffer() - if qt_version == "6": - try: - qt_openmode = QIODevice.OpenModeFlag - except AttributeError: - qt_openmode = QIODevice.OpenMode - else: - qt_openmode = QIODevice - buffer.open(qt_openmode.ReadWrite) - # preserve alpha channel with png - # otherwise ppm is more friendly with Image.open - if im.hasAlphaChannel(): - im.save(buffer, "png") - else: - im.save(buffer, "ppm") - - b = BytesIO() - b.write(buffer.data()) - buffer.close() - b.seek(0) - - return Image.open(b) - - -def fromqpixmap(im): - return fromqimage(im) - # buffer = QBuffer() - # buffer.open(QIODevice.ReadWrite) - # # im.save(buffer) - # # What if png doesn't support some image features like animation? - # im.save(buffer, 'ppm') - # bytes_io = BytesIO() - # bytes_io.write(buffer.data()) - # buffer.close() - # bytes_io.seek(0) - # return Image.open(bytes_io) - - -def align8to32(bytes, width, mode): - """ - converts each scanline of data from 8 bit to 32 bit aligned - """ - - bits_per_pixel = {"1": 1, "L": 8, "P": 8, "I;16": 16}[mode] - - # calculate bytes per line and the extra padding if needed - bits_per_line = bits_per_pixel * width - full_bytes_per_line, remaining_bits_per_line = divmod(bits_per_line, 8) - bytes_per_line = full_bytes_per_line + (1 if remaining_bits_per_line else 0) - - extra_padding = -bytes_per_line % 4 - - # already 32 bit aligned by luck - if not extra_padding: - return bytes - - new_data = [] - for i in range(len(bytes) // bytes_per_line): - new_data.append( - bytes[i * bytes_per_line : (i + 1) * bytes_per_line] - + b"\x00" * extra_padding - ) - - return b"".join(new_data) - - -def _toqclass_helper(im): - data = None - colortable = None - exclusive_fp = False - - # handle filename, if given instead of image name - if hasattr(im, "toUtf8"): - # FIXME - is this really the best way to do this? 
- im = str(im.toUtf8(), "utf-8") - if is_path(im): - im = Image.open(im) - exclusive_fp = True - - qt_format = QImage.Format if qt_version == "6" else QImage - if im.mode == "1": - format = qt_format.Format_Mono - elif im.mode == "L": - format = qt_format.Format_Indexed8 - colortable = [] - for i in range(256): - colortable.append(rgb(i, i, i)) - elif im.mode == "P": - format = qt_format.Format_Indexed8 - colortable = [] - palette = im.getpalette() - for i in range(0, len(palette), 3): - colortable.append(rgb(*palette[i : i + 3])) - elif im.mode == "RGB": - # Populate the 4th channel with 255 - im = im.convert("RGBA") - - data = im.tobytes("raw", "BGRA") - format = qt_format.Format_RGB32 - elif im.mode == "RGBA": - data = im.tobytes("raw", "BGRA") - format = qt_format.Format_ARGB32 - elif im.mode == "I;16" and hasattr(qt_format, "Format_Grayscale16"): # Qt 5.13+ - im = im.point(lambda i: i * 256) - - format = qt_format.Format_Grayscale16 - else: - if exclusive_fp: - im.close() - msg = f"unsupported image mode {repr(im.mode)}" - raise ValueError(msg) - - size = im.size - __data = data or align8to32(im.tobytes(), size[0], im.mode) - if exclusive_fp: - im.close() - return {"data": __data, "size": size, "format": format, "colortable": colortable} - - -if qt_is_installed: - - class ImageQt(QImage): - def __init__(self, im): - """ - An PIL image wrapper for Qt. This is a subclass of PyQt's QImage - class. - - :param im: A PIL Image object, or a file name (given either as - Python string or a PyQt string object). - """ - im_data = _toqclass_helper(im) - # must keep a reference, or Qt will crash! - # All QImage constructors that take data operate on an existing - # buffer, so this buffer has to hang on for the life of the image. - # Fixes https://github.com/python-pillow/Pillow/issues/1370 - self.__data = im_data["data"] - super().__init__( - self.__data, - im_data["size"][0], - im_data["size"][1], - im_data["format"], - ) - if im_data["colortable"]: - self.setColorTable(im_data["colortable"]) - - -def toqimage(im): - return ImageQt(im) - - -def toqpixmap(im): - # # This doesn't work. For now using a dumb approach. 
- # im_data = _toqclass_helper(im) - # result = QPixmap(im_data["size"][0], im_data["size"][1]) - # result.loadFromData(im_data["data"]) - qimage = toqimage(im) - return QPixmap.fromImage(qimage) diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/common/data/coco_panoptic_separated.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/common/data/coco_panoptic_separated.py deleted file mode 100644 index 5ccbc77e64d1c92c99cbd7158d047bab54cb9f3d..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/common/data/coco_panoptic_separated.py +++ /dev/null @@ -1,26 +0,0 @@ -from detectron2.config import LazyCall as L -from detectron2.evaluation import ( - COCOEvaluator, - COCOPanopticEvaluator, - DatasetEvaluators, - SemSegEvaluator, -) - -from .coco import dataloader - -dataloader.train.dataset.names = "coco_2017_train_panoptic_separated" -dataloader.train.dataset.filter_empty = False -dataloader.test.dataset.names = "coco_2017_val_panoptic_separated" - - -dataloader.evaluator = [ - L(COCOEvaluator)( - dataset_name="${...test.dataset.names}", - ), - L(SemSegEvaluator)( - dataset_name="${...test.dataset.names}", - ), - L(COCOPanopticEvaluator)( - dataset_name="${...test.dataset.names}", - ), -] diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/structures/transform_data.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/structures/transform_data.py deleted file mode 100644 index 7cac1bb7663b985165000b2b351d6ff630d2ba3f..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/structures/transform_data.py +++ /dev/null @@ -1,71 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from typing import BinaryIO, Dict, Union -import torch - - -def normalized_coords_transform(x0, y0, w, h): - """ - Coordinates transform that maps top left corner to (-1, -1) and bottom - right corner to (1, 1). 
Used for torch.grid_sample to initialize the - grid - """ - - def f(p): - return (2 * (p[0] - x0) / w - 1, 2 * (p[1] - y0) / h - 1) - - return f - - -class DensePoseTransformData(object): - - # Horizontal symmetry label transforms used for horizontal flip - MASK_LABEL_SYMMETRIES = [0, 1, 3, 2, 5, 4, 7, 6, 9, 8, 11, 10, 13, 12, 14] - # fmt: off - POINT_LABEL_SYMMETRIES = [ 0, 1, 2, 4, 3, 6, 5, 8, 7, 10, 9, 12, 11, 14, 13, 16, 15, 18, 17, 20, 19, 22, 21, 24, 23] # noqa - # fmt: on - - def __init__(self, uv_symmetries: Dict[str, torch.Tensor], device: torch.device): - self.mask_label_symmetries = DensePoseTransformData.MASK_LABEL_SYMMETRIES - self.point_label_symmetries = DensePoseTransformData.POINT_LABEL_SYMMETRIES - self.uv_symmetries = uv_symmetries - self.device = torch.device("cpu") - - def to(self, device: torch.device, copy: bool = False) -> "DensePoseTransformData": - """ - Convert transform data to the specified device - - Args: - device (torch.device): device to convert the data to - copy (bool): flag that specifies whether to copy or to reference the data - in case the device is the same - Return: - An instance of `DensePoseTransformData` with data stored on the specified device - """ - if self.device == device and not copy: - return self - uv_symmetry_map = {} - for key in self.uv_symmetries: - uv_symmetry_map[key] = self.uv_symmetries[key].to(device=device, copy=copy) - return DensePoseTransformData(uv_symmetry_map, device) - - @staticmethod - def load(io: Union[str, BinaryIO]): - """ - Args: - io: (str or binary file-like object): input file to load data from - Returns: - An instance of `DensePoseTransformData` with transforms loaded from the file - """ - import scipy.io - - uv_symmetry_map = scipy.io.loadmat(io) - uv_symmetry_map_torch = {} - for key in ["U_transforms", "V_transforms"]: - uv_symmetry_map_torch[key] = [] - map_src = uv_symmetry_map[key] - map_dst = uv_symmetry_map_torch[key] - for i in range(map_src.shape[1]): - map_dst.append(torch.from_numpy(map_src[0, i]).to(dtype=torch.float)) - uv_symmetry_map_torch[key] = torch.stack(map_dst, dim=0) - transform_data = DensePoseTransformData(uv_symmetry_map_torch, device=torch.device("cpu")) - return transform_data diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tools/deploy/export_model.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/tools/deploy/export_model.py deleted file mode 100644 index f507dffe56a4121756874186eacdc9be0cbcdee1..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tools/deploy/export_model.py +++ /dev/null @@ -1,240 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. 
-import argparse -import os -from typing import Dict, List, Tuple -import torch -from torch import Tensor, nn - -import detectron2.data.transforms as T -from detectron2.checkpoint import DetectionCheckpointer -from detectron2.config import get_cfg -from detectron2.data import build_detection_test_loader, detection_utils -from detectron2.evaluation import COCOEvaluator, inference_on_dataset, print_csv_format -from detectron2.export import ( - STABLE_ONNX_OPSET_VERSION, - TracingAdapter, - dump_torchscript_IR, - scripting_with_instances, -) -from detectron2.modeling import GeneralizedRCNN, RetinaNet, build_model -from detectron2.modeling.postprocessing import detector_postprocess -from detectron2.projects.point_rend import add_pointrend_config -from detectron2.structures import Boxes -from detectron2.utils.env import TORCH_VERSION -from detectron2.utils.file_io import PathManager -from detectron2.utils.logger import setup_logger - - -def setup_cfg(args): - cfg = get_cfg() - # cuda context is initialized before creating dataloader, so we don't fork anymore - cfg.DATALOADER.NUM_WORKERS = 0 - add_pointrend_config(cfg) - cfg.merge_from_file(args.config_file) - cfg.merge_from_list(args.opts) - cfg.freeze() - return cfg - - -def export_caffe2_tracing(cfg, torch_model, inputs): - from detectron2.export import Caffe2Tracer - - tracer = Caffe2Tracer(cfg, torch_model, inputs) - if args.format == "caffe2": - caffe2_model = tracer.export_caffe2() - caffe2_model.save_protobuf(args.output) - # draw the caffe2 graph - caffe2_model.save_graph(os.path.join(args.output, "model.svg"), inputs=inputs) - return caffe2_model - elif args.format == "onnx": - import onnx - - onnx_model = tracer.export_onnx() - onnx.save(onnx_model, os.path.join(args.output, "model.onnx")) - elif args.format == "torchscript": - ts_model = tracer.export_torchscript() - with PathManager.open(os.path.join(args.output, "model.ts"), "wb") as f: - torch.jit.save(ts_model, f) - dump_torchscript_IR(ts_model, args.output) - - -# experimental. API not yet final -def export_scripting(torch_model): - assert TORCH_VERSION >= (1, 8) - fields = { - "proposal_boxes": Boxes, - "objectness_logits": Tensor, - "pred_boxes": Boxes, - "scores": Tensor, - "pred_classes": Tensor, - "pred_masks": Tensor, - "pred_keypoints": torch.Tensor, - "pred_keypoint_heatmaps": torch.Tensor, - } - assert args.format == "torchscript", "Scripting only supports torchscript format." - - class ScriptableAdapterBase(nn.Module): - # Use this adapter to workaround https://github.com/pytorch/pytorch/issues/46944 - # by not retuning instances but dicts. 
Otherwise the exported model is not deployable - def __init__(self): - super().__init__() - self.model = torch_model - self.eval() - - if isinstance(torch_model, GeneralizedRCNN): - - class ScriptableAdapter(ScriptableAdapterBase): - def forward(self, inputs: Tuple[Dict[str, torch.Tensor]]) -> List[Dict[str, Tensor]]: - instances = self.model.inference(inputs, do_postprocess=False) - return [i.get_fields() for i in instances] - - else: - - class ScriptableAdapter(ScriptableAdapterBase): - def forward(self, inputs: Tuple[Dict[str, torch.Tensor]]) -> List[Dict[str, Tensor]]: - instances = self.model(inputs) - return [i.get_fields() for i in instances] - - ts_model = scripting_with_instances(ScriptableAdapter(), fields) - with PathManager.open(os.path.join(args.output, "model.ts"), "wb") as f: - torch.jit.save(ts_model, f) - dump_torchscript_IR(ts_model, args.output) - # TODO inference in Python now missing postprocessing glue code - return None - - -# experimental. API not yet final -def export_tracing(torch_model, inputs): - assert TORCH_VERSION >= (1, 8) - image = inputs[0]["image"] - inputs = [{"image": image}] # remove other unused keys - - if isinstance(torch_model, GeneralizedRCNN): - - def inference(model, inputs): - # use do_postprocess=False so it returns ROI mask - inst = model.inference(inputs, do_postprocess=False)[0] - return [{"instances": inst}] - - else: - inference = None # assume that we just call the model directly - - traceable_model = TracingAdapter(torch_model, inputs, inference) - - if args.format == "torchscript": - ts_model = torch.jit.trace(traceable_model, (image,)) - with PathManager.open(os.path.join(args.output, "model.ts"), "wb") as f: - torch.jit.save(ts_model, f) - dump_torchscript_IR(ts_model, args.output) - elif args.format == "onnx": - with PathManager.open(os.path.join(args.output, "model.onnx"), "wb") as f: - torch.onnx.export(traceable_model, (image,), f, opset_version=STABLE_ONNX_OPSET_VERSION) - logger.info("Inputs schema: " + str(traceable_model.inputs_schema)) - logger.info("Outputs schema: " + str(traceable_model.outputs_schema)) - - if args.format != "torchscript": - return None - if not isinstance(torch_model, (GeneralizedRCNN, RetinaNet)): - return None - - def eval_wrapper(inputs): - """ - The exported model does not contain the final resize step, which is typically - unused in deployment but needed for evaluation. We add it manually here. 
- """ - input = inputs[0] - instances = traceable_model.outputs_schema(ts_model(input["image"]))[0]["instances"] - postprocessed = detector_postprocess(instances, input["height"], input["width"]) - return [{"instances": postprocessed}] - - return eval_wrapper - - -def get_sample_inputs(args): - - if args.sample_image is None: - # get a first batch from dataset - data_loader = build_detection_test_loader(cfg, cfg.DATASETS.TEST[0]) - first_batch = next(iter(data_loader)) - return first_batch - else: - # get a sample data - original_image = detection_utils.read_image(args.sample_image, format=cfg.INPUT.FORMAT) - # Do same preprocessing as DefaultPredictor - aug = T.ResizeShortestEdge( - [cfg.INPUT.MIN_SIZE_TEST, cfg.INPUT.MIN_SIZE_TEST], cfg.INPUT.MAX_SIZE_TEST - ) - height, width = original_image.shape[:2] - image = aug.get_transform(original_image).apply_image(original_image) - image = torch.as_tensor(image.astype("float32").transpose(2, 0, 1)) - - inputs = {"image": image, "height": height, "width": width} - - # Sample ready - sample_inputs = [inputs] - return sample_inputs - - -if __name__ == "__main__": - parser = argparse.ArgumentParser(description="Export a model for deployment.") - parser.add_argument( - "--format", - choices=["caffe2", "onnx", "torchscript"], - help="output format", - default="torchscript", - ) - parser.add_argument( - "--export-method", - choices=["caffe2_tracing", "tracing", "scripting"], - help="Method to export models", - default="tracing", - ) - parser.add_argument("--config-file", default="", metavar="FILE", help="path to config file") - parser.add_argument("--sample-image", default=None, type=str, help="sample image for input") - parser.add_argument("--run-eval", action="store_true") - parser.add_argument("--output", help="output directory for the converted model") - parser.add_argument( - "opts", - help="Modify config options using the command-line", - default=None, - nargs=argparse.REMAINDER, - ) - args = parser.parse_args() - logger = setup_logger() - logger.info("Command line arguments: " + str(args)) - PathManager.mkdirs(args.output) - # Disable re-specialization on new shapes. Otherwise --run-eval will be slow - torch._C._jit_set_bailout_depth(1) - - cfg = setup_cfg(args) - - # create a torch model - torch_model = build_model(cfg) - DetectionCheckpointer(torch_model).resume_or_load(cfg.MODEL.WEIGHTS) - torch_model.eval() - - # convert and save model - if args.export_method == "caffe2_tracing": - sample_inputs = get_sample_inputs(args) - exported_model = export_caffe2_tracing(cfg, torch_model, sample_inputs) - elif args.export_method == "scripting": - exported_model = export_scripting(torch_model) - elif args.export_method == "tracing": - sample_inputs = get_sample_inputs(args) - exported_model = export_tracing(torch_model, sample_inputs) - - # run evaluation with the converted model - if args.run_eval: - assert exported_model is not None, ( - "Python inference is not yet implemented for " - f"export_method={args.export_method}, format={args.format}." - ) - logger.info("Running evaluation ... this takes a long time if you export to CPU.") - dataset = cfg.DATASETS.TEST[0] - data_loader = build_detection_test_loader(cfg, dataset) - # NOTE: hard-coded evaluator. 
change to the evaluator for your dataset - evaluator = COCOEvaluator(dataset, output_dir=args.output) - metrics = inference_on_dataset(exported_model, data_loader, evaluator) - print_csv_format(metrics) - logger.info("Success.") diff --git a/spaces/chenglu/chenglu-my_awesome_model/README.md b/spaces/chenglu/chenglu-my_awesome_model/README.md deleted file mode 100644 index 413ae078f4b04759cb5cba58045548fecc460c54..0000000000000000000000000000000000000000 --- a/spaces/chenglu/chenglu-my_awesome_model/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Chenglu-my Awesome Model -emoji: 👀 -colorFrom: pink -colorTo: gray -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/chikoto/Umamusume-DeBERTa-VITS2-TTS-JP/text/tone_sandhi.py b/spaces/chikoto/Umamusume-DeBERTa-VITS2-TTS-JP/text/tone_sandhi.py deleted file mode 100644 index 6a6e4c3e64f1a9e8b9da73fc6fbebf8a33e5602d..0000000000000000000000000000000000000000 --- a/spaces/chikoto/Umamusume-DeBERTa-VITS2-TTS-JP/text/tone_sandhi.py +++ /dev/null @@ -1,769 +0,0 @@ -# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-from typing import List -from typing import Tuple - -import jieba -from pypinyin import lazy_pinyin -from pypinyin import Style - - -class ToneSandhi: - def __init__(self): - self.must_neural_tone_words = { - "麻烦", - "麻利", - "鸳鸯", - "高粱", - "骨头", - "骆驼", - "马虎", - "首饰", - "馒头", - "馄饨", - "风筝", - "难为", - "队伍", - "阔气", - "闺女", - "门道", - "锄头", - "铺盖", - "铃铛", - "铁匠", - "钥匙", - "里脊", - "里头", - "部分", - "那么", - "道士", - "造化", - "迷糊", - "连累", - "这么", - "这个", - "运气", - "过去", - "软和", - "转悠", - "踏实", - "跳蚤", - "跟头", - "趔趄", - "财主", - "豆腐", - "讲究", - "记性", - "记号", - "认识", - "规矩", - "见识", - "裁缝", - "补丁", - "衣裳", - "衣服", - "衙门", - "街坊", - "行李", - "行当", - "蛤蟆", - "蘑菇", - "薄荷", - "葫芦", - "葡萄", - "萝卜", - "荸荠", - "苗条", - "苗头", - "苍蝇", - "芝麻", - "舒服", - "舒坦", - "舌头", - "自在", - "膏药", - "脾气", - "脑袋", - "脊梁", - "能耐", - "胳膊", - "胭脂", - "胡萝", - "胡琴", - "胡同", - "聪明", - "耽误", - "耽搁", - "耷拉", - "耳朵", - "老爷", - "老实", - "老婆", - "老头", - "老太", - "翻腾", - "罗嗦", - "罐头", - "编辑", - "结实", - "红火", - "累赘", - "糨糊", - "糊涂", - "精神", - "粮食", - "簸箕", - "篱笆", - "算计", - "算盘", - "答应", - "笤帚", - "笑语", - "笑话", - "窟窿", - "窝囊", - "窗户", - "稳当", - "稀罕", - "称呼", - "秧歌", - "秀气", - "秀才", - "福气", - "祖宗", - "砚台", - "码头", - "石榴", - "石头", - "石匠", - "知识", - "眼睛", - "眯缝", - "眨巴", - "眉毛", - "相声", - "盘算", - "白净", - "痢疾", - "痛快", - "疟疾", - "疙瘩", - "疏忽", - "畜生", - "生意", - "甘蔗", - "琵琶", - "琢磨", - "琉璃", - "玻璃", - "玫瑰", - "玄乎", - "狐狸", - "状元", - "特务", - "牲口", - "牙碜", - "牌楼", - "爽快", - "爱人", - "热闹", - "烧饼", - "烟筒", - "烂糊", - "点心", - "炊帚", - "灯笼", - "火候", - "漂亮", - "滑溜", - "溜达", - "温和", - "清楚", - "消息", - "浪头", - "活泼", - "比方", - "正经", - "欺负", - "模糊", - "槟榔", - "棺材", - "棒槌", - "棉花", - "核桃", - "栅栏", - "柴火", - "架势", - "枕头", - "枇杷", - "机灵", - "本事", - "木头", - "木匠", - "朋友", - "月饼", - "月亮", - "暖和", - "明白", - "时候", - "新鲜", - "故事", - "收拾", - "收成", - "提防", - "挖苦", - "挑剔", - "指甲", - "指头", - "拾掇", - "拳头", - "拨弄", - "招牌", - "招呼", - "抬举", - "护士", - "折腾", - "扫帚", - "打量", - "打算", - "打点", - "打扮", - "打听", - "打发", - "扎实", - "扁担", - "戒指", - "懒得", - "意识", - "意思", - "情形", - "悟性", - "怪物", - "思量", - "怎么", - "念头", - "念叨", - "快活", - "忙活", - "志气", - "心思", - "得罪", - "张罗", - "弟兄", - "开通", - "应酬", - "庄稼", - "干事", - "帮手", - "帐篷", - "希罕", - "师父", - "师傅", - "巴结", - "巴掌", - "差事", - "工夫", - "岁数", - "屁股", - "尾巴", - "少爷", - "小气", - "小伙", - "将就", - "对头", - "对付", - "寡妇", - "家伙", - "客气", - "实在", - "官司", - "学问", - "学生", - "字号", - "嫁妆", - "媳妇", - "媒人", - "婆家", - "娘家", - "委屈", - "姑娘", - "姐夫", - "妯娌", - "妥当", - "妖精", - "奴才", - "女婿", - "头发", - "太阳", - "大爷", - "大方", - "大意", - "大夫", - "多少", - "多么", - "外甥", - "壮实", - "地道", - "地方", - "在乎", - "困难", - "嘴巴", - "嘱咐", - "嘟囔", - "嘀咕", - "喜欢", - "喇嘛", - "喇叭", - "商量", - "唾沫", - "哑巴", - "哈欠", - "哆嗦", - "咳嗽", - "和尚", - "告诉", - "告示", - "含糊", - "吓唬", - "后头", - "名字", - "名堂", - "合同", - "吆喝", - "叫唤", - "口袋", - "厚道", - "厉害", - "千斤", - "包袱", - "包涵", - "匀称", - "勤快", - "动静", - "动弹", - "功夫", - "力气", - "前头", - "刺猬", - "刺激", - "别扭", - "利落", - "利索", - "利害", - "分析", - "出息", - "凑合", - "凉快", - "冷战", - "冤枉", - "冒失", - "养活", - "关系", - "先生", - "兄弟", - "便宜", - "使唤", - "佩服", - "作坊", - "体面", - "位置", - "似的", - "伙计", - "休息", - "什么", - "人家", - "亲戚", - "亲家", - "交情", - "云彩", - "事情", - "买卖", - "主意", - "丫头", - "丧气", - "两口", - "东西", - "东家", - "世故", - "不由", - "不在", - "下水", - "下巴", - "上头", - "上司", - "丈夫", - "丈人", - "一辈", - "那个", - "菩萨", - "父亲", - "母亲", - "咕噜", - "邋遢", - "费用", - "冤家", - "甜头", - "介绍", - "荒唐", - "大人", - "泥鳅", - "幸福", - "熟悉", - "计划", - "扑腾", - "蜡烛", - "姥爷", - "照顾", - "喉咙", - "吉他", - "弄堂", - "蚂蚱", - "凤凰", - "拖沓", - "寒碜", - "糟蹋", - "倒腾", - "报复", - "逻辑", - "盘缠", - "喽啰", - "牢骚", - "咖喱", - 
"扫把", - "惦记", - } - self.must_not_neural_tone_words = { - "男子", - "女子", - "分子", - "原子", - "量子", - "莲子", - "石子", - "瓜子", - "电子", - "人人", - "虎虎", - } - self.punc = ":,;。?!“”‘’':,;.?!" - - # the meaning of jieba pos tag: https://blog.csdn.net/weixin_44174352/article/details/113731041 - # e.g. - # word: "家里" - # pos: "s" - # finals: ['ia1', 'i3'] - def _neural_sandhi(self, word: str, pos: str, finals: List[str]) -> List[str]: - # reduplication words for n. and v. e.g. 奶奶, 试试, 旺旺 - for j, item in enumerate(word): - if ( - j - 1 >= 0 - and item == word[j - 1] - and pos[0] in {"n", "v", "a"} - and word not in self.must_not_neural_tone_words - ): - finals[j] = finals[j][:-1] + "5" - ge_idx = word.find("个") - if len(word) >= 1 and word[-1] in "吧呢啊呐噻嘛吖嗨呐哦哒额滴哩哟喽啰耶喔诶": - finals[-1] = finals[-1][:-1] + "5" - elif len(word) >= 1 and word[-1] in "的地得": - finals[-1] = finals[-1][:-1] + "5" - # e.g. 走了, 看着, 去过 - # elif len(word) == 1 and word in "了着过" and pos in {"ul", "uz", "ug"}: - # finals[-1] = finals[-1][:-1] + "5" - elif ( - len(word) > 1 - and word[-1] in "们子" - and pos in {"r", "n"} - and word not in self.must_not_neural_tone_words - ): - finals[-1] = finals[-1][:-1] + "5" - # e.g. 桌上, 地下, 家里 - elif len(word) > 1 and word[-1] in "上下里" and pos in {"s", "l", "f"}: - finals[-1] = finals[-1][:-1] + "5" - # e.g. 上来, 下去 - elif len(word) > 1 and word[-1] in "来去" and word[-2] in "上下进出回过起开": - finals[-1] = finals[-1][:-1] + "5" - # 个做量词 - elif ( - ge_idx >= 1 - and (word[ge_idx - 1].isnumeric() or word[ge_idx - 1] in "几有两半多各整每做是") - ) or word == "个": - finals[ge_idx] = finals[ge_idx][:-1] + "5" - else: - if ( - word in self.must_neural_tone_words - or word[-2:] in self.must_neural_tone_words - ): - finals[-1] = finals[-1][:-1] + "5" - - word_list = self._split_word(word) - finals_list = [finals[: len(word_list[0])], finals[len(word_list[0]) :]] - for i, word in enumerate(word_list): - # conventional neural in Chinese - if ( - word in self.must_neural_tone_words - or word[-2:] in self.must_neural_tone_words - ): - finals_list[i][-1] = finals_list[i][-1][:-1] + "5" - finals = sum(finals_list, []) - return finals - - def _bu_sandhi(self, word: str, finals: List[str]) -> List[str]: - # e.g. 看不懂 - if len(word) == 3 and word[1] == "不": - finals[1] = finals[1][:-1] + "5" - else: - for i, char in enumerate(word): - # "不" before tone4 should be bu2, e.g. 不怕 - if char == "不" and i + 1 < len(word) and finals[i + 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - return finals - - def _yi_sandhi(self, word: str, finals: List[str]) -> List[str]: - # "一" in number sequences, e.g. 一零零, 二一零 - if word.find("一") != -1 and all( - [item.isnumeric() for item in word if item != "一"] - ): - return finals - # "一" between reduplication words should be yi5, e.g. 看一看 - elif len(word) == 3 and word[1] == "一" and word[0] == word[-1]: - finals[1] = finals[1][:-1] + "5" - # when "一" is ordinal word, it should be yi1 - elif word.startswith("第一"): - finals[1] = finals[1][:-1] + "1" - else: - for i, char in enumerate(word): - if char == "一" and i + 1 < len(word): - # "一" before tone4 should be yi2, e.g. 一段 - if finals[i + 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - # "一" before non-tone4 should be yi4, e.g. 
一天 - else: - # "一" 后面如果是标点,还读一声 - if word[i + 1] not in self.punc: - finals[i] = finals[i][:-1] + "4" - return finals - - def _split_word(self, word: str) -> List[str]: - word_list = jieba.cut_for_search(word) - word_list = sorted(word_list, key=lambda i: len(i), reverse=False) - first_subword = word_list[0] - first_begin_idx = word.find(first_subword) - if first_begin_idx == 0: - second_subword = word[len(first_subword) :] - new_word_list = [first_subword, second_subword] - else: - second_subword = word[: -len(first_subword)] - new_word_list = [second_subword, first_subword] - return new_word_list - - def _three_sandhi(self, word: str, finals: List[str]) -> List[str]: - if len(word) == 2 and self._all_tone_three(finals): - finals[0] = finals[0][:-1] + "2" - elif len(word) == 3: - word_list = self._split_word(word) - if self._all_tone_three(finals): - # disyllabic + monosyllabic, e.g. 蒙古/包 - if len(word_list[0]) == 2: - finals[0] = finals[0][:-1] + "2" - finals[1] = finals[1][:-1] + "2" - # monosyllabic + disyllabic, e.g. 纸/老虎 - elif len(word_list[0]) == 1: - finals[1] = finals[1][:-1] + "2" - else: - finals_list = [finals[: len(word_list[0])], finals[len(word_list[0]) :]] - if len(finals_list) == 2: - for i, sub in enumerate(finals_list): - # e.g. 所有/人 - if self._all_tone_three(sub) and len(sub) == 2: - finals_list[i][0] = finals_list[i][0][:-1] + "2" - # e.g. 好/喜欢 - elif ( - i == 1 - and not self._all_tone_three(sub) - and finals_list[i][0][-1] == "3" - and finals_list[0][-1][-1] == "3" - ): - finals_list[0][-1] = finals_list[0][-1][:-1] + "2" - finals = sum(finals_list, []) - # split idiom into two words who's length is 2 - elif len(word) == 4: - finals_list = [finals[:2], finals[2:]] - finals = [] - for sub in finals_list: - if self._all_tone_three(sub): - sub[0] = sub[0][:-1] + "2" - finals += sub - - return finals - - def _all_tone_three(self, finals: List[str]) -> bool: - return all(x[-1] == "3" for x in finals) - - # merge "不" and the word behind it - # if don't merge, "不" sometimes appears alone according to jieba, which may occur sandhi error - def _merge_bu(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - last_word = "" - for word, pos in seg: - if last_word == "不": - word = last_word + word - if word != "不": - new_seg.append((word, pos)) - last_word = word[:] - if last_word == "不": - new_seg.append((last_word, "d")) - last_word = "" - return new_seg - - # function 1: merge "一" and reduplication words in it's left and right, e.g. "听","一","听" ->"听一听" - # function 2: merge single "一" and the word behind it - # if don't merge, "一" sometimes appears alone according to jieba, which may occur sandhi error - # e.g. 
- # input seg: [('听', 'v'), ('一', 'm'), ('听', 'v')] - # output seg: [['听一听', 'v']] - def _merge_yi(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - # function 1 - for i, (word, pos) in enumerate(seg): - if ( - i - 1 >= 0 - and word == "一" - and i + 1 < len(seg) - and seg[i - 1][0] == seg[i + 1][0] - and seg[i - 1][1] == "v" - ): - new_seg[i - 1][0] = new_seg[i - 1][0] + "一" + new_seg[i - 1][0] - else: - if ( - i - 2 >= 0 - and seg[i - 1][0] == "一" - and seg[i - 2][0] == word - and pos == "v" - ): - continue - else: - new_seg.append([word, pos]) - seg = new_seg - new_seg = [] - # function 2 - for i, (word, pos) in enumerate(seg): - if new_seg and new_seg[-1][0] == "一": - new_seg[-1][0] = new_seg[-1][0] + word - else: - new_seg.append([word, pos]) - return new_seg - - # the first and the second words are all_tone_three - def _merge_continuous_three_tones( - self, seg: List[Tuple[str, str]] - ) -> List[Tuple[str, str]]: - new_seg = [] - sub_finals_list = [ - lazy_pinyin(word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for (word, pos) in seg - ] - assert len(sub_finals_list) == len(seg) - merge_last = [False] * len(seg) - for i, (word, pos) in enumerate(seg): - if ( - i - 1 >= 0 - and self._all_tone_three(sub_finals_list[i - 1]) - and self._all_tone_three(sub_finals_list[i]) - and not merge_last[i - 1] - ): - # if the last word is reduplication, not merge, because reduplication need to be _neural_sandhi - if ( - not self._is_reduplication(seg[i - 1][0]) - and len(seg[i - 1][0]) + len(seg[i][0]) <= 3 - ): - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - merge_last[i] = True - else: - new_seg.append([word, pos]) - else: - new_seg.append([word, pos]) - - return new_seg - - def _is_reduplication(self, word: str) -> bool: - return len(word) == 2 and word[0] == word[1] - - # the last char of first word and the first char of second word is tone_three - def _merge_continuous_three_tones_2( - self, seg: List[Tuple[str, str]] - ) -> List[Tuple[str, str]]: - new_seg = [] - sub_finals_list = [ - lazy_pinyin(word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for (word, pos) in seg - ] - assert len(sub_finals_list) == len(seg) - merge_last = [False] * len(seg) - for i, (word, pos) in enumerate(seg): - if ( - i - 1 >= 0 - and sub_finals_list[i - 1][-1][-1] == "3" - and sub_finals_list[i][0][-1] == "3" - and not merge_last[i - 1] - ): - # if the last word is reduplication, not merge, because reduplication need to be _neural_sandhi - if ( - not self._is_reduplication(seg[i - 1][0]) - and len(seg[i - 1][0]) + len(seg[i][0]) <= 3 - ): - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - merge_last[i] = True - else: - new_seg.append([word, pos]) - else: - new_seg.append([word, pos]) - return new_seg - - def _merge_er(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and word == "儿" and seg[i - 1][0] != "#": - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - else: - new_seg.append([word, pos]) - return new_seg - - def _merge_reduplication(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - for i, (word, pos) in enumerate(seg): - if new_seg and word == new_seg[-1][0]: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - else: - new_seg.append([word, pos]) - return new_seg - - def pre_merge_for_modify(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - seg = self._merge_bu(seg) - try: - seg = self._merge_yi(seg) - except: - print("_merge_yi failed") - seg = 
self._merge_reduplication(seg) - seg = self._merge_continuous_three_tones(seg) - seg = self._merge_continuous_three_tones_2(seg) - seg = self._merge_er(seg) - return seg - - def modified_tone(self, word: str, pos: str, finals: List[str]) -> List[str]: - finals = self._bu_sandhi(word, finals) - finals = self._yi_sandhi(word, finals) - finals = self._neural_sandhi(word, pos, finals) - finals = self._three_sandhi(word, finals) - return finals diff --git a/spaces/chronopt-research/ViTExCo/app.py b/spaces/chronopt-research/ViTExCo/app.py deleted file mode 100644 index 52e4d9fd2fdbac5623040708fa77ba7b6b7b349d..0000000000000000000000000000000000000000 --- a/spaces/chronopt-research/ViTExCo/app.py +++ /dev/null @@ -1,215 +0,0 @@ -import numpy as np -import shutil -import os -import argparse -import torch -import glob -from tqdm import tqdm -from PIL import Image -from collections import OrderedDict -from src.models.vit.config import load_config -import torchvision.transforms as transforms -import cv2 -from skimage import io - -from src.models.CNN.ColorVidNet import GeneralColorVidNet -from src.models.vit.embed import GeneralEmbedModel -from src.models.CNN.NonlocalNet import GeneralWarpNet -from src.models.CNN.FrameColor import frame_colorization -from src.utils import ( - RGB2Lab, - ToTensor, - Normalize, - uncenter_l, - tensor_lab2rgb, - SquaredPadding, - UnpaddingSquare -) - -import gradio as gr - -def load_params(ckpt_file): - params = torch.load(ckpt_file, map_location=device) - new_params = [] - for key, value in params.items(): - new_params.append((key, value)) - return OrderedDict(new_params) - -def custom_transform(transforms, img): - for transform in transforms: - if isinstance(transform, SquaredPadding): - img,padding=transform(img, return_paddings=True) - else: - img = transform(img) - return img.to(device), padding - -def save_frames(predicted_rgb, video_name, frame_name): - if predicted_rgb is not None: - predicted_rgb = np.clip(predicted_rgb, 0, 255).astype(np.uint8) - # frame_path_parts = frame_path.split(os.sep) - # if os.path.exists(os.path.join(OUTPUT_RESULT_PATH, frame_path_parts[-2])): - # shutil.rmtree(os.path.join(OUTPUT_RESULT_PATH, frame_path_parts[-2])) - # os.makedirs(os.path.join(OUTPUT_RESULT_PATH, frame_path_parts[-2]), exist_ok=True) - predicted_rgb = np.transpose(predicted_rgb, (1,2,0)) - pil_img = Image.fromarray(predicted_rgb) - pil_img.save(os.path.join(OUTPUT_RESULT_PATH, video_name, frame_name)) - -def extract_frames_from_video(video_path): - cap = cv2.VideoCapture(video_path) - fps = cap.get(cv2.CAP_PROP_FPS) - - # remove if exists folder - output_frames_path = os.path.join(INPUT_VIDEO_FRAMES_PATH, os.path.basename(video_path)) - if os.path.exists(output_frames_path): - shutil.rmtree(output_frames_path) - - # make new folder - os.makedirs(output_frames_path) - - currentframe = 0 - frame_path_list = [] - while(True): - - # reading from frame - ret,frame = cap.read() - - if ret: - name = os.path.join(output_frames_path, f'{currentframe:09d}.jpg') - frame_path_list.append(name) - cv2.imwrite(name, frame) - currentframe += 1 - else: - break - - cap.release() - cv2.destroyAllWindows() - - return frame_path_list, fps - -def combine_frames_from_folder(frames_list_path, fps = 30): - frames_list = glob.glob(f'{frames_list_path}/*.jpg') - frames_list.sort() - - sample_shape = cv2.imread(frames_list[0]).shape - - output_video_path = os.path.join(frames_list_path, 'output_video.mp4') - out = cv2.VideoWriter(output_video_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, 
(sample_shape[1], sample_shape[0])) - for filename in frames_list: - img = cv2.imread(filename) - out.write(img) - - out.release() - return output_video_path - - -def upscale_image(I_current_rgb, I_current_ab_predict): - H, W = I_current_rgb.size - high_lab_transforms = [ - SquaredPadding(target_size=max(H,W)), - RGB2Lab(), - ToTensor(), - Normalize() - ] - # current_frame_pil_rgb = Image.fromarray(np.clip(I_current_rgb.squeeze(0).permute(1,2,0).cpu().numpy() * 255, 0, 255).astype('uint8')) - high_lab_current, paddings = custom_transform(high_lab_transforms, I_current_rgb) - high_lab_current = torch.unsqueeze(high_lab_current,dim=0).to(device) - high_l_current = high_lab_current[:, 0:1, :, :] - high_ab_current = high_lab_current[:, 1:3, :, :] - upsampler = torch.nn.Upsample(scale_factor=max(H,W)/224,mode="bilinear") - high_ab_predict = upsampler(I_current_ab_predict) - I_predict_rgb = tensor_lab2rgb(torch.cat((uncenter_l(high_l_current), high_ab_predict), dim=1)) - upadded = UnpaddingSquare() - I_predict_rgb = upadded(I_predict_rgb, paddings) - return I_predict_rgb - -def colorize_video(video_path, ref_np): - frames_list, fps = extract_frames_from_video(video_path) - - frame_ref = Image.fromarray(ref_np).convert("RGB") - I_last_lab_predict = None - IB_lab, IB_paddings = custom_transform(transforms, frame_ref) - IB_lab = IB_lab.unsqueeze(0).to(device) - IB_l = IB_lab[:, 0:1, :, :] - IB_ab = IB_lab[:, 1:3, :, :] - - with torch.no_grad(): - I_reference_lab = IB_lab - I_reference_l = I_reference_lab[:, 0:1, :, :] - I_reference_ab = I_reference_lab[:, 1:3, :, :] - I_reference_rgb = tensor_lab2rgb(torch.cat((uncenter_l(I_reference_l), I_reference_ab), dim=1)).to(device) - features_B = embed_net(I_reference_rgb) - - video_path_parts = frames_list[0].split(os.sep) - - if os.path.exists(os.path.join(OUTPUT_RESULT_PATH, video_path_parts[-2])): - shutil.rmtree(os.path.join(OUTPUT_RESULT_PATH, video_path_parts[-2])) - os.makedirs(os.path.join(OUTPUT_RESULT_PATH, video_path_parts[-2]), exist_ok=True) - - for frame_path in tqdm(frames_list): - curr_frame = Image.open(frame_path).convert("RGB") - IA_lab, IA_paddings = custom_transform(transforms, curr_frame) - IA_lab = IA_lab.unsqueeze(0).to(device) - IA_l = IA_lab[:, 0:1, :, :] - IA_ab = IA_lab[:, 1:3, :, :] - - if I_last_lab_predict is None: - I_last_lab_predict = torch.zeros_like(IA_lab).to(device) - - with torch.no_grad(): - I_current_lab = IA_lab - I_current_ab_predict, _ = frame_colorization( - IA_l, - I_reference_lab, - I_last_lab_predict, - features_B, - embed_net, - nonlocal_net, - colornet, - luminance_noise=0, - temperature=1e-10, - joint_training=False - ) - I_last_lab_predict = torch.cat((IA_l, I_current_ab_predict), dim=1) - - # IA_predict_rgb = tensor_lab2rgb(torch.cat((uncenter_l(IA_l), I_current_ab_predict), dim=1)) - IA_predict_rgb = upscale_image(curr_frame, I_current_ab_predict) - #IA_predict_rgb = torch.nn.functional.upsample_bilinear(IA_predict_rgb, scale_factor=2) - save_frames(IA_predict_rgb.squeeze(0).cpu().numpy() * 255, video_path_parts[-2], os.path.basename(frame_path)) - return combine_frames_from_folder(os.path.join(OUTPUT_RESULT_PATH, video_path_parts[-2]), fps) - -if __name__ == '__main__': - # Init global variables - device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - INPUT_VIDEO_FRAMES_PATH = 'inputs' - OUTPUT_RESULT_PATH = 'outputs' - weight_path = 'checkpoints' - - embed_net=GeneralEmbedModel(pretrained_model="swin-tiny", device=device).to(device) - nonlocal_net = 
GeneralWarpNet(feature_channel=128).to(device) - colornet=GeneralColorVidNet(7).to(device) - - embed_net.eval() - nonlocal_net.eval() - colornet.eval() - - # Load weights - # embed_net_params = load_params(os.path.join(weight_path, "embed_net.pth")) - nonlocal_net_params = load_params(os.path.join(weight_path, "nonlocal_net.pth")) - colornet_params = load_params(os.path.join(weight_path, "colornet.pth")) - - # embed_net.load_state_dict(embed_net_params, strict=True) - nonlocal_net.load_state_dict(nonlocal_net_params, strict=True) - colornet.load_state_dict(colornet_params, strict=True) - - transforms = [SquaredPadding(target_size=224), - RGB2Lab(), - ToTensor(), - Normalize()] - - #examples = [[vid, ref] for vid, ref in zip(sorted(glob.glob('examples/*/*.mp4')), sorted(glob.glob('examples/*/*.jpg')))] - demo = gr.Interface(colorize_video, - inputs=[gr.Video(), gr.Image()], - outputs="playable_video")#, - #examples=examples, - #cache_examples=True) - demo.launch() diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/upload_button.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/upload_button.py deleted file mode 100644 index fb75d5a3723fa5247ae864114a355b60c9fb870d..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/upload_button.py +++ /dev/null @@ -1,211 +0,0 @@ -"""gr.UploadButton() component.""" - -from __future__ import annotations - -import tempfile -import warnings -from typing import Any, Callable, Literal - -from gradio_client import utils as client_utils -from gradio_client.documentation import document, set_documentation_group -from gradio_client.serializing import FileSerializable - -from gradio import utils -from gradio.components.base import Component, IOComponent, _Keywords -from gradio.deprecation import warn_deprecation, warn_style_method_deprecation -from gradio.events import Clickable, Uploadable - -set_documentation_group("component") - - -@document() -class UploadButton(Clickable, Uploadable, IOComponent, FileSerializable): - """ - Used to create an upload button, when cicked allows a user to upload files that satisfy the specified file type or generic files (if file_type not set). - Preprocessing: passes the uploaded file as a {file-object} or {List[file-object]} depending on `file_count` (or a {bytes}/{List{bytes}} depending on `type`) - Postprocessing: expects function to return a {str} path to a file, or {List[str]} consisting of paths to files. - Examples-format: a {str} path to a local file that populates the component. - Demos: upload_button - """ - - def __init__( - self, - label: str = "Upload a File", - value: str | list[str] | Callable | None = None, - *, - variant: Literal["primary", "secondary", "stop"] = "secondary", - visible: bool = True, - size: Literal["sm", "lg"] | None = None, - scale: int | None = None, - min_width: int | None = None, - interactive: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - type: Literal["file", "bytes"] = "file", - file_count: Literal["single", "multiple", "directory"] = "single", - file_types: list[str] | None = None, - **kwargs, - ): - """ - Parameters: - label: Text to display on the button. Defaults to "Upload a File". - value: File or list of files to upload by default. - variant: 'primary' for main call-to-action, 'secondary' for a more subdued style, 'stop' for a stop button. 
- visible: If False, component will be hidden. - size: Size of the button. Can be "sm" or "lg". - scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer. - min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first. - interactive: If False, the UploadButton will be in a disabled state. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. - type: Type of value to be returned by component. "file" returns a temporary file object with the same base name as the uploaded file, whose full path can be retrieved by file_obj.name, "binary" returns an bytes object. - file_count: if single, allows user to upload one file. If "multiple", user uploads multiple files. If "directory", user uploads all files in selected directory. Return type will be list for each file in case of "multiple" or "directory". - file_types: List of type of files to be uploaded. "file" allows any file to be uploaded, "image" allows only image files to be uploaded, "audio" allows only audio files to be uploaded, "video" allows only video files to be uploaded, "text" allows only text files to be uploaded. - """ - self.type = type - self.file_count = file_count - if file_count == "directory" and file_types is not None: - warnings.warn( - "The `file_types` parameter is ignored when `file_count` is 'directory'." - ) - if file_types is not None and not isinstance(file_types, list): - raise ValueError( - f"Parameter file_types must be a list. 
Received {file_types.__class__.__name__}" - ) - self.size = size - self.file_types = file_types - self.label = label - self.variant = variant - IOComponent.__init__( - self, - label=label, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - value=value, - scale=scale, - min_width=min_width, - interactive=interactive, - **kwargs, - ) - - def get_config(self): - return { - "label": self.label, - "value": self.value, - "size": self.size, - "file_count": self.file_count, - "file_types": self.file_types, - "scale": self.scale, - "min_width": self.min_width, - "variant": self.variant, - "interactive": self.interactive, - **Component.get_config(self), - } - - @staticmethod - def update( - value: str - | list[str] - | Literal[_Keywords.NO_VALUE] - | None = _Keywords.NO_VALUE, - size: Literal["sm", "lg"] | None = None, - variant: Literal["primary", "secondary", "stop"] | None = None, - interactive: bool | None = None, - visible: bool | None = None, - scale: int | None = None, - min_width: int | None = None, - ): - return { - "variant": variant, - "interactive": interactive, - "size": size, - "visible": visible, - "value": value, - "scale": scale, - "min_width": min_width, - "__type__": "update", - } - - def preprocess( - self, x: list[dict[str, Any]] | None - ) -> ( - bytes - | tempfile._TemporaryFileWrapper - | list[bytes | tempfile._TemporaryFileWrapper] - | None - ): - """ - Parameters: - x: List of JSON objects with filename as 'name' property and base64 data as 'data' property - Returns: - File objects in requested format - """ - if x is None: - return None - - def process_single_file(f) -> bytes | tempfile._TemporaryFileWrapper: - file_name, data, is_file = ( - f["name"], - f["data"], - f.get("is_file", False), - ) - if self.type == "file": - if is_file: - path = self.make_temp_copy_if_needed(file_name) - else: - data, _ = client_utils.decode_base64_to_binary(data) - path = self.file_bytes_to_file( - data, dir=self.DEFAULT_TEMP_DIR, file_name=file_name - ) - path = str(utils.abspath(path)) - self.temp_files.add(path) - file = tempfile.NamedTemporaryFile( - delete=False, dir=self.DEFAULT_TEMP_DIR - ) - file.name = path - file.orig_name = file_name # type: ignore - return file - elif self.type == "bytes": - if is_file: - with open(file_name, "rb") as file_data: - return file_data.read() - return client_utils.decode_base64_to_binary(data)[0] - else: - raise ValueError( - "Unknown type: " - + str(self.type) - + ". Please choose from: 'file', 'bytes'." - ) - - if self.file_count == "single": - if isinstance(x, list): - return process_single_file(x[0]) - else: - return process_single_file(x) - else: - if isinstance(x, list): - return [process_single_file(f) for f in x] - else: - return process_single_file(x) - - def style( - self, - *, - full_width: bool | None = None, - size: Literal["sm", "lg"] | None = None, - **kwargs, - ): - """ - This method is deprecated. Please set these arguments in the constructor instead. - """ - warn_style_method_deprecation() - if full_width is not None: - warn_deprecation( - "Use `scale` in place of full_width in the constructor. " - "scale=1 will make the button expand, whereas 0 will not." 
- ) - self.scale = 1 if full_width else None - if size is not None: - self.size = size - return self diff --git a/spaces/cifkao/context-probing/README.md b/spaces/cifkao/context-probing/README.md deleted file mode 100644 index 91cf7861e877ddf264db20ef666373d69bccf502..0000000000000000000000000000000000000000 --- a/spaces/cifkao/context-probing/README.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -title: Context Length Probing -emoji: 🔎 -colorFrom: green -colorTo: indigo -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false -license: mit -models: -- distilgpt2 -- gpt2 -- EleutherAI/gpt-neo-125m -- roneneldan/TinyStories-8M -- roneneldan/TinyStories-33M ---- diff --git a/spaces/cihyFjudo/fairness-paper-search/Download Free Latest Games For Nokia 6300 The Ultimate Guide to Gaming on Your Phone.md b/spaces/cihyFjudo/fairness-paper-search/Download Free Latest Games For Nokia 6300 The Ultimate Guide to Gaming on Your Phone.md deleted file mode 100644 index 3f0ce2a5fb1033c99934d7af74f4413fd7447956..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Download Free Latest Games For Nokia 6300 The Ultimate Guide to Gaming on Your Phone.md +++ /dev/null @@ -1,5 +0,0 @@ -
    -

Found 21706 Free Nokia 6300 Java Games. Download Nokia 6300 Software for free to your mobile phone or tablet. Screens: Touchscreen, 128x128, 128x160, 176x204, 176x208, 176x220, 208x208, 240x320, 240x400, 320x240, 352x416, 360x640, 480x800. Sort: New, Popular, Top Rated. Folders (All): Action, Adventure, Arcade, Casino / Card, Casual, Other, Puzzle / Board, Racing, Role Playing, Romance, Shoot Em Up, Simulation, Sports, Strategy

&#10145; Bride's Resentment (Tooth Bride) (English Translation) (240x320), by addygrover

    -

    Download Free Latest Games For Nokia 6300


DOWNLOAD: https://tinurli.com/2uwj3Z



    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/English Hate Story 3 Movie Download Bluray Hindi Movies A Revenge Saga with a Twist.md b/spaces/cihyFjudo/fairness-paper-search/English Hate Story 3 Movie Download Bluray Hindi Movies A Revenge Saga with a Twist.md deleted file mode 100644 index 18478f916ea547b804cf9d5f0c6d50c72d3d7afb..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/English Hate Story 3 Movie Download Bluray Hindi Movies A Revenge Saga with a Twist.md +++ /dev/null @@ -1,6 +0,0 @@ -

    English Hate Story 3 Movie Download Bluray Hindi Movies


Download: https://tinurli.com/2uwhNL



- -
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/FULL Architect 3D Ultimate Plus 2017 19.0.1.1001 License Keys A Complete Guide.md b/spaces/cihyFjudo/fairness-paper-search/FULL Architect 3D Ultimate Plus 2017 19.0.1.1001 License Keys A Complete Guide.md deleted file mode 100644 index fb76257ca4bb915b19b5fc51def37eae05b1bf4d..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/FULL Architect 3D Ultimate Plus 2017 19.0.1.1001 License Keys A Complete Guide.md +++ /dev/null @@ -1,6 +0,0 @@ -
    -

Same here... Quite the mistake. I booted Ophcrack ASAP when I got to my new school this year and cracked the pass within seconds; it's "envision", WTF lol. But I also think they should disable LM hash in the first place, because nothing on the network is older than XP SP3, so keeping LM hashes around is nothing but a security liability.
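On the defensive side, the LM hash can simply be switched off so a rainbow-table tool has nothing that weak to work with: Windows stops storing it once the NoLMHash value under HKLM\SYSTEM\CurrentControlSet\Control\Lsa is set to 1, and existing accounts lose the old hash at their next password change. A minimal Python sketch of that registry change, using only the standard winreg module and assuming it is run elevated on the Windows machine being hardened:

import winreg

# Open the LSA key with write access (requires administrator rights, Windows only)
key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Control\Lsa",
    0,
    winreg.KEY_SET_VALUE,
)
# NoLMHash = 1 tells Windows not to store the LM hash from the next password change on
winreg.SetValueEx(key, "NoLMHash", 0, winreg.REG_DWORD, 1)
winreg.CloseKey(key)

The same switch is exposed in Group Policy as "Network security: Do not store LAN Manager hash value on next password change", which is the more usual way to roll it out across a whole network.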

    -

    Mediafire Lanschool Cracked


Download Zip: https://tinurli.com/2uwjTu



    -

Student for LanSchool Classic is a free Android app published in the Teaching & Training Tools category, part of Education.

The company that develops Student for LanSchool Classic is Lenovo Software. The latest version released by its developer is 9.1.0.66. This app was rated by 1 user of our site and has an average rating of 0.5.

To install Student for LanSchool Classic on your Android device, just click the green Continue To App button above to start the installation process. The app has been listed on our website since 2021-08-11 and has been downloaded 1134 times. We have already checked whether the download link is safe; however, for your own protection we recommend that you scan the downloaded app with your antivirus. Your antivirus may flag Student for LanSchool Classic as malware if the download link to com.lanschool.student is broken.

    How to install Student for LanSchool Classic on your Android device:

&#10145; Click on the Continue To App button on our website. This will redirect you to Google Play.
&#10145; Once Student for LanSchool Classic is shown in the Google Play listing on your Android device, you can start its download and installation. Tap on the Install button located below the search bar and to the right of the app icon.
&#10145; A pop-up window with the permissions required by Student for LanSchool Classic will be shown. Click on Accept to continue the process.
&#10145; Student for LanSchool Classic will be downloaded onto your device, displaying a progress bar. Once the download completes, the installation will start, and you'll get a notification when it is finished.

    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Manual For Singer 6038C Learn How to Sew with Your Machine.md b/spaces/cihyFjudo/fairness-paper-search/Manual For Singer 6038C Learn How to Sew with Your Machine.md deleted file mode 100644 index 7e2fb4f99da032bb6d5e196eb38b7d92d49cfe25..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Manual For Singer 6038C Learn How to Sew with Your Machine.md +++ /dev/null @@ -1,10 +0,0 @@ -
    -

    Need a manual for your Singer 6038 Sewing Machine? Below you can view and download the PDF manual for free. There are also frequently asked questions, a product rating and feedback from users to enable you to optimally use your product. If this is not the manual you want, please contact us.

    -

    According to the 20U Singer Sewing Machine manual, "How often you will need to clean and lubricate the machine will depend on how often you will use it. When in regular use, the machine should be cleaned periodically to remove lint and fluff which may have accumulated around the working parts."

    -

    Manual For Singer 6038C


    Download Zip 🆗 https://tinurli.com/2uwkzP



    -

Unplug the machine. Open the motor compartment. Place a paper towel beneath the presser foot, covering the arm/table of the machine. Put the sewing machine oil nozzle against the motor shaft (a long arm that goes up and down) and squirt a drop of oil. Squirt a drop of oil against any moving cog, wheel or part. Use the hand wheel to manually move the Singer machine gears, distributing the oil. Wipe excess oil away with paper towels. Plug in the machine and test.

    -

    Unplug the machine. Consult the Singer sewing machine manual for the belt replacement instructions and replace the belt. Belts that are too tight or too loose will slow the machine. Plug in the machine and test. If the machine continues to run slowly, take the machine to a Singer repair shop.

    -

You will see a list of service manuals or instruction manuals. If the manual you are requesting is not available, it may be substituted with a manual for a similar machine that covers the same information.

    -

    Please Note:
Your Sewing Machine or Serger Manual will arrive within 7-9 working days, and generally sooner, as many ship the same day. On the other hand, a hard-to-find manual will take a minimum of 2 to 3 weeks to arrive, since it takes time for the manufacturer to locate your specific manual.

    Also note that some manuals may be photocopies and not originals because some manuals are out-of-print.

    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Pronest 2012 Full License Crack !!HOT!! 41 31.md b/spaces/cihyFjudo/fairness-paper-search/Pronest 2012 Full License Crack !!HOT!! 41 31.md deleted file mode 100644 index b416794d42e5ee8ff7861274d5900fee3b135bc1..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Pronest 2012 Full License Crack !!HOT!! 41 31.md +++ /dev/null @@ -1,70 +0,0 @@ -## Pronest 2012 Full License Crack 41 31 - - - - - - ![Pronest 2012 Full License Crack !!HOT!! 41 31](https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcSuxTjotovaQeNfF8r3Co8b6KfPSUBPAxsYceKc3hQJXnbTAW8cBfyOykU) - - - - - -**Pronest 2012 Full License Crack 41 31 >>>>> [https://venemena.blogspot.com/?download=2txRfK](https://venemena.blogspot.com/?download=2txRfK)** - - - - - - - - - - - - Here is a possible title and article with SEO optimization and HTML formatting for the keyword "Pronest 2012 Full License Crack 41 31": - -# How to Get ProNest 2012 Full License Crack 41 31 for Free - - - -ProNest 2012 is a powerful nesting software that helps you optimize your cutting process and reduce material waste. It supports various cutting technologies, such as plasma, laser, waterjet, oxyfuel, and punch. It also integrates with Hypertherm SureCut™ technology, which provides advanced features like True Hole®, Rapid Part™, and True Bevel™. - - - -However, ProNest 2012 is not a cheap software. It requires a license key to activate and use all its features. If you are looking for a way to get ProNest 2012 full license crack 41 31 for free, you may be tempted to download it from some shady websites or torrent sites. But beware, these sources may contain viruses, malware, or spyware that can harm your computer or steal your personal information. - - - -So, how can you get ProNest 2012 full license crack 41 31 for free without risking your security and privacy? The answer is simple: you can't. There is no such thing as a free lunch. If you want to use ProNest 2012 legally and safely, you have to purchase it from the official website or an authorized reseller. This way, you can enjoy the benefits of ProNest 2012 without worrying about any legal or technical issues. - - - -ProNest 2012 is worth every penny you spend on it. It can help you improve your productivity, quality, and profitability. It can also save you time and money by reducing material waste and cutting costs. It is compatible with most CAD/CAM software and CNC machines. It has a user-friendly interface and a comprehensive online help system. - - - -If you are still not convinced, you can try ProNest 2012 for free for 14 days. You can download the trial version from the official website and see for yourself how ProNest 2012 can enhance your cutting performance. You can also contact the customer support team if you have any questions or issues. - - - -Don't fall for the trap of ProNest 2012 full license crack 41 31. It is illegal, unsafe, and unreliable. Instead, invest in ProNest 2012 and get the best nesting software for your cutting needs. - -Here are a few more paragraphs for the article: - -ProNest 2012 is not only a nesting software, but also a comprehensive CAD/CAM solution that can help you design, edit, and optimize your parts for cutting. You can use the integrated 2D CAD program to create and modify CAD files, or import them from various industry-standard file formats. You can also use the Variable Shape Parts library to generate common parts from templates. 
ProNest 2012 can automatically correct and smooth CAD files, map CAD layers to processes, and update nests for part revisions. - - - -ProNest 2012 also offers a work order processing module that can help you manage your cutting jobs more efficiently and effectively. You can create work orders with multiple parts and plates of different grades and gauges, and assign them to different machines and operators. You can also track the status of your work orders, generate reports, and export data to your ERP or MRP system. ProNest 2012 can help you improve your material utilization, reduce your inventory, and increase your on-time delivery. - - - -ProNest 2012 is compatible with all major brands and models of cutting machines, including plasma, laser, oxyfuel, waterjet, and combination punch. It supports advanced cutting features like beveling, drilling, tapping, marking, and repositioning. It also supports Hypertherm's SureCut™ technologies, which can improve your cut quality and reduce your operating costs. For example, True Hole® technology can produce significantly better hole quality than conventional plasma cutting methods. - - dfd1c89656 - - - - - diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/PsdImagePlugin.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/PsdImagePlugin.py deleted file mode 100644 index 5a5d60d568c78b1546d0564b38a64fec2e2ca0b1..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/PsdImagePlugin.py +++ /dev/null @@ -1,303 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# Adobe PSD 2.5/3.0 file handling -# -# History: -# 1995-09-01 fl Created -# 1997-01-03 fl Read most PSD images -# 1997-01-18 fl Fixed P and CMYK support -# 2001-10-21 fl Added seek/tell support (for layers) -# -# Copyright (c) 1997-2001 by Secret Labs AB. -# Copyright (c) 1995-2001 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -import io - -from . import Image, ImageFile, ImagePalette -from ._binary import i8 -from ._binary import i16be as i16 -from ._binary import i32be as i32 -from ._binary import si16be as si16 - -MODES = { - # (photoshop mode, bits) -> (pil mode, required channels) - (0, 1): ("1", 1), - (0, 8): ("L", 1), - (1, 8): ("L", 1), - (2, 8): ("P", 1), - (3, 8): ("RGB", 3), - (4, 8): ("CMYK", 4), - (7, 8): ("L", 1), # FIXME: multilayer - (8, 8): ("L", 1), # duotone - (9, 8): ("LAB", 3), -} - - -# --------------------------------------------------------------------. -# read PSD images - - -def _accept(prefix): - return prefix[:4] == b"8BPS" - - -## -# Image plugin for Photoshop images. 
- - -class PsdImageFile(ImageFile.ImageFile): - format = "PSD" - format_description = "Adobe Photoshop" - _close_exclusive_fp_after_loading = False - - def _open(self): - read = self.fp.read - - # - # header - - s = read(26) - if not _accept(s) or i16(s, 4) != 1: - msg = "not a PSD file" - raise SyntaxError(msg) - - psd_bits = i16(s, 22) - psd_channels = i16(s, 12) - psd_mode = i16(s, 24) - - mode, channels = MODES[(psd_mode, psd_bits)] - - if channels > psd_channels: - msg = "not enough channels" - raise OSError(msg) - if mode == "RGB" and psd_channels == 4: - mode = "RGBA" - channels = 4 - - self.mode = mode - self._size = i32(s, 18), i32(s, 14) - - # - # color mode data - - size = i32(read(4)) - if size: - data = read(size) - if mode == "P" and size == 768: - self.palette = ImagePalette.raw("RGB;L", data) - - # - # image resources - - self.resources = [] - - size = i32(read(4)) - if size: - # load resources - end = self.fp.tell() + size - while self.fp.tell() < end: - read(4) # signature - id = i16(read(2)) - name = read(i8(read(1))) - if not (len(name) & 1): - read(1) # padding - data = read(i32(read(4))) - if len(data) & 1: - read(1) # padding - self.resources.append((id, name, data)) - if id == 1039: # ICC profile - self.info["icc_profile"] = data - - # - # layer and mask information - - self.layers = [] - - size = i32(read(4)) - if size: - end = self.fp.tell() + size - size = i32(read(4)) - if size: - _layer_data = io.BytesIO(ImageFile._safe_read(self.fp, size)) - self.layers = _layerinfo(_layer_data, size) - self.fp.seek(end) - self.n_frames = len(self.layers) - self.is_animated = self.n_frames > 1 - - # - # image descriptor - - self.tile = _maketile(self.fp, mode, (0, 0) + self.size, channels) - - # keep the file open - self._fp = self.fp - self.frame = 1 - self._min_frame = 1 - - def seek(self, layer): - if not self._seek_check(layer): - return - - # seek to given layer (1..max) - try: - name, mode, bbox, tile = self.layers[layer - 1] - self.mode = mode - self.tile = tile - self.frame = layer - self.fp = self._fp - return name, bbox - except IndexError as e: - msg = "no such layer" - raise EOFError(msg) from e - - def tell(self): - # return layer number (0=image, 1..max=layers) - return self.frame - - -def _layerinfo(fp, ct_bytes): - # read layerinfo block - layers = [] - - def read(size): - return ImageFile._safe_read(fp, size) - - ct = si16(read(2)) - - # sanity check - if ct_bytes < (abs(ct) * 20): - msg = "Layer block too short for number of layers requested" - raise SyntaxError(msg) - - for _ in range(abs(ct)): - # bounding box - y0 = i32(read(4)) - x0 = i32(read(4)) - y1 = i32(read(4)) - x1 = i32(read(4)) - - # image info - mode = [] - ct_types = i16(read(2)) - types = list(range(ct_types)) - if len(types) > 4: - continue - - for _ in types: - type = i16(read(2)) - - if type == 65535: - m = "A" - else: - m = "RGBA"[type] - - mode.append(m) - read(4) # size - - # figure out the image mode - mode.sort() - if mode == ["R"]: - mode = "L" - elif mode == ["B", "G", "R"]: - mode = "RGB" - elif mode == ["A", "B", "G", "R"]: - mode = "RGBA" - else: - mode = None # unknown - - # skip over blend flags and extra information - read(12) # filler - name = "" - size = i32(read(4)) # length of the extra data field - if size: - data_end = fp.tell() + size - - length = i32(read(4)) - if length: - fp.seek(length - 16, io.SEEK_CUR) - - length = i32(read(4)) - if length: - fp.seek(length, io.SEEK_CUR) - - length = i8(read(1)) - if length: - # Don't know the proper encoding, - # Latin-1 should be 
a good guess - name = read(length).decode("latin-1", "replace") - - fp.seek(data_end) - layers.append((name, mode, (x0, y0, x1, y1))) - - # get tiles - for i, (name, mode, bbox) in enumerate(layers): - tile = [] - for m in mode: - t = _maketile(fp, m, bbox, 1) - if t: - tile.extend(t) - layers[i] = name, mode, bbox, tile - - return layers - - -def _maketile(file, mode, bbox, channels): - tile = None - read = file.read - - compression = i16(read(2)) - - xsize = bbox[2] - bbox[0] - ysize = bbox[3] - bbox[1] - - offset = file.tell() - - if compression == 0: - # - # raw compression - tile = [] - for channel in range(channels): - layer = mode[channel] - if mode == "CMYK": - layer += ";I" - tile.append(("raw", bbox, offset, layer)) - offset = offset + xsize * ysize - - elif compression == 1: - # - # packbits compression - i = 0 - tile = [] - bytecount = read(channels * ysize * 2) - offset = file.tell() - for channel in range(channels): - layer = mode[channel] - if mode == "CMYK": - layer += ";I" - tile.append(("packbits", bbox, offset, layer)) - for y in range(ysize): - offset = offset + i16(bytecount, i) - i += 2 - - file.seek(offset) - - if offset & 1: - read(1) # padding - - return tile - - -# -------------------------------------------------------------------- -# registry - - -Image.register_open(PsdImageFile.format, PsdImageFile, _accept) - -Image.register_extension(PsdImageFile.format, ".psd") - -Image.register_mime(PsdImageFile.format, "image/vnd.adobe.photoshop") diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/async_timeout/__init__.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/async_timeout/__init__.py deleted file mode 100644 index 1ffb069fce9b2b9a03515404155a7e5cc439484a..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/async_timeout/__init__.py +++ /dev/null @@ -1,239 +0,0 @@ -import asyncio -import enum -import sys -import warnings -from types import TracebackType -from typing import Optional, Type - - -if sys.version_info >= (3, 8): - from typing import final -else: - from typing_extensions import final - - -if sys.version_info >= (3, 11): - - def _uncancel_task(task: "asyncio.Task[object]") -> None: - task.uncancel() - -else: - - def _uncancel_task(task: "asyncio.Task[object]") -> None: - pass - - -__version__ = "4.0.3" - - -__all__ = ("timeout", "timeout_at", "Timeout") - - -def timeout(delay: Optional[float]) -> "Timeout": - """timeout context manager. - - Useful in cases when you want to apply timeout logic around block - of code or in cases when asyncio.wait_for is not suitable. For example: - - >>> async with timeout(0.001): - ... async with aiohttp.get('https://github.com') as r: - ... await r.text() - - - delay - value in seconds or None to disable timeout logic - """ - loop = asyncio.get_running_loop() - if delay is not None: - deadline = loop.time() + delay # type: Optional[float] - else: - deadline = None - return Timeout(deadline, loop) - - -def timeout_at(deadline: Optional[float]) -> "Timeout": - """Schedule the timeout at absolute time. - - deadline argument points on the time in the same clock system - as loop.time(). - - Please note: it is not POSIX time but a time with - undefined starting base, e.g. the time of the system power on. - - >>> async with timeout_at(loop.time() + 10): - ... async with aiohttp.get('https://github.com') as r: - ... 
await r.text() - - - """ - loop = asyncio.get_running_loop() - return Timeout(deadline, loop) - - -class _State(enum.Enum): - INIT = "INIT" - ENTER = "ENTER" - TIMEOUT = "TIMEOUT" - EXIT = "EXIT" - - -@final -class Timeout: - # Internal class, please don't instantiate it directly - # Use timeout() and timeout_at() public factories instead. - # - # Implementation note: `async with timeout()` is preferred - # over `with timeout()`. - # While technically the Timeout class implementation - # doesn't need to be async at all, - # the `async with` statement explicitly points that - # the context manager should be used from async function context. - # - # This design allows to avoid many silly misusages. - # - # TimeoutError is raised immediately when scheduled - # if the deadline is passed. - # The purpose is to time out as soon as possible - # without waiting for the next await expression. - - __slots__ = ("_deadline", "_loop", "_state", "_timeout_handler", "_task") - - def __init__( - self, deadline: Optional[float], loop: asyncio.AbstractEventLoop - ) -> None: - self._loop = loop - self._state = _State.INIT - - self._task: Optional["asyncio.Task[object]"] = None - self._timeout_handler = None # type: Optional[asyncio.Handle] - if deadline is None: - self._deadline = None # type: Optional[float] - else: - self.update(deadline) - - def __enter__(self) -> "Timeout": - warnings.warn( - "with timeout() is deprecated, use async with timeout() instead", - DeprecationWarning, - stacklevel=2, - ) - self._do_enter() - return self - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> Optional[bool]: - self._do_exit(exc_type) - return None - - async def __aenter__(self) -> "Timeout": - self._do_enter() - return self - - async def __aexit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> Optional[bool]: - self._do_exit(exc_type) - return None - - @property - def expired(self) -> bool: - """Is timeout expired during execution?""" - return self._state == _State.TIMEOUT - - @property - def deadline(self) -> Optional[float]: - return self._deadline - - def reject(self) -> None: - """Reject scheduled timeout if any.""" - # cancel is maybe better name but - # task.cancel() raises CancelledError in asyncio world. - if self._state not in (_State.INIT, _State.ENTER): - raise RuntimeError(f"invalid state {self._state.value}") - self._reject() - - def _reject(self) -> None: - self._task = None - if self._timeout_handler is not None: - self._timeout_handler.cancel() - self._timeout_handler = None - - def shift(self, delay: float) -> None: - """Advance timeout on delay seconds. - - The delay can be negative. - - Raise RuntimeError if shift is called when deadline is not scheduled - """ - deadline = self._deadline - if deadline is None: - raise RuntimeError("cannot shift timeout if deadline is not scheduled") - self.update(deadline + delay) - - def update(self, deadline: float) -> None: - """Set deadline to absolute value. - - deadline argument points on the time in the same clock system - as loop.time(). - - If new deadline is in the past the timeout is raised immediately. - - Please note: it is not POSIX time but a time with - undefined starting base, e.g. the time of the system power on. 
- """ - if self._state == _State.EXIT: - raise RuntimeError("cannot reschedule after exit from context manager") - if self._state == _State.TIMEOUT: - raise RuntimeError("cannot reschedule expired timeout") - if self._timeout_handler is not None: - self._timeout_handler.cancel() - self._deadline = deadline - if self._state != _State.INIT: - self._reschedule() - - def _reschedule(self) -> None: - assert self._state == _State.ENTER - deadline = self._deadline - if deadline is None: - return - - now = self._loop.time() - if self._timeout_handler is not None: - self._timeout_handler.cancel() - - self._task = asyncio.current_task() - if deadline <= now: - self._timeout_handler = self._loop.call_soon(self._on_timeout) - else: - self._timeout_handler = self._loop.call_at(deadline, self._on_timeout) - - def _do_enter(self) -> None: - if self._state != _State.INIT: - raise RuntimeError(f"invalid state {self._state.value}") - self._state = _State.ENTER - self._reschedule() - - def _do_exit(self, exc_type: Optional[Type[BaseException]]) -> None: - if exc_type is asyncio.CancelledError and self._state == _State.TIMEOUT: - assert self._task is not None - _uncancel_task(self._task) - self._timeout_handler = None - self._task = None - raise asyncio.TimeoutError - # timeout has not expired - self._state = _State.EXIT - self._reject() - return None - - def _on_timeout(self) -> None: - assert self._task is not None - self._task.cancel() - self._state = _State.TIMEOUT - # drop the reference early - self._timeout_handler = None diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/cffi/commontypes.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/cffi/commontypes.py deleted file mode 100644 index 8ec97c756a4b1023fd3963dd39b706f7c0e34373..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/cffi/commontypes.py +++ /dev/null @@ -1,80 +0,0 @@ -import sys -from . import model -from .error import FFIError - - -COMMON_TYPES = {} - -try: - # fetch "bool" and all simple Windows types - from _cffi_backend import _get_common_types - _get_common_types(COMMON_TYPES) -except ImportError: - pass - -COMMON_TYPES['FILE'] = model.unknown_type('FILE', '_IO_FILE') -COMMON_TYPES['bool'] = '_Bool' # in case we got ImportError above - -for _type in model.PrimitiveType.ALL_PRIMITIVE_TYPES: - if _type.endswith('_t'): - COMMON_TYPES[_type] = _type -del _type - -_CACHE = {} - -def resolve_common_type(parser, commontype): - try: - return _CACHE[commontype] - except KeyError: - cdecl = COMMON_TYPES.get(commontype, commontype) - if not isinstance(cdecl, str): - result, quals = cdecl, 0 # cdecl is already a BaseType - elif cdecl in model.PrimitiveType.ALL_PRIMITIVE_TYPES: - result, quals = model.PrimitiveType(cdecl), 0 - elif cdecl == 'set-unicode-needed': - raise FFIError("The Windows type %r is only available after " - "you call ffi.set_unicode()" % (commontype,)) - else: - if commontype == cdecl: - raise FFIError( - "Unsupported type: %r. Please look at " - "http://cffi.readthedocs.io/en/latest/cdef.html#ffi-cdef-limitations " - "and file an issue if you think this type should really " - "be supported." 
% (commontype,)) - result, quals = parser.parse_type_and_quals(cdecl) # recursive - - assert isinstance(result, model.BaseTypeByIdentity) - _CACHE[commontype] = result, quals - return result, quals - - -# ____________________________________________________________ -# extra types for Windows (most of them are in commontypes.c) - - -def win_common_types(): - return { - "UNICODE_STRING": model.StructType( - "_UNICODE_STRING", - ["Length", - "MaximumLength", - "Buffer"], - [model.PrimitiveType("unsigned short"), - model.PrimitiveType("unsigned short"), - model.PointerType(model.PrimitiveType("wchar_t"))], - [-1, -1, -1]), - "PUNICODE_STRING": "UNICODE_STRING *", - "PCUNICODE_STRING": "const UNICODE_STRING *", - - "TBYTE": "set-unicode-needed", - "TCHAR": "set-unicode-needed", - "LPCTSTR": "set-unicode-needed", - "PCTSTR": "set-unicode-needed", - "LPTSTR": "set-unicode-needed", - "PTSTR": "set-unicode-needed", - "PTBYTE": "set-unicode-needed", - "PTCHAR": "set-unicode-needed", - } - -if sys.platform == 'win32': - COMMON_TYPES.update(win_common_types()) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/click/_winconsole.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/click/_winconsole.py deleted file mode 100644 index 6b20df315b23ecd1e3d0ec32c11c0b5ced577efe..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/click/_winconsole.py +++ /dev/null @@ -1,279 +0,0 @@ -# This module is based on the excellent work by Adam Bartoš who -# provided a lot of what went into the implementation here in -# the discussion to issue1602 in the Python bug tracker. -# -# There are some general differences in regards to how this works -# compared to the original patches as we do not need to patch -# the entire interpreter but just work in our little world of -# echo and prompt. 
-import io -import sys -import time -import typing as t -from ctypes import byref -from ctypes import c_char -from ctypes import c_char_p -from ctypes import c_int -from ctypes import c_ssize_t -from ctypes import c_ulong -from ctypes import c_void_p -from ctypes import POINTER -from ctypes import py_object -from ctypes import Structure -from ctypes.wintypes import DWORD -from ctypes.wintypes import HANDLE -from ctypes.wintypes import LPCWSTR -from ctypes.wintypes import LPWSTR - -from ._compat import _NonClosingTextIOWrapper - -assert sys.platform == "win32" -import msvcrt # noqa: E402 -from ctypes import windll # noqa: E402 -from ctypes import WINFUNCTYPE # noqa: E402 - -c_ssize_p = POINTER(c_ssize_t) - -kernel32 = windll.kernel32 -GetStdHandle = kernel32.GetStdHandle -ReadConsoleW = kernel32.ReadConsoleW -WriteConsoleW = kernel32.WriteConsoleW -GetConsoleMode = kernel32.GetConsoleMode -GetLastError = kernel32.GetLastError -GetCommandLineW = WINFUNCTYPE(LPWSTR)(("GetCommandLineW", windll.kernel32)) -CommandLineToArgvW = WINFUNCTYPE(POINTER(LPWSTR), LPCWSTR, POINTER(c_int))( - ("CommandLineToArgvW", windll.shell32) -) -LocalFree = WINFUNCTYPE(c_void_p, c_void_p)(("LocalFree", windll.kernel32)) - -STDIN_HANDLE = GetStdHandle(-10) -STDOUT_HANDLE = GetStdHandle(-11) -STDERR_HANDLE = GetStdHandle(-12) - -PyBUF_SIMPLE = 0 -PyBUF_WRITABLE = 1 - -ERROR_SUCCESS = 0 -ERROR_NOT_ENOUGH_MEMORY = 8 -ERROR_OPERATION_ABORTED = 995 - -STDIN_FILENO = 0 -STDOUT_FILENO = 1 -STDERR_FILENO = 2 - -EOF = b"\x1a" -MAX_BYTES_WRITTEN = 32767 - -try: - from ctypes import pythonapi -except ImportError: - # On PyPy we cannot get buffers so our ability to operate here is - # severely limited. - get_buffer = None -else: - - class Py_buffer(Structure): - _fields_ = [ - ("buf", c_void_p), - ("obj", py_object), - ("len", c_ssize_t), - ("itemsize", c_ssize_t), - ("readonly", c_int), - ("ndim", c_int), - ("format", c_char_p), - ("shape", c_ssize_p), - ("strides", c_ssize_p), - ("suboffsets", c_ssize_p), - ("internal", c_void_p), - ] - - PyObject_GetBuffer = pythonapi.PyObject_GetBuffer - PyBuffer_Release = pythonapi.PyBuffer_Release - - def get_buffer(obj, writable=False): - buf = Py_buffer() - flags = PyBUF_WRITABLE if writable else PyBUF_SIMPLE - PyObject_GetBuffer(py_object(obj), byref(buf), flags) - - try: - buffer_type = c_char * buf.len - return buffer_type.from_address(buf.buf) - finally: - PyBuffer_Release(byref(buf)) - - -class _WindowsConsoleRawIOBase(io.RawIOBase): - def __init__(self, handle): - self.handle = handle - - def isatty(self): - super().isatty() - return True - - -class _WindowsConsoleReader(_WindowsConsoleRawIOBase): - def readable(self): - return True - - def readinto(self, b): - bytes_to_be_read = len(b) - if not bytes_to_be_read: - return 0 - elif bytes_to_be_read % 2: - raise ValueError( - "cannot read odd number of bytes from UTF-16-LE encoded console" - ) - - buffer = get_buffer(b, writable=True) - code_units_to_be_read = bytes_to_be_read // 2 - code_units_read = c_ulong() - - rv = ReadConsoleW( - HANDLE(self.handle), - buffer, - code_units_to_be_read, - byref(code_units_read), - None, - ) - if GetLastError() == ERROR_OPERATION_ABORTED: - # wait for KeyboardInterrupt - time.sleep(0.1) - if not rv: - raise OSError(f"Windows error: {GetLastError()}") - - if buffer[0] == EOF: - return 0 - return 2 * code_units_read.value - - -class _WindowsConsoleWriter(_WindowsConsoleRawIOBase): - def writable(self): - return True - - @staticmethod - def _get_error_message(errno): - if errno == ERROR_SUCCESS: - 
return "ERROR_SUCCESS" - elif errno == ERROR_NOT_ENOUGH_MEMORY: - return "ERROR_NOT_ENOUGH_MEMORY" - return f"Windows error {errno}" - - def write(self, b): - bytes_to_be_written = len(b) - buf = get_buffer(b) - code_units_to_be_written = min(bytes_to_be_written, MAX_BYTES_WRITTEN) // 2 - code_units_written = c_ulong() - - WriteConsoleW( - HANDLE(self.handle), - buf, - code_units_to_be_written, - byref(code_units_written), - None, - ) - bytes_written = 2 * code_units_written.value - - if bytes_written == 0 and bytes_to_be_written > 0: - raise OSError(self._get_error_message(GetLastError())) - return bytes_written - - -class ConsoleStream: - def __init__(self, text_stream: t.TextIO, byte_stream: t.BinaryIO) -> None: - self._text_stream = text_stream - self.buffer = byte_stream - - @property - def name(self) -> str: - return self.buffer.name - - def write(self, x: t.AnyStr) -> int: - if isinstance(x, str): - return self._text_stream.write(x) - try: - self.flush() - except Exception: - pass - return self.buffer.write(x) - - def writelines(self, lines: t.Iterable[t.AnyStr]) -> None: - for line in lines: - self.write(line) - - def __getattr__(self, name: str) -> t.Any: - return getattr(self._text_stream, name) - - def isatty(self) -> bool: - return self.buffer.isatty() - - def __repr__(self): - return f"" - - -def _get_text_stdin(buffer_stream: t.BinaryIO) -> t.TextIO: - text_stream = _NonClosingTextIOWrapper( - io.BufferedReader(_WindowsConsoleReader(STDIN_HANDLE)), - "utf-16-le", - "strict", - line_buffering=True, - ) - return t.cast(t.TextIO, ConsoleStream(text_stream, buffer_stream)) - - -def _get_text_stdout(buffer_stream: t.BinaryIO) -> t.TextIO: - text_stream = _NonClosingTextIOWrapper( - io.BufferedWriter(_WindowsConsoleWriter(STDOUT_HANDLE)), - "utf-16-le", - "strict", - line_buffering=True, - ) - return t.cast(t.TextIO, ConsoleStream(text_stream, buffer_stream)) - - -def _get_text_stderr(buffer_stream: t.BinaryIO) -> t.TextIO: - text_stream = _NonClosingTextIOWrapper( - io.BufferedWriter(_WindowsConsoleWriter(STDERR_HANDLE)), - "utf-16-le", - "strict", - line_buffering=True, - ) - return t.cast(t.TextIO, ConsoleStream(text_stream, buffer_stream)) - - -_stream_factories: t.Mapping[int, t.Callable[[t.BinaryIO], t.TextIO]] = { - 0: _get_text_stdin, - 1: _get_text_stdout, - 2: _get_text_stderr, -} - - -def _is_console(f: t.TextIO) -> bool: - if not hasattr(f, "fileno"): - return False - - try: - fileno = f.fileno() - except (OSError, io.UnsupportedOperation): - return False - - handle = msvcrt.get_osfhandle(fileno) - return bool(GetConsoleMode(handle, byref(DWORD()))) - - -def _get_windows_console_stream( - f: t.TextIO, encoding: t.Optional[str], errors: t.Optional[str] -) -> t.Optional[t.TextIO]: - if ( - get_buffer is not None - and encoding in {"utf-16-le", None} - and errors in {"strict", None} - and _is_console(f) - ): - func = _stream_factories.get(f.fileno()) - if func is not None: - b = getattr(f, "buffer", None) - - if b is None: - return None - - return func(b) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/timeTools.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/timeTools.py deleted file mode 100644 index 175ce81563daf3e9a924701dd2c9d4b71084c286..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/timeTools.py +++ /dev/null @@ -1,88 +0,0 @@ -"""fontTools.misc.timeTools.py -- tools for working with OpenType 
timestamps. -""" - -import os -import time -from datetime import datetime, timezone -import calendar - - -epoch_diff = calendar.timegm((1904, 1, 1, 0, 0, 0, 0, 0, 0)) - -DAYNAMES = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"] -MONTHNAMES = [ - None, - "Jan", - "Feb", - "Mar", - "Apr", - "May", - "Jun", - "Jul", - "Aug", - "Sep", - "Oct", - "Nov", - "Dec", -] - - -def asctime(t=None): - """ - Convert a tuple or struct_time representing a time as returned by gmtime() - or localtime() to a 24-character string of the following form: - - >>> asctime(time.gmtime(0)) - 'Thu Jan 1 00:00:00 1970' - - If t is not provided, the current time as returned by localtime() is used. - Locale information is not used by asctime(). - - This is meant to normalise the output of the built-in time.asctime() across - different platforms and Python versions. - In Python 3.x, the day of the month is right-justified, whereas on Windows - Python 2.7 it is padded with zeros. - - See https://github.com/fonttools/fonttools/issues/455 - """ - if t is None: - t = time.localtime() - s = "%s %s %2s %s" % ( - DAYNAMES[t.tm_wday], - MONTHNAMES[t.tm_mon], - t.tm_mday, - time.strftime("%H:%M:%S %Y", t), - ) - return s - - -def timestampToString(value): - return asctime(time.gmtime(max(0, value + epoch_diff))) - - -def timestampFromString(value): - wkday, mnth = value[:7].split() - t = datetime.strptime(value[7:], " %d %H:%M:%S %Y") - t = t.replace(month=MONTHNAMES.index(mnth), tzinfo=timezone.utc) - wkday_idx = DAYNAMES.index(wkday) - assert t.weekday() == wkday_idx, '"' + value + '" has inconsistent weekday' - return int(t.timestamp()) - epoch_diff - - -def timestampNow(): - # https://reproducible-builds.org/specs/source-date-epoch/ - source_date_epoch = os.environ.get("SOURCE_DATE_EPOCH") - if source_date_epoch is not None: - return int(source_date_epoch) - epoch_diff - return int(time.time() - epoch_diff) - - -def timestampSinceEpoch(value): - return int(value - epoch_diff) - - -if __name__ == "__main__": - import sys - import doctest - - sys.exit(doctest.testmod().failed) diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hevc_sei.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hevc_sei.h deleted file mode 100644 index 4189f5e6f7446b7f0066a28987a759ed4034ceb8..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hevc_sei.h +++ /dev/null @@ -1,127 +0,0 @@ -/* - * HEVC Supplementary Enhancement Information messages - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_HEVC_SEI_H -#define AVCODEC_HEVC_SEI_H - -#include - -#include "libavutil/buffer.h" - -#include "get_bits.h" -#include "hevc.h" -#include "h2645_sei.h" -#include "sei.h" - - -typedef enum { - HEVC_SEI_PIC_STRUCT_FRAME_DOUBLING = 7, - HEVC_SEI_PIC_STRUCT_FRAME_TRIPLING = 8 -} HEVC_SEI_PicStructType; - -typedef struct HEVCSEIPictureHash { - uint8_t md5[3][16]; - uint8_t is_md5; -} HEVCSEIPictureHash; - -typedef struct HEVCSEIFramePacking { - int present; - int arrangement_type; - int content_interpretation_type; - int quincunx_subsampling; - int current_frame_is_frame0_flag; -} HEVCSEIFramePacking; - -typedef struct HEVCSEIPictureTiming { - int picture_struct; -} HEVCSEIPictureTiming; - -typedef struct HEVCSEIMasteringDisplay { - int present; - uint16_t display_primaries[3][2]; - uint16_t white_point[2]; - uint32_t max_luminance; - uint32_t min_luminance; -} HEVCSEIMasteringDisplay; - -typedef struct HEVCSEIContentLight { - int present; - uint16_t max_content_light_level; - uint16_t max_pic_average_light_level; -} HEVCSEIContentLight; - -typedef struct HEVCSEIAlternativeTransfer { - int present; - int preferred_transfer_characteristics; -} HEVCSEIAlternativeTransfer; - -typedef struct HEVCSEITimeCode { - int present; - uint8_t num_clock_ts; - uint8_t clock_timestamp_flag[3]; - uint8_t units_field_based_flag[3]; - uint8_t counting_type[3]; - uint8_t full_timestamp_flag[3]; - uint8_t discontinuity_flag[3]; - uint8_t cnt_dropped_flag[3]; - uint16_t n_frames[3]; - uint8_t seconds_value[3]; - uint8_t minutes_value[3]; - uint8_t hours_value[3]; - uint8_t seconds_flag[3]; - uint8_t minutes_flag[3]; - uint8_t hours_flag[3]; - uint8_t time_offset_length[3]; - int32_t time_offset_value[3]; -} HEVCSEITimeCode; - -typedef struct HEVCSEI { - H2645SEI common; - HEVCSEIPictureHash picture_hash; - HEVCSEIPictureTiming picture_timing; - HEVCSEIMasteringDisplay mastering_display; - HEVCSEIContentLight content_light; - int active_seq_parameter_set_id; - HEVCSEITimeCode timecode; -} HEVCSEI; - -struct HEVCParamSets; - -int ff_hevc_decode_nal_sei(GetBitContext *gb, void *logctx, HEVCSEI *s, - const struct HEVCParamSets *ps, enum HEVCNALUnitType type); - -static inline int ff_hevc_sei_ctx_replace(HEVCSEI *dst, const HEVCSEI *src) -{ - return ff_h2645_sei_ctx_replace(&dst->common, &src->common); -} - -/** - * Reset SEI values that are stored on the Context. - * e.g. Caption data that was extracted during NAL - * parsing. - * - * @param sei HEVCSEI. - */ -static inline void ff_hevc_reset_sei(HEVCSEI *sei) -{ - ff_h2645_sei_reset(&sei->common); -} - -#endif /* AVCODEC_HEVC_SEI_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libaom.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libaom.c deleted file mode 100644 index 0befaaa5306ec5bca79be9c2587efdcbf6abce20..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libaom.c +++ /dev/null @@ -1,49 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. 
- * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * AOM common functions - */ - -#include "libavutil/pixdesc.h" -#include "libaom.h" - -void ff_aom_image_copy_16_to_8(AVFrame *pic, struct aom_image *img) -{ - const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(pic->format); - int i; - - for (i = 0; i < desc->nb_components; i++) { - int w = img->d_w; - int h = img->d_h; - int x, y; - - if (i) { - w = (w + img->x_chroma_shift) >> img->x_chroma_shift; - h = (h + img->y_chroma_shift) >> img->y_chroma_shift; - } - - for (y = 0; y < h; y++) { - uint16_t *src = (uint16_t *)(img->planes[i] + y * img->stride[i]); - uint8_t *dst = pic->data[i] + y * pic->linesize[i]; - for (x = 0; x < w; x++) - *dst++ = *src++; - } - } -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libspeexdec.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libspeexdec.c deleted file mode 100644 index 47fc5d6a4b2797fa222096e5961df1ae8c62f4d7..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libspeexdec.c +++ /dev/null @@ -1,206 +0,0 @@ -/* - * Copyright (C) 2008 David Conrad - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include -#include -#include -#include - -#include "libavutil/channel_layout.h" -#include "libavutil/common.h" -#include "avcodec.h" -#include "codec_internal.h" -#include "decode.h" - -typedef struct LibSpeexContext { - SpeexBits bits; - SpeexStereoState stereo; - void *dec_state; - int frame_size; - int pktsize; -} LibSpeexContext; - - -static av_cold int libspeex_decode_init(AVCodecContext *avctx) -{ - LibSpeexContext *s = avctx->priv_data; - const SpeexMode *mode; - SpeexHeader *header = NULL; - int spx_mode, channels = avctx->ch_layout.nb_channels; - - if (avctx->extradata && avctx->extradata_size >= 80) { - header = speex_packet_to_header(avctx->extradata, - avctx->extradata_size); - if (!header) - av_log(avctx, AV_LOG_WARNING, "Invalid Speex header\n"); - } - if (avctx->codec_tag == MKTAG('S', 'P', 'X', 'N')) { - int quality; - if (!avctx->extradata || avctx->extradata && avctx->extradata_size < 47) { - av_log(avctx, AV_LOG_ERROR, "Missing or invalid extradata.\n"); - return AVERROR_INVALIDDATA; - } - - quality = avctx->extradata[37]; - if (quality > 10) { - av_log(avctx, AV_LOG_ERROR, "Unsupported quality mode %d.\n", quality); - return AVERROR_PATCHWELCOME; - } - - s->pktsize = ((const int[]){5,10,15,20,20,28,28,38,38,46,62})[quality]; - - spx_mode = 0; - } else if (header) { - avctx->sample_rate = header->rate; - channels = header->nb_channels; - spx_mode = header->mode; - speex_header_free(header); - } else { - switch (avctx->sample_rate) { - case 8000: spx_mode = 0; break; - case 16000: spx_mode = 1; break; - case 32000: spx_mode = 2; break; - default: - /* libspeex can handle any mode if initialized as ultra-wideband */ - av_log(avctx, AV_LOG_WARNING, "Invalid sample rate: %d\n" - "Decoding as 32kHz ultra-wideband\n", - avctx->sample_rate); - spx_mode = 2; - } - } - - mode = speex_lib_get_mode(spx_mode); - if (!mode) { - av_log(avctx, AV_LOG_ERROR, "Unknown Speex mode %d", spx_mode); - return AVERROR_INVALIDDATA; - } - s->frame_size = 160 << spx_mode; - if (!avctx->sample_rate) - avctx->sample_rate = 8000 << spx_mode; - - if (channels < 1 || channels > 2) { - /* libspeex can handle mono or stereo if initialized as stereo */ - av_log(avctx, AV_LOG_ERROR, "Invalid channel count: %d.\n" - "Decoding as stereo.\n", channels); - channels = 2; - } - av_channel_layout_uninit(&avctx->ch_layout); - avctx->ch_layout = channels == 2 ? 
(AVChannelLayout)AV_CHANNEL_LAYOUT_STEREO : - (AVChannelLayout)AV_CHANNEL_LAYOUT_MONO; - - speex_bits_init(&s->bits); - s->dec_state = speex_decoder_init(mode); - if (!s->dec_state) { - av_log(avctx, AV_LOG_ERROR, "Error initializing libspeex decoder.\n"); - return -1; - } - - if (channels == 2) { - SpeexCallback callback; - callback.callback_id = SPEEX_INBAND_STEREO; - callback.func = speex_std_stereo_request_handler; - callback.data = &s->stereo; - s->stereo = (SpeexStereoState)SPEEX_STEREO_STATE_INIT; - speex_decoder_ctl(s->dec_state, SPEEX_SET_HANDLER, &callback); - } - - return 0; -} - -static int libspeex_decode_frame(AVCodecContext *avctx, AVFrame *frame, - int *got_frame_ptr, AVPacket *avpkt) -{ - uint8_t *buf = avpkt->data; - int buf_size = avpkt->size; - LibSpeexContext *s = avctx->priv_data; - int16_t *output; - int ret, consumed = 0; - avctx->sample_fmt = AV_SAMPLE_FMT_S16; - - /* get output buffer */ - frame->nb_samples = s->frame_size; - if ((ret = ff_get_buffer(avctx, frame, 0)) < 0) - return ret; - output = (int16_t *)frame->data[0]; - - /* if there is not enough data left for the smallest possible frame or the - next 5 bits are a terminator code, reset the libspeex buffer using the - current packet, otherwise ignore the current packet and keep decoding - frames from the libspeex buffer. */ - if (speex_bits_remaining(&s->bits) < 5 || - speex_bits_peek_unsigned(&s->bits, 5) == 0xF) { - /* check for flush packet */ - if (!buf || !buf_size) { - *got_frame_ptr = 0; - return buf_size; - } - if (s->pktsize && buf_size == 62) - buf_size = s->pktsize; - /* set new buffer */ - speex_bits_read_from(&s->bits, buf, buf_size); - consumed = avpkt->size; - } - - /* decode a single frame */ - ret = speex_decode_int(s->dec_state, &s->bits, output); - if (ret <= -2) { - av_log(avctx, AV_LOG_ERROR, "Error decoding Speex frame.\n"); - return AVERROR_INVALIDDATA; - } - if (avctx->ch_layout.nb_channels == 2) - speex_decode_stereo_int(output, s->frame_size, &s->stereo); - - *got_frame_ptr = 1; - - if (!avctx->bit_rate) - speex_decoder_ctl(s->dec_state, SPEEX_GET_BITRATE, &avctx->bit_rate); - return consumed; -} - -static av_cold int libspeex_decode_close(AVCodecContext *avctx) -{ - LibSpeexContext *s = avctx->priv_data; - - speex_bits_destroy(&s->bits); - speex_decoder_destroy(s->dec_state); - - return 0; -} - -static av_cold void libspeex_decode_flush(AVCodecContext *avctx) -{ - LibSpeexContext *s = avctx->priv_data; - speex_bits_reset(&s->bits); -} - -const FFCodec ff_libspeex_decoder = { - .p.name = "libspeex", - CODEC_LONG_NAME("libspeex Speex"), - .p.type = AVMEDIA_TYPE_AUDIO, - .p.id = AV_CODEC_ID_SPEEX, - .p.capabilities = AV_CODEC_CAP_SUBFRAMES | AV_CODEC_CAP_DELAY | AV_CODEC_CAP_DR1 | AV_CODEC_CAP_CHANNEL_CONF, - .p.wrapper_name = "libspeex", - .caps_internal = FF_CODEC_CAP_NOT_INIT_THREADSAFE, - .priv_data_size = sizeof(LibSpeexContext), - .init = libspeex_decode_init, - .close = libspeex_decode_close, - FF_CODEC_DECODE_CB(libspeex_decode_frame), - .flush = libspeex_decode_flush, -}; diff --git a/spaces/congsaPfin/Manga-OCR/logs/BSEB Class 12 Dummy Registration Card 2024 Out Steps to Download and Edit.md b/spaces/congsaPfin/Manga-OCR/logs/BSEB Class 12 Dummy Registration Card 2024 Out Steps to Download and Edit.md deleted file mode 100644 index 0237af66d242840e6e2f6eff1d3b768566bca27d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/BSEB Class 12 Dummy Registration Card 2024 Out Steps to Download and Edit.md +++ /dev/null @@ -1,125 +0,0 @@ -
    -

    How to Download Dummy Registration Card for Bihar Board Exams 2024

    -

If you are a student who is going to appear for the Bihar Board exams in 2024, you must download and check your dummy registration card before the final registration card is issued. A dummy registration card is a provisional document that contains your personal and academic details for the exams. It helps you verify and correct any errors in your name, photo, date of birth, nationality, gender, caste, religion, or subjects. In this article, we will tell you how to download the dummy registration card for the BSEB Matric exam 2024 and the BSEB Intermediate exam 2024, how to make corrections in it, what its benefits and risks are, and show you some examples.

    -

    download dummy registration card


    DOWNLOAD ★★★ https://urlca.com/2uO6m5



    -

    What is a Dummy Registration Card and Why You Need It?

    -

    A dummy registration card is a provisional document that contains your personal and academic details for the Bihar Board exams 2024. It is issued by the Bihar School Examination Board (BSEB) after you register yourself for the exams through your school. The dummy registration card has information such as:

    -
      -
    • Your name
    • -
    • Your photo
    • -
    • Your date of birth
    • -
    • Your nationality
    • -
    • Your gender
    • -
    • Your caste
    • -
    • Your religion
    • -
    • Your subjects
    • -
    • Your exam center details
    • -
    -

    You need to download and check your dummy registration card to verify and correct any errors before the final registration card is issued. The final registration card is the official document that confirms your eligibility for the exams.

    How to Download Dummy Registration Card for BSEB Matric Exam 2024?

    -

    If you are a student of class 10th who is going to appear for the BSEB Matric exam 2024, you can download your dummy registration card by following these simple steps:

    -
      -
    1. Visit the official website of Bihar School Examination Board at secondary.biharboardonline.com
    2. -
    3. Click on the link that says "Student Registration Card"
    4. -
    5. Enter your school code, name, father's name, and date of birth and click on login
    6. -
    7. Your BSEB Matric Dummy Registration Card 2024 will be displayed on the screen
    8. -
    9. Download and print the card for future reference
    10. -
    -

    You should download your dummy registration card as soon as possible and check it carefully for any errors. If you find any errors, you should contact your school principal and apply for corrections through them. The deadline for making corrections is June 26, 2023 for Matric students.

    -

    How to download BSEB dummy registration card for 10th and 12th class
    -Bihar board dummy registration card 2022-23 online download link
    -BSEB 12th dummy registration card correction date and process
    -Bihar board 10th dummy registration card 2022 pdf download
    -BSEB inter dummy registration card 2023-24 download for arts, commerce, science
    -Bihar board matric dummy registration card 2022-23 last date and fees
    -BSEB dummy registration card 2024 for class 9th, 10th, 11th, 12th
    -Bihar board intermediate dummy registration card download website
    -BSEB secondary dummy registration card online application form
    -Bihar board senior secondary dummy registration card download steps
    -BSEB dummy registration card admit card download for 10th and 12th exam
    -Bihar board dummy registration card result date and link
    -BSEB dummy registration card correction form download and submit
    -Bihar board dummy registration card verification and validation process
    -BSEB dummy registration card details and importance for students
    -Bihar board dummy registration card helpline number and email id
    -BSEB dummy registration card login page and password reset
    -Bihar board dummy registration card sample and format download
    -BSEB dummy registration card print out and duplicate copy download
    -Bihar board dummy registration card status check and update
    -BSEB dummy registration card error and mistake correction guide
    -Bihar board dummy registration card notification and news update
    -BSEB dummy registration card eligibility criteria and documents required
    -Bihar board dummy registration card exam date and time table download
    -BSEB dummy registration card syllabus and exam pattern download
    -Bihar board dummy registration card question paper and answer key download
    -BSEB dummy registration card cut off marks and merit list download
    -Bihar board dummy registration card revaluation and rechecking form download
    -BSEB dummy registration card supplementary exam form and admit card download
    -Bihar board dummy registration card migration certificate and mark sheet download

    -

    How to Download Dummy Registration Card for BSEB Intermediate Exam 2024?

    -

    If you are a student of class 12th who is going to appear for the BSEB Intermediate exam 2024, you can download your dummy registration card by following these simple steps:

    -
      -
    1. Visit the official website of Bihar School Examination Board at seniorsecondary.biharboardonline.com
    2. -
    3. Click on the link that says "Student Registration Card"
    4. -
    5. Enter your school code, father's name, and date of birth and click on login
    6. -
    7. Your BSEB Intermediate Dummy Registration Card 2024 will be displayed on the screen
    8. -
    9. Download and print the card for future reference
    10. -
    -

    You should download your dummy registration card as soon as possible and check it carefully for any errors. If you find any errors, you should contact your school principal and apply for corrections through them. The deadline for making corrections is June 23, 2023 for Intermediate students.

    -

    How to Make Corrections in Dummy Registration Card?

    -

It is very important to make corrections in your dummy registration card if you find any errors in your name, photo, date of birth, nationality, gender, caste, religion, or subjects, because these errors can affect your eligibility for the exams or your result declaration. To make the corrections, you need to follow these steps:

    -
      -
    • Check your dummy registration card carefully for any errors in your personal or academic details
    • -
    • If you find any errors, contact your school principal and apply for corrections through them
    • -
    • You need to submit a written application along with the proof of the correct details to your school principal
    • -
    • Your school principal will forward your application to the BSEB office for verification and correction
    • -
    • You will receive a confirmation message from BSEB after the correction is done
    • -
    • You can download your corrected dummy registration card from the official website of BSEB
    • -
    -

    You should make corrections in your dummy registration card within the deadline specified by BSEB. The deadline for making corrections is June 26, 2023 for Matric students and June 23, 2023 for Intermediate students.

    -

    Benefits of Dummy Registration Card

    -

A dummy registration card is a useful document that helps you avoid mistakes in your final registration card that could affect your eligibility for the exams. Some of its benefits are:

    -
      -
    • Dummy registration card helps you to verify and correct any errors in your personal or academic details before the final registration card is issued
    • -
    • Dummy registration card also helps you to prepare for the exams by knowing your subjects and exam center details
    • -
    • Dummy registration card acts as a proof of your registration for the exams and can be used in case of any discrepancy or dispute
    • -
    • Dummy registration card also helps you to get admission in colleges or universities after passing the exams
    • -
    -

    Risks of Dummy Registration Card

    -

A dummy registration card is a provisional document that needs to be checked and corrected before the final registration card is issued. If you do not download or check it, you may face risks such as:

    -
      -
    • If you do not download or check your dummy registration card, you may miss the opportunity to correct any errors in your final registration card
    • -
    • If you do not make corrections in your dummy registration card within the deadline, you may face problems during the exams or result declaration
    • -
    • If you do not have a valid dummy registration card, you may not be able to appear for the exams or get your result
    • -
    • If you have any mismatch or discrepancy in your dummy registration card and final registration card, you may face legal action or cancellation of your candidature
    • -
    -

    Therefore, it is very important to download and check your dummy registration card and make corrections if needed before the final registration card is issued.

    -

    Examples of Dummy Registration Card

    -

Here are some examples of what a dummy registration card looks like for Matric and Intermediate students. You can see details such as the name, photo, date of birth, nationality, gender, caste, religion, subjects, and exam center on each card.

    - - - - - - - - - -
    Matric Dummy Registration CardIntermediate Dummy Registration Card
    Matric Dummy Registration CardIntermediate Dummy Registration Card
    -

    Conclusion

    -

In this article, we have explained how to download the dummy registration card for the Bihar Board exams 2024. We have also covered why you need to download and check it, how to make corrections in it, what its benefits and risks are, and shown you some examples. We hope this article has helped you understand the importance and process of downloading and checking your dummy registration card for the BSEB Matric and Intermediate exams 2024. If you have any queries or doubts, you can ask us in the comments section below. We wish you all the best for your exams!

    -

    FAQs

    -

Here are some frequently asked questions and answers about the dummy registration card for the Bihar Board exams 2024.

    -
      -
    1. What is the official website to download dummy registration card for Bihar Board exams 2024?
    2. -

      The official website to download dummy registration card for Bihar Board exams 2024 is biharboardonline.com. You can visit the website and click on the link that says "Student Registration Card" to download your dummy registration card.

      -
    3. What is the deadline to make corrections in dummy registration card for Bihar Board exams 2024?
    4. -

      The deadline to make corrections in dummy registration card for Bihar Board exams 2024 is June 26, 2023 for Matric students and June 23, 2023 for Intermediate students. You need to contact your school principal and apply for corrections through them before the deadline.

      -
    5. What are the documents required to make corrections in dummy registration card for Bihar Board exams 2024?
    6. -

      You need to submit a written application along with the proof of the correct details to your school principal to make corrections in dummy registration card for Bihar Board exams 2024. The proof can be your Aadhaar card, birth certificate, caste certificate, school leaving certificate, or any other relevant document.

      -
    7. What are the consequences of not downloading or checking dummy registration card for Bihar Board exams 2024?
    8. -

If you do not download or check your dummy registration card for the Bihar Board exams 2024, you may face consequences such as missing the opportunity to correct any errors in your final registration card, facing problems during the exams or result declaration, or having a mismatch or discrepancy between your dummy registration card and your final registration card.

      -
    9. How can I contact BSEB if I have any issue with my dummy registration card for Bihar Board exams 2024?
    10. -

      You can contact BSEB if you have any issue with your dummy registration card for Bihar Board exams 2024 by calling their helpline number at +91-612-2232074 or +91-612-2232257. You can also email them at bsebsehelpdesk@gmail.com or visit their office at Sinha Library Road, Patna - 800017.

      -

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Easy Drawing for Kids PDFs and Have Fun Learning to Draw.md b/spaces/congsaPfin/Manga-OCR/logs/Download Easy Drawing for Kids PDFs and Have Fun Learning to Draw.md deleted file mode 100644 index 52a64d1ba841b6dd611f4bf43e75d7c7a664ca25..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Easy Drawing for Kids PDFs and Have Fun Learning to Draw.md +++ /dev/null @@ -1,106 +0,0 @@ -
    -

    Easy Drawing for Kids: How to Download Free PDFs

    -

    Do you want to help your kids develop their creativity, imagination, and fine motor skills? Do you want to find some fun and easy drawing activities that they can enjoy at home or on the go? Do you want to access hundreds of free PDFs of easy drawing for kids that you can print or view on any device? If you answered yes to any of these questions, then this article is for you.

    -

    Introduction

    -

    What is easy drawing for kids?

    -

Easy drawing for kids is a type of art activity that involves simple, step-by-step instructions on how to draw various objects, animals, characters, and scenes. It is suitable for children of all ages and skill levels, as it helps them learn the basic shapes, proportions, colors, and techniques of drawing. It can also boost their confidence, self-expression, and concentration.

    -

    easy drawing for kids.pdf free download


Download File: https://urlca.com/2uOdWY



    -

    Why is easy drawing for kids beneficial?

    -

    Easy drawing for kids has many benefits for your child's development and well-being. Some of the benefits are:

    -
      -
    • It stimulates their brain and enhances their cognitive abilities, such as memory, problem-solving, and spatial awareness.
    • -
    • It improves their hand-eye coordination, fine motor skills, and dexterity.
    • -
    • It fosters their creativity, imagination, and originality.
    • -
    • It helps them express their emotions, feelings, and thoughts.
    • -
    • It relaxes them and reduces their stress and anxiety.
    • -
    • It makes them happy and proud of their achievements.
    • -
    -

    How to find and download free PDFs of easy drawing for kids?

    -

    One of the best ways to access easy drawing for kids activities is to download free PDFs from the internet. PDFs are portable document format files that can be viewed, printed, or shared on any device. PDFs are also easy to store, organize, and access on your computer or mobile device. To find and download free PDFs of easy drawing for kids, you need two things: a reliable source of PDFs and a good PDF reader app. In the next section, we will show you some of the best websites and apps for easy drawing for kids PDFs.

    -

    Main Body

    -

    Best websites for easy drawing for kids PDFs

    -

    There are many websites that offer free PDFs of easy drawing for kids activities. However, not all of them are safe, reliable, or high-quality. To help you find the best ones, we have selected three websites that we think are worth checking out. Here they are:

    -

    Art for Kids Hub

    -

    Art for Kids Hub is a popular YouTube channel that features hundreds of videos on how to draw various things in a fun and easy way. The channel is hosted by Rob, a father of four kids who loves doing art together with them. On their website, you can find free PDFs of each video lesson that you can download or print. The PDFs include a list of supplies, a grid guide, and step-by-step instructions with pictures. You can also browse the PDFs by category, such as animals, cartoons, holidays, seasons, etc.

    -

    easy drawing worksheets for kids.pdf free download
    -easy drawing lessons for kids.pdf free download
    -easy drawing tutorials for kids.pdf free download
    -easy drawing printables for kids.pdf free download
    -easy drawing activities for kids.pdf free download
    -easy drawing guides for kids.pdf free download
    -easy drawing ideas for kids.pdf free download
    -easy drawing projects for kids.pdf free download
    -easy drawing tips for kids.pdf free download
    -easy drawing instructions for kids.pdf free download
    -easy drawing animals for kids.pdf free download
    -easy drawing cartoons for kids.pdf free download
    -easy drawing nature for kids.pdf free download
    -easy drawing birds for kids.pdf free download
    -easy drawing reptiles for kids.pdf free download
    -easy drawing sea animals for kids.pdf free download
    -easy drawing holidays for kids.pdf free download
    -easy drawing bugs for kids.pdf free download
    -easy drawing food for kids.pdf free download
    -easy drawing calendar for kids.pdf free download
    -easy drawing superheroes for kids.pdf free download
    -easy drawing toys for kids.pdf free download
    -easy drawing vehicles for kids.pdf free download
    -easy drawing mazes for kids.pdf free download
    -easy drawing word searches for kids.pdf free download
    -easy coloring pages for kids.pdf free download
    -easy coloring books for kids.pdf free download
    -easy coloring worksheets for kids.pdf free download
    -easy coloring printables for kids.pdf free download
    -easy coloring activities for kids.pdf free download
    -how to draw easy things for kids.pdf free download
    -how to draw easy animals for kids.pdf free download
    -how to draw easy cartoons for kids.pdf free download
    -how to draw easy characters for kids.pdf free download
    -how to draw easy flowers for kids.pdf free download
    -how to draw easy faces for kids.pdf free download
    -how to draw easy dinosaurs for kids.pdf free download
    -how to draw easy dragons for kids.pdf free download
    -how to draw easy cars for kids.pdf free download
    -how to draw easy trucks for kids.pdf free download
    -learn to draw easy step by step for kids.pdf free download
    -learn to draw easy pictures for kids.pdf free download
    -learn to draw easy shapes for kids.pdf free download
    -learn to draw easy patterns for kids.pdf free download
    -learn to draw easy landscapes for kids.pdf free download
    -learn to draw easy people for kids.pdf free download
    -learn to draw easy objects for kids.pdf free download
    -learn to draw easy fruits and vegetables for kids.pdf free download

    -

    Easy Drawings for Kids

    -

    Easy Drawings for Kids is another YouTube channel that teaches kids how to draw cute and simple things. The channel has over 400 videos on topics such as food, fruits, flowers , vehicles, etc. On their website, you can find free PDFs of each video lesson that you can download or print. The PDFs include a grid guide and step-by-step instructions with pictures. You can also search the PDFs by keyword or browse them by category.

    -

    Art Projects for Kids

    -

    Art Projects for Kids is a website created by Kathy Barbro, an art teacher who shares her ideas and resources for kids' art projects. On her website, you can find over 1000 free PDFs of easy drawing for kids activities that you can download or print. The PDFs include a list of supplies, a grid guide, and step-by-step instructions with pictures. You can also filter the PDFs by grade level, subject, theme, medium, etc.

    -

    Best apps for easy drawing for kids PDFs

    -

    Once you have downloaded some PDFs of easy drawing for kids activities, you need a good app to view, edit, or share them on your device. There are many apps that can handle PDF files, but not all of them are user-friendly, secure, or feature-rich. To help you find the best ones, we have selected three apps that we think are worth trying out. Here they are:

    -

    Adobe Acrobat Reader

    -

    Adobe Acrobat Reader is one of the most popular and trusted apps for viewing, annotating, and signing PDF files. It is available for free on Windows, Mac, iOS, Android, and web browsers. With this app, you can easily open and view any PDF file on your device. You can also add comments, highlights, stamps, or drawings to your PDFs. You can also fill out forms, sign documents, or scan paper documents with your camera. You can also share your PDFs with others via email, cloud services, or social media.

    -

    PDF Reader by Kdan Mobile

    -

    PDF Reader by Kdan Mobile is another great app for managing and editing PDF files. It is available for free on Windows, Mac, iOS, Android, and web browsers. With this app, you can easily open and view any PDF file on your device. You can also annotate, highlight, underline, or strikeout text on your PDFs. You can also add shapes, stamps, signatures, or drawings to your PDFs. You can also merge, split, rotate, or reorder pages on your PDFs. You can also convert your PDFs to other formats such as Word, Excel , PowerPoint, or image files. You can also share your PDFs with others via email, cloud services, or QR code.

    -

    Xodo PDF Reader & Editor

    -

    Xodo PDF Reader & Editor is a powerful and versatile app for working with PDF files. It is available for free on Windows, Mac, iOS, Android, and web browsers. With this app, you can easily open and view any PDF file on your device. You can also annotate, highlight, bookmark, or search text on your PDFs. You can also add signatures, stamps, shapes, or drawings to your PDFs. You can also create, edit, or fill out forms on your PDFs. You can also collaborate with others on your PDFs in real-time via chat or voice call.

    -

    Conclusion

    -

    Summary of the main points

    -

    In this article, we have shown you how to find and download free PDFs of easy drawing for kids activities. We have also recommended some of the best websites and apps for easy drawing for kids PDFs. Easy drawing for kids is a fun and beneficial art activity that can help your kids develop their skills and express their creativity. By downloading free PDFs of easy drawing for kids activities, you can provide your kids with endless hours of entertainment and learning.

    -

    Call to action

    -

    Now that you know how to download free PDFs of easy drawing for kids activities, why not give it a try? You can start by visiting one of the websites we mentioned above and downloading some PDFs that interest you and your kids. Then, you can open them with one of the apps we suggested and enjoy drawing together with your kids. You will be amazed by how much fun and satisfaction you and your kids will get from easy drawing for kids.

    -

    If you liked this article, please share it with your friends and family who might also be interested in easy drawing for kids. Also, feel free to leave a comment below and let us know what you think about easy drawing for kids and the websites and apps we recommended. We would love to hear from you!

    -

    FAQs

    -

    Here are some of the frequently asked questions about easy drawing for kids:

    -
      -
    • Q: How do I print the PDFs of easy drawing for kids?
    • -
    • A: To print the PDFs of easy drawing for kids, you need a printer that is connected to your device. Then, you can open the PDF file with one of the apps we mentioned above and select the print option. You can also adjust the settings such as paper size, orientation, quality, etc. before printing.
    • -
    • Q: How do I save the PDFs of easy drawing for kids on my device?
    • -
    • A: To save the PDFs of easy drawing for kids on your device, you need to download them from one of the websites we mentioned above. Then, you can choose where to save them on your device's storage or cloud service. You can also rename them or create folders to organize them.
    • -
    • Q: How do I edit the PDFs of easy drawing for kids?
    • -
    • A: To edit the PDFs of easy drawing for kids, you need one of the apps we mentioned above that allows editing features. Then, you can open the PDF file with the app and use the tools to add annotations, comments, highlights, stamps, signatures, shapes, drawings, etc. You can also edit the text or images on the PDF file if they are editable.
    • -
    • Q: How do I share the PDFs of easy drawing for kids with others?
    • -
    • A: To share the PDFs of easy drawing for kids with others, you need one of the apps we mentioned above that allows sharing features. Then, you can open the PDF file with the app and select the share option. You can then choose how to share it with others via email, cloud service, social media, QR code, etc.
    • -
    • Q: How do I find more PDFs of easy drawing for kids?
    • -
    • A: To find more PDFs of easy drawing for kids, you can visit more websites that offer free PDFs of easy drawing for kids activities. You can also search online using keywords such as "easy drawing for kids pdf", "how to draw pdf", "drawing tutorials pdf", etc.
    • -

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Rebaixados Elite Brasil and Join the Demoted Car Community.md b/spaces/congsaPfin/Manga-OCR/logs/Download Rebaixados Elite Brasil and Join the Demoted Car Community.md deleted file mode 100644 index 02caf2deda3feb0ab8c90fa2087a6b5dd3b4768b..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Rebaixados Elite Brasil and Join the Demoted Car Community.md +++ /dev/null @@ -1,101 +0,0 @@ - -

    Rebaixados Elite Brasil: A Fun and Customizable Car Game

    -

    If you are a fan of car games, you might want to check out Rebaixados Elite Brasil, a Brazil-inspired demoted car game where you can customize your car and character. In this game, you can lower your car to the floor, change the color of the xenon, turn up the bass of the music, and drive around a realistic Brazilian city. You can also choose from various accessories for your car and character, such as wheels, speakers, shirts, glasses, caps, and shoes. Few mobile car games offer as many modification options for both the car and the character.

    -

    What is Rebaixados Elite Brasil?

    -

    A Brazil-inspired demoted car game

    -

    Rebaixados Elite Brasil (REB) is a game that simulates the culture of demoted cars in Brazil. Demoted cars are cars that have been lowered to the ground, usually with modified suspensions, wheels, tires, exhausts, and sound systems. These cars are often driven by young people who enjoy music, speed, and style. REB is a game that lets you experience this culture virtually.

    -

    -

    A game with many modification options

    -

    REB stands out for the sheer number of modification options it offers for your car and character. You can customize your car's color, wheels, glass, xenon, neon, speakers, LEDs, trunk, hood, doors, windows, wipers, and more, and your character's shirt, glasses, cap, shorts, and shoes. This lets you create your own unique style and personality in the game.

    -

    A game with realistic physics and graphics

    -

    REB is a game that has realistic physics and graphics. The cars have detailed models and interiors that you can view in 360 degrees. The cars also have interactive elements such as opening doors, hood, trunk, windows, turning on wipers, etc. The cars behave according to the laws of physics such as gravity, inertia, friction, etc. The game also has day and night mode and filters for the camera that enhance the visual effects.

    -

    How to download and play Rebaixados Elite Brasil?

    -

    Download from Google Play or App Store

    -

    REB is available for both Android and iOS devices. You can download it from Google Play or App Store for free. The game has over 50 million downloads on Google Play and over 1 million downloads on App Store. The game requires iOS 11.0 or later or Android 4.4 or later to run.

    -

    Install and launch the game

    -

    After downloading the game, you can install it on your device and launch it. The game will ask you to grant some permissions such as access to your storage, microphone, and camera. You can allow or deny these permissions according to your preference. The game will also ask you to choose your language from English, Portuguese, or Spanish.

    -

    Choose your car and character

    -

    When you start the game, you will see a garage with several cars to choose from. You can swipe left or right to see the different models and tap on the one you like. You can also tap on the character icon on the top left corner to choose your character. You can change the gender, skin color, hair style, and facial features of your character.

    -

    Explore the city and customize your car

    -

    After choosing your car and character, you can tap on the play button on the bottom right corner to enter the city. You can drive around the city and explore different locations such as gas stations, shops, parks, etc. You can also tap on the menu button on the top right corner to access various options such as customization, settings, camera, music, etc. You can customize your car and character by tapping on the customization button and selecting the category you want to modify. You can also buy new cars and accessories with the coins you earn by playing the game.

    -


    -

    What are the features of Rebaixados Elite Brasil?

    -

    Detailed car models and interiors

    -

    REB has over 30 car models to choose from, each with its own unique design and features. The cars have detailed interiors that you can view in 360 degrees by tapping on the camera button and selecting the interior mode. You can also interact with some elements of the car such as opening doors, hood, trunk, windows, turning on wipers, etc.

    -

    Various accessories for the car and character

    -

    REB has a lot of accessories for the car and character that you can buy with coins or watch ads to get for free. For the car, you can buy different types of wheels, speakers, neon lights, xenon lights, LED lights, stickers, license plates, etc. For the character, you can buy different types of shirts, glasses, caps, shorts, shoes, etc. You can also change the color of some accessories by tapping on them and selecting the color palette.

    -

    Neon, xenon, speakers, and wheels

    -

    REB has some special features that make the game more fun and realistic, all reachable from the buttons on the bottom left corner. The neon button turns on neon lights under your car, and the xenon button lights up your headlights; tapping either lets you pick a color from the palette. The speaker button turns up the bass of the music, with genres such as funk, rap, and rock to choose from. The wheel button changes the size and style of your wheels, with options such as steel, alloy, and chrome.

    -

    Day and night mode and camera filters

    -

    REB has a day and night mode that changes according to the time of the day. You can see the sun setting and rising in the game and enjoy the different lighting effects. You can also change the camera filters by tapping on the camera button and selecting the filter mode. You can choose from different filters such as sepia, black and white, vintage, etc.

    -

    Functional gas station and steering wheel control

    -

    REB has a functional gas station where you can refuel your car. You can see the gas level of your car on the top left corner of the screen. You can drive to the gas station and park your car near the pump. You can then tap on the gas button on the bottom left corner and drag it to your car. You can see the gas level increasing as you fill up your car. You can also control your car with a steering wheel by tapping on the steering wheel button on the bottom left corner. You can tilt your device to steer your car or use the arrows on the screen.

    -

    What are some tips and tricks for Rebaixados Elite Brasil?

    -

    Refuel your car regularly

    -

    One of the tips for playing REB is to refuel your car regularly. Your car consumes gas as you drive around the city and if you run out of gas, you will not be able to move your car. You can see the gas level of your car on the top left corner of the screen. You can refuel your car at the gas station by following the steps mentioned above.

    -

    Use the accelerometer or arrows to control your car

    -

    Another tip for playing REB is to use the accelerometer or arrows to control your car. You can choose between two modes of control: accelerometer or arrows. You can change the mode by tapping on the settings button on the top right corner and selecting the control option. If you choose accelerometer, you can tilt your device to steer your car. If you choose arrows, you can use the arrows on the screen to steer your car.

    -

    Turn up the bass of the music

    -

    A third tip for playing REB is to turn up the bass of the music. REB has a feature that lets you adjust the bass of the music by tapping on the speaker button on the bottom left corner. You can choose from different genres of music such as funk, rap, rock, etc. You can also change the volume of the music by tapping on the volume button on the bottom left corner. The music adds to the atmosphere and mood of the game and makes it more enjoyable.

    -

    Change the color of the xenon and neon

    -

    A fourth tip for playing REB is to change the color of the xenon and neon lights. REB has a feature that lets you turn on and off the xenon and neon lights by tapping on the xenon and neon buttons on the bottom left corner. You can also change the color of the lights by tapping on them and selecting the color palette. You can choose from different colors such as red, blue, green, yellow, etc. The lights make your car look more cool and stylish.

    -

    Join the Facebook group and YouTube channel for more updates

    -

    A fifth tip for playing REB is to join the Facebook group and YouTube channel for more updates. REB has a Facebook group and a YouTube channel where you can interact with other players, share your screenshots and videos, get tips and tricks, and get news and updates about the game. You can join the Facebook group by tapping on the Facebook button on the top right corner and following the link. You can subscribe to the YouTube channel by tapping on the YouTube button on the top right corner and following the link.

    -

    What are some reviews of Rebaixados Elite Brasil?

    -

    Positive reviews from users who enjoy the game

    -

    REB has received many positive reviews from users who enjoy the game. Here are some examples of positive reviews:

    • "This game is awesome! I love how you can customize your car and character. The graphics are amazing and the music is lit. I recommend this game to anyone who likes car games."
    • "This is one of the best car games I have ever played. The cars are realistic and have many options to modify. The city is beautiful and has many places to explore. The game is very fun and addictive."
    • "This game is very cool and fun. I like how you can lower your car to the floor, change the color of the xenon and neon, and turn up the bass of the music. The game is very realistic and has a lot of features."

    Negative reviews from users who encounter glitches or data issues

    -

    REB has also received some negative reviews from users who encounter glitches or data issues. Here are some examples of negative reviews:

    • "This game is good but it has a lot of bugs and glitches. Sometimes the game crashes or freezes. Sometimes the car gets stuck or flips over. Sometimes the accessories disappear or change color. Please fix these issues."
    • "This game is nice but it has a problem with the data. I lost all my progress and coins after I updated the game. I had a lot of cars and accessories that I bought with real money. I contacted the support but they did not help me. This is unfair and frustrating."
    • "This game is boring and repetitive. It has no missions or challenges. It has no multiplayer mode or online chat. It has no traffic or police. It has no variety or excitement. It is just driving around the same city with the same cars."

    Overall rating of 4.4 out of 5 stars on App Store and 4.3 out of 5 stars on Google Play

    -

    REB has an overall rating of 4.4 out of 5 stars on App Store and 4.3 out of 5 stars on Google Play, based on thousands of user reviews. The game has been praised for its graphics, customization, realism, and fun factor. The game has also been criticized for its bugs, glitches, data issues, and lack of variety.

    -

    Conclusion

    -

    Rebaixados Elite Brasil is a fun and customizable car game that simulates the culture of demoted cars in Brazil. You can lower your car to the floor, change the color of the xenon and neon, turn up the bass of the music, and drive around a realistic Brazilian city. You can also choose from various accessories for your car and character, such as wheels, speakers, shirts, glasses, caps, and shoes. You can download the game from Google Play or App Store for free and enjoy its features such as detailed car models and interiors, day and night mode and camera filters, functional gas station and steering wheel control, etc.

    -

    FAQs

    • Q: How can I earn more coins in Rebaixados Elite Brasil?
    • A: You can earn more coins by playing the game regularly, watching ads, completing tasks, or buying them with real money.
    • Q: How can I change the language of Rebaixados Elite Brasil?
    • A: You can change the language by tapping on the settings button on the top right corner and selecting the language option.
    • Q: How can I share my screenshots and videos of Rebaixados Elite Brasil?
    • A: You can share your screenshots and videos by tapping on the camera button on the top right corner and selecting the share option.
    • Q: How can I contact the developers of Rebaixados Elite Brasil?
    • A: You can contact the developers by tapping on the settings button on the top right corner and selecting the contact option.
    • Q: How can I rate and review Rebaixados Elite Brasil?
    • A: You can rate and review Rebaixados Elite Brasil by tapping on the settings button on the top right corner and selecting the rate option.

    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Subway Surfers for Windows in Minutes Follow These Easy Steps.md b/spaces/congsaPfin/Manga-OCR/logs/Download Subway Surfers for Windows in Minutes Follow These Easy Steps.md deleted file mode 100644 index e44767614c13988af5a55a66c3eb3aad3f93f730..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Subway Surfers for Windows in Minutes Follow These Easy Steps.md +++ /dev/null @@ -1,85 +0,0 @@ -
    -

    How to Download Subway Surfers on Windows

    -

    Subway Surfers is a popular arcade game that lets you run, jump, slide, and surf through various subway tracks while dodging trains, obstacles, and an angry inspector. The game is available for Android and iOS devices, but you might want to play it on your PC for a bigger screen, better graphics, and more comfortable controls. In this article, we will show you three methods to download Subway Surfers on Windows using different tools. You can choose the one that suits you best and enjoy this fun and addictive game.

    -

    -

    Method 1: Using BlueStacks Emulator

    -

    BlueStacks is a powerful emulator that allows you to run Android apps and games on your PC. It has many features that enhance your gaming experience, such as Airplane Mode, Macros, Multi Instance, Script, etc. You can also play Subway Surfers online in your browser without downloading it. Here are the steps to download Subway Surfers on Windows using BlueStacks:

    -

    Step 1: Download and install BlueStacks on your PC

    -

    Go to the official website of BlueStacks and click on the "Download" button. This will download a .exe file that you need to run. Follow the instructions in the pop-up window to install BlueStacks on your PC.

    -

    Step 2: Complete Google sign-in to access the Play Store

    -

    After installing BlueStacks on your PC, you will see a welcome screen. Click on the "Google Play" icon and sign in with your Google account. This will give you access to the Play Store, where you can find and download Subway Surfers.

    -

    Step 3: Search for Subway Surfers in the Play Store and install it

    -

    In the Play Store, type "Subway Surfers" in the search bar and hit enter. You will see a list of results. Click on the one that says "Subway Surfers" by SYBO Games. This will take you to the game's page, where you can see its details, ratings, reviews, screenshots, etc. Click on the "Install" button to download Subway Surfers on your PC.

    -

    Step 4: Click on the Subway Surfers icon on the home screen and start playing

    -

    Once Subway Surfers is installed, you will see its icon on the home screen of BlueStacks. Click on it to launch the game. You can use your mouse or keyboard to control your character and perform various actions. You can also customize your settings, such as sound, graphics, language, etc. Enjoy playing Subway Surfers on your PC with BlueStacks.

    -

    Method 2: Using APKPure App Store

    -

    APKPure App Store is an alternative app store that allows you to download and install Android apps and games on your PC. It has a large collection of apps and games that are updated regularly. You can also download older versions of apps and games if you want. Here are the steps to download Subway Surfers on Windows using APKPure App Store:

    -


    -

    Step 1: Download and install APKPure App Store on your PC

    -

    Go to the official website of APKPure App Store and click on the "Download" button. This will download a .exe file that you need to run. Follow the instructions in the pop-up window to install APKPure App Store on your PC.

    -

    Step 2: Search for Subway Surfers in the APKPure App Store and download it

    -

    After installing APKPure App Store on your PC, you will see a user-friendly interface. Click on the "Search" icon and type "Subway Surfers" in the search bar and hit enter. You will see a list of results. Click on the one that says "Subway Surfers" by SYBO Games. This will take you to the game's page, where you can see its details, ratings, reviews, screenshots, etc. Click on the "Download" button to download Subway Surfers on your PC.

    -

    Step 3: Open the downloaded file and install Subway Surfers on your PC

    -

    Once Subway Surfers is downloaded, you will see a notification in the bottom right corner of your screen. Click on it to open the downloaded file. You will see a pop-up window that asks you to confirm the installation of Subway Surfers on your PC. Click on "Yes" to proceed. Follow the instructions in the pop-up window to install Subway Surfers on your PC.

    -

    Step 4: Click on the Subway Surfers icon on the desktop and start playing

    -

    Once Subway Surfers is installed, you will see its icon on your desktop. Click on it to launch the game. You can use your mouse or keyboard to control your character and perform various actions. You can also customize your settings, such as sound, graphics, language, etc. Enjoy playing Subway Surfers on your PC with APKPure App Store.

    -

    Method 3: Using YouTube Video Guide

    -

    If you prefer watching a video guide rather than reading text instructions, you can use YouTube to find a video guide that shows you how to download Subway Surfers on Windows using different tools. There are many video guides available on YouTube that cover this topic, but you need to choose one that is clear, reliable, and up-to-date. Here are the steps to download Subway Surfers on Windows using YouTube video guide:

    -

    Step 1: Go to YouTube and search for "How to Download Subway Surfers game in PC or Laptop"

    -

    Go to the official website of YouTube and type "How to Download Subway Surfers game in PC or Laptop" in the search bar and hit enter. You will see a list of results that match your query.

    -

    Step 2: Choose a video guide that suits your preferences and watch it carefully

    -

    Browse through the results and choose a video guide that suits your preferences. Some factors that you might want to consider are:
    • The length of the video
    • The quality of the video
    • The credibility of the source
    • The date of the video
    • The number of views, likes, and comments
    Choose a video guide that has a high rating, a large number of views, a recent date, and a trustworthy source. Watch the video carefully and pay attention to the steps and instructions given by the narrator.

    -

    Step 3: Follow the instructions given in the video guide and download Subway Surfers on your PC

    -

    After watching the video guide, follow the instructions given by the narrator and download Subway Surfers on your PC. Depending on the video guide you chose, you might need to use different tools, such as emulators, app stores, or websites. Make sure you follow the steps correctly and download Subway Surfers from a safe and secure source.

    -

    Step 4: Click on the Subway Surfers icon on your PC and start playing

    -

    Once Subway Surfers is downloaded, you will see its icon on your PC. Click on it to launch the game. You can use your mouse or keyboard to control your character and perform various actions. You can also customize your settings, such as sound, graphics, language, etc. Enjoy playing Subway Surfers on your PC with YouTube video guide.

    -

    Conclusion

    -

    In this article, we have shown you three methods to download Subway Surfers on Windows using different tools. You can choose the one that suits you best and enjoy this fun and addictive game. Here are some tips for playing Subway Surfers on PC:
    • Collect coins and power-ups to boost your score and unlock new characters and items
    • Avoid crashing into trains, barriers, and other obstacles that will slow you down or end your run
    • Use hoverboards, jetpacks, magnets, and other gadgets to enhance your gameplay
    • Complete missions and challenges to earn rewards and achievements
    • Join events and seasons to experience new themes and locations
    We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below.

    -

    FAQs

    -

    Here are some frequently asked questions about downloading Subway Surfers on PC:

    -

    Q: Is Subway Surfers free to play on PC?

    -

    A: Yes, Subway Surfers is free to play on PC. You don't need to pay anything to download or play it. However, you can make in-app purchases to buy coins, keys, or other items if you want.

    -

    Q: Is Subway Surfers safe to download on PC?

    -

    A: Yes, Subway Surfers is safe to download on PC as long as you use a reliable source, such as BlueStacks emulator, APKPure App Store, or YouTube video guide. Avoid downloading Subway Surfers from unknown or suspicious sources that might contain malware or viruses.

    -

    Q: Can I play Subway Surfers offline on PC?

    -

    A: Yes, you can play Subway Surfers offline on PC. You don't need an internet connection to play it. However, you might need an internet connection to download it or access some features, such as online leaderboards or events.

    -

    Q: Can I sync my progress between my PC and my mobile device?

    -

    A: Yes, you can sync your progress between your PC and your mobile device if you use the same Google account to sign in to both devices. This way, you can continue your game from where you left off on either device.

    -

    Q: How can I update Subway Surfers on PC?

    -

    A: You can update Subway Surfers on PC by following the same method that you used to download it. For example, if you used BlueStacks emulator, you can go to the Play Store and check for updates. If you used APKPure App Store, you can go to the app store and check for updates. If you used YouTube video guide, you can watch a new video guide that shows how to update Subway Surfers on PC.

    \ No newline at end of file diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/layers/aspp.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/layers/aspp.py deleted file mode 100644 index 14861aa9ede4fea6a69a49f189bcab997b558148..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/layers/aspp.py +++ /dev/null @@ -1,144 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -from copy import deepcopy -import fvcore.nn.weight_init as weight_init -import torch -from torch import nn -from torch.nn import functional as F - -from .batch_norm import get_norm -from .blocks import DepthwiseSeparableConv2d -from .wrappers import Conv2d - - -class ASPP(nn.Module): - """ - Atrous Spatial Pyramid Pooling (ASPP). - """ - - def __init__( - self, - in_channels, - out_channels, - dilations, - *, - norm, - activation, - pool_kernel_size=None, - dropout: float = 0.0, - use_depthwise_separable_conv=False, - ): - """ - Args: - in_channels (int): number of input channels for ASPP. - out_channels (int): number of output channels. - dilations (list): a list of 3 dilations in ASPP. - norm (str or callable): normalization for all conv layers. - See :func:`layers.get_norm` for supported format. norm is - applied to all conv layers except the conv following - global average pooling. - activation (callable): activation function. - pool_kernel_size (tuple, list): the average pooling size (kh, kw) - for image pooling layer in ASPP. If set to None, it always - performs global average pooling. If not None, it must be - divisible by the shape of inputs in forward(). It is recommended - to use a fixed input feature size in training, and set this - option to match this size, so that it performs global average - pooling in training, and the size of the pooling window stays - consistent in inference. - dropout (float): apply dropout on the output of ASPP. It is used in - the official DeepLab implementation with a rate of 0.1: - https://github.com/tensorflow/models/blob/21b73d22f3ed05b650e85ac50849408dd36de32e/research/deeplab/model.py#L532 # noqa - use_depthwise_separable_conv (bool): use DepthwiseSeparableConv2d - for 3x3 convs in ASPP, proposed in :paper:`DeepLabV3+`. 
- """ - super(ASPP, self).__init__() - assert len(dilations) == 3, "ASPP expects 3 dilations, got {}".format(len(dilations)) - self.pool_kernel_size = pool_kernel_size - self.dropout = dropout - use_bias = norm == "" - self.convs = nn.ModuleList() - # conv 1x1 - self.convs.append( - Conv2d( - in_channels, - out_channels, - kernel_size=1, - bias=use_bias, - norm=get_norm(norm, out_channels), - activation=deepcopy(activation), - ) - ) - weight_init.c2_xavier_fill(self.convs[-1]) - # atrous convs - for dilation in dilations: - if use_depthwise_separable_conv: - self.convs.append( - DepthwiseSeparableConv2d( - in_channels, - out_channels, - kernel_size=3, - padding=dilation, - dilation=dilation, - norm1=norm, - activation1=deepcopy(activation), - norm2=norm, - activation2=deepcopy(activation), - ) - ) - else: - self.convs.append( - Conv2d( - in_channels, - out_channels, - kernel_size=3, - padding=dilation, - dilation=dilation, - bias=use_bias, - norm=get_norm(norm, out_channels), - activation=deepcopy(activation), - ) - ) - weight_init.c2_xavier_fill(self.convs[-1]) - # image pooling - # We do not add BatchNorm because the spatial resolution is 1x1, - # the original TF implementation has BatchNorm. - if pool_kernel_size is None: - image_pooling = nn.Sequential( - nn.AdaptiveAvgPool2d(1), - Conv2d(in_channels, out_channels, 1, bias=True, activation=deepcopy(activation)), - ) - else: - image_pooling = nn.Sequential( - nn.AvgPool2d(kernel_size=pool_kernel_size, stride=1), - Conv2d(in_channels, out_channels, 1, bias=True, activation=deepcopy(activation)), - ) - weight_init.c2_xavier_fill(image_pooling[1]) - self.convs.append(image_pooling) - - self.project = Conv2d( - 5 * out_channels, - out_channels, - kernel_size=1, - bias=use_bias, - norm=get_norm(norm, out_channels), - activation=deepcopy(activation), - ) - weight_init.c2_xavier_fill(self.project) - - def forward(self, x): - size = x.shape[-2:] - if self.pool_kernel_size is not None: - if size[0] % self.pool_kernel_size[0] or size[1] % self.pool_kernel_size[1]: - raise ValueError( - "`pool_kernel_size` must be divisible by the shape of inputs. " - "Input size: {} `pool_kernel_size`: {}".format(size, self.pool_kernel_size) - ) - res = [] - for conv in self.convs: - res.append(conv(x)) - res[-1] = F.interpolate(res[-1], size=size, mode="bilinear", align_corners=False) - res = torch.cat(res, dim=1) - res = self.project(res) - res = F.dropout(res, self.dropout, training=self.training) if self.dropout > 0 else res - return res diff --git a/spaces/crashedice/signify/signify/__init__.py b/spaces/crashedice/signify/signify/__init__.py deleted file mode 100644 index f102a9cadfa89ce554b3b26d2b90bfba2e05273c..0000000000000000000000000000000000000000 --- a/spaces/crashedice/signify/signify/__init__.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = "0.0.1" diff --git a/spaces/danterivers/music-generation-samples/tests/modules/test_conv.py b/spaces/danterivers/music-generation-samples/tests/modules/test_conv.py deleted file mode 100644 index 28fbc4f1a0ebaf41b56947b767958ae696e75eec..0000000000000000000000000000000000000000 --- a/spaces/danterivers/music-generation-samples/tests/modules/test_conv.py +++ /dev/null @@ -1,203 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -from itertools import product -import math -import random - -import pytest -import torch -from torch import nn - -from audiocraft.modules import ( - NormConv1d, - NormConvTranspose1d, - StreamableConv1d, - StreamableConvTranspose1d, - pad1d, - unpad1d, -) - - -def test_get_extra_padding_for_conv1d(): - # TODO: Implement me! - pass - - -def test_pad1d_zeros(): - x = torch.randn(1, 1, 20) - - xp1 = pad1d(x, (0, 5), mode='constant', value=0.) - assert xp1.shape[-1] == 25 - xp2 = pad1d(x, (5, 5), mode='constant', value=0.) - assert xp2.shape[-1] == 30 - xp3 = pad1d(x, (0, 0), mode='constant', value=0.) - assert xp3.shape[-1] == 20 - xp4 = pad1d(x, (10, 30), mode='constant', value=0.) - assert xp4.shape[-1] == 60 - - with pytest.raises(AssertionError): - pad1d(x, (-1, 0), mode='constant', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (0, -1), mode='constant', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (-1, -1), mode='constant', value=0.) - - -def test_pad1d_reflect(): - x = torch.randn(1, 1, 20) - - xp1 = pad1d(x, (0, 5), mode='reflect', value=0.) - assert xp1.shape[-1] == 25 - xp2 = pad1d(x, (5, 5), mode='reflect', value=0.) - assert xp2.shape[-1] == 30 - xp3 = pad1d(x, (0, 0), mode='reflect', value=0.) - assert xp3.shape[-1] == 20 - xp4 = pad1d(x, (10, 30), mode='reflect', value=0.) - assert xp4.shape[-1] == 60 - - with pytest.raises(AssertionError): - pad1d(x, (-1, 0), mode='reflect', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (0, -1), mode='reflect', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (-1, -1), mode='reflect', value=0.) - - -def test_unpad1d(): - x = torch.randn(1, 1, 20) - - u1 = unpad1d(x, (5, 5)) - assert u1.shape[-1] == 10 - u2 = unpad1d(x, (0, 5)) - assert u2.shape[-1] == 15 - u3 = unpad1d(x, (5, 0)) - assert u3.shape[-1] == 15 - u4 = unpad1d(x, (0, 0)) - assert u4.shape[-1] == x.shape[-1] - - with pytest.raises(AssertionError): - unpad1d(x, (-1, 0)) - - with pytest.raises(AssertionError): - unpad1d(x, (0, -1)) - - with pytest.raises(AssertionError): - unpad1d(x, (-1, -1)) - - -class TestNormConv1d: - - def test_norm_conv1d_modules(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out, kernel_size, stride = 1, 4, 1 - expected_out_length = int((T - kernel_size) / stride + 1) - wn_conv = NormConv1d(C, 1, kernel_size=4, norm='weight_norm') - gn_conv = NormConv1d(C, 1, kernel_size=4, norm='time_group_norm') - nn_conv = NormConv1d(C, 1, kernel_size=4, norm='none') - - assert isinstance(wn_conv.norm, nn.Identity) - assert isinstance(wn_conv.conv, nn.Conv1d) - - assert isinstance(gn_conv.norm, nn.GroupNorm) - assert isinstance(gn_conv.conv, nn.Conv1d) - - assert isinstance(nn_conv.norm, nn.Identity) - assert isinstance(nn_conv.conv, nn.Conv1d) - - for conv_layer in [wn_conv, gn_conv, nn_conv]: - out = conv_layer(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] - - -class TestNormConvTranspose1d: - - def test_normalizations(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out, kernel_size, stride = 1, 4, 1 - expected_out_length = (T - 1) * stride + (kernel_size - 1) + 1 - - wn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='weight_norm') - gn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='time_group_norm') - nn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='none') - - assert 
isinstance(wn_convtr.norm, nn.Identity) - assert isinstance(wn_convtr.convtr, nn.ConvTranspose1d) - - assert isinstance(gn_convtr.norm, nn.GroupNorm) - assert isinstance(gn_convtr.convtr, nn.ConvTranspose1d) - - assert isinstance(nn_convtr.norm, nn.Identity) - assert isinstance(nn_convtr.convtr, nn.ConvTranspose1d) - - for convtr_layer in [wn_convtr, gn_convtr, nn_convtr]: - out = convtr_layer(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] - - -class TestStreamableConv1d: - - def get_streamable_conv1d_output_length(self, length, kernel_size, stride, dilation): - # StreamableConv1d internally pads to make sure that the last window is full - padding_total = (kernel_size - 1) * dilation - (stride - 1) - n_frames = (length - kernel_size + padding_total) / stride + 1 - ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total) - return ideal_length // stride - - def test_streamable_conv1d(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - C_out = 1 - - # conv params are [(kernel_size, stride, dilation)] - conv_params = [(4, 1, 1), (4, 2, 1), (3, 1, 3), (10, 5, 1), (3, 2, 3)] - for causal, (kernel_size, stride, dilation) in product([False, True], conv_params): - expected_out_length = self.get_streamable_conv1d_output_length(T, kernel_size, stride, dilation) - sconv = StreamableConv1d(C, C_out, kernel_size=kernel_size, stride=stride, dilation=dilation, causal=causal) - out = sconv(t0) - assert isinstance(out, torch.Tensor) - print(list(out.shape), [N, C_out, expected_out_length]) - assert list(out.shape) == [N, C_out, expected_out_length] - - -class TestStreamableConvTranspose1d: - - def get_streamable_convtr1d_output_length(self, length, kernel_size, stride): - padding_total = (kernel_size - stride) - return (length - 1) * stride - padding_total + (kernel_size - 1) + 1 - - def test_streamable_convtr1d(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out = 1 - - with pytest.raises(AssertionError): - StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=False, trim_right_ratio=0.5) - StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=True, trim_right_ratio=-1.) 
- StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=True, trim_right_ratio=2) - - # causal params are [(causal, trim_right)] - causal_params = [(False, 1.0), (True, 1.0), (True, 0.5), (True, 0.0)] - # conv params are [(kernel_size, stride)] - conv_params = [(4, 1), (4, 2), (3, 1), (10, 5)] - for ((causal, trim_right_ratio), (kernel_size, stride)) in product(causal_params, conv_params): - expected_out_length = self.get_streamable_convtr1d_output_length(T, kernel_size, stride) - sconvtr = StreamableConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, - causal=causal, trim_right_ratio=trim_right_ratio) - out = sconvtr(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/__init__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/__init__.py deleted file mode 100644 index ed00764f7c193ca9bcd0bf67196da59c30048a28..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/__init__.py +++ /dev/null @@ -1,26 +0,0 @@ -"""fontTools.ttLib -- a package for dealing with TrueType fonts.""" - -from fontTools.misc.loggingTools import deprecateFunction -import logging - - -log = logging.getLogger(__name__) - - -class TTLibError(Exception): - pass - - -class TTLibFileIsCollectionError(TTLibError): - pass - - -@deprecateFunction("use logging instead", category=DeprecationWarning) -def debugmsg(msg): - import time - - print(msg + time.strftime(" (%H:%M:%S)", time.localtime(time.time()))) - - -from fontTools.ttLib.ttFont import * -from fontTools.ttLib.ttCollection import TTCollection diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/json_component.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/json_component.py deleted file mode 100644 index bdd32c51febf8a7aaaa0fbab65d55c387e7c9576..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/json_component.py +++ /dev/null @@ -1,122 +0,0 @@ -"""gr.JSON() component.""" - -from __future__ import annotations - -import json -from typing import Any, Callable, Literal - -from gradio_client.documentation import document, set_documentation_group -from gradio_client.serializing import JSONSerializable - -from gradio.components.base import IOComponent, _Keywords -from gradio.deprecation import warn_style_method_deprecation -from gradio.events import ( - Changeable, -) - -set_documentation_group("component") - - -@document() -class JSON(Changeable, IOComponent, JSONSerializable): - """ - Used to display arbitrary JSON output prettily. - Preprocessing: this component does *not* accept input. - Postprocessing: expects a {str} filepath to a file containing valid JSON -- or a {list} or {dict} that is valid JSON - - Demos: zip_to_json, blocks_xray - """ - - def __init__( - self, - value: str | dict | list | Callable | None = None, - *, - label: str | None = None, - every: float | None = None, - show_label: bool | None = None, - container: bool = True, - scale: int | None = None, - min_width: int = 160, - visible: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - **kwargs, - ): - """ - Parameters: - value: Default value. 
If callable, the function will be called whenever the app loads to set the initial value of the component. - label: component name in interface. - every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute. - show_label: if True, will display label. - container: If True, will place the component in a container - providing some extra padding around the border. - scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer. - min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first. - visible: If False, component will be hidden. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. - """ - IOComponent.__init__( - self, - label=label, - every=every, - show_label=show_label, - container=container, - scale=scale, - min_width=min_width, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - value=value, - **kwargs, - ) - - def get_config(self): - return { - "value": self.value, - **IOComponent.get_config(self), - } - - @staticmethod - def update( - value: Any | Literal[_Keywords.NO_VALUE] | None = _Keywords.NO_VALUE, - label: str | None = None, - show_label: bool | None = None, - container: bool | None = None, - scale: int | None = None, - min_width: int | None = None, - visible: bool | None = None, - ): - updated_config = { - "label": label, - "show_label": show_label, - "container": container, - "scale": scale, - "min_width": min_width, - "visible": visible, - "value": value, - "__type__": "update", - } - return updated_config - - def postprocess(self, y: dict | list | str | None) -> dict | list | None: - """ - Parameters: - y: either a string filepath to a JSON file, or a Python list or dict that can be converted to JSON - Returns: - JSON output in Python list or dict format - """ - if y is None: - return None - if isinstance(y, str): - return json.loads(y) - else: - return y - - def style(self, *, container: bool | None = None, **kwargs): - """ - This method is deprecated. Please set these arguments in the constructor instead. 
- """ - warn_style_method_deprecation() - if container is not None: - self.container = container - return self diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio_client/cli/__init__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio_client/cli/__init__.py deleted file mode 100644 index 0c796253489147f941b78b5bb04a82935a72edab..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio_client/cli/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from gradio_client.cli import deploy_discord - -__all__ = ["deploy_discord"] diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/__init__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/__init__.py deleted file mode 100644 index e6b60c18caa05288676c98d09a9db1ea2be2731d..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -"""Read resources contained within a package.""" - -from ._common import ( - as_file, - files, - Package, -) - -from .abc import ResourceReader - - -__all__ = [ - 'Package', - 'ResourceReader', - 'as_file', - 'files', -] diff --git a/spaces/dcq/freegpt-webui/client/css/button.css b/spaces/dcq/freegpt-webui/client/css/button.css deleted file mode 100644 index 5f604a8460d048458249f78be9dc544ade84801e..0000000000000000000000000000000000000000 --- a/spaces/dcq/freegpt-webui/client/css/button.css +++ /dev/null @@ -1,26 +0,0 @@ -.button { - display: flex; - padding: 8px 12px; - align-items: center; - justify-content: center; - border: 1px solid var(--conversations); - border-radius: var(--border-radius-1); - width: 100%; - background: transparent; - cursor: pointer; -} - -.button span { - color: var(--colour-3); - font-size: 0.875rem; -} - -.button i::before { - margin-right: 8px; -} - -@media screen and (max-width: 990px) { - .button span { - font-size: 0.75rem; - } -} diff --git a/spaces/declare-lab/tango/diffusers/examples/community/stable_diffusion_controlnet_img2img.py b/spaces/declare-lab/tango/diffusers/examples/community/stable_diffusion_controlnet_img2img.py deleted file mode 100644 index a8a51b5489a3ab877012c1c843b720472fabd591..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/examples/community/stable_diffusion_controlnet_img2img.py +++ /dev/null @@ -1,989 +0,0 @@ -# Inspired by: https://github.com/haofanwang/ControlNet-for-Diffusers/ - -import inspect -from typing import Any, Callable, Dict, List, Optional, Tuple, Union - -import numpy as np -import PIL.Image -import torch -from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer - -from diffusers import AutoencoderKL, ControlNetModel, DiffusionPipeline, UNet2DConditionModel, logging -from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput, StableDiffusionSafetyChecker -from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_controlnet import MultiControlNetModel -from diffusers.schedulers import KarrasDiffusionSchedulers -from diffusers.utils import ( - PIL_INTERPOLATION, - is_accelerate_available, - is_accelerate_version, - randn_tensor, - replace_example_docstring, -) - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> import numpy 
as np - >>> import torch - >>> from PIL import Image - >>> from diffusers import ControlNetModel, UniPCMultistepScheduler - >>> from diffusers.utils import load_image - - >>> input_image = load_image("https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png") - - >>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) - - >>> pipe_controlnet = StableDiffusionControlNetImg2ImgPipeline.from_pretrained( - "runwayml/stable-diffusion-v1-5", - controlnet=controlnet, - safety_checker=None, - torch_dtype=torch.float16 - ) - - >>> pipe_controlnet.scheduler = UniPCMultistepScheduler.from_config(pipe_controlnet.scheduler.config) - >>> pipe_controlnet.enable_xformers_memory_efficient_attention() - >>> pipe_controlnet.enable_model_cpu_offload() - - # using image with edges for our canny controlnet - >>> control_image = load_image( - "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/vermeer_canny_edged.png") - - - >>> result_img = pipe_controlnet(controlnet_conditioning_image=control_image, - image=input_image, - prompt="an android robot, cyberpank, digitl art masterpiece", - num_inference_steps=20).images[0] - - >>> result_img.show() - ``` -""" - - -def prepare_image(image): - if isinstance(image, torch.Tensor): - # Batch single image - if image.ndim == 3: - image = image.unsqueeze(0) - - image = image.to(dtype=torch.float32) - else: - # preprocess image - if isinstance(image, (PIL.Image.Image, np.ndarray)): - image = [image] - - if isinstance(image, list) and isinstance(image[0], PIL.Image.Image): - image = [np.array(i.convert("RGB"))[None, :] for i in image] - image = np.concatenate(image, axis=0) - elif isinstance(image, list) and isinstance(image[0], np.ndarray): - image = np.concatenate([i[None, :] for i in image], axis=0) - - image = image.transpose(0, 3, 1, 2) - image = torch.from_numpy(image).to(dtype=torch.float32) / 127.5 - 1.0 - - return image - - -def prepare_controlnet_conditioning_image( - controlnet_conditioning_image, - width, - height, - batch_size, - num_images_per_prompt, - device, - dtype, - do_classifier_free_guidance, -): - if not isinstance(controlnet_conditioning_image, torch.Tensor): - if isinstance(controlnet_conditioning_image, PIL.Image.Image): - controlnet_conditioning_image = [controlnet_conditioning_image] - - if isinstance(controlnet_conditioning_image[0], PIL.Image.Image): - controlnet_conditioning_image = [ - np.array(i.resize((width, height), resample=PIL_INTERPOLATION["lanczos"]))[None, :] - for i in controlnet_conditioning_image - ] - controlnet_conditioning_image = np.concatenate(controlnet_conditioning_image, axis=0) - controlnet_conditioning_image = np.array(controlnet_conditioning_image).astype(np.float32) / 255.0 - controlnet_conditioning_image = controlnet_conditioning_image.transpose(0, 3, 1, 2) - controlnet_conditioning_image = torch.from_numpy(controlnet_conditioning_image) - elif isinstance(controlnet_conditioning_image[0], torch.Tensor): - controlnet_conditioning_image = torch.cat(controlnet_conditioning_image, dim=0) - - image_batch_size = controlnet_conditioning_image.shape[0] - - if image_batch_size == 1: - repeat_by = batch_size - else: - # image batch size is the same as prompt batch size - repeat_by = num_images_per_prompt - - controlnet_conditioning_image = controlnet_conditioning_image.repeat_interleave(repeat_by, dim=0) - - controlnet_conditioning_image = controlnet_conditioning_image.to(device=device, dtype=dtype) - - 
if do_classifier_free_guidance: - controlnet_conditioning_image = torch.cat([controlnet_conditioning_image] * 2) - - return controlnet_conditioning_image - - -class StableDiffusionControlNetImg2ImgPipeline(DiffusionPipeline): - """ - Inspired by: https://github.com/haofanwang/ControlNet-for-Diffusers/ - """ - - _optional_components = ["safety_checker", "feature_extractor"] - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - controlnet: Union[ControlNetModel, List[ControlNetModel], Tuple[ControlNetModel], MultiControlNetModel], - scheduler: KarrasDiffusionSchedulers, - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPImageProcessor, - requires_safety_checker: bool = True, - ): - super().__init__() - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." - ) - - if isinstance(controlnet, (list, tuple)): - controlnet = MultiControlNetModel(controlnet) - - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - controlnet=controlnet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - def enable_vae_slicing(self): - r""" - Enable sliced VAE decoding. - - When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several - steps. This is useful to save some memory and allow larger batch sizes. - """ - self.vae.enable_slicing() - - def disable_vae_slicing(self): - r""" - Disable sliced VAE decoding. If `enable_vae_slicing` was previously invoked, this method will go back to - computing decoding in one step. - """ - self.vae.disable_slicing() - - def enable_sequential_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, - text_encoder, vae, controlnet, and safety checker have their state dicts saved to CPU and then are moved to a - `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called. - Note that offloading happens on a submodule basis. Memory savings are higher than with - `enable_model_cpu_offload`, but performance is lower. 
- """ - if is_accelerate_available(): - from accelerate import cpu_offload - else: - raise ImportError("Please install accelerate via `pip install accelerate`") - - device = torch.device(f"cuda:{gpu_id}") - - for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae, self.controlnet]: - cpu_offload(cpu_offloaded_model, device) - - if self.safety_checker is not None: - cpu_offload(self.safety_checker, execution_device=device, offload_buffers=True) - - def enable_model_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared - to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward` - method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with - `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`. - """ - if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"): - from accelerate import cpu_offload_with_hook - else: - raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.") - - device = torch.device(f"cuda:{gpu_id}") - - hook = None - for cpu_offloaded_model in [self.text_encoder, self.unet, self.vae]: - _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook) - - if self.safety_checker is not None: - # the safety checker can offload the vae again - _, hook = cpu_offload_with_hook(self.safety_checker, device, prev_module_hook=hook) - - # control net hook has be manually offloaded as it alternates with unet - cpu_offload_with_hook(self.controlnet, device) - - # We'll offload the last model manually. - self.final_offload_hook = hook - - @property - def _execution_device(self): - r""" - Returns the device on which the pipeline's models will be executed. After calling - `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module - hooks. - """ - if not hasattr(self.unet, "_hf_hook"): - return self.device - for module in self.unet.modules(): - if ( - hasattr(module, "_hf_hook") - and hasattr(module._hf_hook, "execution_device") - and module._hf_hook.execution_device is not None - ): - return torch.device(module._hf_hook.execution_device) - return self.device - - def _encode_prompt( - self, - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt=None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - ): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `List[str]`, *optional*): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. 
- negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - """ - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - if prompt_embeds is None: - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( - text_input_ids, untruncated_ids - ): - removed_text = self.tokenizer.batch_decode( - untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1] - ) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = text_inputs.attention_mask.to(device) - else: - attention_mask = None - - prompt_embeds = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - prompt_embeds = prompt_embeds[0] - - prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - bs_embed, seq_len, _ = prompt_embeds.shape - # duplicate text embeddings for each generation per prompt, using mps friendly method - prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) - prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance and negative_prompt_embeds is None: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." 
- ) - else: - uncond_tokens = negative_prompt - - max_length = prompt_embeds.shape[1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = uncond_input.attention_mask.to(device) - else: - attention_mask = None - - negative_prompt_embeds = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - negative_prompt_embeds = negative_prompt_embeds[0] - - if do_classifier_free_guidance: - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = negative_prompt_embeds.shape[1] - - negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) - - return prompt_embeds - - def run_safety_checker(self, image, device, dtype): - if self.safety_checker is not None: - safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device) - image, has_nsfw_concept = self.safety_checker( - images=image, clip_input=safety_checker_input.pixel_values.to(dtype) - ) - else: - has_nsfw_concept = None - return image, has_nsfw_concept - - def decode_latents(self, latents): - latents = 1 / self.vae.config.scaling_factor * latents - image = self.vae.decode(latents).sample - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - return image - - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - def check_controlnet_conditioning_image(self, image, prompt, prompt_embeds): - image_is_pil = isinstance(image, PIL.Image.Image) - image_is_tensor = isinstance(image, torch.Tensor) - image_is_pil_list = isinstance(image, list) and isinstance(image[0], PIL.Image.Image) - image_is_tensor_list = isinstance(image, list) and isinstance(image[0], torch.Tensor) - - if not image_is_pil and not image_is_tensor and not image_is_pil_list and not image_is_tensor_list: - raise TypeError( - "image must be passed and be one of PIL image, torch tensor, list of PIL images, or list of torch tensors" - ) - - if image_is_pil: - image_batch_size = 1 - elif image_is_tensor: - image_batch_size = image.shape[0] - elif image_is_pil_list: - image_batch_size = len(image) - elif image_is_tensor_list: - image_batch_size = len(image) - else: - raise ValueError("controlnet condition image is not valid") - - if prompt is not None and isinstance(prompt, str): - prompt_batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - prompt_batch_size = len(prompt) - elif prompt_embeds is not None: - prompt_batch_size = prompt_embeds.shape[0] - else: - raise ValueError("prompt or prompt_embeds are not valid") - - if image_batch_size != 1 and image_batch_size != prompt_batch_size: - raise ValueError( - f"If image batch size is not 1, image batch size must be same as prompt batch size. image batch size: {image_batch_size}, prompt batch size: {prompt_batch_size}" - ) - - def check_inputs( - self, - prompt, - image, - controlnet_conditioning_image, - height, - width, - callback_steps, - negative_prompt=None, - prompt_embeds=None, - negative_prompt_embeds=None, - strength=None, - controlnet_guidance_start=None, - controlnet_guidance_end=None, - controlnet_conditioning_scale=None, - ): - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - if prompt is not None and prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" - " only forward one of the two." - ) - elif prompt is None and prompt_embeds is None: - raise ValueError( - "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." - ) - elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if negative_prompt is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. 
Please make sure to only forward one of the two." - ) - - if prompt_embeds is not None and negative_prompt_embeds is not None: - if prompt_embeds.shape != negative_prompt_embeds.shape: - raise ValueError( - "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" - f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" - f" {negative_prompt_embeds.shape}." - ) - - # check controlnet condition image - - if isinstance(self.controlnet, ControlNetModel): - self.check_controlnet_conditioning_image(controlnet_conditioning_image, prompt, prompt_embeds) - elif isinstance(self.controlnet, MultiControlNetModel): - if not isinstance(controlnet_conditioning_image, list): - raise TypeError("For multiple controlnets: `image` must be type `list`") - - if len(controlnet_conditioning_image) != len(self.controlnet.nets): - raise ValueError( - "For multiple controlnets: `image` must have the same length as the number of controlnets." - ) - - for image_ in controlnet_conditioning_image: - self.check_controlnet_conditioning_image(image_, prompt, prompt_embeds) - else: - assert False - - # Check `controlnet_conditioning_scale` - - if isinstance(self.controlnet, ControlNetModel): - if not isinstance(controlnet_conditioning_scale, float): - raise TypeError("For single controlnet: `controlnet_conditioning_scale` must be type `float`.") - elif isinstance(self.controlnet, MultiControlNetModel): - if isinstance(controlnet_conditioning_scale, list) and len(controlnet_conditioning_scale) != len( - self.controlnet.nets - ): - raise ValueError( - "For multiple controlnets: When `controlnet_conditioning_scale` is specified as `list`, it must have" - " the same length as the number of controlnets" - ) - else: - assert False - - if isinstance(image, torch.Tensor): - if image.ndim != 3 and image.ndim != 4: - raise ValueError("`image` must have 3 or 4 dimensions") - - if image.ndim == 3: - image_batch_size = 1 - image_channels, image_height, image_width = image.shape - elif image.ndim == 4: - image_batch_size, image_channels, image_height, image_width = image.shape - else: - assert False - - if image_channels != 3: - raise ValueError("`image` must have 3 channels") - - if image.min() < -1 or image.max() > 1: - raise ValueError("`image` should be in range [-1, 1]") - - if self.vae.config.latent_channels != self.unet.config.in_channels: - raise ValueError( - f"The config of `pipeline.unet` expects {self.unet.config.in_channels} but received" - f" latent channels: {self.vae.config.latent_channels}," - f" Please verify the config of `pipeline.unet` and the `pipeline.vae`" - ) - - if strength < 0 or strength > 1: - raise ValueError(f"The value of `strength` should in [0.0, 1.0] but is {strength}") - - if controlnet_guidance_start < 0 or controlnet_guidance_start > 1: - raise ValueError( - f"The value of `controlnet_guidance_start` should in [0.0, 1.0] but is {controlnet_guidance_start}" - ) - - if controlnet_guidance_end < 0 or controlnet_guidance_end > 1: - raise ValueError( - f"The value of `controlnet_guidance_end` should in [0.0, 1.0] but is {controlnet_guidance_end}" - ) - - if controlnet_guidance_start > controlnet_guidance_end: - raise ValueError( - "The value of `controlnet_guidance_start` should be less than `controlnet_guidance_end`, but got" - f" `controlnet_guidance_start` {controlnet_guidance_start} >= `controlnet_guidance_end` {controlnet_guidance_end}" - ) - - def get_timesteps(self, num_inference_steps, strength, device): - # get the original timestep 
using init_timestep - init_timestep = min(int(num_inference_steps * strength), num_inference_steps) - - t_start = max(num_inference_steps - init_timestep, 0) - timesteps = self.scheduler.timesteps[t_start:] - - return timesteps, num_inference_steps - t_start - - def prepare_latents(self, image, timestep, batch_size, num_images_per_prompt, dtype, device, generator=None): - if not isinstance(image, (torch.Tensor, PIL.Image.Image, list)): - raise ValueError( - f"`image` has to be of type `torch.Tensor`, `PIL.Image.Image` or list but is {type(image)}" - ) - - image = image.to(device=device, dtype=dtype) - - batch_size = batch_size * num_images_per_prompt - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." - ) - - if isinstance(generator, list): - init_latents = [ - self.vae.encode(image[i : i + 1]).latent_dist.sample(generator[i]) for i in range(batch_size) - ] - init_latents = torch.cat(init_latents, dim=0) - else: - init_latents = self.vae.encode(image).latent_dist.sample(generator) - - init_latents = self.vae.config.scaling_factor * init_latents - - if batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] == 0: - raise ValueError( - f"Cannot duplicate `image` of batch size {init_latents.shape[0]} to {batch_size} text prompts." - ) - else: - init_latents = torch.cat([init_latents], dim=0) - - shape = init_latents.shape - noise = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - - # get latents - init_latents = self.scheduler.add_noise(init_latents, noise, timestep) - latents = init_latents - - return latents - - def _default_height_width(self, height, width, image): - if isinstance(image, list): - image = image[0] - - if height is None: - if isinstance(image, PIL.Image.Image): - height = image.height - elif isinstance(image, torch.Tensor): - height = image.shape[3] - - height = (height // 8) * 8 # round down to nearest multiple of 8 - - if width is None: - if isinstance(image, PIL.Image.Image): - width = image.width - elif isinstance(image, torch.Tensor): - width = image.shape[2] - - width = (width // 8) * 8 # round down to nearest multiple of 8 - - return height, width - - @torch.no_grad() - @replace_example_docstring(EXAMPLE_DOC_STRING) - def __call__( - self, - prompt: Union[str, List[str]] = None, - image: Union[torch.Tensor, PIL.Image.Image] = None, - controlnet_conditioning_image: Union[ - torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image] - ] = None, - strength: float = 0.8, - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - controlnet_conditioning_scale: Union[float, List[float]] = 1.0, - controlnet_guidance_start: float = 0.0, - 
controlnet_guidance_end: float = 1.0, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. - instead. - image (`torch.Tensor` or `PIL.Image.Image`): - `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will - be masked out with `mask_image` and repainted according to `prompt`. - controlnet_conditioning_image (`torch.FloatTensor`, `PIL.Image.Image`, `List[torch.FloatTensor]` or `List[PIL.Image.Image]`): - The ControlNet input condition. ControlNet uses this input condition to generate guidance to Unet. If - the type is specified as `Torch.FloatTensor`, it is passed to ControlNet as is. PIL.Image.Image` can - also be accepted as an image. The control image is automatically resized to fit the output image. - strength (`float`, *optional*): - Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image` - will be used as a starting point, adding more noise to it the larger the `strength`. The number of - denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will - be maximum and the denoising process will run for the full number of iterations specified in - `num_inference_steps`. A value of 1, therefore, essentially ignores `image`. - height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. 
- prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - cross_attention_kwargs (`dict`, *optional*): - A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under - `self.processor` in - [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py). - controlnet_conditioning_scale (`float`, *optional*, defaults to 1.0): - The outputs of the controlnet are multiplied by `controlnet_conditioning_scale` before they are added - to the residual in the original unet. - controlnet_guidance_start ('float', *optional*, defaults to 0.0): - The percentage of total steps the controlnet starts applying. Must be between 0 and 1. - controlnet_guidance_end ('float', *optional*, defaults to 1.0): - The percentage of total steps the controlnet ends applying. Must be between 0 and 1. Must be greater - than `controlnet_guidance_start`. - - Examples: - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - # 0. Default height and width to unet - height, width = self._default_height_width(height, width, controlnet_conditioning_image) - - # 1. Check inputs. Raise error if not correct - self.check_inputs( - prompt, - image, - controlnet_conditioning_image, - height, - width, - callback_steps, - negative_prompt, - prompt_embeds, - negative_prompt_embeds, - strength, - controlnet_guidance_start, - controlnet_guidance_end, - controlnet_conditioning_scale, - ) - - # 2. Define call parameters - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . 
`guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - if isinstance(self.controlnet, MultiControlNetModel) and isinstance(controlnet_conditioning_scale, float): - controlnet_conditioning_scale = [controlnet_conditioning_scale] * len(self.controlnet.nets) - - # 3. Encode input prompt - prompt_embeds = self._encode_prompt( - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - ) - - # 4. Prepare image, and controlnet_conditioning_image - image = prepare_image(image) - - # condition image(s) - if isinstance(self.controlnet, ControlNetModel): - controlnet_conditioning_image = prepare_controlnet_conditioning_image( - controlnet_conditioning_image=controlnet_conditioning_image, - width=width, - height=height, - batch_size=batch_size * num_images_per_prompt, - num_images_per_prompt=num_images_per_prompt, - device=device, - dtype=self.controlnet.dtype, - do_classifier_free_guidance=do_classifier_free_guidance, - ) - elif isinstance(self.controlnet, MultiControlNetModel): - controlnet_conditioning_images = [] - - for image_ in controlnet_conditioning_image: - image_ = prepare_controlnet_conditioning_image( - controlnet_conditioning_image=image_, - width=width, - height=height, - batch_size=batch_size * num_images_per_prompt, - num_images_per_prompt=num_images_per_prompt, - device=device, - dtype=self.controlnet.dtype, - do_classifier_free_guidance=do_classifier_free_guidance, - ) - - controlnet_conditioning_images.append(image_) - - controlnet_conditioning_image = controlnet_conditioning_images - else: - assert False - - # 5. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, device) - latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt) - - # 6. Prepare latent variables - latents = self.prepare_latents( - image, - latent_timestep, - batch_size, - num_images_per_prompt, - prompt_embeds.dtype, - device, - generator, - ) - - # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 8. 
Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # compute the percentage of total steps we are at - current_sampling_percent = i / len(timesteps) - - if ( - current_sampling_percent < controlnet_guidance_start - or current_sampling_percent > controlnet_guidance_end - ): - # do not apply the controlnet - down_block_res_samples = None - mid_block_res_sample = None - else: - # apply the controlnet - down_block_res_samples, mid_block_res_sample = self.controlnet( - latent_model_input, - t, - encoder_hidden_states=prompt_embeds, - controlnet_cond=controlnet_conditioning_image, - conditioning_scale=controlnet_conditioning_scale, - return_dict=False, - ) - - # predict the noise residual - noise_pred = self.unet( - latent_model_input, - t, - encoder_hidden_states=prompt_embeds, - cross_attention_kwargs=cross_attention_kwargs, - down_block_additional_residuals=down_block_res_samples, - mid_block_additional_residual=mid_block_res_sample, - ).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # If we do sequential model offloading, let's offload unet and controlnet - # manually for max memory savings - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.unet.to("cpu") - self.controlnet.to("cpu") - torch.cuda.empty_cache() - - if output_type == "latent": - image = latents - has_nsfw_concept = None - elif output_type == "pil": - # 8. Post-processing - image = self.decode_latents(latents) - - # 9. Run safety checker - image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) - - # 10. Convert to PIL - image = self.numpy_to_pil(image) - else: - # 8. Post-processing - image = self.decode_latents(latents) - - # 9. 
Run safety checker - image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) - - # Offload last model to CPU - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.final_offload_hook.offload() - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/declare-lab/tango/diffusers/examples/community/text_inpainting.py b/spaces/declare-lab/tango/diffusers/examples/community/text_inpainting.py deleted file mode 100644 index 99a488788a0de6db78ae7c2c89038565efd29551..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/examples/community/text_inpainting.py +++ /dev/null @@ -1,302 +0,0 @@ -from typing import Callable, List, Optional, Union - -import PIL -import torch -from transformers import ( - CLIPImageProcessor, - CLIPSegForImageSegmentation, - CLIPSegProcessor, - CLIPTextModel, - CLIPTokenizer, -) - -from diffusers import DiffusionPipeline -from diffusers.configuration_utils import FrozenDict -from diffusers.models import AutoencoderKL, UNet2DConditionModel -from diffusers.pipelines.stable_diffusion import StableDiffusionInpaintPipeline -from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker -from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler -from diffusers.utils import deprecate, is_accelerate_available, logging - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -class TextInpainting(DiffusionPipeline): - r""" - Pipeline for text based inpainting using Stable Diffusion. - Uses CLIPSeg to get a mask from the given text, then calls the Inpainting pipeline with the generated mask - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - segmentation_model ([`CLIPSegForImageSegmentation`]): - CLIPSeg Model to generate mask from the given text. Please refer to the [model card]() for details. - segmentation_processor ([`CLIPSegProcessor`]): - CLIPSeg processor to get image, text features to translate prompt to English, if necessary. Please refer to the - [model card](https://huggingface.co/docs/transformers/model_doc/clipseg) for details. - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latens. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. 
- Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. - feature_extractor ([`CLIPImageProcessor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. - """ - - def __init__( - self, - segmentation_model: CLIPSegForImageSegmentation, - segmentation_processor: CLIPSegProcessor, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler], - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPImageProcessor, - ): - super().__init__() - - if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`" - f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure " - "to update the config accordingly as leaving `steps_offset` might led to incorrect results" - " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub," - " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`" - " file" - ) - deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["steps_offset"] = 1 - scheduler._internal_dict = FrozenDict(new_config) - - if hasattr(scheduler.config, "skip_prk_steps") and scheduler.config.skip_prk_steps is False: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} has not set the configuration" - " `skip_prk_steps`. `skip_prk_steps` should be set to True in the configuration file. Please make" - " sure to update the config accordingly as not setting `skip_prk_steps` in the config might lead to" - " incorrect results in future versions. If you have downloaded this checkpoint from the Hugging Face" - " Hub, it would be very nice if you could open a Pull request for the" - " `scheduler/scheduler_config.json` file" - ) - deprecate("skip_prk_steps not set", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["skip_prk_steps"] = True - scheduler._internal_dict = FrozenDict(new_config) - - if safety_checker is None: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - self.register_modules( - segmentation_model=segmentation_model, - segmentation_processor=segmentation_processor, - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - - def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"): - r""" - Enable sliced attention computation. - - When this option is enabled, the attention module will split the input tensor in slices, to compute attention - in several steps. 
This is useful to save some memory in exchange for a small speed decrease. - - Args: - slice_size (`str` or `int`, *optional*, defaults to `"auto"`): - When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If - a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case, - `attention_head_dim` must be a multiple of `slice_size`. - """ - if slice_size == "auto": - # half the attention head size is usually a good trade-off between - # speed and memory - slice_size = self.unet.config.attention_head_dim // 2 - self.unet.set_attention_slice(slice_size) - - def disable_attention_slicing(self): - r""" - Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go - back to computing attention in one step. - """ - # set slice_size = `None` to disable `attention slicing` - self.enable_attention_slicing(None) - - def enable_sequential_cpu_offload(self): - r""" - Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, - text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a - `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called. - """ - if is_accelerate_available(): - from accelerate import cpu_offload - else: - raise ImportError("Please install accelerate via `pip install accelerate`") - - device = torch.device("cuda") - - for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae, self.safety_checker]: - if cpu_offloaded_model is not None: - cpu_offload(cpu_offloaded_model, device) - - @property - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device - def _execution_device(self): - r""" - Returns the device on which the pipeline's models will be executed. After calling - `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module - hooks. - """ - if self.device != torch.device("meta") or not hasattr(self.unet, "_hf_hook"): - return self.device - for module in self.unet.modules(): - if ( - hasattr(module, "_hf_hook") - and hasattr(module._hf_hook, "execution_device") - and module._hf_hook.execution_device is not None - ): - return torch.device(module._hf_hook.execution_device) - return self.device - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - image: Union[torch.FloatTensor, PIL.Image.Image], - text: str, - height: int = 512, - width: int = 512, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[torch.Generator] = None, - latents: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - **kwargs, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - image (`PIL.Image.Image`): - `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will - be masked out with `mask_image` and repainted according to `prompt`. - text (`str``): - The text to use to generate the mask. 
- height (`int`, *optional*, defaults to 512): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to 512): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator`, *optional*): - A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation - deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. 
- """ - - # We use the input text to generate the mask - inputs = self.segmentation_processor( - text=[text], images=[image], padding="max_length", return_tensors="pt" - ).to(self.device) - outputs = self.segmentation_model(**inputs) - mask = torch.sigmoid(outputs.logits).cpu().detach().unsqueeze(-1).numpy() - mask_pil = self.numpy_to_pil(mask)[0].resize(image.size) - - # Run inpainting pipeline with the generated mask - inpainting_pipeline = StableDiffusionInpaintPipeline( - vae=self.vae, - text_encoder=self.text_encoder, - tokenizer=self.tokenizer, - unet=self.unet, - scheduler=self.scheduler, - safety_checker=self.safety_checker, - feature_extractor=self.feature_extractor, - ) - return inpainting_pipeline( - prompt=prompt, - image=image, - mask_image=mask_pil, - height=height, - width=width, - num_inference_steps=num_inference_steps, - guidance_scale=guidance_scale, - negative_prompt=negative_prompt, - num_images_per_prompt=num_images_per_prompt, - eta=eta, - generator=generator, - latents=latents, - output_type=output_type, - return_dict=return_dict, - callback=callback, - callback_steps=callback_steps, - ) diff --git a/spaces/declare-lab/tango/diffusers/examples/research_projects/onnxruntime/textual_inversion/README.md b/spaces/declare-lab/tango/diffusers/examples/research_projects/onnxruntime/textual_inversion/README.md deleted file mode 100644 index 0ed34966e9f1836d9744edf77f46c84bb8609e97..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/examples/research_projects/onnxruntime/textual_inversion/README.md +++ /dev/null @@ -1,82 +0,0 @@ -## Textual Inversion fine-tuning example - -[Textual inversion](https://arxiv.org/abs/2208.01618) is a method to personalize text2image models like stable diffusion on your own images using just 3-5 examples. -The `textual_inversion.py` script shows how to implement the training procedure and adapt it for stable diffusion. - -## Running on Colab - -Colab for training -[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb) - -Colab for inference -[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) - -## Running locally with PyTorch -### Installing the dependencies - -Before running the scripts, make sure to install the library's training dependencies: - -**Important** - -To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment: -```bash -git clone https://github.com/huggingface/diffusers -cd diffusers -pip install . -``` - -Then cd in the example folder and run -```bash -pip install -r requirements.txt -``` - -And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with: - -```bash -accelerate config -``` - - -### Cat toy example - -You need to accept the model license before downloading or using the weights. In this example we'll use model version `v1-5`, so you'll need to visit [its card](https://huggingface.co/runwayml/stable-diffusion-v1-5), read the license and tick the checkbox if you agree. 
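
Before moving on, you can optionally confirm that your account really has access to the gated checkpoint. This check is not part of the original example; it is a minimal sketch that assumes the `huggingface_hub` package is installed and that you have already authenticated (see the login step below):

```python
from huggingface_hub import model_info

# For a gated repository, this call fails with an authorization error
# if the license has not been accepted or no valid token is available.
info = model_info("runwayml/stable-diffusion-v1-5")
print("Checkpoint is accessible, revision:", info.sha)
```

If the call raises an error, revisit the model card and make sure the license checkbox was ticked with the same account that issued your token.
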
- -You have to be a registered user in 🤗 Hugging Face Hub, and you'll also need to use an access token for the code to work. For more information on access tokens, please refer to [this section of the documentation](https://huggingface.co/docs/hub/security-tokens). - -Run the following command to authenticate your token - -```bash -huggingface-cli login -``` - -If you have already cloned the repo, then you won't need to go through these steps. - -
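
As an alternative to the interactive `huggingface-cli login` flow (for example on a headless machine or in CI), you can also authenticate from Python. This is only a hedged sketch, not part of the original example; it assumes your access token is exported in an environment variable named `HF_TOKEN`:

```python
import os

from huggingface_hub import login

# Reads the token from the environment instead of prompting interactively
# and stores it in the local Hugging Face credentials cache.
login(token=os.environ["HF_TOKEN"])
```

Either way, the training command below should then be able to download the pretrained weights without further prompts.
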
    - -Now let's get our dataset.Download 3-4 images from [here](https://drive.google.com/drive/folders/1fmJMs25nxS_rSNqS5hTcRdLem_YQXbq5) and save them in a directory. This will be our training data. - -## Use ONNXRuntime to accelerate training -In order to leverage onnxruntime to accelerate training, please use textual_inversion.py - -The command to train on custom data with onnxruntime: - -```bash -export MODEL_NAME="runwayml/stable-diffusion-v1-5" -export DATA_DIR="path-to-dir-containing-images" - -accelerate launch textual_inversion.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --train_data_dir=$DATA_DIR \ - --learnable_property="object" \ - --placeholder_token="" --initializer_token="toy" \ - --resolution=512 \ - --train_batch_size=1 \ - --gradient_accumulation_steps=4 \ - --max_train_steps=3000 \ - --learning_rate=5.0e-04 --scale_lr \ - --lr_scheduler="constant" \ - --lr_warmup_steps=0 \ - --output_dir="textual_inversion_cat" -``` - -Please contact Prathik Rao (prathikr), Sunghoon Choi (hanbitmyths), Ashwini Khade (askhade), or Peng Wang (pengwa) on github with any questions. \ No newline at end of file diff --git a/spaces/deepset/search-all-the-docs/main.py b/spaces/deepset/search-all-the-docs/main.py deleted file mode 100644 index 1b6db20504e3a6deaa2dce1bc719e985ddfe9c6b..0000000000000000000000000000000000000000 --- a/spaces/deepset/search-all-the-docs/main.py +++ /dev/null @@ -1,208 +0,0 @@ -from typing import List, Tuple -from pathlib import Path -import os -import subprocess - -from dotenv import load_dotenv -from haystack.preview import Pipeline -from haystack.preview.dataclasses import GeneratedAnswer -from haystack.preview.components.retrievers import MemoryBM25Retriever -from haystack.preview.components.generators.openai.gpt import GPTGenerator -from haystack.preview.components.builders.answer_builder import AnswerBuilder -from haystack.preview.components.builders.prompt_builder import PromptBuilder -from haystack.preview.components.preprocessors import ( - DocumentCleaner, - TextDocumentSplitter, -) -from haystack.preview.components.writers import DocumentWriter -from haystack.preview.components.file_converters import TextFileToDocument -from haystack.preview.document_stores.memory import MemoryDocumentStore -import streamlit as st - -# Load the environment variables, we're going to need it for OpenAI -load_dotenv() - -# This is the list of documentation that we're going to fetch -DOCUMENTATIONS = [ - ( - "DocArray", - "https://github.com/docarray/docarray", - "./docs/**/*.md", - ), - ( - "Streamlit", - "https://github.com/streamlit/docs", - "./content/**/*.md", - ), - ( - "Jinja", - "https://github.com/pallets/jinja", - "./docs/**/*.rst", - ), - ( - "Pandas", - "https://github.com/pandas-dev/pandas", - "./doc/source/**/*.rst", - ), - ( - "Elasticsearch", - "https://github.com/elastic/elasticsearch", - "./docs/**/*.asciidoc", - ), - ( - "NumPy", - "https://github.com/numpy/numpy", - "./doc/**/*.rst", - ), -] - -DOCS_PATH = Path(__file__).parent / "downloaded_docs" - - -@st.cache_data(show_spinner=False) -def fetch(documentations: List[Tuple[str, str, str]]): - files = [] - # Create the docs path if it doesn't exist - DOCS_PATH.mkdir(parents=True, exist_ok=True) - - for name, url, pattern in documentations: - st.write(f"Fetching {name} repository") - repo = DOCS_PATH / name - # Attempt cloning only if it doesn't exist - if not repo.exists(): - subprocess.run(["git", "clone", "--depth", "1", url, str(repo)], check=True) - res = subprocess.run( - ["git", "rev-parse", 
"--abbrev-ref", "HEAD"], - check=True, - capture_output=True, - encoding="utf-8", - cwd=repo, - ) - branch = res.stdout.strip() - for p in repo.glob(pattern): - data = { - "path": p, - "metadata": { - "url_source": f"{url}/tree/{branch}/{p.relative_to(repo)}", - "suffix": p.suffix, - }, - } - files.append(data) - - return files - - -@st.cache_resource(show_spinner=False) -def document_store(): - # We're going to store the processed documents in here - return MemoryDocumentStore() - - -@st.cache_resource(show_spinner=False) -def index_files(files): - # We create some components - text_converter = TextFileToDocument(progress_bar=False) - document_cleaner = DocumentCleaner() - document_splitter = TextDocumentSplitter() - document_writer = DocumentWriter( - document_store=document_store(), policy="overwrite" - ) - - # And our pipeline - indexing_pipeline = Pipeline() - indexing_pipeline.add_component("converter", text_converter) - indexing_pipeline.add_component("cleaner", document_cleaner) - indexing_pipeline.add_component("splitter", document_splitter) - indexing_pipeline.add_component("writer", document_writer) - indexing_pipeline.connect("converter", "cleaner") - indexing_pipeline.connect("cleaner", "splitter") - indexing_pipeline.connect("splitter", "writer") - - # And now we save the documentation in our MemoryDocumentStore - paths = [] - metadata = [] - for f in files: - paths.append(f["path"]) - metadata.append(f["metadata"]) - indexing_pipeline.run( - { - "converter": { - "paths": paths, - "metadata": metadata, - } - } - ) - - -def search(question: str) -> GeneratedAnswer: - retriever = MemoryBM25Retriever(document_store=document_store(), top_k=5) - - template = ( - "Take a deep breath and think then answer given the context" - "Context: {{ documents|map(attribute='text')|replace('\n', ' ')|join(';') }}" - "Question: {{ query }}" - "Answer:" - ) - prompt_builder = PromptBuilder(template) - - OPENAI_API_KEY = os.getenv("OPENAI_API_KEY", "") - generator = GPTGenerator(api_key=OPENAI_API_KEY) - answer_builder = AnswerBuilder() - - query_pipeline = Pipeline() - - query_pipeline.add_component("docs_retriever", retriever) - query_pipeline.add_component("prompt_builder", prompt_builder) - query_pipeline.add_component("gpt35", generator) - query_pipeline.add_component("answer_builder", answer_builder) - - query_pipeline.connect("docs_retriever.documents", "prompt_builder.documents") - query_pipeline.connect("prompt_builder.prompt", "gpt35.prompt") - query_pipeline.connect("docs_retriever.documents", "answer_builder.documents") - query_pipeline.connect("gpt35.replies", "answer_builder.replies") - res = query_pipeline.run( - { - "docs_retriever": {"query": question}, - "prompt_builder": {"query": question}, - "answer_builder": {"query": question}, - } - ) - return res["answer_builder"]["answers"][0] - - -with st.status( - "Downloading documentation files...", - expanded=st.session_state.get("expanded", True), -) as status: - files = fetch(DOCUMENTATIONS) - status.update(label="Indexing documentation...") - index_files(files) - status.update( - label="Download and indexing complete!", state="complete", expanded=False - ) - st.session_state["expanded"] = False - - -st.header("🔎 Documentation finder", divider="rainbow") - -st.caption( - f"Use this to search answers for {', '.join([d[0] for d in DOCUMENTATIONS])}" -) - -if question := st.text_input( - label="What do you need to know?", placeholder="What is a DataFrame?" 
-): - with st.spinner("Waiting"): - answer = search(question) - - if not st.session_state.get("run_once", False): - st.balloons() - st.session_state["run_once"] = True - - st.markdown(answer.data) - with st.expander("See sources:"): - for document in answer.documents: - url_source = document.metadata.get("url_source", "") - st.write(url_source) - st.text(document.text) - st.divider() diff --git a/spaces/deepwisdom/MetaGPT/metagpt/actions/write_prd_review.py b/spaces/deepwisdom/MetaGPT/metagpt/actions/write_prd_review.py deleted file mode 100644 index 5ff9624c5b14473667ea7ef246b321a76708bdc6..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/metagpt/actions/write_prd_review.py +++ /dev/null @@ -1,27 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/11 17:45 -@Author : alexanderwu -@File : write_prd_review.py -""" -from metagpt.actions.action import Action - - -class WritePRDReview(Action): - def __init__(self, name, context=None, llm=None): - super().__init__(name, context, llm) - self.prd = None - self.desc = "Based on the PRD, conduct a PRD Review, providing clear and detailed feedback" - self.prd_review_prompt_template = """ - Given the following Product Requirement Document (PRD): - {prd} - - As a project manager, please review it and provide your feedback and suggestions. - """ - - async def run(self, prd): - self.prd = prd - prompt = self.prd_review_prompt_template.format(prd=self.prd) - review = await self._aask(prompt) - return review diff --git a/spaces/deinferno/Latent_Consistency_Model_OpenVino_CPU/lcm_scheduler.py b/spaces/deinferno/Latent_Consistency_Model_OpenVino_CPU/lcm_scheduler.py deleted file mode 100644 index 73ca9671d8e3cd83f1c6f0e0e35df36ba446916b..0000000000000000000000000000000000000000 --- a/spaces/deinferno/Latent_Consistency_Model_OpenVino_CPU/lcm_scheduler.py +++ /dev/null @@ -1,529 +0,0 @@ -# Copyright 2023 Stanford University Team and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# DISCLAIMER: This code is strongly influenced by https://github.com/pesser/pytorch_diffusion -# and https://github.com/hojonathanho/diffusion - -import math -from dataclasses import dataclass -from typing import List, Optional, Tuple, Union - -import numpy as np -import torch - -from diffusers.configuration_utils import ConfigMixin, register_to_config -from diffusers.utils import BaseOutput, logging -from diffusers.utils.torch_utils import randn_tensor -from diffusers.schedulers.scheduling_utils import SchedulerMixin - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -@dataclass -class LCMSchedulerOutput(BaseOutput): - """ - Output class for the scheduler's `step` function output. - - Args: - prev_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images): - Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the - denoising loop. 
- pred_original_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images): - The predicted denoised sample `(x_{0})` based on the model output from the current timestep. - `pred_original_sample` can be used to preview progress or for guidance. - """ - - prev_sample: torch.FloatTensor - denoised: Optional[torch.FloatTensor] = None - - -# Copied from diffusers.schedulers.scheduling_ddpm.betas_for_alpha_bar -def betas_for_alpha_bar( - num_diffusion_timesteps, - max_beta=0.999, - alpha_transform_type="cosine", -): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of - (1-beta) over time from t = [0,1]. - - Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up - to that part of the diffusion process. - - - Args: - num_diffusion_timesteps (`int`): the number of betas to produce. - max_beta (`float`): the maximum beta to use; use values lower than 1 to - prevent singularities. - alpha_transform_type (`str`, *optional*, default to `cosine`): the type of noise schedule for alpha_bar. - Choose from `cosine` or `exp` - - Returns: - betas (`np.ndarray`): the betas used by the scheduler to step the model outputs - """ - if alpha_transform_type == "cosine": - - def alpha_bar_fn(t): - return math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2 - - elif alpha_transform_type == "exp": - - def alpha_bar_fn(t): - return math.exp(t * -12.0) - - else: - raise ValueError(f"Unsupported alpha_tranform_type: {alpha_transform_type}") - - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar_fn(t2) / alpha_bar_fn(t1), max_beta)) - return torch.tensor(betas, dtype=torch.float32) - - -# Copied from diffusers.schedulers.scheduling_ddim.rescale_zero_terminal_snr -def rescale_zero_terminal_snr(betas: torch.FloatTensor) -> torch.FloatTensor: - """ - Rescales betas to have zero terminal SNR Based on https://arxiv.org/pdf/2305.08891.pdf (Algorithm 1) - - - Args: - betas (`torch.FloatTensor`): - the betas that the scheduler is being initialized with. - - Returns: - `torch.FloatTensor`: rescaled betas with zero terminal SNR - """ - # Convert betas to alphas_bar_sqrt - alphas = 1.0 - betas - alphas_cumprod = torch.cumprod(alphas, dim=0) - alphas_bar_sqrt = alphas_cumprod.sqrt() - - # Store old values. - alphas_bar_sqrt_0 = alphas_bar_sqrt[0].clone() - alphas_bar_sqrt_T = alphas_bar_sqrt[-1].clone() - - # Shift so the last timestep is zero. - alphas_bar_sqrt -= alphas_bar_sqrt_T - - # Scale so the first timestep is back to the old value. - alphas_bar_sqrt *= alphas_bar_sqrt_0 / (alphas_bar_sqrt_0 - alphas_bar_sqrt_T) - - # Convert alphas_bar_sqrt to betas - alphas_bar = alphas_bar_sqrt**2 # Revert sqrt - alphas = alphas_bar[1:] / alphas_bar[:-1] # Revert cumprod - alphas = torch.cat([alphas_bar[0:1], alphas]) - betas = 1 - alphas - - return betas - - -class LCMScheduler(SchedulerMixin, ConfigMixin): - """ - `LCMScheduler` extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with - non-Markovian guidance. - - This model inherits from [`SchedulerMixin`] and [`ConfigMixin`]. [`~ConfigMixin`] takes care of storing all config - attributes that are passed in the scheduler's `__init__` function, such as `num_train_timesteps`. They can be - accessed via `scheduler.config.num_train_timesteps`. 
[`SchedulerMixin`] provides general loading and saving - functionality via the [`SchedulerMixin.save_pretrained`] and [`~SchedulerMixin.from_pretrained`] functions. - - Args: - num_train_timesteps (`int`, defaults to 1000): - The number of diffusion steps to train the model. - beta_start (`float`, defaults to 0.0001): - The starting `beta` value of inference. - beta_end (`float`, defaults to 0.02): - The final `beta` value. - beta_schedule (`str`, defaults to `"linear"`): - The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from - `linear`, `scaled_linear`, or `squaredcos_cap_v2`. - trained_betas (`np.ndarray`, *optional*): - Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`. - original_inference_steps (`int`, *optional*, defaults to 50): - The default number of inference steps used to generate a linearly-spaced timestep schedule, from which we - will ultimately take `num_inference_steps` evenly spaced timesteps to form the final timestep schedule. - clip_sample (`bool`, defaults to `True`): - Clip the predicted sample for numerical stability. - clip_sample_range (`float`, defaults to 1.0): - The maximum magnitude for sample clipping. Valid only when `clip_sample=True`. - set_alpha_to_one (`bool`, defaults to `True`): - Each diffusion step uses the alphas product value at that step and at the previous one. For the final step - there is no previous alpha. When this option is `True` the previous alpha product is fixed to `1`, - otherwise it uses the alpha value at step 0. - steps_offset (`int`, defaults to 0): - An offset added to the inference steps. You can use a combination of `offset=1` and - `set_alpha_to_one=False` to make the last step use step 0 for the previous alpha product like in Stable - Diffusion. - prediction_type (`str`, defaults to `epsilon`, *optional*): - Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process), - `sample` (directly predicts the noisy sample`) or `v_prediction` (see section 2.4 of [Imagen - Video](https://imagen.research.google/video/paper.pdf) paper). - thresholding (`bool`, defaults to `False`): - Whether to use the "dynamic thresholding" method. This is unsuitable for latent-space diffusion models such - as Stable Diffusion. - dynamic_thresholding_ratio (`float`, defaults to 0.995): - The ratio for the dynamic thresholding method. Valid only when `thresholding=True`. - sample_max_value (`float`, defaults to 1.0): - The threshold value for dynamic thresholding. Valid only when `thresholding=True`. - timestep_spacing (`str`, defaults to `"leading"`): - The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and - Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information. - rescale_betas_zero_snr (`bool`, defaults to `False`): - Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and - dark samples instead of limiting it to samples with medium brightness. Loosely related to - [`--offset_noise`](https://github.com/huggingface/diffusers/blob/74fd735eb073eb1d774b1ab4154a0876eb82f055/examples/dreambooth/train_dreambooth.py#L506). 
- """ - - order = 1 - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 1000, - beta_start: float = 0.00085, - beta_end: float = 0.012, - beta_schedule: str = "scaled_linear", - trained_betas: Optional[Union[np.ndarray, List[float]]] = None, - original_inference_steps: int = 50, - clip_sample: bool = False, - clip_sample_range: float = 1.0, - set_alpha_to_one: bool = True, - steps_offset: int = 0, - prediction_type: str = "epsilon", - thresholding: bool = False, - dynamic_thresholding_ratio: float = 0.995, - sample_max_value: float = 1.0, - timestep_spacing: str = "leading", - rescale_betas_zero_snr: bool = False, - ): - if trained_betas is not None: - self.betas = torch.tensor(trained_betas, dtype=torch.float32) - elif beta_schedule == "linear": - self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32) - elif beta_schedule == "scaled_linear": - # this schedule is very specific to the latent diffusion model. - self.betas = ( - torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2 - ) - elif beta_schedule == "squaredcos_cap_v2": - # Glide cosine schedule - self.betas = betas_for_alpha_bar(num_train_timesteps) - else: - raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}") - - # Rescale for zero SNR - if rescale_betas_zero_snr: - self.betas = rescale_zero_terminal_snr(self.betas) - - self.alphas = 1.0 - self.betas - self.alphas_cumprod = torch.cumprod(self.alphas, dim=0) - - # At every step in ddim, we are looking into the previous alphas_cumprod - # For the final step, there is no previous alphas_cumprod because we are already at 0 - # `set_alpha_to_one` decides whether we set this parameter simply to one or - # whether we use the final alpha of the "non-previous" one. - self.final_alpha_cumprod = torch.tensor(1.0) if set_alpha_to_one else self.alphas_cumprod[0] - - # standard deviation of the initial noise distribution - self.init_noise_sigma = 1.0 - - # setable values - self.num_inference_steps = None - self.timesteps = torch.from_numpy(np.arange(0, num_train_timesteps)[::-1].copy().astype(np.int64)) - - self._step_index = None - - # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler._init_step_index - def _init_step_index(self, timestep): - if isinstance(timestep, torch.Tensor): - timestep = timestep.to(self.timesteps.device) - - index_candidates = (self.timesteps == timestep).nonzero() - - # The sigma index that is taken for the **very** first `step` - # is always the second index (or the last index if there is only 1) - # This way we can ensure we don't accidentally skip a sigma in - # case we start in the middle of the denoising schedule (e.g. for image-to-image) - if len(index_candidates) > 1: - step_index = index_candidates[1] - else: - step_index = index_candidates[0] - - self._step_index = step_index.item() - - @property - def step_index(self): - return self._step_index - - def scale_model_input(self, sample: torch.FloatTensor, timestep: Optional[int] = None) -> torch.FloatTensor: - """ - Ensures interchangeability with schedulers that need to scale the denoising model input depending on the - current timestep. - - Args: - sample (`torch.FloatTensor`): - The input sample. - timestep (`int`, *optional*): - The current timestep in the diffusion chain. - Returns: - `torch.FloatTensor`: - A scaled input sample. 
- """ - return sample - - # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler._threshold_sample - def _threshold_sample(self, sample: torch.FloatTensor) -> torch.FloatTensor: - """ - "Dynamic thresholding: At each sampling step we set s to a certain percentile absolute pixel value in xt0 (the - prediction of x_0 at timestep t), and if s > 1, then we threshold xt0 to the range [-s, s] and then divide by - s. Dynamic thresholding pushes saturated pixels (those near -1 and 1) inwards, thereby actively preventing - pixels from saturation at each step. We find that dynamic thresholding results in significantly better - photorealism as well as better image-text alignment, especially when using very large guidance weights." - - https://arxiv.org/abs/2205.11487 - """ - dtype = sample.dtype - batch_size, channels, *remaining_dims = sample.shape - - if dtype not in (torch.float32, torch.float64): - sample = sample.float() # upcast for quantile calculation, and clamp not implemented for cpu half - - # Flatten sample for doing quantile calculation along each image - sample = sample.reshape(batch_size, channels * np.prod(remaining_dims)) - - abs_sample = sample.abs() # "a certain percentile absolute pixel value" - - s = torch.quantile(abs_sample, self.config.dynamic_thresholding_ratio, dim=1) - s = torch.clamp( - s, min=1, max=self.config.sample_max_value - ) # When clamped to min=1, equivalent to standard clipping to [-1, 1] - s = s.unsqueeze(1) # (batch_size, 1) because clamp will broadcast along dim=0 - sample = torch.clamp(sample, -s, s) / s # "we threshold xt0 to the range [-s, s] and then divide by s" - - sample = sample.reshape(batch_size, channels, *remaining_dims) - sample = sample.to(dtype) - - return sample - - def set_timesteps( - self, - num_inference_steps: int, - device: Union[str, torch.device] = None, - original_inference_steps: Optional[int] = None, - ): - """ - Sets the discrete timesteps used for the diffusion chain (to be run before inference). - - Args: - num_inference_steps (`int`): - The number of diffusion steps used when generating samples with a pre-trained model. - device (`str` or `torch.device`, *optional*): - The device to which the timesteps should be moved to. If `None`, the timesteps are not moved. - original_inference_steps (`int`, *optional*): - The original number of inference steps, which will be used to generate a linearly-spaced timestep - schedule (which is different from the standard `diffusers` implementation). We will then take - `num_inference_steps` timesteps from this schedule, evenly spaced in terms of indices, and use that as - our final timestep schedule. If not set, this will default to the `original_inference_steps` attribute. - """ - - if num_inference_steps > self.config.num_train_timesteps: - raise ValueError( - f"`num_inference_steps`: {num_inference_steps} cannot be larger than `self.config.train_timesteps`:" - f" {self.config.num_train_timesteps} as the unet model trained with this scheduler can only handle" - f" maximal {self.config.num_train_timesteps} timesteps." 
- ) - - self.num_inference_steps = num_inference_steps - original_steps = ( - original_inference_steps if original_inference_steps is not None else self.original_inference_steps - ) - - if original_steps > self.config.num_train_timesteps: - raise ValueError( - f"`original_steps`: {original_steps} cannot be larger than `self.config.train_timesteps`:" - f" {self.config.num_train_timesteps} as the unet model trained with this scheduler can only handle" - f" maximal {self.config.num_train_timesteps} timesteps." - ) - - if num_inference_steps > original_steps: - raise ValueError( - f"`num_inference_steps`: {num_inference_steps} cannot be larger than `original_inference_steps`:" - f" {original_steps} because the final timestep schedule will be a subset of the" - f" `original_inference_steps`-sized initial timestep schedule." - ) - - # LCM Timesteps Setting - # Currently, only linear spacing is supported. - c = self.config.num_train_timesteps // original_steps - # LCM Training Steps Schedule - lcm_origin_timesteps = np.asarray(list(range(1, original_steps + 1))) * c - 1 - skipping_step = len(lcm_origin_timesteps) // num_inference_steps - # LCM Inference Steps Schedule - timesteps = lcm_origin_timesteps[::-skipping_step][:num_inference_steps] - - self.timesteps = torch.from_numpy(timesteps.copy()).to(device=device, dtype=torch.long) - - self._step_index = None - - def get_scalings_for_boundary_condition_discrete(self, t): - self.sigma_data = 0.5 # Default: 0.5 - - # By dividing 0.1: This is almost a delta function at t=0. - c_skip = self.sigma_data**2 / ((t / 0.1) ** 2 + self.sigma_data**2) - c_out = (t / 0.1) / ((t / 0.1) ** 2 + self.sigma_data**2) ** 0.5 - return c_skip, c_out - - def step( - self, - model_output: torch.FloatTensor, - timestep: int, - sample: torch.FloatTensor, - generator: Optional[torch.Generator] = None, - return_dict: bool = True, - ) -> Union[LCMSchedulerOutput, Tuple]: - """ - Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion - process from the learned model outputs (most often the predicted noise). - - Args: - model_output (`torch.FloatTensor`): - The direct output from learned diffusion model. - timestep (`float`): - The current discrete timestep in the diffusion chain. - sample (`torch.FloatTensor`): - A current instance of a sample created by the diffusion process. - generator (`torch.Generator`, *optional*): - A random number generator. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~schedulers.scheduling_lcm.LCMSchedulerOutput`] or `tuple`. - Returns: - [`~schedulers.scheduling_utils.LCMSchedulerOutput`] or `tuple`: - If return_dict is `True`, [`~schedulers.scheduling_lcm.LCMSchedulerOutput`] is returned, otherwise a - tuple is returned where the first element is the sample tensor. - """ - if self.num_inference_steps is None: - raise ValueError( - "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler" - ) - - if self.step_index is None: - self._init_step_index(timestep) - - # 1. get previous step value - prev_step_index = self.step_index + 1 - if prev_step_index < len(self.timesteps): - prev_timestep = self.timesteps[prev_step_index] - else: - prev_timestep = timestep - - # 2. 
compute alphas, betas - alpha_prod_t = self.alphas_cumprod[timestep] - alpha_prod_t_prev = self.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.final_alpha_cumprod - - beta_prod_t = 1 - alpha_prod_t - beta_prod_t_prev = 1 - alpha_prod_t_prev - - # 3. Get scalings for boundary conditions - c_skip, c_out = self.get_scalings_for_boundary_condition_discrete(timestep) - - # 4. Compute the predicted original sample x_0 based on the model parameterization - if self.config.prediction_type == "epsilon": # noise-prediction - predicted_original_sample = (sample - beta_prod_t.sqrt() * model_output) / alpha_prod_t.sqrt() - elif self.config.prediction_type == "sample": # x-prediction - predicted_original_sample = model_output - elif self.config.prediction_type == "v_prediction": # v-prediction - predicted_original_sample = alpha_prod_t.sqrt() * sample - beta_prod_t.sqrt() * model_output - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample` or" - " `v_prediction` for `LCMScheduler`." - ) - - # 5. Clip or threshold "predicted x_0" - if self.config.thresholding: - predicted_original_sample = self._threshold_sample(predicted_original_sample) - elif self.config.clip_sample: - predicted_original_sample = predicted_original_sample.clamp( - -self.config.clip_sample_range, self.config.clip_sample_range - ) - - # 6. Denoise model output using boundary conditions - denoised = c_out * predicted_original_sample + c_skip * sample - - # 7. Sample and inject noise z ~ N(0, I) for MultiStep Inference - # Noise is not used for one-step sampling. - if len(self.timesteps) > 1: - noise = randn_tensor(model_output.shape, generator=generator, device=model_output.device) - prev_sample = alpha_prod_t_prev.sqrt() * denoised + beta_prod_t_prev.sqrt() * noise - else: - prev_sample = denoised - - # upon completion increase step index by one - self._step_index += 1 - - if not return_dict: - return (prev_sample, denoised) - - return LCMSchedulerOutput(prev_sample=prev_sample, denoised=denoised) - - # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.add_noise - def add_noise( - self, - original_samples: torch.FloatTensor, - noise: torch.FloatTensor, - timesteps: torch.IntTensor, - ) -> torch.FloatTensor: - # Make sure alphas_cumprod and timestep have same device and dtype as original_samples - alphas_cumprod = self.alphas_cumprod.to(device=original_samples.device, dtype=original_samples.dtype) - timesteps = timesteps.to(original_samples.device) - - sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5 - sqrt_alpha_prod = sqrt_alpha_prod.flatten() - while len(sqrt_alpha_prod.shape) < len(original_samples.shape): - sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1) - - sqrt_one_minus_alpha_prod = (1 - alphas_cumprod[timesteps]) ** 0.5 - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten() - while len(sqrt_one_minus_alpha_prod.shape) < len(original_samples.shape): - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1) - - noisy_samples = sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise - return noisy_samples - - # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.get_velocity - def get_velocity( - self, sample: torch.FloatTensor, noise: torch.FloatTensor, timesteps: torch.IntTensor - ) -> torch.FloatTensor: - # Make sure alphas_cumprod and timestep have same device and dtype as sample - alphas_cumprod = self.alphas_cumprod.to(device=sample.device, dtype=sample.dtype) - timesteps = 
timesteps.to(sample.device) - - sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5 - sqrt_alpha_prod = sqrt_alpha_prod.flatten() - while len(sqrt_alpha_prod.shape) < len(sample.shape): - sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1) - - sqrt_one_minus_alpha_prod = (1 - alphas_cumprod[timesteps]) ** 0.5 - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten() - while len(sqrt_one_minus_alpha_prod.shape) < len(sample.shape): - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1) - - velocity = sqrt_alpha_prod * noise - sqrt_one_minus_alpha_prod * sample - return velocity - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/devendergarg14/Paraphrasing_with_GPT_Neo/app.py b/spaces/devendergarg14/Paraphrasing_with_GPT_Neo/app.py deleted file mode 100644 index 1c0370aeb501a7d4b56d832f692a8391273ea6c0..0000000000000000000000000000000000000000 --- a/spaces/devendergarg14/Paraphrasing_with_GPT_Neo/app.py +++ /dev/null @@ -1,31 +0,0 @@ -import gradio as gr -import requests -import json -import os -API_URL = "https://api-inference.huggingface.co/models/EleutherAI/gpt-neo-2.7B" -apikey=os.environ.get('api_key') -headers = {"Authorization": f"Bearer {apikey}"} -def query(input_sentence,num,start): - paraphrase_final=[] - for i in range(0,num): - intial="""These are the few examples of converting original sentences into paraphrased sentences.\n original: The gray clouds were a warning of an approaching storm.\n paraphrase: The coming storm was foretold by the dark clouds.\n original: Giraffes like Acacia leaves and hay, and they can consume 75 pounds of food a day.\n paraphrase: A giraffe can eat up to 75 pounds of Acacia leaves and hay daily.\n """ - full_input=intial+"original:"+input_sentence + "\n paraphrase:"+start - data = json.dumps({"inputs":full_input,"parameters":{"max_length":len(full_input.split())+70,"min_length":len(full_input.split())+70},"temperature":0.650+0.05*i}) - response = requests.request("POST", API_URL, headers=headers, data=data) - output=json.loads(response.content.decode("utf-8"))[0]['generated_text'] - paraphrase=output.split('paraphrase:',3)[-1] - paraphrase_text=paraphrase.split('original:',1)[0] - paraphrase_final.append( paraphrase_text.split('.',1)[0]+".") - return '\n\n'.join([i for i in paraphrase_final[0:]]) -title = "Paraphrasing with GPT-NEO" -description = "Gradio Demo for Paraphrasing with GPT-NEO. Simply add one line sentence in the Input. It is possible to control the start of output paraphrased sentences using optional Starting Point Input. 
If outputs are not satisfactory try to increase number of outputs" -article = "" -examples=[['The sky, at sunset, looked like a carnivorous flower.',4,'The coloured reddish'],['Inside us there is something that has no name, that something is what we are.',4,'']] -gr.Interface(fn=query, inputs=[gr.inputs.Textbox(lines=4, label="Input Text (Single Sentence)"), -gr.inputs.Slider( minimum=1, maximum=10, step=1, default=4, label="Numbers of Outputs"), -gr.inputs.Textbox(lines=1, label="Starting Point (optional)")], -outputs=["text"], -title=title,description=description, -article= article, -examples=examples, -allow_flagging='never').launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/dfurman/chat-all-in/README.md b/spaces/dfurman/chat-all-in/README.md deleted file mode 100644 index cba738f108e5d6564d32e6b379bd5717bf891ce7..0000000000000000000000000000000000000000 --- a/spaces/dfurman/chat-all-in/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Chat-All-In -emoji: 👔 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at diff --git a/spaces/diacanFperku/AutoGPT/AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key BEST Keygen.md b/spaces/diacanFperku/AutoGPT/AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key BEST Keygen.md deleted file mode 100644 index e598b27ad51dcaeeaef3ebcec8d7a0cbf9022e36..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key BEST Keygen.md +++ /dev/null @@ -1,105 +0,0 @@ -
    -

    How to Download and Use AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen

    - -

    If you are looking for a powerful and easy-to-use program to create stunning slideshows from your photos, videos, and texts, you might want to try AquaSoft SlideShow 10 Ultimate. This software lets you add thousands of effects, animations, and transitions to your slideshows, as well as configure smart templates and export them in 4K-UHD quality. You can also use the SlideShow-Master feature to create a new task from predefined templates and your photos with just a few clicks.
    

    -

    AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen


    Download https://byltly.com/2uKwvA
    



    - -

    However, if you don't have the license key or you want to use the software on a different PC, you might need a crack to bypass the activation and run the software without any limitations or problems. In this article, we will show you how to download and use AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen, which is one of the most reliable and working cracks available online.

    - -

    What is AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen?

    - -

    AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen is a file that replaces the original software executable (SlideShow.exe) with a modified one that removes the need for a valid license key or an online activation. This way, you can use the software without having to enter a license key every time you launch it.

    - -

    This crack was created by Steve Phillips, a well-known hacker who specializes in cracking PC software. It is compatible with the multi-language version of the software released in February 2017. It also fixes some bugs and errors that might occur in the original software.

    - -

    How to Download AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen?

    - -

    There are many websites that offer AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen for download, but not all of them are safe or trustworthy. Some of them might contain viruses, malware, or fake files that can harm your PC or steal your personal information. Therefore, you should be careful when choosing where to download the crack from.

    - -

    One of the best places to download AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen is OvaGames.net, a website that provides free and full version PC software with cracks. You can find the link to download the crack in their post about AquaSoft SlideShow 10 Ultimate-SKIDROW. The file size is about 5.6 GB and it is split into 6 parts of 990 MB each. You can use Mega.nz, GDrive, Direct FTP Link, Uptobox, or Upfile.Mobi to download the parts.

    - -

    Alternatively, you can also use torrent sites like The Pirate Bay or Kickass Torrents to download AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen. However, you will need a torrent client like uTorrent or BitTorrent to do so. You should also use a VPN service to protect your privacy and avoid any legal issues.

    -

    - -

    How to Install and Use AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen?

    - -

    After downloading AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen, you will need to extract it using a tool like WinRAR or 7-Zip. You will get an ISO file that contains all the software files and the crack. You will need to mount this ISO file using a tool like Daemon Tools or PowerISO.
    

    - -

    Then, you will need to install the software by following these steps:

    - -
      -
    1. Run setup.exe from the mounted ISO file.
    2. -
    3. Select your preferred language and destination folder.
    4. -
    5. Wait for the installation to complete.
    6. -
    7. Copy all files from SKIDROW folder (located inside ISO file) to your installation folder (where SlideShow.exe is located).
    8. -
    9. Replace existing files when prompted.
    10. -
    - -

    Congratulations! You have successfully installed AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen. Now you can use the software by running SlideShow.exe from your installation folder.

    -

    What are the Benefits of Using AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen?

    - -

    Using AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen has many benefits for users who want to create stunning slideshows from their photos, videos, and texts. Here are some of them:

    - -
      -
    • You can save money by not buying the license key or paying for a subscription service.
    • -
    • You can use the software on any PC that meets the minimum system requirements, regardless of the region or language.
    • -
    • You can use the software offline without needing an internet connection or an online activation.
    • -
    • You can enjoy the software without any interruptions, errors, or glitches that might occur in the original software.
    • -
    • You can access all the features, modes, and content of the software without any restrictions or limitations.
    • -
    - -

    Using AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen is a great way to create stunning slideshows from your photos, videos, and texts in a fast and easy way. You will be able to add thousands of effects, animations, and transitions to your slideshows, as well as configure smart templates and export them in 4K-UHD quality. You will also be able to use the SlideShow-Master feature to create a new task from predefined templates and your photos with just a few clicks.

    - -

    Is it Safe and Legal to Use AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen?

    - -

    Using AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen is not entirely safe or legal. There are some risks and consequences that you should be aware of before downloading and using the crack. Here are some of them:

    - -
      -
    • You might download a fake or corrupted file that can damage your PC or infect it with viruses or malware.
    • -
    • You might violate the copyright laws or terms of service of AquaSoft or other parties involved in the production and distribution of the software.
    • -
    • You might face legal actions or penalties from AquaSoft or other parties involved in the production and distribution of the software.
    • -
    • You might lose your access to online features, updates, or support from AquaSoft or other parties involved in the production and distribution of the software.
    • -
    • You might have a poor performance or quality due to bugs, crashes, or compatibility issues that are not fixed by the crack.
    • -
    - -

    Using AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen is not recommended for users who want to use the software safely and legally. You should always buy the license key or use a legitimate service to use AquaSoft SlideShow 10 Ultimate. This way, you will support the developers and publishers who worked hard to create this amazing software. You will also enjoy a better performance and quality with more features, updates, and support.

    -

    What are the Features of AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen?

    - -

    AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen is not just a simple file that lets you use the software without a license key or an online activation. It also has some features that enhance your software experience and make it more enjoyable. Here are some of them:

    - -
      -
    • You can create slideshows from your photos, videos, and texts with thousands of effects, animations, and transitions. You can also add music, narration, captions, and logos to your slideshows.
    • -
    • You can configure smart templates that automatically adjust to your content and style preferences. You can also use the SlideShow-Master feature to create a new task from predefined templates and your photos with just a few clicks.
    • -
    • You can export your slideshows in 4K-UHD quality and various formats, such as MP4, AVI, MKV, MOV, WMV, and more. You can also burn your slideshows to DVD or Blu-ray discs or upload them to YouTube or Facebook.
    • -
    • You can edit your slideshows with advanced tools, such as timeline, storyboard, layout designer, image editor, video editor, and audio editor. You can also use keyframes, masks, chroma keying, and motion paths to create stunning effects.
    • -
    • You can preview your slideshows in real-time and adjust them according to your needs. You can also use the live output feature to display your slideshows on a second monitor or a projector.
    • -
    - -

    AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen is a file that adds more fun and excitement to your software. You will be able to create stunning slideshows from your photos, videos, and texts in a fast and easy way. You will also be able to appreciate the software's design and production more.

    - -

    What are the Reviews of AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen?

    - -

    AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen has received mostly positive reviews from users who have used it. Most of them praised it as a powerful and easy-to-use program for creating stunning slideshows from their photos, videos, and texts. They also liked the software's features, performance, quality, and compatibility. They said that using AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen was a satisfying and enjoyable experience.
    

    - -

    However, some of them criticized it for being too expensive, complex, or unstable. They also disliked the software's bugs, glitches, errors, or limitations. They said that using AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen was a disappointing and frustrating experience.

    - -

    Here are some examples of reviews from users who have used AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen:

    - -
    -

    "This software is amazing! It allows me to create stunning slideshows from my photos, videos, and texts with ease. It has thousands of effects, animations, and transitions to choose from. It also has smart templates and SlideShow-Master feature that make my work easier and faster. It exports my slideshows in 4K-UHD quality and various formats that I can share with my friends and family. I love using this software!"

    -
    - -
    -

    "This software is terrible! It is too expensive for what it offers. It is also too complex and hard to use for beginners like me. It has many bugs, glitches, errors, and limitations that ruin my slideshows. It exports my slideshows in low quality and formats that I can't play on my devices or platforms. I hate using this software!"

    -
    - -

    AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen has received mostly positive reviews from users who have used it: most of them enjoyed it, while some of them hated it. Whether you will like or dislike using AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen depends on your personal taste and expectations.
    

    -

    Conclusion

    - -

    AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen is a file that allows you to use AquaSoft SlideShow 10 Ultimate without needing a valid license key or an online activation. It is one of the most reliable and working cracks available online. However, it is not safe or legal to use AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen. You should always buy the license key or use a legitimate service to use AquaSoft SlideShow 10 Ultimate. This way, you will support the developers and publishers who worked hard to create this amazing software. You will also enjoy a better performance and quality with more features, updates, and support.

    - -

    In this article, we have shown you how to download, install, and use AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen. We have also discussed the benefits, features, reviews, risks, and consequences of using AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen. We hope that this article has been helpful and informative for you. If you have any questions or comments, please feel free to leave them below.

    - -

    Thank you for reading this article. We hope that you have learned something new and useful about AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen. We also hope that you have enjoyed using AquaSoft SlideShow 10 Ultimate 10.4.02 (x86x64) Incl Crack Serial Key Keygen. Have a great day!

    
    -
    -
    \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Dosprn.1.78.REPACK Full.Version.109.md b/spaces/diacanFperku/AutoGPT/Dosprn.1.78.REPACK Full.Version.109.md deleted file mode 100644 index 56f2a021969da03f8ad1ade66cfb979a4c64c293..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Dosprn.1.78.REPACK Full.Version.109.md +++ /dev/null @@ -1,14 +0,0 @@ -

    Dosprn.1.78.FULL.Version.109


    Download ››››› https://gohhs.com/2uFT1k



    -
    
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/Ex4 To Mq4 Decompiler 229 145.md b/spaces/diacanFperku/AutoGPT/Ex4 To Mq4 Decompiler 229 145.md deleted file mode 100644 index b5609fda2c30a853d8938c71e64cb0076efbd372..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Ex4 To Mq4 Decompiler 229 145.md +++ /dev/null @@ -1,16 +0,0 @@ -
    -

    How to Convert Ex4 Files to Mq4 Files Using a Decompiler

    -

    Ex4 files are compiled programs for MetaTrader 4, a popular platform for forex trading and algorithmic trading. Mq4 files are the source code files that can be edited and modified using the MetaEditor tool. If you want to access the source code of an ex4 file, you might need to use a decompiler that can convert ex4 to mq4.
    

    -

    Ex4 To Mq4 Decompiler 229 145


    Download ✒ ✒ ✒ https://gohhs.com/2uFSSJ



    -

    However, decompiling ex4 files is neither simple nor necessarily legal. According to some sources[^1] [^2], decompilers were available in the past, but they could only work with older versions of ex4 files. Nowadays, most ex4 files are protected and encrypted, making them practically impossible to decompile. Moreover, decompiling ex4 files might violate the intellectual property rights of the original developers, who distribute their programs without source code for a reason.
    

    -

    Therefore, before attempting to decompile an ex4 file, you should first contact its developer and ask for permission or access to the source code. This is the most ethical and respectful way to get the mq4 file you want. If the developer agrees, you can use their mq4 file for educational or personal purposes only. You should not modify, distribute, or sell their code without their consent.

    -

    If the developer does not agree or does not respond, you should respect their decision and look for other alternatives. You can try to find similar programs or indicators that have open source code, or you can learn how to code your own using the MetaEditor tool and the MQL4 language. You can also hire a professional programmer to create a custom program or indicator for you.

    -

    If you still insist on decompiling an ex4 file, you should be aware of the risks and challenges involved. You will need to find reliable, up-to-date decompiler software that can handle the latest versions of ex4 files[^3] [^4]. You will also need some knowledge of cryptography and binary decompilation[^1], as well as of MQL4 syntax and logic. Even if you manage to decompile an ex4 file, you will likely get obfuscated, unreadable code that is hard to understand and modify[^1]. You will also be liable for any legal consequences that might arise from your actions.
    

    -

    -

    In conclusion, converting ex4 files to mq4 files with a decompiler is neither easy nor recommended. It is better to respect the developers' rights and wishes and to look for other ways to achieve your goals. If you want to learn more about MetaTrader 4, MQL4, and forex trading, you can visit the official MetaTrader website or browse online forums and tutorials.
    

    - -

    MetaTrader 4 is one of the most popular and widely used platforms for forex trading and algorithmic trading. It allows traders to access the global financial markets, analyze price movements, execute orders, and create automated trading strategies using expert advisors (EAs) and custom indicators. MetaTrader 4 also provides a built-in programming environment called MetaEditor, where users can write and edit their own code using the MQL4 language.

    -

    MQL4 stands for MetaQuotes Language 4, and it is a high-level object-oriented programming language that is based on C++. It is designed specifically for developing trading applications for MetaTrader 4. MQL4 allows users to create EAs, indicators, scripts, libraries, and other programs that can interact with the MetaTrader 4 terminal and perform various trading tasks. MQL4 also supports graphical objects, mathematical functions, technical analysis tools, network functions, and more.
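    To make the language less abstract, here is a minimal, hypothetical MQL4 script written purely as an illustration for this article; it is not taken from, or related to, any existing or decompiled program. The built-in names it uses (OnStart, Symbol, Bid, Ask, Digits, DoubleToString, Print) are standard MQL4, while everything else is made up for the example.

    ```mql4
    //+------------------------------------------------------------------+
    //| Illustrative MQL4 script (hypothetical example)                   |
    //+------------------------------------------------------------------+
    #property strict

    // OnStart() is the entry point of an MQL4 script; it runs once when
    // the script is attached to a chart in the MetaTrader 4 terminal.
    void OnStart()
      {
       // Print the chart symbol and its current bid/ask prices to the log.
       Print("Symbol: ", Symbol(),
             "  Bid: ",  DoubleToString(Bid, Digits),
             "  Ask: ",  DoubleToString(Ask, Digits));
      }
    ```

    Saving this as an .mq4 file and compiling it in MetaEditor produces the corresponding .ex4 binary that the terminal actually runs, which is the compilation step the next paragraph describes.
    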

    -

    Ex4 files are the result of compiling MQL4 source code files (mq4 files) using the MetaEditor tool. Compiling is the process of transforming human-readable code into machine-readable code that can be executed by the MetaTrader 4 terminal. Ex4 files are faster and more efficient than mq4 files, but they cannot be modified or edited by the user. Ex4 files are usually distributed by developers who want to protect their intellectual property and prevent unauthorized copying or modification of their code.

    
    -
    -
    \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Political Law Reviewer Ateneo Pdf Extra Quality Download.md b/spaces/diacanFperku/AutoGPT/Political Law Reviewer Ateneo Pdf Extra Quality Download.md deleted file mode 100644 index fb51a5206ae84ac4fb7061e3ce5a83b42be1fd66..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Political Law Reviewer Ateneo Pdf Extra Quality Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

    political law reviewer ateneo pdf download


    DOWNLOAD ->>> https://gohhs.com/2uFUyX



    - -
    
    -
    -
    -

    diff --git a/spaces/dietician/rewriteData/main.py b/spaces/dietician/rewriteData/main.py deleted file mode 100644 index 3b8cd13a69b42f9376722e211d5240a74d244ac0..0000000000000000000000000000000000000000 --- a/spaces/dietician/rewriteData/main.py +++ /dev/null @@ -1,45 +0,0 @@ -import json -import gradio as gr -import requests - -def rewriteTheData(data): - - api_key = "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"; - - api_key = "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"; - - # json_input = json.loads(input); - URL = "https://api.openai.com/v1/chat/completions" - - payload = { - "model": "gpt-3.5-turbo", - "messages": [{"role": "user", "content": data}, - {"role": "system", "content": "input is in the json formate, just rewrite the text and keep the meaning same. respond with json sormate data. don't add additional information like this is generated by AI"}], - "temperature" : 0.0, - "top_p":1.0, - "n" : 1, - "stream": False, - "presence_penalty":0, - "frequency_penalty":0, - } - - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {api_key}" - } - - response = requests.post(URL, headers=headers, json=payload, stream=False) - - json_data = json.loads(response.text) - - print(json_data) - - return json_data["choices"][0]['message']["content"] - - -def greet(data): - return rewriteTheData(data) - -demo = gr.Interface(fn=greet, inputs="text", outputs="text") - -demo.launch() \ No newline at end of file diff --git a/spaces/diffusers/latent-upscaler-tool/image_upscaling.py b/spaces/diffusers/latent-upscaler-tool/image_upscaling.py deleted file mode 100644 index 0db6bc7deee851a7a178f6196d39f47acecb40b2..0000000000000000000000000000000000000000 --- a/spaces/diffusers/latent-upscaler-tool/image_upscaling.py +++ /dev/null @@ -1,56 +0,0 @@ - -import numpy as np -import torch - -from transformers.tools.base import Tool, get_default_device -from transformers.utils import is_accelerate_available - -from diffusers import DiffusionPipeline - - -IMAGE_UPSCALING_DESCRIPTION = ( - "This is a tool that upscales an image. It takes one input: `image`, which should be " - "the image to upscale. It returns the upscaled image." 
-) - - -class ImageUpscalingTool(Tool): - default_stable_diffusion_checkpoint = "stabilityai/sd-x2-latent-upscaler" - description = IMAGE_UPSCALING_DESCRIPTION - name = "image_upscaler" - inputs = ['image'] - outputs = ['image'] - - def __init__(self, device=None, controlnet=None, stable_diffusion=None, **hub_kwargs) -> None: - if not is_accelerate_available(): - raise ImportError("Accelerate should be installed in order to use tools.") - - super().__init__() - - self.stable_diffusion = self.default_stable_diffusion_checkpoint - - self.device = device - self.hub_kwargs = hub_kwargs - - def setup(self): - if self.device is None: - self.device = get_default_device() - - self.pipeline = DiffusionPipeline.from_pretrained(self.stable_diffusion) - - self.pipeline.to(self.device) - if self.device.type == "cuda": - self.pipeline.to(torch_dtype=torch.float16) - - self.is_initialized = True - - def __call__(self, image): - if not self.is_initialized: - self.setup() - - return self.pipeline( - image=image, - prompt="", - num_inference_steps=30, - guidance_scale=0, - ).images[0] diff --git a/spaces/digitalxingtong/Bufeiyan-b-Bert-VITS2/text/japanese.py b/spaces/digitalxingtong/Bufeiyan-b-Bert-VITS2/text/japanese.py deleted file mode 100644 index ddedafa0c5b7986068dc6c91637a86febc3923a9..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Bufeiyan-b-Bert-VITS2/text/japanese.py +++ /dev/null @@ -1,104 +0,0 @@ -# modified from https://github.com/CjangCjengh/vits/blob/main/text/japanese.py -import re -import sys - -import pyopenjtalk - -from text import symbols - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (symbol, Japanese) pairs for marks: -_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('%', 'パーセント') -]] - - -# List of (consonant, sokuon) pairs: -_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'Q([↑↓]*[kg])', r'k#\1'), - (r'Q([↑↓]*[tdjʧ])', r't#\1'), - (r'Q([↑↓]*[sʃ])', r's\1'), - (r'Q([↑↓]*[pb])', r'p#\1') -]] - -# List of (consonant, hatsuon) pairs: -_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'N([↑↓]*[pbm])', r'm\1'), - (r'N([↑↓]*[ʧʥj])', r'n^\1'), - (r'N([↑↓]*[tdn])', r'n\1'), - (r'N([↑↓]*[kg])', r'ŋ\1') -]] - - - -def post_replace_ph(ph): - rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - 'v': "V" - } - if ph in rep_map.keys(): - ph = rep_map[ph] - if ph in symbols: - return ph - if ph not in symbols: - ph = 'UNK' - return ph - -def symbols_to_japanese(text): - for regex, replacement in _symbols_to_japanese: - text = re.sub(regex, replacement, text) - return text - - -def preprocess_jap(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - text = symbols_to_japanese(text) - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = [] - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - p = pyopenjtalk.g2p(sentence) - text += p.split(" ") - - if i < len(marks): - text += [marks[i].replace(' ', '')] - return text - -def 
text_normalize(text): - # todo: jap text normalize - return text - -def g2p(norm_text): - phones = preprocess_jap(norm_text) - phones = [post_replace_ph(i) for i in phones] - # todo: implement tones and word2ph - tones = [0 for i in phones] - word2ph = [1 for i in phones] - return phones, tones, word2ph - - -if __name__ == '__main__': - for line in open("../../../Downloads/transcript_utf8.txt").readlines(): - text = line.split(":")[1] - phones, tones, word2ph = g2p(text) - for p in phones: - if p == "z": - print(text, phones) - sys.exit(0) diff --git a/spaces/digitalxingtong/Xingtong-Longread-Dongmuchang-Bert-VITS2/start.bat b/spaces/digitalxingtong/Xingtong-Longread-Dongmuchang-Bert-VITS2/start.bat deleted file mode 100644 index 418d21233dbf720b0dd09821904d9d6a31b123a2..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Xingtong-Longread-Dongmuchang-Bert-VITS2/start.bat +++ /dev/null @@ -1,2 +0,0 @@ -set PYTHON=venv\python.exe -start cmd /k "set PYTHON=%PYTHON%" \ No newline at end of file diff --git a/spaces/dineshreddy/WALT/mmdet/datasets/samplers/__init__.py b/spaces/dineshreddy/WALT/mmdet/datasets/samplers/__init__.py deleted file mode 100644 index 2596aeb2ccfc85b58624713c04453d34e94a4062..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/datasets/samplers/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .distributed_sampler import DistributedSampler -from .group_sampler import DistributedGroupSampler, GroupSampler - -__all__ = ['DistributedSampler', 'DistributedGroupSampler', 'GroupSampler'] diff --git a/spaces/dorkai/singpt-2.0/extensions/google_translate/script.py b/spaces/dorkai/singpt-2.0/extensions/google_translate/script.py deleted file mode 100644 index 68bc54b293086bed1a070a310d276060ee939d44..0000000000000000000000000000000000000000 --- a/spaces/dorkai/singpt-2.0/extensions/google_translate/script.py +++ /dev/null @@ -1,42 +0,0 @@ -import gradio as gr -from deep_translator import GoogleTranslator - -params = { - "language string": "ja", -} - -language_codes = {'Afrikaans': 'af', 'Albanian': 'sq', 'Amharic': 'am', 'Arabic': 'ar', 'Armenian': 'hy', 'Azerbaijani': 'az', 'Basque': 'eu', 'Belarusian': 'be', 'Bengali': 'bn', 'Bosnian': 'bs', 'Bulgarian': 'bg', 'Catalan': 'ca', 'Cebuano': 'ceb', 'Chinese (Simplified)': 'zh-CN', 'Chinese (Traditional)': 'zh-TW', 'Corsican': 'co', 'Croatian': 'hr', 'Czech': 'cs', 'Danish': 'da', 'Dutch': 'nl', 'English': 'en', 'Esperanto': 'eo', 'Estonian': 'et', 'Finnish': 'fi', 'French': 'fr', 'Frisian': 'fy', 'Galician': 'gl', 'Georgian': 'ka', 'German': 'de', 'Greek': 'el', 'Gujarati': 'gu', 'Haitian Creole': 'ht', 'Hausa': 'ha', 'Hawaiian': 'haw', 'Hebrew': 'iw', 'Hindi': 'hi', 'Hmong': 'hmn', 'Hungarian': 'hu', 'Icelandic': 'is', 'Igbo': 'ig', 'Indonesian': 'id', 'Irish': 'ga', 'Italian': 'it', 'Japanese': 'ja', 'Javanese': 'jw', 'Kannada': 'kn', 'Kazakh': 'kk', 'Khmer': 'km', 'Korean': 'ko', 'Kurdish': 'ku', 'Kyrgyz': 'ky', 'Lao': 'lo', 'Latin': 'la', 'Latvian': 'lv', 'Lithuanian': 'lt', 'Luxembourgish': 'lb', 'Macedonian': 'mk', 'Malagasy': 'mg', 'Malay': 'ms', 'Malayalam': 'ml', 'Maltese': 'mt', 'Maori': 'mi', 'Marathi': 'mr', 'Mongolian': 'mn', 'Myanmar (Burmese)': 'my', 'Nepali': 'ne', 'Norwegian': 'no', 'Nyanja (Chichewa)': 'ny', 'Pashto': 'ps', 'Persian': 'fa', 'Polish': 'pl', 'Portuguese (Portugal, Brazil)': 'pt', 'Punjabi': 'pa', 'Romanian': 'ro', 'Russian': 'ru', 'Samoan': 'sm', 'Scots Gaelic': 'gd', 'Serbian': 'sr', 'Sesotho': 'st', 'Shona': 'sn', 'Sindhi': 'sd', 'Sinhala (Sinhalese)': 
'si', 'Slovak': 'sk', 'Slovenian': 'sl', 'Somali': 'so', 'Spanish': 'es', 'Sundanese': 'su', 'Swahili': 'sw', 'Swedish': 'sv', 'Tagalog (Filipino)': 'tl', 'Tajik': 'tg', 'Tamil': 'ta', 'Telugu': 'te', 'Thai': 'th', 'Turkish': 'tr', 'Ukrainian': 'uk', 'Urdu': 'ur', 'Uzbek': 'uz', 'Vietnamese': 'vi', 'Welsh': 'cy', 'Xhosa': 'xh', 'Yiddish': 'yi', 'Yoruba': 'yo', 'Zulu': 'zu'} - -def input_modifier(string): - """ - This function is applied to your text inputs before - they are fed into the model. - """ - - return GoogleTranslator(source=params['language string'], target='en').translate(string) - -def output_modifier(string): - """ - This function is applied to the model outputs. - """ - - return GoogleTranslator(source='en', target=params['language string']).translate(string) - -def bot_prefix_modifier(string): - """ - This function is only applied in chat mode. It modifies - the prefix text for the Bot and can be used to bias its - behavior. - """ - - return string - -def ui(): - # Finding the language name from the language code to use as the default value - language_name = list(language_codes.keys())[list(language_codes.values()).index(params['language string'])] - - # Gradio elements - language = gr.Dropdown(value=language_name, choices=[k for k in language_codes], label='Language') - - # Event functions to update the parameters in the backend - language.change(lambda x: params.update({"language string": language_codes[x]}), language, None) diff --git a/spaces/dorkai/singpt-2.0/extensions/silero_tts/script.py b/spaces/dorkai/singpt-2.0/extensions/silero_tts/script.py deleted file mode 100644 index f611dc27b7480cd357b77c0c407fcc2bd6df2679..0000000000000000000000000000000000000000 --- a/spaces/dorkai/singpt-2.0/extensions/silero_tts/script.py +++ /dev/null @@ -1,169 +0,0 @@ -import time -from pathlib import Path - -import gradio as gr -import torch - -import modules.chat as chat -import modules.shared as shared - -torch._C._jit_set_profiling_mode(False) - -params = { - 'activate': True, - 'speaker': 'en_56', - 'language': 'en', - 'model_id': 'v3_en', - 'sample_rate': 48000, - 'device': 'cpu', - 'show_text': False, - 'autoplay': True, - 'voice_pitch': 'medium', - 'voice_speed': 'medium', -} - -current_params = params.copy() -voices_by_gender = ['en_99', 'en_45', 'en_18', 'en_117', 'en_49', 'en_51', 'en_68', 'en_0', 'en_26', 'en_56', 'en_74', 'en_5', 'en_38', 'en_53', 'en_21', 'en_37', 'en_107', 'en_10', 'en_82', 'en_16', 'en_41', 'en_12', 'en_67', 'en_61', 'en_14', 'en_11', 'en_39', 'en_52', 'en_24', 'en_97', 'en_28', 'en_72', 'en_94', 'en_36', 'en_4', 'en_43', 'en_88', 'en_25', 'en_65', 'en_6', 'en_44', 'en_75', 'en_91', 'en_60', 'en_109', 'en_85', 'en_101', 'en_108', 'en_50', 'en_96', 'en_64', 'en_92', 'en_76', 'en_33', 'en_116', 'en_48', 'en_98', 'en_86', 'en_62', 'en_54', 'en_95', 'en_55', 'en_111', 'en_3', 'en_83', 'en_8', 'en_47', 'en_59', 'en_1', 'en_2', 'en_7', 'en_9', 'en_13', 'en_15', 'en_17', 'en_19', 'en_20', 'en_22', 'en_23', 'en_27', 'en_29', 'en_30', 'en_31', 'en_32', 'en_34', 'en_35', 'en_40', 'en_42', 'en_46', 'en_57', 'en_58', 'en_63', 'en_66', 'en_69', 'en_70', 'en_71', 'en_73', 'en_77', 'en_78', 'en_79', 'en_80', 'en_81', 'en_84', 'en_87', 'en_89', 'en_90', 'en_93', 'en_100', 'en_102', 'en_103', 'en_104', 'en_105', 'en_106', 'en_110', 'en_112', 'en_113', 'en_114', 'en_115'] -voice_pitches = ['x-low', 'low', 'medium', 'high', 'x-high'] -voice_speeds = ['x-slow', 'slow', 'medium', 'fast', 'x-fast'] - -# Used for making text xml compatible, needed for voice pitch and speed 
control -table = str.maketrans({ - "<": "<", - ">": ">", - "&": "&", - "'": "'", - '"': """, -}) - -def xmlesc(txt): - return txt.translate(table) - -def load_model(): - model, example_text = torch.hub.load(repo_or_dir='snakers4/silero-models', model='silero_tts', language=params['language'], speaker=params['model_id']) - model.to(params['device']) - return model -model = load_model() - -def remove_surrounded_chars(string): - new_string = "" - in_star = False - for char in string: - if char == '*': - in_star = not in_star - elif not in_star: - new_string += char - return new_string - -def remove_tts_from_history(name1, name2): - for i, entry in enumerate(shared.history['internal']): - shared.history['visible'][i] = [shared.history['visible'][i][0], entry[1]] - return chat.generate_chat_output(shared.history['visible'], name1, name2, shared.character) - -def toggle_text_in_history(name1, name2): - for i, entry in enumerate(shared.history['visible']): - visible_reply = entry[1] - if visible_reply.startswith('')[0]}\n\n{reply}"] - else: - shared.history['visible'][i] = [shared.history['visible'][i][0], f"{visible_reply.split('')[0]}"] - return chat.generate_chat_output(shared.history['visible'], name1, name2, shared.character) - -def input_modifier(string): - """ - This function is applied to your text inputs before - they are fed into the model. - """ - - # Remove autoplay from the last reply - if (shared.args.chat or shared.args.cai_chat) and len(shared.history['internal']) > 0: - shared.history['visible'][-1] = [shared.history['visible'][-1][0], shared.history['visible'][-1][1].replace('controls autoplay>','controls>')] - - shared.processing_message = "*Is recording a voice message...*" - return string - -def output_modifier(string): - """ - This function is applied to the model outputs. - """ - - global model, current_params - - for i in params: - if params[i] != current_params[i]: - model = load_model() - current_params = params.copy() - break - - if params['activate'] == False: - return string - - original_string = string - string = remove_surrounded_chars(string) - string = string.replace('"', '') - string = string.replace('“', '') - string = string.replace('\n', ' ') - string = string.strip() - - if string == '': - string = '*Empty reply, try regenerating*' - else: - output_file = Path(f'extensions/silero_tts/outputs/{shared.character}_{int(time.time())}.wav') - prosody = ''.format(params['voice_speed'], params['voice_pitch']) - silero_input = f'{prosody}{xmlesc(string)}' - model.save_wav(ssml_text=silero_input, speaker=params['speaker'], sample_rate=int(params['sample_rate']), audio_path=str(output_file)) - - autoplay = 'autoplay' if params['autoplay'] else '' - string = f'' - if params['show_text']: - string += f'\n\n{original_string}' - - shared.processing_message = "*Is typing...*" - return string - -def bot_prefix_modifier(string): - """ - This function is only applied in chat mode. It modifies - the prefix text for the Bot and can be used to bias its - behavior. 
- """ - - return string - -def ui(): - # Gradio elements - with gr.Accordion("Silero TTS"): - with gr.Row(): - activate = gr.Checkbox(value=params['activate'], label='Activate TTS') - autoplay = gr.Checkbox(value=params['autoplay'], label='Play TTS automatically') - show_text = gr.Checkbox(value=params['show_text'], label='Show message text under audio player') - voice = gr.Dropdown(value=params['speaker'], choices=voices_by_gender, label='TTS voice') - with gr.Row(): - v_pitch = gr.Dropdown(value=params['voice_pitch'], choices=voice_pitches, label='Voice pitch') - v_speed = gr.Dropdown(value=params['voice_speed'], choices=voice_speeds, label='Voice speed') - with gr.Row(): - convert = gr.Button('Permanently replace audios with the message texts') - convert_cancel = gr.Button('Cancel', visible=False) - convert_confirm = gr.Button('Confirm (cannot be undone)', variant="stop", visible=False) - - # Convert history with confirmation - convert_arr = [convert_confirm, convert, convert_cancel] - convert.click(lambda :[gr.update(visible=True), gr.update(visible=False), gr.update(visible=True)], None, convert_arr) - convert_confirm.click(lambda :[gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, convert_arr) - convert_confirm.click(remove_tts_from_history, [shared.gradio['name1'], shared.gradio['name2']], shared.gradio['display']) - convert_confirm.click(lambda : chat.save_history(timestamp=False), [], [], show_progress=False) - convert_cancel.click(lambda :[gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, convert_arr) - - # Toggle message text in history - show_text.change(lambda x: params.update({"show_text": x}), show_text, None) - show_text.change(toggle_text_in_history, [shared.gradio['name1'], shared.gradio['name2']], shared.gradio['display']) - show_text.change(lambda : chat.save_history(timestamp=False), [], [], show_progress=False) - - # Event functions to update the parameters in the backend - activate.change(lambda x: params.update({"activate": x}), activate, None) - autoplay.change(lambda x: params.update({"autoplay": x}), autoplay, None) - voice.change(lambda x: params.update({"speaker": x}), voice, None) - v_pitch.change(lambda x: params.update({"voice_pitch": x}), v_pitch, None) - v_speed.change(lambda x: params.update({"voice_speed": x}), v_speed, None) diff --git a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/api-example-stream.py b/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/api-example-stream.py deleted file mode 100644 index 49058776927c7d85e49f5f717d8a77135fb2f8a1..0000000000000000000000000000000000000000 --- a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/api-example-stream.py +++ /dev/null @@ -1,66 +0,0 @@ -import asyncio -import json -import sys - -try: - import websockets -except ImportError: - print("Websockets package not found. Make sure it's installed.") - -# For local streaming, the websockets are hosted without ssl - ws:// -HOST = 'localhost:5005' -URI = f'ws://{HOST}/api/v1/stream' - -# For reverse-proxied streaming, the remote will likely host with ssl - wss:// -# URI = 'wss://your-uri-here.trycloudflare.com/api/v1/stream' - -async def run(context): - # Note: the selected defaults change from time to time. 
- request = { - 'prompt': context, - 'max_new_tokens': 250, - 'do_sample': True, - 'temperature': 1.3, - 'top_p': 0.1, - 'typical_p': 1, - 'repetition_penalty': 1.18, - 'top_k': 40, - 'min_length': 0, - 'no_repeat_ngram_size': 0, - 'num_beams': 1, - 'penalty_alpha': 0, - 'length_penalty': 1, - 'early_stopping': False, - 'seed': -1, - 'add_bos_token': True, - 'truncation_length': 2048, - 'ban_eos_token': False, - 'skip_special_tokens': True, - 'stopping_strings': [] - } - - async with websockets.connect(URI, ping_interval=None) as websocket: - await websocket.send(json.dumps(request)) - - yield context # Remove this if you just want to see the reply - - while True: - incoming_data = await websocket.recv() - incoming_data = json.loads(incoming_data) - - match incoming_data['event']: - case 'text_stream': - yield incoming_data['text'] - case 'stream_end': - return - - -async def print_response_stream(prompt): - async for response in run(prompt): - print(response, end='') - sys.stdout.flush() # If we don't flush, we won't see tokens in realtime. - - -if __name__ == '__main__': - prompt = "In order to make homemade bread, follow these steps:\n1)" - asyncio.run(print_response_stream(prompt)) diff --git a/spaces/editing-images/ledtisplusplus/share_btn.py b/spaces/editing-images/ledtisplusplus/share_btn.py deleted file mode 100644 index a9e04e13f10a7183e75bd606a77c378088a22b54..0000000000000000000000000000000000000000 --- a/spaces/editing-images/ledtisplusplus/share_btn.py +++ /dev/null @@ -1,110 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = r"""async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, - }); - const url = await response.text(); - return url; - } - - function getButtonText(componentId) { - const buttonEl = gradioEl.querySelector(`${componentId} button`); - return buttonEl ? buttonEl.textContent : ''; - } - - const gradioEl = document.querySelector('body > gradio-app'); - const imgEls = [gradioEl.querySelector('#input_image img'), gradioEl.querySelector('#output_image img')]; - const concepts = [ - { value: getButtonText('#box1'), parent: gradioEl.querySelector('#box1 span[data-testid="block-info"]') }, - { value: getButtonText('#box2'), parent: gradioEl.querySelector('#box2 span[data-testid="block-info"]') }, - { value: getButtonText('#box3'), parent: gradioEl.querySelector('#box3 span[data-testid="block-info"]') } - ]; - - const promptTxt = gradioEl.querySelector('#target_prompt input').value; - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - if(!imgEls[1]){ - return; - }; - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - - async function processImage(imgEl, imgId) { - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const fileType = blob.type.includes('png') ? 
'png' : 'jpg'; - const fileName = `diffuse-the-rest-${imgId}.${fileType}`; - return new File([blob], fileName, { type: blob.type }); - } - - const files = await Promise.all(imgEls.map((imgEl, index) => processImage(imgEl, Date.now() + index % 200))); - const urls = await Promise.all(files.map((file) => uploadFile(file))); - - const labels = ['Source image', 'Target image']; - const htmlImgs = urls.map((url, index) => `
    ${labels[index]}:
    `); - - let descriptionMd = `
    ${htmlImgs.join(`\n`)}
    `; - - if (promptTxt) { - descriptionMd += `Target image prompt: ${promptTxt}
    `; - } else { - descriptionMd += `Target image prompt: ""
    `; - } - - const conceptHeaders = []; - const conceptDescriptions = []; - const conceptTableRows = []; - concepts.forEach((concept, index) => { - if (concept.value) { - const label = concept.parent.textContent.includes('Negative') ? `remove concept` : `add concept`; - conceptHeaders.push(`${label}`); - conceptDescriptions.push(`${label}: ${concept.value}`); - conceptTableRows.push(`${concept.value}`); - } - }); - - let title = 'Editing'; - if (promptTxt) { - title += ` "${promptTxt}"`; - } - if (conceptDescriptions.length > 0) { - title += ` to ${conceptDescriptions.join(', ')}`; - descriptionMd += ` - - ${conceptHeaders.join('\n')} - - - ${conceptTableRows.join('\n')} - -
    `; - } - - const params = new URLSearchParams({ - title: title, - description: descriptionMd, - preview: true, - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/editing-images/ledits/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/emc348/faces-through-time/models/StyleCLIP/global_directions/PlayInteractively.py b/spaces/emc348/faces-through-time/models/StyleCLIP/global_directions/PlayInteractively.py deleted file mode 100644 index 547b08ab2c4373e23711636488145df148d7eb4e..0000000000000000000000000000000000000000 --- a/spaces/emc348/faces-through-time/models/StyleCLIP/global_directions/PlayInteractively.py +++ /dev/null @@ -1,197 +0,0 @@ - - - -from tkinter import Tk -from PIL import Image, ImageTk -from tkinter.filedialog import askopenfilename -from GUI import View -from Inference import StyleCLIP -import argparse -#%% - - -class PlayInteractively(): #Controller - ''' - followed Model View Controller Design Pattern - - controller, model, view - ''' - def __init__(self,dataset_name='ffhq'): - - self.root = Tk() - self.view=View(self.root) - self.img_ratio=2 - self.style_clip=StyleCLIP(dataset_name) - - self.view.neutral.bind("", self.text_n) - self.view.target.bind("", self.text_t) - self.view.alpha.bind('', self.ChangeAlpha) - self.view.beta.bind('', self.ChangeBeta) - self.view.set_init.bind('', self.SetInit) - self.view.reset.bind('', self.Reset) - self.view.bg.bind('', self.open_img) - - - self.drawn = None - - self.view.target.delete(1.0, "end") - self.view.target.insert("end", self.style_clip.target) -# - self.view.neutral.delete(1.0, "end") - self.view.neutral.insert("end", self.style_clip.neutral) - - - def Reset(self,event): - self.style_clip.GetDt2() - self.style_clip.M.alpha=[0] - - self.view.beta.set(self.style_clip.beta) - self.view.alpha.set(0) - - img=self.style_clip.GetImg() - img=Image.fromarray(img) - img = ImageTk.PhotoImage(img) - self.addImage_m(img) - - - def SetInit(self,event): - codes=self.style_clip.GetCode() - self.style_clip.M.dlatent_tmp=[tmp[:,0] for tmp in codes] - print('set init') - - def ChangeAlpha(self,event): - tmp=self.view.alpha.get() - self.style_clip.M.alpha=[float(tmp)] - - img=self.style_clip.GetImg() - print('manipulate one') - img=Image.fromarray(img) - img = ImageTk.PhotoImage(img) - self.addImage_m(img) - - def ChangeBeta(self,event): - tmp=self.view.beta.get() - self.style_clip.beta=float(tmp) - - img=self.style_clip.GetImg() - print('manipulate one') - img=Image.fromarray(img) - img = ImageTk.PhotoImage(img) - self.addImage_m(img) - - def ChangeDataset(self,event): - - dataset_name=self.view.set_category.get() - - self.style_clip.LoadData(dataset_name) - - self.view.target.delete(1.0, "end") - self.view.target.insert("end", self.style_clip.target) - - self.view.neutral.delete(1.0, "end") - self.view.neutral.insert("end", self.style_clip.neutral) - - def text_t(self,event): - tmp=self.view.target.get("1.0",'end') - tmp=tmp.replace('\n','') - - self.view.target.delete(1.0, "end") - self.view.target.insert("end", tmp) - - print('target',tmp,'###') - self.style_clip.target=tmp - self.style_clip.GetDt2() - self.view.beta.set(self.style_clip.beta) - self.view.alpha.set(3) - self.style_clip.M.alpha=[3] - - img=self.style_clip.GetImg() - print('manipulate one') - img=Image.fromarray(img) - img = ImageTk.PhotoImage(img) - 
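# Tkinter only keeps a PhotoImage visible while a Python reference to it exists;
# addImage_m below stores the image on self for exactly that reason.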
self.addImage_m(img) - - - def text_n(self,event): - tmp=self.view.neutral.get("1.0",'end') - tmp=tmp.replace('\n','') - - self.view.neutral.delete(1.0, "end") - self.view.neutral.insert("end", tmp) - - print('neutral',tmp,'###') - self.style_clip.neutral=tmp - self.view.target.delete(1.0, "end") - self.view.target.insert("end", tmp) - - - def run(self): - self.root.mainloop() - - def addImage(self,img): - self.view.bg.create_image(self.view.width/2, self.view.height/2, image=img, anchor='center') - self.image=img #save a copy of image. if not the image will disappear - - def addImage_m(self,img): - self.view.mani.create_image(512, 512, image=img, anchor='center') - self.image2=img - - - def openfn(self): - filename = askopenfilename(title='open',initialdir='./data/'+self.style_clip.M.dataset_name+'/',filetypes=[("all image format", ".jpg"),("all image format", ".png")]) - return filename - - def open_img(self,event): - x = self.openfn() - print(x) - - - img = Image.open(x) - img2 = img.resize(( 512,512), Image.ANTIALIAS) - img2 = ImageTk.PhotoImage(img2) - self.addImage(img2) - - img = ImageTk.PhotoImage(img) - self.addImage_m(img) - - img_index=x.split('/')[-1].split('.')[0] - img_index=int(img_index) - print(img_index) - self.style_clip.M.img_index=img_index - self.style_clip.M.dlatent_tmp=[tmp[img_index:(img_index+1)] for tmp in self.style_clip.M.dlatents] - - - self.style_clip.GetDt2() - self.view.beta.set(self.style_clip.beta) - self.view.alpha.set(3) - - #%% -if __name__ == "__main__": - parser = argparse.ArgumentParser(description='Process some integers.') - - parser.add_argument('--dataset_name',type=str,default='ffhq', - help='name of dataset, for example, ffhq') - - args = parser.parse_args() - dataset_name=args.dataset_name - - self=PlayInteractively(dataset_name) - self.run() - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/eson/tokenizer-arena/vocab/gpt_neox_chinese_v2/test_error.py b/spaces/eson/tokenizer-arena/vocab/gpt_neox_chinese_v2/test_error.py deleted file mode 100644 index 9517ad0f74a898c8dafbe540d612b62278e0725d..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/gpt_neox_chinese_v2/test_error.py +++ /dev/null @@ -1,18 +0,0 @@ - -import json -error_tokens = [54611, 54612, 54613, 54614, 54615, 54616, 54617, 54618, 54619, 54620, 54621, 54622, - 54623, 54624, 54625, 54626, 54627, 54628, 54629, 54630, 54631, 54632, 54633] - -data = json.load(open("20B_tokenizer_chinese.v2.json", "r", encoding="utf-8")) -vocab = data["model"]["vocab"] -id2vocab = {idx: token for token, idx in vocab.items()} - - -for token_id in error_tokens: - token = id2vocab[token_id] - for tmp in vocab: - if token in tmp and token != tmp: - print("catch") - -# print("a") -# json.la \ No newline at end of file diff --git a/spaces/espejelomar/Identify-the-breed-of-your-pet/backend/util.py b/spaces/espejelomar/Identify-the-breed-of-your-pet/backend/util.py deleted file mode 100644 index 4eb8a63ab9d3b44dc4ebe251b86a5233c402a97f..0000000000000000000000000000000000000000 --- a/spaces/espejelomar/Identify-the-breed-of-your-pet/backend/util.py +++ /dev/null @@ -1,40 +0,0 @@ -import streamlit as st -from PIL import Image -from backend.pipeline import PreTrainedPipeline -import pandas as pd -import io -import matplotlib.pyplot as plt -import numpy as np - - -def import_fig(): - image = st.file_uploader("Upload your picture.", type=["png", "jpg", "jpeg"]) - if image: - bytes_image = image.getvalue() - image = Image.open(io.BytesIO(bytes_image)) - 
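# Preview the decoded PIL image in the Streamlit page; the same object is then
# returned so fastai_model() can run the breed classifier on it.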
st.image(image, caption=["We are classifying this image..."]) - return image - - -def plot(data=None): - - fig = plt.figure() - ax = fig.add_axes([0, 0, 1, 1]) - breeds = data.head(3)["label"].tolist() - labels = data.head(3)["score"].tolist() - ax.bar(breeds, labels) - ax.set_ylabel("Probability that your pet is breed X") - ax.grid("on") - - st.pyplot(fig) - - -@st.cache(allow_output_mutation=True) -def fastai_model(image): - if image: - model = PreTrainedPipeline(path="backend") - outputs = model(image) - - outputs_df = pd.DataFrame(outputs) - - return outputs_df.sort_values(by=["score"], ascending=False) diff --git a/spaces/evanpierce/3D_Photo_Inpainting2/app.py b/spaces/evanpierce/3D_Photo_Inpainting2/app.py deleted file mode 100644 index f551cf4a7f120091c72c7cccb1e8ea1882eb34c8..0000000000000000000000000000000000000000 --- a/spaces/evanpierce/3D_Photo_Inpainting2/app.py +++ /dev/null @@ -1,229 +0,0 @@ -# Repo source: https://github.com/vt-vl-lab/3d-photo-inpainting - -#import os -#os.environ['QT_DEBUG_PLUGINS'] = '1' - -import subprocess -#subprocess.run('ldd /home/user/.local/lib/python3.8/site-packages/PyQt5/Qt/plugins/platforms/libqxcb.so', shell=True) -#subprocess.run('pip list', shell=True) -subprocess.run('nvidia-smi', shell=True) - -from pyvirtualdisplay import Display -display = Display(visible=0, size=(1920, 1080)).start() -#subprocess.run('echo $DISPLAY', shell=True) - -# 3d inpainting imports -import numpy as np -import argparse -import glob -import os -from functools import partial -import vispy -import scipy.misc as misc -from tqdm import tqdm -import yaml -import time -import sys -from mesh import write_ply, read_ply, output_3d_photo -from utils import get_MiDaS_samples, read_MiDaS_depth -import torch -import cv2 -from skimage.transform import resize -import imageio -import copy -from networks import Inpaint_Color_Net, Inpaint_Depth_Net, Inpaint_Edge_Net -from MiDaS.run import run_depth -from boostmonodepth_utils import run_boostmonodepth -from MiDaS.monodepth_net import MonoDepthNet -import MiDaS.MiDaS_utils as MiDaS_utils -from bilateral_filtering import sparse_bilateral_filtering - -import torch - -# gradio imports -import gradio as gr -import uuid -from PIL import Image -from pathlib import Path -import shutil -from time import sleep - -def inpaint(img_name, num_frames, fps): - - config = yaml.load(open('argument.yml', 'r')) - - config['num_frames'] = num_frames - config['fps'] = fps - - if torch.cuda.is_available(): - config['gpu_ids'] = 0 - - if config['offscreen_rendering'] is True: - vispy.use(app='egl') - - os.makedirs(config['mesh_folder'], exist_ok=True) - os.makedirs(config['video_folder'], exist_ok=True) - os.makedirs(config['depth_folder'], exist_ok=True) - sample_list = get_MiDaS_samples(config['src_folder'], config['depth_folder'], config, config['specific'], img_name.stem) - normal_canvas, all_canvas = None, None - - if isinstance(config["gpu_ids"], int) and (config["gpu_ids"] >= 0): - device = config["gpu_ids"] - else: - device = "cpu" - - print(f"running on device {device}") - - for idx in tqdm(range(len(sample_list))): - depth = None - sample = sample_list[idx] - print("Current Source ==> ", sample['src_pair_name']) - mesh_fi = os.path.join(config['mesh_folder'], sample['src_pair_name'] +'.ply') - image = imageio.imread(sample['ref_img_fi']) - - print(f"Running depth extraction at {time.time()}") - if config['use_boostmonodepth'] is True: - run_boostmonodepth(sample['ref_img_fi'], config['src_folder'], config['depth_folder']) - elif 
config['require_midas'] is True: - run_depth([sample['ref_img_fi']], config['src_folder'], config['depth_folder'], - config['MiDaS_model_ckpt'], MonoDepthNet, MiDaS_utils, target_w=640) - - if 'npy' in config['depth_format']: - config['output_h'], config['output_w'] = np.load(sample['depth_fi']).shape[:2] - else: - config['output_h'], config['output_w'] = imageio.imread(sample['depth_fi']).shape[:2] - frac = config['longer_side_len'] / max(config['output_h'], config['output_w']) - config['output_h'], config['output_w'] = int(config['output_h'] * frac), int(config['output_w'] * frac) - config['original_h'], config['original_w'] = config['output_h'], config['output_w'] - if image.ndim == 2: - image = image[..., None].repeat(3, -1) - if np.sum(np.abs(image[..., 0] - image[..., 1])) == 0 and np.sum(np.abs(image[..., 1] - image[..., 2])) == 0: - config['gray_image'] = True - else: - config['gray_image'] = False - image = cv2.resize(image, (config['output_w'], config['output_h']), interpolation=cv2.INTER_AREA) - depth = read_MiDaS_depth(sample['depth_fi'], 3.0, config['output_h'], config['output_w']) - mean_loc_depth = depth[depth.shape[0]//2, depth.shape[1]//2] - if not(config['load_ply'] is True and os.path.exists(mesh_fi)): - vis_photos, vis_depths = sparse_bilateral_filtering(depth.copy(), image.copy(), config, num_iter=config['sparse_iter'], spdb=False) - depth = vis_depths[-1] - model = None - torch.cuda.empty_cache() - print("Start Running 3D_Photo ...") - print(f"Loading edge model at {time.time()}") - depth_edge_model = Inpaint_Edge_Net(init_weights=True) - depth_edge_weight = torch.load(config['depth_edge_model_ckpt'], - map_location=torch.device(device)) - depth_edge_model.load_state_dict(depth_edge_weight) - depth_edge_model = depth_edge_model.to(device) - depth_edge_model.eval() - - print(f"Loading depth model at {time.time()}") - depth_feat_model = Inpaint_Depth_Net() - depth_feat_weight = torch.load(config['depth_feat_model_ckpt'], - map_location=torch.device(device)) - depth_feat_model.load_state_dict(depth_feat_weight, strict=True) - depth_feat_model = depth_feat_model.to(device) - depth_feat_model.eval() - depth_feat_model = depth_feat_model.to(device) - print(f"Loading rgb model at {time.time()}") - rgb_model = Inpaint_Color_Net() - rgb_feat_weight = torch.load(config['rgb_feat_model_ckpt'], - map_location=torch.device(device)) - rgb_model.load_state_dict(rgb_feat_weight) - rgb_model.eval() - rgb_model = rgb_model.to(device) - graph = None - - - print(f"Writing depth ply (and basically doing everything) at {time.time()}") - rt_info = write_ply(image, - depth, - sample['int_mtx'], - mesh_fi, - config, - rgb_model, - depth_edge_model, - depth_edge_model, - depth_feat_model) - - if rt_info is False: - continue - rgb_model = None - color_feat_model = None - depth_edge_model = None - depth_feat_model = None - torch.cuda.empty_cache() - if config['save_ply'] is True or config['load_ply'] is True: - verts, colors, faces, Height, Width, hFov, vFov = read_ply(mesh_fi) - else: - verts, colors, faces, Height, Width, hFov, vFov = rt_info - - - print(f"Making video at {time.time()}") - videos_poses, video_basename = copy.deepcopy(sample['tgts_poses']), sample['tgt_name'] - top = (config.get('original_h') // 2 - sample['int_mtx'][1, 2] * config['output_h']) - left = (config.get('original_w') // 2 - sample['int_mtx'][0, 2] * config['output_w']) - down, right = top + config['output_h'], left + config['output_w'] - border = [int(xx) for xx in [top, down, left, right]] - normal_canvas, 
all_canvas = output_3d_photo(verts.copy(), colors.copy(), faces.copy(), copy.deepcopy(Height), copy.deepcopy(Width), copy.deepcopy(hFov), copy.deepcopy(vFov), - copy.deepcopy(sample['tgt_pose']), sample['video_postfix'], copy.deepcopy(sample['ref_pose']), copy.deepcopy(config['video_folder']), - image.copy(), copy.deepcopy(sample['int_mtx']), config, image, - videos_poses, video_basename, config.get('original_h'), config.get('original_w'), border=border, depth=depth, normal_canvas=normal_canvas, all_canvas=all_canvas, - mean_loc_depth=mean_loc_depth) - -def resizer(input_img, max_img_size=512): - width, height = input_img.size - long_edge = height if height >= width else width - if long_edge > max_img_size: - ratio = max_img_size / long_edge - resized_width = int(ratio * width) - resized_height = int(ratio * height) - resized_input_img = input_img.resize((resized_width, resized_height), resample=2) - return resized_input_img - - else: - return input_img - -def main_app(input_img, num_frames, fps): - - # resize down - input_img = resizer(input_img) - - # Save image in necessary folder for inpainting - #img_name = Path(str(uuid.uuid4()) + '.jpg') - img_name = Path('sample.jpg') - save_folder = Path('image') - input_img.save(save_folder/img_name) - - inpaint(img_name, num_frames, fps) - - #subprocess.run('ls -l', shell=True) - #subprocess.run('ls image -l', shell=True) - #subprocess.run('ls video/ -l', shell=True) - - # Get output video path & return - input_img_path = str(save_folder/img_name) - out_vid_path = 'video/{0}_circle.mp4'.format(img_name.stem) - - return out_vid_path - -video_choices = ['dolly-zoom-in', 'zoom-in', 'circle', 'swing'] -gradio_inputs = [gr.inputs.Image(type='pil', label='Input Image'), - gr.inputs.Slider(minimum=60, maximum=240, step=1, default=120, label="Number of Frames"), - gr.inputs.Slider(minimum=10, maximum=40, step=1, default=20, label="Frames per Second (FPS)")] - -gradio_outputs = [gr.outputs.Video(label='Output Video')] -examples = [ ['moon.jpg'], ['dog.jpg'] ] - -description="Convert an image into a trajectory-following video. Images are automatically resized down to a max edge of 512. | NOTE: The current runtime for a sample is around 400-700 seconds. Running on a lower number of frames could help! Do be patient as this is on CPU-only, BUT if this space maybe gets a GPU one day, it's already configured to run with GPU-support :) If you have a GPU, feel free to use the author's original repo (linked at the bottom of this path, they have a collab notebook!) You can also run this space/gradio app locally!" - -article = "

    3D Photography using Context-aware Layered Depth Inpainting | Github Project Page | Github Repo

    " - -iface = gr.Interface(fn=main_app, inputs=gradio_inputs , outputs=gradio_outputs, examples=examples, - title='3D Image Inpainting', - description=description, - article=article, - enable_queue=True) - -iface.launch(debug=True) diff --git a/spaces/facebook/MusicGen/tests/utils/__init__.py b/spaces/facebook/MusicGen/tests/utils/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/tests/utils/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/falterWliame/Face_Mask_Detection/Anurag I21 Software Free Crack !!LINK!! Download.md b/spaces/falterWliame/Face_Mask_Detection/Anurag I21 Software Free Crack !!LINK!! Download.md deleted file mode 100644 index a79ad5231ac0fcd3553e63e3c82a66cdb45e01e9..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Anurag I21 Software Free Crack !!LINK!! Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Anurag i21 software free crack download


    Download 🆗 https://urlca.com/2uDck0



    - -Anurag i21 Ultra Photoshop Plugin Retouching Software Free Download Anurag i21 Photoshop Plugin Retouching Software 2013 free ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/falterWliame/Face_Mask_Detection/Ashcroft Solid State Physics Solution Manual Rar.md b/spaces/falterWliame/Face_Mask_Detection/Ashcroft Solid State Physics Solution Manual Rar.md deleted file mode 100644 index 31762e30b1392c8a357ad1992e79048eeb5c6dff..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Ashcroft Solid State Physics Solution Manual Rar.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Ashcroft Solid State Physics Solution Manual Rar


    DOWNLOAD →→→ https://urlca.com/2uDdpA



    - -I assume you are referring to the solutions given in Solid State Physics by Ashcroft and Mermin. I doubt that the authors have provided ...... consistently state and support their point of view. But you mean that this is solid state physics. If so, then it is difficult for me to answer, because. I do not know that. In this regard, I don't know much about solid state physics, but I do know physics and mathematics. I would say that both physics and mathematics are parts of physics. Physics and mathematics are different branches of physics, but they are not different sciences. Physics is the science that studies the structure and behavior of matter. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/falterWliame/Face_Mask_Detection/Comtekk Tone Generator Serial Number.md b/spaces/falterWliame/Face_Mask_Detection/Comtekk Tone Generator Serial Number.md deleted file mode 100644 index f19ae7af69da5e07587dad31db92781f6c1da8c9..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Comtekk Tone Generator Serial Number.md +++ /dev/null @@ -1,6 +0,0 @@ -

    comtekk tone generator serial number


    Download File ✔✔✔ https://urlca.com/2uDdWS



    -
    -ComTekk Multi Decoder will listen for any sustained tone signal and display the ... Write a program that calculates and prints the number of minutes in a year python ... Our free VIN decoder can be used to determine everything from vehicle trim level ... To align the frequency of the tone generator, use another transceiver with ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/falterWliame/Face_Mask_Detection/Ih8sn0w Ireb V3 1.2 For Windows English Download BEST.md b/spaces/falterWliame/Face_Mask_Detection/Ih8sn0w Ireb V3 1.2 For Windows English Download BEST.md deleted file mode 100644 index 804d8e5bfb2ad72de41a73ece797fe1f4e929e4c..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Ih8sn0w Ireb V3 1.2 For Windows English Download BEST.md +++ /dev/null @@ -1,12 +0,0 @@ -

    ih8sn0w ireb v3 1.2 for windows english download


    Download Filehttps://urlca.com/2uDcVo



    -
    -taig 1.2.1 EN (Tethered jailbreak iOS 8.0-8.1.2 for all devices: iPhone . Sn0wBreeze 2.9.3 (pwnagetool for Windows, supports iOS tethered jailbreak . Nill - jiayu s3g (AOKP) download. -Furious v2 for Android. -Sn0wbreeze 2.9.3 (PwnageTool) For iOS 8.4.1. -Sn0wbreeze 2.9.3 (PwnageTool) For iOS 8.4.1 - 8 days -9 hours ago. -Sn0wbreeze 2.9.3 (PwnageTool) For iOS 8.4.1 - 8 days back . -Sn0wbreeze 2.9.3 (PwnageTool) For iOS 8.4.1 - 8 days back . 8a78ff9644
    -
    -
    -

    diff --git a/spaces/falterWliame/Face_Mask_Detection/Intervideo Windvr 6.1 For Windows 7 Free [WORK] 17.md b/spaces/falterWliame/Face_Mask_Detection/Intervideo Windvr 6.1 For Windows 7 Free [WORK] 17.md deleted file mode 100644 index 72f98eec59b7611497c5a921e2a27cff527916f7..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Intervideo Windvr 6.1 For Windows 7 Free [WORK] 17.md +++ /dev/null @@ -1,7 +0,0 @@ -
    -

    An unusual feature about this antivirus software is its'sleep timer' option, which allows the software to cease recording automatically. The software works quite well, but can be difficult to setup. Nevertheless, the overall system is secure, and installation of the add-on for MacOS or the professional software Intervideo WinDVR 6 is easy and quick. Definitely worth every penny. Furthermore, the website provides great support.

    -

    Using the disc, burn software, you may transfer and burn files to disc for backup, archive, and distribution. The software is simple to use and requires little or no knowledge of burning. A certified Windows Live Mail can be downloaded from the website for free.

    -

    intervideo windvr 6.1 for windows 7 free 17


    Download ✒ ✒ ✒ https://urlca.com/2uDccP



    -

    Adore the benefits of a magnetic media once more. The program offers you the best and most simplified VCD/S-VCD/DVD recorder. Key features are: an easy-to-use interface, fast conversion speed, an audio tool, surround sound support, and a vast number of disc options. Intervideo WinDVD Pro 8 makes a great DVD authoring tool with features such as: Dolby Digital Audio, Dolby Pro Logic surround sound, multiple language tracks, Smart Region and Protection with the DVD-compliant CSS encryption. Intervideo WinDVD Platinum 8 is a high-performance DVD writing program for the Windows operating system and Mac OS X. It can burn up to 18 hours of DVD video while burning, and you can even use it as a virtual DVD drive for Windows. Intervideo WinDVD Pro 8 is an intelligent and complete DVD software package that enables you to burn, rip, encode, and convert DVD discs and videos. The software automatically detects and corrects DVD errors such as defects, scratches, and missing audio tracks.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/MarceloBielsaCoachingBuildUpPlayAgainstHighPressingTeamsbookspdffile.md b/spaces/falterWliame/Face_Mask_Detection/MarceloBielsaCoachingBuildUpPlayAgainstHighPressingTeamsbookspdffile.md deleted file mode 100644 index d4c898004769ea38fb9a4889d27e21bf1c06dba4..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/MarceloBielsaCoachingBuildUpPlayAgainstHighPressingTeamsbookspdffile.md +++ /dev/null @@ -1,6 +0,0 @@ -

    MarceloBielsaCoachingBuildUpPlayAgainstHighPressingTeamsbookspdffile


    Download Ziphttps://urlca.com/2uDcqg



    -
    -Marcelo BielsaCoachingBuildUpPlayAgainstHighPressingTeamsbookspdffile. Container. OverviewTags. Tags. Container. Overview Container. Overview Container. Overview Container. Overview Container. Overview Container. Overview Container. Overview Container. Overview Container. Overview Container. Overview Container. Overview Container. Overview Container. Overview Container. Overview Container. Overview Container. Overview Container. Overview Container. Overview Container. Overview Container. Overview Container. Overview Container. Overview Container. Overview Container. Overview Container. Overview Container. Overview We recommend. Container. Review Container. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/fatiXbelha/sd/Download Scratch 2.0 Offline Editor for Windows Mac and Linux.md b/spaces/fatiXbelha/sd/Download Scratch 2.0 Offline Editor for Windows Mac and Linux.md deleted file mode 100644 index 8bc54189f6b18941deb9e893ab350bd16c8c5cad..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Scratch 2.0 Offline Editor for Windows Mac and Linux.md +++ /dev/null @@ -1,125 +0,0 @@ -
    -

    How to Download Scratch 2.0 for Free

    -

    Do you want to learn how to code, create your own interactive projects, and join a creative community of millions of users? If so, you should download Scratch 2.0 for free. Scratch 2.0 is a free programming language and online platform that lets you imagine, program, and share your own games, animations, and stories. In this article, we will show you what Scratch 2.0 is, why you should download it, and how to download it for free.

    -

    download scratch 2.0 for free


    Download Zip » https://urllie.com/2uNCsj



    -

    What is Scratch 2.0?

    -

    Scratch 2.0 is the second version of Scratch, a programming language and online platform developed by the MIT Media Lab. Scratch 2.0 was released in May 2013 and introduced many new features and improvements over the previous version, such as:

    -
      -
    • A redesigned user interface that makes it easier to access different tools and options
    • -
    • A new paint editor that allows you to draw and edit your own sprites and backgrounds
    • -
    • A new sound editor that lets you record and edit your own sounds and music
    • -
    • A new vector graphics mode that enables you to create smoother and more scalable graphics
    • -
    • A new backpack feature that lets you store and reuse your favorite sprites, costumes, sounds, and scripts
    • -
    • A new cloud data feature that allows you to store and share data across different projects
    • -
    • A new extension feature that lets you connect Scratch with external devices and services
    • -
    -

    With Scratch 2.0, you can create anything you can imagine using a simple drag-and-drop interface that lets you snap together different blocks of code. You can also remix and modify existing projects from the online community, or share your own projects with others.

    -

    Why Download Scratch 2.0?

    -

    Learn Programming Skills

    -

    Scratch 2.0 is a great way to learn programming skills, especially for beginners and young learners. Scratch 2.0 teaches you the basic concepts and logic of coding, such as variables, loops, conditionals, events, operators, lists, procedures, etc. You can also learn more advanced topics such as recursion, cloning, parallelism, synchronization, etc.

    -

    Scratch 2.0 also helps you develop computational thinking skills, such as abstraction, decomposition, pattern recognition, algorithm design, debugging, etc. These skills are essential for solving problems in any domain of science, technology, engineering, art, or math.

    -

    How to download scratch 2.0 offline editor for free
    -Download scratch 2.0 app for Windows 10
    -Scratch 2.0 download for Mac OS X
    -Scratch 2.0 download for Linux
    -Scratch 2.0 download for Android tablets
    -Scratch 2.0 download for ChromeOS
    -Scratch 2.0 download for older versions of Windows and Mac
    -Scratch 2.0 download without Adobe AIR
    -Scratch 2.0 download with Scratch Link
    -Scratch 2.0 download and installation guide
    -Scratch 2.0 download and update instructions
    -Scratch 2.0 download and support materials
    -Scratch 2.0 download and starter projects
    -Scratch 2.0 download and getting started guide
    -Scratch 2.0 download and scratch cards
    -Scratch 2.0 download and online community
    -Scratch 2.0 download and sharing projects
    -Scratch 2.0 download and hardware devices
    -Scratch 2.0 download and net energy gain experiment
    -Scratch 2.0 download and holy grail fusion experiment
    -Scratch 2.0 download and mini sun experiment
    -Scratch 2.0 download and nuclear fusion reaction
    -Scratch 2.0 download and Korea Superconducting Tokamak Advanced Research facility
    -Scratch 2.0 download and Korea Institute of Fusion Energy
    -Scratch 2.0 download and physics problem to engineering one
    -Scratch 2.0 download and temperature hotter than the sun core
    -Scratch 2.0 download and temperature in kelvin
    -Scratch 2.0 download and solar core density
    -Scratch 2.0 download and solar core composition
    -Scratch 2.0 download and solar core plasma
    -Scratch 2.0 download and solar core fusion process
    -Scratch 2.0 download and solar core energy output
    -Scratch 2.0 download and solar core pressure
    -Scratch 2.0 download and solar core radius
    -Scratch 2.0 download and solar atmosphere layers
    -Scratch 2.0 download and photosphere temperature
    -Scratch 2.0 download and chromosphere thickness
    -Scratch 2.0 download and sun spot cycle
    -Scratch 2.0 download and sun fact sheet
    -Scratch 2.0 download and sun wikipedia page

    -

    Create Interactive Projects

    -

    Scratch 2.0 lets you create your own interactive projects using a variety of media elements, such as sprites, costumes, backgrounds, sounds, music, text, etc. You can also use different types of blocks to control the behavior and appearance of your projects, such as motion blocks, looks blocks, sound blocks, pen blocks, data blocks, events blocks, control blocks, sensing blocks, operators blocks, and more blocks.

    -

    You can make any kind of project you want with Scratch 2.0, such as games, animations, stories, simulations, quizzes, art, music, etc. You can also add interactivity to your projects using different inputs, such as keyboard, mouse, microphone, camera, etc. You can also use different outputs, such as sound, speech, text-to-speech, etc.

    -

    Join a Creative Community

    -

    Scratch 2.0 connects you with a creative community of millions of users from all over the world. You can explore and play with thousands of projects from other Scratchers, or share your own projects with them. You can also give and receive feedback, comments, likes, favorites, and follows. You can also join different studios, groups, and challenges that match your interests and goals.

    -

    Scratch 2.0 also supports online learning and collaboration. You can use Scratch 2.0 to create and join online courses, tutorials, guides, and resources that teach you different skills and topics. You can also use Scratch 2.0 to work on projects with your friends, classmates, teachers, or mentors.

    -

    How to Download Scratch 2.0 for Free

    -

    Requirements

    -

    To download Scratch 2.0 for free, you need the following requirements:

    -
      -
    • A computer with Windows (XP or later), Mac OS X (10.6 or later), or Linux (Ubuntu 12.04 or later)
    • -
    • An internet connection to download the Scratch 2.0 offline editor
    • -
    • Adobe AIR (version 20 or later) to run the Scratch 2.0 offline editor
    • -
    -

    If you don't have these requirements, you can still use Scratch 2.0 online by visiting https://scratch.mit.edu/

    -

    Steps

    -

    To download Scratch 2.0 for free, follow these steps:

    -
      -
    1. Go to https://scratch.mit.edu/download and click on the "Download" button for your operating system (Windows, Mac OS X, or Linux)
    2. -
    3. Save the Scratch 2.0 installer file to your computer and run it
    4. -
    5. Follow the instructions on the screen to install Scratch 2.0 offline editor and Adobe AIR (if you don't have it already)
    6. -
    7. Launch the Scratch 2.0 offline editor from your desktop or start menu
    8. -
    9. Enjoy creating and sharing your projects with Scratch 2.0!
    10. -
    -

    Tips and Tricks

    -

    Here are some tips and tricks to make the most of Scratch 2.0 offline editor:

    -
      -
    • You can open and save your projects locally on your computer or online on your Scratch account
    • -
    • You can import and export your projects as .sb2 files that you can share with others or use in other applications
    • -
    • You can use the "File" menu to access different options such as "New", "Open", "Save", "Save As", "Upload to Scratch", "Download to your computer", "Import", and "Export"
    • -
    • You can use the "Edit" menu to access different options such as "Undo", "Redo", "Cut", "Copy", "Paste", "Delete", "Select All", and "Turbo Mode"
    • -
    • You can use the "Tips" button to access different tutorials, guides, and resources that help you learn and use Scratch 2.0
    • -
    • You can use the "Help" menu to access different options such as "About Scratch", "Check for Updates", "Report a Problem", and "Scratch Website"
    • -
    • You can use the green flag button to start your project, the red stop button to stop your project, and the full screen button to view your project in full screen mode
    • -
    • You can use the stage area to see your project in action, the sprite list to add, delete, or select sprites, the scripts area to add, edit, or delete blocks of code, the costumes tab to add, edit, or delete costumes for your sprites, the sounds tab to add, edit, or delete sounds for your sprites, and the backpack to store and reuse your favorite sprites, costumes, sounds, and scripts
    • -
    • You can use the zoom buttons to zoom in or out of your scripts area, the clean up button to organize your blocks neatly, and the ? button to get help on any block
    • -
    • You can right-click on any sprite, costume, sound, script, or block to access different options such as "duplicate", "delete", "rename", "edit", "save to local file", etc.
    • -
    -

    Conclusion

    -

    Scratch 2.0 is a free programming language and online platform that lets you imagine, program, and share your own games, animations, and stories. You can download Scratch 2.0 for free and use it offline on your computer. You just need to follow the steps we showed you in this article. You can also learn more about Scratch 2.0 by exploring the online community and the tips and tricks we shared with you. Scratch 2.0 is a fun and easy way to learn programming skills, create interactive projects, and join a creative community. So what are you waiting for? Download Scratch 2.0 for free today and start scratching!

    -

    FAQs

    -

    Here are some frequently asked questions and answers about Scratch 2.0:

    -
      -
    1. What is the difference between Scratch 2.0 and Scratch 3.0?
      Scratch 3.0 is the latest version of Scratch that was released in January 2019. It has some new features and improvements over Scratch 2.0, such as:
        -
      • A new user interface that adapts to different screen sizes and devices
      • -
      • A new sound editor that supports more sound formats and effects
      • -
      • A new extension system that allows you to add more blocks and functionalities from external sources
      • -
      • A new video sensing feature that lets you use your webcam as an input device
      • -
      • A new text-to-speech feature that lets you convert text into speech
      • -
      • A new translation feature that lets you translate your projects into different languages
      • -
      • A new compatibility with HTML5 that makes it easier to run Scratch on any browser without Adobe Flash Player
      • -
      -You can use Scratch 3.0 online by visiting https://scratch.mit.edu/ or download it for free by visiting https://scratch.mit.edu/download
    2. -
    3. Can I use Scratch 2.0 online?
      Yes, you can use Scratch 2.0 online by visiting https://scratch.mit.edu/projects/editor/?tip_bar=getStarted. However, this version of Scratch 2.0 is no longer updated or supported by the Scratch team. You may encounter some bugs or issues when using it online. We recommend you to use Scratch 3.0 online instead.
    4. -
    5. Can I use Scratch 2.0 on a tablet or a smartphone?
      No, you cannot use Scratch 2.0 on a tablet or a smartphone. Scratch 2.0 offline editor only works on computers with Windows, Mac OS X, or Linux operating systems. If you want to use Scratch on a tablet or a smartphone, you can use Scratch 3.0 online instead.
    6. -
    7. How can I update Scratch 2.0 offline editor?
      To update Scratch 2.0 offline editor, you need to download and install the latest version of Scratch 2.0 offline editor from https://scratch.mit.edu/download. You may also need to update Adobe AIR from https://get.adobe.com/air/. You can also check for updates from the "Help" menu in the Scratch 2.0 offline editor.
    8. -
    9. Where can I find more help and support for Scratch 2.0?
      You can find more help and support for Scratch 2.0 by visiting the following websites: -You can also contact the Scratch team by emailing help@scratch.mit.edu or by filling out this form: https://scratch.mit.edu/contact-us/
    10. -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/fbrynpk/image-caption-generator/app.py b/spaces/fbrynpk/image-caption-generator/app.py deleted file mode 100644 index 10403a4d9978c320aa717f8b790b9a23aba5f85d..0000000000000000000000000000000000000000 --- a/spaces/fbrynpk/image-caption-generator/app.py +++ /dev/null @@ -1,52 +0,0 @@ -import io -import os -import streamlit as st -import requests -from PIL import Image -from model import get_caption_model, generate_caption - -@st.cache(allow_output_mutation=True) -def get_model(): - return get_caption_model() - -caption_model = get_model() - -def predict(): - captions = [] - pred_caption = generate_caption('tmp.jpg', caption_model) - - st.markdown('#### Predicted Captions:') - captions.append(pred_caption) - - for _ in range(4): - pred_caption = generate_caption('tmp.jpg', caption_model, add_noise=True) - if pred_caption not in captions: - captions.append(pred_caption) - - for c in captions: - st.write(c) - -st.title('Image-Caption-Generator') -img_url = st.text_input(label='Enter an Image URL') - -if (img_url != "") and (img_url != None): - img = Image.open(requests.get(img_url, stream=True).raw) - img = img.convert('RGB') - st.image(img) - img.save('tmp.jpg') - predict() - os.remove('tmp.jpg') - - -st.markdown('
    OR
    ', unsafe_allow_html=True) -img_upload = st.file_uploader(label='Upload Image', type=['jpg', 'png', 'jpeg']) - -if img_upload != None: - img = img_upload.read() - img = Image.open(io.BytesIO(img)) - img = img.convert('RGB') - img.save('tmp.jpg') - st.image(img) - predict() - os.remove('tmp.jpg') - diff --git a/spaces/fclong/summary/fengshen/examples/stable_diffusion_chinese/taiyi_handbook.md b/spaces/fclong/summary/fengshen/examples/stable_diffusion_chinese/taiyi_handbook.md deleted file mode 100644 index 2849521e6ec23b8b116974bb601cdc400b1a216a..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/stable_diffusion_chinese/taiyi_handbook.md +++ /dev/null @@ -1,425 +0,0 @@ -# 太乙绘画使用手册1.0——AI人类助理入职指南 - -版本:2022.11.20 (Ver 1) - -编撰团队:IDEA CCNL 封神榜团队 -团队主页:https://github.com/IDEA-CCNL/Fengshenbang-LM - -腾讯文档版本:太乙绘画使用手册1.0 https://docs.qq.com/doc/DWklwWkVvSFVwUE9Q - -感谢所有参与编撰以及投稿的“助理们”!(微信搜索:fengshenbang-lm) - -**特别感谢名单(排名按投稿时间顺序):** -王军杰,甘如饴,陈伟峰,李夏禹,高昕宇, - -
    - -# 目录 -- [太乙绘画使用手册1.0——AI人类助理入职指南](#太乙绘画使用手册10ai人类助理入职指南) -- [目录](#目录) -- [前言](#前言) -- [入门手册(如何写一个优秀的提示词)](#入门手册如何写一个优秀的提示词) - - [懒人简洁版](#懒人简洁版) - - [一些基础准备](#一些基础准备) - - [一个逗号引发的水印](#一个逗号引发的水印) - - [反向prompt negative](#反向prompt-negative) - - [赋予某种属性(4k壁纸, 插画, 油画等)消除白边](#赋予某种属性4k壁纸-插画-油画等消除白边) - - [增加细节](#增加细节) - - [画幅(512×512)](#画幅512512) -- [引用](#引用) -- [联系我们](#联系我们) -- [版权许可](#版权许可) - -
    - -# 前言 - -本手册追求仅使用**自然语言**就可以生成**好看的**图片。 - -这是一本**免费的、开源的**手册,我们乐意于**接受每个人的投稿**,一同完善本手册。 - -本手册旨在提供一些关于中文文生图模型(太乙系列)的一些神奇的文本提示词,并且分享我们的一些神奇的发现(规则)。 - -本手册包括两大部分: -- 入门手册:提示词基础写法以及原理 -- 效果图册:一些我们觉得好看的图和对应的prompt - -本使用手册使用环境为: -- 模型 -https://huggingface.co/IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1 - -- 环境 -WebUI -相关Github: https://github.com/IDEA-CCNL/Fengshenbang-LM/issues/186 - -参考:https://docs.qq.com/doc/DWHl3am5Zb05QbGVs - -
    - -# 入门手册(如何写一个优秀的提示词) - -![avatar](img/ui.png) - -
    - -## 懒人简洁版 -___ -
    - -提示词 Prompt: -> 不能出现中文的标点符号,比如中文的逗号,中文句号。并且需要赋予这幅画某种属性。 -> -> 如:长河落日圆, 4k壁纸 -> -
    - -反向提示词 Negative prompt: -> 一些负面词汇 -> -> 通用反向提示词:广告, ,, !, 。, ;, 资讯, 新闻, 水印 - -
    -画幅大小设置为512×512最佳。 - - -
    - -## 一些基础准备 -___ -
    - -以下实验的随机种子均为:1419200315 - -![avatar](img/ui.png) - -
    - -## 一个逗号引发的水印 -___ -
    - -我们来看看什么都不改会是咋样的。 - -日出,海面上 -Steps: 20, Sampler: PLMS, CFG scale: 7, Seed: 1419200315, Size: 512x512, Model hash: e2e75020, Batch size: 6, Batch pos: 0 - -![avatar](img/日出,海面上中文逗号.png) - -
    - -可以看到,其实是会出现水印,以及画幅不满的问题的。 - -![avatar](img/日出,海面上中文逗号标记.png) - -
    - -那我们把中文逗号换成英文逗号呢? - -日出, 海面上 -Steps: 20, Sampler: PLMS, CFG scale: 7, Seed: 1419200315, Size: 512x512, Model hash: e2e75020, Batch size: 6, Batch pos: 0 - -![avatar](img/日出,海面上英文逗号.png) - -
    - -!!!神奇的事情出现了,水印消失了! - -
    - -会不会是标点符号的问题?所以我在上述是英文逗号的基础下,添加一个中文的句号作为结尾。 - -![avatar](img/日出,海面上中文句号.png) - -没错,神奇的事情出现了,水印回来了,而且位置一模一样。 - -
    - -我甚至可以弄出更多的水印,比如加中文的感叹号。 - -日出, 海面上! -Steps: 20, Sampler: PLMS, CFG scale: 7, Seed: 1419200315, Size: 512x512, Model hash: e2e75020, Batch size: 6, Batch pos: 0 - -![avatar](img/日出,海面上中文感叹号.png) - -所以,一个重要的结论为,中文的标点符号是和水印有着某种强相关的联系的! - -因此,我们输入提示词时,应该**不用任何中文标点符号**。 - -
    - -## 反向prompt negative -___ -
    - -基本上就是把一些不好的词全加进去。 - -我们的原图为: - -日出, 海面上 -Steps: 20, Sampler: PLMS, CFG scale: 7, Seed: 1419200315, Size: 512x512, Model hash: e2e75020, Batch size: 6, Batch pos: 0 - -![avatar](img/日出,海面上英文逗号.png) - -
    - -日出, 海面上 -Negative prompt: 广告 -Steps: 20, Sampler: PLMS, CFG scale: 7, Seed: 1419200315, Size: 512x512, Model hash: e2e75020, Batch size: 6, Batch pos: 0 - -![avatar](img/日出,海面上nega广告.png) - -
    - -加上了广告之后,画面的表现力要好一些,比如图5的山的轮廓更好了。 - -根据之前的一些经验,把中文标点都放上去 - -
    - -日出, 海面上 -Negative prompt: 广告, ,, !, 。, ; -Steps: 20, Sampler: PLMS, CFG scale: 7, Seed: 1419200315, Size: 512x512, Model hash: e2e75020, Batch size: 6, Batch pos: 0 - -![avatar](img/日出,海面上nega广告符号.png) - -
    - -细节更多了点 - -
    - -日出, 海面上 -Negative prompt: 广告, ,, !, 。, ;, 资讯, 新闻, 水印 -Steps: 20, Sampler: PLMS, CFG scale: 7, Seed: 1419200315, Size: 512x512, Model hash: e2e75020, Batch size: 6, Batch pos: 0 - -![avatar](img/日出,海面上nega广告符号词汇.png) - -
    - -所以,我们的反向提示词选择: **广告, ,, !, 。, ;, 资讯, 新闻, 水印** - -
    - -## 赋予某种属性(4k壁纸, 插画, 油画等)消除白边 -___ -
    - -我们的原图为: - -
    - -日出, 海面上 -Steps: 20, Sampler: PLMS, CFG scale: 7, Seed: 1419200315, Size: 512x512, Model hash: e2e75020, Batch size: 6, Batch pos: 0 - -![avatar](img/日出,海面上英文逗号.png) - -
    - -我们添加了某种属性,比如 4k壁纸 之后: - -**4k壁纸** - -日出, 海面上, 4k壁纸 -Negative prompt: 广告, ,, !, 。, ;, 资讯, 新闻, 水印 -Steps: 20, Sampler: PLMS, CFG scale: 7, Seed: 1419200315, Size: 512x512, Model hash: e2e75020, Batch size: 6, Batch pos: 0 - -![avatar](img/日出,海面上英文逗号4k壁纸.png) - -
    - -**interesting!图3的白边不见了!** - -
    - -一个可能的解释是,我们的训练数据中,用的是resize的方法来调整输入的图片,而这样做,对于边长小于512的图,会自动保留白边。而这也就导致了我们的生成会有。但是一旦给这幅画赋予了某种属性,就可以避免这件事了。 - -
    - -(注,我试过3k壁纸和8k壁纸,都不行,估计是语料是真的没有。我试过 壁纸,这个prompt看起来不高清。) - -
    - -试试看别的属性 - -
    - -**插画** - -日出, 海面上, 插画 -Negative prompt: 广告, ,, !, 。, ;, 资讯, 新闻, 水印 -Steps: 20, Sampler: PLMS, CFG scale: 7, Seed: 1419200315, Size: 512x512, Model hash: e2e75020, Batch size: 6, Batch pos: 0 - -![avatar](img/日出,海面上英文逗号插画.png) - -
    - -插画,其实是什么画风都有,但是总体来说是画。 - -
    - -**油画** - -日出, 海面上, 油画 -Negative prompt: 广告, ,, !, 。, ;, 资讯, 新闻, 水印 -Steps: 20, Sampler: PLMS, CFG scale: 7, Seed: 1419200315, Size: 512x512, Model hash: e2e75020, Batch size: 6, Batch pos: 0 - -![avatar](img/日出,海面上英文逗号油画.png) - -
    - -虽然图3出现了画框,但是一幅油画,包括了画框也是正常。 - -
    - -**水彩** - -日出, 海面上, 水彩 -Negative prompt: 广告, ,, !, 。, ;, 资讯, 新闻, 水印 -Steps: 20, Sampler: PLMS, CFG scale: 7, Seed: 1419200315, Size: 512x512, Model hash: e2e75020, Batch size: 6, Batch pos: 0 - -![avatar](img/日出,海面上英文逗号水彩.png) - -
    - -**素描** - -日出, 海面上, 素描 -Negative prompt: 广告, ,, !, 。, ;, 资讯, 新闻, 水印 -Steps: 20, Sampler: PLMS, CFG scale: 7, Seed: 1419200315, Size: 512x512, Model hash: e2e75020, Batch size: 6, Batch pos: 0 - -![avatar](img/日出,海面上英文逗号素描.png) - - -
    - -## 增加细节 -___ -
    - -ok,我们回退一下。 - -
    - -日出, 海面上, 4k壁纸 -Negative prompt: 广告, ,, !, 。, ;, 资讯, 新闻, 水印 -Steps: 20, Sampler: PLMS, CFG scale: 7, Seed: 1419200315, Size: 512x512, Model hash: e2e75020, Batch size: 6, Batch pos: 0 - -![avatar](img/日出,海面上英文逗号4k壁纸.png) - -
    - -我们希望更多的细节呢? - -
    - -**复杂** - -日出, 海面上, 4k壁纸, 复杂 -Negative prompt: 广告, ,, !, 。, ;, 资讯, 新闻, 水印 -Steps: 20, Sampler: PLMS, CFG scale: 7, Seed: 1419200315, Size: 512x512, Model hash: e2e75020, Batch size: 6, Batch pos: 0 - -![avatar](img/日出,海面上英文逗号4k壁纸复杂.png) - -
    - -可以看到,复杂是一定作用的,所有图的细节都增加了。 - -
    - -**精细** - -日出, 海面上, 4k壁纸, 精细 -Negative prompt: 广告, ,, !, 。, ;, 资讯, 新闻, 水印 -Steps: 20, Sampler: PLMS, CFG scale: 7, Seed: 1419200315, Size: 512x512, Model hash: e2e75020, Batch size: 6, Batch pos: 0 - -![avatar](img/日出,海面上英文逗号4k壁纸精细.png) - -
    - -精细 的做法反而是把不少细节都选择了平滑处理。过度更加柔和。 - -
    - -**高清** - -日出, 海面上, 4k壁纸, 高清 -Negative prompt: 广告, ,, !, 。, ;, 资讯, 新闻, 水印 -Steps: 20, Sampler: PLMS, CFG scale: 7, Seed: 1419200315, Size: 512x512, Model hash: e2e75020, Batch size: 6, Batch pos: 0 - -![avatar](img/日出,海面上英文逗号4k壁纸高清.png) - -
    - -只多了一点点细节,图2的海面上多了光斑,这么一说也许是光影效果好了一些。 - - -
    - -## 画幅(512×512) -___ -
    - -不同的画幅也会影响生成的内容和质量。 - -参考自:https://huggingface.co/blog/stable_diffusion - -![avatar](img/hf_stable_blog.png) - -
    - -在stable diffusion中也有这个相关的发现,512*512是最好的画幅。 - -
    - -我们看看正常的: - -
    - -**512*512** - -日出, 海面上, 4k壁纸 -Negative prompt: 广告, ,, !, 。, ;, 资讯, 新闻, 水印 -Steps: 20, Sampler: PLMS, CFG scale: 7, Seed: 1419200315, Size: 512x512, Model hash: e2e75020, Batch size: 6, Batch pos: 0 - -![avatar](img/日出,海面上英文逗号4k壁纸.png) - -
    - -**384*384** - -日出, 海面上, 4k壁纸 -Negative prompt: 广告, ,, !, 。, ;, 资讯, 新闻, 水印 -Steps: 20, Sampler: PLMS, CFG scale: 7, Seed: 1419200315, Size: 384x384, Model hash: e2e75020, Batch size: 6, Batch pos: 0 - -![avatar](img/日出,海面上英文逗号4k壁纸384.png) - -
    - -低画幅会导致画面莫名撕裂,出图非常毛躁。 - -
    - -**256*256** - -如果我们进一步降低画质,会非常非常撕裂: - -日出, 海面上, 4k壁纸 -Negative prompt: 广告, ,, !, 。, ;, 资讯, 新闻, 水印 -Steps: 20, Sampler: PLMS, CFG scale: 7, Seed: 1419200315, Size: 256x256, Model hash: e2e75020, Batch size: 6, Batch pos: 0 - -![avatar](img/日出,海面上英文逗号4k壁纸256.png) - -# 引用 - -``` -@misc{Fengshenbang-LM, - title={Fengshenbang-LM}, - author={IDEA-CCNL}, - year={2021}, - howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}}, -} -``` - -# 版权许可 - -[Apache License 2.0](LICENSE) diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Build Your Dream City with Lokicraft Helper Mediafire Download Link.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Build Your Dream City with Lokicraft Helper Mediafire Download Link.md deleted file mode 100644 index 3d815214feed5937dcc78df2a0cac9bef33a5f7d..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Build Your Dream City with Lokicraft Helper Mediafire Download Link.md +++ /dev/null @@ -1,108 +0,0 @@ - -

    Lokicraft Helper City Download Mediafıre: How to Get the Best City in Lokicraft

    -

    If you are a fan of sandbox games, you might have heard of Lokicraft, a game inspired by Minecraft that lets you create and explore your own world. But did you know that you can also download a city map from Mediafıre using a mod app called Lokicraft Helper? In this article, we will show you how to do that, and give you some tips and tricks for playing in the city.

    -

    What is Lokicraft?

    -

    A sandbox game inspired by Minecraft

    -

    Lokicraft is a sandbox game that was released in 2019 by lokidev. It is similar to Minecraft, but with some differences in graphics, mechanics, and features. You can build anything you want using blocks of different materials, colors, and shapes. You can also explore different biomes, such as forests, deserts, mountains, and oceans. You can play in survival mode, where you have to gather resources, craft tools, fight enemies, and manage your hunger and health. Or you can play in creative mode, where you have unlimited resources and no threats.

    -

    lokicraft helper city download mediafıre


    Download Zip: https://gohhs.com/2uPnCp
    



    -

    Features and gameplay of Lokicraft

    -

    Lokicraft has many features that make it fun and engaging. Some of them are:

    -
      -
    • You can choose from different skins and outfits for your character.
    • -
    • You can tame animals and ride them.
    • -
    • You can craft weapons, armor, potions, and other items.
    • -
    • You can use redstone to create circuits and machines.
    • -
    • You can join multiplayer servers and play with other people online.
    • -
    • You can share your creations with other players and download their worlds.
    • -
    -

    What is Lokicraft Helper?

    -

    A mod app that adds more content to Lokicraft

    -

    Lokicraft Helper is a mod app that was created by Herbert Saikia. It is not an official app from lokidev, but it works well with Lokicraft. It adds more content to the game, such as new blocks, items, mobs, maps, textures, and sounds. You can also use it to edit your world, change your game mode, teleport to different locations, and more.

    -

    Benefits and drawbacks of using Lokicraft Helper

    -

    Using Lokicraft Helper has some benefits and drawbacks. Some of them are:

    -
      -
    • You can access more content and features that are not available in the original game.
    • -
    • You can customize your game according to your preferences.
    • -
    • You can enhance your gaming experience and have more fun.
    • -
    -

    However,

    -
      -
    • You need to download the app from Mediafıre or other third-party sources, which may not be safe or reliable.
    • -
    • You need to allow unknown sources on your device settings, which may expose your device to malware or viruses.
    • -
    • You may encounter some bugs or glitches when using the app or playing the game.
    • -
    -

    How to download Lokicraft Helper City from Mediafıre?

    -

    

    Step 1: Download the Lokicraft Helper app from Mediafıre

    -

    To download the Lokicraft Helper app, you need to visit the Mediafıre link that is provided by the developer. You can find the link on his YouTube channel, where he also posts videos about the app and the game. The link is: https://www.mediafire.com/file/9w0k8y9z7x4x9v4/Lokicraft_Helper.apk/file. Click on the link and then click on the green download button. The app file will be downloaded to your device.

    -

    Step 2: Install the app and launch it

    -

    After downloading the app file, you need to install it on your device. To do that, you need to enable unknown sources on your device settings. This will allow you to install apps from sources other than the Google Play Store. To enable unknown sources, go to Settings > Security > Unknown Sources and toggle it on. Then, locate the app file in your device storage and tap on it. Follow the instructions on the screen to install the app. Once the app is installed, launch it by tapping on its icon.

    -

    Step 3: Choose the city option and download the city file

    -

    When you launch the app, you will see a menu with different options, such as blocks, items, mobs, maps, textures, and sounds. To download the city map, you need to choose the maps option. Then, you will see a list of different maps that you can download, such as skyblock, castle, parkour, and city. To download the city map, tap on the city option. You will see a preview of the city and a download button. Tap on the download button and wait for the city file to be downloaded to your device.

    -

    lokicraft helper new update 1.18 download mediafıre
    -lokicraft helper best city zip file download mediafıre
    -how to download lokicraft 1.17 from mediafıre
    -lokicraft helper city map download mediafıre
    -lokicraft helper apk file download mediafıre
    -how to install lokicraft helper city on android
    -lokicraft helper city mod download mediafıre
    -lokicraft helper latest version download mediafıre
    -how to build a city in lokicraft helper
    -lokicraft helper city texture pack download mediafıre
    -lokicraft helper city tutorial video download mediafıre
    -lokicraft helper city cheats and hacks download mediafıre
    -how to play lokicraft helper city online with friends
    -lokicraft helper city free download mediafıre for pc
    -lokicraft helper city review and rating
    -lokicraft helper city gameplay and features
    -lokicraft helper city tips and tricks download mediafıre
    -how to update lokicraft helper city to the latest version
    -lokicraft helper city skins and costumes download mediafıre
    -how to create your own city in lokicraft helper
    -lokicraft helper city challenges and quests download mediafıre
    -how to backup and restore your lokicraft helper city data
    -lokicraft helper city screenshots and wallpapers download mediafıre
    -how to customize your lokicraft helper city settings and options
    -lokicraft helper city best buildings and structures download mediafıre
    -how to uninstall and reinstall your lokicraft helper city app
    -lokicraft helper city bugs and errors fix download mediafıre
    -how to share your lokicraft helper city with other players
    -lokicraft helper city alternatives and similar games download mediafıre
    -how to contact the developers of lokicraft helper city for feedback and support

    -

    Step 4: Import the city file to Lokicraft and enjoy

    -

    After downloading the city file, you need to import it to Lokicraft. To do that, open Lokicraft and tap on the play button. Then, tap on the import world button at the bottom of the screen. You will see a list of files in your device storage. Locate the city file that you downloaded from Mediafıre and tap on it. The file will be imported to Lokicraft and you will see it in your world list. Tap on the city world and start playing.

    -

    Tips and tricks for playing in the city

    -

    Explore the buildings and landmarks

    -

    The city map is very detailed and realistic. You can explore different buildings and landmarks, such as skyscrapers, hotels, restaurants, shops, museums, parks, and more. You can also find hidden chests and secrets in some of them. You can admire the architecture and design of the city and discover new things every time you play.

    -

    Customize your own house and shop

    -

    The city map also gives you a chance to customize your own house and shop. You can choose from different types of houses and shops that are available in the city. You can also use blocks and items from Lokicraft Helper to decorate them according to your style and taste. You can make your house cozy and comfortable, and your shop attractive and profitable.

    -

    Interact with other players and NPCs

    -

    The city map is more fun when you play with other players online. You can join multiplayer servers that have the city map installed and meet new friends or foes. You can chat with them, trade with them, compete with them, or cooperate with them. You can also interact with NPCs that are in the city, such as villagers, guards, shopkeepers, and more. They can offer you quests, rewards, information, or services.

    -

    Conclusion

    -

    Lokicraft Helper City Download Mediafıre is a great way to get the best city in Lokicraft. It is a mod app that adds more content and features to Lokicraft, such as new blocks, items, mobs, maps, textures, and sounds. You can download it from Mediafıre using a link provided by the developer. You can also install it on your device easily by enabling unknown sources and following some simple steps. You can then import the city file to Lokicraft and enjoy playing in it.

    -

    The city map is very impressive and realistic. It has many buildings and landmarks that you can explore and discover. It also allows you to customize your own house and shop using blocks and items from Lokicraft Helper. You can also interact with other players online or NPCs in-game for more fun and excitement.

    -

    

    If you are looking for a new challenge and adventure in Lokicraft, you should definitely try the city map from Mediafıre. It will give you a whole new perspective and experience of the game. You will not regret it.

    -

    FAQs

    -

    Q: Is Lokicraft Helper safe to use?

    -

    A: Lokicraft Helper is not an official app from lokidev, so it may not be 100% safe or reliable. You should download it from Mediafıre or other third-party sources at your own risk. You should also enable unknown sources on your device settings, which may expose your device to malware or viruses. You should also backup your Lokicraft data before using the app, in case something goes wrong.

    -

    Q: Is Lokicraft Helper free to use?

    -

    A: Yes, Lokicraft Helper is free to use. You do not need to pay anything to download or use the app. However, you may see some ads or pop-ups when using the app, which may be annoying or intrusive. You can also support the developer by donating or subscribing to his YouTube channel.

    -

    Q: How can I update Lokicraft Helper?

    -

    A: The developer of Lokicraft Helper usually posts updates and new versions of the app on his YouTube channel. You can check his channel regularly for any news or announcements. You can also follow him on social media platforms, such as Facebook, Twitter, or Instagram. To update the app, you need to download the latest version from Mediafıre or other sources and install it on your device.

    -

    Q: How can I uninstall Lokicraft Helper?

    -

    A: If you want to uninstall Lokicraft Helper, you can do it easily by following these steps:

    -
      -
    1. Go to Settings > Apps > Lokicraft Helper and tap on it.
    2. -
    3. Tap on the uninstall button and confirm your action.
    4. -
    5. The app will be uninstalled from your device.
    6. -
    -

    Q: How can I contact the developer of Lokicraft Helper?

    -

    A: If you have any questions, feedback, suggestions, or complaints about Lokicraft Helper, you can contact the developer by using these methods:

    -
      -
    • Email: herbertsaikia@gmail.com
    • -
    • YouTube: https://www.youtube.com/channel/UC7x1y0mZg7wY6dW8t9a8xOg
    • -
    • Facebook: https://www.facebook.com/herbert.saikia.9
    • -
    • Twitter: https://twitter.com/HerbertSaikia
    • -
    • Instagram: https://www.instagram.com/herbertsaikia/
    • -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/fffiloni/Music_Source_Separation/scripts/0_download_datasets/vctk.sh b/spaces/fffiloni/Music_Source_Separation/scripts/0_download_datasets/vctk.sh deleted file mode 100644 index e4da624a6ff0bed67ca7905d7d3c3a9b6f8e7d38..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Music_Source_Separation/scripts/0_download_datasets/vctk.sh +++ /dev/null @@ -1,23 +0,0 @@ -#!/bin/bash - -echo "The dataset link is at http://www.udialogue.org/download/VCTK-Corpus.tar.gz" - -# The downloaded VCTK dataset looks like: -# ./datasets/vctk -# └── wav48 -# ├── train (100 speakers) -# │ ├── p225 (231 files) -# │ │ ├── p225_001_mic1.flac.wav -# │ │ └── ... -# │ ├── p226 (356 files) -# │ │ ├── p226_001_mic1.flac.wav -# │ │ └── ... -# │ └── ... -# └── test (8 speakers) -# ├── p360 (424 files) -# │ ├── p360_001_mic1.flac.wav -# │ └── ... -# ├── p226 (424 files) -# │ ├── p361_001_mic1.flac.wav -# │ └── ... -# └── ... \ No newline at end of file diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/debug/Makefile b/spaces/fffiloni/controlnet-animation-doodle/node_modules/debug/Makefile deleted file mode 100644 index 584da8bf938e639ece3ba2bd4105c215c2b1ff51..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/debug/Makefile +++ /dev/null @@ -1,50 +0,0 @@ -# get Makefile directory name: http://stackoverflow.com/a/5982798/376773 -THIS_MAKEFILE_PATH:=$(word $(words $(MAKEFILE_LIST)),$(MAKEFILE_LIST)) -THIS_DIR:=$(shell cd $(dir $(THIS_MAKEFILE_PATH));pwd) - -# BIN directory -BIN := $(THIS_DIR)/node_modules/.bin - -# Path -PATH := node_modules/.bin:$(PATH) -SHELL := /bin/bash - -# applications -NODE ?= $(shell which node) -YARN ?= $(shell which yarn) -PKG ?= $(if $(YARN),$(YARN),$(NODE) $(shell which npm)) -BROWSERIFY ?= $(NODE) $(BIN)/browserify - -.FORCE: - -install: node_modules - -node_modules: package.json - @NODE_ENV= $(PKG) install - @touch node_modules - -lint: .FORCE - eslint browser.js debug.js index.js node.js - -test-node: .FORCE - istanbul cover node_modules/mocha/bin/_mocha -- test/**.js - -test-browser: .FORCE - mkdir -p dist - - @$(BROWSERIFY) \ - --standalone debug \ - . 
> dist/debug.js - - karma start --single-run - rimraf dist - -test: .FORCE - concurrently \ - "make test-node" \ - "make test-browser" - -coveralls: - cat ./coverage/lcov.info | ./node_modules/coveralls/bin/coveralls.js - -.PHONY: all install clean distclean diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/build/transports-uws/index.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/build/transports-uws/index.js deleted file mode 100644 index 97a4e3fc3591b0c5bcd185b92e50578f9514e5bc..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/build/transports-uws/index.js +++ /dev/null @@ -1,8 +0,0 @@ -"use strict"; -Object.defineProperty(exports, "__esModule", { value: true }); -const polling_1 = require("./polling"); -const websocket_1 = require("./websocket"); -exports.default = { - polling: polling_1.Polling, - websocket: websocket_1.WebSocket, -}; diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io/dist/socket.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io/dist/socket.d.ts deleted file mode 100644 index 83fac2c792023b399033b4709cd3e4be4787acce..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io/dist/socket.d.ts +++ /dev/null @@ -1,669 +0,0 @@ -/// -/// -import { Packet } from "socket.io-parser"; -import { AllButLast, DecorateAcknowledgements, DecorateAcknowledgementsWithMultipleResponses, DefaultEventsMap, EventNames, EventParams, EventsMap, FirstArg, Last, StrictEventEmitter } from "./typed-events"; -import type { Client } from "./client"; -import type { Namespace } from "./namespace"; -import type { IncomingHttpHeaders, IncomingMessage } from "http"; -import type { Room, Session, SocketId } from "socket.io-adapter"; -import type { ParsedUrlQuery } from "querystring"; -import { BroadcastOperator } from "./broadcast-operator"; -export declare type DisconnectReason = "transport error" | "transport close" | "forced close" | "ping timeout" | "parse error" | "server shutting down" | "forced server close" | "client namespace disconnect" | "server namespace disconnect"; -export interface SocketReservedEventsMap { - disconnect: (reason: DisconnectReason, description?: any) => void; - disconnecting: (reason: DisconnectReason, description?: any) => void; - error: (err: Error) => void; -} -export interface EventEmitterReservedEventsMap { - newListener: (eventName: string | Symbol, listener: (...args: any[]) => void) => void; - removeListener: (eventName: string | Symbol, listener: (...args: any[]) => void) => void; -} -export declare const RESERVED_EVENTS: ReadonlySet; -/** - * The handshake details - */ -export interface Handshake { - /** - * The headers sent as part of the handshake - */ - headers: IncomingHttpHeaders; - /** - * The date of creation (as string) - */ - time: string; - /** - * The ip of the client - */ - address: string; - /** - * Whether the connection is cross-domain - */ - xdomain: boolean; - /** - * Whether the connection is secure - */ - secure: boolean; - /** - * The date of creation (as unix timestamp) - */ - issued: number; - /** - * The request URL string - */ - url: string; - /** - * The query object - */ - query: ParsedUrlQuery; - /** - * The auth object - */ - auth: { - [key: string]: any; - }; -} -/** - * `[eventName, ...args]` - */ -export declare type Event = [string, ...any[]]; -/** - * This is the main object for interacting with a client. 
- * - * A Socket belongs to a given {@link Namespace} and uses an underlying {@link Client} to communicate. - * - * Within each {@link Namespace}, you can also define arbitrary channels (called "rooms") that the {@link Socket} can - * join and leave. That provides a convenient way to broadcast to a group of socket instances. - * - * @example - * io.on("connection", (socket) => { - * console.log(`socket ${socket.id} connected`); - * - * // send an event to the client - * socket.emit("foo", "bar"); - * - * socket.on("foobar", () => { - * // an event was received from the client - * }); - * - * // join the room named "room1" - * socket.join("room1"); - * - * // broadcast to everyone in the room named "room1" - * io.to("room1").emit("hello"); - * - * // upon disconnection - * socket.on("disconnect", (reason) => { - * console.log(`socket ${socket.id} disconnected due to ${reason}`); - * }); - * }); - */ -export declare class Socket extends StrictEventEmitter { - readonly nsp: Namespace; - readonly client: Client; - /** - * An unique identifier for the session. - */ - readonly id: SocketId; - /** - * Whether the connection state was recovered after a temporary disconnection. In that case, any missed packets will - * be transmitted to the client, the data attribute and the rooms will be restored. - */ - readonly recovered: boolean; - /** - * The handshake details. - */ - readonly handshake: Handshake; - /** - * Additional information that can be attached to the Socket instance and which will be used in the - * {@link Server.fetchSockets()} method. - */ - data: Partial; - /** - * Whether the socket is currently connected or not. - * - * @example - * io.use((socket, next) => { - * console.log(socket.connected); // false - * next(); - * }); - * - * io.on("connection", (socket) => { - * console.log(socket.connected); // true - * }); - */ - connected: boolean; - /** - * The session ID, which must not be shared (unlike {@link id}). - * - * @private - */ - private readonly pid; - private readonly server; - private readonly adapter; - private acks; - private fns; - private flags; - private _anyListeners?; - private _anyOutgoingListeners?; - /** - * Interface to a `Client` for a given `Namespace`. - * - * @param {Namespace} nsp - * @param {Client} client - * @param {Object} auth - * @package - */ - constructor(nsp: Namespace, client: Client, auth: Record, previousSession?: Session); - /** - * Builds the `handshake` BC object - * - * @private - */ - private buildHandshake; - /** - * Emits to this client. - * - * @example - * io.on("connection", (socket) => { - * socket.emit("hello", "world"); - * - * // all serializable datastructures are supported (no need to call JSON.stringify) - * socket.emit("hello", 1, "2", { 3: ["4"], 5: Buffer.from([6]) }); - * - * // with an acknowledgement from the client - * socket.emit("hello", "world", (val) => { - * // ... - * }); - * }); - * - * @return Always returns `true`. 
- */ - emit>(ev: Ev, ...args: EventParams): boolean; - /** - * Emits an event and waits for an acknowledgement - * - * @example - * io.on("connection", async (socket) => { - * // without timeout - * const response = await socket.emitWithAck("hello", "world"); - * - * // with a specific timeout - * try { - * const response = await socket.timeout(1000).emitWithAck("hello", "world"); - * } catch (err) { - * // the client did not acknowledge the event in the given delay - * } - * }); - * - * @return a Promise that will be fulfilled when the client acknowledges the event - */ - emitWithAck>(ev: Ev, ...args: AllButLast>): Promise>>>; - /** - * @private - */ - private registerAckCallback; - /** - * Targets a room when broadcasting. - * - * @example - * io.on("connection", (socket) => { - * // the “foo” event will be broadcast to all connected clients in the “room-101” room, except this socket - * socket.to("room-101").emit("foo", "bar"); - * - * // the code above is equivalent to: - * io.to("room-101").except(socket.id).emit("foo", "bar"); - * - * // with an array of rooms (a client will be notified at most once) - * socket.to(["room-101", "room-102"]).emit("foo", "bar"); - * - * // with multiple chained calls - * socket.to("room-101").to("room-102").emit("foo", "bar"); - * }); - * - * @param room - a room, or an array of rooms - * @return a new {@link BroadcastOperator} instance for chaining - */ - to(room: Room | Room[]): BroadcastOperator, SocketData>; - /** - * Targets a room when broadcasting. Similar to `to()`, but might feel clearer in some cases: - * - * @example - * io.on("connection", (socket) => { - * // disconnect all clients in the "room-101" room, except this socket - * socket.in("room-101").disconnectSockets(); - * }); - * - * @param room - a room, or an array of rooms - * @return a new {@link BroadcastOperator} instance for chaining - */ - in(room: Room | Room[]): BroadcastOperator, SocketData>; - /** - * Excludes a room when broadcasting. - * - * @example - * io.on("connection", (socket) => { - * // the "foo" event will be broadcast to all connected clients, except the ones that are in the "room-101" room - * // and this socket - * socket.except("room-101").emit("foo", "bar"); - * - * // with an array of rooms - * socket.except(["room-101", "room-102"]).emit("foo", "bar"); - * - * // with multiple chained calls - * socket.except("room-101").except("room-102").emit("foo", "bar"); - * }); - * - * @param room - a room, or an array of rooms - * @return a new {@link BroadcastOperator} instance for chaining - */ - except(room: Room | Room[]): BroadcastOperator, SocketData>; - /** - * Sends a `message` event. - * - * This method mimics the WebSocket.send() method. - * - * @see https://developer.mozilla.org/en-US/docs/Web/API/WebSocket/send - * - * @example - * io.on("connection", (socket) => { - * socket.send("hello"); - * - * // this is equivalent to - * socket.emit("message", "hello"); - * }); - * - * @return self - */ - send(...args: EventParams): this; - /** - * Sends a `message` event. Alias of {@link send}. - * - * @return self - */ - write(...args: EventParams): this; - /** - * Writes a packet. - * - * @param {Object} packet - packet object - * @param {Object} opts - options - * @private - */ - private packet; - /** - * Joins a room. 
- * - * @example - * io.on("connection", (socket) => { - * // join a single room - * socket.join("room1"); - * - * // join multiple rooms - * socket.join(["room1", "room2"]); - * }); - * - * @param {String|Array} rooms - room or array of rooms - * @return a Promise or nothing, depending on the adapter - */ - join(rooms: Room | Array): Promise | void; - /** - * Leaves a room. - * - * @example - * io.on("connection", (socket) => { - * // leave a single room - * socket.leave("room1"); - * - * // leave multiple rooms - * socket.leave("room1").leave("room2"); - * }); - * - * @param {String} room - * @return a Promise or nothing, depending on the adapter - */ - leave(room: string): Promise | void; - /** - * Leave all rooms. - * - * @private - */ - private leaveAll; - /** - * Called by `Namespace` upon successful - * middleware execution (ie: authorization). - * Socket is added to namespace array before - * call to join, so adapters can access it. - * - * @private - */ - _onconnect(): void; - /** - * Called with each packet. Called by `Client`. - * - * @param {Object} packet - * @private - */ - _onpacket(packet: Packet): void; - /** - * Called upon event packet. - * - * @param {Packet} packet - packet object - * @private - */ - private onevent; - /** - * Produces an ack callback to emit with an event. - * - * @param {Number} id - packet id - * @private - */ - private ack; - /** - * Called upon ack packet. - * - * @private - */ - private onack; - /** - * Called upon client disconnect packet. - * - * @private - */ - private ondisconnect; - /** - * Handles a client error. - * - * @private - */ - _onerror(err: Error): void; - /** - * Called upon closing. Called by `Client`. - * - * @param {String} reason - * @param description - * @throw {Error} optional error object - * - * @private - */ - _onclose(reason: DisconnectReason, description?: any): this | undefined; - /** - * Makes the socket leave all the rooms it was part of and prevents it from joining any other room - * - * @private - */ - _cleanup(): void; - /** - * Produces an `error` packet. - * - * @param {Object} err - error object - * - * @private - */ - _error(err: any): void; - /** - * Disconnects this client. - * - * @example - * io.on("connection", (socket) => { - * // disconnect this socket (the connection might be kept alive for other namespaces) - * socket.disconnect(); - * - * // disconnect this socket and close the underlying connection - * socket.disconnect(true); - * }) - * - * @param {Boolean} close - if `true`, closes the underlying connection - * @return self - */ - disconnect(close?: boolean): this; - /** - * Sets the compress flag. - * - * @example - * io.on("connection", (socket) => { - * socket.compress(false).emit("hello"); - * }); - * - * @param {Boolean} compress - if `true`, compresses the sending data - * @return {Socket} self - */ - compress(compress: boolean): this; - /** - * Sets a modifier for a subsequent event emission that the event data may be lost if the client is not ready to - * receive messages (because of network slowness or other issues, or because they’re connected through long polling - * and is in the middle of a request-response cycle). - * - * @example - * io.on("connection", (socket) => { - * socket.volatile.emit("hello"); // the client may or may not receive it - * }); - * - * @return {Socket} self - */ - get volatile(): this; - /** - * Sets a modifier for a subsequent event emission that the event data will only be broadcast to every sockets but the - * sender. 
- * - * @example - * io.on("connection", (socket) => { - * // the “foo” event will be broadcast to all connected clients, except this socket - * socket.broadcast.emit("foo", "bar"); - * }); - * - * @return a new {@link BroadcastOperator} instance for chaining - */ - get broadcast(): BroadcastOperator, SocketData>; - /** - * Sets a modifier for a subsequent event emission that the event data will only be broadcast to the current node. - * - * @example - * io.on("connection", (socket) => { - * // the “foo” event will be broadcast to all connected clients on this node, except this socket - * socket.local.emit("foo", "bar"); - * }); - * - * @return a new {@link BroadcastOperator} instance for chaining - */ - get local(): BroadcastOperator, SocketData>; - /** - * Sets a modifier for a subsequent event emission that the callback will be called with an error when the - * given number of milliseconds have elapsed without an acknowledgement from the client: - * - * @example - * io.on("connection", (socket) => { - * socket.timeout(5000).emit("my-event", (err) => { - * if (err) { - * // the client did not acknowledge the event in the given delay - * } - * }); - * }); - * - * @returns self - */ - timeout(timeout: number): Socket, ServerSideEvents, SocketData>; - /** - * Dispatch incoming event to socket listeners. - * - * @param {Array} event - event that will get emitted - * @private - */ - private dispatch; - /** - * Sets up socket middleware. - * - * @example - * io.on("connection", (socket) => { - * socket.use(([event, ...args], next) => { - * if (isUnauthorized(event)) { - * return next(new Error("unauthorized event")); - * } - * // do not forget to call next - * next(); - * }); - * - * socket.on("error", (err) => { - * if (err && err.message === "unauthorized event") { - * socket.disconnect(); - * } - * }); - * }); - * - * @param {Function} fn - middleware function (event, next) - * @return {Socket} self - */ - use(fn: (event: Event, next: (err?: Error) => void) => void): this; - /** - * Executes the middleware for an incoming event. - * - * @param {Array} event - event that will get emitted - * @param {Function} fn - last fn call in the middleware - * @private - */ - private run; - /** - * Whether the socket is currently disconnected - */ - get disconnected(): boolean; - /** - * A reference to the request that originated the underlying Engine.IO Socket. - */ - get request(): IncomingMessage; - /** - * A reference to the underlying Client transport connection (Engine.IO Socket object). - * - * @example - * io.on("connection", (socket) => { - * console.log(socket.conn.transport.name); // prints "polling" or "websocket" - * - * socket.conn.once("upgrade", () => { - * console.log(socket.conn.transport.name); // prints "websocket" - * }); - * }); - */ - get conn(): import("engine.io").Socket; - /** - * Returns the rooms the socket is currently in. - * - * @example - * io.on("connection", (socket) => { - * console.log(socket.rooms); // Set { } - * - * socket.join("room1"); - * - * console.log(socket.rooms); // Set { , "room1" } - * }); - */ - get rooms(): Set; - /** - * Adds a listener that will be fired when any event is received. The event name is passed as the first argument to - * the callback. - * - * @example - * io.on("connection", (socket) => { - * socket.onAny((event, ...args) => { - * console.log(`got event ${event}`); - * }); - * }); - * - * @param listener - */ - onAny(listener: (...args: any[]) => void): this; - /** - * Adds a listener that will be fired when any event is received. 
The event name is passed as the first argument to - * the callback. The listener is added to the beginning of the listeners array. - * - * @param listener - */ - prependAny(listener: (...args: any[]) => void): this; - /** - * Removes the listener that will be fired when any event is received. - * - * @example - * io.on("connection", (socket) => { - * const catchAllListener = (event, ...args) => { - * console.log(`got event ${event}`); - * } - * - * socket.onAny(catchAllListener); - * - * // remove a specific listener - * socket.offAny(catchAllListener); - * - * // or remove all listeners - * socket.offAny(); - * }); - * - * @param listener - */ - offAny(listener?: (...args: any[]) => void): this; - /** - * Returns an array of listeners that are listening for any event that is specified. This array can be manipulated, - * e.g. to remove listeners. - */ - listenersAny(): ((...args: any[]) => void)[]; - /** - * Adds a listener that will be fired when any event is sent. The event name is passed as the first argument to - * the callback. - * - * Note: acknowledgements sent to the client are not included. - * - * @example - * io.on("connection", (socket) => { - * socket.onAnyOutgoing((event, ...args) => { - * console.log(`sent event ${event}`); - * }); - * }); - * - * @param listener - */ - onAnyOutgoing(listener: (...args: any[]) => void): this; - /** - * Adds a listener that will be fired when any event is emitted. The event name is passed as the first argument to the - * callback. The listener is added to the beginning of the listeners array. - * - * @example - * io.on("connection", (socket) => { - * socket.prependAnyOutgoing((event, ...args) => { - * console.log(`sent event ${event}`); - * }); - * }); - * - * @param listener - */ - prependAnyOutgoing(listener: (...args: any[]) => void): this; - /** - * Removes the listener that will be fired when any event is sent. - * - * @example - * io.on("connection", (socket) => { - * const catchAllListener = (event, ...args) => { - * console.log(`sent event ${event}`); - * } - * - * socket.onAnyOutgoing(catchAllListener); - * - * // remove a specific listener - * socket.offAnyOutgoing(catchAllListener); - * - * // or remove all listeners - * socket.offAnyOutgoing(); - * }); - * - * @param listener - the catch-all listener - */ - offAnyOutgoing(listener?: (...args: any[]) => void): this; - /** - * Returns an array of listeners that are listening for any event that is specified. This array can be manipulated, - * e.g. to remove listeners. 
- */ - listenersAnyOutgoing(): ((...args: any[]) => void)[]; - /** - * Notify the listeners for each packet sent (emit or broadcast) - * - * @param packet - * - * @private - */ - private notifyOutgoingListeners; - private newBroadcastOperator; -} diff --git a/spaces/fffiloni/gpt-talking-portrait/share_btn.py b/spaces/fffiloni/gpt-talking-portrait/share_btn.py deleted file mode 100644 index 6855af6f0b1ddaf1ec72eed6d6ce043e6acd0f24..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/gpt-talking-portrait/share_btn.py +++ /dev/null @@ -1,81 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """ - -async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - - async function getInputVideoFile(videoEl){ - const res = await fetch(videoEl.src); - const blob = await res.blob(); - const videoId = Date.now() % 200; - const fileName = `gpt-talking-portrai-${{videoId}}.mp4`; - return new File([blob], fileName, { type: 'video/mp4' }); - } - - - const gradioEl = document.querySelector("gradio-app").shadowRoot || document.querySelector('body > gradio-app'); - - const whisper_input = gradioEl.querySelector('#text_inp textarea').value; - - const outputVideo = gradioEl.querySelector('#video_out video'); - const outputVideo_src = gradioEl.querySelector('#video_out video').src; - console.log(outputVideo_src) - const outputVideo_name = outputVideo_src.split('/').pop(); - - const video_url = `https://fffiloni-gpt-talking-portrait.hf.space/file=https://fffiloni-one-shot-talking-face.hf.space/file=/tmp/${outputVideo_name}`; - - let titleTxt = outputVideo_name; - - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - - if(!outputVideo){ - return; - }; - - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - - const videoFile = await getInputVideoFile(outputVideo); - const dataOutputVideo = await uploadFile(videoFile); - - - const descriptionMd = ` -#### What i asked for: -${whisper_input} -#### Video: -${dataOutputVideo} -`; - const params = new URLSearchParams({ - title: titleTxt, - description: descriptionMd, - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/fffiloni/gpt-talking-portrait/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/flax-community/SentenceSimplifier/pages/inference.py b/spaces/flax-community/SentenceSimplifier/pages/inference.py deleted file mode 100644 index b8c19aed341ed753b9b6245c8cde4cb39b8e9074..0000000000000000000000000000000000000000 --- a/spaces/flax-community/SentenceSimplifier/pages/inference.py +++ /dev/null @@ -1,61 +0,0 @@ -import streamlit as st -from transformers import AutoTokenizer,AutoModelForSeq2SeqLM -import random - - -@st.cache(show_spinner=False) -def load_model(input_complex_sentence,model): - - base_path = "flax-community/" - model_path = base_path + model - tokenizer = 
AutoTokenizer.from_pretrained(model_path) - model = AutoModelForSeq2SeqLM.from_pretrained(model_path) - - tokenized_sentence = tokenizer(input_complex_sentence,return_tensors="pt") - result = model.generate(tokenized_sentence['input_ids'],attention_mask = tokenized_sentence['attention_mask'],max_length=256,num_beams=5) - generated_sentence = tokenizer.decode(result[0],skip_special_tokens=True) - - return generated_sentence - -def load_page(): - - st.sidebar.title("🧠Sentence Simplifier") - st.title("Sentence Split in English using T5 Variants") - st.write("Sentence Split is the task of **dividing a long Complex Sentence into Simple Sentences**") - - st.sidebar.write("## UI Options") - model = st.sidebar.selectbox( - "Please Choose the Model", - ("t5-base-wikisplit","t5-v1_1-base-wikisplit", "byt5-base-wikisplit","t5-large-wikisplit")) - - change_example = st.sidebar.checkbox("Try Random Examples") - - examples = [ - "Mary likes to play football in her freetime whenever she meets with her friends that are very nice people.", - "It broadcasts on AM frequency 1600 kHz and is under ownership of Multicultural Broadcasting with studios in Surrey , British Columbia .", - "On March 1 , the Blackhawks played in their 2nd outdoor game in franchise history at Soldier Field in part of the new NHL Stadium Series ", - "'' The Rain Song '' is a love ballad , over 7 minutes in length , and is considered by singer Robert Plant to be his best overall vocal performance .", - "The resulting knowledge about human kinesiology and sport nutrition combined with his distinctive posing styles makes Kamali a sought out bodybuilder for seminars and guest appearances and has been featured in many bodybuilding articles , as well as being on the cover of MUSCLEMAG magazine .", - "The East London Line closed on 22 December 2007 and reopened on 27 April 2010 , becoming part of the new London Overground system .", - "' Bandolier - Budgie ' , a free iTunes app for iPad , iPhone and iPod touch , released in December 2011 , tells the story of the making of Bandolier in the band 's own words - including an extensive audio interview with Burke Shelley .", - "' Eden Black ' was grown from seed in the late 1980s by Stephen Morley , under his conditions it produces pitchers that are almost completley black .", - "' Wilson should extend his stint on The Voice to renew public interest in the band ; given that they 're pulling out all the stops , they deserve all the acclaim that surrounded them for their first two albums .", - "'' '' New York Mining Disaster 1941 '' '' was the second EP released by the Bee Gees in 1967 on the Spin Records , like their first EP , it was released only in Australia .", - "'' ADAPTOGENS : Herbs for Strength , Stamina , and Stress Relief , '' Healing Arts Press , 2007 - contains a detailed monograph on Schisandra chinensis as well as highlights health benefits ." 
- ] - - if change_example: - example = examples[random.randint(0, len(examples)-1)] - input_complex_sentence = st.text_area("Please type a Complex Sentence to split",example) - split = st.button('Change and Split✂️') - else: - example=examples[0] - input_complex_sentence = st.text_area("Please type a Complex Sentence to split",example) - split = st.button('Split✂️') - - if split: - with st.spinner("Spliting Sentence...🧠"): - generated_sentence = load_model(input_complex_sentence, model) - sentence1, sentence2, _ = generated_sentence.split(".") - st.write("**Sentence1:** "+sentence1+".") - st.write("**Sentence2:** "+sentence2+".") diff --git a/spaces/flax-community/code-clippy-problem-solver/gradio_app.py b/spaces/flax-community/code-clippy-problem-solver/gradio_app.py deleted file mode 100644 index ac34f316abf2793c371b51316c511d70b923b720..0000000000000000000000000000000000000000 --- a/spaces/flax-community/code-clippy-problem-solver/gradio_app.py +++ /dev/null @@ -1,107 +0,0 @@ -import gradio as gr - -from rich.console import Console -from rich.syntax import Syntax -from transformers import AutoModelForCausalLM, AutoTokenizer - -# model_name = "flax-community/gpt-code-clippy-1.3B-apps-alldata" -model_name = "flax-community/gpt-code-clippy-125M-apps-alldata" -model = AutoModelForCausalLM.from_pretrained(model_name) -tokenizer = AutoTokenizer.from_pretrained(model_name) -tokenizer.pad_token = tokenizer.eos_token - -console = Console(record=True) - - -def format_input(question, starter_code=""): - answer_type = ( - "\nUse Call-Based format\n" if starter_code else "\nUse Standard Input format\n" - ) - return f"\nQUESTION:\n{question}\n{starter_code}\n{answer_type}\nANSWER:\n" - - -def format_outputs(text): - formatted_text = Syntax( - text, "python", line_numbers=True, indent_guides=True, word_wrap=True - ) - console.print(formatted_text) - - return console.export_html(inline_styles=True) - - -def generate_solution(question, starter_code="", temperature=1.0, num_beams=1): - prompt = format_input(question, starter_code) - input_ids = tokenizer(prompt, return_tensors="pt").input_ids - start = len(input_ids[0]) - output = model.generate( - input_ids, - max_length=start + 200, - do_sample=True, - top_p=0.95, - pad_token_id=tokenizer.pad_token_id, - early_stopping=True, - temperature=temperature, - num_beams=int(num_beams), - no_repeat_ngram_size=None, - repetition_penalty=None, - num_return_sequences=None, - ) - - return format_outputs( - tokenizer.decode(output[0][start:], skip_special_tokens=True).strip() - ) - - -_EXAMPLES = [ - [ - """ -Given a 2D list of size `m * n`. Your task is to find the sum of minimum value in each row. -For Example: -```python -[ - [1, 2, 3, 4, 5], # minimum value of row is 1 - [5, 6, 7, 8, 9], # minimum value of row is 5 - [20, 21, 34, 56, 100] # minimum value of row is 20 -] -``` -So, the function should return `26` because sum of minimums is as `1 + 5 + 20 = 26` - """, - "", - 0.8, - ], - [ - """ -# Personalized greeting - -Create a function that gives a personalized greeting. This function takes two parameters: `name` and `owner`. 
- """, - """ -Use conditionals to return the proper message: - -case| return ---- | --- -name equals owner | 'Hello boss' -otherwise | 'Hello guest' -def greet(name, owner): - """, - 0.8, - ], -] - - -inputs = [ - gr.inputs.Textbox(placeholder="Define a problem here...", lines=7), - gr.inputs.Textbox(placeholder="Provide optional starter code...", lines=3), - gr.inputs.Slider(0.5, 1.5, 0.1, default=0.8, label="Temperature"), - gr.inputs.Slider(1, 4, 1, default=1, label="Beam size"), -] - -outputs = [gr.outputs.HTML(label="Solution")] - -gr.Interface( - generate_solution, - inputs=inputs, - outputs=outputs, - title="Code Clippy: Problem Solver", - examples=_EXAMPLES, -).launch(share=False) diff --git a/spaces/flowers-team/Interactive_DeepRL_Demo/js/ui_state/store/store.js b/spaces/flowers-team/Interactive_DeepRL_Demo/js/ui_state/store/store.js deleted file mode 100644 index 5d32659c6241c7669bcae51b9c4a948fd9675f6c..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/Interactive_DeepRL_Demo/js/ui_state/store/store.js +++ /dev/null @@ -1,111 +0,0 @@ -import PubSub from '../lib/pubsub.js'; - -/** - * @classdesc Class that handles UI state modifications. - */ -export default class Store { - /** - * @constructor - * @param params {{actions: {}, mutations: {}, state: {}}} - */ - constructor(params) { - let self = this; - self.actions = {}; - self.mutations = {}; - self.state = {}; - self.status = 'default state'; - self.events = new PubSub(); // UI modifications manager - if (params.hasOwnProperty('actions')) { - self.actions = params.actions; - } - if (params.hasOwnProperty('mutations')) { - self.mutations = params.mutations; - } - - // Creates a Proxy to handles the state modifications - self.state = new Proxy((params.state || {}), { - - /** - * Function called when a key of the state is changed. Publish an event corresponding to the key that has been changed. - * @param state {Object} - * @param key {string} - * @param value {} - * @return {boolean} - */ - set: function (state, key, value) { - state[key] = value; - console.log(`stateChange: ${key}: ${value}`); - - if(key == 'envsSets'){ - self.events.publish('envsSetChange', self.state); - } - else if(key == 'morphologies'){ - self.events.publish('morphologiesChange', self.state); - } - else if(key == 'agents'){ - self.events.publish('agentsListChange', self.state); - } - else if(key == 'simulationState'){ - self.events.publish('agentsListChange', self.state); - self.events.publish('mainButtonsChange', self.state); - } - else if(key == 'parkourConfig'){ - self.events.publish('parkourConfigChange', self.state); - } - else if(key == 'drawingModeState'){ - self.events.publish('drawingModeChange', self.state); - } - else if(key == 'advancedOptionsState'){ - self.events.publish('advancedOptionsChange', self.state); - } - else if(key == 'language'){ - self.events.publish('globalElementsChange', self.state); - self.events.publish('aboutTabChange', self.state); - } - - if (self.status !== 'mutation') { - console.warn(`You should use a mutation to set ${key}`); - } - return true; - } - }); - } - - /** - * Triggers the action specified by actionKey with the given payload. 
- * @param actionKey {string} - * @param payload - * @return {boolean} - */ - dispatch(actionKey, payload) { - let self = this; - if (typeof self.actions[actionKey] !== 'function') { - console.error(`Action "${actionKey} doesn't exist.`); - return false; - } - console.groupCollapsed(`ACTION: ${actionKey}`); - self.status = 'action'; - self.actions[actionKey](self, payload); - console.groupEnd(); - return true; - } - - /** - * Triggers the mutation specified by mutationKey with the given payload. - * @param mutationKey - * @param payload - * @return {boolean} - */ - commit(mutationKey, payload) { - let self = this; - if (typeof self.mutations[mutationKey] !== 'function') { - console.log(`Mutation "${mutationKey}" doesn't exist`); - return false; - } - self.status = 'mutation'; - let newState = self.mutations[mutationKey](self.state, payload); - self.state = Object.assign(self.state, newState); - self.status = 'resting'; - return true; - } -} \ No newline at end of file diff --git a/spaces/gradio/HuBERT/examples/latent_depth/latent_depth_src/loss/latent_depth.py b/spaces/gradio/HuBERT/examples/latent_depth/latent_depth_src/loss/latent_depth.py deleted file mode 100644 index a3b9535ecac3ec403868681a8b50c1fbe1c90dfe..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/latent_depth/latent_depth_src/loss/latent_depth.py +++ /dev/null @@ -1,99 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -from torch.nn.modules.loss import _Loss - - -class LatentLayersKLLoss(_Loss): - def __init__(self, args): - super().__init__() - self.args = args - - def forward(self, layer_samples, lang_idx, update_num, sample_size): - prior = self.args.prior - samples = layer_samples[lang_idx] - eps = 1e-7 - if prior == "uniform": - # uniform prior - kl_loss = (samples * (torch.log(samples + eps) - math.log(0.5))).sum(-1) - elif prior == "agged_posterior": - # aggregated posterior - y_t = torch.stack([x.detach() for x in layer_samples], dim=0) - agged_q = torch.sum(y_t, dim=0) - row_norm = agged_q.sum(-1) - normed_agg_q = agged_q / row_norm - kl_loss = ( - samples * (torch.log(samples + eps) - torch.log(normed_agg_q + eps)) - ).sum(-1) - else: - raise NotImplementedError("The specified prior is not implemented.") - - # normalized by number of layers - kl_loss /= layer_samples[0].size()[0] - kl_weight = min( - self.args.sparsity_weight, - (update_num - self.args.soft_update) - * self.args.sparsity_weight - / self.args.anneal_updates, - ) - kl_loss *= kl_weight * sample_size - return kl_loss - - -class LatentLayersSparsityLoss(_Loss): - def __init__(self, args): - super().__init__() - self.args = args - - def is_valid(self, update_num): - if self.args.target_layers <= 0: - return False - return update_num > (self.args.soft_update + self.args.anneal_updates) - - def forward(self, layer_samples_list, update_num, sample_size): - batch_loss = 0 - share_loss = 0 - global_sparsity_loss = 0 - layer_samples = torch.stack(layer_samples_list, dim=0) - if ( - self.args.target_layers > 0 or self.args.share_weight > 0 - ) and update_num > (self.args.soft_update + self.args.anneal_updates): - # anneal sparsity weight - if update_num < (self.args.anneal_updates + self.args.soft_update): - weight_anneal = 0 - elif update_num < (2 * self.args.anneal_updates + self.args.soft_update): - weight_anneal = ( - (update_num - self.args.soft_update - 
self.args.anneal_updates) - * self.args.share_weight - / self.args.anneal_updates - ) - else: - weight_anneal = 1 - # compute ratio among languages - layer_utilization = torch.sum(layer_samples, dim=0) - layer_utilization /= layer_samples.size()[0] - if self.args.share_weight > 0: - # encouraging sharing across languages - share_loss = sum( - -1.0 * v * math.log(v) for v in layer_utilization if v > 0 - ) - batch_loss += ( - weight_anneal * self.args.share_weight * sample_size * share_loss - ) - if self.args.target_layers > 0: - # computed expected number of layers selected - expeted_layers = sum(layer_utilization) - # compute l2 loss wrt target number of layers - global_sparsity_loss = (expeted_layers - self.args.target_layers) ** 2 - batch_loss += ( - weight_anneal - * self.args.share_weight - * sample_size - * global_sparsity_loss - ) - return batch_loss diff --git a/spaces/gradio/default/app.py b/spaces/gradio/default/app.py deleted file mode 100644 index 3f140e2b4c079791b92c01d875e97e9fafa81eea..0000000000000000000000000000000000000000 --- a/spaces/gradio/default/app.py +++ /dev/null @@ -1,147 +0,0 @@ -import time - -from theme_dropdown import create_theme_dropdown # noqa: F401 - -import gradio as gr - -dropdown, js = create_theme_dropdown() - -with gr.Blocks(theme='gradio/default') as demo: - with gr.Row().style(equal_height=True): - with gr.Column(scale=10): - gr.Markdown( - """ - # Theme preview: `Default` - To use this theme, set `theme='gradio/default'` in `gr.Blocks()` or `gr.Interface()`. - You can append an `@` and a semantic version expression, e.g. @>=1.0.0,<2.0.0 to pin to a given version - of this theme. - """ - ) - with gr.Column(scale=3): - with gr.Box(): - dropdown.render() - toggle_dark = gr.Button(value="Toggle Dark").style(full_width=True) - - dropdown.change(None, dropdown, None, _js=js) - toggle_dark.click( - None, - _js=""" - () => { - document.body.classList.toggle('dark'); - document.querySelector('gradio-app').style.backgroundColor = 'var(--color-background-primary)' - } - """, - ) - - name = gr.Textbox( - label="Name", - info="Full name, including middle name. No special characters.", - placeholder="John Doe", - value="John Doe", - interactive=True, - ) - - with gr.Row(): - slider1 = gr.Slider(label="Slider 1") - slider2 = gr.Slider(label="Slider 2") - gr.CheckboxGroup(["A", "B", "C"], label="Checkbox Group") - - with gr.Row(): - with gr.Column(variant="panel", scale=1): - gr.Markdown("## Panel 1") - radio = gr.Radio( - ["A", "B", "C"], - label="Radio", - info="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. 
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.", - ) - drop = gr.Dropdown(["Option 1", "Option 2", "Option 3"], show_label=False) - drop_2 = gr.Dropdown( - ["Option A", "Option B", "Option C"], - multiselect=True, - value=["Option A"], - label="Dropdown", - interactive=True, - ) - check = gr.Checkbox(label="Go") - with gr.Column(variant="panel", scale=2): - img = gr.Image( - "https://gradio.app/assets/img/header-image.jpg", label="Image" - ).style(height=320) - with gr.Row(): - go_btn = gr.Button("Go", label="Primary Button", variant="primary") - clear_btn = gr.Button( - "Clear", label="Secondary Button", variant="secondary" - ) - - def go(*args): - time.sleep(3) - return "https://gradio.app/assets/img/header-image.jpg" - - go_btn.click(go, [radio, drop, drop_2, check, name], img, api_name="go") - - def clear(): - time.sleep(0.2) - return None - - clear_btn.click(clear, None, img) - - with gr.Row(): - btn1 = gr.Button("Button 1").style(size="sm") - btn2 = gr.UploadButton().style(size="sm") - stop_btn = gr.Button("Stop", label="Stop Button", variant="stop").style( - size="sm" - ) - - with gr.Row(): - gr.Dataframe(value=[[1, 2, 3], [4, 5, 6], [7, 8, 9]], label="Dataframe") - gr.JSON( - value={"a": 1, "b": 2, "c": {"test": "a", "test2": [1, 2, 3]}}, label="JSON" - ) - gr.Label(value={"cat": 0.7, "dog": 0.2, "fish": 0.1}) - gr.File() - with gr.Row(): - gr.ColorPicker() - gr.Video("https://gradio-static-files.s3.us-west-2.amazonaws.com/world.mp4") - gr.Gallery( - [ - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/lion.jpg", - "lion", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/logo.png", - "logo", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/tower.jpg", - "tower", - ), - ] - ).style(height="200px", grid=2) - - with gr.Row(): - with gr.Column(scale=2): - chatbot = gr.Chatbot([("Hello", "Hi")], label="Chatbot") - chat_btn = gr.Button("Add messages") - - def chat(history): - time.sleep(2) - yield [["How are you?", "I am good."]] - - chat_btn.click( - lambda history: history - + [["How are you?", "I am good."]] - + (time.sleep(2) or []), - chatbot, - chatbot, - ) - with gr.Column(scale=1): - with gr.Accordion("Advanced Settings"): - gr.Markdown("Hello") - gr.Number(label="Chatbot control 1") - gr.Number(label="Chatbot control 2") - gr.Number(label="Chatbot control 3") - - -if __name__ == "__main__": - demo.queue().launch() diff --git a/spaces/gsaivinay/open_llm_leaderboard/src/load_from_hub.py b/spaces/gsaivinay/open_llm_leaderboard/src/load_from_hub.py deleted file mode 100644 index 87b7577d8b95f31f05463b86f290c9dee53606ef..0000000000000000000000000000000000000000 --- a/spaces/gsaivinay/open_llm_leaderboard/src/load_from_hub.py +++ /dev/null @@ -1,152 +0,0 @@ -import json -import os -from collections import defaultdict - -import pandas as pd -from huggingface_hub import Repository -from transformers import AutoConfig - -from src.assets.hardcoded_evals import baseline, gpt4_values, gpt35_values -from src.display_models.get_model_metadata import apply_metadata -from src.display_models.read_results import get_eval_results_dicts, make_clickable_model -from src.display_models.utils import AutoEvalColumn, EvalQueueColumn, has_no_nan_values - -IS_PUBLIC = bool(os.environ.get("IS_PUBLIC", True)) - - -def get_all_requested_models(requested_models_dir: str) -> set[str]: - depth = 1 - file_names = [] - users_to_submission_dates = defaultdict(list) - - for root, _, files in 
os.walk(requested_models_dir): - current_depth = root.count(os.sep) - requested_models_dir.count(os.sep) - if current_depth == depth: - for file in files: - if not file.endswith(".json"): - continue - with open(os.path.join(root, file), "r") as f: - info = json.load(f) - file_names.append(f"{info['model']}_{info['revision']}_{info['precision']}") - - # Select organisation - if info["model"].count("/") == 0 or "submitted_time" not in info: - continue - organisation, _ = info["model"].split("/") - users_to_submission_dates[organisation].append(info["submitted_time"]) - - return set(file_names), users_to_submission_dates - - -def load_all_info_from_hub(QUEUE_REPO: str, RESULTS_REPO: str, QUEUE_PATH: str, RESULTS_PATH: str) -> list[Repository]: - eval_queue_repo = None - eval_results_repo = None - requested_models = None - - print("Pulling evaluation requests and results.") - - eval_queue_repo = Repository( - local_dir=QUEUE_PATH, - clone_from=QUEUE_REPO, - repo_type="dataset", - ) - eval_queue_repo.git_pull() - - eval_results_repo = Repository( - local_dir=RESULTS_PATH, - clone_from=RESULTS_REPO, - repo_type="dataset", - ) - eval_results_repo.git_pull() - - requested_models, users_to_submission_dates = get_all_requested_models("eval-queue") - - return eval_queue_repo, requested_models, eval_results_repo, users_to_submission_dates - - -def get_leaderboard_df( - eval_results: Repository, eval_results_private: Repository, cols: list, benchmark_cols: list -) -> pd.DataFrame: - if eval_results: - print("Pulling evaluation results for the leaderboard.") - eval_results.git_pull() - if eval_results_private: - print("Pulling evaluation results for the leaderboard.") - eval_results_private.git_pull() - - all_data = get_eval_results_dicts() - - # if not IS_PUBLIC: - all_data.append(gpt4_values) - all_data.append(gpt35_values) - - all_data.append(baseline) - apply_metadata(all_data) # Populate model type based on known hardcoded values in `metadata.py` - - df = pd.DataFrame.from_records(all_data) - df = df.sort_values(by=[AutoEvalColumn.average.name], ascending=False) - df = df[cols].round(decimals=2) - - # filter out if any of the benchmarks have not been produced - df = df[has_no_nan_values(df, benchmark_cols)] - return df - - -def get_evaluation_queue_df( - eval_queue: Repository, eval_queue_private: Repository, save_path: str, cols: list -) -> list[pd.DataFrame]: - if eval_queue: - print("Pulling changes for the evaluation queue.") - eval_queue.git_pull() - if eval_queue_private: - print("Pulling changes for the evaluation queue.") - eval_queue_private.git_pull() - - entries = [entry for entry in os.listdir(save_path) if not entry.startswith(".")] - all_evals = [] - - for entry in entries: - if ".json" in entry: - file_path = os.path.join(save_path, entry) - with open(file_path) as fp: - data = json.load(fp) - - data[EvalQueueColumn.model.name] = make_clickable_model(data["model"]) - data[EvalQueueColumn.revision.name] = data.get("revision", "main") - - all_evals.append(data) - elif ".md" not in entry: - # this is a folder - sub_entries = [e for e in os.listdir(f"{save_path}/{entry}") if not e.startswith(".")] - for sub_entry in sub_entries: - file_path = os.path.join(save_path, entry, sub_entry) - with open(file_path) as fp: - data = json.load(fp) - - data[EvalQueueColumn.model.name] = make_clickable_model(data["model"]) - data[EvalQueueColumn.revision.name] = data.get("revision", "main") - all_evals.append(data) - - pending_list = [e for e in all_evals if e["status"] in ["PENDING", "RERUN"]] - 
running_list = [e for e in all_evals if e["status"] == "RUNNING"] - finished_list = [e for e in all_evals if e["status"].startswith("FINISHED") or e["status"] == "PENDING_NEW_EVAL"] - df_pending = pd.DataFrame.from_records(pending_list, columns=cols) - df_running = pd.DataFrame.from_records(running_list, columns=cols) - df_finished = pd.DataFrame.from_records(finished_list, columns=cols) - return df_finished[cols], df_running[cols], df_pending[cols] - - -def is_model_on_hub(model_name: str, revision: str) -> bool: - try: - AutoConfig.from_pretrained(model_name, revision=revision, trust_remote_code=False) - return True, None - - except ValueError: - return ( - False, - "needs to be launched with `trust_remote_code=True`. For safety reason, we do not allow these models to be automatically submitted to the leaderboard.", - ) - - except Exception as e: - print(f"Could not get the model config from the hub.: {e}") - return False, "was not found on hub!" diff --git a/spaces/gwang-kim/DATID-3D/eg3d/training/dual_discriminator.py b/spaces/gwang-kim/DATID-3D/eg3d/training/dual_discriminator.py deleted file mode 100644 index 99bfb5a2a5b3b14c6824813b6977be86b43f7ccc..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/eg3d/training/dual_discriminator.py +++ /dev/null @@ -1,249 +0,0 @@ -# SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# SPDX-License-Identifier: LicenseRef-NvidiaProprietary -# -# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual -# property and proprietary rights in and to this material, related -# documentation and any modifications thereto. Any use, reproduction, -# disclosure or distribution of this material and related documentation -# without an express license agreement from NVIDIA CORPORATION or -# its affiliates is strictly prohibited. - -"""Discriminator architectures from the paper -"Efficient Geometry-aware 3D Generative Adversarial Networks".""" - -import numpy as np -import torch -from torch_utils import persistence -from torch_utils.ops import upfirdn2d -from training.networks_stylegan2 import DiscriminatorBlock, MappingNetwork, DiscriminatorEpilogue - -@persistence.persistent_class -class SingleDiscriminator(torch.nn.Module): - def __init__(self, - c_dim, # Conditioning label (C) dimensionality. - img_resolution, # Input resolution. - img_channels, # Number of input color channels. - architecture = 'resnet', # Architecture: 'orig', 'skip', 'resnet'. - channel_base = 32768, # Overall multiplier for the number of channels. - channel_max = 512, # Maximum number of channels in any layer. - num_fp16_res = 4, # Use FP16 for the N highest resolutions. - conv_clamp = 256, # Clamp the output of convolution layers to +-X, None = disable clamping. - cmap_dim = None, # Dimensionality of mapped conditioning label, None = default. - sr_upsample_factor = 1, # Ignored for SingleDiscriminator - block_kwargs = {}, # Arguments for DiscriminatorBlock. - mapping_kwargs = {}, # Arguments for MappingNetwork. - epilogue_kwargs = {}, # Arguments for DiscriminatorEpilogue. 
- ): - super().__init__() - self.c_dim = c_dim - self.img_resolution = img_resolution - self.img_resolution_log2 = int(np.log2(img_resolution)) - self.img_channels = img_channels - self.block_resolutions = [2 ** i for i in range(self.img_resolution_log2, 2, -1)] - channels_dict = {res: min(channel_base // res, channel_max) for res in self.block_resolutions + [4]} - fp16_resolution = max(2 ** (self.img_resolution_log2 + 1 - num_fp16_res), 8) - - if cmap_dim is None: - cmap_dim = channels_dict[4] - if c_dim == 0: - cmap_dim = 0 - - common_kwargs = dict(img_channels=img_channels, architecture=architecture, conv_clamp=conv_clamp) - cur_layer_idx = 0 - for res in self.block_resolutions: - in_channels = channels_dict[res] if res < img_resolution else 0 - tmp_channels = channels_dict[res] - out_channels = channels_dict[res // 2] - use_fp16 = (res >= fp16_resolution) - block = DiscriminatorBlock(in_channels, tmp_channels, out_channels, resolution=res, - first_layer_idx=cur_layer_idx, use_fp16=use_fp16, **block_kwargs, **common_kwargs) - setattr(self, f'b{res}', block) - cur_layer_idx += block.num_layers - if c_dim > 0: - self.mapping = MappingNetwork(z_dim=0, c_dim=c_dim, w_dim=cmap_dim, num_ws=None, w_avg_beta=None, **mapping_kwargs) - self.b4 = DiscriminatorEpilogue(channels_dict[4], cmap_dim=cmap_dim, resolution=4, **epilogue_kwargs, **common_kwargs) - - def forward(self, img, c, update_emas=False, **block_kwargs): - img = img['image'] - - _ = update_emas # unused - x = None - for res in self.block_resolutions: - block = getattr(self, f'b{res}') - x, img = block(x, img, **block_kwargs) - - cmap = None - if self.c_dim > 0: - cmap = self.mapping(None, c) - x = self.b4(x, img, cmap) - return x - - def extra_repr(self): - return f'c_dim={self.c_dim:d}, img_resolution={self.img_resolution:d}, img_channels={self.img_channels:d}' - -#---------------------------------------------------------------------------- - -def filtered_resizing(image_orig_tensor, size, f, filter_mode='antialiased'): - if filter_mode == 'antialiased': - ada_filtered_64 = torch.nn.functional.interpolate(image_orig_tensor, size=(size, size), mode='bilinear', align_corners=False, antialias=True) - elif filter_mode == 'classic': - ada_filtered_64 = upfirdn2d.upsample2d(image_orig_tensor, f, up=2) - ada_filtered_64 = torch.nn.functional.interpolate(ada_filtered_64, size=(size * 2 + 2, size * 2 + 2), mode='bilinear', align_corners=False) - ada_filtered_64 = upfirdn2d.downsample2d(ada_filtered_64, f, down=2, flip_filter=True, padding=-1) - elif filter_mode == 'none': - ada_filtered_64 = torch.nn.functional.interpolate(image_orig_tensor, size=(size, size), mode='bilinear', align_corners=False) - elif type(filter_mode) == float: - assert 0 < filter_mode < 1 - - filtered = torch.nn.functional.interpolate(image_orig_tensor, size=(size, size), mode='bilinear', align_corners=False, antialias=True) - aliased = torch.nn.functional.interpolate(image_orig_tensor, size=(size, size), mode='bilinear', align_corners=False, antialias=False) - ada_filtered_64 = (1 - filter_mode) * aliased + (filter_mode) * filtered - - return ada_filtered_64 - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class DualDiscriminator(torch.nn.Module): - def __init__(self, - c_dim, # Conditioning label (C) dimensionality. - img_resolution, # Input resolution. - img_channels, # Number of input color channels. - architecture = 'resnet', # Architecture: 'orig', 'skip', 'resnet'. 
- channel_base = 32768, # Overall multiplier for the number of channels. - channel_max = 512, # Maximum number of channels in any layer. - num_fp16_res = 4, # Use FP16 for the N highest resolutions. - conv_clamp = 256, # Clamp the output of convolution layers to +-X, None = disable clamping. - cmap_dim = None, # Dimensionality of mapped conditioning label, None = default. - disc_c_noise = 0, # Corrupt camera parameters with X std dev of noise before disc. pose conditioning. - block_kwargs = {}, # Arguments for DiscriminatorBlock. - mapping_kwargs = {}, # Arguments for MappingNetwork. - epilogue_kwargs = {}, # Arguments for DiscriminatorEpilogue. - ): - super().__init__() - img_channels *= 2 - - self.c_dim = c_dim - self.img_resolution = img_resolution - self.img_resolution_log2 = int(np.log2(img_resolution)) - self.img_channels = img_channels - self.block_resolutions = [2 ** i for i in range(self.img_resolution_log2, 2, -1)] - channels_dict = {res: min(channel_base // res, channel_max) for res in self.block_resolutions + [4]} - fp16_resolution = max(2 ** (self.img_resolution_log2 + 1 - num_fp16_res), 8) - - if cmap_dim is None: - cmap_dim = channels_dict[4] - if c_dim == 0: - cmap_dim = 0 - - common_kwargs = dict(img_channels=img_channels, architecture=architecture, conv_clamp=conv_clamp) - cur_layer_idx = 0 - for res in self.block_resolutions: - in_channels = channels_dict[res] if res < img_resolution else 0 - tmp_channels = channels_dict[res] - out_channels = channels_dict[res // 2] - use_fp16 = (res >= fp16_resolution) - block = DiscriminatorBlock(in_channels, tmp_channels, out_channels, resolution=res, - first_layer_idx=cur_layer_idx, use_fp16=use_fp16, **block_kwargs, **common_kwargs) - setattr(self, f'b{res}', block) - cur_layer_idx += block.num_layers - if c_dim > 0: - self.mapping = MappingNetwork(z_dim=0, c_dim=c_dim, w_dim=cmap_dim, num_ws=None, w_avg_beta=None, **mapping_kwargs) - self.b4 = DiscriminatorEpilogue(channels_dict[4], cmap_dim=cmap_dim, resolution=4, **epilogue_kwargs, **common_kwargs) - self.register_buffer('resample_filter', upfirdn2d.setup_filter([1,3,3,1])) - self.disc_c_noise = disc_c_noise - - def forward(self, img, c, update_emas=False, **block_kwargs): - image_raw = filtered_resizing(img['image_raw'], size=img['image'].shape[-1], f=self.resample_filter) - img = torch.cat([img['image'], image_raw], 1) - - _ = update_emas # unused - x = None - for res in self.block_resolutions: - block = getattr(self, f'b{res}') - x, img = block(x, img, **block_kwargs) - - cmap = None - if self.c_dim > 0: - if self.disc_c_noise > 0: c += torch.randn_like(c) * c.std(0) * self.disc_c_noise - cmap = self.mapping(None, c) - x = self.b4(x, img, cmap) - return x - - def extra_repr(self): - return f'c_dim={self.c_dim:d}, img_resolution={self.img_resolution:d}, img_channels={self.img_channels:d}' - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class DummyDualDiscriminator(torch.nn.Module): - def __init__(self, - c_dim, # Conditioning label (C) dimensionality. - img_resolution, # Input resolution. - img_channels, # Number of input color channels. - architecture = 'resnet', # Architecture: 'orig', 'skip', 'resnet'. - channel_base = 32768, # Overall multiplier for the number of channels. - channel_max = 512, # Maximum number of channels in any layer. - num_fp16_res = 4, # Use FP16 for the N highest resolutions. - conv_clamp = 256, # Clamp the output of convolution layers to +-X, None = disable clamping. 
- cmap_dim = None, # Dimensionality of mapped conditioning label, None = default. - block_kwargs = {}, # Arguments for DiscriminatorBlock. - mapping_kwargs = {}, # Arguments for MappingNetwork. - epilogue_kwargs = {}, # Arguments for DiscriminatorEpilogue. - ): - super().__init__() - img_channels *= 2 - - self.c_dim = c_dim - self.img_resolution = img_resolution - self.img_resolution_log2 = int(np.log2(img_resolution)) - self.img_channels = img_channels - self.block_resolutions = [2 ** i for i in range(self.img_resolution_log2, 2, -1)] - channels_dict = {res: min(channel_base // res, channel_max) for res in self.block_resolutions + [4]} - fp16_resolution = max(2 ** (self.img_resolution_log2 + 1 - num_fp16_res), 8) - - if cmap_dim is None: - cmap_dim = channels_dict[4] - if c_dim == 0: - cmap_dim = 0 - - common_kwargs = dict(img_channels=img_channels, architecture=architecture, conv_clamp=conv_clamp) - cur_layer_idx = 0 - for res in self.block_resolutions: - in_channels = channels_dict[res] if res < img_resolution else 0 - tmp_channels = channels_dict[res] - out_channels = channels_dict[res // 2] - use_fp16 = (res >= fp16_resolution) - block = DiscriminatorBlock(in_channels, tmp_channels, out_channels, resolution=res, - first_layer_idx=cur_layer_idx, use_fp16=use_fp16, **block_kwargs, **common_kwargs) - setattr(self, f'b{res}', block) - cur_layer_idx += block.num_layers - if c_dim > 0: - self.mapping = MappingNetwork(z_dim=0, c_dim=c_dim, w_dim=cmap_dim, num_ws=None, w_avg_beta=None, **mapping_kwargs) - self.b4 = DiscriminatorEpilogue(channels_dict[4], cmap_dim=cmap_dim, resolution=4, **epilogue_kwargs, **common_kwargs) - self.register_buffer('resample_filter', upfirdn2d.setup_filter([1,3,3,1])) - - self.raw_fade = 1 - - def forward(self, img, c, update_emas=False, **block_kwargs): - self.raw_fade = max(0, self.raw_fade - 1/(500000/32)) - - image_raw = filtered_resizing(img['image_raw'], size=img['image'].shape[-1], f=self.resample_filter) * self.raw_fade - img = torch.cat([img['image'], image_raw], 1) - - _ = update_emas # unused - x = None - for res in self.block_resolutions: - block = getattr(self, f'b{res}') - x, img = block(x, img, **block_kwargs) - - cmap = None - if self.c_dim > 0: - cmap = self.mapping(None, c) - x = self.b4(x, img, cmap) - return x - - def extra_repr(self): - return f'c_dim={self.c_dim:d}, img_resolution={self.img_resolution:d}, img_channels={self.img_channels:d}' - -#---------------------------------------------------------------------------- diff --git a/spaces/h2oai/wave-tour/examples/ml_h2o_parameters.py b/spaces/h2oai/wave-tour/examples/ml_h2o_parameters.py deleted file mode 100644 index 03ff21eea28efbb6ea116dbf70a62356bd1b9f59..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/ml_h2o_parameters.py +++ /dev/null @@ -1,68 +0,0 @@ -# WaveML / H2O-3 / Parameters -# Configure hyperparameters for Wave Models built using H2O-3 AutoML. 
-# --- -from h2o_wave import main, app, Q, ui, copy_expando -from h2o_wave_ml import build_model, ModelType - -from sklearn.datasets import load_wine -from sklearn.model_selection import train_test_split - - -@app('/demo') -async def serve(q: Q): - if q.args.train: - # train WaveML Model using H2O-3 AutoML - copy_expando(q.args, q.client) - exclude_algos = [] if q.client.include_dl else ['DeepLearning'] - q.client.wave_model = build_model( - train_df=q.client.train_df, - target_column='target', - model_type=ModelType.H2O3, - _h2o3_max_runtime_secs=q.client.max_runtime_secs, - _h2o3_nfolds=2, - _h2o3_exclude_algos=exclude_algos - ) - model_id = q.client.wave_model.model.model_id - accuracy = round(100 - q.client.wave_model.model.mean_per_class_error() * 100, 2) - - # show training details and prediction option - q.page['example'].max_runtime_secs.value = q.client.max_runtime_secs - q.page['example'].include_dl.value = q.client.include_dl - q.page['example'].predict.disabled = False - q.page['example'].message.type = 'success' - q.page['example'].message.text = 'Training successfully completed!' - q.page['example'].model_id.content = f'''**H2O AutoML model id:** {model_id}
    - **Accuracy:** {accuracy}%''' - q.page['example'].example_predictions.content = '' - elif q.args.predict: - # predict on test data - preds = q.client.wave_model.predict(test_df=q.client.test_df) - - # show predictions - q.page['example'].message.text = 'Prediction successfully completed!' - q.page['example'].example_predictions.content = f'''**Example predictions:**
    - {preds[0]}
    {preds[1]}
    {preds[2]}''' - else: - # prepare sample train and test dataframes - data = load_wine(as_frame=True)['frame'] - q.client.train_df, q.client.test_df = train_test_split(data, train_size=0.8) - - # display ui - q.page['example'] = ui.form_card( - box='1 1 -1 -1', - items=[ - ui.text(content='''The sample dataset used is the - wine dataset.'''), - ui.spinbox(name='max_runtime_secs', label='Max Runtime (Secs)', min=5, max=30, step=1, value=10), - ui.toggle(name='include_dl', label='Include Deep Learning', value=False), - ui.buttons(items=[ - ui.button(name='train', label='Train', primary=True), - ui.button(name='predict', label='Predict', primary=True, disabled=True), - ]), - ui.message_bar(name='message', type='warning', text='Training will take a few seconds'), - ui.text(name='model_id', content=''), - ui.text(name='example_predictions', content='') - ] - ) - - await q.page.save() diff --git a/spaces/h2oai/wave-tour/examples/plot_matplotlib.py b/spaces/h2oai/wave-tour/examples/plot_matplotlib.py deleted file mode 100644 index 0e33113eaa482b47b84a84219e722ecc79bfffd9..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/plot_matplotlib.py +++ /dev/null @@ -1,61 +0,0 @@ -# Plot / Matplotlib -# Use #matplotlib to create plots. Also demonstrates how to provide live control over plots. -# #plot -# --- -import uuid -import os -import numpy as np -import matplotlib.pyplot as plt - -from h2o_wave import ui, main, app, Q - -np.random.seed(19680801) - - -@app('/demo') -async def serve(q: Q): - if not q.client.initialized: # First visit - q.client.initialized = True - q.client.points = 25 - q.client.alpha = 50 - - q.page['controls'] = ui.form_card( - box='1 1 2 3', - items=[ - ui.text_xl("Lets make some plots"), - ui.slider(name='points', label='Points', min=5, max=50, step=1, value=q.client.points, trigger=True), - ui.slider(name='alpha', label='Alpha', min=5, max=100, step=1, value=q.client.alpha, trigger=True), - ] - ) - q.page['plot'] = ui.markdown_card(box='3 1 2 3', title='Your plot!', content='') - - if q.args.points is not None: - q.client.points = q.args.points - - if q.args.alpha is not None: - q.client.alpha = q.args.alpha - - n = q.client.points - - # Render plot - plt.figure(figsize=(2, 2)) - plt.scatter( - np.random.rand(n), np.random.rand(n), - s=(30 * np.random.rand(n)) ** 2, - c=np.random.rand(n), - alpha=q.client.alpha / 100.0 - ) - image_filename = f'{str(uuid.uuid4())}.png' - plt.savefig(image_filename) - - # Upload - image_path, = await q.site.upload([image_filename]) - - # Clean up - os.remove(image_filename) - - # Display our plot in our markdown card - q.page['plot'].content = f'![plot]({image_path})' - - # Save page - await q.page.save() diff --git a/spaces/h2oai/wave-tour/examples/tour.js b/spaces/h2oai/wave-tour/examples/tour.js deleted file mode 100644 index dc3fa92b4bdc2f310702fba43d5727ccabac4b66..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/tour.js +++ /dev/null @@ -1,64 +0,0 @@ -require.config({ - paths: { - 'vs': '$tour_assets' + '/monaco', - 'pyodide': '$tour_assets' + '/pyodide/pyodide.js' - } -}) -window.MonacoEnvironment = { - getWorkerUrl: function (workerId, label) { - const { origin } = window.location - return `$${origin}$${'$base_url'}assets/monaco/base/worker/workerMain.js` - } -} -const completionToCompletionItem = item => ({ - label: item.get('label'), - kind: item.get('kind'), - insertText: item.get('label'), - sortText: item.get('sort_text'), -}) -const snippetToCompletionItem = item => ({ - 
label: item.prefix, - kind: monaco.languages.CompletionItemKind.Snippet, - documentation: item.description, - insertText: item.body.join('\n'), - insertTextRules: monaco.languages.CompletionItemInsertTextRule.InsertAsSnippet, -}) -require(['vs/editor/editor.main', 'pyodide'], async () => { - monaco.languages.registerCompletionItemProvider('python', { - triggerCharacters: ['.', "'", '"'], - provideCompletionItems: async (model, position) => { - const pyRes = await window.pyodide.runPython(`get_wave_completions($${position.lineNumber - 1}, $${position.column - 1}, \'\'\'$${model.getValue()}\'\'\')`) - const completions = pyRes ? pyRes.toJs().map(completionToCompletionItem) : [] - // HACK: Fetch on every keystroke due to weird bug in monaco - showing snippets only for first example. - let [snippets1, snippets2] = await Promise.all([ - fetch('$snippets1').then(r => r.json()), - fetch('$snippets2').then(r => r.json()), - ]) - snippets1 = Object.values(snippets1).map(snippetToCompletionItem) - snippets2 = Object.values(snippets2).map(snippetToCompletionItem) - return { suggestions: [...completions, ...snippets1, ...snippets2] } - } - }) - const editor = monaco.editor.create(document.getElementById('monaco-editor'), { - value: '', - language: 'python', - minimap: { enabled: false }, - overviewRulerLanes: 0, - hideCursorInOverviewRuler: true, - scrollbar: { vertical: 'hidden' }, - overviewRulerBorder: false, - lineDecorationsWidth: 0, - lineNumbersMinChars: 3, - automaticLayout: true, - }) - editor.onDidChangeModelContent(e => { - if (e.isFlush) return - emit_debounced('editor', 'change', editor.getValue()) - }) - window.editor = editor - window.emit_debounced = window.wave.debounce(2000, window.wave.emit) - window.pyodide = await window.loadPyodide() - await window.pyodide.loadPackage('parso') - await window.pyodide.loadPackage('jedi') - await window.pyodide.runPythonAsync(`$py_content`) -}) diff --git a/spaces/haakohu/deep_privacy2_face/configs/anonymizers/market1501/pixelation8.py b/spaces/haakohu/deep_privacy2_face/configs/anonymizers/market1501/pixelation8.py deleted file mode 100644 index ef49cb613d09e972adf7b8136b632eb210420686..0000000000000000000000000000000000000000 --- a/spaces/haakohu/deep_privacy2_face/configs/anonymizers/market1501/pixelation8.py +++ /dev/null @@ -1,8 +0,0 @@ -from ..FB_cse_mask_face import anonymizer, detector, common - -detector.score_threshold = .1 -detector.face_detector_cfg.confidence_threshold = .5 -detector.cse_cfg.score_thres = 0.3 -anonymizer.generators.face_G_cfg = None -anonymizer.generators.person_G_cfg = "configs/generators/dummy/pixelation8.py" -anonymizer.generators.cse_person_G_cfg = "configs/generators/dummy/pixelation8.py" \ No newline at end of file diff --git a/spaces/haakohu/deep_privacy2_face/dp2/metrics/torch_metrics.py b/spaces/haakohu/deep_privacy2_face/dp2/metrics/torch_metrics.py deleted file mode 100644 index 0c8747b01a98f3aa4df9161e09c309c101b396f2..0000000000000000000000000000000000000000 --- a/spaces/haakohu/deep_privacy2_face/dp2/metrics/torch_metrics.py +++ /dev/null @@ -1,177 +0,0 @@ -import pickle -import numpy as np -import torch -import time -from pathlib import Path -from dp2 import utils -import tops -from .lpips import SampleSimilarityLPIPS -from torch_fidelity.defaults import DEFAULTS as trf_defaults -from torch_fidelity.metric_fid import fid_features_to_statistics, fid_statistics_to_metric -from torch_fidelity.utils import create_feature_extractor -lpips_model = None -fid_model = None - - -@torch.no_grad() -def mse(images1: 
torch.Tensor, images2: torch.Tensor) -> torch.Tensor: - se = (images1 - images2) ** 2 - se = se.view(images1.shape[0], -1).mean(dim=1) - return se - - -@torch.no_grad() -def psnr(images1: torch.Tensor, images2: torch.Tensor) -> torch.Tensor: - mse_ = mse(images1, images2) - psnr = 10 * torch.log10(1 / mse_) - return psnr - - -@torch.no_grad() -def lpips(images1: torch.Tensor, images2: torch.Tensor) -> torch.Tensor: - return _lpips_w_grad(images1, images2) - - -def _lpips_w_grad(images1: torch.Tensor, images2: torch.Tensor) -> torch.Tensor: - global lpips_model - if lpips_model is None: - lpips_model = tops.to_cuda(SampleSimilarityLPIPS()) - - images1 = images1.mul(255) - images2 = images2.mul(255) - with torch.cuda.amp.autocast(tops.AMP()): - dists = lpips_model(images1, images2)[0].view(-1) - return dists - - -@torch.no_grad() -def compute_metrics_iteratively( - dataloader, generator, - cache_directory, - data_len=None, - truncation_value: float = None, -) -> dict: - """ - Args: - n_samples (int): Creates N samples from same image to calculate stats - dataset_percentage (float): The percentage of the dataset to compute metrics on. - """ - - global lpips_model, fid_model - if lpips_model is None: - lpips_model = tops.to_cuda(SampleSimilarityLPIPS()) - if fid_model is None: - fid_model = create_feature_extractor( - trf_defaults["feature_extractor"], [trf_defaults["feature_layer_fid"]], cuda=False) - fid_model = tops.to_cuda(fid_model) - cache_directory = Path(cache_directory) - start_time = time.time() - lpips_total = torch.tensor(0, dtype=torch.float32, device=tops.get_device()) - diversity_total = torch.zeros_like(lpips_total) - fid_cache_path = cache_directory.joinpath("fid_stats.pkl") - has_fid_cache = fid_cache_path.is_file() - if data_len is None: - data_len = len(dataloader)*dataloader.batch_size - if not has_fid_cache: - fid_features_real = torch.zeros(data_len, 2048, dtype=torch.float32, device=tops.get_device()) - fid_features_fake = torch.zeros(data_len, 2048, dtype=torch.float32, device=tops.get_device()) - n_samples_seen = torch.tensor([0], dtype=torch.int32, device=tops.get_device()) - eidx = 0 - for batch in utils.tqdm_(iter(dataloader), desc="Computing FID, LPIPS and LPIPS Diversity"): - sidx = eidx - eidx = sidx + batch["img"].shape[0] - n_samples_seen += batch["img"].shape[0] - with torch.cuda.amp.autocast(tops.AMP()): - fakes1 = generator.sample(**batch, truncation_value=truncation_value)["img"] - fakes2 = generator.sample(**batch, truncation_value=truncation_value)["img"] - fakes1 = utils.denormalize_img(fakes1).mul(255) - fakes2 = utils.denormalize_img(fakes2).mul(255) - real_data = utils.denormalize_img(batch["img"]).mul(255) - lpips_1, real_lpips_feats, fake1_lpips_feats = lpips_model(real_data, fakes1) - fake2_lpips_feats = lpips_model.get_feats(fakes2) - lpips_2 = lpips_model.lpips_from_feats(real_lpips_feats, fake2_lpips_feats) - - lpips_total += lpips_1.sum().add(lpips_2.sum()).div(2) - diversity_total += lpips_model.lpips_from_feats(fake1_lpips_feats, fake2_lpips_feats).sum() - if not has_fid_cache: - fid_features_real[sidx:eidx] = fid_model(real_data.byte())[0] - fid_features_fake[sidx:eidx] = fid_model(fakes1.byte())[0] - fid_features_fake = fid_features_fake[:n_samples_seen] - if has_fid_cache: - if tops.rank() == 0: - with open(fid_cache_path, "rb") as fp: - fid_stat_real = pickle.load(fp) - else: - fid_features_real = fid_features_real[:n_samples_seen] - fid_features_real = tops.all_gather_uneven(fid_features_real).cpu() - if tops.rank() == 0: - 
fid_stat_real = fid_features_to_statistics(fid_features_real) - cache_directory.mkdir(exist_ok=True, parents=True) - with open(fid_cache_path, "wb") as fp: - pickle.dump(fid_stat_real, fp) - fid_features_fake = tops.all_gather_uneven(fid_features_fake).cpu() - if tops.rank() == 0: - print("Starting calculation of fid from features of shape:", fid_features_fake.shape) - fid_stat_fake = fid_features_to_statistics(fid_features_fake) - fid_ = fid_statistics_to_metric(fid_stat_real, fid_stat_fake, verbose=False)["frechet_inception_distance"] - tops.all_reduce(n_samples_seen, torch.distributed.ReduceOp.SUM) - tops.all_reduce(lpips_total, torch.distributed.ReduceOp.SUM) - tops.all_reduce(diversity_total, torch.distributed.ReduceOp.SUM) - lpips_total = lpips_total / n_samples_seen - diversity_total = diversity_total / n_samples_seen - to_return = dict(lpips=lpips_total, lpips_diversity=diversity_total) - if tops.rank() == 0: - to_return["fid"] = fid_ - else: - to_return["fid"] = -1 - to_return["validation_time_s"] = time.time() - start_time - return to_return - - -@torch.no_grad() -def compute_lpips( - dataloader, generator, - truncation_value: float = None, - data_len=None, - ) -> dict: - """ - Args: - n_samples (int): Creates N samples from same image to calculate stats - dataset_percentage (float): The percentage of the dataset to compute metrics on. - """ - global lpips_model, fid_model - if lpips_model is None: - lpips_model = tops.to_cuda(SampleSimilarityLPIPS()) - start_time = time.time() - lpips_total = torch.tensor(0, dtype=torch.float32, device=tops.get_device()) - diversity_total = torch.zeros_like(lpips_total) - if data_len is None: - data_len = len(dataloader) * dataloader.batch_size - eidx = 0 - n_samples_seen = torch.tensor([0], dtype=torch.int32, device=tops.get_device()) - for batch in utils.tqdm_(dataloader, desc="Validating on dataset."): - sidx = eidx - eidx = sidx + batch["img"].shape[0] - n_samples_seen += batch["img"].shape[0] - with torch.cuda.amp.autocast(tops.AMP()): - fakes1 = generator.sample(**batch, truncation_value=truncation_value)["img"] - fakes2 = generator.sample(**batch, truncation_value=truncation_value)["img"] - real_data = batch["img"] - fakes1 = utils.denormalize_img(fakes1).mul(255) - fakes2 = utils.denormalize_img(fakes2).mul(255) - real_data = utils.denormalize_img(real_data).mul(255) - lpips_1, real_lpips_feats, fake1_lpips_feats = lpips_model(real_data, fakes1) - fake2_lpips_feats = lpips_model.get_feats(fakes2) - lpips_2 = lpips_model.lpips_from_feats(real_lpips_feats, fake2_lpips_feats) - - lpips_total += lpips_1.sum().add(lpips_2.sum()).div(2) - diversity_total += lpips_model.lpips_from_feats(fake1_lpips_feats, fake2_lpips_feats).sum() - tops.all_reduce(n_samples_seen, torch.distributed.ReduceOp.SUM) - tops.all_reduce(lpips_total, torch.distributed.ReduceOp.SUM) - tops.all_reduce(diversity_total, torch.distributed.ReduceOp.SUM) - lpips_total = lpips_total / n_samples_seen - diversity_total = diversity_total / n_samples_seen - to_return = dict(lpips=lpips_total, lpips_diversity=diversity_total) - to_return = {k: v.cpu().item() for k, v in to_return.items()} - to_return["validation_time_s"] = time.time() - start_time - return to_return diff --git a/spaces/hamacojr/CAT-Seg/cat_seg/third_party/model_vpt.py b/spaces/hamacojr/CAT-Seg/cat_seg/third_party/model_vpt.py deleted file mode 100644 index dfab9ba8a11d93ea595b5abce0ba5b2d67394e45..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/CAT-Seg/cat_seg/third_party/model_vpt.py +++ /dev/null @@ 
-1,476 +0,0 @@ -from collections import OrderedDict -from typing import Tuple, Union - -import torch -import torch.nn.functional as F -from torch import nn - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1): - super().__init__() - - # all conv layers have stride 1. an avgpool is performed after the second convolution when stride > 1 - self.conv1 = nn.Conv2d(inplanes, planes, 1, bias=False) - self.bn1 = nn.BatchNorm2d(planes) - - self.conv2 = nn.Conv2d(planes, planes, 3, padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(planes) - - self.avgpool = nn.AvgPool2d(stride) if stride > 1 else nn.Identity() - - self.conv3 = nn.Conv2d(planes, planes * self.expansion, 1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * self.expansion) - - self.relu = nn.ReLU(inplace=True) - self.downsample = None - self.stride = stride - - if stride > 1 or inplanes != planes * Bottleneck.expansion: - # downsampling layer is prepended with an avgpool, and the subsequent convolution has stride 1 - self.downsample = nn.Sequential(OrderedDict([ - ("-1", nn.AvgPool2d(stride)), - ("0", nn.Conv2d(inplanes, planes * self.expansion, 1, stride=1, bias=False)), - ("1", nn.BatchNorm2d(planes * self.expansion)) - ])) - - def forward(self, x: torch.Tensor): - identity = x - - out = self.relu(self.bn1(self.conv1(x))) - out = self.relu(self.bn2(self.conv2(out))) - out = self.avgpool(out) - out = self.bn3(self.conv3(out)) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu(out) - return out - - -class AttentionPool2d(nn.Module): - def __init__(self, spacial_dim: int, embed_dim: int, num_heads: int, output_dim: int = None): - super().__init__() - self.positional_embedding = nn.Parameter(torch.randn(spacial_dim ** 2 + 1, embed_dim) / embed_dim ** 0.5) - self.k_proj = nn.Linear(embed_dim, embed_dim) - self.q_proj = nn.Linear(embed_dim, embed_dim) - self.v_proj = nn.Linear(embed_dim, embed_dim) - self.c_proj = nn.Linear(embed_dim, output_dim or embed_dim) - self.num_heads = num_heads - - def forward(self, x): - x = x.flatten(start_dim=2).permute(2, 0, 1) # NCHW -> (HW)NC - x = torch.cat([x.mean(dim=0, keepdim=True), x], dim=0) # (HW+1)NC - x = x + self.positional_embedding[:, None, :].to(x.dtype) # (HW+1)NC - x, _ = F.multi_head_attention_forward( - query=x[:1], key=x, value=x, - embed_dim_to_check=x.shape[-1], - num_heads=self.num_heads, - q_proj_weight=self.q_proj.weight, - k_proj_weight=self.k_proj.weight, - v_proj_weight=self.v_proj.weight, - in_proj_weight=None, - in_proj_bias=torch.cat([self.q_proj.bias, self.k_proj.bias, self.v_proj.bias]), - bias_k=None, - bias_v=None, - add_zero_attn=False, - dropout_p=0, - out_proj_weight=self.c_proj.weight, - out_proj_bias=self.c_proj.bias, - use_separate_proj_weight=True, - training=self.training, - need_weights=False - ) - return x.squeeze(0) - - -class ModifiedResNet(nn.Module): - """ - A ResNet class that is similar to torchvision's but contains the following changes: - - There are now 3 "stem" convolutions as opposed to 1, with an average pool instead of a max pool. 
- - Performs anti-aliasing strided convolutions, where an avgpool is prepended to convolutions with stride > 1 - - The final pooling layer is a QKV attention instead of an average pool - """ - - def __init__(self, layers, output_dim, heads, input_resolution=224, width=64): - super().__init__() - self.output_dim = output_dim - self.input_resolution = input_resolution - - # the 3-layer stem - self.conv1 = nn.Conv2d(3, width // 2, kernel_size=3, stride=2, padding=1, bias=False) - self.bn1 = nn.BatchNorm2d(width // 2) - self.relu1 = nn.ReLU(inplace=True) - self.conv2 = nn.Conv2d(width // 2, width // 2, kernel_size=3, padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(width // 2) - self.relu2 = nn.ReLU(inplace=True) - self.conv3 = nn.Conv2d(width // 2, width, kernel_size=3, padding=1, bias=False) - self.bn3 = nn.BatchNorm2d(width) - self.relu3 = nn.ReLU(inplace=True) - self.avgpool = nn.AvgPool2d(2) - - # residual layers - self._inplanes = width # this is a *mutable* variable used during construction - self.layer1 = self._make_layer(width, layers[0]) - self.layer2 = self._make_layer(width * 2, layers[1], stride=2) - self.layer3 = self._make_layer(width * 4, layers[2], stride=2) - self.layer4 = self._make_layer(width * 8, layers[3], stride=2) - - embed_dim = width * 32 # the ResNet feature dimension - self.attnpool = AttentionPool2d(input_resolution // 32, embed_dim, heads, output_dim) - - def _make_layer(self, planes, blocks, stride=1): - layers = [Bottleneck(self._inplanes, planes, stride)] - - self._inplanes = planes * Bottleneck.expansion - for _ in range(1, blocks): - layers.append(Bottleneck(self._inplanes, planes)) - - return nn.Sequential(*layers) - - def forward(self, x): - def stem(x): - x = self.relu1(self.bn1(self.conv1(x))) - x = self.relu2(self.bn2(self.conv2(x))) - x = self.relu3(self.bn3(self.conv3(x))) - x = self.avgpool(x) - return x - - x = x.type(self.conv1.weight.dtype) - x = stem(x) - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - x = self.attnpool(x) - - return x - - -class LayerNorm(nn.LayerNorm): - """Subclass torch's LayerNorm to handle fp16.""" - - def forward(self, x: torch.Tensor): - orig_type = x.dtype - ret = super().forward(x.type(torch.float32)) - return ret.type(orig_type) - - -class QuickGELU(nn.Module): - def forward(self, x: torch.Tensor): - return x * torch.sigmoid(1.702 * x) - - -class ResidualAttentionBlock(nn.Module): - def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None): - super().__init__() - - self.attn = nn.MultiheadAttention(d_model, n_head) - self.ln_1 = LayerNorm(d_model) - self.mlp = nn.Sequential(OrderedDict([ - ("c_fc", nn.Linear(d_model, d_model * 4)), - ("gelu", QuickGELU()), - ("c_proj", nn.Linear(d_model * 4, d_model)) - ])) - self.ln_2 = LayerNorm(d_model) - self.attn_mask = attn_mask - self.mask_pre_mlp = True - - def attention(self, x: torch.Tensor): - self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None - return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0] - - def forward(self, x: torch.Tensor): - x = x + self.attention(self.ln_1(x)) - x = x + self.mlp(self.ln_2(x)) - return x - - def forward_dense(self, x: torch.Tensor): - y = self.ln_1(x) - y = F.linear(y, self.attn.in_proj_weight, self.attn.in_proj_bias) - L, N, D = y.shape # L N 3D - - y = y.reshape(L, N, 3, D // 3).permute(2, 1, 0, 3).reshape(3 * N, L, D // 3) - y = F.linear(y, self.attn.out_proj.weight, self.attn.out_proj.bias) - - q, k, v = 
y.tensor_split(3, dim=0) - v = v.transpose(1, 0) + x # L N D - - v = v + self.mlp(self.ln_2(v)) - return v - - -class Transformer(nn.Module): - def __init__(self, width: int, layers: int, heads: int, attn_mask: torch.Tensor = None, prompt_length=0, prompt_depth=0): - super().__init__() - self.width = width - self.layers = layers - self.resblocks = nn.Sequential(*[ResidualAttentionBlock(width, heads, attn_mask) for _ in range(layers)]) - - self.prompt_length = prompt_length - self.prompt_depth = prompt_depth - self.prompt_tokens = nn.Parameter(torch.zeros(prompt_depth, prompt_length, width)) if prompt_length > 0 else None - if self.prompt_tokens is not None: - nn.init.xavier_uniform_(self.prompt_tokens) - - def forward(self, x: torch.Tensor, dense=False): - for i, resblock in enumerate(self.resblocks): - if self.prompt_length > 0 and i < self.prompt_depth: - l = self.prompt_length + 1 if i > 0 else 1 - x = torch.cat((x[0:1, :, :], self.prompt_tokens[i].repeat(x.shape[1], 1, 1).permute(1, 0, 2) ,x[l:, :, :])) - - if i == self.layers - 1 and dense: - x = resblock.forward_dense(x) - x = torch.cat((x[0:1, :, :], x[self.prompt_length + 1: :, :]), dim=0) - else: - x = resblock(x) - - return x - - -class VisualTransformer(nn.Module): - def __init__(self, input_resolution: int, patch_size: int, width: int, layers: int, heads: int, output_dim: int, prompt_depth: int, prompt_length: int): - super().__init__() - self.output_dim = output_dim - self.conv1 = nn.Conv2d(in_channels=3, out_channels=width, kernel_size=patch_size, stride=patch_size, bias=False) - - scale = width ** -0.5 - self.class_embedding = nn.Parameter(scale * torch.randn(width)) - self.positional_embedding = nn.Parameter(scale * torch.randn((input_resolution // patch_size) ** 2 + 1, width)) - self.ln_pre = LayerNorm(width) - - self.transformer = Transformer(width, layers, heads, prompt_depth=prompt_depth, prompt_length=prompt_length) - - self.ln_post = LayerNorm(width) - self.proj = nn.Parameter(scale * torch.randn(width, output_dim)) - - self.patch_size = patch_size - self.input_resolution = input_resolution - - def forward(self, x: torch.Tensor, dense=False): - x = self.conv1(x) # shape = [*, width, grid, grid] - x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2] - x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width] - x = torch.cat([self.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device), x], dim=1) # shape = [*, grid ** 2 + 1, width] - - if dense and (x.shape[1] != self.positional_embedding.shape[0]): - x = x + self.resized_pos_embed(self.input_resolution, x.shape[1]).to(x.dtype) - else: - x = x + self.positional_embedding.to(x.dtype) - - x = self.ln_pre(x) - - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x, dense) - x = x.permute(1, 0, 2) # LND -> NLD - - if dense: - x = self.ln_post(x[:, :, :]) - else: - x = self.ln_post(x[:, 0, :]) - - if self.proj is not None: - x = x @ self.proj - - return x - - def resized_pos_embed(self, in_res, tgt_res, mode="bicubic"): - #assert L == (input_resolution // self.patch_size) ** 2 + 1 - L, D = self.positional_embedding.shape - - in_side = in_res // self.patch_size - #tgt_side = tgt_res // self.patch_size - tgt_side = int((tgt_res - 1) ** 0.5) - - cls_pos = self.positional_embedding[0].unsqueeze(0) # 1 D - pos_embed = self.positional_embedding[1:].reshape(1, in_side, in_side, D).permute(0, 3, 1, 2) # L-1 D -> 1 D S S - resized_pos_embed = F.interpolate(pos_embed, size=(tgt_side, tgt_side), mode=mode, 
align_corners=False,) # 1 D S S -> 1 D S' S' - resized_pos_embed = resized_pos_embed.squeeze(0).reshape(D, -1).T # L'-1 D - - return torch.cat((cls_pos, resized_pos_embed), dim=0) - - -class CLIP(nn.Module): - def __init__(self, - embed_dim: int, - # vision - image_resolution: int, - vision_layers: Union[Tuple[int, int, int, int], int], - vision_width: int, - vision_patch_size: int, - # text - context_length: int, - vocab_size: int, - transformer_width: int, - transformer_heads: int, - transformer_layers: int, - # prompt - prompt_depth: int=0, - prompt_length: int=0, - ): - super().__init__() - - self.context_length = context_length - - self.image_resolution = image_resolution - - - if isinstance(vision_layers, (tuple, list)): - assert prompt_length == 0 and prompt_depth==0 - vision_heads = vision_width * 32 // 64 - self.visual = ModifiedResNet( - layers=vision_layers, - output_dim=embed_dim, - heads=vision_heads, - input_resolution=image_resolution, - width=vision_width - ) - else: - vision_heads = vision_width // 64 - self.visual = VisualTransformer( - input_resolution=image_resolution, - patch_size=vision_patch_size, - width=vision_width, - layers=vision_layers, - heads=vision_heads, - output_dim=embed_dim, - prompt_depth=prompt_depth, - prompt_length=prompt_length, - ) - - self.transformer = Transformer( - width=transformer_width, - layers=transformer_layers, - heads=transformer_heads, - attn_mask=self.build_attention_mask() - ) - - self.vocab_size = vocab_size - self.token_embedding = nn.Embedding(vocab_size, transformer_width) - self.positional_embedding = nn.Parameter(torch.empty(self.context_length, transformer_width)) - self.ln_final = LayerNorm(transformer_width) - - self.text_projection = nn.Parameter(torch.empty(transformer_width, embed_dim)) - self.logit_scale = nn.Parameter(torch.ones([])) - - - def build_attention_mask(self): - # lazily create causal attention mask, with full attention between the vision tokens - # pytorch uses additive attention mask; fill with -inf - mask = torch.empty(self.context_length, self.context_length) - mask.fill_(float("-inf")) - mask.triu_(1) # zero out the lower diagonal - return mask - - @property - def dtype(self): - return self.visual.conv1.weight.dtype - - - def encode_image(self, image, masks=None, pool_mask=None, dense=False): - if pool_mask is not None: - return self.visual(image.type(self.dtype), mask=pool_mask, dense=dense) - if masks == None: - return self.visual(image.type(self.dtype), dense=dense) - else: - return self.visual(image.type(self.dtype), masks.type(self.dtype)) - - def encode_text(self, text): - x = self.token_embedding(text).type(self.dtype) # [batch_size, n_ctx, d_model] - - x = x + self.positional_embedding.type(self.dtype) - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x) - x = x.permute(1, 0, 2) # LND -> NLD - x = self.ln_final(x).type(self.dtype) - - # x.shape = [batch_size, n_ctx, transformer.width] - # take features from the eot embedding (eot_token is the highest number in each sequence) - x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection - - return x - - def forward(self, image, text): - image_features = self.encode_image(image) - text_features = self.encode_text(text) - # import pdb; pdb.set_trace() - # normalized features - # image_features shape: [1, 1024] - image_features = image_features / image_features.norm(dim=-1, keepdim=True) - text_features = text_features / text_features.norm(dim=-1, keepdim=True) - - # cosine similarity as logits - logit_scale = 
self.logit_scale.exp() - logits_per_iamge = logit_scale * image_features @ text_features.t() - logits_per_text = logit_scale * text_features @ image_features.t() - - # shape = [global_batch_size, global_batch_size] - return logits_per_iamge, logits_per_text - - -def convert_weights(model: nn.Module): - """Convert applicable model parameters to fp16""" - - def _convert_weights_to_fp16(l): - if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Linear)): - l.weight.data = l.weight.data.half() - if l.bias is not None: - l.bias.data = l.bias.data.half() - - if isinstance(l, nn.MultiheadAttention): - for attr in [*[f"{s}_proj_weight" for s in ["in", "q", "k", "v"]], "in_proj_bias", "bias_k", "bias_v"]: - tensor = getattr(l, attr) - if tensor is not None: - tensor.data = tensor.data.half() - - for name in ["text_projection", "proj"]: - if hasattr(l, name): - attr = getattr(l, name) - if attr is not None: - attr.data = attr.data.half() - - model.apply(_convert_weights_to_fp16) - - -def build_model(state_dict: dict, prompt_depth=0, prompt_length=0): - vit = "visual.proj" in state_dict - - if vit: - vision_width = state_dict["visual.conv1.weight"].shape[0] - vision_layers = len([k for k in state_dict.keys() if k.startswith("visual.") and k.endswith(".attn.in_proj_weight")]) - vision_patch_size = state_dict["visual.conv1.weight"].shape[-1] - grid_size = round((state_dict["visual.positional_embedding"].shape[0] - 1) ** 0.5) - image_resolution = vision_patch_size * grid_size - else: - counts: list = [len(set(k.split(".")[2] for k in state_dict if k.startswith(f"visual.layer{b}"))) for b in [1, 2, 3, 4]] - vision_layers = tuple(counts) - vision_width = state_dict["visual.layer1.0.conv1.weight"].shape[0] - output_width = round((state_dict["visual.attnpool.positional_embedding"].shape[0] - 1) ** 0.5) - vision_patch_size = None - assert output_width ** 2 + 1 == state_dict["visual.attnpool.positional_embedding"].shape[0] - image_resolution = output_width * 32 - - embed_dim = state_dict["text_projection"].shape[1] - context_length = state_dict["positional_embedding"].shape[0] - vocab_size = state_dict["token_embedding.weight"].shape[0] - transformer_width = state_dict["ln_final.weight"].shape[0] - transformer_heads = transformer_width // 64 - transformer_layers = len(set(k.split(".")[2] for k in state_dict if k.startswith(f"transformer.resblocks"))) - - model = CLIP( - embed_dim, - image_resolution, vision_layers, vision_width, vision_patch_size, - context_length, vocab_size, transformer_width, transformer_heads, transformer_layers, - prompt_depth=prompt_depth, prompt_length=prompt_length, - ) - - for key in ["input_resolution", "context_length", "vocab_size"]: - del state_dict[key] - - convert_weights(model) - model.load_state_dict(state_dict, strict=False) - return model.eval() diff --git a/spaces/haoqi7/research/scripts/train/train.py b/spaces/haoqi7/research/scripts/train/train.py deleted file mode 100644 index 9f0651f473fd4e5f72bab4c17152c2c0bdcbdbe6..0000000000000000000000000000000000000000 --- a/spaces/haoqi7/research/scripts/train/train.py +++ /dev/null @@ -1,171 +0,0 @@ -def train( - push_to_hub:bool, - num_epoch: int, - train_batch_size: int, - eval_batch_size: int, -): - import torch - import numpy as np - - # 1. Dataset - from datasets import load_dataset - dataset = load_dataset("Adapting/abstract-keyphrases") - - # 2. 
Model - from transformers import AutoTokenizer, AutoModelForSeq2SeqLM - from lrt.clustering.models import KeyBartAdapter - tokenizer = AutoTokenizer.from_pretrained("Adapting/KeyBartAdapter") - - ''' - Or you can just use the initial model weights from Huggingface: - model = AutoModelForSeq2SeqLM.from_pretrained("Adapting/KeyBartAdapter", - revision='9c3ed39c6ed5c7e141363e892d77cf8f589d5999') - ''' - - model = KeyBartAdapter(256) - - # 3. preprocess dataset - dataset = dataset.shuffle() - - def preprocess_function(examples): - inputs = examples['Abstract'] - targets = examples['Keywords'] - model_inputs = tokenizer(inputs, truncation=True) - - # Set up the tokenizer for targets - with tokenizer.as_target_tokenizer(): - labels = tokenizer(targets, truncation=True) - - model_inputs["labels"] = labels["input_ids"] - return model_inputs - - tokenized_dataset = dataset.map( - preprocess_function, - batched=True, - remove_columns=dataset["train"].column_names, - ) - - # 4. evaluation metrics - def compute_metrics(eval_preds): - preds = eval_preds.predictions - labels = eval_preds.label_ids - if isinstance(preds, tuple): - preds = preds[0] - print(preds.shape) - if len(preds.shape) == 3: - preds = preds.argmax(axis=-1) - - decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True) - # Replace -100 in the labels as we can't decode them. - labels = np.where(labels != -100, labels, tokenizer.pad_token_id) - decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) - - # Some simple post-processing - decoded_preds = [a.strip().split(';') for a in decoded_preds] - decoded_labels = [a.strip().split(';') for a in decoded_labels] - - precs, recalls, f_scores = [], [], [] - num_match, num_pred, num_gold = [], [], [] - for pred, label in zip(decoded_preds, decoded_labels): - pred_set = set(pred) - label_set = set(label) - match_set = label_set.intersection(pred_set) - p = float(len(match_set)) / float(len(pred_set)) if len(pred_set) > 0 else 0.0 - r = float(len(match_set)) / float(len(label_set)) if len(label_set) > 0 else 0.0 - f1 = float(2 * (p * r)) / (p + r) if (p + r) > 0 else 0.0 - precs.append(p) - recalls.append(r) - f_scores.append(f1) - num_match.append(len(match_set)) - num_pred.append(len(pred_set)) - num_gold.append(len(label_set)) - - # print(f'raw_PRED: {raw_pred}') - print(f'PRED: num={len(pred_set)} - {pred_set}') - print(f'GT: num={len(label_set)} - {label_set}') - print(f'p={p}, r={r}, f1={f1}') - print('-' * 20) - - result = { - 'precision@M': np.mean(precs) * 100.0, - 'recall@M': np.mean(recalls) * 100.0, - 'fscore@M': np.mean(f_scores) * 100.0, - 'num_match': np.mean(num_match), - 'num_pred': np.mean(num_pred), - 'num_gold': np.mean(num_gold), - } - - result = {k: round(v, 2) for k, v in result.items()} - return result - - # 5. train - from transformers import DataCollatorForSeq2Seq, Seq2SeqTrainingArguments, Seq2SeqTrainer - - data_collator = DataCollatorForSeq2Seq(tokenizer, model=model) - - model_name = 'KeyBartAdapter' - - args = Seq2SeqTrainingArguments( - model_name, - evaluation_strategy="epoch", - save_strategy="epoch", - learning_rate=2e-5, - per_device_train_batch_size=train_batch_size, - per_device_eval_batch_size=eval_batch_size, - weight_decay=0.01, - save_total_limit=3, - num_train_epochs=num_epoch, - logging_steps=4, - load_best_model_at_end=True, - metric_for_best_model='fscore@M', - predict_with_generate=True, - fp16=torch.cuda.is_available(), # speeds up training on modern GPUs. 
- # eval_accumulation_steps=10, - ) - - trainer = Seq2SeqTrainer( - model, - args, - train_dataset=tokenized_dataset["train"], - eval_dataset=tokenized_dataset["train"], - data_collator=data_collator, - tokenizer=tokenizer, - compute_metrics=compute_metrics - ) - - trainer.train() - - # 6. push - if push_to_hub: - commit_msg = f'{model_name}_{num_epoch}' - tokenizer.push_to_hub(commit_message=commit_msg, repo_id=model_name) - model.push_to_hub(commit_message=commit_msg, repo_id=model_name) - - return model, tokenizer - -if __name__ == '__main__': - import sys - from pathlib import Path - project_root = Path(__file__).parent.parent.parent.absolute() - sys.path.append(project_root.__str__()) - - - # code - import argparse - parser = argparse.ArgumentParser() - - parser.add_argument("--epoch", help="number of epochs", default=30) - parser.add_argument("--train_batch_size", help="training batch size", default=16) - parser.add_argument("--eval_batch_size", help="evaluation batch size", default=16) - parser.add_argument("--push", help="whether push the model to hub", action='store_true') - - args = parser.parse_args() - print(args) - - model, tokenizer = train( - push_to_hub= bool(args.push), - num_epoch= int(args.epoch), - train_batch_size= int(args.train_batch_size), - eval_batch_size= int(args.eval_batch_size) - ) - diff --git a/spaces/harshvardhansb/ObjectDetection/src/utilities.js b/spaces/harshvardhansb/ObjectDetection/src/utilities.js deleted file mode 100644 index 87ecbafc13f6c441ec8bea9e304dff7733c5c99b..0000000000000000000000000000000000000000 --- a/spaces/harshvardhansb/ObjectDetection/src/utilities.js +++ /dev/null @@ -1,23 +0,0 @@ -export const drawRect = (detection , ctx) =>{ - detection.forEach(prediction=>{ - - //get prediction result from array - const[x,y,width,height]= prediction['bbox']; - const text = prediction['class']; - - //Some styling - - const color = '#' + Math.floor(Math.random()*16777215).toString(16) - ctx.strokeStyle= color - ctx.font = '18px Arial' - ctx.fllStyle = color - - - - //Reactangle & Text - ctx.beginPath() - ctx.fillText(text,x,y) - ctx.rect(x,y,width,height) - ctx.stroke() - }) -} \ No newline at end of file diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/structures/keypoints.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/structures/keypoints.py deleted file mode 100644 index 2242815f31dfe88aaabbf4b49f724c999a71912d..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/structures/keypoints.py +++ /dev/null @@ -1,209 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import numpy as np -from typing import Any, List, Tuple, Union -import torch - -from detectron2.layers import interpolate - - -class Keypoints: - """ - Stores keypoint annotation data. GT Instances have a `gt_keypoints` property - containing the x,y location and visibility flag of each keypoint. This tensor has shape - (N, K, 3) where N is the number of instances and K is the number of keypoints per instance. 
- - The visibility flag follows the COCO format and must be one of three integers: - * v=0: not labeled (in which case x=y=0) - * v=1: labeled but not visible - * v=2: labeled and visible - """ - - def __init__(self, keypoints: Union[torch.Tensor, np.ndarray, List[List[float]]]): - """ - Arguments: - keypoints: A Tensor, numpy array, or list of the x, y, and visibility of each keypoint. - The shape should be (N, K, 3) where N is the number of - instances, and K is the number of keypoints per instance. - """ - device = keypoints.device if isinstance(keypoints, torch.Tensor) else torch.device("cpu") - keypoints = torch.as_tensor(keypoints, dtype=torch.float32, device=device) - assert keypoints.dim() == 3 and keypoints.shape[2] == 3, keypoints.shape - self.tensor = keypoints - - def __len__(self) -> int: - return self.tensor.size(0) - - def to(self, *args: Any, **kwargs: Any) -> "Keypoints": - return type(self)(self.tensor.to(*args, **kwargs)) - - @property - def device(self) -> torch.device: - return self.tensor.device - - def to_heatmap(self, boxes: torch.Tensor, heatmap_size: int) -> torch.Tensor: - """ - Arguments: - boxes: Nx4 tensor, the boxes to draw the keypoints to - - Returns: - heatmaps: - A tensor of shape (N, K) containing an integer spatial label - in the range [0, heatmap_size**2 - 1] for each keypoint in the input. - valid: - A tensor of shape (N, K) containing whether each keypoint is in the roi or not. - """ - return _keypoints_to_heatmap(self.tensor, boxes, heatmap_size) - - def __getitem__(self, item: Union[int, slice, torch.BoolTensor]) -> "Keypoints": - """ - Create a new `Keypoints` by indexing on this `Keypoints`. - - The following usage are allowed: - - 1. `new_kpts = kpts[3]`: return a `Keypoints` which contains only one instance. - 2. `new_kpts = kpts[2:10]`: return a slice of key points. - 3. `new_kpts = kpts[vector]`, where vector is a torch.ByteTensor - with `length = len(kpts)`. Nonzero elements in the vector will be selected. - - Note that the returned Keypoints might share storage with this Keypoints, - subject to Pytorch's indexing semantics. - """ - if isinstance(item, int): - return Keypoints([self.tensor[item]]) - return Keypoints(self.tensor[item]) - - def __repr__(self) -> str: - s = self.__class__.__name__ + "(" - s += "num_instances={})".format(len(self.tensor)) - return s - - -# TODO make this nicer, this is a direct translation from C2 (but removing the inner loop) -def _keypoints_to_heatmap( - keypoints: torch.Tensor, rois: torch.Tensor, heatmap_size: int -) -> Tuple[torch.Tensor, torch.Tensor]: - """ - Encode keypoint locations into a target heatmap for use in SoftmaxWithLoss across space. - - Maps keypoints from the half-open interval [x1, x2) on continuous image coordinates to the - closed interval [0, heatmap_size - 1] on discrete image coordinates. We use the - continuous-discrete conversion from Heckbert 1990 ("What is the coordinate of a pixel?"): - d = floor(c) and c = d + 0.5, where d is a discrete coordinate and c is a continuous coordinate. - - Arguments: - keypoints: tensor of keypoint locations in of shape (N, K, 3). - rois: Nx4 tensor of rois in xyxy format - heatmap_size: integer side length of square heatmap. - - Returns: - heatmaps: A tensor of shape (N, K) containing an integer spatial label - in the range [0, heatmap_size**2 - 1] for each keypoint in the input. - valid: A tensor of shape (N, K) containing whether each keypoint is in - the roi or not. 
- """ - - if rois.numel() == 0: - return rois.new().long(), rois.new().long() - offset_x = rois[:, 0] - offset_y = rois[:, 1] - scale_x = heatmap_size / (rois[:, 2] - rois[:, 0]) - scale_y = heatmap_size / (rois[:, 3] - rois[:, 1]) - - offset_x = offset_x[:, None] - offset_y = offset_y[:, None] - scale_x = scale_x[:, None] - scale_y = scale_y[:, None] - - x = keypoints[..., 0] - y = keypoints[..., 1] - - x_boundary_inds = x == rois[:, 2][:, None] - y_boundary_inds = y == rois[:, 3][:, None] - - x = (x - offset_x) * scale_x - x = x.floor().long() - y = (y - offset_y) * scale_y - y = y.floor().long() - - x[x_boundary_inds] = heatmap_size - 1 - y[y_boundary_inds] = heatmap_size - 1 - - valid_loc = (x >= 0) & (y >= 0) & (x < heatmap_size) & (y < heatmap_size) - vis = keypoints[..., 2] > 0 - valid = (valid_loc & vis).long() - - lin_ind = y * heatmap_size + x - heatmaps = lin_ind * valid - - return heatmaps, valid - - -@torch.no_grad() -def heatmaps_to_keypoints(maps: torch.Tensor, rois: torch.Tensor) -> torch.Tensor: - """ - Extract predicted keypoint locations from heatmaps. - - Args: - maps (Tensor): (#ROIs, #keypoints, POOL_H, POOL_W). The predicted heatmap of logits for - each ROI and each keypoint. - rois (Tensor): (#ROIs, 4). The box of each ROI. - - Returns: - Tensor of shape (#ROIs, #keypoints, 4) with the last dimension corresponding to - (x, y, logit, score) for each keypoint. - - When converting discrete pixel indices in an NxN image to a continuous keypoint coordinate, - we maintain consistency with :meth:`Keypoints.to_heatmap` by using the conversion from - Heckbert 1990: c = d + 0.5, where d is a discrete coordinate and c is a continuous coordinate. - """ - offset_x = rois[:, 0] - offset_y = rois[:, 1] - - widths = (rois[:, 2] - rois[:, 0]).clamp(min=1) - heights = (rois[:, 3] - rois[:, 1]).clamp(min=1) - widths_ceil = widths.ceil() - heights_ceil = heights.ceil() - - num_rois, num_keypoints = maps.shape[:2] - xy_preds = maps.new_zeros(rois.shape[0], num_keypoints, 4) - - width_corrections = widths / widths_ceil - height_corrections = heights / heights_ceil - - keypoints_idx = torch.arange(num_keypoints, device=maps.device) - - for i in range(num_rois): - outsize = (int(heights_ceil[i]), int(widths_ceil[i])) - roi_map = interpolate(maps[[i]], size=outsize, mode="bicubic", align_corners=False).squeeze( - 0 - ) # #keypoints x H x W - - # softmax over the spatial region - max_score, _ = roi_map.view(num_keypoints, -1).max(1) - max_score = max_score.view(num_keypoints, 1, 1) - tmp_full_resolution = (roi_map - max_score).exp_() - tmp_pool_resolution = (maps[i] - max_score).exp_() - # Produce scores over the region H x W, but normalize with POOL_H x POOL_W, - # so that the scores of objects of different absolute sizes will be more comparable - roi_map_scores = tmp_full_resolution / tmp_pool_resolution.sum((1, 2), keepdim=True) - - w = roi_map.shape[2] - pos = roi_map.view(num_keypoints, -1).argmax(1) - - x_int = pos % w - y_int = (pos - x_int) // w - - assert ( - roi_map_scores[keypoints_idx, y_int, x_int] - == roi_map_scores.view(num_keypoints, -1).max(1)[0] - ).all() - - x = (x_int.float() + 0.5) * width_corrections[i] - y = (y_int.float() + 0.5) * height_corrections[i] - - xy_preds[i, :, 0] = x + offset_x[i] - xy_preds[i, :, 1] = y + offset_y[i] - xy_preds[i, :, 2] = roi_map[keypoints_idx, y_int, x_int] - xy_preds[i, :, 3] = roi_map_scores[keypoints_idx, y_int, x_int] - - return xy_preds diff --git 
a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/PointRend/point_rend/point_head.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/PointRend/point_rend/point_head.py deleted file mode 100644 index 6f35baea064fbee14d9bcd0b57e354f82bf54a8c..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/PointRend/point_rend/point_head.py +++ /dev/null @@ -1,154 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import fvcore.nn.weight_init as weight_init -import torch -from torch import nn -from torch.nn import functional as F - -from detectron2.layers import ShapeSpec, cat -from detectron2.structures import BitMasks -from detectron2.utils.events import get_event_storage -from detectron2.utils.registry import Registry - -from .point_features import point_sample - -POINT_HEAD_REGISTRY = Registry("POINT_HEAD") -POINT_HEAD_REGISTRY.__doc__ = """ -Registry for point heads, which makes prediction for a given set of per-point features. - -The registered object will be called with `obj(cfg, input_shape)`. -""" - - -def roi_mask_point_loss(mask_logits, instances, points_coord): - """ - Compute the point-based loss for instance segmentation mask predictions. - - Args: - mask_logits (Tensor): A tensor of shape (R, C, P) or (R, 1, P) for class-specific or - class-agnostic, where R is the total number of predicted masks in all images, C is the - number of foreground classes, and P is the number of points sampled for each mask. - The values are logits. - instances (list[Instances]): A list of N Instances, where N is the number of images - in the batch. These instances are in 1:1 correspondence with the `mask_logits`. So, i_th - elememt of the list contains R_i objects and R_1 + ... + R_N is equal to R. - The ground-truth labels (class, box, mask, ...) associated with each instance are stored - in fields. - points_coords (Tensor): A tensor of shape (R, P, 2), where R is the total number of - predicted masks and P is the number of points for each mask. The coordinates are in - the image pixel coordinate space, i.e. [0, H] x [0, W]. - Returns: - point_loss (Tensor): A scalar tensor containing the loss. - """ - assert len(instances) == 0 or isinstance( - instances[0].gt_masks, BitMasks - ), "Point head works with GT in 'bitmask' format only. Set INPUT.MASK_FORMAT to 'bitmask'." 
- with torch.no_grad(): - cls_agnostic_mask = mask_logits.size(1) == 1 - total_num_masks = mask_logits.size(0) - - gt_classes = [] - gt_mask_logits = [] - idx = 0 - for instances_per_image in instances: - if not cls_agnostic_mask: - gt_classes_per_image = instances_per_image.gt_classes.to(dtype=torch.int64) - gt_classes.append(gt_classes_per_image) - - gt_bit_masks = instances_per_image.gt_masks.tensor - h, w = instances_per_image.gt_masks.image_size - scale = torch.tensor([w, h], dtype=torch.float, device=gt_bit_masks.device) - points_coord_grid_sample_format = ( - points_coord[idx : idx + len(instances_per_image)] / scale - ) - idx += len(instances_per_image) - gt_mask_logits.append( - point_sample( - gt_bit_masks.to(torch.float32).unsqueeze(1), - points_coord_grid_sample_format, - align_corners=False, - ).squeeze(1) - ) - gt_mask_logits = cat(gt_mask_logits) - - # torch.mean (in binary_cross_entropy_with_logits) doesn't - # accept empty tensors, so handle it separately - if gt_mask_logits.numel() == 0: - return mask_logits.sum() * 0 - - if cls_agnostic_mask: - mask_logits = mask_logits[:, 0] - else: - indices = torch.arange(total_num_masks) - gt_classes = cat(gt_classes, dim=0) - mask_logits = mask_logits[indices, gt_classes] - - # Log the training accuracy (using gt classes and 0.0 threshold for the logits) - mask_accurate = (mask_logits > 0.0) == gt_mask_logits.to(dtype=torch.uint8) - mask_accuracy = mask_accurate.nonzero().size(0) / mask_accurate.numel() - get_event_storage().put_scalar("point_rend/accuracy", mask_accuracy) - - point_loss = F.binary_cross_entropy_with_logits( - mask_logits, gt_mask_logits.to(dtype=torch.float32), reduction="mean" - ) - return point_loss - - -@POINT_HEAD_REGISTRY.register() -class StandardPointHead(nn.Module): - """ - A point head multi-layer perceptron which we model with conv1d layers with kernel 1. The head - takes both fine-grained and coarse prediction features as its input. 
- """ - - def __init__(self, cfg, input_shape: ShapeSpec): - """ - The following attributes are parsed from config: - fc_dim: the output dimension of each FC layers - num_fc: the number of FC layers - coarse_pred_each_layer: if True, coarse prediction features are concatenated to each - layer's input - """ - super(StandardPointHead, self).__init__() - # fmt: off - num_classes = cfg.MODEL.POINT_HEAD.NUM_CLASSES - fc_dim = cfg.MODEL.POINT_HEAD.FC_DIM - num_fc = cfg.MODEL.POINT_HEAD.NUM_FC - cls_agnostic_mask = cfg.MODEL.POINT_HEAD.CLS_AGNOSTIC_MASK - self.coarse_pred_each_layer = cfg.MODEL.POINT_HEAD.COARSE_PRED_EACH_LAYER - input_channels = input_shape.channels - # fmt: on - - fc_dim_in = input_channels + num_classes - self.fc_layers = [] - for k in range(num_fc): - fc = nn.Conv1d(fc_dim_in, fc_dim, kernel_size=1, stride=1, padding=0, bias=True) - self.add_module("fc{}".format(k + 1), fc) - self.fc_layers.append(fc) - fc_dim_in = fc_dim - fc_dim_in += num_classes if self.coarse_pred_each_layer else 0 - - num_mask_classes = 1 if cls_agnostic_mask else num_classes - self.predictor = nn.Conv1d(fc_dim_in, num_mask_classes, kernel_size=1, stride=1, padding=0) - - for layer in self.fc_layers: - weight_init.c2_msra_fill(layer) - # use normal distribution initialization for mask prediction layer - nn.init.normal_(self.predictor.weight, std=0.001) - if self.predictor.bias is not None: - nn.init.constant_(self.predictor.bias, 0) - - def forward(self, fine_grained_features, coarse_features): - x = torch.cat((fine_grained_features, coarse_features), dim=1) - for layer in self.fc_layers: - x = F.relu(layer(x)) - if self.coarse_pred_each_layer: - x = cat((x, coarse_features), dim=1) - return self.predictor(x) - - -def build_point_head(cfg, input_channels): - """ - Build a point head defined by `cfg.MODEL.POINT_HEAD.NAME`. - """ - head_name = cfg.MODEL.POINT_HEAD.NAME - return POINT_HEAD_REGISTRY.get(head_name)(cfg, input_channels) diff --git a/spaces/hbestm/gpt-academic-play/request_llm/bridge_jittorllms_llama.py b/spaces/hbestm/gpt-academic-play/request_llm/bridge_jittorllms_llama.py deleted file mode 100644 index 6dfac681aeaa11a780304b9e645637cabd677688..0000000000000000000000000000000000000000 --- a/spaces/hbestm/gpt-academic-play/request_llm/bridge_jittorllms_llama.py +++ /dev/null @@ -1,178 +0,0 @@ - -from transformers import AutoModel, AutoTokenizer -import time -import threading -import importlib -from toolbox import update_ui, get_conf -from multiprocessing import Process, Pipe - -load_message = "jittorllms尚未加载,加载需要一段时间。注意,请避免混用多种jittor模型,否则可能导致显存溢出而造成卡顿,取决于`config.py`的配置,jittorllms消耗大量的内存(CPU)或显存(GPU),也许会导致低配计算机卡死 ……" - -################################################################################# -class GetGLMHandle(Process): - def __init__(self): - super().__init__(daemon=True) - self.parent, self.child = Pipe() - self.jittorllms_model = None - self.info = "" - self.local_history = [] - self.success = True - self.check_dependency() - self.start() - self.threadLock = threading.Lock() - - def check_dependency(self): - try: - import pandas - self.info = "依赖检测通过" - self.success = True - except: - from toolbox import trimmed_format_exc - self.info = r"缺少jittorllms的依赖,如果要使用jittorllms,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_jittorllms.txt -i https://pypi.jittor.org/simple -I`"+\ - r"和`git clone https://gitlink.org.cn/jittor/JittorLLMs.git --depth 1 request_llm/jittorllms`两个指令来安装jittorllms的依赖(在项目根目录运行这两个指令)。" +\ - r"警告:安装jittorllms依赖后将完全破坏现有的pytorch环境,建议使用docker环境!" 
+ trimmed_format_exc() - self.success = False - - def ready(self): - return self.jittorllms_model is not None - - def run(self): - # 子进程执行 - # 第一次运行,加载参数 - def validate_path(): - import os, sys - dir_name = os.path.dirname(__file__) - env = os.environ.get("PATH", "") - os.environ["PATH"] = env.replace('/cuda/bin', '/x/bin') - root_dir_assume = os.path.abspath(os.path.dirname(__file__) + '/..') - os.chdir(root_dir_assume + '/request_llm/jittorllms') - sys.path.append(root_dir_assume + '/request_llm/jittorllms') - validate_path() # validate path so you can run from base directory - - def load_model(): - import types - try: - if self.jittorllms_model is None: - device, = get_conf('LOCAL_MODEL_DEVICE') - from .jittorllms.models import get_model - # availabel_models = ["chatglm", "pangualpha", "llama", "chatrwkv"] - args_dict = {'model': 'llama'} - print('self.jittorllms_model = get_model(types.SimpleNamespace(**args_dict))') - self.jittorllms_model = get_model(types.SimpleNamespace(**args_dict)) - print('done get model') - except: - self.child.send('[Local Message] Call jittorllms fail 不能正常加载jittorllms的参数。') - raise RuntimeError("不能正常加载jittorllms的参数!") - print('load_model') - load_model() - - # 进入任务等待状态 - print('进入任务等待状态') - while True: - # 进入任务等待状态 - kwargs = self.child.recv() - query = kwargs['query'] - history = kwargs['history'] - # 是否重置 - if len(self.local_history) > 0 and len(history)==0: - print('触发重置') - self.jittorllms_model.reset() - self.local_history.append(query) - - print('收到消息,开始请求') - try: - for response in self.jittorllms_model.stream_chat(query, history): - print(response) - self.child.send(response) - except: - from toolbox import trimmed_format_exc - print(trimmed_format_exc()) - self.child.send('[Local Message] Call jittorllms fail.') - # 请求处理结束,开始下一个循环 - self.child.send('[Finish]') - - def stream_chat(self, **kwargs): - # 主进程执行 - self.threadLock.acquire() - self.parent.send(kwargs) - while True: - res = self.parent.recv() - if res != '[Finish]': - yield res - else: - break - self.threadLock.release() - -global llama_glm_handle -llama_glm_handle = None -################################################################################# -def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False): - """ - 多线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - global llama_glm_handle - if llama_glm_handle is None: - llama_glm_handle = GetGLMHandle() - if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + llama_glm_handle.info - if not llama_glm_handle.success: - error = llama_glm_handle.info - llama_glm_handle = None - raise RuntimeError(error) - - # jittorllms 没有 sys_prompt 接口,因此把prompt加入 history - history_feedin = [] - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - watch_dog_patience = 5 # 看门狗 (watchdog) 的耐心, 设置5秒即可 - response = "" - for response in llama_glm_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - print(response) - if len(observe_window) >= 1: observe_window[0] = response - if len(observe_window) >= 2: - if (time.time()-observe_window[1]) > watch_dog_patience: - raise RuntimeError("程序终止。") - return response - - - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - 单线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - 
chatbot.append((inputs, "")) - - global llama_glm_handle - if llama_glm_handle is None: - llama_glm_handle = GetGLMHandle() - chatbot[-1] = (inputs, load_message + "\n\n" + llama_glm_handle.info) - yield from update_ui(chatbot=chatbot, history=[]) - if not llama_glm_handle.success: - llama_glm_handle = None - return - - if additional_fn is not None: - import core_functional - importlib.reload(core_functional) # 热更新prompt - core_functional = core_functional.get_core_functions() - if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话) - inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"] - - # 处理历史信息 - history_feedin = [] - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - # 开始接收jittorllms的回复 - response = "[Local Message]: 等待jittorllms响应中 ..." - for response in llama_glm_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - chatbot[-1] = (inputs, response) - yield from update_ui(chatbot=chatbot, history=history) - - # 总结输出 - if response == "[Local Message]: 等待jittorllms响应中 ...": - response = "[Local Message]: jittorllms响应异常 ..." - history.extend([inputs, response]) - yield from update_ui(chatbot=chatbot, history=history) diff --git a/spaces/hesha/text-embeddings-transformers/app.py b/spaces/hesha/text-embeddings-transformers/app.py deleted file mode 100644 index 2fccd40ed67c07a8c261d3948e298df01e4f85c1..0000000000000000000000000000000000000000 --- a/spaces/hesha/text-embeddings-transformers/app.py +++ /dev/null @@ -1,25 +0,0 @@ -import gradio as gr -from transformers import AutoTokenizer, AutoModel -import torch - -tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-large') -model = AutoModel.from_pretrained('intfloat/multilingual-e5-large') - -def mean_pooling(model_output, attention_mask): - token_embeddings = model_output[0] - input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() - return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) - -def encode_sentences(sentences): - encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') - with torch.no_grad(): - model_output = model(**encoded_input) - sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) - return sentence_embeddings.tolist() - -demo = gr.Interface(fn=encode_sentences, - inputs="textbox", - outputs="text") - -if __name__ == "__main__": - demo.launch() \ No newline at end of file diff --git a/spaces/hf4all/web-ui/_next/static/chunks/698-f6bc8e9278737c93.js b/spaces/hf4all/web-ui/_next/static/chunks/698-f6bc8e9278737c93.js deleted file mode 100644 index f8219f8c6d7cf299958256ed0d71b1f484a43b92..0000000000000000000000000000000000000000 --- a/spaces/hf4all/web-ui/_next/static/chunks/698-f6bc8e9278737c93.js +++ /dev/null @@ -1,25 +0,0 @@ -(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[698],{93644:function(){"trimStart"in String.prototype||(String.prototype.trimStart=String.prototype.trimLeft),"trimEnd"in String.prototype||(String.prototype.trimEnd=String.prototype.trimRight),"description"in Symbol.prototype||Object.defineProperty(Symbol.prototype,"description",{configurable:!0,get:function(){var e=/\((.*)\)/.exec(this.toString());return e?e[1]:void 
0}}),Array.prototype.flat||(Array.prototype.flat=function(e,t){return t=this.concat.apply([],this),e>1&&t.some(Array.isArray)?t.flat(e-1):t},Array.prototype.flatMap=function(e,t){return this.map(e,t).flat()}),Promise.prototype.finally||(Promise.prototype.finally=function(e){if("function"!=typeof e)return this.then(e,e);var t=this.constructor||Promise;return this.then(function(r){return t.resolve(e()).then(function(){return r})},function(r){return t.resolve(e()).then(function(){throw r})})}),Object.fromEntries||(Object.fromEntries=function(e){return Array.from(e).reduce(function(e,t){return e[t[0]]=t[1],e},{})})},12409:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"addBasePath",{enumerable:!0,get:function(){return o}});let n=r(60150),u=r(75588);function o(e,t){return(0,u.normalizePathTrailingSlash)((0,n.addPathPrefix)(e,""))}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},30930:function(e,t){"use strict";function r(e){var t,r;t=self.__next_s,r=()=>{e()},t&&t.length?t.reduce((e,t)=>{let[r,n]=t;return e.then(()=>new Promise((e,t)=>{let u=document.createElement("script");if(n)for(let e in n)"children"!==e&&u.setAttribute(e,n[e]);r?(u.src=r,u.onload=()=>e(),u.onerror=t):n&&(u.innerHTML=n.children,setTimeout(e)),document.head.appendChild(u)}))},Promise.resolve()).catch(e=>{console.error(e)}).then(()=>{r()}):r()}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"appBootstrap",{enumerable:!0,get:function(){return r}}),window.next={version:"13.4.9",appDir:!0},("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},303:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"callServer",{enumerable:!0,get:function(){return u}});let n=r(2353);async function u(e,t){let r=(0,n.getServerActionDispatcher)();if(!r)throw Error("Invariant: missing action dispatcher.");return new Promise((n,u)=>{r({actionId:e,actionArgs:t,resolve:n,reject:u})})}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},13426:function(e,t,r){"use strict";let n,u;Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"hydrate",{enumerable:!0,get:function(){return N}});let o=r(26927),l=r(25909);r(93644);let a=o._(r(93194)),i=l._(r(86006)),c=r(35456),s=r(27268);r(15456);let f=o._(r(59214)),d=r(303),p=r(45080),h=window.console.error;window.console.error=function(){for(var e=arguments.length,t=Array(e),r=0;r{if((0,p.isNextRouterError)(e.error)){e.preventDefault();return}});let _=e=>t=>e(t)+"",y=r.u,b={};r.u=_(e=>encodeURI(b[e]||y(e)));let v=r.k;r.k=_(v);let m=r.miniCssF;r.miniCssF=_(m),self.__next_require__=r,self.__next_chunk_load__=e=>{if(!e)return Promise.resolve();let[t,n]=e.split(":");return b[t]=n,r.e(t)};let g=document,O=()=>{let{pathname:e,search:t}=location;return e+t},P=new TextEncoder,E=!1,R=!1;function j(e){if(0===e[0])n=[];else{if(!n)throw Error("Unexpected server data: missing bootstrap script.");u?u.enqueue(P.encode(e[1])):n.push(e[1])}}let S=function(){u&&!R&&(u.close(),R=!0,n=void 
0),E=!0};"loading"===document.readyState?document.addEventListener("DOMContentLoaded",S,!1):S();let T=self.__next_f=self.__next_f||[];T.forEach(j),T.push=j;let M=new Map;function w(e){let{cacheKey:t}=e;i.default.useEffect(()=>{M.delete(t)});let r=function(e){let t=M.get(e);if(t)return t;let r=new ReadableStream({start(e){n&&(n.forEach(t=>{e.enqueue(P.encode(t))}),E&&!R&&(e.close(),R=!0,n=void 0)),u=e}}),o=(0,c.createFromReadableStream)(r,{callServer:d.callServer});return M.set(e,o),o}(t),o=(0,i.use)(r);return o}let C=i.default.Fragment;function x(e){let{children:t}=e;return t}function A(e){return i.default.createElement(w,{...e,cacheKey:O()})}function N(){let e=i.default.createElement(C,null,i.default.createElement(s.HeadManagerContext.Provider,{value:{appDir:!0}},i.default.createElement(x,null,i.default.createElement(A,null)))),t={onRecoverableError:f.default},r="__next_error__"===document.documentElement.id;r?a.default.createRoot(g,t).render(e):i.default.startTransition(()=>a.default.hydrateRoot(g,e,t))}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},53333:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0});let n=r(30930);(0,n.appBootstrap)(()=>{r(2353),r(49180);let{hydrate:e}=r(13426);e()}),("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},71002:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"AppRouterAnnouncer",{enumerable:!0,get:function(){return l}});let n=r(86006),u=r(8431),o="next-route-announcer";function l(e){let{tree:t}=e,[r,l]=(0,n.useState)(null);(0,n.useEffect)(()=>{let e=function(){var e;let t=document.getElementsByName(o)[0];if(null==t?void 0:null==(e=t.shadowRoot)?void 0:e.childNodes[0])return t.shadowRoot.childNodes[0];{let e=document.createElement(o);e.style.cssText="position:absolute";let t=document.createElement("div");t.ariaLive="assertive",t.id="__next-route-announcer__",t.role="alert",t.style.cssText="position:absolute;border:0;height:1px;margin:-1px;padding:0;width:1px;clip:rect(0 0 0 0);overflow:hidden;white-space:nowrap;word-wrap:normal";let r=e.attachShadow({mode:"open"});return r.appendChild(t),document.body.appendChild(e),t}}();return l(e),()=>{let e=document.getElementsByTagName(o)[0];(null==e?void 0:e.isConnected)&&document.body.removeChild(e)}},[]);let[a,i]=(0,n.useState)(""),c=(0,n.useRef)();return(0,n.useEffect)(()=>{let e="";if(document.title)e=document.title;else{let t=document.querySelector("h1");t&&(e=t.innerText||t.textContent||"")}void 0!==c.current&&i(e),c.current=e},[t]),r?(0,u.createPortal)(a,r):null}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},34852:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{RSC:function(){return r},ACTION:function(){return n},NEXT_ROUTER_STATE_TREE:function(){return u},NEXT_ROUTER_PREFETCH:function(){return o},NEXT_URL:function(){return l},FETCH_CACHE_HEADER:function(){return a},RSC_CONTENT_TYPE_HEADER:function(){return 
i},RSC_VARY_HEADER:function(){return c},FLIGHT_PARAMETERS:function(){return s},NEXT_RSC_UNION_QUERY:function(){return f}});let r="RSC",n="Next-Action",u="Next-Router-State-Tree",o="Next-Router-Prefetch",l="Next-Url",a="x-vercel-sc-headers",i="text/x-component",c=r+", "+u+", "+o,s=[[r],[u],[o]],f="_rsc";("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},2353:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{getServerActionDispatcher:function(){return E},urlToUrlWithoutFlightMarker:function(){return R},default:function(){return w}});let n=r(25909),u=n._(r(86006)),o=r(15456),l=r(85426),a=r(74741),i=r(8744),c=r(76173),s=r(18688),f=r(47330),d=r(89343),p=r(30753),h=r(12409),_=r(71002),y=r(22418),b=r(62484),v=r(68792),m=r(75238),g=r(34852),O=new Map,P=null;function E(){return P}function R(e){let t=new URL(e,location.origin);return t.searchParams.delete(g.NEXT_RSC_UNION_QUERY),t.pathname.endsWith("/index.txt")?t.pathname=t.pathname.slice(0,-10):t.pathname=t.pathname.slice(0,-4),t}function j(e){return e.origin!==window.location.origin}function S(e){let{tree:t,pushRef:r,canonicalUrl:n,sync:o}=e;return(0,u.useInsertionEffect)(()=>{let e={__NA:!0,tree:t};r.pendingPush&&(0,i.createHrefFromUrl)(new URL(window.location.href))!==n?(r.pendingPush=!1,window.history.pushState(e,"",n)):window.history.replaceState(e,"",n),o()},[t,r,n,o]),null}let T=()=>({status:o.CacheStates.LAZY_INITIALIZED,data:null,subTreeData:null,parallelRoutes:new Map});function M(e){let{buildId:t,initialHead:r,initialTree:n,initialCanonicalUrl:i,children:f,assetPrefix:g,notFound:E,notFoundStyles:R,asNotFound:M}=e,w=(0,u.useMemo)(()=>(0,d.createInitialRouterState)({buildId:t,children:f,initialCanonicalUrl:i,initialTree:n,initialParallelRoutes:O,isServer:!1,location:window.location,initialHead:r}),[t,f,i,n,r]),[{tree:C,cache:x,prefetchCache:A,pushRef:N,focusAndScrollRef:I,canonicalUrl:D,nextUrl:k},F,U]=(0,s.useReducerWithReduxDevtools)(l.reducer,w);(0,u.useEffect)(()=>{O=null},[]);let{searchParams:L,pathname:H}=(0,u.useMemo)(()=>{let e=new URL(D,window.location.href);return{searchParams:e.searchParams,pathname:e.pathname}},[D]),$=(0,u.useCallback)((e,t,r)=>{(0,u.startTransition)(()=>{F({type:a.ACTION_SERVER_PATCH,flightData:t,previousTree:e,overrideCanonicalUrl:r,cache:T(),mutable:{}})})},[F]),W=(0,u.useCallback)((e,t,r,n)=>{let u=new URL((0,h.addBasePath)(e),location.href);return F({type:a.ACTION_NAVIGATE,url:u,isExternalUrl:j(u),locationSearch:location.search,forceOptimisticNavigation:r,shouldScroll:null==n||n,navigateType:t,cache:T(),mutable:{}})},[F]);!function(e,t,r){let n=(0,u.useCallback)(n=>{(0,u.startTransition)(()=>{t({...n,type:a.ACTION_SERVER_ACTION,mutable:{},navigate:r,changeByServerResponse:e})})},[e,t,r]);P=n}($,F,W);let B=(0,u.useMemo)(()=>{let e={back:()=>window.history.back(),forward:()=>window.history.forward(),prefetch:(e,t)=>{if((0,p.isBot)(window.navigator.userAgent))return;let r=new URL((0,h.addBasePath)(e),location.href);j(r)||(0,u.startTransition)(()=>{var e;F({type:a.ACTION_PREFETCH,url:r,kind:null!=(e=null==t?void 0:t.kind)?e:a.PrefetchKind.FULL})})},replace:(e,t)=>{void 0===t&&(t={}),(0,u.startTransition)(()=>{var r;W(e,"replace",!!t.forceOptimisticNavigation,null==(r=t.scroll)||r)})},push:(e,t)=>{void 
0===t&&(t={}),(0,u.startTransition)(()=>{var r;W(e,"push",!!t.forceOptimisticNavigation,null==(r=t.scroll)||r)})},refresh:()=>{(0,u.startTransition)(()=>{F({type:a.ACTION_REFRESH,cache:T(),mutable:{},origin:window.location.origin})})},fastRefresh:()=>{throw Error("fastRefresh can only be used in development mode. Please use refresh instead.")}};return e},[F,W]);if((0,u.useEffect)(()=>{window.next&&(window.next.router=B)},[B]),N.mpaNavigation){let e=window.location;N.pendingPush?e.assign(D):e.replace(D),(0,u.use)((0,m.createInfinitePromise)())}let Y=(0,u.useCallback)(e=>{let{state:t}=e;if(t){if(!t.__NA){window.location.reload();return}(0,u.startTransition)(()=>{F({type:a.ACTION_RESTORE,url:new URL(window.location.href),tree:t.tree})})}},[F]);(0,u.useEffect)(()=>(window.addEventListener("popstate",Y),()=>{window.removeEventListener("popstate",Y)}),[Y]);let V=(0,u.useMemo)(()=>(0,v.findHeadInCache)(x,C[1]),[x,C]),G=u.default.createElement(y.RedirectBoundary,null,V,x.subTreeData,u.default.createElement(_.AppRouterAnnouncer,{tree:C}));return u.default.createElement(u.default.Fragment,null,u.default.createElement(S,{tree:C,pushRef:N,canonicalUrl:D,sync:U}),u.default.createElement(c.PathnameContext.Provider,{value:H},u.default.createElement(c.SearchParamsContext.Provider,{value:L},u.default.createElement(o.GlobalLayoutRouterContext.Provider,{value:{buildId:t,changeByServerResponse:$,tree:C,focusAndScrollRef:I,nextUrl:k}},u.default.createElement(o.AppRouterContext.Provider,{value:B},u.default.createElement(o.LayoutRouterContext.Provider,{value:{childNodes:x.parallelRoutes,tree:C,url:D}},u.default.createElement(b.NotFoundBoundary,{notFound:E,notFoundStyles:R,asNotFound:M},G)))))))}function w(e){let{globalErrorComponent:t,...r}=e;return u.default.createElement(f.ErrorBoundary,{errorComponent:t},u.default.createElement(M,r))}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},90259:function(e,t,r){"use strict";function n(e){}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"clientHookInServerComponentError",{enumerable:!0,get:function(){return n}}),r(26927),r(86006),("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},47330:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{ErrorBoundaryHandler:function(){return a},default:function(){return i},ErrorBoundary:function(){return c}});let n=r(26927),u=n._(r(86006)),o=r(4e3),l={error:{fontFamily:'system-ui,"Segoe UI",Roboto,Helvetica,Arial,sans-serif,"Apple Color Emoji","Segoe UI Emoji"',height:"100vh",textAlign:"center",display:"flex",flexDirection:"column",alignItems:"center",justifyContent:"center"},text:{fontSize:"14px",fontWeight:400,lineHeight:"28px",margin:"0 8px"}};class a extends u.default.Component{static getDerivedStateFromError(e){return{error:e}}static getDerivedStateFromProps(e,t){return e.pathname!==t.previousPathname&&t.error?{error:null,previousPathname:e.pathname}:{error:t.error,previousPathname:e.pathname}}render(){return 
this.state.error?u.default.createElement(u.default.Fragment,null,this.props.errorStyles,u.default.createElement(this.props.errorComponent,{error:this.state.error,reset:this.reset})):this.props.children}constructor(e){super(e),this.reset=()=>{this.setState({error:null})},this.state={error:null,previousPathname:this.props.pathname}}}function i(e){let{error:t}=e,r=null==t?void 0:t.digest;return u.default.createElement("html",null,u.default.createElement("head",null),u.default.createElement("body",null,u.default.createElement("div",{style:l.error},u.default.createElement("div",null,u.default.createElement("h2",{style:l.text},"Application error: a "+(r?"server":"client")+"-side exception has occurred (see the "+(r?"server logs":"browser console")+" for more information)."),r?u.default.createElement("p",{style:l.text},"Digest: "+r):null))))}function c(e){let{errorComponent:t,errorStyles:r,children:n}=e,l=(0,o.usePathname)();return t?u.default.createElement(a,{pathname:l,errorComponent:t,errorStyles:r},n):u.default.createElement(u.default.Fragment,null,n)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},47308:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{DYNAMIC_ERROR_CODE:function(){return r},DynamicServerError:function(){return n}});let r="DYNAMIC_SERVER_USAGE";class n extends Error{constructor(e){super("Dynamic server usage: "+e),this.digest=r}}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},75238:function(e,t){"use strict";let r;function n(){return r||(r=new Promise(()=>{})),r}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"createInfinitePromise",{enumerable:!0,get:function(){return n}}),("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},45080:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"isNextRouterError",{enumerable:!0,get:function(){return o}});let n=r(62951),u=r(14024);function o(e){return e&&e.digest&&((0,u.isRedirectError)(e)||(0,n.isNotFoundError)(e))}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},49180:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"default",{enumerable:!0,get:function(){return E}});let n=r(26927),u=r(25909),o=u._(r(86006)),l=n._(r(8431)),a=r(15456),i=r(52368),c=r(75238),s=r(47330),f=r(50655),d=r(92998),p=r(22418),h=r(62484),_=r(65143),y=r(49101),b=["bottom","height","left","right","top","width","x","y"];function v(e,t){let r=e.getBoundingClientRect();return r.top>=0&&r.top<=t}class m extends o.default.Component{componentDidMount(){this.handlePotentialScroll()}componentDidUpdate(){this.props.focusAndScrollRef.apply&&this.handlePotentialScroll()}render(){return 
this.props.children}constructor(...e){super(...e),this.handlePotentialScroll=()=>{let{focusAndScrollRef:e,segmentPath:t}=this.props;if(e.apply){var r;if(0!==e.segmentPaths.length&&!e.segmentPaths.some(e=>t.every((t,r)=>(0,f.matchSegment)(t,e[r]))))return;let n=null,u=e.hashFragment;if(u&&(n="top"===u?document.body:null!=(r=document.getElementById(u))?r:document.getElementsByName(u)[0]),n||(n=l.default.findDOMNode(this)),!(n instanceof Element))return;for(;!(n instanceof HTMLElement)||function(e){let t=e.getBoundingClientRect();return b.every(e=>0===t[e])}(n);){if(null===n.nextElementSibling)return;n=n.nextElementSibling}e.apply=!1,e.hashFragment=null,e.segmentPaths=[],(0,d.handleSmoothScroll)(()=>{if(u){n.scrollIntoView();return}let e=document.documentElement,t=e.clientHeight;!v(n,t)&&(e.scrollTop=0,v(n,t)||n.scrollIntoView())},{dontForceLayout:!0}),n.focus()}}}}function g(e){let{segmentPath:t,children:r}=e,n=(0,o.useContext)(a.GlobalLayoutRouterContext);if(!n)throw Error("invariant global layout router not mounted");return o.default.createElement(m,{segmentPath:t,focusAndScrollRef:n.focusAndScrollRef},r)}function O(e){let{parallelRouterKey:t,url:r,childNodes:n,childProp:u,segmentPath:l,tree:s,cacheKey:d}=e,p=(0,o.useContext)(a.GlobalLayoutRouterContext);if(!p)throw Error("invariant global layout router not mounted");let{buildId:h,changeByServerResponse:_,tree:y}=p,b=n.get(d);if(u&&null!==u.current&&(b?b.status===a.CacheStates.LAZY_INITIALIZED&&(b.status=a.CacheStates.READY,b.subTreeData=u.current):(b={status:a.CacheStates.READY,data:null,subTreeData:u.current,parallelRoutes:new Map},n.set(d,b))),!b||b.status===a.CacheStates.LAZY_INITIALIZED){let e=function e(t,r){if(t){let[n,u]=t,o=2===t.length;if((0,f.matchSegment)(r[0],n)&&r[1].hasOwnProperty(u)){if(o){let t=e(void 0,r[1][u]);return[r[0],{...r[1],[u]:[t[0],t[1],t[2],"refetch"]}]}return[r[0],{...r[1],[u]:e(t.slice(2),r[1][u])}]}}return r}(["",...l],y);b={status:a.CacheStates.DATA_FETCH,data:(0,i.fetchServerResponse)(new URL(r,location.origin),e,p.nextUrl,h),subTreeData:null,head:b&&b.status===a.CacheStates.LAZY_INITIALIZED?b.head:void 0,parallelRoutes:b&&b.status===a.CacheStates.LAZY_INITIALIZED?b.parallelRoutes:new Map},n.set(d,b)}if(!b)throw Error("Child node should always exist");if(b.subTreeData&&b.data)throw Error("Child node should not have both subTreeData and data");if(b.data){let[e,t]=(0,o.use)(b.data);b.data=null,setTimeout(()=>{(0,o.startTransition)(()=>{_(y,e,t)})}),(0,o.use)((0,c.createInfinitePromise)())}b.subTreeData||(0,o.use)((0,c.createInfinitePromise)());let v=o.default.createElement(a.LayoutRouterContext.Provider,{value:{tree:s[1][t],childNodes:b.parallelRoutes,url:r}},b.subTreeData);return v}function P(e){let{children:t,loading:r,loadingStyles:n,hasLoading:u}=e;return u?o.default.createElement(o.Suspense,{fallback:o.default.createElement(o.default.Fragment,null,n,r)},t):o.default.createElement(o.default.Fragment,null,t)}function E(e){let{parallelRouterKey:t,segmentPath:r,childProp:n,error:u,errorStyles:l,templateStyles:i,loading:c,loadingStyles:d,hasLoading:b,template:v,notFound:m,notFoundStyles:E,asNotFound:R,styles:j}=e,S=(0,o.useContext)(a.LayoutRouterContext);if(!S)throw Error("invariant expected layout router to be mounted");let{childNodes:T,tree:M,url:w}=S,C=T.get(t);C||(C=new Map,T.set(t,C));let x=M[1][t][0],A=n.segment,N=(0,_.getSegmentValue)(x),I=[x];return o.default.createElement(o.default.Fragment,null,j,I.map(e=>{let 
j=(0,f.matchSegment)(e,A),S=(0,_.getSegmentValue)(e),T=(0,y.createRouterCacheKey)(e);return o.default.createElement(a.TemplateContext.Provider,{key:(0,y.createRouterCacheKey)(e,!0),value:o.default.createElement(g,{segmentPath:r},o.default.createElement(s.ErrorBoundary,{errorComponent:u,errorStyles:l},o.default.createElement(P,{hasLoading:b,loading:c,loadingStyles:d},o.default.createElement(h.NotFoundBoundary,{notFound:m,notFoundStyles:E,asNotFound:R},o.default.createElement(p.RedirectBoundary,null,o.default.createElement(O,{parallelRouterKey:t,url:w,tree:M,childNodes:C,childProp:j?n:null,segmentPath:r,cacheKey:T,isActive:N===S}))))))},i,v)}))}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},50655:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{matchSegment:function(){return u},canSegmentBeOverridden:function(){return o}});let n=r(24778),u=(e,t)=>"string"==typeof e?"string"==typeof t&&e===t:"string"!=typeof t&&e[0]===t[0]&&e[1]===t[1],o=(e,t)=>{var r;return!Array.isArray(e)&&!!Array.isArray(t)&&(null==(r=(0,n.getSegmentParam)(e))?void 0:r.param)===t[0]};("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},4e3:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{ReadonlyURLSearchParams:function(){return p},useSearchParams:function(){return h},usePathname:function(){return _},ServerInsertedHTMLContext:function(){return i.ServerInsertedHTMLContext},useServerInsertedHTML:function(){return i.useServerInsertedHTML},useRouter:function(){return y},useParams:function(){return b},useSelectedLayoutSegments:function(){return v},useSelectedLayoutSegment:function(){return m},redirect:function(){return c.redirect},notFound:function(){return s.notFound}});let n=r(86006),u=r(15456),o=r(76173),l=r(90259),a=r(65143),i=r(73476),c=r(14024),s=r(62951),f=Symbol("internal for urlsearchparams readonly");function d(){return Error("ReadonlyURLSearchParams cannot be modified")}class p{[Symbol.iterator](){return this[f][Symbol.iterator]()}append(){throw d()}delete(){throw d()}set(){throw d()}sort(){throw d()}constructor(e){this[f]=e,this.entries=e.entries.bind(e),this.forEach=e.forEach.bind(e),this.get=e.get.bind(e),this.getAll=e.getAll.bind(e),this.has=e.has.bind(e),this.keys=e.keys.bind(e),this.values=e.values.bind(e),this.toString=e.toString.bind(e)}}function h(){(0,l.clientHookInServerComponentError)("useSearchParams");let e=(0,n.useContext)(o.SearchParamsContext),t=(0,n.useMemo)(()=>e?new p(e):null,[e]);return t}function _(){return(0,l.clientHookInServerComponentError)("usePathname"),(0,n.useContext)(o.PathnameContext)}function y(){(0,l.clientHookInServerComponentError)("useRouter");let e=(0,n.useContext)(u.AppRouterContext);if(null===e)throw Error("invariant expected app router to be mounted");return e}function b(){(0,l.clientHookInServerComponentError)("useParams");let e=(0,n.useContext)(u.GlobalLayoutRouterContext);return e?function e(t,r){void 0===r&&(r={});let n=t[1];for(let t of Object.values(n)){let 
n=t[0],u=Array.isArray(n),o=u?n[1]:n;!o||o.startsWith("__PAGE__")||(u&&(r[n[0]]=n[1]),r=e(t,r))}return r}(e.tree):null}function v(e){void 0===e&&(e="children"),(0,l.clientHookInServerComponentError)("useSelectedLayoutSegments");let{tree:t}=(0,n.useContext)(u.LayoutRouterContext);return function e(t,r,n,u){let o;if(void 0===n&&(n=!0),void 0===u&&(u=[]),n)o=t[1][r];else{var l;let e=t[1];o=null!=(l=e.children)?l:Object.values(e)[0]}if(!o)return u;let i=o[0],c=(0,a.getSegmentValue)(i);return!c||c.startsWith("__PAGE__")?u:(u.push(c),e(o,r,!1,u))}(t,e)}function m(e){void 0===e&&(e="children"),(0,l.clientHookInServerComponentError)("useSelectedLayoutSegment");let t=v(e);return 0===t.length?null:t[0]}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},62484:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"NotFoundBoundary",{enumerable:!0,get:function(){return a}});let n=r(26927),u=n._(r(86006)),o=r(4e3);class l extends u.default.Component{static getDerivedStateFromError(e){if((null==e?void 0:e.digest)==="NEXT_NOT_FOUND")return{notFoundTriggered:!0};throw e}static getDerivedStateFromProps(e,t){return e.pathname!==t.previousPathname&&t.notFoundTriggered?{notFoundTriggered:!1,previousPathname:e.pathname}:{notFoundTriggered:t.notFoundTriggered,previousPathname:e.pathname}}render(){return this.state.notFoundTriggered?u.default.createElement(u.default.Fragment,null,u.default.createElement("meta",{name:"robots",content:"noindex"}),this.props.notFoundStyles,this.props.notFound):this.props.children}constructor(e){super(e),this.state={notFoundTriggered:!!e.asNotFound,previousPathname:e.pathname}}}function a(e){let{notFound:t,notFoundStyles:r,asNotFound:n,children:a}=e,i=(0,o.usePathname)();return t?u.default.createElement(l,{pathname:i,notFound:t,notFoundStyles:r,asNotFound:n},a):u.default.createElement(u.default.Fragment,null,a)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},62951:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{notFound:function(){return n},isNotFoundError:function(){return u}});let r="NEXT_NOT_FOUND";function n(){let e=Error(r);throw e.digest=r,e}function u(e){return(null==e?void 0:e.digest)===r}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},22418:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{RedirectErrorBoundary:function(){return i},RedirectBoundary:function(){return c}});let n=r(25909),u=n._(r(86006)),o=r(4e3),l=r(14024);function a(e){let{redirect:t,reset:r,redirectType:n}=e,a=(0,o.useRouter)();return(0,u.useEffect)(()=>{u.default.startTransition(()=>{n===l.RedirectType.push?a.push(t,{}):a.replace(t,{}),r()})},[t,n,r,a]),null}class i extends u.default.Component{static getDerivedStateFromError(e){if((0,l.isRedirectError)(e)){let 
t=(0,l.getURLFromRedirectError)(e),r=(0,l.getRedirectTypeFromError)(e);return{redirect:t,redirectType:r}}throw e}render(){let{redirect:e,redirectType:t}=this.state;return null!==e&&null!==t?u.default.createElement(a,{redirect:e,redirectType:t,reset:()=>this.setState({redirect:null})}):this.props.children}constructor(e){super(e),this.state={redirect:null,redirectType:null}}}function c(e){let{children:t}=e,r=(0,o.useRouter)();return u.default.createElement(i,{router:r},t)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},14024:function(e,t,r){"use strict";var n,u;Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{RedirectType:function(){return n},getRedirectError:function(){return a},redirect:function(){return i},isRedirectError:function(){return c},getURLFromRedirectError:function(){return s},getRedirectTypeFromError:function(){return f}});let o=r(24437),l="NEXT_REDIRECT";function a(e,t){let r=Error(l);r.digest=l+";"+t+";"+e;let n=o.requestAsyncStorage.getStore();return n&&(r.mutableCookies=n.mutableCookies),r}function i(e,t){throw void 0===t&&(t="replace"),a(e,t)}function c(e){if("string"!=typeof(null==e?void 0:e.digest))return!1;let[t,r,n]=e.digest.split(";",3);return t===l&&("replace"===r||"push"===r)&&"string"==typeof n}function s(e){return c(e)?e.digest.split(";",3)[2]:null}function f(e){if(!c(e))throw Error("Not a redirect error");return e.digest.split(";",3)[1]}(u=n||(n={})).push="push",u.replace="replace",("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},92306:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"default",{enumerable:!0,get:function(){return l}});let n=r(25909),u=n._(r(86006)),o=r(15456);function l(){let e=(0,u.useContext)(o.TemplateContext);return u.default.createElement(u.default.Fragment,null,e)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},68654:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"applyFlightData",{enumerable:!0,get:function(){return l}});let n=r(15456),u=r(90743),o=r(23033);function l(e,t,r,l){void 0===l&&(l=!1);let[a,i,c]=r.slice(-3);return null!==i&&(3===r.length?(t.status=n.CacheStates.READY,t.subTreeData=i,(0,u.fillLazyItemsTillLeafWithHead)(t,e,a,c,l)):(t.status=n.CacheStates.READY,t.subTreeData=e.subTreeData,t.parallelRoutes=new Map(e.parallelRoutes),(0,o.fillCacheWithNewSubTreeData)(t,e,r,l)),!0)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},76031:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"applyRouterStatePatchToTree",{enumerable:!0,get:function(){return function e(t,r,o){let l;let[a,i,,,c]=r;if(1===t.length){let e=u(r,o);return e}let[s,f]=t;if(!(0,n.matchSegment)(s,a))return null;let d=2===t.length;if(d)l=u(i[f],o);else 
if(null===(l=e(t.slice(2),i[f],o)))return null;let p=[t[0],{...i,[f]:l}];return c&&(p[4]=!0),p}}});let n=r(50655);function u(e,t){let[r,o]=e,[l,a]=t;if("__DEFAULT__"===l&&"__DEFAULT__"!==r)return e;if((0,n.matchSegment)(r,l)){let t={};for(let e in o){let r=void 0!==a[e];r?t[e]=u(o[e],a[e]):t[e]=o[e]}for(let e in a)t[e]||(t[e]=a[e]);let n=[r,t];return e[2]&&(n[2]=e[2]),e[3]&&(n[3]=e[3]),e[4]&&(n[4]=e[4]),n}return t}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},41781:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{extractPathFromFlightRouterState:function(){return a},computeChangedPath:function(){return i}});let n=r(47399),u=r(50655),o=e=>"string"==typeof e?e:e[1];function l(e){return e.split("/").reduce((e,t)=>""===t||t.startsWith("(")&&t.endsWith(")")?e:e+"/"+t,"")||"/"}function a(e){var t;let r=Array.isArray(e[0])?e[0][1]:e[0];if("__DEFAULT__"===r||n.INTERCEPTION_ROUTE_MARKERS.some(e=>r.startsWith(e)))return;if(r.startsWith("__PAGE__"))return"";let u=[r],o=null!=(t=e[1])?t:{},i=o.children?a(o.children):void 0;if(void 0!==i)u.push(i);else for(let[e,t]of Object.entries(o)){if("children"===e)continue;let r=a(t);void 0!==r&&u.push(r)}return l(u.join("/"))}function i(e,t){let r=function e(t,r){let[l,i]=t,[c,s]=r,f=o(l),d=o(c);if(n.INTERCEPTION_ROUTE_MARKERS.some(e=>f.startsWith(e)||d.startsWith(e)))return"";if(!(0,u.matchSegment)(l,c)){var p;return null!=(p=a(r))?p:""}for(let t in i)if(s[t]){let r=e(i[t],s[t]);if(null!==r)return o(c)+"/"+r}return null}(e,t);return null==r||"/"===r?r:l(r)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},8744:function(e,t){"use strict";function r(e,t){return void 0===t&&(t=!0),e.pathname+e.search+(t?e.hash:"")}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"createHrefFromUrl",{enumerable:!0,get:function(){return r}}),("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},89343:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"createInitialRouterState",{enumerable:!0,get:function(){return a}});let n=r(15456),u=r(8744),o=r(90743),l=r(41781);function a(e){var t;let{buildId:r,initialTree:a,children:i,initialCanonicalUrl:c,initialParallelRoutes:s,isServer:f,location:d,initialHead:p}=e,h={status:n.CacheStates.READY,data:null,subTreeData:i,parallelRoutes:f?new Map:s};return(null===s||0===s.size)&&(0,o.fillLazyItemsTillLeafWithHead)(h,void 0,a,p),{buildId:r,tree:a,cache:h,prefetchCache:new Map,pushRef:{pendingPush:!1,mpaNavigation:!1},focusAndScrollRef:{apply:!1,hashFragment:null,segmentPaths:[]},canonicalUrl:d?(0,u.createHrefFromUrl)(d):c,nextUrl:null!=(t=(0,l.extractPathFromFlightRouterState)(a)||(null==d?void 0:d.pathname))?t:null}}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},76486:function(e,t,r){"use 
strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"createOptimisticTree",{enumerable:!0,get:function(){return function e(t,r,u){let o;let[l,a,i,c,s]=r||[null,{}],f=t[0],d=1===t.length,p=null!==l&&(0,n.matchSegment)(l,f),h=Object.keys(a).length>1,_=!r||!p||h,y={};if(null!==l&&p&&(y=a),!d&&!h){let r=e(t.slice(1),y?y.children:null,u||_);o=r}let b=[f,{...y,...o?{children:o}:{}}];return i&&(b[2]=i),!u&&_?b[3]="refetch":p&&c&&(b[3]=c),p&&s&&(b[4]=s),b}}});let n=r(50655);("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},7718:function(e,t){"use strict";function r(e){return e.status="pending",e.then(t=>{"pending"===e.status&&(e.status="fulfilled",e.value=t)},t=>{"pending"===e.status&&(e.status="rejected",e.value=t)}),e}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"createRecordFromThenable",{enumerable:!0,get:function(){return r}}),("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},49101:function(e,t){"use strict";function r(e,t){return void 0===t&&(t=!1),Array.isArray(e)?e[0]+"|"+e[1]+"|"+e[2]:t&&e.startsWith("__PAGE__")?"__PAGE__":e}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"createRouterCacheKey",{enumerable:!0,get:function(){return r}}),("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},52368:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"fetchServerResponse",{enumerable:!0,get:function(){return s}});let n=r(35456),u=r(34852),o=r(2353),l=r(303),a=r(74741),i=r(77279);function c(e){return[(0,o.urlToUrlWithoutFlightMarker)(e).toString(),void 0]}async function s(e,t,r,s,f){let d={[u.RSC]:"1",[u.NEXT_ROUTER_STATE_TREE]:encodeURIComponent(JSON.stringify(t))};f===a.PrefetchKind.AUTO&&(d[u.NEXT_ROUTER_PREFETCH]="1"),r&&(d[u.NEXT_URL]=r);let p=(0,i.hexHash)([d[u.NEXT_ROUTER_PREFETCH]||"0",d[u.NEXT_ROUTER_STATE_TREE]].join(","));try{let t=new URL(e);t.pathname.endsWith("/")?t.pathname+="index.txt":t.pathname+=".txt",t.searchParams.set(u.NEXT_RSC_UNION_QUERY,p);let r=await fetch(t,{credentials:"same-origin",headers:d}),a=(0,o.urlToUrlWithoutFlightMarker)(r.url),i=r.redirected?a:void 0,f=r.headers.get("content-type")||"",h=f===u.RSC_CONTENT_TYPE_HEADER;if(h||(h=f.startsWith("text/plain")),!h||!r.ok)return c(a.toString());let[_,y]=await (0,n.createFromFetch)(Promise.resolve(r),{callServer:l.callServer});if(s!==_)return c(r.url);return[y,i]}catch(t){return console.error("Failed to fetch RSC payload. 
Falling back to browser navigation.",t),[e.toString(),void 0]}}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},70155:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"fillCacheWithDataProperty",{enumerable:!0,get:function(){return function e(t,r,o,l,a){void 0===a&&(a=!1);let i=o.length<=2,[c,s]=o,f=(0,u.createRouterCacheKey)(s),d=r.parallelRoutes.get(c);if(!d||a&&r.parallelRoutes.size>1)return{bailOptimistic:!0};let p=t.parallelRoutes.get(c);p&&p!==d||(p=new Map(d),t.parallelRoutes.set(c,p));let h=d.get(f),_=p.get(f);if(i){_&&_.data&&_!==h||p.set(f,{status:n.CacheStates.DATA_FETCH,data:l(),subTreeData:null,parallelRoutes:new Map});return}if(!_||!h){_||p.set(f,{status:n.CacheStates.DATA_FETCH,data:l(),subTreeData:null,parallelRoutes:new Map});return}return _===h&&(_={status:_.status,data:_.data,subTreeData:_.subTreeData,parallelRoutes:new Map(_.parallelRoutes)},p.set(f,_)),e(_,h,o.slice(2),l)}}});let n=r(15456),u=r(49101);("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},23033:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"fillCacheWithNewSubTreeData",{enumerable:!0,get:function(){return function e(t,r,a,i){let c=a.length<=5,[s,f]=a,d=(0,l.createRouterCacheKey)(f),p=r.parallelRoutes.get(s);if(!p)return;let h=t.parallelRoutes.get(s);h&&h!==p||(h=new Map(p),t.parallelRoutes.set(s,h));let _=p.get(d),y=h.get(d);if(c){y&&y.data&&y!==_||(y={status:n.CacheStates.READY,data:null,subTreeData:a[3],parallelRoutes:_?new Map(_.parallelRoutes):new Map},_&&(0,u.invalidateCacheByRouterState)(y,_,a[2]),(0,o.fillLazyItemsTillLeafWithHead)(y,_,a[2],a[4],i),h.set(d,y));return}y&&_&&(y===_&&(y={status:y.status,data:y.data,subTreeData:y.subTreeData,parallelRoutes:new Map(y.parallelRoutes)},h.set(d,y)),e(y,_,a.slice(2),i))}}});let n=r(15456),u=r(18179),o=r(90743),l=r(49101);("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},90743:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"fillLazyItemsTillLeafWithHead",{enumerable:!0,get:function(){return function e(t,r,o,l,a){let i=0===Object.keys(o[1]).length;if(i){t.head=l;return}for(let i in o[1]){let c=o[1][i],s=c[0],f=(0,u.createRouterCacheKey)(s);if(r){let u=r.parallelRoutes.get(i);if(u){let r=new Map(u),o=r.get(f),s=a&&o?{status:o.status,data:o.data,subTreeData:o.subTreeData,parallelRoutes:new Map(o.parallelRoutes)}:{status:n.CacheStates.LAZY_INITIALIZED,data:null,subTreeData:null,parallelRoutes:new Map(null==o?void 0:o.parallelRoutes)};r.set(f,s),e(s,o,c,l,a),t.parallelRoutes.set(i,r);continue}}let d={status:n.CacheStates.LAZY_INITIALIZED,data:null,subTreeData:null,parallelRoutes:new Map},p=t.parallelRoutes.get(i);p?p.set(f,d):t.parallelRoutes.set(i,new Map([[f,d]])),e(d,void 0,c,l,a)}}}});let n=r(15456),u=r(49101);("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 
0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},29231:function(e,t){"use strict";var r,n;function u(e){let{kind:t,prefetchTime:r,lastUsedTime:n}=e;return Date.now()<(null!=n?n:r)+3e4?n?"reusable":"fresh":"auto"===t&&Date.now()["children",e]).flat(),p=(0,c.fillCacheWithDataProperty)(f,e.cache,d,()=>(t||(t=(0,o.createRecordFromThenable)((0,u.fetchServerResponse)(r,i,e.nextUrl,e.buildId))),t),!0);if(!(null==p?void 0:p.bailOptimistic))return R.previousTree=e.tree,R.patchedTree=i,R.pendingPush=C,R.hashFragment=M,R.shouldScroll=S,R.scrollableSegments=[],R.cache=f,R.canonicalUrl=w,e.prefetchCache.set((0,a.createHrefFromUrl)(r,!1),{data:Promise.resolve(t),kind:h.PrefetchKind.TEMPORARY,prefetchTime:Date.now(),treeAtTimeOfPrefetch:e.tree,lastUsedTime:Date.now()}),(0,_.handleMutable)(e,R)}if(!A){let t=(0,o.createRecordFromThenable)((0,u.fetchServerResponse)(r,e.tree,e.nextUrl,e.buildId,void 0)),n={data:Promise.resolve(t),kind:h.PrefetchKind.TEMPORARY,prefetchTime:Date.now(),treeAtTimeOfPrefetch:e.tree,lastUsedTime:null};e.prefetchCache.set((0,a.createHrefFromUrl)(r,!1),n),A=n}let N=(0,b.getPrefetchEntryCacheStatus)(A),{treeAtTimeOfPrefetch:I,data:D}=A,[k,F]=(0,l.readRecordValue)(D);if(A.lastUsedTime=Date.now(),"string"==typeof k)return m(e,R,k,C);let U=e.tree,L=e.cache,H=[];for(let t of k){let o=t.slice(0,-4),l=t.slice(-3)[0],a=["",...o],s=(0,f.applyRouterStatePatchToTree)(a,U,l);if(null===s&&(s=(0,f.applyRouterStatePatchToTree)(a,I,l)),null!==s){if((0,p.isNavigatingToNewRootLayout)(U,s))return m(e,R,w,C);let f=(0,y.applyFlightData)(L,E,t,"auto"===A.kind&&N===b.PrefetchCacheEntryStatus.reusable);f||N!==b.PrefetchCacheEntryStatus.stale||(f=function(e,t,r,u,o){let l=!1;e.status=n.CacheStates.READY,e.subTreeData=t.subTreeData,e.parallelRoutes=new Map(t.parallelRoutes);let a=g(u).map(e=>[...r,...e]);for(let r of a){let n=(0,c.fillCacheWithDataProperty)(e,t,r,o);(null==n?void 0:n.bailOptimistic)||(l=!0)}return l}(E,L,o,l,()=>(0,u.fetchServerResponse)(r,U,e.nextUrl,e.buildId)));let h=(0,d.shouldHardNavigate)(a,U);for(let e of(h?(E.status=n.CacheStates.READY,E.subTreeData=L.subTreeData,(0,i.invalidateCacheBelowFlightSegmentPath)(E,L,o),R.cache=E):f&&(R.cache=E),L=E,U=s,g(l))){let t=[...o,...e];"__DEFAULT__"!==t[t.length-1]&&H.push(t)}}}return R.previousTree=e.tree,R.patchedTree=U,R.canonicalUrl=F?(0,a.createHrefFromUrl)(F):w,R.pendingPush=C,R.scrollableSegments=H,R.hashFragment=M,R.shouldScroll=S,(0,_.handleMutable)(e,R)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},72763:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"prefetchReducer",{enumerable:!0,get:function(){return c}});let n=r(8744),u=r(52368),o=r(74741),l=r(7718),a=r(62268),i=r(34852);function c(e,t){(0,a.prunePrefetchCache)(e.prefetchCache);let{url:r}=t;r.searchParams.delete(i.NEXT_RSC_UNION_QUERY);let c=(0,n.createHrefFromUrl)(r,!1),s=e.prefetchCache.get(c);if(s&&(s.kind===o.PrefetchKind.TEMPORARY&&e.prefetchCache.set(c,{...s,kind:t.kind}),!(s.kind===o.PrefetchKind.AUTO&&t.kind===o.PrefetchKind.FULL)))return e;let f=(0,l.createRecordFromThenable)((0,u.fetchServerResponse)(r,e.tree,e.nextUrl,e.buildId,t.kind));return 
e.prefetchCache.set(c,{treeAtTimeOfPrefetch:e.tree,data:f,kind:t.kind,prefetchTime:Date.now(),lastUsedTime:null}),e}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},62268:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"prunePrefetchCache",{enumerable:!0,get:function(){return u}});let n=r(29231);function u(e){for(let[t,r]of e)(0,n.getPrefetchEntryCacheStatus)(r)===n.PrefetchCacheEntryStatus.expired&&e.delete(t)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},49901:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"refreshReducer",{enumerable:!0,get:function(){return p}});let n=r(52368),u=r(7718),o=r(90168),l=r(8744),a=r(76031),i=r(58999),c=r(86664),s=r(14129),f=r(15456),d=r(90743);function p(e,t){let{cache:r,mutable:p,origin:h}=t,_=e.canonicalUrl,y=e.tree,b=JSON.stringify(p.previousTree)===JSON.stringify(y);if(b)return(0,s.handleMutable)(e,p);r.data||(r.data=(0,u.createRecordFromThenable)((0,n.fetchServerResponse)(new URL(_,h),[y[0],y[1],y[2],"refetch"],e.nextUrl,e.buildId)));let[v,m]=(0,o.readRecordValue)(r.data);if("string"==typeof v)return(0,c.handleExternalUrl)(e,p,v,e.pushRef.pendingPush);for(let t of(r.data=null,v)){if(3!==t.length)return console.log("REFRESH FAILED"),e;let[n]=t,u=(0,a.applyRouterStatePatchToTree)([""],y,n);if(null===u)throw Error("SEGMENT MISMATCH");if((0,i.isNavigatingToNewRootLayout)(y,u))return(0,c.handleExternalUrl)(e,p,_,e.pushRef.pendingPush);let o=m?(0,l.createHrefFromUrl)(m):void 0;m&&(p.canonicalUrl=o);let[s,h]=t.slice(-2);null!==s&&(r.status=f.CacheStates.READY,r.subTreeData=s,(0,d.fillLazyItemsTillLeafWithHead)(r,void 0,n,h),p.cache=r,p.prefetchCache=new Map),p.previousTree=y,p.patchedTree=u,p.canonicalUrl=_,y=u}return(0,s.handleMutable)(e,p)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},34520:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"restoreReducer",{enumerable:!0,get:function(){return u}});let n=r(8744);function u(e,t){let{url:r,tree:u}=t,o=(0,n.createHrefFromUrl)(r);return{buildId:e.buildId,canonicalUrl:o,pushRef:e.pushRef,focusAndScrollRef:e.focusAndScrollRef,cache:e.cache,prefetchCache:e.prefetchCache,tree:u,nextUrl:r.pathname}}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},87366:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"serverActionReducer",{enumerable:!0,get:function(){return p}});let n=r(303),u=r(34852),o=r(7718),l=r(90168),a=r(35456),i=r(74741),c=r(12409),s=r(8744),f=r(14024);async function d(e,t){let r,{actionId:o,actionArgs:l}=t,i=await (0,a.encodeReply)(l),s=await 
fetch("",{method:"POST",headers:{Accept:u.RSC_CONTENT_TYPE_HEADER,"Next-Action":o,[u.NEXT_ROUTER_STATE_TREE]:JSON.stringify(e.tree),...e.nextUrl?{[u.NEXT_URL]:e.nextUrl}:{}},body:i}),f=s.headers.get("x-action-redirect");try{let e=JSON.parse(s.headers.get("x-action-revalidated")||"[[],0,0]");r={paths:e[0]||[],tag:!!e[1],cookie:e[2]}}catch(e){r={paths:[],tag:!1,cookie:!1}}let d=f?new URL((0,c.addBasePath)(f),window.location.origin):void 0;if(s.headers.get("content-type")===u.RSC_CONTENT_TYPE_HEADER){let e=await (0,a.createFromFetch)(Promise.resolve(s),{callServer:n.callServer});if(f){let[,t]=e;return{actionFlightData:null==t?void 0:t[1],redirectLocation:d,revalidatedParts:r}}{let[t,[,n]]=null!=e?e:[];return{actionResult:t,actionFlightData:n,redirectLocation:d,revalidatedParts:r}}}return{redirectLocation:d,revalidatedParts:r}}function p(e,t){if(t.mutable.serverActionApplied)return e;t.mutable.inFlightServerAction||(t.mutable.previousTree=e.tree,t.mutable.previousUrl=e.canonicalUrl,t.mutable.inFlightServerAction=(0,o.createRecordFromThenable)(d(e,t)));try{var r,n;let{actionResult:u,actionFlightData:a,redirectLocation:c,revalidatedParts:d}=(0,l.readRecordValue)(t.mutable.inFlightServerAction);if(d.tag||d.cookie?e.prefetchCache.clear():d.paths.length>0&&e.prefetchCache.clear(),c){if(a){let n=(0,s.createHrefFromUrl)(c,!1),u=e.prefetchCache.get(n);e.prefetchCache.set(n,{data:(0,o.createRecordFromThenable)(Promise.resolve([a,void 0])),kind:null!=(r=null==u?void 0:u.kind)?r:i.PrefetchKind.TEMPORARY,prefetchTime:Date.now(),treeAtTimeOfPrefetch:t.mutable.previousTree,lastUsedTime:null})}t.reject((0,f.getRedirectError)(c.toString(),f.RedirectType.push))}else{if(a){let r=(0,s.createHrefFromUrl)(new URL(t.mutable.previousUrl,window.location.origin),!1),u=e.prefetchCache.get(r);e.prefetchCache.set((0,s.createHrefFromUrl)(new URL(t.mutable.previousUrl,window.location.origin),!1),{data:(0,o.createRecordFromThenable)(Promise.resolve([a,void 0])),kind:null!=(n=null==u?void 0:u.kind)?n:i.PrefetchKind.TEMPORARY,prefetchTime:Date.now(),treeAtTimeOfPrefetch:t.mutable.previousTree,lastUsedTime:null}),setTimeout(()=>{t.changeByServerResponse(t.mutable.previousTree,a,void 0)})}t.resolve(u)}}catch(e){if("rejected"===e.status)t.reject(e.value);else throw e}return t.mutable.serverActionApplied=!0,e}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},77519:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"serverPatchReducer",{enumerable:!0,get:function(){return c}});let n=r(8744),u=r(76031),o=r(58999),l=r(86664),a=r(68654),i=r(14129);function c(e,t){let{flightData:r,previousTree:c,overrideCanonicalUrl:s,cache:f,mutable:d}=t,p=JSON.stringify(c)===JSON.stringify(e.tree);if(!p)return console.log("TREE MISMATCH"),e;if(d.previousTree)return(0,i.handleMutable)(e,d);if("string"==typeof r)return(0,l.handleExternalUrl)(e,d,r,e.pushRef.pendingPush);let h=e.tree,_=e.cache;for(let t of r){let r=t.slice(0,-4),[i]=t.slice(-3,-2),c=(0,u.applyRouterStatePatchToTree)(["",...r],h,i);if(null===c)throw Error("SEGMENT MISMATCH");if((0,o.isNavigatingToNewRootLayout)(h,c))return(0,l.handleExternalUrl)(e,d,e.canonicalUrl,e.pushRef.pendingPush);let p=s?(0,n.createHrefFromUrl)(s):void 
0;p&&(d.canonicalUrl=p),(0,a.applyFlightData)(_,f,t),d.previousTree=h,d.patchedTree=c,d.cache=f,_=f,h=c}return(0,i.handleMutable)(e,d)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},74741:function(e,t){"use strict";var r,n;Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{PrefetchKind:function(){return r},ACTION_REFRESH:function(){return u},ACTION_NAVIGATE:function(){return o},ACTION_RESTORE:function(){return l},ACTION_SERVER_PATCH:function(){return a},ACTION_PREFETCH:function(){return i},ACTION_FAST_REFRESH:function(){return c},ACTION_SERVER_ACTION:function(){return s}});let u="refresh",o="navigate",l="restore",a="server-patch",i="prefetch",c="fast-refresh",s="server-action";(n=r||(r={})).AUTO="auto",n.FULL="full",n.TEMPORARY="temporary",("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},85426:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"reducer",{enumerable:!0,get:function(){return f}});let n=r(74741),u=r(86664),o=r(77519),l=r(34520),a=r(49901),i=r(72763),c=r(73800),s=r(87366),f=function(e,t){switch(t.type){case n.ACTION_NAVIGATE:return(0,u.navigateReducer)(e,t);case n.ACTION_SERVER_PATCH:return(0,o.serverPatchReducer)(e,t);case n.ACTION_RESTORE:return(0,l.restoreReducer)(e,t);case n.ACTION_REFRESH:return(0,a.refreshReducer)(e,t);case n.ACTION_FAST_REFRESH:return(0,c.fastRefreshReducer)(e,t);case n.ACTION_PREFETCH:return(0,i.prefetchReducer)(e,t);case n.ACTION_SERVER_ACTION:return(0,s.serverActionReducer)(e,t);default:throw Error("Unknown action")}};("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},34712:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"shouldHardNavigate",{enumerable:!0,get:function(){return function e(t,r){let[u,o]=r,[l,a]=t;if(!(0,n.matchSegment)(l,u))return!!Array.isArray(l);let i=t.length<=2;return!i&&e(t.slice(2),o[a])}}});let n=r(50655);("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},98323:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"createSearchParamsBailoutProxy",{enumerable:!0,get:function(){return u}});let n=r(62620);function u(){return new Proxy({},{get(e,t){"string"==typeof t&&(0,n.staticGenerationBailout)("searchParams."+t)}})}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},62620:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"staticGenerationBailout",{enumerable:!0,get:function(){return l}});let n=r(47308),u=r(30094);class o extends Error{constructor(...e){super(...e),this.code="NEXT_STATIC_GEN_BAILOUT"}}let l=(e,t)=>{let 
r=u.staticGenerationAsyncStorage.getStore();if(null==r?void 0:r.forceStatic)return!0;if(null==r?void 0:r.dynamicShouldError){let{dynamic:r="error",link:n}=t||{};throw new o('Page with `dynamic = "'+r+"\"` couldn't be rendered statically because it used `"+e+"`."+(n?" See more info here: "+n:""))}if(r&&(r.revalidate=0),null==r?void 0:r.isStaticGeneration){let t=new n.DynamicServerError(e);throw r.dynamicUsageDescription=e,r.dynamicUsageStack=t.stack,t}return!1};("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},58531:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"default",{enumerable:!0,get:function(){return l}});let n=r(26927),u=n._(r(86006)),o=r(98323);function l(e){let{Component:t,propsForComponent:r}=e,n=(0,o.createSearchParamsBailoutProxy)();return u.default.createElement(t,{searchParams:n,...r})}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},18688:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"useReducerWithReduxDevtools",{enumerable:!0,get:function(){return o}});let n=r(86006);function u(e){if(e instanceof Map){let t={};for(let[r,n]of e.entries()){if("function"==typeof n){t[r]="fn()";continue}if("object"==typeof n&&null!==n){if(n.$$typeof){t[r]=n.$$typeof.toString();continue}if(n._bundlerConfig){t[r]="FlightData";continue}}t[r]=u(n)}return t}if("object"==typeof e&&null!==e){let t={};for(let r in e){let n=e[r];if("function"==typeof n){t[r]="fn()";continue}if("object"==typeof n&&null!==n){if(n.$$typeof){t[r]=n.$$typeof.toString();continue}if(n.hasOwnProperty("_bundlerConfig")){t[r]="FlightData";continue}}t[r]=u(n)}return t}return Array.isArray(e)?e.map(u):e}let o=function(e,t){let r=(0,n.useRef)(),o=(0,n.useRef)();(0,n.useEffect)(()=>{if(!r.current&&!1!==o.current){if(void 0===o.current&&void 0===window.__REDUX_DEVTOOLS_EXTENSION__){o.current=!1;return}return r.current=window.__REDUX_DEVTOOLS_EXTENSION__.connect({instanceId:8e3,name:"next-router"}),r.current&&r.current.init(u(t)),()=>{r.current=void 0}}},[t]);let[l,a]=(0,n.useReducer)((t,n)=>{let o=e(t,n);return r.current&&r.current.send(n,u(o)),o},t),i=(0,n.useCallback)(()=>{r.current&&r.current.send({type:"RENDER_SYNC"},u(l))},[l]);return[l,a,i]};("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},75588:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"normalizePathTrailingSlash",{enumerable:!0,get:function(){return o}});let n=r(61402),u=r(74035),o=e=>{if(!e.startsWith("/"))return e;let{pathname:t,query:r,hash:o}=(0,u.parsePath)(e);return""+(0,n.removeTrailingSlash)(t)+r+o};("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},59214:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"default",{enumerable:!0,get:function(){return u}});let n=r(98687);function u(e){let 
t="function"==typeof reportError?reportError:e=>{window.console.error(e)};e.digest!==n.NEXT_DYNAMIC_NO_SSR_CODE&&t(e)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},15456:function(e,t,r){"use strict";var n,u;Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{CacheStates:function(){return n},AppRouterContext:function(){return a},LayoutRouterContext:function(){return i},GlobalLayoutRouterContext:function(){return c},TemplateContext:function(){return s}});let o=r(26927),l=o._(r(86006));(u=n||(n={})).LAZY_INITIALIZED="LAZYINITIALIZED",u.DATA_FETCH="DATAFETCH",u.READY="READY";let a=l.default.createContext(null),i=l.default.createContext(null),c=l.default.createContext(null),s=l.default.createContext(null)},77279:function(e,t){"use strict";function r(e){let t=5381;for(let r=0;r!t||"("===t[0]&&t.endsWith(")")||"@"===t[0]||("page"===t||"route"===t)&&r===n.length-1?e:e+"/"+t,""))}function o(e,t){return t?e.replace(/\.rsc($|\?)/,"$1"):e}},92998:function(e,t){"use strict";function r(e,t){void 0===t&&(t={});let r=document.documentElement,n=r.style.scrollBehavior;r.style.scrollBehavior="auto",t.dontForceLayout||r.getClientRects(),e(),r.style.scrollBehavior=n}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"handleSmoothScroll",{enumerable:!0,get:function(){return r}})},30753:function(e,t){"use strict";function r(e){return/Googlebot|Mediapartners-Google|AdsBot-Google|googleweblight|Storebot-Google|Google-PageRenderer|Bingbot|BingPreview|Slurp|DuckDuckBot|baiduspider|yandex|sogou|LinkedInBot|bitlybot|tumblr|vkShare|quora link preview|facebookexternalhit|facebookcatalog|Twitterbot|applebot|redditbot|Slackbot|Discordbot|WhatsApp|SkypeUriPreview|ia_archiver/i.test(e)}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"isBot",{enumerable:!0,get:function(){return r}})},74035:function(e,t){"use strict";function r(e){let t=e.indexOf("#"),r=e.indexOf("?"),n=r>-1&&(t<0||r-1?{pathname:e.substring(0,n?r:t),query:n?e.substring(r,t>-1?t:void 0):"",hash:t>-1?e.slice(t):""}:{pathname:e,query:"",hash:""}}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"parsePath",{enumerable:!0,get:function(){return r}})},61402:function(e,t){"use strict";function r(e){return e.replace(/\/$/,"")||"/"}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"removeTrailingSlash",{enumerable:!0,get:function(){return r}})},73476:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{ServerInsertedHTMLContext:function(){return o},useServerInsertedHTML:function(){return l}});let n=r(25909),u=n._(r(86006)),o=u.default.createContext(null);function l(e){let t=(0,u.useContext)(o);t&&t(e)}},75862:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"createAsyncLocalStorage",{enumerable:!0,get:function(){return o}});let r=Error("Invariant: AsyncLocalStorage accessed in runtime where it is not available");class n{disable(){throw r}getStore(){}run(){throw r}exit(){throw r}enterWith(){throw r}}let u=globalThis.AsyncLocalStorage;function o(){return u?new u:new n}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 
0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},24437:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"requestAsyncStorage",{enumerable:!0,get:function(){return u}});let n=r(75862),u=(0,n.createAsyncLocalStorage)();("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},30094:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"staticGenerationAsyncStorage",{enumerable:!0,get:function(){return u}});let n=r(75862),u=(0,n.createAsyncLocalStorage)();("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},93194:function(e,t,r){"use strict";var n=r(8431);t.createRoot=n.createRoot,t.hydrateRoot=n.hydrateRoot},8431:function(e,t,r){"use strict";!function e(){if("undefined"!=typeof __REACT_DEVTOOLS_GLOBAL_HOOK__&&"function"==typeof __REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE)try{__REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE(e)}catch(e){console.error(e)}}(),e.exports=r(42614)},82672:function(e,t,r){"use strict";/** - * @license React - * react-server-dom-webpack-client.browser.production.min.js - * - * Copyright (c) Meta Platforms, Inc. and affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */var n=r(8431),u=r(86006),o={stream:!0},l=new Map;function a(e){var t=globalThis.__next_require__(e);return"function"!=typeof t.then||"fulfilled"===t.status?null:(t.then(function(e){t.status="fulfilled",t.value=e},function(e){t.status="rejected",t.reason=e}),t)}function i(){}var c=n.__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED.Dispatcher,s=Symbol.for("react.element"),f=Symbol.for("react.lazy"),d=Symbol.for("react.default_value"),p=Symbol.iterator,h=Array.isArray,_=new WeakMap,y=u.__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED.ContextRegistry;function b(e,t,r,n){this.status=e,this.value=t,this.reason=r,this._response=n}function v(e){switch(e.status){case"resolved_model":j(e);break;case"resolved_module":S(e)}switch(e.status){case"fulfilled":return e.value;case"pending":case"blocked":throw e;default:throw e.reason}}function m(e,t){for(var r=0;rd?(h=d,d=3,f++):(h=0,d=3);continue;case 2:44===(v=s[f++])?d=4:_=_<<4|(96s.length&&(v=-1)}var m=s.byteOffset+f;if(-1>>1,u=e[n];if(0>>1;no(i,r))co(s,i)?(e[n]=s,e[c]=r,n=c):(e[n]=i,e[a]=r,n=a);else if(co(s,r))e[n]=s,e[c]=r,n=c;else break}}return t}function o(e,t){var r=e.sortIndex-t.sortIndex;return 0!==r?r:e.id-t.id}if(t.unstable_now=void 0,"object"==typeof performance&&"function"==typeof performance.now){var l,a=performance;t.unstable_now=function(){return a.now()}}else{var i=Date,c=i.now();t.unstable_now=function(){return i.now()-c}}var s=[],f=[],d=1,p=null,h=3,_=!1,y=!1,b=!1,v="function"==typeof setTimeout?setTimeout:null,m="function"==typeof clearTimeout?clearTimeout:null,g="undefined"!=typeof setImmediate?setImmediate:null;function O(e){for(var t=n(f);null!==t;){if(null===t.callback)u(f);else if(t.startTime<=e)u(f),t.sortIndex=t.expirationTime,r(s,t);else break;t=n(f)}}function P(e){if(b=!1,O(e),!y){if(null!==n(s))y=!0,N(E);else{var t=n(f);null!==t&&I(P,t.startTime-e)}}}function 
E(e,r){y=!1,b&&(b=!1,m(S),S=-1),_=!0;var o=h;try{e:{for(O(r),p=n(s);null!==p&&(!(p.expirationTime>r)||e&&!w());){var l=p.callback;if("function"==typeof l){p.callback=null,h=p.priorityLevel;var a=l(p.expirationTime<=r);if(r=t.unstable_now(),"function"==typeof a){p.callback=a,O(r);var i=!0;break e}p===n(s)&&u(s),O(r)}else u(s);p=n(s)}if(null!==p)i=!0;else{var c=n(f);null!==c&&I(P,c.startTime-r),i=!1}}return i}finally{p=null,h=o,_=!1}}"undefined"!=typeof navigator&&void 0!==navigator.scheduling&&void 0!==navigator.scheduling.isInputPending&&navigator.scheduling.isInputPending.bind(navigator.scheduling);var R=!1,j=null,S=-1,T=5,M=-1;function w(){return!(t.unstable_now()-Me||125l?(e.sortIndex=o,r(f,e),null===n(s)&&e===n(f)&&(b?(m(S),S=-1):b=!0,I(P,o-l))):(e.sortIndex=a,r(s,e),y||_||(y=!0,N(E))),e},t.unstable_shouldYield=w,t.unstable_wrapCallback=function(e){var t=h;return function(){var r=h;h=t;try{return e.apply(this,arguments)}finally{h=r}}}},26183:function(e,t,r){"use strict";e.exports=r(24248)},24778:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"getSegmentParam",{enumerable:!0,get:function(){return u}});let n=r(47399);function u(e){let t=n.INTERCEPTION_ROUTE_MARKERS.find(t=>e.startsWith(t));return(t&&(e=e.slice(t.length)),e.startsWith("[[...")&&e.endsWith("]]"))?{type:"optional-catchall",param:e.slice(5,-2)}:e.startsWith("[...")&&e.endsWith("]")?{type:"catchall",param:e.slice(4,-1)}:e.startsWith("[")&&e.endsWith("]")?{type:"dynamic",param:e.slice(1,-1)}:null}},47399:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{INTERCEPTION_ROUTE_MARKERS:function(){return u},isInterceptionRouteAppPath:function(){return o},extractInterceptionRouteInformation:function(){return l}});let n=r(24241),u=["(..)(..)","(.)","(..)","(...)"];function o(e){return void 0!==e.split("/").find(e=>u.find(t=>e.startsWith(t)))}function l(e){let t,r,o;for(let n of e.split("/"))if(r=u.find(e=>n.startsWith(e))){[t,o]=e.split(r,2);break}if(!t||!r||!o)throw Error(`Invalid interception route: ${e}. Must be in the format //(..|...|..)(..)/`);switch(t=(0,n.normalizeAppPath)(t),r){case"(.)":o="/"===t?`/${o}`:t+"/"+o;break;case"(..)":if("/"===t)throw Error(`Invalid interception route: ${e}. Cannot use (..) marker at the root level, use (.) instead.`);o=t.split("/").slice(0,-1).concat(o).join("/");break;case"(...)":o="/"+o;break;case"(..)(..)":let l=t.split("/");if(l.length<=2)throw Error(`Invalid interception route: ${e}. Cannot use (..)(..) 
marker at the root level or one level up.`);o=l.slice(0,-2).concat(o).join("/");break;default:throw Error("Invariant: unexpected marker")}return{interceptingRoute:t,interceptedRoute:o}}},26927:function(e,t,r){"use strict";function n(e){return e&&e.__esModule?e:{default:e}}r.r(t),r.d(t,{_:function(){return n},_interop_require_default:function(){return n}})},25909:function(e,t,r){"use strict";function n(e){if("function"!=typeof WeakMap)return null;var t=new WeakMap,r=new WeakMap;return(n=function(e){return e?r:t})(e)}function u(e,t){if(!t&&e&&e.__esModule)return e;if(null===e||"object"!=typeof e&&"function"!=typeof e)return{default:e};var r=n(t);if(r&&r.has(e))return r.get(e);var u={},o=Object.defineProperty&&Object.getOwnPropertyDescriptor;for(var l in e)if("default"!==l&&Object.prototype.hasOwnProperty.call(e,l)){var a=o?Object.getOwnPropertyDescriptor(e,l):null;a&&(a.get||a.set)?Object.defineProperty(u,l,a):u[l]=e[l]}return u.default=e,r&&r.set(e,u),u}r.r(t),r.d(t,{_:function(){return u},_interop_require_wildcard:function(){return u}})}}]); \ No newline at end of file diff --git a/spaces/hhhhardman/VITS-Umamusume-voice-synthesizer/text/__init__.py b/spaces/hhhhardman/VITS-Umamusume-voice-synthesizer/text/__init__.py deleted file mode 100644 index 4e69c354dd24e3243980236eca962cd5945a92fc..0000000000000000000000000000000000000000 --- a/spaces/hhhhardman/VITS-Umamusume-voice-synthesizer/text/__init__.py +++ /dev/null @@ -1,32 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/hzy123/bingo/README.md b/spaces/hzy123/bingo/README.md deleted file mode 100644 index 218767d1d7debd26932ffddca2ec0f421c0171a9..0000000000000000000000000000000000000000 --- a/spaces/hzy123/bingo/README.md +++ /dev/null @@ -1,195 +0,0 @@ ---- -title: bingo -emoji: 📉 -colorFrom: red -colorTo: red -sdk: docker -pinned: true -license: mit -duplicated_from: hf4all/bingo ---- - -
    - -# Bingo - -Bingo,一个让你呼吸顺畅 New Bing。 - -高度还原 New Bing 网页版的主要操作,国内可用,兼容绝大多数微软 Bing AI 的功能,可自行部署使用。 - -![Github stars](https://badgen.net/github/stars/weaigc/bingo?icon=github&label=stars) -![Gthub issues](https://img.shields.io/github/issues/weaigc/bingo) -[![docker build](https://github.com/weaigc/bingo/actions/workflows/docker.yml/badge.svg)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![docker hub](https://badgen.net/docker/size/weaigc/bingo?icon=docker&label=image%20size)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![MIT License](https://img.shields.io/badge/license-MIT-97c50f)](https://github.com/weaigc/bingo/blob/main/license) - -
    - -## 演示站点 - -https://bing.github1s.tk - - - -[![img](./docs/images/demo.png)](https://bing.github1s.tk) - -## 功能和特点 - -- 完全基于 Next.js 重写,高度还原 New Bing Web 版 UI,使用体验和 Bing AI 基本一致。 -- 支持 Docker 构建,方便快捷地部署和访问。 -- Cookie 可全局配置,全局共享。 -- 支持持续语音对话 - -## RoadMap - - - [x] 支持 wss 转发 - - [x] 支持一键部署 - - [x] 优化移动端展示 - - [x] 支持画图 - - [x] 支持语音输入(支持语音指令,目前仅支持 PC 版 Edge 及 Chrome 浏览器) - - [x] 支持语音输出(需要手动开启) - - [x] 支持图片输入 - - [x] 支持自定义域名 - - [ ] 支持历史记录 - - [ ] 适配深色模式 - - [ ] 支持内置提示词 - - [ ] 支持离线访问 - - [ ] 国际化翻译 - -## 一键部署 -你也可以一键部署自己的 New Bing AI 到 🤗 HuggingFace 。 - -### 部署到 Huggingface -1. 点击此图标 -[![Deploy to HuggingFace](https://img.shields.io/badge/%E7%82%B9%E5%87%BB%E9%83%A8%E7%BD%B2-%F0%9F%A4%97-fff)](https://huggingface.co/login?next=%2Fspaces%2Fhf4all%2Fbingo%3Fduplicate%3Dtrue%26visibility%3Dpublic),配置可以不改。 - -2. 部署署完成后,点击“设置” 》“站点域名”,点一下,复制一下 HF 域名信息,然后分享给别人即可。 - -> Huggingface 不支持绑定自己的域名,不过我们可以使用曲线救国的方式来达到这个目的 -> 1. 方式二,借助 Cloudflare Workers [部署Cloudflare Workers](#使用Cloudflare-Workers自定义域名) -> 2. 方式一,借助 Github Pages 及 iframe [如何绑定域名](https://github.com/weaigc/bingo/issues/4) - -### 使用Cloudflare Workers自定义域名 - -> 核心代码 [worker.js](./cloudflare/worker.js) - -- [注册 Cloudflare 账号](https://dash.cloudflare.com/sign-up) - -- 添加一个新的网站,需要你有自己的域名并且将域名`Name Server`托管给 Cloudflare 才行(更多信息可自行 Google) - -- 通过左侧菜单进入「Workers」,并点击「Create a Worker」。 - -- 创建 Worker 服务,复制 [worker.js](./cloudflare/worker.js) 全部代码,粘贴至创建的服务中,根据注释进行改动,保存并部署。 - -- 触发器 中自定义访问域名。 - -### 部署其它平台 -
    - -由于其他平台目前遭到 New Bing 封杀,会遇到很多问题,不再做推荐,有需要的可以自行查看 - - -#### 部署到 Netlify -[![Deploy to Netlify Button](https://www.netlify.com/img/deploy/button.svg)](https://app.netlify.com/start/deploy?repository=https://github.com/weaigc/bingo) - -#### 部署到 Vercel -如果你是 Vercel 付费用户,可以点以下链接一键部署到 Vercel。免费版本有[接口超时限制](https://vercel.com/docs/concepts/limits/overview),不推荐使用 - -[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?demo-title=bingo&demo-description=bingo&demo-url=https%3A%2F%2Fbing.github1s.tk%2F&project-name=bingo&repository-name=bingo&repository-url=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo&from=templates&skippable-integrations=1&env=BING_HEADER&envDescription=%E5%A6%82%E6%9E%9C%E4%B8%8D%E7%9F%A5%E9%81%93%E6%80%8E%E4%B9%88%E9%85%8D%E7%BD%AE%E8%AF%B7%E7%82%B9%E5%8F%B3%E4%BE%A7Learn+More&envLink=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo%2Fblob%2Fmain%2F.env.example) - -#### 部署到 Render - -[![Deploy to Render](https://render.com/images/deploy-to-render-button.svg)](https://render.com/deploy?repo=https://github.com/weaigc/bingo) -
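The Cloudflare Workers route above depends on the repository's own `worker.js`, which is not reproduced in this diff. Purely as an illustration of the idea (a Worker that reverse-proxies requests to the deployed Space), a minimal sketch might look like the following; `TARGET_HOST` is an assumed placeholder rather than the project's actual configuration, and this bare-bones version does not cover the wss forwarding the project supports.

```ts
// Illustrative sketch only -- NOT the project's worker.js.
// TARGET_HOST is an assumed placeholder; point it at your own deployed Space.
const TARGET_HOST = "your-space-name.hf.space";

export default {
  async fetch(request: Request): Promise<Response> {
    // Keep path and query, swap only the hostname, and forward the request as-is.
    const url = new URL(request.url);
    url.hostname = TARGET_HOST;
    return fetch(new Request(url.toString(), request));
  },
};
```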
    - -## 环境和依赖 - -- Node.js >= 18 -- Bing AI 的[身份信息](#如何获取-BING_HEADER)) - -## 安装和使用 - -* 使用 Node 启动 - -```bash -git clone https://github.com/weaigc/bingo.git -npm i # 推荐使用 pnpm i -npm run build -npm run start -``` - -* 使用 Docker 启动 -```bash -docker pull weaigc/bingo -docker run --rm -it -p 7860:7860 weaigc/bingo -# 或者 -docker run --rm -it -e BING_HEADER=xxxx -p 7860:7860 weaigc/bingo -``` - -## 如何获取 BING_HEADER -> 配置了 BING_HEADER 意味着你将自己的账号共享给所有使用此服务的人,如果不需要免登录画图的功能,不建议设置此变量 - -打开 https://www.bing.com 并登录,然后访问 https://www.bing.com/turing/captcha/challenge,通过人机校验,然后 - -![BING HEADER](./docs/images/curl.png) - -> 复制出来的内容应该如下所示。确认格式无误后,打开 https://effulgent-bubblegum-e2f5df.netlify.app/#dialog=%22settings%22 ,粘贴进去,点击“转成 BING_HEADER 并复制”,然后从剪切板粘贴即可得到。(你也可以先在网页上进行验证) - -以下是格式参考,需要注意的是,网页端保存的格式是以`curl`开头, 而服务端配置的 `BING_HEADER` 是 `base64` 格式,两者不能互通。 -
    -正常格式/网页端保存的格式(格式仅供参考) - -``` -curl 'https://www.bing.com/turing/captcha/challenge' \ - -H 'authority: www.bing.com' \ - -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7' \ - -H 'accept-language: zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6' \ - -H 'cache-control: max-age=0' \ - -H 'cookie: MicrosoftApplicationsTelemetryDeviceId=3399c004-fd0e-48ec-bb92-d82a27b2bbd4; _EDGE_V=1; SRCHD=AF=NOFORM; SRCHUID=V=2&GUID=29EBDDA4E6674329ACCF1A0A423C3E98&dmnchg=1; _UR=QS=0&TQS=0; _HPVN=CS=eyJQbiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiUCJ9LCJTYyI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiSCJ9LCJReiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiVCJ9LCJBcCI6dHJ1ZSwiTXV0ZSI6dHJ1ZSwiTGFkIjoiMjAyMy0wNy0yNVQwMDowMDowMFoiLCJJb3RkIjowLCJHd2IiOjAsIkRmdCI6bnVsbCwiTXZzIjowLCJGbHQiOjAsIkltcCI6Mn0=; _RwBf=ilt=1&ihpd=1&ispd=0&rc=0&rb=0&gb=0&rg=200&pc=0&mtu=0&rbb=0&g=0&cid=&clo=0&v=1&l=2023-07-25T07:00:00.0000000Z&lft=0001-01-01T00:00:00.0000000&aof=0&o=2&p=&c=&t=0&s=0001-01-01T00:00:00.0000000+00:00&ts=2023-07-25T11:00:31.7111548+00:00&rwred=0&wls=&lka=0&lkt=0&TH=&dci=0; ANON=A=0043C6590EA808ED6E395059FFFFFFFF&E=1c8b&W=1; NAP=V=1.9&E=1c31&C=DnaMSbDN_4efZ_xXqBF3Daorjr53kYqYoaP8YHsupjmiXnysX7a37A&W=1; PPLState=1; KievRPSSecAuth=FABSBBRaTOJILtFsMkpLVWSG6AN6C/svRwNmAAAEgAAACMGUA7EGVSjGEAQBGHtNsc5sNL7unmJsfPJ2t6imfo4BeUJlAia3IpMTtMUy4PU/C5QAzRI5pODtsIee0+blgllXt/5IiWwGjwmdhivsFM597pRPkjARPfwsPhNLPNbJrCPNPHdje4Is78MnCADXw6/NBq2FL8V2/byw2fH6IuAMD2MvN/VvqpEa9ZxiDjZtENj4HEj0mO2SgzjfyEhVAkjvznJqU2rw/Q2tHmX94NAM2kzlzKF/hWPhCCUmu8IHLvCnHDS6mSptvJDDP/sp3ovtzOXkP1mlM/Xju5ftesUvccVEQGffXORa1dE5hEMbKIiKXz1tDdduSXE19g9/+mRMAjaQhpwhI8XmilCTx1adb1Ll5qK+VjC9GNfEZzcbsGBPVaOl+anG8rEMq+Xnhjo7J+NqTNolavHgcuV8kJsCeJZIged33UA8eOZeFo+wAECMguxMoSqgpGH+sthqynvD/FJD6r/tiU2N3uqVq8NE8V37asrN6T14Z0FGBJOe6ET1+PGApm3s11OY9/xhFEB9T5BEPUGEbvRcLcW2ncFQX0EU+xweiPqo1Q1hNUg/dCtSI+lZ7c2H8XheePZavZ0TJQ8oNCSAuKiTqJmI0fVGpwbXwfaADkEipuawz3fIuMJBNgMU0OtA7Hm59v2fGLIBuvi6YeKS6GgVk3BIPf+P/eKahwozrxQZaFnoHTSqMkvct7xCP4atBROfXKf5Ww0CcFKp+2WX9BIskTOo2jjk6bAyyYJ+ElUB1fgLKNk5m/YSMc9iYCLIBMIGN8F0Yvy3tZ7cvh7Ue5Klo98US/I+nW1G7ZJMHRgUO8h8lpneHqEMegKd8gynO4VF7RpCjJkunDmW0Ta+RkXAP619pg0dqHMFkoOgknN78oBbGTV6fJUKotv+vi61kLhAeXZGWoHGCRXh2wUC6YgfPgKA6ESRNHtFn7E5B3HHpLc5rVMDSNhKZYfdhupV4Ezf6+5DhMcZLZhi0kk+ivDiN1gdHlVtSN55xpvf+c+XZDzR0uhgcvgy0LAbmzgk6y4WbYH+LQsMpzNNj+aC72vMiWovWrKh9jY4MYCmdgxsS/skPtLdp18muiEIRXTbZQGUmhxFpJAIbBIsCscMpzL0BgeujxUwM5wr79Sd9r4xwbgSMwmBlBfUHRVBdNyg8feepeJbCS63nD6eHOuLqMRsPIio3w/ki/EAa92UUEiZeavLsMUD/y/qAvWUdzdP5Y+C/TM+CMGS/kGL4LEdY/28MQeTvU1qv1X21kQt2aiaj3pPVL36hAzxbcLgqcMo9oymDRy87kdCXW/+g4oKLtMh6fm/G6W6Y/B01JlxohyyvueHQIG557uzkEkTJ3FnOVODSKBKpb3WZ65rExfV71zSZa25F3GmpaIG6HiYrX2YYhQAkIE9pKEQBHbnwHuwNDGottZTXZw=; WLS=C=9df3f9d8518fae19&N=wen; WLID=pGY8HgWCu4p5XYCOk2oa0+DBdftkMUfmNIn8XtSjSTKsgv/Il7GUlYs0Jpjf/E12jZMgV7x44Dy3fXOgjjUoJx7Y/ClLrLhsk20THksJJoI=; _EDGE_S=F=1&SID=17CF6EE006426448213C7DB907436588&mkt=zh-CN; MUID=225621093D8A6C27301632413C0E6D08; MUIDB=225621093D8A6C27301632413C0E6D08; SUID=A; SNRHOP=I=&TS=; _U=nGyzKQruEsDwLiu65fZFIG6e12hf2lwTJmroW__k8joUJIKmG3OIjayXKGW9dCVR3sNhF76mEVxyW6yjUGPodOfjtSa3s3J_DxMOrEK1BqXCOBI9bC66spAIASV7prsYFlVAJz73jVNENp_tBubLHJy6EbT0BKRe4AjrYkH-9uMnmCKB8Zmyg; _SS=SID=17CF6EE006426448213C7DB907436588&R=0&RB=0&GB=0&RG=200&RP=0&PC=U531; SRCHS=PC=U531; 
USRLOC=HS=1&ELOC=LAT=22.501529693603516|LON=113.9263687133789|N=%E5%8D%97%E5%B1%B1%E5%8C%BA%EF%BC%8C%E5%B9%BF%E4%B8%9C%E7%9C%81|ELT=2|&CLOC=LAT=22.50153029046461|LON=113.92637070632928|A=733.4464586120832|TS=230726151034|SRC=W; SRCHUSR=DOB=20230725&T=1690384908000&POEX=W; ipv6=hit=1690388509974&t=6; SRCHHPGUSR=HV=1690384945&SRCHLANG=zh-Hans&PV=15.0.0&BRW=MW&BRH=MT&CW=410&CH=794&SCW=410&SCH=794&DPR=1.5&UTC=480&DM=0&WTS=63825879627&PRVCW=410&PRVCH=794&PR=1.5; cct=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpny6Y_CVyi_MSyM94VyMWnjdYkkccVtm3czoIAtXUGQA; GC=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpR3Y_D9Ytcks4Ht6XhadXk75dvhzP4YOUS0UmoEyqyxw' \ - -H 'dnt: 1' \ - -H 'sec-ch-ua: "Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"' \ - -H 'sec-ch-ua-arch: "x86"' \ - -H 'sec-ch-ua-bitness: "64"' \ - -H 'sec-ch-ua-full-version: "116.0.1938.29"' \ - -H 'sec-ch-ua-full-version-list: "Chromium";v="116.0.5845.42", "Not)A;Brand";v="24.0.0.0", "Microsoft Edge";v="116.0.1938.29"' \ - -H 'sec-ch-ua-mobile: ?0' \ - -H 'sec-ch-ua-model: ""' \ - -H 'sec-ch-ua-platform: "Windows"' \ - -H 'sec-ch-ua-platform-version: "15.0.0"' \ - -H 'sec-fetch-dest: document' \ - -H 'sec-fetch-mode: navigate' \ - -H 'sec-fetch-site: none' \ - -H 'sec-fetch-user: ?1' \ - -H 'sec-ms-gec: B3F47AD4A283CAB374C0451C46AAFD147C6A4DACAFF6A1C13F34B2C72B024494' \ - -H 'sec-ms-gec-version: 1-116.0.1938.29' \ - -H 'upgrade-insecure-requests: 1' \ - -H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.0.0' \ - -H 'x-client-data: eyIxIjoiMiIsIjEwIjoiXCJTMGg3R05HOTF2aDQ1TUZSUnZ5NHN2akRmMWdlaVJKenNxNlA3aU1WbnF3PVwiIiwiMiI6IjEiLCIzIjoiMSIsIjQiOiIyMTU4ODQ5NTM4MjY4OTM5NTA3IiwiNSI6IlwiSm9GUWpPTDk3OS9MbkRRZnlCd2N1M2FsOUN3eTZTQmdaMGNYMXBtOWVMZz1cIiIsIjYiOiJiZXRhIiwiNyI6IjE4MDM4ODYyNjQzNSIsIjkiOiJkZXNrdG9wIn0=' \ - -H 'x-edge-shopping-flag: 1' \ - --compressed -``` -
    - -
    -转成base64之后的格式(BING_HEADER只能使用 base64 之后的格式) - -``` -Y3VybCAnaHR0cHM6Ly93d3cuYmluZy5jb20vdHVyaW5nL2NvbnZlcnNhdGlvbi9jcmVhdGUnIFwgICAtSCAnYXV0aG9yaXR5OiB3d3cuYmluZy5jb20nIFwgICAtSCAnYWNjZXB0OiB0ZXh0L2h0bWwsYXBwbGljYXRpb24veGh0bWwreG1sLGFwcGxpY2F0aW9uL3htbDtxPTAuOSxpbWFnZS93ZWJwLGltYWdlL2FwbmcsKi8qO3E9MC44LGFwcGxpY2F0aW9uL3NpZ25lZC1leGNoYW5nZTt2PWIzO3E9MC43JyBcICAgLUggJ2FjY2VwdC1sYW5ndWFnZTogemgtQ04semg7cT0wLjksZW47cT0wLjgsZW4tR0I7cT0wLjcsZW4tVVM7cT0wLjYnIFwgICAtSCAnY2FjaGUtY29udHJvbDogbWF4LWFnZT0wJyBcICAgLUggJ2Nvb2tpZTogTWljcm9zb2Z0QXBwbGljYXRpb25zVGVsZW1ldHJ5RGV2aWNlSWQ9MzM5OWMwMDQtZmQwZS00OGVjLWJiOTItZDgyYTI3YjJiYmQ0OyBfRURHRV9WPTE7IFNSQ0hEPUFGPU5PRk9STTsgU1JDSFVJRD1WPTImR1VJRD0yOUVCRERBNEU2Njc0MzI5QUNDRjFBMEE0MjNDM0U5OCZkbW5jaGc9MTsgX1VSPVFTPTAmVFFTPTA7IF9IUFZOPUNTPWV5SlFiaUk2ZXlKRGJpSTZNU3dpVTNRaU9qQXNJbEZ6SWpvd0xDSlFjbTlrSWpvaVVDSjlMQ0pUWXlJNmV5SkRiaUk2TVN3aVUzUWlPakFzSWxGeklqb3dMQ0pRY205a0lqb2lTQ0o5TENKUmVpSTZleUpEYmlJNk1Td2lVM1FpT2pBc0lsRnpJam93TENKUWNtOWtJam9pVkNKOUxDSkJjQ0k2ZEhKMVpTd2lUWFYwWlNJNmRISjFaU3dpVEdGa0lqb2lNakF5TXkwd055MHlOVlF3TURvd01Eb3dNRm9pTENKSmIzUmtJam93TENKSGQySWlPakFzSWtSbWRDSTZiblZzYkN3aVRYWnpJam93TENKR2JIUWlPakFzSWtsdGNDSTZNbjA9OyBfUndCZj1pbHQ9MSZpaHBkPTEmaXNwZD0wJnJjPTAmcmI9MCZnYj0wJnJnPTIwMCZwYz0wJm10dT0wJnJiYj0wJmc9MCZjaWQ9JmNsbz0wJnY9MSZsPTIwMjMtMDctMjVUMDc6MDA6MDAuMDAwMDAwMFombGZ0PTAwMDEtMDEtMDFUMDA6MDA6MDAuMDAwMDAwMCZhb2Y9MCZvPTImcD0mYz0mdD0wJnM9MDAwMS0wMS0wMVQwMDowMDowMC4wMDAwMDAwKzAwOjAwJnRzPTIwMjMtMDctMjVUMTE6MDA6MzEuNzExMTU0OCswMDowMCZyd3JlZD0wJndscz0mbGthPTAmbGt0PTAmVEg9JmRjaT0wOyBBTk9OPUE9MDA0M0M2NTkwRUE4MDhFRDZFMzk1MDU5RkZGRkZGRkYmRT0xYzhiJlc9MTsgTkFQPVY9MS45JkU9MWMzMSZDPURuYU1TYkROXzRlZlpfeFhxQkYzRGFvcmpyNTNrWXFZb2FQOFlIc3Vwam1pWG55c1g3YTM3QSZXPTE7IFBQTFN0YXRlPTE7IEtpZXZSUFNTZWNBdXRoPUZBQlNCQlJhVE9KSUx0RnNNa3BMVldTRzZBTjZDL3N2UndObUFBQUVnQUFBQ01HVUE3RUdWU2pHRUFRQkdIdE5zYzVzTkw3dW5tSnNmUEoydDZpbWZvNEJlVUpsQWlhM0lwTVR0TVV5NFBVL0M1UUF6Ukk1cE9EdHNJZWUwK2JsZ2xsWHQvNUlpV3dHandtZGhpdnNGTTU5N3BSUGtqQVJQZndzUGhOTFBOYkpyQ1BOUEhkamU0SXM3OE1uQ0FEWHc2L05CcTJGTDhWMi9ieXcyZkg2SXVBTUQyTXZOL1Z2cXBFYTlaeGlEalp0RU5qNEhFajBtTzJTZ3pqZnlFaFZBa2p2em5KcVUycncvUTJ0SG1YOTROQU0ya3psektGL2hXUGhDQ1VtdThJSEx2Q25IRFM2bVNwdHZKRERQL3NwM292dHpPWGtQMW1sTS9YanU1ZnRlc1V2Y2NWRVFHZmZYT1JhMWRFNWhFTWJLSWlLWHoxdERkZHVTWEUxOWc5LyttUk1BamFRaHB3aEk4WG1pbENUeDFhZGIxTGw1cUsrVmpDOUdOZkVaemNic0dCUFZhT2wrYW5HOHJFTXErWG5oam83SitOcVROb2xhdkhnY3VWOGtKc0NlSlpJZ2VkMzNVQThlT1plRm8rd0FFQ01ndXhNb1NxZ3BHSCtzdGhxeW52RC9GSkQ2ci90aVUyTjN1cVZxOE5FOFYzN2Fzck42VDE0WjBGR0JKT2U2RVQxK1BHQXBtM3MxMU9ZOS94aEZFQjlUNUJFUFVHRWJ2UmNMY1cybmNGUVgwRVUreHdlaVBxbzFRMWhOVWcvZEN0U0krbFo3YzJIOFhoZWVQWmF2WjBUSlE4b05DU0F1S2lUcUptSTBmVkdwd2JYd2ZhQURrRWlwdWF3ejNmSXVNSkJOZ01VME90QTdIbTU5djJmR0xJQnV2aTZZZUtTNkdnVmszQklQZitQL2VLYWh3b3pyeFFaYUZub0hUU3FNa3ZjdDd4Q1A0YXRCUk9mWEtmNVd3MENjRktwKzJXWDlCSXNrVE9vMmpqazZiQXl5WUorRWxVQjFmZ0xLTms1bS9ZU01jOWlZQ0xJQk1JR044RjBZdnkzdFo3Y3ZoN1VlNUtsbzk4VVMvSStuVzFHN1pKTUhSZ1VPOGg4bHBuZUhxRU1lZ0tkOGd5bk80VkY3UnBDakprdW5EbVcwVGErUmtYQVA2MTlwZzBkcUhNRmtvT2drbk43OG9CYkdUVjZmSlVLb3R2K3ZpNjFrTGhBZVhaR1dvSEdDUlhoMndVQzZZZ2ZQZ0tBNkVTUk5IdEZuN0U1QjNISHBMYzVyVk1EU05oS1pZZmRodXBWNEV6ZjYrNURoTWNaTFpoaTBraytpdkRpTjFnZEhsVnRTTjU1eHB2ZitjK1haRHpSMHVoZ2N2Z3kwTEFibXpnazZ5NFdiWUgrTFFzTXB6Tk5qK2FDNzJ2TWlXb3ZXcktoOWpZNE1ZQ21kZ3hzUy9za1B0TGRwMThtdWlFSVJYVGJaUUdVbWh4RnBKQUliQklzQ3NjTXB6TDBCZ2V1anhVd001d3I3OVNkOXI0eHdiZ1NNd21CbEJmVUhSVkJkTnlnOGZlZXBlSmJDUzYzbkQ2ZUhPdUxxTVJzUElpbzN3L2tpL0VBYTkyVVVFaVplYXZMc01VRC95L3FBdldVZHpkUDVZK0MvVE0rQ01HUy9rR0w0TEVkWS8yOE1RZVR2VTFxdjFYMjFrUXQyYWlhajNwUFZMMzZoQXp4YmNMZ3FjTW85b3ltRFJ5ODdrZE
NYVy8rZzRvS0x0TWg2Zm0vRzZXNlkvQjAxSmx4b2h5eXZ1ZUhRSUc1NTd1emtFa1RKM0ZuT1ZPRFNLQktwYjNXWjY1ckV4ZlY3MXpTWmEyNUYzR21wYUlHNkhpWXJYMllZaFFBa0lFOXBLRVFCSGJud0h1d05ER290dFpUWFp3PTsgV0xTPUM9OWRmM2Y5ZDg1MThmYWUxOSZOPXdlbjsgV0xJRD1wR1k4SGdXQ3U0cDVYWUNPazJvYTArREJkZnRrTVVmbU5JbjhYdFNqU1RLc2d2L0lsN0dVbFlzMEpwamYvRTEyalpNZ1Y3eDQ0RHkzZlhPZ2pqVW9KeDdZL0NsTHJMaHNrMjBUSGtzSkpvST07IF9FREdFX1M9Rj0xJlNJRD0xN0NGNkVFMDA2NDI2NDQ4MjEzQzdEQjkwNzQzNjU4OCZta3Q9emgtQ047IE1VSUQ9MjI1NjIxMDkzRDhBNkMyNzMwMTYzMjQxM0MwRTZEMDg7IE1VSURCPTIyNTYyMTA5M0Q4QTZDMjczMDE2MzI0MTNDMEU2RDA4OyBTVUlEPUE7IFNOUkhPUD1JPSZUUz07IF9VPW5HeXpLUXJ1RXNEd0xpdTY1ZlpGSUc2ZTEyaGYybHdUSm1yb1dfX2s4am9VSklLbUczT0lqYXlYS0dXOWRDVlIzc05oRjc2bUVWeHlXNnlqVUdQb2RPZmp0U2EzczNKX0R4TU9yRUsxQnFYQ09CSTliQzY2c3BBSUFTVjdwcnNZRmxWQUp6NzNqVk5FTnBfdEJ1YkxISnk2RWJUMEJLUmU0QWpyWWtILTl1TW5tQ0tCOFpteWc7IF9TUz1TSUQ9MTdDRjZFRTAwNjQyNjQ0ODIxM0M3REI5MDc0MzY1ODgmUj0wJlJCPTAmR0I9MCZSRz0yMDAmUlA9MCZQQz1VNTMxOyBTUkNIUz1QQz1VNTMxOyBVU1JMT0M9SFM9MSZFTE9DPUxBVD0yMi41MDE1Mjk2OTM2MDM1MTZ8TE9OPTExMy45MjYzNjg3MTMzNzg5fE49JUU1JThEJTk3JUU1JUIxJUIxJUU1JThDJUJBJUVGJUJDJThDJUU1JUI5JUJGJUU0JUI4JTlDJUU3JTlDJTgxfEVMVD0yfCZDTE9DPUxBVD0yMi41MDE1MzAyOTA0NjQ2MXxMT049MTEzLjkyNjM3MDcwNjMyOTI4fEE9NzMzLjQ0NjQ1ODYxMjA4MzJ8VFM9MjMwNzI2MTUxMDM0fFNSQz1XOyBTUkNIVVNSPURPQj0yMDIzMDcyNSZUPTE2OTAzODQ5MDgwMDAmUE9FWD1XOyBpcHY2PWhpdD0xNjkwMzg4NTA5OTc0JnQ9NjsgU1JDSEhQR1VTUj1IVj0xNjkwMzg0OTQ1JlNSQ0hMQU5HPXpoLUhhbnMmUFY9MTUuMC4wJkJSVz1NVyZCUkg9TVQmQ1c9NDEwJkNIPTc5NCZTQ1c9NDEwJlNDSD03OTQmRFBSPTEuNSZVVEM9NDgwJkRNPTAmV1RTPTYzODI1ODc5NjI3JlBSVkNXPTQxMCZQUlZDSD03OTQmUFI9MS41OyBjY3Q9QWpXSUJZT29WUC1BZnE2Z1d3dHg4MElmNnlIbjZpQnVFVkhBMVhIZEFLcG55NllfQ1Z5aV9NU3lNOTRWeU1XbmpkWWtrY2NWdG0zY3pvSUF0WFVHUUE7IEdDPUFqV0lCWU9vVlAtQWZxNmdXd3R4ODBJZjZ5SG42aUJ1RVZIQTFYSGRBS3BSM1lfRDlZdGNrczRIdDZYaGFkWGs3NWR2aHpQNFlPVVMwVW1vRXlxeXh3JyBcICAgLUggJ2RudDogMScgXCAgIC1IICdzZWMtY2gtdWE6ICJDaHJvbWl1bSI7dj0iMTE2IiwgIk5vdClBO0JyYW5kIjt2PSIyNCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2IicgXCAgIC1IICdzZWMtY2gtdWEtYXJjaDogIng4NiInIFwgICAtSCAnc2VjLWNoLXVhLWJpdG5lc3M6ICI2NCInIFwgICAtSCAnc2VjLWNoLXVhLWZ1bGwtdmVyc2lvbjogIjExNi4wLjE5MzguMjkiJyBcICAgLUggJ3NlYy1jaC11YS1mdWxsLXZlcnNpb24tbGlzdDogIkNocm9taXVtIjt2PSIxMTYuMC41ODQ1LjQyIiwgIk5vdClBO0JyYW5kIjt2PSIyNC4wLjAuMCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2LjAuMTkzOC4yOSInIFwgICAtSCAnc2VjLWNoLXVhLW1vYmlsZTogPzAnIFwgICAtSCAnc2VjLWNoLXVhLW1vZGVsOiAiIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm06ICJXaW5kb3dzIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm0tdmVyc2lvbjogIjE1LjAuMCInIFwgICAtSCAnc2VjLWZldGNoLWRlc3Q6IGRvY3VtZW50JyBcICAgLUggJ3NlYy1mZXRjaC1tb2RlOiBuYXZpZ2F0ZScgXCAgIC1IICdzZWMtZmV0Y2gtc2l0ZTogbm9uZScgXCAgIC1IICdzZWMtZmV0Y2gtdXNlcjogPzEnIFwgICAtSCAnc2VjLW1zLWdlYzogQjNGNDdBRDRBMjgzQ0FCMzc0QzA0NTFDNDZBQUZEMTQ3QzZBNERBQ0FGRjZBMUMxM0YzNEIyQzcyQjAyNDQ5NCcgXCAgIC1IICdzZWMtbXMtZ2VjLXZlcnNpb246IDEtMTE2LjAuMTkzOC4yOScgXCAgIC1IICd1cGdyYWRlLWluc2VjdXJlLXJlcXVlc3RzOiAxJyBcICAgLUggJ3VzZXItYWdlbnQ6IE1vemlsbGEvNS4wIChXaW5kb3dzIE5UIDEwLjA7IFdpbjY0OyB4NjQpIEFwcGxlV2ViS2l0LzUzNy4zNiAoS0hUTUwsIGxpa2UgR2Vja28pIENocm9tZS8xMTYuMC4wLjAgU2FmYXJpLzUzNy4zNiBFZGcvMTE2LjAuMC4wJyBcICAgLUggJ3gtY2xpZW50LWRhdGE6IGV5SXhJam9pTWlJc0lqRXdJam9pWENKVE1HZzNSMDVIT1RGMmFEUTFUVVpTVW5aNU5ITjJha1JtTVdkbGFWSktlbk54TmxBM2FVMVdibkYzUFZ3aUlpd2lNaUk2SWpFaUxDSXpJam9pTVNJc0lqUWlPaUl5TVRVNE9EUTVOVE00TWpZNE9UTTVOVEEzSWl3aU5TSTZJbHdpU205R1VXcFBURGszT1M5TWJrUlJabmxDZDJOMU0yRnNPVU4zZVRaVFFtZGFNR05ZTVhCdE9XVk1aejFjSWlJc0lqWWlPaUppWlhSaElpd2lOeUk2SWpFNE1ETTRPRFl5TmpRek5TSXNJamtpT2lKa1pYTnJkRzl3SW4wPScgXCAgIC1IICd4LWVkZ2Utc2hvcHBpbmctZmxhZzogMScgXCAgIC0tY29tcHJlc3NlZA== -``` -
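The two format examples above differ only in encoding: the server-side `BING_HEADER` value looks like a base64-encoded `curl` command. If you would rather not use the web converter mentioned earlier, a small Node script along these lines should produce the same kind of value (the file name `curl.txt` and the exact whitespace handling are assumptions; verify the output against the project's own converter before relying on it):

```ts
// Hypothetical helper: turn a saved curl command into a BING_HEADER-style value.
// Assumes the copied command was saved verbatim to curl.txt next to this script.
import { readFileSync } from "node:fs";

const rawCurl = readFileSync("curl.txt", "utf8").trim();
const bingHeader = Buffer.from(rawCurl, "utf8").toString("base64");

// Paste this output into the BING_HEADER environment variable.
console.log(bingHeader);
```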
    - - -## 鸣谢 - - 感谢 [EdgeGPT](https://github.com/acheong08/EdgeGPT) 提供的代理 API 的方法。 - - 感谢 [Vercel AI](https://github.com/vercel-labs/ai-chatbot) 提供的基础脚手架和 [ChatHub](https://github.com/chathub-dev/chathub) [go-proxy-bingai](https://github.com/adams549659584/go-proxy-bingai) 提供的部分代码。 - - -## 答疑及交流 - - - -## License - -MIT © [LICENSE](https://github.com/weaigc/bingo/blob/main/LICENSE). - - diff --git a/spaces/iamstolas/STOLAS/src/components/toaster.tsx b/spaces/iamstolas/STOLAS/src/components/toaster.tsx deleted file mode 100644 index 4d2693460b61307a1d4c127fd01df9bee16e59ff..0000000000000000000000000000000000000000 --- a/spaces/iamstolas/STOLAS/src/components/toaster.tsx +++ /dev/null @@ -1,3 +0,0 @@ -'use client' - -export { Toaster } from 'react-hot-toast' diff --git a/spaces/imseldrith/DeepFakeAI/DeepFakeAI/typing.py b/spaces/imseldrith/DeepFakeAI/DeepFakeAI/typing.py deleted file mode 100644 index 74f2b8746172ce2d58705f073a45c2276766ce60..0000000000000000000000000000000000000000 --- a/spaces/imseldrith/DeepFakeAI/DeepFakeAI/typing.py +++ /dev/null @@ -1,13 +0,0 @@ -from typing import Any, Literal -from insightface.app.common import Face -import numpy - -Face = Face -Frame = numpy.ndarray[Any, Any] - -FaceRecognition = Literal[ 'reference', 'many' ] -FaceAnalyserDirection = Literal[ 'left-right', 'right-left', 'top-bottom', 'bottom-top', 'small-large', 'large-small' ] -FaceAnalyserAge = Literal[ 'child', 'teen', 'adult', 'senior' ] -FaceAnalyserGender = Literal[ 'male', 'female' ] -TempFrameFormat = Literal[ 'jpg', 'png' ] -OutputVideoEncoder = Literal[ 'libx264', 'libx265', 'libvpx-vp9', 'h264_nvenc', 'hevc_nvenc' ] diff --git a/spaces/inamXcontru/PoeticTTS/A Flying Jatt 3 Movie Download Mp4 Dont Miss the Thrilling and Hilarious Third Installment of the Series.md b/spaces/inamXcontru/PoeticTTS/A Flying Jatt 3 Movie Download Mp4 Dont Miss the Thrilling and Hilarious Third Installment of the Series.md deleted file mode 100644 index ed8a8110ed1720d7575e092fa66b9169e911b76d..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/A Flying Jatt 3 Movie Download Mp4 Dont Miss the Thrilling and Hilarious Third Installment of the Series.md +++ /dev/null @@ -1,6 +0,0 @@ -

    A Flying Jatt 3 Movie Download Mp4


    DOWNLOADhttps://gohhs.com/2uz3LB



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/inamXcontru/PoeticTTS/Adobe Dreamweaver CS6 Revealed - Bishop.pdf Discover the Secrets of Dreamweaver CS6.md b/spaces/inamXcontru/PoeticTTS/Adobe Dreamweaver CS6 Revealed - Bishop.pdf Discover the Secrets of Dreamweaver CS6.md deleted file mode 100644 index b6eeec7793675fad84660d5e19169ecb0994ee29..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Adobe Dreamweaver CS6 Revealed - Bishop.pdf Discover the Secrets of Dreamweaver CS6.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Adobe Dreamweaver CS6 Revealed - Bishop.pdf


    Download Zip ✺✺✺ https://gohhs.com/2uz4Jr



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/inamXcontru/PoeticTTS/DVDFab 6.1.2.5 Platinum With Key Free Download .md b/spaces/inamXcontru/PoeticTTS/DVDFab 6.1.2.5 Platinum With Key Free Download .md deleted file mode 100644 index 40c5e9c0e01a4e5720e57c78678ad7180fd0b284..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/DVDFab 6.1.2.5 Platinum With Key Free Download .md +++ /dev/null @@ -1,6 +0,0 @@ -

    DVDFab 6.1.2.5 Platinum With Key Free Download


    DOWNLOAD ✶✶✶ https://gohhs.com/2uz4xS



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/inamXcontru/PoeticTTS/Dil Ka Rishta Movie Full Hd 1080p Free Download.md b/spaces/inamXcontru/PoeticTTS/Dil Ka Rishta Movie Full Hd 1080p Free Download.md deleted file mode 100644 index 0af5d85a6df29c3f8a02f4618c9de02e5556e1c5..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Dil Ka Rishta Movie Full Hd 1080p Free Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Dil Ka Rishta Movie Full Hd 1080p Free Download


    DOWNLOAD - https://gohhs.com/2uz4kS



    -
    -Free Dil Ka Rishta Bada Pyara Hai Hd mp3 download from mp3such ... Hai Full Song Lyrics Full Hd 1080p Kumar Sanu Alka Yagnik Udit Ji Free Download. 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Catholic Hymn Book Nigeria Pdf Download Free.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Catholic Hymn Book Nigeria Pdf Download Free.md deleted file mode 100644 index b0d72ab7523bcb0dd538213f9f3994b8351177fb..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Catholic Hymn Book Nigeria Pdf Download Free.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Catholic Hymn Book Nigeria Pdf Download


    Download https://urlin.us/2uEygz



    - -Catholic Hymn Book is a lightweight app with a collection of hymns in the Catholic Hymn Book used in Nigeria and all over the world. ... 910387aaf Online PDF Ebook Epub Library HYMNAL ANCIENT HYMNS AND SPIRITUAL ... Listen to Traditional Irish Songs and Music, Download the MP3 or Midi Files, and get ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/EVEREST Ultimate Edition V5.02.1750 Portable Serial Key VERIFIED.md b/spaces/inplisQlawa/anything-midjourney-v4-1/EVEREST Ultimate Edition V5.02.1750 Portable Serial Key VERIFIED.md deleted file mode 100644 index 30e7c7f4e48d4bce8033edecf580a2b2fd937a85..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/EVEREST Ultimate Edition V5.02.1750 Portable Serial Key VERIFIED.md +++ /dev/null @@ -1,7 +0,0 @@ -

    EVEREST Ultimate Edition V5.02.1750 Portable Serial Key


    Download Zip »»» https://urlin.us/2uEyp8



    -
    -June 17, 2552 BC - Serial.txt b) Portable Everest Ultimate Engineer Edition 5.02.1750 • EverestUltimate_Portable_5.02.1750_MultiLang.paf.exe c) keygen_FFF .exe (it works for me, but I didn't bother, I just copied the key from the Everest folder to Registration.reg , without changing anything, and he also earned) • Crack Everest .exe - http://rapidshare.com/files/166696195/ -Everest.exe.html • Everest_Professional_Engineer_Edition_5.01.1505.exe → Portable Everest Ultimate Engineer Edition 5.02.1750 → Everest Ultimate Engineer Edition 5.02.1750 (for Windows 7 - 8.1) / 5.02.1750 (for Windows XP/Vista) • Everest## # August 21, 2552 BC - [EVEREST Ultimate Edition] - Version of EVEREST v5.02.1750. Benchmark module 2.4.258.0 .DMI Motherboard Serial Number [ TRIAL ] - [EVEREST Ultimate Edition] - Version EVEREST v5.02.1750. Benchmark module 2.4.258.0 . DMI Processor Serial Number [ TRIAL ] - [EVEREST Ultimate Edition] - Version EVEREST v5.02.1750. Benchmark module 2.4.258.0 . DMI memory module serial number [ TRIAL ] - [EVEREST Ultimate Edition] - EVEREST version v5.02.1750. Benchmark module 2.4.258.0 . DMI Video Card Serial Number [ TRIAL ] - [EVEREST Ultimate Edition] - EVEREST version v5.02.1750. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/ivotai/VITS-Umamusume-voice-synthesizer/ONNXVITS_utils.py b/spaces/ivotai/VITS-Umamusume-voice-synthesizer/ONNXVITS_utils.py deleted file mode 100644 index b634ce380421571e6e07fb45dd59717b3f63115c..0000000000000000000000000000000000000000 --- a/spaces/ivotai/VITS-Umamusume-voice-synthesizer/ONNXVITS_utils.py +++ /dev/null @@ -1,19 +0,0 @@ -import torch -import numpy as np -import random -import onnxruntime as ort -def set_random_seed(seed=0): - ort.set_seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed(seed) - torch.backends.cudnn.deterministic = True - random.seed(seed) - np.random.seed(seed) - -def runonnx(model_path, **kwargs): - ort_session = ort.InferenceSession(model_path) - outputs = ort_session.run( - None, - kwargs - ) - return outputs \ No newline at end of file diff --git a/spaces/j0hngou/vision-diffmask/code/models/__init__.py b/spaces/j0hngou/vision-diffmask/code/models/__init__.py deleted file mode 100644 index 5dd85598d0499636f81c450a86a55a7ff8074ac2..0000000000000000000000000000000000000000 --- a/spaces/j0hngou/vision-diffmask/code/models/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .classification import ImageClassificationNet -from .interpretation import ImageInterpretationNet diff --git a/spaces/jackculpan/chatwebpage.com/app.py b/spaces/jackculpan/chatwebpage.com/app.py deleted file mode 100644 index b4aecc28f694563887188d61b0e83a9e16d0c4b9..0000000000000000000000000000000000000000 --- a/spaces/jackculpan/chatwebpage.com/app.py +++ /dev/null @@ -1,48 +0,0 @@ -import os -import openai -import gradio as gr -from conversation import Conversation - -try: - from dotenv import load_dotenv - load_dotenv() -except ImportError: - pass # In production, python-dotenv may not be installed - -with gr.Blocks(css="footer {visibility: hidden}", title="ChatWebpage.com") as demo: - conversation = Conversation() - gr.Markdown("Enter your website url, then ask the AI a question.") - with gr.Row(): - with gr.Column(scale=2): - url = gr.Textbox(label="1. Enter a webpage URL to chat about") - url.change(fn=conversation.get_data, inputs=url) - with gr.Column(scale=3): - gr.Examples([ - "https://www.bbc.com/news/business-64937251", - "https://www.ycombinator.com/", - "https://www.producthunt.com/posts/chatwebpage"], inputs=[url]) - - chatbot = gr.Chatbot().style(height=150) - - msg = gr.Textbox(label="2. Chat with AI about the webpage") - msg.submit(conversation.user, [msg, chatbot], [msg, chatbot]).success(conversation.bot, chatbot, chatbot) - with gr.Row(): - with gr.Column(scale=4): - gr.Examples(["Please summarise the webpage", "What is the tone of the webpage?", "Tell me your favorite part of the webpage"], inputs=[msg]) - with gr.Column(scale=4): - model = gr.Radio(["GPT-3.5", "GPT-4"], value="gpt-3.5", label="3. 
Which AI model?") - - with gr.Row(): - with gr.Column(scale=2): - clear_button = gr.Button("Clear") - clear_button.click(lambda: None, None, chatbot) - with gr.Column(scale=4): - submit_button = gr.Button("Submit") - submit_button.click(conversation.user, [msg, chatbot], [msg, chatbot]).success( - conversation.bot, chatbot, chatbot - ) - -if __name__ == "__main__": - demo.queue(concurrency_count=4) - demo.launch() - \ No newline at end of file diff --git a/spaces/jackculpan/chatwebpage.com/conversation.py b/spaces/jackculpan/chatwebpage.com/conversation.py deleted file mode 100644 index 75088f26b3d722b5bc6b8dc3918bd92d97151e6b..0000000000000000000000000000000000000000 --- a/spaces/jackculpan/chatwebpage.com/conversation.py +++ /dev/null @@ -1,140 +0,0 @@ -import os -import openai -import gradio as gr -import requests -from bs4 import BeautifulSoup -import urllib.parse -from selenium import webdriver -from webdriver_manager.chrome import ChromeDriverManager - -try: - from dotenv import load_dotenv - load_dotenv() -except ImportError: - pass # In production, python-dotenv may not be installed - -openai.api_key = os.getenv("OPEN_API_KEY") - -class Conversation: - def __init__(self): - self.messages = [] - - # def is_valid_url(self, url): - # try: - # result = urlparse(url) - # return True if all([result.scheme, result.netloc]) else False - # except ValueError: - # return False - - def to_valid_url(self, input_string): - print("url: ", input_string) - try: - url = input_string.strip() - if not url: - raise ValueError("Invalid URL, please try again.") - parsed_url = urllib.parse.urlparse(url) - if not all([parsed_url.scheme, parsed_url.netloc]): - raise ValueError("Invalid URL, please try again.") - if not parsed_url.scheme: - url = "https://" + url - parsed_url = urllib.parse.urlparse(url) - return parsed_url.geturl() - - except ValueError: - raise ValueError("Invalid URL, please try again.") - - - def get_data(self, old_url): - # ... your existing get_data implementation ... 
- # Replace `messages` with `self.messages` - - def extract_html_content(url): - response = requests.get(url) - return response.text - - def extract_js_content(url): - options = webdriver.ChromeOptions() - options.add_argument('--headless') - driver = webdriver.Chrome(ChromeDriverManager().install(), options=options) - driver.get(url) - rendered_content = driver.page_source - driver.quit() - return rendered_content - - def smart_scraper(url): - html_content = extract_html_content(url) - selector_to_find = "body" - - # Check if the content is incomplete or if a specific tag is missing - # if not html_content or not html_content.find(selector_to_find): - if not html_content or not html_content.find(selector_to_find): - # If incomplete, use Selenium to render JavaScript - print("Using Selenium for JavaScript rendering...") - js_content = extract_js_content(url) - return js_content - else: - return html_content - - url = self.to_valid_url(old_url) - self.messages - html = smart_scraper(url) - doc = BeautifulSoup(html, 'html.parser') - if not doc: - raise ValueError("Please try again") - doc = doc.body - headings_1 = [e.text for e in doc.find_all('h1')] - headings_2 = [e.text for e in doc.find_all('h2')] - # headings_3 = [e.text for e in doc.find_all('h3')] - links = [e.text for e in doc.find_all('a')] - paragraphs = [e.text for e in doc.find_all('p')] - # spans = [e.text for e in doc.find_all('span')] - joined_paragraphs = (' '.join(paragraphs)) - - if len(joined_paragraphs) > 7500: - paragraphs = joined_paragraphs[:3000] - - self.messages = [] - self.messages.append({'role': 'system', 'content': "You are a helpful assistant that must answer questions about a website."}) - self.messages.append({'role': 'system', 'content': f"here are the h1s - {headings_1}"}) - self.messages.append({'role': 'system', 'content': f"here are the h2s - {headings_2}"}) - # self.messages.append({'role': 'system', 'content': f"here are the links - {links}"}) - # messages.append({'role': 'system', 'content': f"here are the h3s - {headings_3}"}) - self.messages.append({'role': 'system', 'content': f"here are the paragraphs - {paragraphs}"}) - # messages.append({'role': 'system', 'content': f"here are the spans - {spans}"}) - return self.messages - - def ask_chatbot(self, input): - # ... your existing ask_chatbot implementation ... - # Replace `messages` with `self.messages` - if input: - self.messages.append({"role": "user", "content": input}) - try: - chat = openai.ChatCompletion.create( - model="gpt-3.5-turbo", messages=self.messages - ) - except openai.error.InvalidRequestError: - raise ValueError("The website is too large to understand. Please try a different site.") - - reply = chat.choices[0].message.content - if not reply: - raise ValueError("Please try again") - self.messages.append({"role": "assistant", "content": reply}) - return reply - - - def user(self, user_message, history): - # ... your existing user implementation ... - # Replace `messages` with `self.messages` - - return "", history + [[user_message, None]] - - def bot(self, history): - # ... your existing bot implementation ... 
- # Replace `messages` with `self.messages` - user_message = history[-1][0] - try: - bot_message = self.ask_chatbot(user_message) - except ValueError: - bot_message = "Please try again" - history[-1][1] = bot_message - return history \ No newline at end of file diff --git a/spaces/jatin-tech/SkinZen/main.py b/spaces/jatin-tech/SkinZen/main.py deleted file mode 100644 index 3f81a50b7017beb74a3d2fd1dbaa0edcb2272fe3..0000000000000000000000000000000000000000 --- a/spaces/jatin-tech/SkinZen/main.py +++ /dev/null @@ -1,77 +0,0 @@ -import gradio as gr -from PIL import Image -from io import BytesIO -import PIL -import numpy as np -import os -import json - -BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) - -def sentence_builder(age, sex, skin_type, allergies, diet, file): - import index - - print(age, sex, skin_type, allergies, diet) - - response = index.predict(file) - predictions = response['prediction'] - prediction = np.array(predictions) - - data = response - data["prediction"] = prediction - - labels = ["Low", "Moderate", "Severe"] - show_prediction = np.zeros((4, 3)) - - - for in_, pred in enumerate(prediction): - show_prediction[in_] = pred - - output1 = {labels[i]: float(show_prediction[0][i]) for i in range(3)} - output2 = {labels[i]: float(show_prediction[1][i]) for i in range(3)} - output3 = {labels[i]: float(show_prediction[2][i]) for i in range(3)} - output4 = {labels[i]: float(show_prediction[3][i]) for i in range(3)} - - data['age'] = age - data['gender'] = sex - data['skin_type'] = skin_type - data['allergies'] = allergies - data['diet'] = diet - - - try: - response = index.recommendation(data) - content = response['choices'][0]['message']['content'] - return content, output1, output2, output3, output4 - except: - return "No recommendation found", output1, output2, output3, output4 - -with gr.Blocks() as demo: - gr.Markdown("Flip text or image files using this demo.") - with gr.Row(): - with gr.Column(): - age = gr.Number(value=20, label="Age") - sex = gr.Radio(["Male", "Female", "Other"], label="Gender", info="Your Gender") - skin_type = gr.CheckboxGroup(["Oily", "Dry", "Normal"], label="Skin", info="Skin Type") - allergy = gr.Dropdown( - ["benzoyl peroxide", "salicylic acid", "Sun-exposure", "Itching", "Swelling", "Redness"], - multiselect=True, label="Allergies", - info="Tell us your allergies and symptoms" - ) - diet = gr.CheckboxGroup(["Veg", "Non-Veg",], label="Diet", info="Select your diet preference") - img = gr.Image(source="upload", type="pil", label="Face Image (with open eye)") - submit = gr.Button("Submit") - - with gr.Tab("Model:Severity Prediction"): - chin = gr.Label(num_top_classes=3, label="Chin|Acne Level") - fh = gr.Label(num_top_classes=3, label="Fore Head|Acne Level") - lc = gr.Label(num_top_classes=3, label="Left Cheek|Acne Level") - rc = gr.Label(num_top_classes=3, label="Right Cheek|Acne Level") - - with gr.Tab("Recommendation:Treatment Plan"): - html_output = gr.HTML('Recommendation will be shown here') - - submit.click(sentence_builder, inputs=[age, sex, skin_type, allergy, diet, img], outputs=[html_output, rc, lc, chin, fh]) - -if __name__ == "__main__": - demo.launch(server_name="0.0.0.0", server_port=7860) \ No newline at end of file diff --git a/spaces/jessica6105/Lu-Bert-VITS2/text/chinese.py b/spaces/jessica6105/Lu-Bert-VITS2/text/chinese.py deleted file mode 100644 index 51acb3ec401d7647278a25537576a0fb1775d827..0000000000000000000000000000000000000000 --- a/spaces/jessica6105/Lu-Bert-VITS2/text/chinese.py +++ /dev/null @@ 
-1,198 +0,0 @@ -import os -import re - -import cn2an -from pypinyin import lazy_pinyin, Style - -from text.symbols import punctuation -from text.tone_sandhi import ToneSandhi - -current_file_path = os.path.dirname(__file__) -pinyin_to_symbol_map = { - line.split("\t")[0]: line.strip().split("\t")[1] - for line in open(os.path.join(current_file_path, "opencpop-strict.txt")).readlines() -} - -import jieba.posseg as psg - - -rep_map = { - ":": ",", - ";": ",", - ",": ",", - "。": ".", - "!": "!", - "?": "?", - "\n": ".", - "·": ",", - "、": ",", - "...": "…", - "$": ".", - "“": "'", - "”": "'", - "‘": "'", - "’": "'", - "(": "'", - ")": "'", - "(": "'", - ")": "'", - "《": "'", - "》": "'", - "【": "'", - "】": "'", - "[": "'", - "]": "'", - "—": "-", - "~": "-", - "~": "-", - "「": "'", - "」": "'", -} - -tone_modifier = ToneSandhi() - - -def replace_punctuation(text): - text = text.replace("嗯", "恩").replace("呣", "母") - pattern = re.compile("|".join(re.escape(p) for p in rep_map.keys())) - - replaced_text = pattern.sub(lambda x: rep_map[x.group()], text) - - replaced_text = re.sub( - r"[^\u4e00-\u9fa5" + "".join(punctuation) + r"]+", "", replaced_text - ) - - return replaced_text - - -def g2p(text): - pattern = r"(?<=[{0}])\s*".format("".join(punctuation)) - sentences = [i for i in re.split(pattern, text) if i.strip() != ""] - phones, tones, word2ph = _g2p(sentences) - assert sum(word2ph) == len(phones) - assert len(word2ph) == len(text) # Sometimes it will crash,you can add a try-catch. - phones = ["_"] + phones + ["_"] - tones = [0] + tones + [0] - word2ph = [1] + word2ph + [1] - return phones, tones, word2ph - - -def _get_initials_finals(word): - initials = [] - finals = [] - orig_initials = lazy_pinyin(word, neutral_tone_with_five=True, style=Style.INITIALS) - orig_finals = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3 - ) - for c, v in zip(orig_initials, orig_finals): - initials.append(c) - finals.append(v) - return initials, finals - - -def _g2p(segments): - phones_list = [] - tones_list = [] - word2ph = [] - for seg in segments: - # Replace all English words in the sentence - seg = re.sub("[a-zA-Z]+", "", seg) - seg_cut = psg.lcut(seg) - initials = [] - finals = [] - seg_cut = tone_modifier.pre_merge_for_modify(seg_cut) - for word, pos in seg_cut: - if pos == "eng": - continue - sub_initials, sub_finals = _get_initials_finals(word) - sub_finals = tone_modifier.modified_tone(word, pos, sub_finals) - initials.append(sub_initials) - finals.append(sub_finals) - - # assert len(sub_initials) == len(sub_finals) == len(word) - initials = sum(initials, []) - finals = sum(finals, []) - # - for c, v in zip(initials, finals): - raw_pinyin = c + v - # NOTE: post process for pypinyin outputs - # we discriminate i, ii and iii - if c == v: - assert c in punctuation - phone = [c] - tone = "0" - word2ph.append(1) - else: - v_without_tone = v[:-1] - tone = v[-1] - - pinyin = c + v_without_tone - assert tone in "12345" - - if c: - # 多音节 - v_rep_map = { - "uei": "ui", - "iou": "iu", - "uen": "un", - } - if v_without_tone in v_rep_map.keys(): - pinyin = c + v_rep_map[v_without_tone] - else: - # 单音节 - pinyin_rep_map = { - "ing": "ying", - "i": "yi", - "in": "yin", - "u": "wu", - } - if pinyin in pinyin_rep_map.keys(): - pinyin = pinyin_rep_map[pinyin] - else: - single_rep_map = { - "v": "yu", - "e": "e", - "i": "y", - "u": "w", - } - if pinyin[0] in single_rep_map.keys(): - pinyin = single_rep_map[pinyin[0]] + pinyin[1:] - - assert pinyin in pinyin_to_symbol_map.keys(), (pinyin, seg, 
raw_pinyin) - phone = pinyin_to_symbol_map[pinyin].split(" ") - word2ph.append(len(phone)) - - phones_list += phone - tones_list += [int(tone)] * len(phone) - return phones_list, tones_list, word2ph - - -def text_normalize(text): - numbers = re.findall(r"\d+(?:\.?\d+)?", text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - text = replace_punctuation(text) - return text - - -def get_bert_feature(text, word2ph): - from text import chinese_bert - - return chinese_bert.get_bert_feature(text, word2ph) - - -if __name__ == "__main__": - from text.chinese_bert import get_bert_feature - - text = "啊!但是《原神》是由,米哈\游自主, [研发]的一款全.新开放世界.冒险游戏" - text = text_normalize(text) - print(text) - phones, tones, word2ph = g2p(text) - bert = get_bert_feature(text, word2ph) - - print(phones, tones, word2ph, bert.shape) - - -# # 示例用法 -# text = "这是一个示例文本:,你好!这是一个测试...." -# print(g2p_paddle(text)) # 输出: 这是一个示例文本你好这是一个测试 diff --git a/spaces/jgerbscheid/dpa-example/dijkprofile_annotator/preprocessing/__init__.py b/spaces/jgerbscheid/dpa-example/dijkprofile_annotator/preprocessing/__init__.py deleted file mode 100644 index 2be1f2185aac66e5e838463cc70520658477f5f6..0000000000000000000000000000000000000000 --- a/spaces/jgerbscheid/dpa-example/dijkprofile_annotator/preprocessing/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -from .preprocessing import filepath_pair_to_labeled_sample -from .preprocessing import file_pairs_to_tensor_profiles -from .preprocessing import read_charpoints_file -from .preprocessing import read_surfaceline_file -from .preprocessing import make_height_profiles -from .preprocessing import make_labeled_height_profiles -from .preprocessing import get_file_pairs_from_dir -from .preprocessing import load_datasets diff --git a/spaces/jharrison27/moleculemodeler/app.py b/spaces/jharrison27/moleculemodeler/app.py deleted file mode 100644 index 029c514704303515c5ff40a2bc326a63ca6551e0..0000000000000000000000000000000000000000 --- a/spaces/jharrison27/moleculemodeler/app.py +++ /dev/null @@ -1,174 +0,0 @@ -import streamlit as st -import ipywidgets -import py3Dmol - - -from rdkit import Chem -from rdkit.Chem import Draw -from PIL import Image -from rdkit import Chem -from rdkit.Chem import AllChem -from ipywidgets import interact,fixed,IntSlider -import streamlit as st -import streamlit.components.v1 as components -import py3Dmol -from rdkit import Chem -from rdkit.Chem import Draw -from rdkit.Chem import AllChem - - -def smi2conf(smiles): - '''Convert SMILES to rdkit.Mol with 3D coordinates''' - mol = Chem.MolFromSmiles(smiles) - if mol is not None: - mol = Chem.AddHs(mol) - AllChem.EmbedMolecule(mol) - AllChem.MMFFOptimizeMolecule(mol, maxIters=200) - return mol - else: - return None - -def MolTo3DView(mol, size=(300, 300), style="stick", surface=False, opacity=0.5): - """Draw molecule in 3D - - Args: - ---- - mol: rdMol, molecule to show - size: tuple(int, int), canvas size - style: str, type of drawing molecule - style can be 'line', 'stick', 'sphere', 'carton' - surface, bool, display SAS - opacity, float, opacity of surface, range 0.0-1.0 - Return: - ---- - viewer: py3Dmol.view, a class for constructing embedded 3Dmol.js views in ipython notebooks. 
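-
-    Example:
-    ----
-    A minimal usage sketch (the SMILES "CCO", ethanol, is an illustrative choice):
-
-        mol = smi2conf("CCO")
-        viewer = MolTo3DView(mol, size=(400, 400), style="stick")
-        viewer.show()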
- """ - assert style in ('line', 'stick', 'sphere', 'carton') - mblock = Chem.MolToMolBlock(mol) - viewer = py3Dmol.view(width=size[0], height=size[1]) - viewer.addModel(mblock, 'mol') - viewer.setStyle({style:{}}) - if surface: - viewer.addSurface(py3Dmol.SAS, {'opacity': opacity}) - viewer.zoomTo() - return viewer - -def MakeMolecule(name, ingredients): - st.write(name, ": ", ingredients) - m = Chem.MolFromSmiles(ingredients) - im=Draw.MolToImage(m) - st.image(im) - -def conf_viewer(idx): - mol = confs[idx] - return MolTo3DView(mol).show() - -def style_selector(idx, s): - conf = confs[idx] - return MolTo3DView(conf, style=s).show() - -@interact -def smi2viewer(smi='CC=O'): - try: - conf = smi2conf(smi) - return MolTo3DView(conf).show() - except: - return None - -smi = 'COc3nc(OCc2ccc(C#N)c(c1ccc(C(=O)O)cc1)c2P(=O)(O)O)ccc3C[NH2+]CC(I)NC(=O)C(F)(Cl)Br' -conf = smi2conf(smi) -viewer = MolTo3DView(conf, size=(600, 300), style='sphere') -viewer.show() - -#compound_smiles = 'c1cc(C(=O)O)c(OC(=O)C)cc1' -#m = Chem.MolFromSmiles(compound_smiles) -#im=Draw.MolToImage(m) -#st.image(im) - -viewer = MolTo3DView(conf, size=(600, 300), style='sphere') -viewer.show() - -smis = [ 'COc3nc(OCc2ccc(C#N)c(c1ccc(C(=O)O)cc1)c2P(=O)(O)O)ccc3C[NH2+]CC(I)NC(=O)C(F)(Cl)Br', - 'CC(NCCNCC1=CC=C(OCC2=C(C)C(C3=CC=CC=C3)=CC=C2)N=C1OC)=O', - 'Cc1c(COc2cc(OCc3cccc(c3)C#N)c(CN3C[C@H](O)C[C@H]3C(O)=O)cc2Cl)cccc1-c1ccc2OCCOc2c1', - 'CCCCC(=O)NCCCCC(=O)NCCCCCC(=O)[O-]', - "CC(NCCNCC1=CC=C(OCC2=C(C)C(C3=CC=CC=C3)=CC=C2)N=C1OC)=O"] - -confs = [smi2conf(s) for s in smis] - - -st.title('Molecule Modeler') -def show(smi, style='stick'): - mol = Chem.MolFromSmiles(smi) - mol = Chem.AddHs(mol) - AllChem.EmbedMolecule(mol) - AllChem.MMFFOptimizeMolecule(mol, maxIters=200) - mblock = Chem.MolToMolBlock(mol) - - view = py3Dmol.view(width=400, height=400) - view.addModel(mblock, 'mol') - view.setStyle({style:{}}) - view.zoomTo() - view.show() - view.render() - t =view.js() - f = open('viz.html', 'w') - f.write(t.startjs) - f.write(t.endjs) - f.close() - -compound_smiles=st.text_input('SMILES please','CCCCC(=O)NCCCCC(=O)NCCCCCC(=O)[O-]') -m = Chem.MolFromSmiles(compound_smiles) - -#Draw.MolToFile(m,'mol.png') - -show(compound_smiles) -HtmlFile = open("viz.html", 'r', encoding='utf-8') -source_code = HtmlFile.read() -c1,c2=st.columns(2) -with c1: - st.write('Chemical Graph 3D Molecule:') -with c2: - components.html(source_code, height = 400,width=400) - -st.write('Info about SMILES: https://archive.epa.gov/med/med_archive_03/web/html/smiles.html') -st.write('Learn about it at Wikipedia: https://en.wikipedia.org/wiki/Simplified_molecular-input_line-entry_system') -st.write('Search for any compound on PubChem at National Library of Medicine: https://pubchem.ncbi.nlm.nih.gov/#query=vitamin%20e') - - -MakeMolecule("COVID-19 Antiviral Remdesivir GS5734", "CCC(CC)COC(=O)[C@H](C)N[P@](=O)(OC[C@@H]1[C@H]([C@H]([C@](O1)(C#N)C2=CC=C3N2N=CN=C3N)O)O)OC4=CC=CC=C4") -MakeMolecule("Ritonavir", "CC(C)C1=NC(=CS1)CN(C)C(=O)N[C@@H](C(C)C)C(=O)N[C@@H](CC2=CC=CC=C2)C[C@@H]([C@H](CC3=CC=CC=C3)NC(=O)OCC4=CN=CS4)O") -MakeMolecule("Chloroquine", "CCN(CC)CCCC(C)NC1=C2C=CC(=CC2=NC=C1)Cl") -MakeMolecule("Fingolimod", "CCCCCCCCC1=CC=C(C=C1)CCC(CO)(CO)N") -MakeMolecule("N4-Hydroxycytidine", "C1=CN(C(=O)N=C1NO)[C@H]2[C@@H]([C@@H]([C@H](O2)CO)O)O") -MakeMolecule("Favipiravir", "C1=C(N=C(C(=O)N1)C(=O)N)F") - -MakeMolecule("DNA", "C1C(C(OC1N)COP(=O)(O)OC2CC(OC2COP(=O)(O)OC3CC(OC3CO)N)N)O") -MakeMolecule("Trecovirsen DNA", 
"CC1=CN(C(=O)NC1=O)C2CC(C(O2)COP(=S)(O)OC3CC(OC3COP(=S)(O)OC4CC(OC4COP(=S)(O)OC5CC(OC5COP(=S)(O)OC6CC(OC6COP(=S)(O)OC7CC(OC7COP(=S)(O)OC8CC(OC8COP(=S)(O)OC9CC(OC9COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1COP(=S)(O)OC1CC(OC1CO)N1C=CC(=NC1=O)N)N1C=C(C(=O)NC1=O)C)N1C=CC(=NC1=O)N)N1C=C(C(=O)NC1=O)C)N1C=CC(=NC1=O)N)N1C=NC2=C1N=C(NC2=O)N)N1C=CC(=NC1=O)N)N1C=NC2=C(N=CN=C21)N)N1C=CC(=NC1=O)N)N1C=CC(=NC1=O)N)N1C=CC(=NC1=O)N)N1C=NC2=C(N=CN=C21)N)N1C=C(C(=O)NC1=O)C)N1C=CC(=NC1=O)N)N1C=C(C(=O)NC1=O)C)N1C=CC(=NC1=O)N)N1C=C(C(=O)NC1=O)C)N1C=CC(=NC1=O)N)N1C=C(C(=O)NC1=O)C)N1C=CC(=NC1=O)N)N1C=CC(=NC1=O)N)N1C=C(C(=O)NC1=O)C)N1C=C(C(=O)NC1=O)C)N1C=CC(=NC1=O)N)O") - - - -MakeMolecule("Ibuprofen", "CC(C)CC1=CC=C(C=C1)C(C)C(=O)O") -MakeMolecule("LSD", "CCN(CC)C(=O)[C@H]1CN([C@@H]2CC3=CNC4=CC=CC(=C34)C2=C1)C") - -MakeMolecule("Ethanol", "CCO") -MakeMolecule("Acetic acid", "CC(=O)O") -MakeMolecule("Cyclohexane", "C1CCCCC1") -MakeMolecule("Pyridine", "c1cnccc1") -MakeMolecule("Nicotine", "CN1CCC[C@H]1c2cccnc2") - - -MakeMolecule("Helium", "[3He]") -MakeMolecule("Hydrogen", "[H]") -MakeMolecule("Caffeine", "CN1C=NC2=C1C(=O)N(C(=O)N2C)C") -MakeMolecule("Sugar", "C([C@@H]1[C@H]([C@@H]([C@H]([C@H](O1)O[C@]2([C@H]([C@@H]([C@H](O2)CO)O)O)CO)O)O)O)O") -MakeMolecule("Dinitrogen", "N#N") -MakeMolecule("Methyl isocyanate (MIC)", "CN=C=O") -MakeMolecule("Copper(II) sulfate", "[Cu+2].[O-]S(=O)(=O)[O-]") -MakeMolecule("Flavopereirin (C17H15N2)", "CCc(c1)ccc2[n+]1ccc3c2[nH]c4c3cccc4 CCc1c[n+]2ccc3c4ccccc4[nH]c3c2cc1") -MakeMolecule("Glucose (β-D-glucopyranose) (C6H12O6)", "OC[C@@H](O1)[C@@H](O)[C@H](O)[C@@H](O)[C@H](O)1") -MakeMolecule("Thiamine (vitamin B1, C12H17N4OS+)", "OCCc1c(C)[n+](cs1)Cc2cnc(C)nc2N") -MakeMolecule("cephalostatin-1", "CC(C)(O1)C[C@@H](O)[C@@]1(O2)[C@@H](C)[C@@H]3CC=C4[C@]3(C2)C(=O)C[C@H]5[C@H]4CC[C@@H](C6)[C@]5(C)Cc(n7)c6nc(C[C@@]89(C))c7C[C@@H]8CC[C@@H]%10[C@@H]9C[C@@H](O)[C@@]%11(C)C%10=C[C@H](O%12)[C@]%11(O)[C@H](C)[C@]%12(O%13)[C@H](O)C[C@@]%13(C)CO") -MakeMolecule("Vitamin E", "CC(C)CCC[C@@H](C)CCC[C@@H](C)CCC [C@]1(C)CCc2c(C)c(O)c(C)c(C)c2O1") -MakeMolecule("Vitamin K2", "CC1=C(C(=O)C2=CC=CC=C2C1=O)CC=C(C)CCC=C(C)CCC=C(C)CCC=C(C)C") -MakeMolecule("Vitamin K1", "CC(C)CCCC(C)CCCC(C)CCCC(=CCC12C(=O)C3=CC=CC=C3C(=O)C1(O2)C)C") \ No newline at end of file diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Hash/HMAC.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Hash/HMAC.py deleted file mode 100644 index e82bb9d30c8c2585c6e7ce019a3495635f9a5995..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Hash/HMAC.py +++ /dev/null @@ -1,213 +0,0 @@ -# -# HMAC.py - Implements the HMAC algorithm as described by RFC 2104. -# -# =================================================================== -# -# Copyright (c) 2014, Legrandin -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. 
Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. -# =================================================================== - -from Crypto.Util.py3compat import bord, tobytes - -from binascii import unhexlify - -from Crypto.Hash import MD5 -from Crypto.Hash import BLAKE2s -from Crypto.Util.strxor import strxor -from Crypto.Random import get_random_bytes - -__all__ = ['new', 'HMAC'] - - -class HMAC(object): - """An HMAC hash object. - Do not instantiate directly. Use the :func:`new` function. - - :ivar digest_size: the size in bytes of the resulting MAC tag - :vartype digest_size: integer - """ - - def __init__(self, key, msg=b"", digestmod=None): - - if digestmod is None: - digestmod = MD5 - - if msg is None: - msg = b"" - - # Size of the MAC tag - self.digest_size = digestmod.digest_size - - self._digestmod = digestmod - - if isinstance(key, memoryview): - key = key.tobytes() - - try: - if len(key) <= digestmod.block_size: - # Step 1 or 2 - key_0 = key + b"\x00" * (digestmod.block_size - len(key)) - else: - # Step 3 - hash_k = digestmod.new(key).digest() - key_0 = hash_k + b"\x00" * (digestmod.block_size - len(hash_k)) - except AttributeError: - # Not all hash types have "block_size" - raise ValueError("Hash type incompatible to HMAC") - - # Step 4 - key_0_ipad = strxor(key_0, b"\x36" * len(key_0)) - - # Start step 5 and 6 - self._inner = digestmod.new(key_0_ipad) - self._inner.update(msg) - - # Step 7 - key_0_opad = strxor(key_0, b"\x5c" * len(key_0)) - - # Start step 8 and 9 - self._outer = digestmod.new(key_0_opad) - - def update(self, msg): - """Authenticate the next chunk of message. - - Args: - data (byte string/byte array/memoryview): The next chunk of data - """ - - self._inner.update(msg) - return self - - def _pbkdf2_hmac_assist(self, first_digest, iterations): - """Carry out the expensive inner loop for PBKDF2-HMAC""" - - result = self._digestmod._pbkdf2_hmac_assist( - self._inner, - self._outer, - first_digest, - iterations) - return result - - def copy(self): - """Return a copy ("clone") of the HMAC object. - - The copy will have the same internal state as the original HMAC - object. - This can be used to efficiently compute the MAC tag of byte - strings that share a common initial substring. - - :return: An :class:`HMAC` - """ - - new_hmac = HMAC(b"fake key", digestmod=self._digestmod) - - # Syncronize the state - new_hmac._inner = self._inner.copy() - new_hmac._outer = self._outer.copy() - - return new_hmac - - def digest(self): - """Return the **binary** (non-printable) MAC tag of the message - authenticated so far. 
- - :return: The MAC tag digest, computed over the data processed so far. - Binary form. - :rtype: byte string - """ - - frozen_outer_hash = self._outer.copy() - frozen_outer_hash.update(self._inner.digest()) - return frozen_outer_hash.digest() - - def verify(self, mac_tag): - """Verify that a given **binary** MAC (computed by another party) - is valid. - - Args: - mac_tag (byte string/byte string/memoryview): the expected MAC of the message. - - Raises: - ValueError: if the MAC does not match. It means that the message - has been tampered with or that the MAC key is incorrect. - """ - - secret = get_random_bytes(16) - - mac1 = BLAKE2s.new(digest_bits=160, key=secret, data=mac_tag) - mac2 = BLAKE2s.new(digest_bits=160, key=secret, data=self.digest()) - - if mac1.digest() != mac2.digest(): - raise ValueError("MAC check failed") - - def hexdigest(self): - """Return the **printable** MAC tag of the message authenticated so far. - - :return: The MAC tag, computed over the data processed so far. - Hexadecimal encoded. - :rtype: string - """ - - return "".join(["%02x" % bord(x) - for x in tuple(self.digest())]) - - def hexverify(self, hex_mac_tag): - """Verify that a given **printable** MAC (computed by another party) - is valid. - - Args: - hex_mac_tag (string): the expected MAC of the message, - as a hexadecimal string. - - Raises: - ValueError: if the MAC does not match. It means that the message - has been tampered with or that the MAC key is incorrect. - """ - - self.verify(unhexlify(tobytes(hex_mac_tag))) - - -def new(key, msg=b"", digestmod=None): - """Create a new MAC object. - - Args: - key (bytes/bytearray/memoryview): - key for the MAC object. - It must be long enough to match the expected security level of the - MAC. - msg (bytes/bytearray/memoryview): - Optional. The very first chunk of the message to authenticate. - It is equivalent to an early call to :meth:`HMAC.update`. - digestmod (module): - The hash to use to implement the HMAC. - Default is :mod:`Crypto.Hash.MD5`. 
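-
-    A hedged usage sketch (SHA256 and the literal key/message are illustrative
-    choices, not defaults of this module):
-
-        from Crypto.Hash import HMAC, SHA256
-
-        h = HMAC.new(b"secret key", b"hello", digestmod=SHA256)
-        tag = h.hexdigest()
-        HMAC.new(b"secret key", b"hello", digestmod=SHA256).hexverify(tag)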
- - Returns: - An :class:`HMAC` object - """ - - return HMAC(key, msg, digestmod) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PyPDF2/_codecs/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PyPDF2/_codecs/__init__.py deleted file mode 100644 index 7e056181360cccac58b08ba749630d1ecbab3a2c..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PyPDF2/_codecs/__init__.py +++ /dev/null @@ -1,63 +0,0 @@ -from typing import Dict, List - -from .adobe_glyphs import adobe_glyphs -from .pdfdoc import _pdfdoc_encoding -from .std import _std_encoding -from .symbol import _symbol_encoding -from .zapfding import _zapfding_encoding - - -def fill_from_encoding(enc: str) -> List[str]: - lst: List[str] = [] - for x in range(256): - try: - lst += (bytes((x,)).decode(enc),) - except Exception: - lst += (chr(x),) - return lst - - -def rev_encoding(enc: List[str]) -> Dict[str, int]: - rev: Dict[str, int] = {} - for i in range(256): - char = enc[i] - if char == "\u0000": - continue - assert char not in rev, ( - str(char) + " at " + str(i) + " already at " + str(rev[char]) - ) - rev[char] = i - return rev - - -_win_encoding = fill_from_encoding("cp1252") -_mac_encoding = fill_from_encoding("mac_roman") - - -_win_encoding_rev: Dict[str, int] = rev_encoding(_win_encoding) -_mac_encoding_rev: Dict[str, int] = rev_encoding(_mac_encoding) -_symbol_encoding_rev: Dict[str, int] = rev_encoding(_symbol_encoding) -_zapfding_encoding_rev: Dict[str, int] = rev_encoding(_zapfding_encoding) -_pdfdoc_encoding_rev: Dict[str, int] = rev_encoding(_pdfdoc_encoding) - - -charset_encoding: Dict[str, List[str]] = { - "/StandardCoding": _std_encoding, - "/WinAnsiEncoding": _win_encoding, - "/MacRomanEncoding": _mac_encoding, - "/PDFDocEncoding": _pdfdoc_encoding, - "/Symbol": _symbol_encoding, - "/ZapfDingbats": _zapfding_encoding, -} - -__all__ = [ - "adobe_glyphs", - "_std_encoding", - "_symbol_encoding", - "_zapfding_encoding", - "_pdfdoc_encoding", - "_pdfdoc_encoding_rev", - "_win_encoding", - "_mac_encoding", - "charset_encoding", -] diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/aiohttp/test_utils.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/aiohttp/test_utils.py deleted file mode 100644 index fcda2f3ddc045a381470012ba331c75299af4981..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/aiohttp/test_utils.py +++ /dev/null @@ -1,706 +0,0 @@ -"""Utilities shared by tests.""" - -import asyncio -import contextlib -import gc -import inspect -import ipaddress -import os -import socket -import sys -import warnings -from abc import ABC, abstractmethod -from types import TracebackType -from typing import ( - TYPE_CHECKING, - Any, - Callable, - Iterator, - List, - Optional, - Type, - Union, - cast, -) -from unittest import mock - -from aiosignal import Signal -from multidict import CIMultiDict, CIMultiDictProxy -from yarl import URL - -import aiohttp -from aiohttp.client import _RequestContextManager, _WSRequestContextManager - -from . 
import ClientSession, hdrs -from .abc import AbstractCookieJar -from .client_reqrep import ClientResponse -from .client_ws import ClientWebSocketResponse -from .helpers import PY_38, sentinel -from .http import HttpVersion, RawRequestMessage -from .web import ( - Application, - AppRunner, - BaseRunner, - Request, - Server, - ServerRunner, - SockSite, - UrlMappingMatchInfo, -) -from .web_protocol import _RequestHandler - -if TYPE_CHECKING: # pragma: no cover - from ssl import SSLContext -else: - SSLContext = None - -if PY_38: - from unittest import IsolatedAsyncioTestCase as TestCase -else: - from asynctest import TestCase # type: ignore[no-redef] - -REUSE_ADDRESS = os.name == "posix" and sys.platform != "cygwin" - - -def get_unused_port_socket( - host: str, family: socket.AddressFamily = socket.AF_INET -) -> socket.socket: - return get_port_socket(host, 0, family) - - -def get_port_socket( - host: str, port: int, family: socket.AddressFamily -) -> socket.socket: - s = socket.socket(family, socket.SOCK_STREAM) - if REUSE_ADDRESS: - # Windows has different semantics for SO_REUSEADDR, - # so don't set it. Ref: - # https://docs.microsoft.com/en-us/windows/win32/winsock/using-so-reuseaddr-and-so-exclusiveaddruse - s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) - s.bind((host, port)) - return s - - -def unused_port() -> int: - """Return a port that is unused on the current host.""" - with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: - s.bind(("127.0.0.1", 0)) - return cast(int, s.getsockname()[1]) - - -class BaseTestServer(ABC): - __test__ = False - - def __init__( - self, - *, - scheme: Union[str, object] = sentinel, - loop: Optional[asyncio.AbstractEventLoop] = None, - host: str = "127.0.0.1", - port: Optional[int] = None, - skip_url_asserts: bool = False, - socket_factory: Callable[ - [str, int, socket.AddressFamily], socket.socket - ] = get_port_socket, - **kwargs: Any, - ) -> None: - self._loop = loop - self.runner: Optional[BaseRunner] = None - self._root: Optional[URL] = None - self.host = host - self.port = port - self._closed = False - self.scheme = scheme - self.skip_url_asserts = skip_url_asserts - self.socket_factory = socket_factory - - async def start_server( - self, loop: Optional[asyncio.AbstractEventLoop] = None, **kwargs: Any - ) -> None: - if self.runner: - return - self._loop = loop - self._ssl = kwargs.pop("ssl", None) - self.runner = await self._make_runner(**kwargs) - await self.runner.setup() - if not self.port: - self.port = 0 - try: - version = ipaddress.ip_address(self.host).version - except ValueError: - version = 4 - family = socket.AF_INET6 if version == 6 else socket.AF_INET - _sock = self.socket_factory(self.host, self.port, family) - self.host, self.port = _sock.getsockname()[:2] - site = SockSite(self.runner, sock=_sock, ssl_context=self._ssl) - await site.start() - server = site._server - assert server is not None - sockets = server.sockets - assert sockets is not None - self.port = sockets[0].getsockname()[1] - if self.scheme is sentinel: - if self._ssl: - scheme = "https" - else: - scheme = "http" - self.scheme = scheme - self._root = URL(f"{self.scheme}://{self.host}:{self.port}") - - @abstractmethod # pragma: no cover - async def _make_runner(self, **kwargs: Any) -> BaseRunner: - pass - - def make_url(self, path: str) -> URL: - assert self._root is not None - url = URL(path) - if not self.skip_url_asserts: - assert not url.is_absolute() - return self._root.join(url) - else: - return URL(str(self._root) + path) - - @property - def 
started(self) -> bool: - return self.runner is not None - - @property - def closed(self) -> bool: - return self._closed - - @property - def handler(self) -> Server: - # for backward compatibility - # web.Server instance - runner = self.runner - assert runner is not None - assert runner.server is not None - return runner.server - - async def close(self) -> None: - """Close all fixtures created by the test client. - - After that point, the TestClient is no longer usable. - - This is an idempotent function: running close multiple times - will not have any additional effects. - - close is also run when the object is garbage collected, and on - exit when used as a context manager. - - """ - if self.started and not self.closed: - assert self.runner is not None - await self.runner.cleanup() - self._root = None - self.port = None - self._closed = True - - def __enter__(self) -> None: - raise TypeError("Use async with instead") - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_value: Optional[BaseException], - traceback: Optional[TracebackType], - ) -> None: - # __exit__ should exist in pair with __enter__ but never executed - pass # pragma: no cover - - async def __aenter__(self) -> "BaseTestServer": - await self.start_server(loop=self._loop) - return self - - async def __aexit__( - self, - exc_type: Optional[Type[BaseException]], - exc_value: Optional[BaseException], - traceback: Optional[TracebackType], - ) -> None: - await self.close() - - -class TestServer(BaseTestServer): - def __init__( - self, - app: Application, - *, - scheme: Union[str, object] = sentinel, - host: str = "127.0.0.1", - port: Optional[int] = None, - **kwargs: Any, - ): - self.app = app - super().__init__(scheme=scheme, host=host, port=port, **kwargs) - - async def _make_runner(self, **kwargs: Any) -> BaseRunner: - return AppRunner(self.app, **kwargs) - - -class RawTestServer(BaseTestServer): - def __init__( - self, - handler: _RequestHandler, - *, - scheme: Union[str, object] = sentinel, - host: str = "127.0.0.1", - port: Optional[int] = None, - **kwargs: Any, - ) -> None: - self._handler = handler - super().__init__(scheme=scheme, host=host, port=port, **kwargs) - - async def _make_runner(self, debug: bool = True, **kwargs: Any) -> ServerRunner: - srv = Server(self._handler, loop=self._loop, debug=debug, **kwargs) - return ServerRunner(srv, debug=debug, **kwargs) - - -class TestClient: - """ - A test client implementation. - - To write functional tests for aiohttp based servers. 
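-
-    A minimal usage sketch (assumes ``from aiohttp import web``; the empty
-    application is illustrative, real tests would register routes on it):
-
-        app = web.Application()
-        async with TestClient(TestServer(app)) as client:
-            resp = await client.get("/")
-            assert resp.status == 404  # no routes registered in this toy app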
- - """ - - __test__ = False - - def __init__( - self, - server: BaseTestServer, - *, - cookie_jar: Optional[AbstractCookieJar] = None, - loop: Optional[asyncio.AbstractEventLoop] = None, - **kwargs: Any, - ) -> None: - if not isinstance(server, BaseTestServer): - raise TypeError( - "server must be TestServer " "instance, found type: %r" % type(server) - ) - self._server = server - self._loop = loop - if cookie_jar is None: - cookie_jar = aiohttp.CookieJar(unsafe=True, loop=loop) - self._session = ClientSession(loop=loop, cookie_jar=cookie_jar, **kwargs) - self._closed = False - self._responses: List[ClientResponse] = [] - self._websockets: List[ClientWebSocketResponse] = [] - - async def start_server(self) -> None: - await self._server.start_server(loop=self._loop) - - @property - def host(self) -> str: - return self._server.host - - @property - def port(self) -> Optional[int]: - return self._server.port - - @property - def server(self) -> BaseTestServer: - return self._server - - @property - def app(self) -> Optional[Application]: - return cast(Optional[Application], getattr(self._server, "app", None)) - - @property - def session(self) -> ClientSession: - """An internal aiohttp.ClientSession. - - Unlike the methods on the TestClient, client session requests - do not automatically include the host in the url queried, and - will require an absolute path to the resource. - - """ - return self._session - - def make_url(self, path: str) -> URL: - return self._server.make_url(path) - - async def _request(self, method: str, path: str, **kwargs: Any) -> ClientResponse: - resp = await self._session.request(method, self.make_url(path), **kwargs) - # save it to close later - self._responses.append(resp) - return resp - - def request(self, method: str, path: str, **kwargs: Any) -> _RequestContextManager: - """Routes a request to tested http server. - - The interface is identical to aiohttp.ClientSession.request, - except the loop kwarg is overridden by the instance used by the - test server. - - """ - return _RequestContextManager(self._request(method, path, **kwargs)) - - def get(self, path: str, **kwargs: Any) -> _RequestContextManager: - """Perform an HTTP GET request.""" - return _RequestContextManager(self._request(hdrs.METH_GET, path, **kwargs)) - - def post(self, path: str, **kwargs: Any) -> _RequestContextManager: - """Perform an HTTP POST request.""" - return _RequestContextManager(self._request(hdrs.METH_POST, path, **kwargs)) - - def options(self, path: str, **kwargs: Any) -> _RequestContextManager: - """Perform an HTTP OPTIONS request.""" - return _RequestContextManager(self._request(hdrs.METH_OPTIONS, path, **kwargs)) - - def head(self, path: str, **kwargs: Any) -> _RequestContextManager: - """Perform an HTTP HEAD request.""" - return _RequestContextManager(self._request(hdrs.METH_HEAD, path, **kwargs)) - - def put(self, path: str, **kwargs: Any) -> _RequestContextManager: - """Perform an HTTP PUT request.""" - return _RequestContextManager(self._request(hdrs.METH_PUT, path, **kwargs)) - - def patch(self, path: str, **kwargs: Any) -> _RequestContextManager: - """Perform an HTTP PATCH request.""" - return _RequestContextManager(self._request(hdrs.METH_PATCH, path, **kwargs)) - - def delete(self, path: str, **kwargs: Any) -> _RequestContextManager: - """Perform an HTTP PATCH request.""" - return _RequestContextManager(self._request(hdrs.METH_DELETE, path, **kwargs)) - - def ws_connect(self, path: str, **kwargs: Any) -> _WSRequestContextManager: - """Initiate websocket connection. 
- - The api corresponds to aiohttp.ClientSession.ws_connect. - - """ - return _WSRequestContextManager(self._ws_connect(path, **kwargs)) - - async def _ws_connect(self, path: str, **kwargs: Any) -> ClientWebSocketResponse: - ws = await self._session.ws_connect(self.make_url(path), **kwargs) - self._websockets.append(ws) - return ws - - async def close(self) -> None: - """Close all fixtures created by the test client. - - After that point, the TestClient is no longer usable. - - This is an idempotent function: running close multiple times - will not have any additional effects. - - close is also run on exit when used as a(n) (asynchronous) - context manager. - - """ - if not self._closed: - for resp in self._responses: - resp.close() - for ws in self._websockets: - await ws.close() - await self._session.close() - await self._server.close() - self._closed = True - - def __enter__(self) -> None: - raise TypeError("Use async with instead") - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc: Optional[BaseException], - tb: Optional[TracebackType], - ) -> None: - # __exit__ should exist in pair with __enter__ but never executed - pass # pragma: no cover - - async def __aenter__(self) -> "TestClient": - await self.start_server() - return self - - async def __aexit__( - self, - exc_type: Optional[Type[BaseException]], - exc: Optional[BaseException], - tb: Optional[TracebackType], - ) -> None: - await self.close() - - -class AioHTTPTestCase(TestCase): - """A base class to allow for unittest web applications using aiohttp. - - Provides the following: - - * self.client (aiohttp.test_utils.TestClient): an aiohttp test client. - * self.loop (asyncio.BaseEventLoop): the event loop in which the - application and server are running. - * self.app (aiohttp.web.Application): the application returned by - self.get_application() - - Note that the TestClient's methods are asynchronous: you have to - execute function on the test client using asynchronous methods. - """ - - async def get_application(self) -> Application: - """Get application. - - This method should be overridden - to return the aiohttp.web.Application - object to test. - """ - return self.get_app() - - def get_app(self) -> Application: - """Obsolete method used to constructing web application. - - Use .get_application() coroutine instead. 
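-
-        A hedged sketch of the preferred override (class, handler and route names are
-        illustrative, assuming ``from aiohttp import web``):
-
-            class MyAppTestCase(AioHTTPTestCase):
-                async def get_application(self):
-                    async def hello(request):
-                        return web.Response(text="ok")
-                    app = web.Application()
-                    app.router.add_get("/", hello)
-                    return app
-
-                async def test_hello(self):
-                    resp = await self.client.get("/")
-                    self.assertEqual(resp.status, 200)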
- """ - raise RuntimeError("Did you forget to define get_application()?") - - def setUp(self) -> None: - if not PY_38: - asyncio.get_event_loop().run_until_complete(self.asyncSetUp()) - - async def asyncSetUp(self) -> None: - try: - self.loop = asyncio.get_running_loop() - except (AttributeError, RuntimeError): # AttributeError->py36 - self.loop = asyncio.get_event_loop_policy().get_event_loop() - - return await self.setUpAsync() - - async def setUpAsync(self) -> None: - self.app = await self.get_application() - self.server = await self.get_server(self.app) - self.client = await self.get_client(self.server) - - await self.client.start_server() - - def tearDown(self) -> None: - if not PY_38: - self.loop.run_until_complete(self.asyncTearDown()) - - async def asyncTearDown(self) -> None: - return await self.tearDownAsync() - - async def tearDownAsync(self) -> None: - await self.client.close() - - async def get_server(self, app: Application) -> TestServer: - """Return a TestServer instance.""" - return TestServer(app, loop=self.loop) - - async def get_client(self, server: TestServer) -> TestClient: - """Return a TestClient instance.""" - return TestClient(server, loop=self.loop) - - -def unittest_run_loop(func: Any, *args: Any, **kwargs: Any) -> Any: - """ - A decorator dedicated to use with asynchronous AioHTTPTestCase test methods. - - In 3.8+, this does nothing. - """ - warnings.warn( - "Decorator `@unittest_run_loop` is no longer needed in aiohttp 3.8+", - DeprecationWarning, - stacklevel=2, - ) - return func - - -_LOOP_FACTORY = Callable[[], asyncio.AbstractEventLoop] - - -@contextlib.contextmanager -def loop_context( - loop_factory: _LOOP_FACTORY = asyncio.new_event_loop, fast: bool = False -) -> Iterator[asyncio.AbstractEventLoop]: - """A contextmanager that creates an event_loop, for test purposes. - - Handles the creation and cleanup of a test loop. - """ - loop = setup_test_loop(loop_factory) - yield loop - teardown_test_loop(loop, fast=fast) - - -def setup_test_loop( - loop_factory: _LOOP_FACTORY = asyncio.new_event_loop, -) -> asyncio.AbstractEventLoop: - """Create and return an asyncio.BaseEventLoop instance. - - The caller should also call teardown_test_loop, - once they are done with the loop. 
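-
-    Illustrative pairing (sketch only):
-
-        loop = setup_test_loop()
-        try:
-            loop.run_until_complete(asyncio.sleep(0))
-        finally:
-            teardown_test_loop(loop, fast=True)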
- """ - loop = loop_factory() - try: - module = loop.__class__.__module__ - skip_watcher = "uvloop" in module - except AttributeError: # pragma: no cover - # Just in case - skip_watcher = True - asyncio.set_event_loop(loop) - if sys.platform != "win32" and not skip_watcher: - policy = asyncio.get_event_loop_policy() - watcher: asyncio.AbstractChildWatcher - try: # Python >= 3.8 - # Refs: - # * https://github.com/pytest-dev/pytest-xdist/issues/620 - # * https://stackoverflow.com/a/58614689/595220 - # * https://bugs.python.org/issue35621 - # * https://github.com/python/cpython/pull/14344 - watcher = asyncio.ThreadedChildWatcher() - except AttributeError: # Python < 3.8 - watcher = asyncio.SafeChildWatcher() - watcher.attach_loop(loop) - with contextlib.suppress(NotImplementedError): - policy.set_child_watcher(watcher) - return loop - - -def teardown_test_loop(loop: asyncio.AbstractEventLoop, fast: bool = False) -> None: - """Teardown and cleanup an event_loop created by setup_test_loop.""" - closed = loop.is_closed() - if not closed: - loop.call_soon(loop.stop) - loop.run_forever() - loop.close() - - if not fast: - gc.collect() - - asyncio.set_event_loop(None) - - -def _create_app_mock() -> mock.MagicMock: - def get_dict(app: Any, key: str) -> Any: - return app.__app_dict[key] - - def set_dict(app: Any, key: str, value: Any) -> None: - app.__app_dict[key] = value - - app = mock.MagicMock(spec=Application) - app.__app_dict = {} - app.__getitem__ = get_dict - app.__setitem__ = set_dict - - app._debug = False - app.on_response_prepare = Signal(app) - app.on_response_prepare.freeze() - return app - - -def _create_transport(sslcontext: Optional[SSLContext] = None) -> mock.Mock: - transport = mock.Mock() - - def get_extra_info(key: str) -> Optional[SSLContext]: - if key == "sslcontext": - return sslcontext - else: - return None - - transport.get_extra_info.side_effect = get_extra_info - return transport - - -def make_mocked_request( - method: str, - path: str, - headers: Any = None, - *, - match_info: Any = sentinel, - version: HttpVersion = HttpVersion(1, 1), - closing: bool = False, - app: Any = None, - writer: Any = sentinel, - protocol: Any = sentinel, - transport: Any = sentinel, - payload: Any = sentinel, - sslcontext: Optional[SSLContext] = None, - client_max_size: int = 1024**2, - loop: Any = ..., -) -> Request: - """Creates mocked web.Request testing purposes. - - Useful in unit tests, when spinning full web server is overkill or - specific conditions and errors are hard to trigger. 
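-
-    A short illustrative sketch (the path and header values are assumptions):
-
-        req = make_mocked_request("GET", "/items?limit=5", headers={"X-Token": "t"})
-        assert req.method == "GET"
-        assert req.headers["X-Token"] == "t"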
- """ - task = mock.Mock() - if loop is ...: - loop = mock.Mock() - loop.create_future.return_value = () - - if version < HttpVersion(1, 1): - closing = True - - if headers: - headers = CIMultiDictProxy(CIMultiDict(headers)) - raw_hdrs = tuple( - (k.encode("utf-8"), v.encode("utf-8")) for k, v in headers.items() - ) - else: - headers = CIMultiDictProxy(CIMultiDict()) - raw_hdrs = () - - chunked = "chunked" in headers.get(hdrs.TRANSFER_ENCODING, "").lower() - - message = RawRequestMessage( - method, - path, - version, - headers, - raw_hdrs, - closing, - None, - False, - chunked, - URL(path), - ) - if app is None: - app = _create_app_mock() - - if transport is sentinel: - transport = _create_transport(sslcontext) - - if protocol is sentinel: - protocol = mock.Mock() - protocol.transport = transport - - if writer is sentinel: - writer = mock.Mock() - writer.write_headers = make_mocked_coro(None) - writer.write = make_mocked_coro(None) - writer.write_eof = make_mocked_coro(None) - writer.drain = make_mocked_coro(None) - writer.transport = transport - - protocol.transport = transport - protocol.writer = writer - - if payload is sentinel: - payload = mock.Mock() - - req = Request( - message, payload, protocol, writer, task, loop, client_max_size=client_max_size - ) - - match_info = UrlMappingMatchInfo( - {} if match_info is sentinel else match_info, mock.Mock() - ) - match_info.add_app(app) - req._match_info = match_info - - return req - - -def make_mocked_coro( - return_value: Any = sentinel, raise_exception: Any = sentinel -) -> Any: - """Creates a coroutine mock.""" - - async def mock_coro(*args: Any, **kwargs: Any) -> Any: - if raise_exception is not sentinel: - raise raise_exception - if not inspect.isawaitable(return_value): - return return_value - await return_value - - return mock.Mock(wraps=mock_coro) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_b_s_l_n.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_b_s_l_n.py deleted file mode 100644 index 8e266fa54d0f0fd05bfde372627e1fb948d6f0fd..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_b_s_l_n.py +++ /dev/null @@ -1,6 +0,0 @@ -from .otBase import BaseTTXConverter - - -# https://developer.apple.com/fonts/TrueType-Reference-Manual/RM06/Chap6bsln.html -class table__b_s_l_n(BaseTTXConverter): - pass diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/implementations/http.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/implementations/http.py deleted file mode 100644 index 06551017ee8c11594bf7197add6af4203efb886f..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/implementations/http.py +++ /dev/null @@ -1,877 +0,0 @@ -import asyncio -import io -import logging -import re -import weakref -from copy import copy -from urllib.parse import urlparse - -import aiohttp -import requests -import yarl - -from fsspec.asyn import AbstractAsyncStreamedFile, AsyncFileSystem, sync, sync_wrapper -from fsspec.callbacks import _DEFAULT_CALLBACK -from fsspec.exceptions import FSTimeoutError -from fsspec.spec import AbstractBufferedFile -from fsspec.utils import DEFAULT_BLOCK_SIZE, isfilelike, nullcontext, tokenize - -from ..caching import AllBytes - -# https://stackoverflow.com/a/15926317/3821154 -ex = 
re.compile(r"""<(a|A)\s+(?:[^>]*?\s+)?(href|HREF)=["'](?P[^"']+)""") -ex2 = re.compile(r"""(?Phttp[s]?://[-a-zA-Z0-9@:%_+.~#?&/=]+)""") -logger = logging.getLogger("fsspec.http") - - -async def get_client(**kwargs): - return aiohttp.ClientSession(**kwargs) - - -class HTTPFileSystem(AsyncFileSystem): - """ - Simple File-System for fetching data via HTTP(S) - - ``ls()`` is implemented by loading the parent page and doing a regex - match on the result. If simple_link=True, anything of the form - "http(s)://server.com/stuff?thing=other"; otherwise only links within - HTML href tags will be used. - """ - - sep = "/" - - def __init__( - self, - simple_links=True, - block_size=None, - same_scheme=True, - size_policy=None, - cache_type="bytes", - cache_options=None, - asynchronous=False, - loop=None, - client_kwargs=None, - get_client=get_client, - encoded=False, - **storage_options, - ): - """ - NB: if this is called async, you must await set_client - - Parameters - ---------- - block_size: int - Blocks to read bytes; if 0, will default to raw requests file-like - objects instead of HTTPFile instances - simple_links: bool - If True, will consider both HTML tags and anything that looks - like a URL; if False, will consider only the former. - same_scheme: True - When doing ls/glob, if this is True, only consider paths that have - http/https matching the input URLs. - size_policy: this argument is deprecated - client_kwargs: dict - Passed to aiohttp.ClientSession, see - https://docs.aiohttp.org/en/stable/client_reference.html - For example, ``{'auth': aiohttp.BasicAuth('user', 'pass')}`` - get_client: Callable[..., aiohttp.ClientSession] - A callable which takes keyword arguments and constructs - an aiohttp.ClientSession. It's state will be managed by - the HTTPFileSystem class. - storage_options: key-value - Any other parameters passed on to requests - cache_type, cache_options: defaults used in open - """ - super().__init__(self, asynchronous=asynchronous, loop=loop, **storage_options) - self.block_size = block_size if block_size is not None else DEFAULT_BLOCK_SIZE - self.simple_links = simple_links - self.same_schema = same_scheme - self.cache_type = cache_type - self.cache_options = cache_options - self.client_kwargs = client_kwargs or {} - self.get_client = get_client - self.encoded = encoded - self.kwargs = storage_options - self._session = None - - # Clean caching-related parameters from `storage_options` - # before propagating them as `request_options` through `self.kwargs`. - # TODO: Maybe rename `self.kwargs` to `self.request_options` to make - # it clearer. 
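-        # Illustrative split (the keyword values are assumptions): constructing
-        # HTTPFileSystem(use_listings_cache=True, headers={"User-Agent": "demo"}) keeps
-        # the caching flag on the instance, while only {"headers": ...} is forwarded to
-        # aiohttp request calls through self.kwargs.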
- request_options = copy(storage_options) - self.use_listings_cache = request_options.pop("use_listings_cache", False) - request_options.pop("listings_expiry_time", None) - request_options.pop("max_paths", None) - request_options.pop("skip_instance_cache", None) - self.kwargs = request_options - - @property - def fsid(self): - return "http" - - def encode_url(self, url): - return yarl.URL(url, encoded=self.encoded) - - @staticmethod - def close_session(loop, session): - if loop is not None and loop.is_running(): - try: - sync(loop, session.close, timeout=0.1) - return - except (TimeoutError, FSTimeoutError): - pass - connector = getattr(session, "_connector", None) - if connector is not None: - # close after loop is dead - connector._close() - - async def set_session(self): - if self._session is None: - self._session = await self.get_client(loop=self.loop, **self.client_kwargs) - if not self.asynchronous: - weakref.finalize(self, self.close_session, self.loop, self._session) - return self._session - - @classmethod - def _strip_protocol(cls, path): - """For HTTP, we always want to keep the full URL""" - return path - - @classmethod - def _parent(cls, path): - # override, since _strip_protocol is different for URLs - par = super()._parent(path) - if len(par) > 7: # "http://..." - return par - return "" - - async def _ls_real(self, url, detail=True, **kwargs): - # ignoring URL-encoded arguments - kw = self.kwargs.copy() - kw.update(kwargs) - logger.debug(url) - session = await self.set_session() - async with session.get(self.encode_url(url), **self.kwargs) as r: - self._raise_not_found_for_status(r, url) - text = await r.text() - if self.simple_links: - links = ex2.findall(text) + [u[2] for u in ex.findall(text)] - else: - links = [u[2] for u in ex.findall(text)] - out = set() - parts = urlparse(url) - for l in links: - if isinstance(l, tuple): - l = l[1] - if l.startswith("/") and len(l) > 1: - # absolute URL on this server - l = parts.scheme + "://" + parts.netloc + l - if l.startswith("http"): - if self.same_schema and l.startswith(url.rstrip("/") + "/"): - out.add(l) - elif l.replace("https", "http").startswith( - url.replace("https", "http").rstrip("/") + "/" - ): - # allowed to cross http <-> https - out.add(l) - else: - if l not in ["..", "../"]: - # Ignore FTP-like "parent" - out.add("/".join([url.rstrip("/"), l.lstrip("/")])) - if not out and url.endswith("/"): - out = await self._ls_real(url.rstrip("/"), detail=False) - if detail: - return [ - { - "name": u, - "size": None, - "type": "directory" if u.endswith("/") else "file", - } - for u in out - ] - else: - return sorted(out) - - async def _ls(self, url, detail=True, **kwargs): - if self.use_listings_cache and url in self.dircache: - out = self.dircache[url] - else: - out = await self._ls_real(url, detail=detail, **kwargs) - self.dircache[url] = out - return out - - ls = sync_wrapper(_ls) - - def _raise_not_found_for_status(self, response, url): - """ - Raises FileNotFoundError for 404s, otherwise uses raise_for_status. 
- """ - if response.status == 404: - raise FileNotFoundError(url) - response.raise_for_status() - - async def _cat_file(self, url, start=None, end=None, **kwargs): - kw = self.kwargs.copy() - kw.update(kwargs) - logger.debug(url) - - if start is not None or end is not None: - if start == end: - return b"" - headers = kw.pop("headers", {}).copy() - - headers["Range"] = await self._process_limits(url, start, end) - kw["headers"] = headers - session = await self.set_session() - async with session.get(self.encode_url(url), **kw) as r: - out = await r.read() - self._raise_not_found_for_status(r, url) - return out - - async def _get_file( - self, rpath, lpath, chunk_size=5 * 2**20, callback=_DEFAULT_CALLBACK, **kwargs - ): - kw = self.kwargs.copy() - kw.update(kwargs) - logger.debug(rpath) - session = await self.set_session() - async with session.get(self.encode_url(rpath), **kw) as r: - try: - size = int(r.headers["content-length"]) - except (ValueError, KeyError): - size = None - - callback.set_size(size) - self._raise_not_found_for_status(r, rpath) - if isfilelike(lpath): - outfile = lpath - else: - outfile = open(lpath, "wb") - - try: - chunk = True - while chunk: - chunk = await r.content.read(chunk_size) - outfile.write(chunk) - callback.relative_update(len(chunk)) - finally: - if not isfilelike(lpath): - outfile.close() - - async def _put_file( - self, - lpath, - rpath, - chunk_size=5 * 2**20, - callback=_DEFAULT_CALLBACK, - method="post", - **kwargs, - ): - async def gen_chunks(): - # Support passing arbitrary file-like objects - # and use them instead of streams. - if isinstance(lpath, io.IOBase): - context = nullcontext(lpath) - use_seek = False # might not support seeking - else: - context = open(lpath, "rb") - use_seek = True - - with context as f: - if use_seek: - callback.set_size(f.seek(0, 2)) - f.seek(0) - else: - callback.set_size(getattr(f, "size", None)) - - chunk = f.read(chunk_size) - while chunk: - yield chunk - callback.relative_update(len(chunk)) - chunk = f.read(chunk_size) - - kw = self.kwargs.copy() - kw.update(kwargs) - session = await self.set_session() - - method = method.lower() - if method not in ("post", "put"): - raise ValueError( - f"method has to be either 'post' or 'put', not: {method!r}" - ) - - meth = getattr(session, method) - async with meth(rpath, data=gen_chunks(), **kw) as resp: - self._raise_not_found_for_status(resp, rpath) - - async def _exists(self, path, **kwargs): - kw = self.kwargs.copy() - kw.update(kwargs) - try: - logger.debug(path) - session = await self.set_session() - r = await session.get(self.encode_url(path), **kw) - async with r: - return r.status < 400 - except (requests.HTTPError, aiohttp.ClientError): - return False - - async def _isfile(self, path, **kwargs): - return await self._exists(path, **kwargs) - - def _open( - self, - path, - mode="rb", - block_size=None, - autocommit=None, # XXX: This differs from the base class. - cache_type=None, - cache_options=None, - size=None, - **kwargs, - ): - """Make a file-like object - - Parameters - ---------- - path: str - Full URL with protocol - mode: string - must be "rb" - block_size: int or None - Bytes to download in one request; use instance value if None. If - zero, will return a streaming Requests file-like instance. 
- kwargs: key-value - Any other parameters, passed to requests calls - """ - if mode != "rb": - raise NotImplementedError - block_size = block_size if block_size is not None else self.block_size - kw = self.kwargs.copy() - kw["asynchronous"] = self.asynchronous - kw.update(kwargs) - size = size or self.info(path, **kwargs)["size"] - session = sync(self.loop, self.set_session) - if block_size and size: - return HTTPFile( - self, - path, - session=session, - block_size=block_size, - mode=mode, - size=size, - cache_type=cache_type or self.cache_type, - cache_options=cache_options or self.cache_options, - loop=self.loop, - **kw, - ) - else: - return HTTPStreamFile( - self, - path, - mode=mode, - loop=self.loop, - session=session, - **kw, - ) - - async def open_async(self, path, mode="rb", size=None, **kwargs): - session = await self.set_session() - if size is None: - try: - size = (await self._info(path, **kwargs))["size"] - except FileNotFoundError: - pass - return AsyncStreamFile( - self, - path, - loop=self.loop, - session=session, - size=size, - **kwargs, - ) - - def ukey(self, url): - """Unique identifier; assume HTTP files are static, unchanging""" - return tokenize(url, self.kwargs, self.protocol) - - async def _info(self, url, **kwargs): - """Get info of URL - - Tries to access location via HEAD, and then GET methods, but does - not fetch the data. - - It is possible that the server does not supply any size information, in - which case size will be given as None (and certain operations on the - corresponding file will not work). - """ - info = {} - session = await self.set_session() - - for policy in ["head", "get"]: - try: - info.update( - await _file_info( - self.encode_url(url), - size_policy=policy, - session=session, - **self.kwargs, - **kwargs, - ) - ) - if info.get("size") is not None: - break - except Exception as exc: - if policy == "get": - # If get failed, then raise a FileNotFoundError - raise FileNotFoundError(url) from exc - logger.debug(str(exc)) - - return {"name": url, "size": None, **info, "type": "file"} - - async def _glob(self, path, maxdepth=None, **kwargs): - """ - Find files by glob-matching. - - This implementation is idntical to the one in AbstractFileSystem, - but "?" is not considered as a character for globbing, because it is - so common in URLs, often identifying the "query" part. 
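-
-        Illustrative call (the URL and pattern are assumptions):
-
-            fs = HTTPFileSystem()
-            fs.glob("https://example.com/data/*.csv")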
- """ - if maxdepth is not None and maxdepth < 1: - raise ValueError("maxdepth must be at least 1") - import re - - ends = path.endswith("/") - path = self._strip_protocol(path) - idx_star = path.find("*") if path.find("*") >= 0 else len(path) - idx_brace = path.find("[") if path.find("[") >= 0 else len(path) - - min_idx = min(idx_star, idx_brace) - - detail = kwargs.pop("detail", False) - - if not has_magic(path): - if await self._exists(path): - if not detail: - return [path] - else: - return {path: await self._info(path)} - else: - if not detail: - return [] # glob of non-existent returns empty - else: - return {} - elif "/" in path[:min_idx]: - min_idx = path[:min_idx].rindex("/") - root = path[: min_idx + 1] - depth = path[min_idx + 1 :].count("/") + 1 - else: - root = "" - depth = path[min_idx + 1 :].count("/") + 1 - - if "**" in path: - if maxdepth is not None: - idx_double_stars = path.find("**") - depth_double_stars = path[idx_double_stars:].count("/") + 1 - depth = depth - depth_double_stars + maxdepth - else: - depth = None - - allpaths = await self._find( - root, maxdepth=depth, withdirs=True, detail=True, **kwargs - ) - # Escape characters special to python regex, leaving our supported - # special characters in place. - # See https://www.gnu.org/software/bash/manual/html_node/Pattern-Matching.html - # for shell globbing details. - pattern = ( - "^" - + ( - path.replace("\\", r"\\") - .replace(".", r"\.") - .replace("+", r"\+") - .replace("//", "/") - .replace("(", r"\(") - .replace(")", r"\)") - .replace("|", r"\|") - .replace("^", r"\^") - .replace("$", r"\$") - .replace("{", r"\{") - .replace("}", r"\}") - .rstrip("/") - ) - + "$" - ) - pattern = re.sub("/[*]{2}", "=SLASH_DOUBLE_STARS=", pattern) - pattern = re.sub("[*]{2}/?", "=DOUBLE_STARS=", pattern) - pattern = re.sub("[*]", "[^/]*", pattern) - pattern = re.sub("=SLASH_DOUBLE_STARS=", "(|/.*)", pattern) - pattern = re.sub("=DOUBLE_STARS=", ".*", pattern) - pattern = re.compile(pattern) - out = { - p: allpaths[p] - for p in sorted(allpaths) - if pattern.match(p.replace("//", "/").rstrip("/")) - } - - # Return directories only when the glob end by a slash - # This is needed for posix glob compliance - if ends: - out = {k: v for k, v in out.items() if v["type"] == "directory"} - - if detail: - return out - else: - return list(out) - - async def _isdir(self, path): - # override, since all URLs are (also) files - try: - return bool(await self._ls(path)) - except (FileNotFoundError, ValueError): - return False - - -class HTTPFile(AbstractBufferedFile): - """ - A file-like object pointing to a remove HTTP(S) resource - - Supports only reading, with read-ahead of a predermined block-size. - - In the case that the server does not supply the filesize, only reading of - the complete file in one go is supported. - - Parameters - ---------- - url: str - Full URL of the remote resource, including the protocol - session: requests.Session or None - All calls will be made within this session, to avoid restarting - connections where the server allows this - block_size: int or None - The amount of read-ahead to do, in bytes. Default is 5MB, or the value - configured for the FileSystem creating this file - size: None or int - If given, this is the size of the file in bytes, and we don't attempt - to call the server to find the value. - kwargs: all other key-values are passed to requests calls. 
- """ - - def __init__( - self, - fs, - url, - session=None, - block_size=None, - mode="rb", - cache_type="bytes", - cache_options=None, - size=None, - loop=None, - asynchronous=False, - **kwargs, - ): - if mode != "rb": - raise NotImplementedError("File mode not supported") - self.asynchronous = asynchronous - self.url = url - self.session = session - self.details = {"name": url, "size": size, "type": "file"} - super().__init__( - fs=fs, - path=url, - mode=mode, - block_size=block_size, - cache_type=cache_type, - cache_options=cache_options, - **kwargs, - ) - self.loop = loop - - def read(self, length=-1): - """Read bytes from file - - Parameters - ---------- - length: int - Read up to this many bytes. If negative, read all content to end of - file. If the server has not supplied the filesize, attempting to - read only part of the data will raise a ValueError. - """ - if ( - (length < 0 and self.loc == 0) # explicit read all - # but not when the size is known and fits into a block anyways - and not (self.size is not None and self.size <= self.blocksize) - ): - self._fetch_all() - if self.size is None: - if length < 0: - self._fetch_all() - else: - length = min(self.size - self.loc, length) - return super().read(length) - - async def async_fetch_all(self): - """Read whole file in one shot, without caching - - This is only called when position is still at zero, - and read() is called without a byte-count. - """ - logger.debug(f"Fetch all for {self}") - if not isinstance(self.cache, AllBytes): - r = await self.session.get(self.fs.encode_url(self.url), **self.kwargs) - async with r: - r.raise_for_status() - out = await r.read() - self.cache = AllBytes( - size=len(out), fetcher=None, blocksize=None, data=out - ) - self.size = len(out) - - _fetch_all = sync_wrapper(async_fetch_all) - - def _parse_content_range(self, headers): - """Parse the Content-Range header""" - s = headers.get("Content-Range", "") - m = re.match(r"bytes (\d+-\d+|\*)/(\d+|\*)", s) - if not m: - return None, None, None - - if m[1] == "*": - start = end = None - else: - start, end = [int(x) for x in m[1].split("-")] - total = None if m[2] == "*" else int(m[2]) - return start, end, total - - async def async_fetch_range(self, start, end): - """Download a block of data - - The expectation is that the server returns only the requested bytes, - with HTTP code 206. If this is not the case, we first check the headers, - and then stream the output - if the data size is bigger than we - requested, an exception is raised. - """ - logger.debug(f"Fetch range for {self}: {start}-{end}") - kwargs = self.kwargs.copy() - headers = kwargs.pop("headers", {}).copy() - headers["Range"] = "bytes=%i-%i" % (start, end - 1) - logger.debug(str(self.url) + " : " + headers["Range"]) - r = await self.session.get( - self.fs.encode_url(self.url), headers=headers, **kwargs - ) - async with r: - if r.status == 416: - # range request outside file - return b"" - r.raise_for_status() - - # If the server has handled the range request, it should reply - # with status 206 (partial content). But we'll guess that a suitable - # Content-Range header or a Content-Length no more than the - # requested range also mean we have got the desired range. 
- response_is_range = ( - r.status == 206 - or self._parse_content_range(r.headers)[0] == start - or int(r.headers.get("Content-Length", end + 1)) <= end - start - ) - - if response_is_range: - # partial content, as expected - out = await r.read() - elif start > 0: - raise ValueError( - "The HTTP server doesn't appear to support range requests. " - "Only reading this file from the beginning is supported. " - "Open with block_size=0 for a streaming file interface." - ) - else: - # Response is not a range, but we want the start of the file, - # so we can read the required amount anyway. - cl = 0 - out = [] - while True: - chunk = await r.content.read(2**20) - # data size unknown, let's read until we have enough - if chunk: - out.append(chunk) - cl += len(chunk) - if cl > end - start: - break - else: - break - out = b"".join(out)[: end - start] - return out - - _fetch_range = sync_wrapper(async_fetch_range) - - def __reduce__(self): - return ( - reopen, - ( - self.fs, - self.url, - self.mode, - self.blocksize, - self.cache.name if self.cache else "none", - self.size, - ), - ) - - -def reopen(fs, url, mode, blocksize, cache_type, size=None): - return fs.open( - url, mode=mode, block_size=blocksize, cache_type=cache_type, size=size - ) - - -magic_check = re.compile("([*[])") - - -def has_magic(s): - match = magic_check.search(s) - return match is not None - - -class HTTPStreamFile(AbstractBufferedFile): - def __init__(self, fs, url, mode="rb", loop=None, session=None, **kwargs): - self.asynchronous = kwargs.pop("asynchronous", False) - self.url = url - self.loop = loop - self.session = session - if mode != "rb": - raise ValueError - self.details = {"name": url, "size": None} - super().__init__(fs=fs, path=url, mode=mode, cache_type="none", **kwargs) - - async def cor(): - r = await self.session.get(self.fs.encode_url(url), **kwargs).__aenter__() - self.fs._raise_not_found_for_status(r, url) - return r - - self.r = sync(self.loop, cor) - - def seek(self, loc, whence=0): - if loc == 0 and whence == 1: - return - if loc == self.loc and whence == 0: - return - raise ValueError("Cannot seek streaming HTTP file") - - async def _read(self, num=-1): - out = await self.r.content.read(num) - self.loc += len(out) - return out - - read = sync_wrapper(_read) - - async def _close(self): - self.r.close() - - def close(self): - asyncio.run_coroutine_threadsafe(self._close(), self.loop) - super().close() - - def __reduce__(self): - return reopen, (self.fs, self.url, self.mode, self.blocksize, self.cache.name) - - -class AsyncStreamFile(AbstractAsyncStreamedFile): - def __init__( - self, fs, url, mode="rb", loop=None, session=None, size=None, **kwargs - ): - self.url = url - self.session = session - self.r = None - if mode != "rb": - raise ValueError - self.details = {"name": url, "size": None} - self.kwargs = kwargs - super().__init__(fs=fs, path=url, mode=mode, cache_type="none") - self.size = size - - async def read(self, num=-1): - if self.r is None: - r = await self.session.get( - self.fs.encode_url(self.url), **self.kwargs - ).__aenter__() - self.fs._raise_not_found_for_status(r, self.url) - self.r = r - out = await self.r.content.read(num) - self.loc += len(out) - return out - - async def close(self): - if self.r is not None: - self.r.close() - self.r = None - await super().close() - - -async def get_range(session, url, start, end, file=None, **kwargs): - # explicit get a range when we know it must be safe - kwargs = kwargs.copy() - headers = kwargs.pop("headers", {}).copy() - headers["Range"] = 
"bytes=%i-%i" % (start, end - 1) - r = await session.get(url, headers=headers, **kwargs) - r.raise_for_status() - async with r: - out = await r.read() - if file: - with open(file, "rb+") as f: - f.seek(start) - f.write(out) - else: - return out - - -async def _file_info(url, session, size_policy="head", **kwargs): - """Call HEAD on the server to get details about the file (size/checksum etc.) - - Default operation is to explicitly allow redirects and use encoding - 'identity' (no compression) to get the true size of the target. - """ - logger.debug("Retrieve file size for %s" % url) - kwargs = kwargs.copy() - ar = kwargs.pop("allow_redirects", True) - head = kwargs.get("headers", {}).copy() - head["Accept-Encoding"] = "identity" - kwargs["headers"] = head - - info = {} - if size_policy == "head": - r = await session.head(url, allow_redirects=ar, **kwargs) - elif size_policy == "get": - r = await session.get(url, allow_redirects=ar, **kwargs) - else: - raise TypeError('size_policy must be "head" or "get", got %s' "" % size_policy) - async with r: - r.raise_for_status() - - # TODO: - # recognise lack of 'Accept-Ranges', - # or 'Accept-Ranges': 'none' (not 'bytes') - # to mean streaming only, no random access => return None - if "Content-Length" in r.headers: - # Some servers may choose to ignore Accept-Encoding and return - # compressed content, in which case the returned size is unreliable. - if r.headers.get("Content-Encoding", "identity") == "identity": - info["size"] = int(r.headers["Content-Length"]) - elif "Content-Range" in r.headers: - info["size"] = int(r.headers["Content-Range"].split("/")[1]) - - for checksum_field in ["ETag", "Content-MD5", "Digest"]: - if r.headers.get(checksum_field): - info[checksum_field] = r.headers[checksum_field] - - return info - - -async def _file_size(url, session=None, *args, **kwargs): - if session is None: - session = await get_client() - info = await _file_info(url, session=session, *args, **kwargs) - return info.get("size") - - -file_size = sync_wrapper(_file_size) diff --git a/spaces/juntsu/Text_generator1/app.py b/spaces/juntsu/Text_generator1/app.py deleted file mode 100644 index 8bd463c509f97bbcecc26418e6928556ea5235ef..0000000000000000000000000000000000000000 --- a/spaces/juntsu/Text_generator1/app.py +++ /dev/null @@ -1,10 +0,0 @@ -import gradio as gr -from gradio.mix import Parallel - -myfirstvariable='My First Text Generation' -mylovelysecondvariable='Input text and submit.' 
- -model1=gr.Interface.load('huggingface/gpt2') -model3=gr.Interface.load('huggingface/EleutherAI/gpt-neo-1.3B') - -gr.Parallel(model1, model3, title=myfirstvariable, description=mylovelysecondvariable).launch() \ No newline at end of file diff --git a/spaces/kangvcar/RealChar/client/web/src/App.css b/spaces/kangvcar/RealChar/client/web/src/App.css deleted file mode 100644 index 86d2be1e9e7438aa586c50358b88cd6e94606533..0000000000000000000000000000000000000000 --- a/spaces/kangvcar/RealChar/client/web/src/App.css +++ /dev/null @@ -1,56 +0,0 @@ -.app { - min-height: 100vh; - display: flex; - flex-direction: column; - align-items: center; -} - -#desktop-content { - flex-grow: 1; - display: flex; - flex-direction: column; - align-items: center; -} - -.main-screen { - display: flex; - flex-direction: column; - align-items: center; - justify-content: space-between; - flex-grow: 1; - width: 100%; - height: 40vh; - padding: 20px; -} - -.header { - color: #cccccc; - font-family: "Prompt", Helvetica; - font-size: 20px; - font-weight: 200; -} - -.recording { - color: firebrick; - padding-left: 1.2em; -} - -.recording::before { - content: '🔴'; - margin-right: 3px; - animation: recording 600ms alternate infinite; -} -@keyframes recording { - from { opacity: 1; } - to { opacity: 0.2; } -} - -.actions { - display: flex; - justify-content: center; - gap: 30px; -} - -.text-white { - color: white; -} \ No newline at end of file diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/src/audio2exp_models/audio2exp.py b/spaces/kevinwang676/ChatGLM2-SadTalker/src/audio2exp_models/audio2exp.py deleted file mode 100644 index 9e79a929560592687a505e13188796e2b0ca8772..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker/src/audio2exp_models/audio2exp.py +++ /dev/null @@ -1,41 +0,0 @@ -from tqdm import tqdm -import torch -from torch import nn - - -class Audio2Exp(nn.Module): - def __init__(self, netG, cfg, device, prepare_training_loss=False): - super(Audio2Exp, self).__init__() - self.cfg = cfg - self.device = device - self.netG = netG.to(device) - - def test(self, batch): - - mel_input = batch['indiv_mels'] # bs T 1 80 16 - bs = mel_input.shape[0] - T = mel_input.shape[1] - - exp_coeff_pred = [] - - for i in tqdm(range(0, T, 10),'audio2exp:'): # every 10 frames - - current_mel_input = mel_input[:,i:i+10] - - #ref = batch['ref'][:, :, :64].repeat((1,current_mel_input.shape[1],1)) #bs T 64 - ref = batch['ref'][:, :, :64][:, i:i+10] - ratio = batch['ratio_gt'][:, i:i+10] #bs T - - audiox = current_mel_input.view(-1, 1, 80, 16) # bs*T 1 80 16 - - curr_exp_coeff_pred = self.netG(audiox, ref, ratio) # bs T 64 - - exp_coeff_pred += [curr_exp_coeff_pred] - - # BS x T x 64 - results_dict = { - 'exp_coeff_pred': torch.cat(exp_coeff_pred, axis=1) - } - return results_dict - - diff --git a/spaces/kevinwang676/VoiceChangers/infer_pack/commons.py b/spaces/kevinwang676/VoiceChangers/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VoiceChangers/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): 
- l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, 
t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/kidcoconut/spcdkr_omdenasaudi_liverhccxai/scripts/models/util_splitModel.sh b/spaces/kidcoconut/spcdkr_omdenasaudi_liverhccxai/scripts/models/util_splitModel.sh deleted file mode 100644 index 1d7f9dc8065629196aeb24ab97a161597fc2bc41..0000000000000000000000000000000000000000 --- a/spaces/kidcoconut/spcdkr_omdenasaudi_liverhccxai/scripts/models/util_splitModel.sh +++ /dev/null @@ -1,44 +0,0 @@ -#!/bin/bash - -< - - the first arg has to be wrapped in single quotes to ensure that bash does not expand wildcards -Prereqs: a pytorch model file -Todo: get the parent folder name and use this as the name for the model file -blkHeader - -#--- dependencies -#none - - -#--- initialization/configuration -#--- $1: first arg; the source model file; eg ./bin/models/model.pth -#--- $n: last arg; dest model path; eg. ./test_model_folder -strPth_mdlFile=$1 -strPth_mdlFolder=$2 -strPrefix='/model_' - -if [ -z "$strPth_mdlFile" ] || [ -z "$strPth_mdlFolder" ]; then - echo "WARN: no args provided. Exiting script." - exit -fi - -strpth_pwd=$(pwd) -strpth_scriptLoc=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd ) -strpth_scrHome="${strpth_scriptLoc}/../" -#strpth_ignHome="${strpth_scrHome}/../" -strpth_appHome="${strpth_scrHome}/../" - -#echo "TRACE: strPth_mdlFile= $strPth_mdlFile" -echo "TRACE: strPth_mdlFolder= $strPth_mdlFolder" - -#--- ensure the target dir exists -mkdir -p $strPth_mdlFolder - -#--- split the model into smaller chunks -echo "split -b 10M $strPth_mdlFile $strPth_mdlFolder$strPrefix" -split -b 10M $strPth_mdlFile $strPth_mdlFolder$strPrefix - -echo -e "INFO:\t Done ...\n" \ No newline at end of file diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/context_block.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/context_block.py deleted file mode 100644 index d60fdb904c749ce3b251510dff3cc63cea70d42e..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/context_block.py +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn - -from ..utils import constant_init, kaiming_init -from .registry import PLUGIN_LAYERS - - -def last_zero_init(m): - if isinstance(m, nn.Sequential): - constant_init(m[-1], val=0) - else: - constant_init(m, val=0) - - -@PLUGIN_LAYERS.register_module() -class ContextBlock(nn.Module): - """ContextBlock module in GCNet. 
- - See 'GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond' - (https://arxiv.org/abs/1904.11492) for details. - - Args: - in_channels (int): Channels of the input feature map. - ratio (float): Ratio of channels of transform bottleneck - pooling_type (str): Pooling method for context modeling. - Options are 'att' and 'avg', stand for attention pooling and - average pooling respectively. Default: 'att'. - fusion_types (Sequence[str]): Fusion method for feature fusion, - Options are 'channels_add', 'channel_mul', stand for channelwise - addition and multiplication respectively. Default: ('channel_add',) - """ - - _abbr_ = 'context_block' - - def __init__(self, - in_channels, - ratio, - pooling_type='att', - fusion_types=('channel_add', )): - super(ContextBlock, self).__init__() - assert pooling_type in ['avg', 'att'] - assert isinstance(fusion_types, (list, tuple)) - valid_fusion_types = ['channel_add', 'channel_mul'] - assert all([f in valid_fusion_types for f in fusion_types]) - assert len(fusion_types) > 0, 'at least one fusion should be used' - self.in_channels = in_channels - self.ratio = ratio - self.planes = int(in_channels * ratio) - self.pooling_type = pooling_type - self.fusion_types = fusion_types - if pooling_type == 'att': - self.conv_mask = nn.Conv2d(in_channels, 1, kernel_size=1) - self.softmax = nn.Softmax(dim=2) - else: - self.avg_pool = nn.AdaptiveAvgPool2d(1) - if 'channel_add' in fusion_types: - self.channel_add_conv = nn.Sequential( - nn.Conv2d(self.in_channels, self.planes, kernel_size=1), - nn.LayerNorm([self.planes, 1, 1]), - nn.ReLU(inplace=True), # yapf: disable - nn.Conv2d(self.planes, self.in_channels, kernel_size=1)) - else: - self.channel_add_conv = None - if 'channel_mul' in fusion_types: - self.channel_mul_conv = nn.Sequential( - nn.Conv2d(self.in_channels, self.planes, kernel_size=1), - nn.LayerNorm([self.planes, 1, 1]), - nn.ReLU(inplace=True), # yapf: disable - nn.Conv2d(self.planes, self.in_channels, kernel_size=1)) - else: - self.channel_mul_conv = None - self.reset_parameters() - - def reset_parameters(self): - if self.pooling_type == 'att': - kaiming_init(self.conv_mask, mode='fan_in') - self.conv_mask.inited = True - - if self.channel_add_conv is not None: - last_zero_init(self.channel_add_conv) - if self.channel_mul_conv is not None: - last_zero_init(self.channel_mul_conv) - - def spatial_pool(self, x): - batch, channel, height, width = x.size() - if self.pooling_type == 'att': - input_x = x - # [N, C, H * W] - input_x = input_x.view(batch, channel, height * width) - # [N, 1, C, H * W] - input_x = input_x.unsqueeze(1) - # [N, 1, H, W] - context_mask = self.conv_mask(x) - # [N, 1, H * W] - context_mask = context_mask.view(batch, 1, height * width) - # [N, 1, H * W] - context_mask = self.softmax(context_mask) - # [N, 1, H * W, 1] - context_mask = context_mask.unsqueeze(-1) - # [N, 1, C, 1] - context = torch.matmul(input_x, context_mask) - # [N, C, 1, 1] - context = context.view(batch, channel, 1, 1) - else: - # [N, C, 1, 1] - context = self.avg_pool(x) - - return context - - def forward(self, x): - # [N, C, 1, 1] - context = self.spatial_pool(x) - - out = x - if self.channel_mul_conv is not None: - # [N, C, 1, 1] - channel_mul_term = torch.sigmoid(self.channel_mul_conv(context)) - out = out * channel_mul_term - if self.channel_add_conv is not None: - # [N, C, 1, 1] - channel_add_term = self.channel_add_conv(context) - out = out + channel_add_term - - return out diff --git 
a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/runner/hooks/sync_buffer.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/runner/hooks/sync_buffer.py deleted file mode 100644 index 6376b7ff894280cb2782243b25e8973650591577..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/runner/hooks/sync_buffer.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..dist_utils import allreduce_params -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class SyncBuffersHook(Hook): - """Synchronize model buffers such as running_mean and running_var in BN at - the end of each epoch. - - Args: - distributed (bool): Whether distributed training is used. It is - effective only for distributed training. Defaults to True. - """ - - def __init__(self, distributed=True): - self.distributed = distributed - - def after_epoch(self, runner): - """All-reduce model buffers at the end of each epoch.""" - if self.distributed: - allreduce_params(runner.model.buffers()) diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/utils/__init__.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/utils/__init__.py deleted file mode 100644 index ac489e2dbbc0e6fa87f5088b4edcc20f8cadc1a6..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/utils/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .collect_env import collect_env -from .logger import get_root_logger - -__all__ = ['get_root_logger', 'collect_env'] diff --git a/spaces/koajoel/PolyFormer/fairseq/.github/ISSUE_TEMPLATE.md b/spaces/koajoel/PolyFormer/fairseq/.github/ISSUE_TEMPLATE.md deleted file mode 100644 index 5c4c4493e4a8e5386b927e4f4554df925955d129..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/.github/ISSUE_TEMPLATE.md +++ /dev/null @@ -1,3 +0,0 @@ -## 👉 [Please follow one of these issue templates](https://github.com/pytorch/fairseq/issues/new/choose) 👈 - -Note: to keep the backlog clean and actionable, issues may be immediately closed if they do not follow one of the above issue templates. 
diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/linformer/linformer_src/models/__init__.py b/spaces/koajoel/PolyFormer/fairseq/examples/linformer/linformer_src/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/kquote03/lama-video-watermark-remover/saicinpainting/utils.py b/spaces/kquote03/lama-video-watermark-remover/saicinpainting/utils.py deleted file mode 100644 index d0914320eab96e197ae379b94ea7eeb2fe5dfd79..0000000000000000000000000000000000000000 --- a/spaces/kquote03/lama-video-watermark-remover/saicinpainting/utils.py +++ /dev/null @@ -1,174 +0,0 @@ -import bisect -import functools -import logging -import numbers -import os -import signal -import sys -import traceback -import warnings - -import torch -from pytorch_lightning import seed_everything - -LOGGER = logging.getLogger(__name__) - - -def check_and_warn_input_range(tensor, min_value, max_value, name): - actual_min = tensor.min() - actual_max = tensor.max() - if actual_min < min_value or actual_max > max_value: - warnings.warn(f"{name} must be in {min_value}..{max_value} range, but it ranges {actual_min}..{actual_max}") - - -def sum_dict_with_prefix(target, cur_dict, prefix, default=0): - for k, v in cur_dict.items(): - target_key = prefix + k - target[target_key] = target.get(target_key, default) + v - - -def average_dicts(dict_list): - result = {} - norm = 1e-3 - for dct in dict_list: - sum_dict_with_prefix(result, dct, '') - norm += 1 - for k in list(result): - result[k] /= norm - return result - - -def add_prefix_to_keys(dct, prefix): - return {prefix + k: v for k, v in dct.items()} - - -def set_requires_grad(module, value): - for param in module.parameters(): - param.requires_grad = value - - -def flatten_dict(dct): - result = {} - for k, v in dct.items(): - if isinstance(k, tuple): - k = '_'.join(k) - if isinstance(v, dict): - for sub_k, sub_v in flatten_dict(v).items(): - result[f'{k}_{sub_k}'] = sub_v - else: - result[k] = v - return result - - -class LinearRamp: - def __init__(self, start_value=0, end_value=1, start_iter=-1, end_iter=0): - self.start_value = start_value - self.end_value = end_value - self.start_iter = start_iter - self.end_iter = end_iter - - def __call__(self, i): - if i < self.start_iter: - return self.start_value - if i >= self.end_iter: - return self.end_value - part = (i - self.start_iter) / (self.end_iter - self.start_iter) - return self.start_value * (1 - part) + self.end_value * part - - -class LadderRamp: - def __init__(self, start_iters, values): - self.start_iters = start_iters - self.values = values - assert len(values) == len(start_iters) + 1, (len(values), len(start_iters)) - - def __call__(self, i): - segment_i = bisect.bisect_right(self.start_iters, i) - return self.values[segment_i] - - -def get_ramp(kind='ladder', **kwargs): - if kind == 'linear': - return LinearRamp(**kwargs) - if kind == 'ladder': - return LadderRamp(**kwargs) - raise ValueError(f'Unexpected ramp kind: {kind}') - - -def print_traceback_handler(sig, frame): - LOGGER.warning(f'Received signal {sig}') - bt = ''.join(traceback.format_stack()) - LOGGER.warning(f'Requested stack trace:\n{bt}') - - -def register_debug_signal_handlers(sig=signal.SIGUSR1, handler=print_traceback_handler): - LOGGER.warning(f'Setting signal {sig} handler {handler}') - signal.signal(sig, handler) - - -def handle_deterministic_config(config): - seed = dict(config).get('seed', None) - if seed is None: - return False - - 
seed_everything(seed) - return True - - -def get_shape(t): - if torch.is_tensor(t): - return tuple(t.shape) - elif isinstance(t, dict): - return {n: get_shape(q) for n, q in t.items()} - elif isinstance(t, (list, tuple)): - return [get_shape(q) for q in t] - elif isinstance(t, numbers.Number): - return type(t) - else: - raise ValueError('unexpected type {}'.format(type(t))) - - -def get_has_ddp_rank(): - master_port = os.environ.get('MASTER_PORT', None) - node_rank = os.environ.get('NODE_RANK', None) - local_rank = os.environ.get('LOCAL_RANK', None) - world_size = os.environ.get('WORLD_SIZE', None) - has_rank = master_port is not None or node_rank is not None or local_rank is not None or world_size is not None - return has_rank - - -def handle_ddp_subprocess(): - def main_decorator(main_func): - @functools.wraps(main_func) - def new_main(*args, **kwargs): - # Trainer sets MASTER_PORT, NODE_RANK, LOCAL_RANK, WORLD_SIZE - parent_cwd = os.environ.get('TRAINING_PARENT_WORK_DIR', None) - has_parent = parent_cwd is not None - has_rank = get_has_ddp_rank() - assert has_parent == has_rank, f'Inconsistent state: has_parent={has_parent}, has_rank={has_rank}' - - if has_parent: - # we are in the worker - sys.argv.extend([ - f'hydra.run.dir={parent_cwd}', - # 'hydra/hydra_logging=disabled', - # 'hydra/job_logging=disabled' - ]) - # do nothing if this is a top-level process - # TRAINING_PARENT_WORK_DIR is set in handle_ddp_parent_process after hydra initialization - - main_func(*args, **kwargs) - return new_main - return main_decorator - - -def handle_ddp_parent_process(): - parent_cwd = os.environ.get('TRAINING_PARENT_WORK_DIR', None) - has_parent = parent_cwd is not None - has_rank = get_has_ddp_rank() - assert has_parent == has_rank, f'Inconsistent state: has_parent={has_parent}, has_rank={has_rank}' - - if parent_cwd is None: - os.environ['TRAINING_PARENT_WORK_DIR'] = os.getcwd() - - return has_parent diff --git a/spaces/kukuhtw/AutoGPT/autogpt/llm_utils.py b/spaces/kukuhtw/AutoGPT/autogpt/llm_utils.py deleted file mode 100644 index 821820ffab07be2753cf385ff1de77820e4206ee..0000000000000000000000000000000000000000 --- a/spaces/kukuhtw/AutoGPT/autogpt/llm_utils.py +++ /dev/null @@ -1,172 +0,0 @@ -from __future__ import annotations - -import time -from ast import List - -import openai -from colorama import Fore, Style -from openai.error import APIError, RateLimitError - -from autogpt.config import Config -from autogpt.logs import logger - -CFG = Config() - -openai.api_key = CFG.openai_api_key - - -def call_ai_function( - function: str, args: list, description: str, model: str | None = None -) -> str: - """Call an AI function - - This is a magic function that can do anything with no-code. See - https://github.com/Torantulino/AI-Functions for more info. - - Args: - function (str): The function to call - args (list): The arguments to pass to the function - description (str): The description of the function - model (str, optional): The model to use. Defaults to None. 
- - Returns: - str: The response from the function - """ - if model is None: - model = CFG.smart_llm_model - # For each arg, if any are None, convert to "None": - args = [str(arg) if arg is not None else "None" for arg in args] - # parse args to comma separated string - args = ", ".join(args) - messages = [ - { - "role": "system", - "content": f"You are now the following python function: ```# {description}" - f"\n{function}```\n\nOnly respond with your `return` value.", - }, - {"role": "user", "content": args}, - ] - - return create_chat_completion(model=model, messages=messages, temperature=0) - - -# Overly simple abstraction until we create something better -# simple retry mechanism when getting a rate error or a bad gateway -def create_chat_completion( - messages: list, # type: ignore - model: str | None = None, - temperature: float = CFG.temperature, - max_tokens: int | None = None, -) -> str: - """Create a chat completion using the OpenAI API - - Args: - messages (list[dict[str, str]]): The messages to send to the chat completion - model (str, optional): The model to use. Defaults to None. - temperature (float, optional): The temperature to use. Defaults to 0.9. - max_tokens (int, optional): The max tokens to use. Defaults to None. - - Returns: - str: The response from the chat completion - """ - response = None - num_retries = 10 - warned_user = False - if CFG.debug_mode: - print( - Fore.GREEN - + f"Creating chat completion with model {model}, temperature {temperature}," - f" max_tokens {max_tokens}" + Fore.RESET - ) - for attempt in range(num_retries): - backoff = 2 ** (attempt + 2) - try: - if CFG.use_azure: - response = openai.ChatCompletion.create( - deployment_id=CFG.get_azure_deployment_id_for_model(model), - model=model, - messages=messages, - temperature=temperature, - max_tokens=max_tokens, - ) - else: - response = openai.ChatCompletion.create( - model=model, - messages=messages, - temperature=temperature, - max_tokens=max_tokens, - ) - break - except RateLimitError: - if CFG.debug_mode: - print( - Fore.RED + "Error: ", - f"Reached rate limit, passing..." + Fore.RESET, - ) - if not warned_user: - logger.double_check( - f"Please double check that you have setup a {Fore.CYAN + Style.BRIGHT}PAID{Style.RESET_ALL} OpenAI API Account. " - + f"You can read more here: {Fore.CYAN}https://github.com/Significant-Gravitas/Auto-GPT#openai-api-keys-configuration{Fore.RESET}" - ) - warned_user = True - except APIError as e: - if e.http_status == 502: - pass - else: - raise - if attempt == num_retries - 1: - raise - if CFG.debug_mode: - print( - Fore.RED + "Error: ", - f"API Bad gateway. Waiting {backoff} seconds..." + Fore.RESET, - ) - time.sleep(backoff) - if response is None: - logger.typewriter_log( - "FAILED TO GET RESPONSE FROM OPENAI", - Fore.RED, - "Auto-GPT has failed to get a response from OpenAI's services. 
" - + f"Try running Auto-GPT again, and if the problem the persists try running it with `{Fore.CYAN}--debug{Fore.RESET}`.", - ) - logger.double_check() - if CFG.debug_mode: - raise RuntimeError(f"Failed to get response after {num_retries} retries") - else: - quit(1) - - return response.choices[0].message["content"] - - -def create_embedding_with_ada(text) -> list: - """Create an embedding with text-ada-002 using the OpenAI SDK""" - num_retries = 10 - for attempt in range(num_retries): - backoff = 2 ** (attempt + 2) - try: - if CFG.use_azure: - return openai.Embedding.create( - input=[text], - engine=CFG.get_azure_deployment_id_for_model( - "text-embedding-ada-002" - ), - )["data"][0]["embedding"] - else: - return openai.Embedding.create( - input=[text], model="text-embedding-ada-002" - )["data"][0]["embedding"] - except RateLimitError: - pass - except APIError as e: - if e.http_status == 502: - pass - else: - raise - if attempt == num_retries - 1: - raise - if CFG.debug_mode: - print( - Fore.RED + "Error: ", - f"API Bad gateway. Waiting {backoff} seconds..." + Fore.RESET, - ) - time.sleep(backoff) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/BufrStubImagePlugin.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/BufrStubImagePlugin.py deleted file mode 100644 index 0425bbd750eacf884ca1fc0ba8aa893a71ccdfc6..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/BufrStubImagePlugin.py +++ /dev/null @@ -1,73 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# BUFR stub adapter -# -# Copyright (c) 1996-2003 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -from . import Image, ImageFile - -_handler = None - - -def register_handler(handler): - """ - Install application-specific BUFR image handler. - - :param handler: Handler object. 
- """ - global _handler - _handler = handler - - -# -------------------------------------------------------------------- -# Image adapter - - -def _accept(prefix): - return prefix[:4] == b"BUFR" or prefix[:4] == b"ZCZC" - - -class BufrStubImageFile(ImageFile.StubImageFile): - format = "BUFR" - format_description = "BUFR" - - def _open(self): - offset = self.fp.tell() - - if not _accept(self.fp.read(4)): - msg = "Not a BUFR file" - raise SyntaxError(msg) - - self.fp.seek(offset) - - # make something up - self.mode = "F" - self._size = 1, 1 - - loader = self._load() - if loader: - loader.open(self) - - def _load(self): - return _handler - - -def _save(im, fp, filename): - if _handler is None or not hasattr(_handler, "save"): - msg = "BUFR save handler not installed" - raise OSError(msg) - _handler.save(im, fp, filename) - - -# -------------------------------------------------------------------- -# Registry - -Image.register_open(BufrStubImageFile.format, BufrStubImageFile, _accept) -Image.register_save(BufrStubImageFile.format, _save) - -Image.register_extension(BufrStubImageFile.format, ".bufr") diff --git a/spaces/leoneat/comments_refiner/app.py b/spaces/leoneat/comments_refiner/app.py deleted file mode 100644 index e3446c18f59123c34628e9632927058ce2e9273c..0000000000000000000000000000000000000000 --- a/spaces/leoneat/comments_refiner/app.py +++ /dev/null @@ -1,115 +0,0 @@ -import streamlit as st -from transformers import BertTokenizer, BertForMaskedLM, BertForSequenceClassification -from copy import copy -from transformers import pipeline -import torch -import numpy as np - - -device = 'cpu' - -tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') -bert_mlm_positive = BertForMaskedLM.from_pretrained('./models/bert_mlm_positive', return_dict=True).to(device).train(False) -bert_mlm_negative = BertForMaskedLM.from_pretrained('./models/bert_mlm_negative', return_dict=True).to(device).train(False) -bert_classifier = BertForSequenceClassification.from_pretrained('./models/bert_classifier', return_dict=True).to(device).train(False) - - -bert_mlm_positive_pipeline = pipeline('fill-mask', top_k=10, model=bert_mlm_positive, tokenizer=tokenizer) -bert_mlm_negative_pipeline = pipeline('fill-mask', top_k=10, model=bert_mlm_negative, tokenizer=tokenizer) -MASK_ID = bert_mlm_positive_pipeline.tokenizer.mask_token_id - - -def get_replacements(sentence: str, num_tokens, k_best, epsilon=1e-3): - """ - - split the sentence into tokens using the INGSOC-approved BERT tokenizer - - find :num_tokens: tokens with the highest ratio (see above) - - replace them with :k_best: words according to bert_mlm_positive - :return: a list of all possible strings (up to k_best * num_tokens) - """ - tokens = tokenizer(sentence)['input_ids'] - candidates = [] - - masked_sentences = [] - masked_sentences_tokens = [] - - targets = [] - - for i in range(1, len(tokens) - 2): - tokens_tmp = copy(tokens) -# tokens_tmp2 = copy(tokens) - -# # Experimenting with padding to blur context a bit -# for j in range(max(0, i - 1), min(len(tokens), i + 2)): -# tokens_tmp[j] = bert_mlm_positive_pipeline.tokenizer.pad_token_id - - tokens_tmp[i] = MASK_ID -# tokens_tmp2[i] = MASK_ID - - sent = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(tokens_tmp)[1:-1]) - target = tokenizer.ids_to_tokens[tokens[i]] - masked_sentences.append(sent) - targets.append(target) - masked_sentences_tokens.append(tokens_tmp) - - - scores_p = bert_mlm_positive_pipeline(masked_sentences, targets=targets) - if isinstance(scores_p[0], dict): 
scores_p = [scores_p] - scores_n = bert_mlm_negative_pipeline(masked_sentences, targets=targets) - if isinstance(scores_n[0], dict): scores_n = [scores_n] - - cand_pred = bert_mlm_positive_pipeline(masked_sentences) - if isinstance(cand_pred[0], dict): cand_pred = [cand_pred] - candidates = [[ee['token'] for ee in e[:k_best]] for e in cand_pred] - - scores = [] - for score_p_list, score_n_list, target, masked_sentence_tokens in zip(scores_p, scores_n, targets, masked_sentences_tokens): - score_p = -1 - score_n = -1 - - for opt in score_p_list: -# print(tokenizer.ids_to_tokens[opt['token']], target) - if tokenizer.ids_to_tokens[opt['token']] == target: - score_p = opt['score'] - break - for opt in score_n_list: - if tokenizer.ids_to_tokens[opt['token']] == target: - score_n = opt['score'] - break - - scores.append((score_p + epsilon) - (score_n + epsilon)) - - to_replace = np.argsort(scores)[:num_tokens] + 1 - outputs = [] - for id_replacee in to_replace: -# print(tokens[id_replacee], tokenizer.ids_to_tokens[tokens[id_replacee]], '->', [tokenizer.ids_to_tokens[k] -# for k in candidates[id_replacee - 1]]) - for candidate in candidates[id_replacee - 1]: - tokens_tmp = copy(tokens) - tokens_tmp[id_replacee] = candidate - sent = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(tokens_tmp)[1:-1]) - outputs.append(sent) - - return outputs - - -def make_nice(sentence, best_m, depth=2): - if len(sentence) < 2: - return [sentence] - with torch.no_grad(): - candidates = [sentence] * best_m - for i in range(depth): - bunch = [] - for sent in candidates: - bunch += get_replacements(sent, num_tokens=2, k_best=2) - scores = bert_classifier(input_ids=torch.tensor(tokenizer(bunch, padding=True)['input_ids']).to(device)).logits - scores = scores[..., 1] - scores[..., 0] - scores = scores.cpu().detach().numpy() - candidates = np.array(bunch)[np.argsort(scores)[-best_m:]] - return candidates - - -st.set_page_config(page_title="Comments mood changer", layout="centered") -sent = st.text_input("Enter bad comment (it takes it's time to convert it - 1-2 minutes per sentence)", value='the staff is horrible !') -st.markdown(f"Processing: \n{sent}") -better_comment = make_nice(sent, 2)[-1] -st.markdown(f"Good comment: \n{better_comment}") diff --git a/spaces/lewisliuX123/wechatgpt3/bot/openai/open_ai_bot.py b/spaces/lewisliuX123/wechatgpt3/bot/openai/open_ai_bot.py deleted file mode 100644 index 79155a1aca9fdf1e975a34bc0816d602e90fd9c8..0000000000000000000000000000000000000000 --- a/spaces/lewisliuX123/wechatgpt3/bot/openai/open_ai_bot.py +++ /dev/null @@ -1,166 +0,0 @@ -# encoding:utf-8 - -from bot.bot import Bot -from config import conf -from common.log import logger -import openai -import time - -user_session = dict() - -# OpenAI对话模型API (可用) -class OpenAIBot(Bot): - def __init__(self): - openai.api_key = conf().get('open_ai_api_key') - - - def reply(self, query, context=None): - # acquire reply content - if not context or not context.get('type') or context.get('type') == 'TEXT': - logger.info("[OPEN_AI] query={}".format(query)) - from_user_id = context['from_user_id'] - if query == '#清除记忆': - Session.clear_session(from_user_id) - return '记忆已清除' - elif query == '#清除所有': - Session.clear_all_session() - return '所有人记忆已清除' - - new_query = Session.build_session_query(query, from_user_id) - logger.debug("[OPEN_AI] session query={}".format(new_query)) - - reply_content = self.reply_text(new_query, from_user_id, 0) - logger.debug("[OPEN_AI] new_query={}, user={}, reply_cont={}".format(new_query, 
from_user_id, reply_content)) - if reply_content and query: - Session.save_session(query, reply_content, from_user_id) - return reply_content - - elif context.get('type', None) == 'IMAGE_CREATE': - return self.create_img(query, 0) - - def reply_text(self, query, user_id, retry_count=0): - try: - response = openai.Completion.create( - model="text-davinci-003", # 对话模型的名称 - prompt=query, - temperature=1, # 值在[0,1]之间,越大表示回复越具有不确定性 - max_tokens=500, # 回复最大的字符数 - top_p=1, - frequency_penalty=0.0, # [-2,2]之间,该值越大则更倾向于产生不同的内容 - presence_penalty=0.0, # [-2,2]之间,该值越大则更倾向于产生不同的内容 - stop=["\n\n\n"] - ) - res_content = response.choices[0]['text'].strip().replace('<|endoftext|>', '') - logger.info("[OPEN_AI] reply={}".format(res_content)) - return res_content - except openai.error.RateLimitError as e: - # rate limit exception - logger.warn(e) - if retry_count < 1: - time.sleep(5) - logger.warn("[OPEN_AI] RateLimit exceed, 第{}次重试".format(retry_count+1)) - return self.reply_text(query, user_id, retry_count+1) - else: - return "提问太快啦,请休息一下再问我吧" - except Exception as e: - # unknown exception - logger.exception(e) - Session.clear_session(user_id) - return "请再问我一次吧" - - - def create_img(self, query, retry_count=0): - try: - logger.info("[OPEN_AI] image_query={}".format(query)) - response = openai.Image.create( - prompt=query, #图片描述 - n=1, #每次生成图片的数量 - size="1024x1024" #图片大小,可选有 256x256, 512x512, 1024x1024 - ) - image_url = response['data'][0]['url'] - logger.info("[OPEN_AI] image_url={}".format(image_url)) - return image_url - except openai.error.RateLimitError as e: - logger.warn(e) - if retry_count < 1: - time.sleep(5) - logger.warn("[OPEN_AI] ImgCreate RateLimit exceed, 第{}次重试".format(retry_count+1)) - return self.reply_text(query, retry_count+1) - else: - return "提问太快啦,请休息一下再问我吧" - except Exception as e: - logger.exception(e) - return None - - -class Session(object): - @staticmethod - def build_session_query(query, user_id): - ''' - build query with conversation history - e.g. 
Q: xxx - A: xxx - Q: xxx - :param query: query content - :param user_id: from user id - :return: query content with conversaction - ''' - prompt = conf().get("character_desc", "") - if prompt: - prompt += "<|endoftext|>\n\n\n" - session = user_session.get(user_id, None) - if session: - for conversation in session: - prompt += "Q: " + conversation["question"] + "\n\n\nA: " + conversation["answer"] + "<|endoftext|>\n" - prompt += "Q: " + query + "\nA: " - return prompt - else: - return prompt + "Q: " + query + "\nA: " - - @staticmethod - def save_session(query, answer, user_id): - max_tokens = conf().get("conversation_max_tokens") - if not max_tokens: - # default 3000 - max_tokens = 1000 - conversation = dict() - conversation["question"] = query - conversation["answer"] = answer - session = user_session.get(user_id) - logger.debug(conversation) - logger.debug(session) - if session: - # append conversation - session.append(conversation) - else: - # create session - queue = list() - queue.append(conversation) - user_session[user_id] = queue - - # discard exceed limit conversation - Session.discard_exceed_conversation(user_session[user_id], max_tokens) - - - @staticmethod - def discard_exceed_conversation(session, max_tokens): - count = 0 - count_list = list() - for i in range(len(session)-1, -1, -1): - # count tokens of conversation list - history_conv = session[i] - count += len(history_conv["question"]) + len(history_conv["answer"]) - count_list.append(count) - - for c in count_list: - if c > max_tokens: - # pop first conversation - session.pop(0) - - @staticmethod - def clear_session(user_id): - user_session[user_id] = [] - - @staticmethod - def clear_all_session(): - user_session.clear() \ No newline at end of file diff --git a/spaces/lewiswu1209/MockingBird/ppg2mel/train/loss.py b/spaces/lewiswu1209/MockingBird/ppg2mel/train/loss.py deleted file mode 100644 index 301248cc1ef24c549499e10396ae6c3afab3ba09..0000000000000000000000000000000000000000 --- a/spaces/lewiswu1209/MockingBird/ppg2mel/train/loss.py +++ /dev/null @@ -1,50 +0,0 @@ -from typing import Dict -from typing import Tuple - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..utils.nets_utils import make_pad_mask - - -class MaskedMSELoss(nn.Module): - def __init__(self, frames_per_step): - super().__init__() - self.frames_per_step = frames_per_step - self.mel_loss_criterion = nn.MSELoss(reduction='none') - # self.loss = nn.MSELoss() - self.stop_loss_criterion = nn.BCEWithLogitsLoss(reduction='none') - - def get_mask(self, lengths, max_len=None): - # lengths: [B,] - if max_len is None: - max_len = torch.max(lengths) - batch_size = lengths.size(0) - seq_range = torch.arange(0, max_len).long() - seq_range_expand = seq_range.unsqueeze(0).expand(batch_size, max_len).to(lengths.device) - seq_length_expand = lengths.unsqueeze(1).expand_as(seq_range_expand) - return (seq_range_expand < seq_length_expand).float() - - def forward(self, mel_pred, mel_pred_postnet, mel_trg, lengths, - stop_target, stop_pred): - ## process stop_target - B = stop_target.size(0) - stop_target = stop_target.reshape(B, -1, self.frames_per_step)[:, :, 0] - stop_lengths = torch.ceil(lengths.float() / self.frames_per_step).long() - stop_mask = self.get_mask(stop_lengths, int(mel_trg.size(1)/self.frames_per_step)) - - mel_trg.requires_grad = False - # (B, T, 1) - mel_mask = self.get_mask(lengths, mel_trg.size(1)).unsqueeze(-1) - # (B, T, D) - mel_mask = mel_mask.expand_as(mel_trg) - mel_loss_pre = (self.mel_loss_criterion(mel_pred, 
mel_trg) * mel_mask).sum() / mel_mask.sum() - mel_loss_post = (self.mel_loss_criterion(mel_pred_postnet, mel_trg) * mel_mask).sum() / mel_mask.sum() - - mel_loss = mel_loss_pre + mel_loss_post - - # stop token loss - stop_loss = torch.sum(self.stop_loss_criterion(stop_pred, stop_target) * stop_mask) / stop_mask.sum() - - return mel_loss, stop_loss diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Darkscandal Pack.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Darkscandal Pack.md deleted file mode 100644 index 8713764c6224cce81f607937d7594bed95d01129..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Darkscandal Pack.md +++ /dev/null @@ -1,6 +0,0 @@ - -

    The Darkscandal Pack is a new addition to our rums. Recently, we started selling a special release of this bottle of rum as a limited edition. It is a very special rum thanks to its hand-crafted signature label. We have carefully selected our ingredients to make this the smoothest rum: the finest ingredients, chosen with a passion for rum, and everything selected to meet our highest quality standards. We chose this rum not only for what it is but for what it does. It is a very different rum, not only in its ingredients but, more importantly, in the way it is made. It is handcrafted in Puerto Rico using all local ingredients. The rum was brought to Puerto Rico, and the rum maker was asked to work his magic. We took our rum to many different rum producers so that he could experiment a bit and come up with a unique, artisan rum. The rum maker was guided by a single objective: to create something unique and special without being cheesy. This rum will not be for everyone; it is not a cheap rum. But we think it is a rum worth putting at the top of your list as a special gift. The rum has a great flavor and made its debut at the RMM ratings conference. People asked for this, and we could not pass up the chance to release the rum. We have launched it as an 18-year single cask rum, aged on plenty of rum wood to release its magic and make its entrance.

    -

    Darkscandal Pack


    Download >>> https://bytlly.com/2uGxKT



    -

    Today, March 22, we were contacted by a reporter who had received an order for our Darkscandal Pack; the order was not placed through us or by our company. She knew that the time between the promised delivery and the actual receipt of the package started on March 12, and that this also included the delivery time to Spain. Since her order was placed on Tuesday, we said that she would have the package by March 24. However, the package was received on March 26, two business days later. We know that we are responsible, because we are the only ones who send the packages to our customers, so we apologize for this delay in the delivery. We will be in contact with the customer and provide all the information we have regarding the shipment.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Hazrat Umar Quotes In Urdu Pdf Download TOPl.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Hazrat Umar Quotes In Urdu Pdf Download TOPl.md deleted file mode 100644 index 0170c7314b1612e3e625b049ed2692996da8e6f0..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Hazrat Umar Quotes In Urdu Pdf Download TOPl.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Hazrat Umar Quotes In Urdu Pdf Downloadl


    Download Ziphttps://bytlly.com/2uGy0Z



    -
    - 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Hey Ram Tamil Movie Download Hd.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Hey Ram Tamil Movie Download Hd.md deleted file mode 100644 index ed56e679cefb479404e47e410492464fbe49704c..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Hey Ram Tamil Movie Download Hd.md +++ /dev/null @@ -1,7 +0,0 @@ -

    hey ram tamil movie download hd


    Download Zip ->>> https://bytlly.com/2uGwQA



    - -A historical drama told in flashback, with a semi-fictional plot centered on the partition of India and the assassination of Mahatma Gandhi. ##Hey Ram (2000) - Recut HD official trailer | ஹேராம் (Hey Ram) | Kamal ... Production Company : Raaj Kamal Films ... ; Directed by : Kamal Haasan ... 8a78ff9644
    -
    -
    -

    diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Mediacoder Premium Vod Edition NEW Cracked Rar.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Mediacoder Premium Vod Edition NEW Cracked Rar.md deleted file mode 100644 index abc246b2419bc3dff66aa099cf388755edf56e0b..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Mediacoder Premium Vod Edition NEW Cracked Rar.md +++ /dev/null @@ -1,6 +0,0 @@ -

    mediacoder premium vod edition cracked rar


    Download Filehttps://bytlly.com/2uGxUq



    - -096. 6. 28. ; Raaz Full Full Movie quarnell096. 6. 27. ; Raaz 2.5 Pc. The oldest prefabricated arches were usually made with dynamic and painstaking preparatory work. Before the corresponding preparation they were reproduced by artists, starting with the oldest monks and masters and ending with people who called themselves arch-makers. Arches first appeared at religious services, but since then they trace back to Soviet flying monuments and take names that sound like the capital openings of a sequential training program for artists… 4fefd39f24
    -
    -
    -

    diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Microsoft Virtual Wifi Miniport Adapter Driver.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Microsoft Virtual Wifi Miniport Adapter Driver.md deleted file mode 100644 index ab799d8ce8df174c3e814bbc0632739051cbfe9f..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Microsoft Virtual Wifi Miniport Adapter Driver.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Microsoft Virtual Wifi Miniport Adapter Driver


    Download Zip ———>>> https://bytlly.com/2uGyFD



    -
    -Here is the list of Microsoft Virtual WiFi Miniport Adapter drivers; download and update Microsoft Virtual WiFi Miniport Adapter drivers from professional Microsoft ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/luckwill/chiakicc/text/sanskrit.py b/spaces/luckwill/chiakicc/text/sanskrit.py deleted file mode 100644 index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000 --- a/spaces/luckwill/chiakicc/text/sanskrit.py +++ /dev/null @@ -1,62 +0,0 @@ -import re -from indic_transliteration import sanscript - - -# List of (iast, ipa) pairs: -_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('a', 'ə'), - ('ā', 'aː'), - ('ī', 'iː'), - ('ū', 'uː'), - ('ṛ', 'ɹ`'), - ('ṝ', 'ɹ`ː'), - ('ḷ', 'l`'), - ('ḹ', 'l`ː'), - ('e', 'eː'), - ('o', 'oː'), - ('k', 'k⁼'), - ('k⁼h', 'kʰ'), - ('g', 'g⁼'), - ('g⁼h', 'gʰ'), - ('ṅ', 'ŋ'), - ('c', 'ʧ⁼'), - ('ʧ⁼h', 'ʧʰ'), - ('j', 'ʥ⁼'), - ('ʥ⁼h', 'ʥʰ'), - ('ñ', 'n^'), - ('ṭ', 't`⁼'), - ('t`⁼h', 't`ʰ'), - ('ḍ', 'd`⁼'), - ('d`⁼h', 'd`ʰ'), - ('ṇ', 'n`'), - ('t', 't⁼'), - ('t⁼h', 'tʰ'), - ('d', 'd⁼'), - ('d⁼h', 'dʰ'), - ('p', 'p⁼'), - ('p⁼h', 'pʰ'), - ('b', 'b⁼'), - ('b⁼h', 'bʰ'), - ('y', 'j'), - ('ś', 'ʃ'), - ('ṣ', 's`'), - ('r', 'ɾ'), - ('l̤', 'l`'), - ('h', 'ɦ'), - ("'", ''), - ('~', '^'), - ('ṃ', '^') -]] - - -def devanagari_to_ipa(text): - text = text.replace('ॐ', 'ओम्') - text = re.sub(r'\s*।\s*$', '.', text) - text = re.sub(r'\s*।\s*', ', ', text) - text = re.sub(r'\s*॥', '.', text) - text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST) - for regex, replacement in _iast_to_ipa: - text = re.sub(regex, replacement, text) - text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0) - [:-1]+'h'+x.group(1)+'*', text) - return text diff --git a/spaces/luxuedong/lxd/src/components/chat-scroll-anchor.tsx b/spaces/luxuedong/lxd/src/components/chat-scroll-anchor.tsx deleted file mode 100644 index ac809f4486a48e134cb69314c3d0dae5e68d614e..0000000000000000000000000000000000000000 --- a/spaces/luxuedong/lxd/src/components/chat-scroll-anchor.tsx +++ /dev/null @@ -1,29 +0,0 @@ -'use client' - -import * as React from 'react' -import { useInView } from 'react-intersection-observer' - -import { useAtBottom } from '@/lib/hooks/use-at-bottom' - -interface ChatScrollAnchorProps { - trackVisibility?: boolean -} - -export function ChatScrollAnchor({ trackVisibility }: ChatScrollAnchorProps) { - const isAtBottom = useAtBottom() - const { ref, entry, inView } = useInView({ - trackVisibility, - delay: 100, - rootMargin: '0px 0px -150px 0px' - }) - - React.useEffect(() => { - if (isAtBottom && trackVisibility && !inView) { - entry?.target.scrollIntoView({ - block: 'start' - }) - } - }, [inView, entry, isAtBottom, trackVisibility]) - - return
    -} diff --git a/spaces/ma-xu/LIVE/pybind11/tests/constructor_stats.h b/spaces/ma-xu/LIVE/pybind11/tests/constructor_stats.h deleted file mode 100644 index abfaf9161406798eeaa79a0d6c22e023de893495..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pybind11/tests/constructor_stats.h +++ /dev/null @@ -1,275 +0,0 @@ -#pragma once -/* - tests/constructor_stats.h -- framework for printing and tracking object - instance lifetimes in example/test code. - - Copyright (c) 2016 Jason Rhinelander - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. - -This header provides a few useful tools for writing examples or tests that want to check and/or -display object instance lifetimes. It requires that you include this header and add the following -function calls to constructors: - - class MyClass { - MyClass() { ...; print_default_created(this); } - ~MyClass() { ...; print_destroyed(this); } - MyClass(const MyClass &c) { ...; print_copy_created(this); } - MyClass(MyClass &&c) { ...; print_move_created(this); } - MyClass(int a, int b) { ...; print_created(this, a, b); } - MyClass &operator=(const MyClass &c) { ...; print_copy_assigned(this); } - MyClass &operator=(MyClass &&c) { ...; print_move_assigned(this); } - - ... - } - -You can find various examples of these in several of the existing testing .cpp files. (Of course -you don't need to add any of the above constructors/operators that you don't actually have, except -for the destructor). - -Each of these will print an appropriate message such as: - - ### MyClass @ 0x2801910 created via default constructor - ### MyClass @ 0x27fa780 created 100 200 - ### MyClass @ 0x2801910 destroyed - ### MyClass @ 0x27fa780 destroyed - -You can also include extra arguments (such as the 100, 200 in the output above, coming from the -value constructor) for all of the above methods which will be included in the output. - -For testing, each of these also keeps track the created instances and allows you to check how many -of the various constructors have been invoked from the Python side via code such as: - - from pybind11_tests import ConstructorStats - cstats = ConstructorStats.get(MyClass) - print(cstats.alive()) - print(cstats.default_constructions) - -Note that `.alive()` should usually be the first thing you call as it invokes Python's garbage -collector to actually destroy objects that aren't yet referenced. - -For everything except copy and move constructors and destructors, any extra values given to the -print_...() function is stored in a class-specific values list which you can retrieve and inspect -from the ConstructorStats instance `.values()` method. - -In some cases, when you need to track instances of a C++ class not registered with pybind11, you -need to add a function returning the ConstructorStats for the C++ class; this can be done with: - - m.def("get_special_cstats", &ConstructorStats::get, py::return_value_policy::reference) - -Finally, you can suppress the output messages, but keep the constructor tracking (for -inspection/testing in python) by using the functions with `print_` replaced with `track_` (e.g. -`track_copy_created(this)`). - -*/ - -#include "pybind11_tests.h" -#include -#include -#include -#include - -class ConstructorStats { -protected: - std::unordered_map _instances; // Need a map rather than set because members can shared address with parents - std::list _values; // Used to track values (e.g. 
of value constructors) -public: - int default_constructions = 0; - int copy_constructions = 0; - int move_constructions = 0; - int copy_assignments = 0; - int move_assignments = 0; - - void copy_created(void *inst) { - created(inst); - copy_constructions++; - } - - void move_created(void *inst) { - created(inst); - move_constructions++; - } - - void default_created(void *inst) { - created(inst); - default_constructions++; - } - - void created(void *inst) { - ++_instances[inst]; - } - - void destroyed(void *inst) { - if (--_instances[inst] < 0) - throw std::runtime_error("cstats.destroyed() called with unknown " - "instance; potential double-destruction " - "or a missing cstats.created()"); - } - - static void gc() { - // Force garbage collection to ensure any pending destructors are invoked: -#if defined(PYPY_VERSION) - PyObject *globals = PyEval_GetGlobals(); - PyObject *result = PyRun_String( - "import gc\n" - "for i in range(2):" - " gc.collect()\n", - Py_file_input, globals, globals); - if (result == nullptr) - throw py::error_already_set(); - Py_DECREF(result); -#else - py::module::import("gc").attr("collect")(); -#endif - } - - int alive() { - gc(); - int total = 0; - for (const auto &p : _instances) - if (p.second > 0) - total += p.second; - return total; - } - - void value() {} // Recursion terminator - // Takes one or more values, converts them to strings, then stores them. - template void value(const T &v, Tmore &&...args) { - std::ostringstream oss; - oss << v; - _values.push_back(oss.str()); - value(std::forward(args)...); - } - - // Move out stored values - py::list values() { - py::list l; - for (const auto &v : _values) l.append(py::cast(v)); - _values.clear(); - return l; - } - - // Gets constructor stats from a C++ type index - static ConstructorStats& get(std::type_index type) { - static std::unordered_map all_cstats; - return all_cstats[type]; - } - - // Gets constructor stats from a C++ type - template static ConstructorStats& get() { -#if defined(PYPY_VERSION) - gc(); -#endif - return get(typeid(T)); - } - - // Gets constructor stats from a Python class - static ConstructorStats& get(py::object class_) { - auto &internals = py::detail::get_internals(); - const std::type_index *t1 = nullptr, *t2 = nullptr; - try { - auto *type_info = internals.registered_types_py.at((PyTypeObject *) class_.ptr()).at(0); - for (auto &p : internals.registered_types_cpp) { - if (p.second == type_info) { - if (t1) { - t2 = &p.first; - break; - } - t1 = &p.first; - } - } - } - catch (const std::out_of_range&) {} - if (!t1) throw std::runtime_error("Unknown class passed to ConstructorStats::get()"); - auto &cs1 = get(*t1); - // If we have both a t1 and t2 match, one is probably the trampoline class; return whichever - // has more constructions (typically one or the other will be 0) - if (t2) { - auto &cs2 = get(*t2); - int cs1_total = cs1.default_constructions + cs1.copy_constructions + cs1.move_constructions + (int) cs1._values.size(); - int cs2_total = cs2.default_constructions + cs2.copy_constructions + cs2.move_constructions + (int) cs2._values.size(); - if (cs2_total > cs1_total) return cs2; - } - return cs1; - } -}; - -// To track construction/destruction, you need to call these methods from the various -// constructors/operators. The ones that take extra values record the given values in the -// constructor stats values for later inspection. 
-template void track_copy_created(T *inst) { ConstructorStats::get().copy_created(inst); } -template void track_move_created(T *inst) { ConstructorStats::get().move_created(inst); } -template void track_copy_assigned(T *, Values &&...values) { - auto &cst = ConstructorStats::get(); - cst.copy_assignments++; - cst.value(std::forward(values)...); -} -template void track_move_assigned(T *, Values &&...values) { - auto &cst = ConstructorStats::get(); - cst.move_assignments++; - cst.value(std::forward(values)...); -} -template void track_default_created(T *inst, Values &&...values) { - auto &cst = ConstructorStats::get(); - cst.default_created(inst); - cst.value(std::forward(values)...); -} -template void track_created(T *inst, Values &&...values) { - auto &cst = ConstructorStats::get(); - cst.created(inst); - cst.value(std::forward(values)...); -} -template void track_destroyed(T *inst) { - ConstructorStats::get().destroyed(inst); -} -template void track_values(T *, Values &&...values) { - ConstructorStats::get().value(std::forward(values)...); -} - -/// Don't cast pointers to Python, print them as strings -inline const char *format_ptrs(const char *p) { return p; } -template -py::str format_ptrs(T *p) { return "{:#x}"_s.format(reinterpret_cast(p)); } -template -auto format_ptrs(T &&x) -> decltype(std::forward(x)) { return std::forward(x); } - -template -void print_constr_details(T *inst, const std::string &action, Output &&...output) { - py::print("###", py::type_id(), "@", format_ptrs(inst), action, - format_ptrs(std::forward(output))...); -} - -// Verbose versions of the above: -template void print_copy_created(T *inst, Values &&...values) { // NB: this prints, but doesn't store, given values - print_constr_details(inst, "created via copy constructor", values...); - track_copy_created(inst); -} -template void print_move_created(T *inst, Values &&...values) { // NB: this prints, but doesn't store, given values - print_constr_details(inst, "created via move constructor", values...); - track_move_created(inst); -} -template void print_copy_assigned(T *inst, Values &&...values) { - print_constr_details(inst, "assigned via copy assignment", values...); - track_copy_assigned(inst, values...); -} -template void print_move_assigned(T *inst, Values &&...values) { - print_constr_details(inst, "assigned via move assignment", values...); - track_move_assigned(inst, values...); -} -template void print_default_created(T *inst, Values &&...values) { - print_constr_details(inst, "created via default constructor", values...); - track_default_created(inst, values...); -} -template void print_created(T *inst, Values &&...values) { - print_constr_details(inst, "created", values...); - track_created(inst, values...); -} -template void print_destroyed(T *inst, Values &&...values) { // Prints but doesn't store given values - print_constr_details(inst, "destroyed", values...); - track_destroyed(inst); -} -template void print_values(T *inst, Values &&...values) { - print_constr_details(inst, ":", values...); - track_values(inst, values...); -} diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/omp/detail/inner_product.h b/spaces/ma-xu/LIVE/thrust/thrust/system/omp/detail/inner_product.h deleted file mode 100644 index e8cf941a1dc3df1a6a516eee54f92fa610fd35cc..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/omp/detail/inner_product.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the 
"License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system inherits inner_product -#include - diff --git a/spaces/maknee/minigpt4.cpp/README.md b/spaces/maknee/minigpt4.cpp/README.md deleted file mode 100644 index 0c2465f89c8ec8d2f1f6d1247a38e92aec2dec8a..0000000000000000000000000000000000000000 --- a/spaces/maknee/minigpt4.cpp/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Minigpt4 Ggml -emoji: 🌍 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/manhkhanhUIT/BOPBTL/Global/detection_models/sync_batchnorm/replicate.py b/spaces/manhkhanhUIT/BOPBTL/Global/detection_models/sync_batchnorm/replicate.py deleted file mode 100644 index b71c7b8ed51a1d6c55b1f753bdd8d90bad79bd06..0000000000000000000000000000000000000000 --- a/spaces/manhkhanhUIT/BOPBTL/Global/detection_models/sync_batchnorm/replicate.py +++ /dev/null @@ -1,94 +0,0 @@ -# -*- coding: utf-8 -*- -# File : replicate.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -import functools - -from torch.nn.parallel.data_parallel import DataParallel - -__all__ = [ - 'CallbackContext', - 'execute_replication_callbacks', - 'DataParallelWithCallback', - 'patch_replication_callback' -] - - -class CallbackContext(object): - pass - - -def execute_replication_callbacks(modules): - """ - Execute an replication callback `__data_parallel_replicate__` on each module created by original replication. - - The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)` - - Note that, as all modules are isomorphism, we assign each sub-module with a context - (shared among multiple copies of this module on different devices). - Through this context, different copies can share some information. - - We guarantee that the callback on the master copy (the first copy) will be called ahead of calling the callback - of any slave copies. - """ - master_copy = modules[0] - nr_modules = len(list(master_copy.modules())) - ctxs = [CallbackContext() for _ in range(nr_modules)] - - for i, module in enumerate(modules): - for j, m in enumerate(module.modules()): - if hasattr(m, '__data_parallel_replicate__'): - m.__data_parallel_replicate__(ctxs[j], i) - - -class DataParallelWithCallback(DataParallel): - """ - Data Parallel with a replication callback. - - An replication callback `__data_parallel_replicate__` of each module will be invoked after being created by - original `replicate` function. - The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)` - - Examples: - > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False) - > sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1]) - # sync_bn.__data_parallel_replicate__ will be invoked. 
- """ - - def replicate(self, module, device_ids): - modules = super(DataParallelWithCallback, self).replicate(module, device_ids) - execute_replication_callbacks(modules) - return modules - - -def patch_replication_callback(data_parallel): - """ - Monkey-patch an existing `DataParallel` object. Add the replication callback. - Useful when you have customized `DataParallel` implementation. - - Examples: - > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False) - > sync_bn = DataParallel(sync_bn, device_ids=[0, 1]) - > patch_replication_callback(sync_bn) - # this is equivalent to - > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False) - > sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1]) - """ - - assert isinstance(data_parallel, DataParallel) - - old_replicate = data_parallel.replicate - - @functools.wraps(old_replicate) - def new_replicate(module, device_ids): - modules = old_replicate(module, device_ids) - execute_replication_callbacks(modules) - return modules - - data_parallel.replicate = new_replicate diff --git a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Global/models/mapping_model.py b/spaces/manhkhanhUIT/Image_Restoration_Colorization/Global/models/mapping_model.py deleted file mode 100644 index e030f0f6274e9592494afbfaf17fa1d8371215ce..0000000000000000000000000000000000000000 --- a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Global/models/mapping_model.py +++ /dev/null @@ -1,352 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -import os -import functools -from torch.autograd import Variable -from util.image_pool import ImagePool -from .base_model import BaseModel -from . 
import networks -import math -from .NonLocal_feature_mapping_model import * - - -class Mapping_Model(nn.Module): - def __init__(self, nc, mc=64, n_blocks=3, norm="instance", padding_type="reflect", opt=None): - super(Mapping_Model, self).__init__() - - norm_layer = networks.get_norm_layer(norm_type=norm) - activation = nn.ReLU(True) - model = [] - tmp_nc = 64 - n_up = 4 - - print("Mapping: You are using the mapping model without global restoration.") - - for i in range(n_up): - ic = min(tmp_nc * (2 ** i), mc) - oc = min(tmp_nc * (2 ** (i + 1)), mc) - model += [nn.Conv2d(ic, oc, 3, 1, 1), norm_layer(oc), activation] - for i in range(n_blocks): - model += [ - networks.ResnetBlock( - mc, - padding_type=padding_type, - activation=activation, - norm_layer=norm_layer, - opt=opt, - dilation=opt.mapping_net_dilation, - ) - ] - - for i in range(n_up - 1): - ic = min(64 * (2 ** (4 - i)), mc) - oc = min(64 * (2 ** (3 - i)), mc) - model += [nn.Conv2d(ic, oc, 3, 1, 1), norm_layer(oc), activation] - model += [nn.Conv2d(tmp_nc * 2, tmp_nc, 3, 1, 1)] - if opt.feat_dim > 0 and opt.feat_dim < 64: - model += [norm_layer(tmp_nc), activation, nn.Conv2d(tmp_nc, opt.feat_dim, 1, 1)] - # model += [nn.Conv2d(64, 1, 1, 1, 0)] - self.model = nn.Sequential(*model) - - def forward(self, input): - return self.model(input) - - -class Pix2PixHDModel_Mapping(BaseModel): - def name(self): - return "Pix2PixHDModel_Mapping" - - def init_loss_filter(self, use_gan_feat_loss, use_vgg_loss, use_smooth_l1, stage_1_feat_l2): - flags = (True, True, use_gan_feat_loss, use_vgg_loss, True, True, use_smooth_l1, stage_1_feat_l2) - - def loss_filter(g_feat_l2, g_gan, g_gan_feat, g_vgg, d_real, d_fake, smooth_l1, stage_1_feat_l2): - return [ - l - for (l, f) in zip( - (g_feat_l2, g_gan, g_gan_feat, g_vgg, d_real, d_fake, smooth_l1, stage_1_feat_l2), flags - ) - if f - ] - - return loss_filter - - def initialize(self, opt): - BaseModel.initialize(self, opt) - if opt.resize_or_crop != "none" or not opt.isTrain: - torch.backends.cudnn.benchmark = True - self.isTrain = opt.isTrain - input_nc = opt.label_nc if opt.label_nc != 0 else opt.input_nc - - ##### define networks - # Generator network - netG_input_nc = input_nc - self.netG_A = networks.GlobalGenerator_DCDCv2( - netG_input_nc, - opt.output_nc, - opt.ngf, - opt.k_size, - opt.n_downsample_global, - networks.get_norm_layer(norm_type=opt.norm), - opt=opt, - ) - self.netG_B = networks.GlobalGenerator_DCDCv2( - netG_input_nc, - opt.output_nc, - opt.ngf, - opt.k_size, - opt.n_downsample_global, - networks.get_norm_layer(norm_type=opt.norm), - opt=opt, - ) - - if opt.non_local == "Setting_42" or opt.NL_use_mask: - if opt.mapping_exp==1: - self.mapping_net = Mapping_Model_with_mask_2( - min(opt.ngf * 2 ** opt.n_downsample_global, opt.mc), - opt.map_mc, - n_blocks=opt.mapping_n_block, - opt=opt, - ) - else: - self.mapping_net = Mapping_Model_with_mask( - min(opt.ngf * 2 ** opt.n_downsample_global, opt.mc), - opt.map_mc, - n_blocks=opt.mapping_n_block, - opt=opt, - ) - else: - self.mapping_net = Mapping_Model( - min(opt.ngf * 2 ** opt.n_downsample_global, opt.mc), - opt.map_mc, - n_blocks=opt.mapping_n_block, - opt=opt, - ) - - self.mapping_net.apply(networks.weights_init) - - if opt.load_pretrain != "": - self.load_network(self.mapping_net, "mapping_net", opt.which_epoch, opt.load_pretrain) - - if not opt.no_load_VAE: - - self.load_network(self.netG_A, "G", opt.use_vae_which_epoch, opt.load_pretrainA) - self.load_network(self.netG_B, "G", opt.use_vae_which_epoch, opt.load_pretrainB) - for param 
in self.netG_A.parameters(): - param.requires_grad = False - for param in self.netG_B.parameters(): - param.requires_grad = False - self.netG_A.eval() - self.netG_B.eval() - - if opt.gpu_ids: - self.netG_A.cuda(opt.gpu_ids[0]) - self.netG_B.cuda(opt.gpu_ids[0]) - self.mapping_net.cuda(opt.gpu_ids[0]) - - if not self.isTrain: - self.load_network(self.mapping_net, "mapping_net", opt.which_epoch) - - # Discriminator network - if self.isTrain: - use_sigmoid = opt.no_lsgan - netD_input_nc = opt.ngf * 2 if opt.feat_gan else input_nc + opt.output_nc - if not opt.no_instance: - netD_input_nc += 1 - - self.netD = networks.define_D(netD_input_nc, opt.ndf, opt.n_layers_D, opt, opt.norm, use_sigmoid, - opt.num_D, not opt.no_ganFeat_loss, gpu_ids=self.gpu_ids) - - # set loss functions and optimizers - if self.isTrain: - if opt.pool_size > 0 and (len(self.gpu_ids)) > 1: - raise NotImplementedError("Fake Pool Not Implemented for MultiGPU") - self.fake_pool = ImagePool(opt.pool_size) - self.old_lr = opt.lr - - # define loss functions - self.loss_filter = self.init_loss_filter(not opt.no_ganFeat_loss, not opt.no_vgg_loss, opt.Smooth_L1, opt.use_two_stage_mapping) - - self.criterionGAN = networks.GANLoss(use_lsgan=not opt.no_lsgan, tensor=self.Tensor) - - - self.criterionFeat = torch.nn.L1Loss() - self.criterionFeat_feat = torch.nn.L1Loss() if opt.use_l1_feat else torch.nn.MSELoss() - - if self.opt.image_L1: - self.criterionImage=torch.nn.L1Loss() - else: - self.criterionImage = torch.nn.SmoothL1Loss() - - - print(self.criterionFeat_feat) - if not opt.no_vgg_loss: - self.criterionVGG = networks.VGGLoss_torch(self.gpu_ids) - - - # Names so we can breakout loss - self.loss_names = self.loss_filter('G_Feat_L2', 'G_GAN', 'G_GAN_Feat', 'G_VGG','D_real', 'D_fake', 'Smooth_L1', 'G_Feat_L2_Stage_1') - - # initialize optimizers - # optimizer G - - if opt.no_TTUR: - beta1,beta2=opt.beta1,0.999 - G_lr,D_lr=opt.lr,opt.lr - else: - beta1,beta2=0,0.9 - G_lr,D_lr=opt.lr/2,opt.lr*2 - - - if not opt.no_load_VAE: - params = list(self.mapping_net.parameters()) - self.optimizer_mapping = torch.optim.Adam(params, lr=G_lr, betas=(beta1, beta2)) - - # optimizer D - params = list(self.netD.parameters()) - self.optimizer_D = torch.optim.Adam(params, lr=D_lr, betas=(beta1, beta2)) - - print("---------- Optimizers initialized -------------") - - def encode_input(self, label_map, inst_map=None, real_image=None, feat_map=None, infer=False): - if self.opt.label_nc == 0: - input_label = label_map.data.cuda() - else: - # create one-hot vector for label map - size = label_map.size() - oneHot_size = (size[0], self.opt.label_nc, size[2], size[3]) - input_label = torch.cuda.FloatTensor(torch.Size(oneHot_size)).zero_() - input_label = input_label.scatter_(1, label_map.data.long().cuda(), 1.0) - if self.opt.data_type == 16: - input_label = input_label.half() - - # get edges from instance map - if not self.opt.no_instance: - inst_map = inst_map.data.cuda() - edge_map = self.get_edges(inst_map) - input_label = torch.cat((input_label, edge_map), dim=1) - input_label = Variable(input_label, volatile=infer) - - # real images for training - if real_image is not None: - real_image = Variable(real_image.data.cuda()) - - return input_label, inst_map, real_image, feat_map - - def discriminate(self, input_label, test_image, use_pool=False): - input_concat = torch.cat((input_label, test_image.detach()), dim=1) - if use_pool: - fake_query = self.fake_pool.query(input_concat) - return self.netD.forward(fake_query) - else: - return 
self.netD.forward(input_concat) - - def forward(self, label, inst, image, feat, pair=True, infer=False, last_label=None, last_image=None): - # Encode Inputs - input_label, inst_map, real_image, feat_map = self.encode_input(label, inst, image, feat) - - # Fake Generation - input_concat = input_label - - label_feat = self.netG_A.forward(input_concat, flow='enc') - # print('label:') - # print(label_feat.min(), label_feat.max(), label_feat.mean()) - #label_feat = label_feat / 16.0 - - if self.opt.NL_use_mask: - label_feat_map=self.mapping_net(label_feat.detach(),inst) - else: - label_feat_map = self.mapping_net(label_feat.detach()) - - fake_image = self.netG_B.forward(label_feat_map, flow='dec') - image_feat = self.netG_B.forward(real_image, flow='enc') - - loss_feat_l2_stage_1=0 - loss_feat_l2 = self.criterionFeat_feat(label_feat_map, image_feat.data) * self.opt.l2_feat - - - if self.opt.feat_gan: - # Fake Detection and Loss - pred_fake_pool = self.discriminate(label_feat.detach(), label_feat_map, use_pool=True) - loss_D_fake = self.criterionGAN(pred_fake_pool, False) - - # Real Detection and Loss - pred_real = self.discriminate(label_feat.detach(), image_feat) - loss_D_real = self.criterionGAN(pred_real, True) - - # GAN loss (Fake Passability Loss) - pred_fake = self.netD.forward(torch.cat((label_feat.detach(), label_feat_map), dim=1)) - loss_G_GAN = self.criterionGAN(pred_fake, True) - else: - # Fake Detection and Loss - pred_fake_pool = self.discriminate(input_label, fake_image, use_pool=True) - loss_D_fake = self.criterionGAN(pred_fake_pool, False) - - # Real Detection and Loss - if pair: - pred_real = self.discriminate(input_label, real_image) - else: - pred_real = self.discriminate(last_label, last_image) - loss_D_real = self.criterionGAN(pred_real, True) - - # GAN loss (Fake Passability Loss) - pred_fake = self.netD.forward(torch.cat((input_label, fake_image), dim=1)) - loss_G_GAN = self.criterionGAN(pred_fake, True) - - # GAN feature matching loss - loss_G_GAN_Feat = 0 - if not self.opt.no_ganFeat_loss and pair: - feat_weights = 4.0 / (self.opt.n_layers_D + 1) - D_weights = 1.0 / self.opt.num_D - for i in range(self.opt.num_D): - for j in range(len(pred_fake[i])-1): - tmp = self.criterionFeat(pred_fake[i][j], pred_real[i][j].detach()) * self.opt.lambda_feat - loss_G_GAN_Feat += D_weights * feat_weights * tmp - else: - loss_G_GAN_Feat = torch.zeros(1).to(label.device) - - # VGG feature matching loss - loss_G_VGG = 0 - if not self.opt.no_vgg_loss: - loss_G_VGG = self.criterionVGG(fake_image, real_image) * self.opt.lambda_feat if pair else torch.zeros(1).to(label.device) - - smooth_l1_loss=0 - if self.opt.Smooth_L1: - smooth_l1_loss=self.criterionImage(fake_image,real_image)*self.opt.L1_weight - - - return [ self.loss_filter(loss_feat_l2, loss_G_GAN, loss_G_GAN_Feat, loss_G_VGG, loss_D_real, loss_D_fake,smooth_l1_loss,loss_feat_l2_stage_1), None if not infer else fake_image ] - - - def inference(self, label, inst): - - use_gpu = len(self.opt.gpu_ids) > 0 - if use_gpu: - input_concat = label.data.cuda() - inst_data = inst.cuda() - else: - input_concat = label.data - inst_data = inst - - label_feat = self.netG_A.forward(input_concat, flow="enc") - - if self.opt.NL_use_mask: - if self.opt.inference_optimize: - label_feat_map=self.mapping_net.inference_forward(label_feat.detach(),inst_data) - else: - label_feat_map = self.mapping_net(label_feat.detach(), inst_data) - else: - label_feat_map = self.mapping_net(label_feat.detach()) - - fake_image = self.netG_B.forward(label_feat_map, 
flow="dec") - return fake_image - - -class InferenceModel(Pix2PixHDModel_Mapping): - def forward(self, label, inst): - return self.inference(label, inst) - diff --git a/spaces/matthoffner/chatbot/__tests__/utils/app/importExports.test.ts b/spaces/matthoffner/chatbot/__tests__/utils/app/importExports.test.ts deleted file mode 100644 index aa51cbc054eae6a7921d88f2e894186e82a87739..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot/__tests__/utils/app/importExports.test.ts +++ /dev/null @@ -1,264 +0,0 @@ -import { DEFAULT_SYSTEM_PROMPT, DEFAULT_TEMPERATURE } from '@/utils/app/const'; -import { - cleanData, - isExportFormatV1, - isExportFormatV2, - isExportFormatV3, - isExportFormatV4, - isLatestExportFormat, -} from '@/utils/app/importExport'; - -import { ExportFormatV1, ExportFormatV2, ExportFormatV4 } from '@/types/export'; -import { OpenAIModelID, OpenAIModels } from '@/types/openai'; - -import { describe, expect, it } from 'vitest'; - -describe('Export Format Functions', () => { - describe('isExportFormatV1', () => { - it('should return true for v1 format', () => { - const obj = [{ id: 1 }]; - expect(isExportFormatV1(obj)).toBe(true); - }); - - it('should return false for non-v1 formats', () => { - const obj = { version: 3, history: [], folders: [] }; - expect(isExportFormatV1(obj)).toBe(false); - }); - }); - - describe('isExportFormatV2', () => { - it('should return true for v2 format', () => { - const obj = { history: [], folders: [] }; - expect(isExportFormatV2(obj)).toBe(true); - }); - - it('should return false for non-v2 formats', () => { - const obj = { version: 3, history: [], folders: [] }; - expect(isExportFormatV2(obj)).toBe(false); - }); - }); - - describe('isExportFormatV3', () => { - it('should return true for v3 format', () => { - const obj = { version: 3, history: [], folders: [] }; - expect(isExportFormatV3(obj)).toBe(true); - }); - - it('should return false for non-v3 formats', () => { - const obj = { version: 4, history: [], folders: [] }; - expect(isExportFormatV3(obj)).toBe(false); - }); - }); - - describe('isExportFormatV4', () => { - it('should return true for v4 format', () => { - const obj = { version: 4, history: [], folders: [], prompts: [] }; - expect(isExportFormatV4(obj)).toBe(true); - }); - - it('should return false for non-v4 formats', () => { - const obj = { version: 5, history: [], folders: [], prompts: [] }; - expect(isExportFormatV4(obj)).toBe(false); - }); - }); -}); - -describe('cleanData Functions', () => { - describe('cleaning v1 data', () => { - it('should return the latest format', () => { - const data = [ - { - id: 1, - name: 'conversation 1', - messages: [ - { - role: 'user', - content: "what's up ?", - }, - { - role: 'assistant', - content: 'Hi', - }, - ], - }, - ] as ExportFormatV1; - const obj = cleanData(data); - expect(isLatestExportFormat(obj)).toBe(true); - expect(obj).toEqual({ - version: 4, - history: [ - { - id: 1, - name: 'conversation 1', - messages: [ - { - role: 'user', - content: "what's up ?", - }, - { - role: 'assistant', - content: 'Hi', - }, - ], - model: OpenAIModels[OpenAIModelID.GPT_3_5], - prompt: DEFAULT_SYSTEM_PROMPT, - temperature: DEFAULT_TEMPERATURE, - folderId: null, - }, - ], - folders: [], - prompts: [], - }); - }); - }); - - describe('cleaning v2 data', () => { - it('should return the latest format', () => { - const data = { - history: [ - { - id: '1', - name: 'conversation 1', - messages: [ - { - role: 'user', - content: "what's up ?", - }, - { - role: 'assistant', - content: 'Hi', - }, - ], 
- }, - ], - folders: [ - { - id: 1, - name: 'folder 1', - }, - ], - } as ExportFormatV2; - const obj = cleanData(data); - expect(isLatestExportFormat(obj)).toBe(true); - expect(obj).toEqual({ - version: 4, - history: [ - { - id: '1', - name: 'conversation 1', - messages: [ - { - role: 'user', - content: "what's up ?", - }, - { - role: 'assistant', - content: 'Hi', - }, - ], - model: OpenAIModels[OpenAIModelID.GPT_3_5], - prompt: DEFAULT_SYSTEM_PROMPT, - temperature: DEFAULT_TEMPERATURE, - folderId: null, - }, - ], - folders: [ - { - id: '1', - name: 'folder 1', - type: 'chat', - }, - ], - prompts: [], - }); - }); - }); - - describe('cleaning v4 data', () => { - it('should return the latest format', () => { - const data = { - version: 4, - history: [ - { - id: '1', - name: 'conversation 1', - messages: [ - { - role: 'user', - content: "what's up ?", - }, - { - role: 'assistant', - content: 'Hi', - }, - ], - model: OpenAIModels[OpenAIModelID.GPT_3_5], - prompt: DEFAULT_SYSTEM_PROMPT, - temperature: DEFAULT_TEMPERATURE, - folderId: null, - }, - ], - folders: [ - { - id: '1', - name: 'folder 1', - type: 'chat', - }, - ], - prompts: [ - { - id: '1', - name: 'prompt 1', - description: '', - content: '', - model: OpenAIModels[OpenAIModelID.GPT_3_5], - folderId: null, - }, - ], - } as ExportFormatV4; - - const obj = cleanData(data); - expect(isLatestExportFormat(obj)).toBe(true); - expect(obj).toEqual({ - version: 4, - history: [ - { - id: '1', - name: 'conversation 1', - messages: [ - { - role: 'user', - content: "what's up ?", - }, - { - role: 'assistant', - content: 'Hi', - }, - ], - model: OpenAIModels[OpenAIModelID.GPT_3_5], - prompt: DEFAULT_SYSTEM_PROMPT, - temperature: DEFAULT_TEMPERATURE, - folderId: null, - }, - ], - folders: [ - { - id: '1', - name: 'folder 1', - type: 'chat', - }, - ], - prompts: [ - { - id: '1', - name: 'prompt 1', - description: '', - content: '', - model: OpenAIModels[OpenAIModelID.GPT_3_5], - folderId: null, - }, - ], - }); - }); - }); -}); diff --git a/spaces/mattmdjaga/segment_anything_base/README.md b/spaces/mattmdjaga/segment_anything_base/README.md deleted file mode 100644 index 6112b316f3ce54fbdef3aa7c1ca664ec13d09c0f..0000000000000000000000000000000000000000 --- a/spaces/mattmdjaga/segment_anything_base/README.md +++ /dev/null @@ -1,19 +0,0 @@ ---- -title: Segment Anything Base -emoji: 🏃 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - - -# How to use - -To start, input an image, then use the brush to create dots on the object which you want to segment, don't worry if your dots aren't perfect as the code will find the middle of each -drawn item. Then press the segment button to create masks for the object that the dots are on. 
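The point-prompt workflow described in the README above can be sketched outside the Space as well. Below is a minimal, hypothetical example of the same idea using the Hugging Face `transformers` SAM API: each hand-drawn blob is collapsed to its centroid (the "middle" of the drawn item) and passed to the model as a point prompt. The checkpoint name `facebook/sam-vit-base`, the input file `example.jpg`, and the `drawn_blobs` array are assumptions for illustration only; the Space's actual `app.py` may implement this differently.

```python
# Minimal sketch (assumptions: facebook/sam-vit-base checkpoint, a local image,
# and user-drawn dots already grouped into per-object pixel blobs).
import numpy as np
import torch
from PIL import Image
from transformers import SamModel, SamProcessor

processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
model = SamModel.from_pretrained("facebook/sam-vit-base")

image = Image.open("example.jpg").convert("RGB")

# Each drawn blob is reduced to its centroid, mirroring the "find the middle" step.
drawn_blobs = [np.array([[102, 118], [105, 121], [99, 124]])]  # hypothetical (x, y) pixels
input_points = [[blob.mean(axis=0).tolist() for blob in drawn_blobs]]  # one prompt point per object

inputs = processor(image, input_points=input_points, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Resize the predicted low-resolution masks back to the original image size.
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(),
    inputs["original_sizes"].cpu(),
    inputs["reshaped_input_sizes"].cpu(),
)
print(masks[0].shape)  # boolean masks for the prompted object
```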
diff --git a/spaces/mdkhalid/mistralai-Mistral-7B-v0.1/README.md b/spaces/mdkhalid/mistralai-Mistral-7B-v0.1/README.md deleted file mode 100644 index a828fad23e0a7ef8a44f07792e7d67dab64fe74d..0000000000000000000000000000000000000000 --- a/spaces/mdkhalid/mistralai-Mistral-7B-v0.1/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Mistralai Mistral 7B V0.1 -emoji: 📉 -colorFrom: pink -colorTo: red -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/megaaziib/hololive-rvc-models/infer_pack/attentions.py b/spaces/megaaziib/hololive-rvc-models/infer_pack/attentions.py deleted file mode 100644 index 77cb63ffccf3e33badf22d50862a64ba517b487f..0000000000000000000000000000000000000000 --- a/spaces/megaaziib/hololive-rvc-models/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from infer_pack import commons -from infer_pack import modules -from infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - 
proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." 
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." - block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. 
- x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/melazab1/ChatGPT4/README.md b/spaces/melazab1/ChatGPT4/README.md deleted file mode 100644 index 7938de14e5355209aaae713f289ca469181bbb17..0000000000000000000000000000000000000000 --- a/spaces/melazab1/ChatGPT4/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Chat-with-GPT4 -emoji: 🚀 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: ysharma/ChatGPT4 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/merve/anonymization/source/measuring-diversity/script.js b/spaces/merve/anonymization/source/measuring-diversity/script.js deleted file mode 100644 index 002fb32c0d0ee11cf292109725ebda6a2a4b57a4..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/source/measuring-diversity/script.js +++ /dev/null @@ -1,360 +0,0 @@ -// Seeded random number generator -window.random = new Math.seedrandom('aaaa') -window.randomIndex = new 
Math.seedrandom('7b') - -window.numRows = 20 -window.shapes = window.shapes || d3.range(21).map(i => randomShape(i, random)) - -window.random2 = new Math.seedrandom('7') -// window.columnShapes = window.columnShapes || d3.range(window.numRows).map(i => d3.range(10).map(i =>randomShape(i, random2))) -window.columnShapes = d3.range(window.numRows).map(i => d3.range(10).map(i =>randomShape(i, random2, true))) - -console.log(window.random3) -function randomShape(i, random, colTargets){ - var color2fill = { - green: '#5A9F8A', - orange: '#DF831F', - blue: '#80BAD4', - } - - var randomItem = function(arr) { - const index = Math.abs(random.int32()) % arr.length - return arr[index] - } - - var color = randomItem(d3.keys(color2fill)) - var size = randomItem(['small', 'large']) - var shape = randomItem(['circle', 'square', 'triangle']) - - if (colTargets && (i == 4 || i == 5)){ - color = 'green' - } - if (colTargets && (i == 4 || i == 15)){ - size = 'small' - } - if (colTargets && (i == 3 || i == 5)){ - shape = 'triangle' - } - - var displayIndex = randomIndex() - - return { - i, - displayIndex, - color, - fill: color2fill[color], - dFill: d3.color(color2fill[color]).darker(1), - size, - sizeVal: size == 'large' ? 1 : .4, - shape, - } -} - -var metrics = [ - { - str: 'Greens', - key: 'green', - field: 'color', - target: .3 - }, - { - str: 'Dot', - key: 'triangle', - field: 'shape', - target: .35 - }, - { - str: 'Smalls', - key: 'small', - field: 'size', - target: .60 - }, -] -window.metrics1 = metrics.map(d => ({...d})) -metrics1[2].target = .5 -window.metrics2 = metrics1.map(d => ({...d})) -metrics2[0].target = 1 - -metrics.forEach(d => { - d.scoreScale = d3.scaleLinear().domain([0, d.target, 1]).range([0, 1, 0]) -}) - - -var pctFmt = d3.format('.0%') -function addMetrics(metrics, {active, topSel, isSmall}){ - var metricSel = topSel - .st({textAlign: 'center'}) - .appendMany('div', metrics) - .st({textAlign: 'center', width: 200, display: 'inline-block'}) - - var width = 120 - - var svg = metricSel.append('svg') - .at({width: 120, height: 100}) - .append('g') - .translate([.5, 40.5]) - - if (isSmall){ - svg.translate((d, i) => [i ? 
-20.5 : 20.5, 40.5]) - } - - - var xScale = d3.scaleLinear().rangeRound([0, width]) - - var topText = svg.append('text') - .at({y: -20, fontWeight: 500, textAnchor: 'middle', x: width/2}) - - svg.append('path') - .at({d: 'M 0 0 H ' + width, stroke: '#000'}) - - var topTick = svg.append('path') - .at({d: 'M 0 0 V -12.5', stroke: '#000', strokeWidth: 3}) - - - var actualSel = svg.append('g').st({fill: highlightColor}) - - actualSel.append('path') - .at({d: 'M 0 0 V 12.5', stroke: highlightColor, strokeWidth: 3}) - - var actualPct = actualSel.append('text') - .translate(30, 1).at({textAnchor: 'middle'}).st({fontWeight: 300}) - - var actualScore = actualSel.append('text') - .translate(50, 1).at({textAnchor: 'middle'}).st({fontWeight: 300}) - - return () => { - var pcts = metrics.map(d => active.percents[d.key] || 0) - - topText.text(d => (d.str + ' Target: ').replace('s ', ' ') + pctFmt(d.target)) - - topTick.translate(d => xScale(d.target), 0) - actualSel.translate((d, i) => xScale(pcts[i]), 0) - - actualPct.text((d, i) => 'Actual: ' + pctFmt(pcts[i])) - actualScore.text((d, i) => 'Difference: ' + pctFmt(Math.abs(d.target - pcts[i]))) - } -} - - -function scoreActive(active){ - var numActive = d3.sum(active) - return metrics.map(m => { - var v = d3.sum(active, (d, i) => active[i] && shapes[i][m.field] == m.key) - return Math.abs(m.target - v/numActive); - // return m.scoreScale(v/numActive || 0) - }) -} - -var measures = [ - { - str: 'Utilitarian', - display_text: 'Minimize Mean Difference', - ranking_display_text: 'Mean Difference', - fn: s => d3.mean(s)*100, - ppFn: s => d3.format('.2%')(d3.mean(s)), - format: s => 'mean(' + s.map(d => d + '%').join(', ') + ')' - }, - { - str: 'Egalitarian', - display_text: 'Minimize Max Difference', - ranking_display_text: 'Max Difference', - fn: s => { - var srt = _.sortBy(s).map(d => Math.round(d*100)).reverse() - - return srt[0]*100000000 + srt[1]*10000 + srt[2] - }, - ppFn: s => { - var srt = _.sortBy(s).map(d => Math.round(d*100)).reverse() - - return srt[0] + '%' - }, - format: s => 'max(' + s.map(d => d + '%').join(', ') + ')' - } -] -measures2 = measures.map(d => ({...d})) - - -var randomActive = d3.range(10000).map(d => { - var active = shapes.map(d => random() < .3) - - if (d == 0) active = '111111111111101011100'.split('').map(d => +d) - - active.score = scoreActive(active) - measures.forEach(d => { - active[d.str] = d.fn(active.score) - }) - - return active -}) - -function addMetricBestButton(metricIndex, {active, sel, render}){ - var measureSel = sel - .append('div').st({textAlign: 'center', marginTop: 20, marginBottom: -20}) - .append('div.measure').st({width: 200, lineHeight: '1.8em', display: 'inline-block'}) - .html('Show Best') - .on('click', d => { - - // console.log(active) - var pcts = metrics.map(d => active.percents[d.key] || 0) - if (pcts[metricIndex] == metrics[metricIndex].target) return - - var nextActive = _.minBy(randomActive, a => a.score[metricIndex]) - active.forEach((d, i) => active[i] = nextActive[i]) - - measureSel.classed('active', e => e == d) - render() - }) -} - -function addMeasures(measures, {active, sel, render}){ - var measureSel = sel.selectAll('div.measure-container') - - measureSel - .append('div.measure') - .st({width: 200, lineHeight: '1.8em', display: 'inline-block', textAlign: 'center', }) - .html((d, i) => i ? 'Show the set where the highest difference is the smallest' : 'Show the set with
    lowest mean difference') - .html('Show Best') - .on('click', d => { - - var nextActive = _.minBy(randomActive, a => a[d.str]) - active.forEach((d, i) => active[i] = nextActive[i]) - - measureSel.classed('active', e => e == d) - render() - }) - - -} - -function addTotalMetrics(metrics, measures, {active, sel, render}){ - var metricSel = sel.classed('bot', 1).st({textAlign: 'center'}) - .appendMany('div.measure-container', measures) - .append('div', measures) - .st({textAlign: 'center', display: 'inline-block'}) - - - var headlineSel = metricSel.append('div') - var calcSel = metricSel.append('div')//.st({color: highlightColor}) - - return () => { - - measures.forEach(d => { - d.scores = scoreActive(active) - - d.score = Math.round(d.fn(d.scores)*100)/100 - if (d.ppFn) d.score = d.ppFn(d.scores) - }) - - headlineSel.st({fontWeight: 600}) - .text(d => d.ranking_display_text + ': ' + d.score) - - calcSel.text(d => { - var roundedScores = d.scores.map(s => Math.round(s * 100)) - - return d.format(roundedScores) - }) - } -} - - -window.shapeRandom = new Math.seedrandom('aaf') -var defaultActive = shapes.map(d => shapeRandom() < .4) -drawShape('all-shapes') - -drawShape('pick-green', ({active, topSel, sel, render}) => { - active.forEach((d, i) => active[i] = defaultActive[i]) - addMetricBestButton(0, {active, sel, render}) - return addMetrics(metrics.filter(d => d.key == 'green'), {active, topSel}) -}) - -drawShape('pick-triangle', ({active, topSel, sel, render}) => { - active.forEach((d, i) => active[i] = defaultActive[i]) - addMetricBestButton(1, {active, sel, render}) - return addMetrics(metrics.filter(d => d.key == 'triangle'), {active, topSel}) -}) - -drawShape('pick-metric', grid => { - grid.active.forEach((d, i) => grid.active[i] = defaultActive[i]) - - var metricRender = addMetrics(metrics, grid) - var totalMetricRender = addTotalMetrics(metrics, measures, grid) - addMeasures(measures, grid) - - return () => { - metricRender() - totalMetricRender() - } -}) - - -function drawShape(id, initFn=d => e => e){ - var active = shapes.map(d => true) - - var sel = d3.select('#' + id).html('') - - var s = 110 - - var topSel = sel.append('div.top') - var shapeSel = sel.appendMany('div.shape', _.sortBy(shapes, d => d.displayIndex)) - .st({width: s, height: s}) - .on('click', d => { - active[d.i] = !active[d.i] - render() - }) - - shapeSel.append('svg') - .at({width: s, height: s}) - .append('g').translate([s/2, s/2]) - .each(function(d){ - if (d.shape == 'square' || true){ - var rs = Math.round(d.sizeVal*s/3.5) - var shapeSel = d3.select(this).append('rect') - .at({x: -rs, y: -rs, width: rs*2, height: rs*2}) - } else if (d.shape == 'circle'){ - var shapeSel = d3.select(this).append('circle') - .at({r: d.sizeVal*s/3}) - } else if (d.shape == 'triangle'){ - var rs = Math.round(d.sizeVal*s/2.9) - var shapeSel = d3.select(this).append('path') - .translate(rs*Math.pow(3,1/2)/10, 1) - .at({d: [ - 'M', 0, -rs, - 'L', -rs*Math.pow(3,1/2)/2, rs/2, - 'L', +rs*Math.pow(3,1/2)/2, rs/2, - 'Z' - ].join(' ')}) - } - - if (d.shape == 'triangle'){ - d3.select(this).append('circle') - .at({r: 4, fill: '#fff', stroke: '#000', strokeWidth: 1}) - } - - shapeSel.at({fill: d.fill, stroke: d.dFill, strokeWidth: 2}) - }) - - var customRender = initFn({active, topSel, sel, render}) - - shapes.render = render - function render(){ - shapeSel.classed('active', d => active[d.i]) - // console.log(active.map(d => +d).join('')) - - active.percents = {} - active.shapes = shapes.filter(d => active[d.i]) - - d3.nestBy(active.shapes, 
d => d.color).forEach(d => { - active.percents[d.key] = d.length/active.shapes.length - }) - d3.nestBy(active.shapes, d => d.size).forEach(d => { - active.percents[d.key] = d.length/active.shapes.length - }) - d3.nestBy(active.shapes, d => d.shape).forEach(d => { - active.percents[d.key] = d.length/active.shapes.length - }) - - - customRender() - } - render() -} \ No newline at end of file diff --git a/spaces/merve/data-leak/source/dataset-worldviews/script.js b/spaces/merve/data-leak/source/dataset-worldviews/script.js deleted file mode 100644 index 3ebba088d65f389af1b446a9ea90fcde674d5fdf..0000000000000000000000000000000000000000 --- a/spaces/merve/data-leak/source/dataset-worldviews/script.js +++ /dev/null @@ -1,588 +0,0 @@ - -console.clear(); - -var ttSel = d3.select("body").selectAppend("div.tooltip.tooltip-hidden"); -// For result tables -const columns = ["object", "n", "n correct", "accuracy"]; -const rowHeight = 50; -const rowWidth = 100; -const buffer = 2; - -const classifierBlobWidth = 50; -const classifierBlobHeight = 460; - -function drawShapesWithData(classifier) { - var divHeight = classifier.class == "show-shapes" ? 250 : 490; - - var c = d3.conventions({ - sel: d3.select("." + classifier.class).html(""), - width: 1300, - height: divHeight, - layers: "ds", - }); - - function runClassifier() { - classifier.isClassified = true; - var duration = 3000; - classifierSel.classed("is-classified", true); - graphResultsGroup.classed("is-classified", true); - - drawResults(); - buttonSel.text("Reset"); - - var minX = d3.min(shapeParams, (d) => d.endX - 50); - var timer = d3.timer((ms) => { - if (!classifier.isClassified) { - timer.stop(); - shapeSel.classed("is-classified", false); - return; - } - - var t = d3.easeCubicInOut(ms / duration); - t = d3.clamp(0, t, 1); - - shapeParams.forEach((d, i) => { - d.x = d.startX + (d.endX - d.startX) * t; - d.y = d.startY + (d.endY - d.startY) * t; - d.isClassified = d.x > minX; - }); - - shapeSel - .translate((d) => [d.x, d.y]) - .classed("is-classified", (d) => d.isClassified); - - if (t == 1) { - timer.stop(); - } - }); - } - - function resetClassifier() { - shapeSel.translate((d) => [d.startX, d.startY]); - shapeSel.classed("is-classified", false); - classifier.isClassified = false; - shapeSel - .transition("position") - .duration(0) - .translate((d) => [d.startX, d.startY]); - classifierSel.classed("is-classified", false); - graphResultsGroup.classed("is-classified", false); - if (classifier.class != "show-shapes") { - classifierBlobSel.attr("opacity", 100); - } - - drawResults(); - buttonSel.text("Run Classifier"); - } - - // Add run/reset button - var buttonSel = d3 - .select("." 
+ classifier.class + "-button") - .html("") - .append("button#run") - .at({ - type: "button", - class: "classifier-button", - }) - .text("Run Classifier") - .on("click", () => { - // if already classified, reset - if (classifier.isClassified) { - // Resetting - resetClassifier(); - } else { - runClassifier(); - } - }); - - // Backgrounds for different classifications - var classifierSel = c.svg - .append("g") - .at({ - class: "classifier", - }) - .translate([465, 20]); - - classifierSel - .append("path.classifier-bg-shaded") - .at({ - d: classifierBgPathTop, - // fill: "#ccc", - // stroke: "#000", - }) - .translate([-50, 0]); - - classifierSel - .append("text.classifier-bg-text") - .at({ - fill: "#000", - textAnchor: "middle", - dominantBaseline: "central", - class: "monospace", - }) - .text("shaded") - .translate([160, 15]); - - classifierSel - .append("path.classifier-bg-unshaded") - .at({ - d: classifierBgPathBottom, - }) - .translate([-50, 160]); - - classifierSel - .append("text.classifier-bg-text") - .at({ - fill: "#000", - textAnchor: "middle", - dominantBaseline: "central", - class: "monospace", - }) - .text("unshaded") - .translate([160, 175]); - - // Add the shapes themselves - var shapeSel = c.svg - .appendMany("path.shape", shapeParams) - .at({ - d: (d) => d.path, - class: (d) => "gt-" + d.gt + " " + d.correctness, - }) - .translate(function (d) { - if (classifier.class == "show-shapes") { - return [d.initialX + 35, d.initialY-20]; - } else { - return [d.startX, d.startY]; - } - }) - .call(d3.attachTooltip) - .on("mouseover", (d) => { - ttSel.html(""); - if (classifier.usingLabel != "none") { - ttSel - .append("div") - .html( - `labeled: ${toPropertyString( - d[classifier.usingLabel], - classifier.isRounding - ).slice(0, -1)}` - ); - } - var gtSel = ttSel - .append("div") - .html( - `ground truth: ${d.gt}` - ); - if (classifier.isClassified) { - ttSel - .append("div.labeled-row") - .html( - `classified as: ${d.label}` - ); - - ttSel - .append("div.correct-row") - .classed("is-correct-tooltip", d.correctness == "correct") - .html(`
    ${d.correctness}ly classified `); - } - ttSel.classed("tt-text", true); - }); - - // If we're just showing shapes, ignore everything else - if (classifier.class == "show-shapes") return; - - // Add "classifier" line - var classifierBlobSel = c.svg - .append("g") - .at({ - class: "classifier-blob", - strokeWidth: 0, - }) - .translate([378, 20]); - - classifierBlobSel - .append("line.classifier-blob") - .at({ - class: "line", - x1: 27, - x2: 27, - y1: 0, - y2: 464, - stroke: "#000", - strokeWidth: 1, - }) - .style("stroke-dasharray", "5, 5"); - - classifierBlobSel - .append("text.classifier-blob-text") - .at({ - class: "classifier-blob-text monospace", - textAnchor: "middle", - dominantBaseline: "central", - }) - .text("is_shaded classifier") - .attr("transform", "translate(30,480) rotate(0)"); - - if (classifier.class == "show-shapes") { - classifierBlobSel.classed("is-classified", true); - } - - // Draw the results table with accuracies - // This will be hidden before classifier is run. - var graphResultsGroup = c.svg - .append("g") - .attr("class", "results") - .translate([-20, 19]); - - function drawResults() { - // Write text summary - summarySel = d3 - .select("." + classifier.class + "-summary") - .html(summaries[classifier.class]) - .translate([0, 20]); - summarySel.classed("summary-text", true); - summarySel.classed("is-classified", classifier.isClassified); - - if (!classifier.isClassified) { - c.layers[0].html(""); - classifier.wasClassified = false; - return; - } - - // Access results, which are calculated in shapes.js. - // If there are none, draw nothing. - results = allResults[classifier.class]; - if (!results) return; - - // Figure out which shapes should be highlighted on mouseover - // This depends on whether we're "rounding" edge case examples. - function isMatch(rowName, labelName, isRounding) { - // Not filtering at all - if (rowName == "shape") { - return true; - } - if (isRounding == true) { - // No "other" category - return labelName.includes(toOriginalString(rowName)) - ? true - : false; - } else { - // There is an "other" category, prefixed by "rt_" - if (labelName == toOriginalString(rowName)) { - return true; - } else if ( - labelName.includes("rt_") && - rowName == "other shapes" - ) { - return true; - } - return false; - } - } - - // Color the last row of each table - function getColor(d, i) { - if (i != 3) { - // not last index - return "#e6e6e6"; - } else { - var scaleRowValue = d3 - .scaleLinear() - .domain([0.3, 1.0]) - .range([0, 1]); - return d3.interpolateRdYlGn(scaleRowValue(d)); - } - } - - // Adjust text color for visibility - function getTextColor(d, i) { - if (i != 3) { - // not last index - return "#000000"; - } else { - var bgColor = getColor(d, i); - if (d < 0.3) { - // Alternative: use a brighter color? - // return d3.rgb(bgColor).brighter(-2); - return "#FFCCD8"; - } else { - // Alternative: use a darker color? 
- // return d3.rgb(bgColor).darker(2); - return "#000000"; - } - } - } - - // Draw results table - var tableSel = c.layers[0] - .html("") - .raise() - .st({ width: 400 }) - .append("div") - .translate([0, 10]) - .append("table.results-table.monospace") - .st({ width: 400 }); - - var header = tableSel - .append("thead") - .append("tr") - .appendMany("th", columns) - .text((d) => d); - - var rowSel = tableSel - .appendMany("tr", results) - .at({ - class: "row monospace", - }) - .on("mouseover", (row) => { - if (classifier.class == "default-classifier") { - return; - } - rowSel.classed("active", (d) => d == row); - shapeSel.classed("shape-row-unhighlighted", function (d) { - return !isMatch( - row.object, - d[classifier.usingLabel], - (isRounding = classifier.isRounding) - ); - }); - }) - .on("mouseout", (row) => { - rowSel.classed("active", function (d) { - if (d == row) { - return false; - } - }); - if (classifier.isClassified) { - shapeSel.classed("shape-row-unhighlighted", 0); - } - }); - - rowSel - .appendMany("td", (result) => - columns.map((column) => result[column]) - ) - .text((d) => d) - .st({ - backgroundColor: getColor, - color: getTextColor, - }); - - header.style("opacity", 0); - rowSel.style("opacity", 0); - - // If the classifier has already been run before, draw results right away. - // Otherwise, wait for other animation to run before drawing results. - var initialDelay = classifier.wasClassified ? 0 : 2000; - classifier.wasClassified = true; - - header - .transition() - .delay(initialDelay) - .duration(1000) - .style("opacity", 1); - rowSel - .transition() - .delay(function (d, i) { - return initialDelay + i * 200; - }) - .duration(1000) - .style("opacity", 1); - } - - // Draw the dropdowns for selecting different labels - function drawDropdown() { - if (!classifier.options) return; - - ["rounding", "category"].forEach(function (classifierType) { - if (!classifier.options[classifierType]) return; - var sel = d3 - .select("#" + classifier.class + "-select-" + classifierType) - .html(""); - sel.classed("dropdown", true); - sel.appendMany("option", classifier.options[classifierType]) - .at({ - value: function (d) { - return d.value; - }, - }) - .text((d) => d.label); - sel.on("change", function () { - if (classifierType == "rounding") { - classifier.isRounding = toBool(this.value); - } else { - classifier.usingLabel = this.value; - } - updateResults(); - drawResults(); - }); - }); - } - drawDropdown(); - updateResults(); - drawResults(); - - // For continuity, auto-run the second two classifiers - if ( - classifier.class == "second-classifier" || - classifier.class == "final-classifier" - ) { - runClassifier(); - } -} - -// Draw the "Labels Tell Stories" section -function drawConclusion() { - function drawNewspapers() { - d3.select(".conclusion-newspapers").html(function () { - var imgPath = - "img/newspapers_" + - document.getElementById("conclusion-select-category").value; - return ( - 'Newspapers with headlines about bias and fairness in shape data.' - ); - }); - } - - function drawInterface() { - d3.select(".conclusion-interface").html(function () { - var imgPath = - "img/confusing_" + - document.getElementById("conclusion-select-category").value; - return ( - '
    A shape that is difficult to classify with several checkboxes, none of which describe the shape. Next to the interface is a text box with a single question mark in it.
    ' - ); - }); - } - - function drawConclusionSummary() { - classifierSel = d3 - .select(".conclusion-summary") - .html(summaries["conclusion"]); - classifierSel.classed("summary-text is-classified", true); - } - - function drawDropdown() { - var sel = d3.select("#conclusion-select-category").html(""); - sel.classed("dropdown", true); - sel.appendMany("option", conclusionOptions.category) - .at({ - value: function (d) { - return d.value; - }, - }) - .text((d) => d.label); - // sel.attr('select', 'circles, triangles, and rectangles'); - sel.on("change", function (d) { - makeConclusionUpdates(); - }); - } - - function makeConclusionUpdates() { - updateResults(); - drawNewspapers(); - drawInterface(); - drawConclusionSummary(); - } - drawDropdown(); - makeConclusionUpdates(); -} - -// Handle the parameters everywhere classifiers are drawn -var classifiers = [ - { - // Just the initial display of shapes, not interactive - class: "show-shapes", - colorBy: (d) => d.correctness, - isClassified: false, - isRounding: false, - usingLabel: "none", - }, - { - class: "default-classifier", - colorBy: (d) => d.correctness, - isClassified: false, - isRounding: false, - usingLabel: "none", - }, - { - class: "second-classifier", - colorBy: (d) => d.correctness, - isClassified: false, - isRounding: true, - usingLabel: "shape_name", - options: { - rounding: [ - { label: "with their best guess", value: true }, - { label: 'as "other"', value: false }, - ], - }, - }, - { - class: "final-classifier", - colorBy: (d) => d.correctness, - isClassified: false, - isRounding: true, - usingLabel: "shape_name", - options: { - rounding: [ - { label: "with our best guess", value: true }, - { label: 'as "other"', value: false }, - ], - category: [ - { - label: "circles, triangles, or rectangles", - value: "shape_name", - }, - { label: "pointy shapes or round shapes", value: "pointiness" }, - { label: "small shapes or big shapes", value: "size" }, - { label: "just shapes", value: "none" }, - ], - }, - }, -]; - -// "Labels Tell Stories" dropdown options -var conclusionOptions = { - category: [ - { label: "circles, triangles, and rectangles", value: "shape_name" }, - { label: "pointy shapes and round shapes", value: "pointiness" }, - { label: "small shapes and big shapes", value: "size" }, - ], -}; - -classifiers.forEach(drawShapesWithData); -drawConclusion(); - -// These images are loaded invisibly so they appear seamlessly on dropdown change -const preloadImages = [ - "img/confusing_pointiness.png", - "img/confusing_pointiness.svg", - "img/confusing_shape_name.png", - "img/confusing_shape_name.svg", - "img/confusing_size.png", - "img/confusing_size.svg", - "img/interface_default.png", - "img/interface_default.svg", - "img/interface_shape_name_false.png", - "img/interface_shape_name_false.svg", - "img/interface_shape_name_true.png", - "img/interface_shape_name_true.svg", - "img/newspapers_pointiness.png", - "img/newspapers_pointiness.svg", - "img/newspapers_shape_name.png", - "img/newspapers_shape_name.svg", - "img/newspapers_size.png", - "img/newspapers_size.svg", -]; - -d3.select(".preload-dropdown-img") - .html("") - .appendMany("img", preloadImages) - .at({ src: (d) => d }); diff --git a/spaces/merve/hidden-bias/source/style.css b/spaces/merve/hidden-bias/source/style.css deleted file mode 100644 index ad619bacc7b5b7f61788de06850a80ccc7561b83..0000000000000000000000000000000000000000 --- a/spaces/merve/hidden-bias/source/style.css +++ /dev/null @@ -1,434 +0,0 @@ -/* Copyright 2020 Google LLC. All Rights Reserved. 
- -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - - - - -html{ - background-color: #fff; - font-weight: normal; -} - - -body{ - max-width: 850px; - margin: 0px auto; - font-family: 'Roboto Slab', serif; - font-family: 'Roboto', Helvetica, sans-serif; - font-weight: 300; - line-height: 1.55em; - font-size: 16px; - margin-top: 5px; - margin-bottom: 80px; - color: #3C4043; - font-smoothing: antialiased; -} - -@media (max-width: 760px){ - body{ - padding: 5px; - } -} - -p{ - line-height: 1.55em; - font-size: 16px; - /*line-height: 28px;*/ - color: #3C4043; - letter-spacing: 0.1px; -} - -a{ - color: black; -} - -.header{ - position: relative; - color: black; - font-size: 16px; - height: 24px; - overflow: visible; - font-family: 'Google Sans', sans-serif; - font-weight: 100; - font-size: 20px; - margin: 0px auto; - margin-top: 15px; - padding-left: 20px; -} -.header-left{ - vertical-align: middle; - font-size: 20px; - margin: 0px auto; - width: 300px; -} -.header-left img{ - width: 100px; - opacity: 1; - top: 0px; - position: relative; -} -.header-left a:first-child{ - float: left; -} -.header-left a:last-child{ - position: relative; - top: 8px; - margin-left: 20px; - float: left; -} -.header-left a{ - line-height: 20px; - -webkit-font-smoothing: antialiased; - letter-spacing: 0.1px; - font-size: 20px; - text-transform: uppercase; - font-family: "Google Sans"; - text-align: right; - -webkit-tap-highlight-color: rgba(255,255,255,0); - font-weight: 300; - text-decoration: none; - /*margin: 50px 0 0 50px;*/ - display: inline-block; - color: #00695C !important; -} -.header-left a:hover{ - color: #ff4081 !important; -} - -@media (max-width: 750px){ - .header-right span{ - opacity: 0; - } -} -.header a{ - /*opacity: .5;*/ - text-decoration: none; -} -.header a:hover{ - opacity: 1 -} - - -p{ - max-width: 750px; - margin: 0px auto; - margin-block-start: 1em; - margin-block-end: 1em; -} - -/*TODO mobile padding?*/ - -h3{ - max-width: 750px; - margin: 0px auto; - font-weight: 100; - line-height: 1.3em; -} - -h1,h2,h3,h4,h5{ - font-family: 'Google Sans', sans-serif; - font-weight: 100; - margin-top: 1.5em; - margin-bottom: .5em; -} -h1{ - font-weight: 100; - font-size: 34px; - margin-bottom: .5em; - line-height: 1.3em; - margin-top: 1.4em; - text-align: center; - font-family: "Google Sans"; - /*color: #00695C;*/ -} -h2,h3,h4,h5{ - font-size: 22px; -} - -/*wp classes*/ -img.aligncenter { - display: block; - margin: auto; - max-width: 750px; -} - - - -html{ - overflow-x: hidden; -} - -.full-width{ - width: 100vw; - position: relative; - left: 50%; - right: 50%; - margin-left: -50vw; - margin-right: -50vw; - overflow: hidden; -} - -.full-width img{ - max-width: 100%; - display: block; - margin: 0 auto; -} - -.full-width.px980 img, .full-width.px980 div{ - max-width: 980px; -} -.full-width > div, .full-width > div > div{ - margin: 0px auto; -} -.full-width.px750 img, .full-width.px750 div{ - max-width: 750px; -} - -draft{ - display: none; - 
/*visibility: collapse;*/ -} - - -h1, .post-summary{ - max-width: 750px; - margin: 0px auto; -} -.post-summary{ - font-size: 19px; - margin-bottom: 65px; - line-height: 1.5em; -} - -h1{ - margin-bottom: 40px; - margin-top: 50px; -} - -.post-tags{ - line-height: 1.55em; - font-style: italic; -} - -.thumbnail-caption{ - font-style: italic; -} - - - - - - -/*graph scroll stuff*/ - -#container{ - position: relative; - width: 900px; - margin-left: -25px; -} - -#container h3{ - line-height: 1.3em; -} - - - - - - -.tooltip { - top: -1000px; - position: fixed; - padding: 10px; - background: rgba(255, 255, 255, .90); - border: 1px solid lightgray; - pointer-events: none; - width: 300px; -} -.tooltip-hidden{ - opacity: 0; - transition: all .3s; - transition-delay: .1s; -} - -@media (max-width: 590px){ - div.tooltip{ - bottom: -1px; - width: calc(100%); - left: -1px !important; - right: -1px !important; - top: auto !important; - width: auto !important; - } -} - - - - -.footend{ - margin-left: -9px; - width: 10px; -} - - -.footstart, .footend{ - text-decoration: none; -} - -.footstart:hover, .footend:hover{ - text-decoration: underline; -} - - - - -#recirc{ -} - -#recirc .img{ - outline: 1px solid #ccc; -} - -#recirc .post:hover .img{ - outline: 1px solid #333; -} - -#recirc .title{ - /*color: #00695C;*/ - font-size: 18px; - font-weight: 500; - margin-bottom: -10px; - /*height: 10px !important;*/ - /*opacity: 0;*/ -} - -#recirc .post:hover .title{ - text-decoration: underline !important; -} - -#recirc .post{ - margin-bottom: 30px; -} - - - - - - - - - - - - - -/*Nav Style*/ -#nav-container{ - width: 100vw; - margin-left: calc(50% - 50vw); - display: inline-block; - /*display: none;*/ -} -#navigation { - margin: 0 auto; - max-width: 1260px; - -webkit-font-smoothing: antialiased; - font-family: 'Open Sans', Helvetica, sans-serif; - font-weight: 300; - letter-spacing: 0.1px; - - - color: rgba(0,0,0,.87); - font-size: 14px; - line-height: 20px; - -webkit-font-smoothing: antialiased; - font-family: 'Open Sans', Helvetica, sans-serif; - font-weight: 300; - letter-spacing: 0.1px; - display: flex; - flex-flow: row wrap; - align-items: stretch; - padding: 8px; - margin: 0 auto; - max-width: 1260px; -} -.mdl-grid { - display: -webkit-flex; - display: -ms-flexbox; - display: flex; - -webkit-flex-flow: row wrap; - -ms-flex-flow: row wrap; - flex-flow: row wrap; - margin: 0 auto; - -webkit-align-items: stretch; - -ms-flex-align: stretch; - align-items: stretch; -} - -.mdl-cell { - box-sizing: border-box; -} - -.nav-links { - font-size: 20px; - text-transform: uppercase; - font-family: "Google Sans"; - color: #4a4a4a; - text-align: right; -} - -.nav-logo-small { - width: 110px; - margin: 42px 0 0 0; -} -.nav-links .selected { - color: #00695C !important; -} -/*.nav-links a:visited { - color: #4a4a4a; -} -a:visited { - color: #7B1FA2; -} -*/ -.nav-links a { - color: inherit; - text-decoration: none; - margin: 50px 0 0 50px; - display: inline-block; -} - - -@media screen and (max-width: 1035px){ - .nav-links { - font-size: 16px; - } -} - -.nav-links{ - line-height: 20px; - -webkit-font-smoothing: antialiased; - font-weight: 300; - letter-spacing: 0.1px; - box-sizing: border-box; - margin: 8px; - width: calc(66.6666666667% - 16px); - font-size: 20px; - text-transform: uppercase; - font-family: "Google Sans"; - color: #4a4a4a; - text-align: right; -} - diff --git a/spaces/mithril-security/blind_chat/.svelte-kit/types/src/routes/r/[id]/message/[messageId]/prompt/$types.d.ts 
b/spaces/mithril-security/blind_chat/.svelte-kit/types/src/routes/r/[id]/message/[messageId]/prompt/$types.d.ts deleted file mode 100644 index 984e7ed4449e9d93e1823b3ee3e4229eac3e84bd..0000000000000000000000000000000000000000 --- a/spaces/mithril-security/blind_chat/.svelte-kit/types/src/routes/r/[id]/message/[messageId]/prompt/$types.d.ts +++ /dev/null @@ -1,9 +0,0 @@ -import type * as Kit from '@sveltejs/kit'; - -type Expand = T extends infer O ? { [K in keyof O]: O[K] } : never; -type RouteParams = { id: string; messageId: string } -type RouteId = '/r/[id]/message/[messageId]/prompt'; - -export type EntryGenerator = () => Promise> | Array; -export type RequestHandler = Kit.RequestHandler; -export type RequestEvent = Kit.RequestEvent; \ No newline at end of file diff --git a/spaces/mmlab-ntu/relate-anything-model/segment_anything/modeling/prompt_encoder.py b/spaces/mmlab-ntu/relate-anything-model/segment_anything/modeling/prompt_encoder.py deleted file mode 100644 index c3143f4f8e02ddd7ca8587b40ff5d47c3a6b7ef3..0000000000000000000000000000000000000000 --- a/spaces/mmlab-ntu/relate-anything-model/segment_anything/modeling/prompt_encoder.py +++ /dev/null @@ -1,214 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch -from torch import nn - -from typing import Any, Optional, Tuple, Type - -from .common import LayerNorm2d - - -class PromptEncoder(nn.Module): - def __init__( - self, - embed_dim: int, - image_embedding_size: Tuple[int, int], - input_image_size: Tuple[int, int], - mask_in_chans: int, - activation: Type[nn.Module] = nn.GELU, - ) -> None: - """ - Encodes prompts for input to SAM's mask decoder. - - Arguments: - embed_dim (int): The prompts' embedding dimension - image_embedding_size (tuple(int, int)): The spatial size of the - image embedding, as (H, W). - input_image_size (int): The padded size of the image as input - to the image encoder, as (H, W). - mask_in_chans (int): The number of hidden channels used for - encoding input masks. - activation (nn.Module): The activation to use when encoding - input masks. - """ - super().__init__() - self.embed_dim = embed_dim - self.input_image_size = input_image_size - self.image_embedding_size = image_embedding_size - self.pe_layer = PositionEmbeddingRandom(embed_dim // 2) - - self.num_point_embeddings: int = 4 # pos/neg point + 2 box corners - point_embeddings = [nn.Embedding(1, embed_dim) for i in range(self.num_point_embeddings)] - self.point_embeddings = nn.ModuleList(point_embeddings) - self.not_a_point_embed = nn.Embedding(1, embed_dim) - - self.mask_input_size = (4 * image_embedding_size[0], 4 * image_embedding_size[1]) - self.mask_downscaling = nn.Sequential( - nn.Conv2d(1, mask_in_chans // 4, kernel_size=2, stride=2), - LayerNorm2d(mask_in_chans // 4), - activation(), - nn.Conv2d(mask_in_chans // 4, mask_in_chans, kernel_size=2, stride=2), - LayerNorm2d(mask_in_chans), - activation(), - nn.Conv2d(mask_in_chans, embed_dim, kernel_size=1), - ) - self.no_mask_embed = nn.Embedding(1, embed_dim) - - def get_dense_pe(self) -> torch.Tensor: - """ - Returns the positional encoding used to encode point prompts, - applied to a dense set of points the shape of the image encoding. 
- - Returns: - torch.Tensor: Positional encoding with shape - 1x(embed_dim)x(embedding_h)x(embedding_w) - """ - return self.pe_layer(self.image_embedding_size).unsqueeze(0) - - def _embed_points( - self, - points: torch.Tensor, - labels: torch.Tensor, - pad: bool, - ) -> torch.Tensor: - """Embeds point prompts.""" - points = points + 0.5 # Shift to center of pixel - if pad: - padding_point = torch.zeros((points.shape[0], 1, 2), device=points.device) - padding_label = -torch.ones((labels.shape[0], 1), device=labels.device) - points = torch.cat([points, padding_point], dim=1) - labels = torch.cat([labels, padding_label], dim=1) - point_embedding = self.pe_layer.forward_with_coords(points, self.input_image_size) - point_embedding[labels == -1] = 0.0 - point_embedding[labels == -1] += self.not_a_point_embed.weight - point_embedding[labels == 0] += self.point_embeddings[0].weight - point_embedding[labels == 1] += self.point_embeddings[1].weight - return point_embedding - - def _embed_boxes(self, boxes: torch.Tensor) -> torch.Tensor: - """Embeds box prompts.""" - boxes = boxes + 0.5 # Shift to center of pixel - coords = boxes.reshape(-1, 2, 2) - corner_embedding = self.pe_layer.forward_with_coords(coords, self.input_image_size) - corner_embedding[:, 0, :] += self.point_embeddings[2].weight - corner_embedding[:, 1, :] += self.point_embeddings[3].weight - return corner_embedding - - def _embed_masks(self, masks: torch.Tensor) -> torch.Tensor: - """Embeds mask inputs.""" - mask_embedding = self.mask_downscaling(masks) - return mask_embedding - - def _get_batch_size( - self, - points: Optional[Tuple[torch.Tensor, torch.Tensor]], - boxes: Optional[torch.Tensor], - masks: Optional[torch.Tensor], - ) -> int: - """ - Gets the batch size of the output given the batch size of the input prompts. - """ - if points is not None: - return points[0].shape[0] - elif boxes is not None: - return boxes.shape[0] - elif masks is not None: - return masks.shape[0] - else: - return 1 - - def _get_device(self) -> torch.device: - return self.point_embeddings[0].weight.device - - def forward( - self, - points: Optional[Tuple[torch.Tensor, torch.Tensor]], - boxes: Optional[torch.Tensor], - masks: Optional[torch.Tensor], - ) -> Tuple[torch.Tensor, torch.Tensor]: - """ - Embeds different types of prompts, returning both sparse and dense - embeddings. - - Arguments: - points (tuple(torch.Tensor, torch.Tensor) or none): point coordinates - and labels to embed. - boxes (torch.Tensor or none): boxes to embed - masks (torch.Tensor or none): masks to embed - - Returns: - torch.Tensor: sparse embeddings for the points and boxes, with shape - BxNx(embed_dim), where N is determined by the number of input points - and boxes. 
- torch.Tensor: dense embeddings for the masks, in the shape - Bx(embed_dim)x(embed_H)x(embed_W) - """ - bs = self._get_batch_size(points, boxes, masks) - sparse_embeddings = torch.empty((bs, 0, self.embed_dim), device=self._get_device()) - if points is not None: - coords, labels = points - point_embeddings = self._embed_points(coords, labels, pad=(boxes is None)) - sparse_embeddings = torch.cat([sparse_embeddings, point_embeddings], dim=1) - if boxes is not None: - box_embeddings = self._embed_boxes(boxes) - sparse_embeddings = torch.cat([sparse_embeddings, box_embeddings], dim=1) - - if masks is not None: - dense_embeddings = self._embed_masks(masks) - else: - dense_embeddings = self.no_mask_embed.weight.reshape(1, -1, 1, 1).expand( - bs, -1, self.image_embedding_size[0], self.image_embedding_size[1] - ) - - return sparse_embeddings, dense_embeddings - - -class PositionEmbeddingRandom(nn.Module): - """ - Positional encoding using random spatial frequencies. - """ - - def __init__(self, num_pos_feats: int = 64, scale: Optional[float] = None) -> None: - super().__init__() - if scale is None or scale <= 0.0: - scale = 1.0 - self.register_buffer( - "positional_encoding_gaussian_matrix", - scale * torch.randn((2, num_pos_feats)), - ) - - def _pe_encoding(self, coords: torch.Tensor) -> torch.Tensor: - """Positionally encode points that are normalized to [0,1].""" - # assuming coords are in [0, 1]^2 square and have d_1 x ... x d_n x 2 shape - coords = 2 * coords - 1 - coords = coords @ self.positional_encoding_gaussian_matrix - coords = 2 * np.pi * coords - # outputs d_1 x ... x d_n x C shape - return torch.cat([torch.sin(coords), torch.cos(coords)], dim=-1) - - def forward(self, size: Tuple[int, int]) -> torch.Tensor: - """Generate positional encoding for a grid of the specified size.""" - h, w = size - device: Any = self.positional_encoding_gaussian_matrix.device - grid = torch.ones((h, w), device=device, dtype=torch.float32) - y_embed = grid.cumsum(dim=0) - 0.5 - x_embed = grid.cumsum(dim=1) - 0.5 - y_embed = y_embed / h - x_embed = x_embed / w - - pe = self._pe_encoding(torch.stack([x_embed, y_embed], dim=-1)) - return pe.permute(2, 0, 1) # C x H x W - - def forward_with_coords( - self, coords_input: torch.Tensor, image_size: Tuple[int, int] - ) -> torch.Tensor: - """Positionally encode points that are not normalized to [0,1].""" - coords = coords_input.clone() - coords[:, :, 0] = coords[:, :, 0] / image_size[1] - coords[:, :, 1] = coords[:, :, 1] / image_size[0] - return self._pe_encoding(coords.to(torch.float)) # B x N x C diff --git a/spaces/mms-meta/MMS/vits/models.py b/spaces/mms-meta/MMS/vits/models.py deleted file mode 100644 index f5acdeb2bedd47897348407c0ae55c9a160da881..0000000000000000000000000000000000000000 --- a/spaces/mms-meta/MMS/vits/models.py +++ /dev/null @@ -1,534 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. 
- self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, 
in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, 
resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - 
norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = 
torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." 
- g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/mshukor/UnIVAL/models/unival/encoders/resnext3d.py b/spaces/mshukor/UnIVAL/models/unival/encoders/resnext3d.py deleted file mode 100644 index a4d47542626c6368a6acd1a229805ef1514601a6..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/models/unival/encoders/resnext3d.py +++ /dev/null @@ -1,187 +0,0 @@ -# https://github.com/kenshohara/video-classification-3d-cnn-pytorch -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.autograd import Variable -import math -from functools import partial - -__all__ = ['ResNeXt', 'resnet50', 'resnet101'] - - -def conv3x3x3(in_planes, out_planes, stride=1): - # 3x3x3 convolution with padding - return nn.Conv3d(in_planes, out_planes, kernel_size=3, - stride=stride, padding=1, bias=False) - - -def downsample_basic_block(x, planes, stride): - out = F.avg_pool3d(x, kernel_size=1, stride=stride) - zero_pads = torch.Tensor(out.size(0), planes - out.size(1), - out.size(2), out.size(3), - out.size(4)).zero_() - if isinstance(out.data, torch.cuda.FloatTensor): - zero_pads = zero_pads.cuda() - - out = Variable(torch.cat([out.data, zero_pads], dim=1)) - - return out - - -class ResNeXtBottleneck(nn.Module): - expansion = 2 - - def __init__(self, inplanes, planes, cardinality, stride=1, downsample=None, norm_layer=nn.BatchNorm3d): - super(ResNeXtBottleneck, self).__init__() - mid_planes = cardinality * int(planes / 32) - self.conv1 = nn.Conv3d(inplanes, mid_planes, kernel_size=1, bias=False) - self.bn1 = norm_layer(mid_planes) - self.conv2 = nn.Conv3d(mid_planes, mid_planes, kernel_size=3, stride=stride, - padding=1, groups=cardinality, bias=False) - self.bn2 = norm_layer(mid_planes) - self.conv3 = nn.Conv3d(mid_planes, planes * self.expansion, kernel_size=1, bias=False) - self.bn3 = norm_layer(planes * self.expansion) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class ResNeXt3D(nn.Module): - - def __init__(self, block, layers, sample_size=16, sample_duration=112, shortcut_type='B', cardinality=32, num_classes=400, last_fc=True, norm_layer=None): - self.last_fc = last_fc - - self.inplanes = 64 - super(ResNeXt3D, self).__init__() - self.conv1 = nn.Conv3d(3, 64, kernel_size=7, stride=(1, 2, 2), - padding=(3, 3, 3), bias=False) - - if norm_layer is None: - norm_layer = nn.BatchNorm3d - - print("use bn:", norm_layer) - self.bn1 = norm_layer(64) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool3d(kernel_size=(3, 3, 3), stride=2, padding=1) - self.layer1 = self._make_layer(block, 128, layers[0], shortcut_type, cardinality, norm_layer=norm_layer) - self.layer2 = self._make_layer(block, 256, layers[1], shortcut_type, cardinality, stride=2, norm_layer=norm_layer) - self.layer3 = self._make_layer(block, 512, layers[2], shortcut_type, cardinality, stride=2, norm_layer=norm_layer) - 
if len(layers) > 3: - self.layer4 = self._make_layer(block, 1024, layers[3], shortcut_type, cardinality, stride=2, norm_layer=norm_layer) - self.all_layers = True - else: - self.all_layers = False - last_duration = math.ceil(sample_duration / 16) - last_size = math.ceil(sample_size / 32) - self.avgpool = nn.AvgPool3d((last_duration, last_size, last_size), stride=1) - # self.fc = nn.Linear(cardinality * 32 * block.expansion, num_classes) - - for m in self.modules(): - if isinstance(m, nn.Conv3d): - n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels - m.weight.data.normal_(0, math.sqrt(2. / n)) - elif isinstance(m, norm_layer): - m.weight.data.fill_(1) - m.bias.data.zero_() - - def _make_layer(self, block, planes, blocks, shortcut_type, cardinality, stride=1, norm_layer=nn.BatchNorm3d): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - if shortcut_type == 'A': - downsample = partial(downsample_basic_block, - planes=planes * block.expansion, - stride=stride) - else: - downsample = nn.Sequential( - nn.Conv3d(self.inplanes, planes * block.expansion, - kernel_size=1, stride=stride, bias=False), - norm_layer(planes * block.expansion) - ) - - layers = [] - layers.append(block(self.inplanes, planes, cardinality, stride, downsample, norm_layer=norm_layer)) - self.inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append(block(self.inplanes, planes, cardinality, norm_layer=norm_layer)) - - return nn.Sequential(*layers) - - def forward(self, x): - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - if self.all_layers: - x = self.layer4(x) - - # x = self.avgpool(x) - - # x = x.view(x.size(0), -1) - # if self.last_fc: - # x = self.fc(x) - - return x, x - -def get_fine_tuning_parameters(model, ft_begin_index): - if ft_begin_index == 0: - return model.parameters() - - ft_module_names = [] - for i in range(ft_begin_index, 5): - ft_module_names.append('layer{}'.format(ft_begin_index)) - ft_module_names.append('fc') - - parameters = [] - for k, v in model.named_parameters(): - for ft_module in ft_module_names: - if ft_module in k: - parameters.append({'params': v}) - break - else: - parameters.append({'params': v, 'lr': 0.0}) - - return parameters - -def resnet50(**kwargs): - """Constructs a ResNet-50 model. - """ - model = ResNeXt3D(ResNeXtBottleneck, [3, 4, 6, 3], **kwargs) - return model - -def resnet101(**kwargs): - """Constructs a ResNet-101 model. - """ - model = ResNeXt3D(ResNeXtBottleneck, [3, 4, 23, 3], **kwargs) - return model - -def resnet152(**kwargs): - """Constructs a ResNet-101 model. 
- """ - model = ResNeXt3D(ResNeXtBottleneck, [3, 8, 36, 3], **kwargs) - return model diff --git a/spaces/nateraw/deepafx-st/deepafx_st/metrics.py b/spaces/nateraw/deepafx-st/deepafx_st/metrics.py deleted file mode 100644 index ca5ea20bcbb9c0f571b18c6d6e4d44e57acc7d14..0000000000000000000000000000000000000000 --- a/spaces/nateraw/deepafx-st/deepafx_st/metrics.py +++ /dev/null @@ -1,157 +0,0 @@ -import torch -import auraloss -import resampy -import torchaudio -from pesq import pesq -import pyloudnorm as pyln - - -def crest_factor(x): - """Compute the crest factor of waveform.""" - - peak, _ = x.abs().max(dim=-1) - rms = torch.sqrt((x ** 2).mean(dim=-1)) - - return 20 * torch.log(peak / rms.clamp(1e-8)) - - -def rms_energy(x): - - rms = torch.sqrt((x ** 2).mean(dim=-1)) - - return 20 * torch.log(rms.clamp(1e-8)) - - -def spectral_centroid(x): - """Compute the crest factor of waveform. - - See: https://gist.github.com/endolith/359724 - - """ - - spectrum = torch.fft.rfft(x).abs() - normalized_spectrum = spectrum / spectrum.sum() - normalized_frequencies = torch.linspace(0, 1, spectrum.shape[-1]) - spectral_centroid = torch.sum(normalized_frequencies * normalized_spectrum) - - return spectral_centroid - - -def loudness(x, sample_rate): - """Compute the loudness in dB LUFS of waveform.""" - meter = pyln.Meter(sample_rate) - - # add stereo dim if needed - if x.shape[0] < 2: - x = x.repeat(2, 1) - - return torch.tensor(meter.integrated_loudness(x.permute(1, 0).numpy())) - - -class MelSpectralDistance(torch.nn.Module): - def __init__(self, sample_rate, length=65536): - super().__init__() - self.error = auraloss.freq.MelSTFTLoss( - sample_rate, - fft_size=length, - hop_size=length, - win_length=length, - w_sc=0, - w_log_mag=1, - w_lin_mag=1, - n_mels=128, - scale_invariance=False, - ) - - # I think scale invariance may not work well, - # since aspects of the phase may be considered? 
- - def forward(self, input, target): - return self.error(input, target) - - -class PESQ(torch.nn.Module): - def __init__(self, sample_rate): - super().__init__() - self.sample_rate = sample_rate - - def forward(self, input, target): - if self.sample_rate != 16000: - target = resampy.resample( - target.view(-1).numpy(), - self.sample_rate, - 16000, - ) - input = resampy.resample( - input.view(-1).numpy(), - self.sample_rate, - 16000, - ) - - return pesq( - 16000, - target, - input, - "wb", - ) - - -class CrestFactorError(torch.nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input, target): - return torch.nn.functional.l1_loss( - crest_factor(input), - crest_factor(target), - ).item() - - -class RMSEnergyError(torch.nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input, target): - return torch.nn.functional.l1_loss( - rms_energy(input), - rms_energy(target), - ).item() - - -class SpectralCentroidError(torch.nn.Module): - def __init__(self, sample_rate, n_fft=2048, hop_length=512): - super().__init__() - - self.spectral_centroid = torchaudio.transforms.SpectralCentroid( - sample_rate, - n_fft=n_fft, - hop_length=hop_length, - ) - - def forward(self, input, target): - return torch.nn.functional.l1_loss( - self.spectral_centroid(input + 1e-16).mean(), - self.spectral_centroid(target + 1e-16).mean(), - ).item() - - -class LoudnessError(torch.nn.Module): - def __init__(self, sample_rate: int, peak_normalize: bool = False): - super().__init__() - self.sample_rate = sample_rate - self.peak_normalize = peak_normalize - - def forward(self, input, target): - - if self.peak_normalize: - # peak normalize - x = input / input.abs().max() - y = target / target.abs().max() - else: - x = input - y = target - - return torch.nn.functional.l1_loss( - loudness(x.view(1, -1), self.sample_rate), - loudness(y.view(1, -1), self.sample_rate), - ).item() diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ddr Digital Picture Recovery Crack 5.zip [2021].md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ddr Digital Picture Recovery Crack 5.zip [2021].md deleted file mode 100644 index 0d5c2bef8c199ce76d025f83c3058234eb66838f..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ddr Digital Picture Recovery Crack 5.zip [2021].md +++ /dev/null @@ -1,28 +0,0 @@ -
    -

    How to Recover Deleted Photos with DDR Digital Picture Recovery Software

    -

    If you have accidentally deleted or lost your precious photos from your digital camera, memory card, USB drive or hard disk, you may be wondering how to get them back. One of the solutions you can try is DDR Digital Picture Recovery software, a program designed to recover photos from various storage devices. However, you may also be tempted to download a cracked version of the software from the internet, hoping to save some money and time. But is it really worth it?

    -

    In this article, we will explain what DDR Digital Picture Recovery software is, how it works, and why you should avoid using a cracked version of it. We will also show you how to download and install the software safely and legally, and how to use it to recover your deleted photos.

    -




    -

    What is DDR Digital Picture Recovery Software?

    -

    DDR Digital Picture Recovery software is a product of Data Recovery Software, a company that specializes in developing data recovery tools for various devices and situations. The software can recover photos from Windows hard disks and from USB removable drives such as digital cameras, memory cards, and pen drives. It supports USB removable media from brands such as Kingston, Transcend, Nikon, Canon, and Acer, and recovers the major picture formats JPEG/JPG and GIF[^1^].

    -

    The software has a user-friendly interface that guides you through the recovery process step by step. You can choose between two recovery modes: Basic Search and Deep Search. The Basic Search mode scans the selected storage device quickly and displays the recovered photos in a thumbnail view. The Deep Search mode performs a thorough scan of the device and recovers photos that are not found by the Basic Search mode. You can preview the recovered photos before saving them to your desired location[^1^].
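    To make the idea of a deep scan more concrete, here is a minimal, hypothetical sketch of how a recovery tool can "carve" JPEG photos out of a raw disk or card image by looking for the JPEG start-of-image and end-of-image byte markers. This is only an illustration of the general technique, not DDR Digital Picture Recovery's actual implementation, and the file name card_image.bin is a placeholder for a raw image you would create with a separate imaging tool.

```python
# Illustrative sketch only -- not DDR Digital Picture Recovery's code.
# Carves candidate JPEG files out of a raw disk/card image by scanning
# for the JPEG start-of-image (FF D8 FF) and end-of-image (FF D9) markers.
import pathlib

SOI = b"\xff\xd8\xff"  # JPEG start-of-image marker
EOI = b"\xff\xd9"      # JPEG end-of-image marker

def carve_jpegs(image_path: str, out_dir: str = "recovered") -> int:
    data = pathlib.Path(image_path).read_bytes()
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    count = 0
    start = data.find(SOI)
    while start != -1:
        end = data.find(EOI, start)
        if end == -1:
            break
        # Include the two-byte EOI marker in the carved file.
        (out / f"photo_{count:04d}.jpg").write_bytes(data[start:end + 2])
        count += 1
        start = data.find(SOI, end + 2)
    return count

if __name__ == "__main__":
    print(carve_jpegs("card_image.bin"), "candidate photos carved")
```

    Real recovery tools go much further than this naive scan (they validate headers, handle fragmented files, and read the file system's own records), which is why a dedicated program is usually the safer choice.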

    -

    Why You Should Avoid Using a Cracked Version of DDR Digital Picture Recovery Software

    -

    A cracked version of DDR Digital Picture Recovery software is a modified or hacked version of the original software that bypasses its license verification or activation process. It may be available for free or at a low price on some websites that offer illegal downloads of software, games, movies, etc. However, using a cracked version of DDR Digital Picture Recovery software is not only illegal but also risky for several reasons:

    -
      -
    • It may contain viruses, malware, spyware or other harmful programs that can infect your computer and compromise your security and privacy[^2^] [^3^].
    • -
    • It may not work properly or at all, as it may be outdated, corrupted or incompatible with your system[^2^] [^3^].
    • -
    • It may cause further damage to your storage device or photos, as it may overwrite or delete them during the recovery process[^2^] [^3^].
    • -
    • It may not provide any technical support or customer service in case of any problems or issues[^2^] [^3^].
    • -
    • It may violate the intellectual property rights of Data Recovery Software and expose you to legal consequences such as fines or lawsuits[^2^] [^3^].
    • -
    -

    Therefore, using a cracked version of DDR Digital Picture Recovery software is not worth the risk and hassle. You should always use a genuine and licensed version of the software that you can obtain from the official website of Data Recovery Software.

    -

    How to Download and Install DDR Digital Picture Recovery Software Safely and Legally

    -

    To download and install DDR Digital Picture Recovery software safely and legally, you need to follow these steps:

    -

    -
      -
    1. Go to the official website of Data Recovery Software at https://www.datarecoverysoftware.com/.
    2. Click on the "Download" button under the "DDR - Digital Picture Recovery" section.
    3. You will be redirected to another page where you can choose between two options: "Download Demo" or "Buy Now". The demo version allows you to scan your storage device and preview the recoverable photos, but not to save them. The full version allows you to save the recovered photos after purchasing a license key.
    4. If you want to try the demo version first, click on "Download Demo" and save the file "DDR-Digital-P

      -
      -
      \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/HOT-Download-Zmodeler-224-Full-Version.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/HOT-Download-Zmodeler-224-Full-Version.md deleted file mode 100644 index 408e36ba3a81ff2e93c6811aa07047ad811e3cb3..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/HOT-Download-Zmodeler-224-Full-Version.md +++ /dev/null @@ -1,111 +0,0 @@ -## Download Zmodeler 2.2.4 Full Version - - - - ![HOT! Download Zmodeler 2.2.4 Full Version](https://4.bp.blogspot.com/-JPVT9HZyjNE/UHnVMhIhCYI/AAAAAAAADno/ZWzqq8fdrsU/s400/ZModeler%202.2.4%20-%20Registrado.jpg) - - - -**Download Zmodeler 2.2.4 Full Version ➡ [https://jinyurl.com/2tx27d](https://jinyurl.com/2tx27d)** - - - -# How to Download Zmodeler 2.2.4 Full Version for Free - - - -Zmodeler is a popular lightweight 3D editor that is mainly used for game modeling and converting. It supports various formats such as GTA, Test Drive Unlimited, Need for Speed and more. If you want to create or edit your own 3D models for games, Zmodeler is a great tool to use. - - - -However, Zmodeler is not a free software. You need to purchase a license to use it without limitations. The latest version of Zmodeler is 2.2.6, which costs $22 for a one-year license. But what if you don't want to spend money on Zmodeler? Is there a way to download Zmodeler 2.2.4 full version for free? - - - -The answer is yes, but you need to be careful. There are many websites that claim to offer Zmodeler 2.2.4 full version for free, but some of them may contain viruses, malware or spyware that can harm your computer or steal your personal information. You should always scan any file you download with a reliable antivirus software before opening it. - - - -One of the safest and easiest ways to download Zmodeler 2.2.4 full version for free is to use the link below. This link will take you to a trusted website that hosts the original Zmodeler 2.2.4 installer file, along with some additional components that you may need to run the program properly. - - - -[Download Zmodeler 2.2.4 Full Version for Free](https://libertycity.net/files/gta-san-andreas/77683-zmodeler-v-2.2.4-dop.-komponenty.html) - - - -After you download the file, you need to unzip it using a program like WinRAR or 7-Zip. You will find two folders inside: one called "ZModeler v 2.2.4" and another called "Components". The first folder contains the Zmodeler 2.2.4 installer file, which you need to run and follow the instructions on the screen. - - - -The second folder contains some Visual Studio packages that are required for Zmodeler to work correctly on your system. You need to install them one by one or you may get an error message saying "The application failed to start because its parallel configuration is incorrect.". - - - -Once you have installed both Zmodeler and the components, you can launch the program and start using it without any restrictions. You don't need to register or activate it, as it is already cracked and ready to use. - - - -However, you should keep in mind that using Zmodeler 2.2.4 full version for free is illegal and may violate the terms of service of the software developer. You should only use it for educational or personal purposes, and not for commercial or illegal activities. - - - -If you like Zmodeler and want to support its development, you should consider buying a license from the official website: [http://zmodeler2.com/](http://zmodeler2.com/). 
There you can also find the latest updates, tutorials, forums and other resources related to Zmodeler. - - - -We hope this article helped you learn how to download Zmodeler 2.2.4 full version for free and use it safely on your computer. - - - -## What is Zmodeler and What Can You Do With It? - - - -Zmodeler is a 3D modeling software that was created by Oleg Melashenko in 1999. It is designed to be simple, fast and user-friendly, while offering a lot of features and flexibility for game modding. Zmodeler can import and export various file formats, such as .wft, .dff, .z3d, .obj, .3ds and more. - - - -With Zmodeler, you can create or edit 3D models for games, such as cars, bikes, planes, boats, weapons, characters, buildings and more. You can also apply textures, materials, shaders, lighting and effects to your models. Zmodeler has a powerful scripting system that allows you to automate tasks and customize the program to your needs. - - - -Zmodeler is compatible with many popular games, such as GTA, Test Drive Unlimited, Need for Speed, Mafia, Euro Truck Simulator and more. You can use Zmodeler to modify existing models or create new ones from scratch. You can also convert models from one game to another with ease. - - - -Zmodeler is a great tool for game modders who want to express their creativity and enhance their gaming experience. Whether you want to make realistic cars, fantasy creatures, sci-fi weapons or anything else you can imagine, Zmodeler can help you achieve your goals. - - - -## How to Use Zmodeler 2.2.4 Full Version for Free - - - -Now that you have downloaded and installed Zmodeler 2.2.4 full version for free, you may wonder how to use it. Zmodeler has a simple and intuitive interface that consists of four main parts: the menu bar, the toolbar, the viewport and the status bar. - - - -The menu bar contains various options and commands that you can access by clicking on them. The toolbar contains icons that represent common tools and functions that you can use by clicking on them or using keyboard shortcuts. The viewport is the main area where you can see and manipulate your 3D models. The status bar shows information about your current project and actions. - - - -To start using Zmodeler, you need to create a new project or open an existing one. You can do this by going to File > New or File > Open in the menu bar. You will then see a dialog box where you can choose the name and location of your project file. - - - -Once you have created or opened a project, you can start importing or creating 3D models. You can import models from other games or sources by going to File > Import in the menu bar. You will then see a dialog box where you can choose the file format and location of your model file. - - - -You can also create models from scratch by using the tools in the toolbar. You can use tools such as Create > Primitive to create basic shapes like cubes, spheres or cylinders. You can use tools such as Modify > Move, Rotate or Scale to transform your models. You can use tools such as Edit > Vertex, Edge or Face to edit your models at a lower level. - - - -As you work on your models, you can view them from different angles and perspectives by using the mouse and keyboard controls in the viewport. You can also apply textures, materials, shaders and effects to your models by using the options in the menu bar or the toolbar. - - - -When you are done with your models, you can export them to other games or formats by going to File > Export in the menu bar. 
You will then see a dialog box where you can choose the file format and location of your model file. - - 1b8d091108 \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ingles Tecnico Garceta.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ingles Tecnico Garceta.md deleted file mode 100644 index 5e87bf1e7f5f72eed7c926a43e9bf8cd85d24b01..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Ingles Tecnico Garceta.md +++ /dev/null @@ -1,46 +0,0 @@ - -

      Ingles Tecnico Garceta: A Practical Guide for Technical English

      -

      If you are looking for a book that can help you improve your technical English skills, you might want to check out Ingles Tecnico Garceta by Jose Luis Garcia Llamas and Jose Antonio Sabio Pinilla. This book is designed for students and professionals who need to use English for technical purposes, such as engineering, science, technology, or business.

      -

      Ingles Tecnico Garceta covers the main aspects of technical English, such as vocabulary, grammar, pronunciation, writing, reading, listening, and speaking. It also provides useful tips and exercises to help you practice and apply what you learn. The book is divided into 12 units, each focusing on a different topic or field of technical English. Some of the topics include:

      -




      -
        -
      • Basic concepts and tools of technical English
      • -
      • Numbers, measurements, and units
      • -
      • Technical descriptions and processes
      • -
      • Technical documents and reports
      • -
      • Graphs, charts, and diagrams
      • -
      • Technical presentations and meetings
      • -
      • Technical correspondence and emails
      • -
      • Technical terminology and abbreviations
      • -
      • Technical genres and styles
      • -
      • Technical translation and interpretation
      • -
      • Cultural aspects of technical communication
      • -
      • Professional development and career opportunities in technical English
      • -
      -

      The book also includes a glossary of technical terms, a list of common acronyms, a bibliography of recommended resources, and an answer key for the exercises. You can download the book as a PDF file from this website[^1^]. However, if you are the author or own the copyright of this book, please report to us by using this DMCA report form[^1^].

      -

      Ingles Tecnico Garceta is a practical guide for anyone who wants to learn or improve their technical English skills. It can help you communicate more effectively and confidently in your academic or professional field. Whether you are a student, a teacher, or a practitioner of technical English, you will find this book useful and engaging.

      - -

      How to improve your technical English skills

      -

      Technical English is not a separate language, but a specialized use of English for specific purposes. Therefore, to improve your technical English skills, you need to have a solid foundation of general English skills, such as grammar, vocabulary, spelling, and punctuation. You also need to practice using English in technical contexts, such as reading technical texts, writing technical documents, listening to technical presentations, and speaking about technical topics.

      -

      There are many ways to improve your technical English skills, depending on your level, goals, and preferences. Some of the most common methods are:

      -
        -
      • Taking a course or program in technical English. There are many online and offline courses and programs that can help you learn or improve your technical English skills. For example, you can enroll in the IEEE English for Technical Professionals Program[^4^], which offers 14 hours of online instruction with lessons set in working engineering contexts. You can also find courses on platforms like Udemy[^1^] or Coursera[^2^] that cover various aspects of technical English.
      • -
      • Reading technical books, articles, blogs, magazines, or journals. Reading is one of the best ways to expand your vocabulary and learn new expressions and terms related to your field. You can choose materials that interest you or are relevant to your work or studies. You can also use online tools like dictionaries or translators to help you understand unfamiliar words or phrases.
      • -
      • Writing technical documents, reports, emails, or blogs. Writing is another effective way to practice and improve your technical English skills. You can write about topics that you know well or are learning about. You can also use online tools like grammar checkers or plagiarism detectors to help you improve your writing quality and avoid errors or duplication.
      • -
      • Listening to technical podcasts, videos, webinars, lectures, or presentations. Listening is a great way to improve your comprehension and pronunciation skills in technical English. You can listen to materials that suit your level and interests. You can also use online tools like subtitles or transcripts to help you follow along and check your understanding.
      • -
      • Speaking with native or fluent speakers of technical English. Speaking is the most challenging but also the most rewarding way to improve your technical English skills. You can speak with colleagues, classmates, teachers, mentors, or online tutors who have experience or expertise in your field. You can also join online communities or forums where you can exchange ideas and opinions with other learners or professionals.
      • -
      -

      By using these methods consistently and regularly, you can improve your technical English skills and become more confident and competent in your field.

      - -

      How to list technical skills on a resume

      -

      When applying for a job that requires technical skills, it is important to list them on your resume in a clear and effective way. This can help you showcase your qualifications and abilities to potential employers and increase your chances of getting hired.

      -

      There are different ways to list technical skills on a resume, depending on the type and level of skills you have. Some of the most common ways are:

      -
        -
    • Creating a separate section for technical skills. This is a good option if you have many relevant and specific technical skills that you want to highlight. You can create a section called "Technical Skills" or "Skills" and list your skills using bullet points or tables. You can also group your skills by category or level of proficiency.
    • Incorporating technical skills into other sections. This is a good option if you have fewer or more general technical skills that you want to integrate with other information on your resume. You can include your skills in sections like "Summary", "Education", "Experience", "Achievements", or "Certifications". You can also use keywords or phrases that describe your skills in relation to your roles or responsibilities.
    • Providing examples or evidence of technical skills. This is a good option if you want to demonstrate how you have used or applied your technical skills in real situations. You can provide examples or evidence of your skills in sections like "Experience", "Achievements", "Projects", or "Portfolio". You can also use numbers or metrics to quantify your results or outcomes.
      -

      By listing your technical skills on your resume in a clear and effective way, you can show potential employers that you have the knowledge and expertise they are looking for.

      -

      -
      -
      \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Lumion Pro 10.2 Crack With Activation Code Latest ! _BEST_.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Lumion Pro 10.2 Crack With Activation Code Latest ! _BEST_.md deleted file mode 100644 index 880672c3ddb228d7cb3d8dcb9f52a61d74b440a4..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Lumion Pro 10.2 Crack With Activation Code Latest ! _BEST_.md +++ /dev/null @@ -1,72 +0,0 @@ - -

      Lumion Pro 10.2 Crack with Activation Code Latest!

      -

      If you are looking for a way to create stunning 3D renderings for your architectural or design projects, you might have heard of Lumion Pro 10.2. This is a powerful and easy-to-use software that allows you to turn your 3D models into realistic images and videos in minutes.

      -

      But what if you don't want to pay for the software? You might be tempted to download a cracked version of Lumion Pro 10.2 from some shady website and use a fake activation code to unlock its features. However, this is not a good idea at all.

      -




      -

      In this article, we will explain what Lumion Pro 10.2 is, what it can do for you, and why you should avoid using a cracked version of it. We will also show you how to get Lumion Pro 10.2 legally and safely, without risking your computer or your reputation.

      -

      What is Lumion Pro 10.2?

      -

      Lumion Pro 10.2 is the latest version of Lumion, a popular 3D rendering software that is used by architects, designers, engineers, and artists worldwide. Lumion allows you to import your 3D models from any CAD or modeling software, such as SketchUp, Revit, AutoCAD, Blender, etc., and transform them into photorealistic images and videos in minutes.

      -

      Lumion is designed to be easy to use, even for beginners. You don't need any prior rendering experience or technical skills to use Lumion. You can simply drag and drop objects, materials, lighting effects, weather conditions, and more onto your scene and see the results instantly.

      -

      Features of Lumion Pro 10.2

      -

      Lumion Pro 10.2 comes with many features that make it one of the best 3D rendering software on the market. Some of these features are:

      -
        -
      • Real-time rendering: Lumion uses GPU-based ray tracing technology to render your scenes in real-time. This means you can see the changes you make to your scene as you make them, without waiting for long rendering times.
      • -
      • High-quality content library: Lumion Pro 10.2 comes with over 6,000 objects and materials that you can use to decorate your scene. These include buildings, furniture, plants, vehicles, people, animals, and more. All of these are high-quality and realistic, and you can customize them to suit your needs.
      • -
      • Atmospheric effects: Lumion Pro 10.2 allows you to add various atmospheric effects to your scene, such as sky, clouds, fog, rain, snow, sun, moon, stars, and more. You can also adjust the time of day, the season, and the location of your scene to create different moods and scenarios.
      • -
      • Lighting effects: Lumion Pro 10.2 lets you add various lighting effects to your scene, such as spotlights, area lights, point lights, strip lights, and more. You can also use global illumination, ambient occlusion, shadows, reflections, and refractions to enhance the realism of your scene.
      • -
      • Animation effects: Lumion Pro 10.2 enables you to add various animation effects to your scene, such as moving objects, people, animals, vehicles, water, fire, smoke, wind, and more. You can also animate the camera and create smooth transitions between different views of your scene.
      • -
      • Styles and filters: Lumion Pro 10.2 offers you a range of styles and filters that you can apply to your scene to change its appearance and mood. You can choose from presets such as realistic, artistic, sketchy, cinematic, or create your own custom style.
      • -
      • Output options: Lumion Pro 10.2 allows you to export your scene as an image or a video in various formats and resolutions. You can also share your scene online via MyLumion or Lumion LiveSync.
      • -
      -

      Benefits of Lumion Pro 10.2

      -

      Lumion Pro 10.2 can help you create stunning 3D renderings for your architectural or design projects in a fast and easy way. Some of the benefits of using Lumion Pro 10.2 are:

      -
        -
      • Save time and money: Lumion Pro 10.2 can help you save time and money by reducing the need for expensive and complex rendering software and hardware. You can use Lumion on any PC or laptop with a decent graphics card and render your scenes in minutes instead of hours or days.
      • -
      • Increase productivity and creativity: Lumion Pro 10.2 can help you increase your productivity and creativity by allowing you to experiment with different design options and scenarios in real-time. You can easily make changes to your scene and see the results instantly without losing your workflow.
      • -
      • Improve communication and collaboration: Lumion Pro 10.2 can help you improve your communication and collaboration with your clients, colleagues, and stakeholders by allowing you to share your vision and ideas in a clear and compelling way. You can use Lumion to create images and videos that showcase your project from different angles and perspectives.
      • -
      • Impress your audience: Lumion Pro 10.2 can help you impress your audience by creating stunning 3D renderings that capture the attention and emotion of your viewers. You can use Lumion to create realistic and immersive scenes that convey the beauty and functionality of your project.
      • -
      -

      What is Lumion Pro 10.2 Crack?

      -

      Lumion Pro 10.2 Crack is a term that refers to a modified version of Lumion Pro 10.2 that is illegally distributed on the internet for free or at a low cost. A crack is a program that bypasses the security features of the original software and allows users to access its full functionality without paying for a license key.

      -

      Lumion Pro 10.2 Crack is usually accompanied by an activation code or a serial number that is supposed to activate the software after installation. However, these codes are often fake or stolen from legitimate users.

      -

      How does Lumion Pro 10.2 Crack work?

      -

      Lumion Pro 10.2 Crack works by altering the code of the original software and removing or replacing the parts that check for a valid license key or an internet connection. This way, the software thinks that it is activated and does not require any verification or authentication from the official servers.

      -

      -

      Lumion Pro 10.2 Crack may also include additional files or programs that are designed to trick the user into thinking that they are installing the genuine software or that they are getting a good deal. However, these files or programs are often malicious and can harm the user's computer or data.

      -

      Risks of using Lumion Pro 10.2 Crack

      -

      Using Lumion Pro 10.2 Crack is not only illegal but also risky. There are many dangers and disadvantages of using a cracked version of Lumion Pro 10.2, such as:

      -

      Malware infections

      -

      One of the most common risks of using Lumion Pro 10.2 Crack is getting infected by malware, such as viruses, trojans, worms, spyware, ransomware, etc. These malware can damage your computer, steal your personal information, encrypt your files, display unwanted ads, or even take control of your system.

      -

      Malware can be hidden in the crack itself, in the activation code, or in the additional files or programs that come with the crack. You may not even notice that your computer is infected until it is too late.

      -

      Legal issues

      -

      Another risk of using Lumion Pro 10.2 Crack is facing legal issues. Lumion Pro 10.2 is a copyrighted software that is protected by intellectual property laws. Downloading, installing, or using a cracked version of Lumion Pro 10.2 is a violation of these laws and can result in serious consequences.

      -

      You may be sued by the developers of Lumion Pro 10.2 or by the owners of the license keys that you have used illegally. You may also be fined or imprisoned by the authorities for piracy or fraud. You may also lose your reputation and credibility as a professional or a student.

      -

      Performance problems

      -

      A third risk of using Lumion Pro 10.2 Crack is experiencing performance problems. A cracked version of Lumion Pro 10.2 may not work as well as the original version, as it may have bugs, errors, glitches, or compatibility issues with your system or other software.

      -

      You may encounter crashes, freezes, slowdowns, or other errors while using Lumion Pro 10.2 Crack. You may also lose your work or data due to these problems. You may not be able to enjoy the full features and functionality of Lumion Pro 10.2 as intended by the developers.

      -

      Lack of updates and support

      -

      A fourth risk of using Lumion Pro 10.2 Crack is missing out on updates and support. A cracked version of Lumion Pro 10.2 may not receive any updates or patches from the developers, as it is not connected to the official servers or registered with a valid license key.

      -

      This means that you will not be able to access the latest features, improvements, bug fixes, or security patches that are released for Lumion Pro 10.2. You may also not be able to use Lumion LiveSync or MyLumion to share your scenes online.

      -

      Moreover, you will not be able to get any technical support or customer service from the developers or the community if you encounter any problems or have any questions while using Lumion Pro 10.2 Crack.

      -

      Ethical concerns

      -

      A fifth risk of using Lumion Pro 10.2 Crack is having ethical concerns. Using a cracked version of Lumion Pro 10.2 is not only illegal but also unfair and dishonest. You are essentially stealing from the developers who have spent time, money, and effort to create and maintain this software.

      -

      You are also depriving yourself of the opportunity to learn and grow as a professional or a student by using a software that you have not paid for or earned. You are also disrespecting the work and rights of other users who have purchased a license key for Lumion Pro 10.2 legally and legitimately.

      Conclusion

      -

    Lumion Pro 10.2 is a great piece of software for creating 3D renderings for your architectural or design projects. It is easy to use, fast, and realistic. However, you should not use a cracked version of Lumion Pro 10.2, as it is illegal, risky, and unethical.

      -

      Instead, you should get Lumion Pro 10.2 legally and safely from the official website of Lumion or from an authorized reseller. You can also use a free trial version or a student or faculty version of Lumion Pro 10.2 if you are eligible.

      -

      By doing so, you will be able to enjoy the full features and functionality of Lumion Pro 10.2 without compromising your computer, your data, or your reputation. You will also be able to support the developers who have created and maintained this amazing software.

      -

      FAQs

      -

      Here are some frequently asked questions about Lumion Pro 10.2 and Lumion Pro 10.2 Crack:

      -
        -
      1. What is the difference between Lumion and Lumion Pro?

         Lumion and Lumion Pro are two versions of the same software, with different features and prices. Lumion Pro has more content, effects, output options, and updates than Lumion. Lumion Pro is also more expensive than Lumion.

      2. How much does Lumion Pro 10.2 cost?

         The price of Lumion Pro 10.2 depends on the type of license you choose: a perpetual license or a subscription license. A perpetual license costs $3,499 and gives you lifetime access to Lumion Pro 10.2 and all its updates. A subscription license costs $99 per month or $999 per year and gives you access to Lumion Pro 10.2 and all its updates for the duration of your subscription.

      3. Can I use Lumion Pro 10.2 on multiple computers?

         You can use Lumion Pro 10.2 on multiple computers, but not at the same time. You can only activate Lumion Pro 10.2 on one computer at a time with your license key. If you want to use Lumion Pro 10.2 on another computer, you will need to deactivate it on the first computer and activate it on the second computer.

      4. Can I upgrade from Lumion to Lumion Pro?

         You can upgrade from Lumion to Lumion Pro by paying the difference in price between the two versions. You can also upgrade from an older version of Lumion or Lumion Pro to a newer version by paying a discounted price.

      5. Can I get a refund for Lumion Pro 10.2?

         You can get a refund for Lumion Pro 10.2 within 14 days of purchase if you are not satisfied with the software or if it does not work on your computer. However, you will need to provide proof of purchase and uninstall and deactivate the software from your computer.

        -

      -
      -
      \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Radiohead - Reckoner (Multitrack).md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Radiohead - Reckoner (Multitrack).md deleted file mode 100644 index 835fd948fbfdf21b25e07d57956f6d9b8c9123eb..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Radiohead - Reckoner (Multitrack).md +++ /dev/null @@ -1,17 +0,0 @@ - -

      How to Remix Radiohead's Reckoner with Multitracks

      -

      Radiohead is one of the most influential and innovative bands of the 21st century, and their song Reckoner from their album In Rainbows is a masterpiece of ethereal and haunting music. But did you know that you can remix Reckoner with multitracks? Multitracks are separate audio tracks that contain different elements of a song, such as vocals, drums, guitars, keyboards, etc. By using multitracks, you can isolate, mute, mix, and manipulate any part of the song to create your own version.

      -

    In this article, we will show you how to remix Radiohead's Reckoner with multitracks. You will need a computer, digital audio workstation (DAW) software such as Audacity or GarageBand, and the multitracks for Reckoner. You can download the multitracks for Reckoner from this Reddit post[^1^], which also contains multitracks for other Radiohead songs. The multitracks for Reckoner are in mp3 format and consist of 10 tracks: bass, drums 1, drums 2, guitar 1, guitar 2, guitar 3, piano 1, piano 2, strings, and vocals.

      -




      -

      Once you have downloaded the multitracks for Reckoner, you can import them into your DAW software. Depending on your software, you may need to convert the mp3 files to wav files first. You can use an online converter such as this one to do that. After importing the multitracks into your DAW software, you should see them as separate tracks on your timeline. You can play them together to hear the original song.
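    If you would rather convert the files with a short script than with an online converter, the sketch below does the same job with the pydub library. It assumes pydub is installed (pip install pydub) and that ffmpeg is available on your system; the folder names are placeholders for wherever you saved the stems.

```python
# Minimal sketch: batch-convert the downloaded mp3 stems to wav with pydub.
# Assumes pydub is installed and ffmpeg is on your PATH; folder names are placeholders.
from pathlib import Path
from pydub import AudioSegment

stems_dir = Path("reckoner_multitracks")  # folder holding the 10 mp3 stems
out_dir = Path("reckoner_wav")
out_dir.mkdir(exist_ok=True)

for mp3_file in sorted(stems_dir.glob("*.mp3")):
    stem = AudioSegment.from_mp3(str(mp3_file))
    # Keep the original name, just switch the container to wav.
    stem.export(str(out_dir / (mp3_file.stem + ".wav")), format="wav")
    print(f"converted {mp3_file.name} -> {mp3_file.stem}.wav")
```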

      -

      Now comes the fun part: remixing Reckoner with multitracks. There are no rules or limits to how you can remix Reckoner with multitracks. You can experiment with different effects, filters, levels, panning, pitch shifting, time stretching, reversing, looping, and more. You can also add your own instruments or vocals to the mix. The only thing you need to keep in mind is to respect Radiohead's original work and not violate their copyrights.
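    If you want to experiment outside a DAW, the rough sketch below shows the same kinds of moves (level changes, panning, reversing, layering) scripted with pydub. The stem file names are assumptions based on the track list above, and a real remix would of course be shaped by ear in your DAW rather than in code.

```python
# Rough remix sketch with pydub: change levels, pan, reverse, and layer stems.
# Stem file names are placeholders; adjust them to match your converted files.
from pydub import AudioSegment

vocals = AudioSegment.from_wav("reckoner_wav/vocals.wav").apply_gain(-2)    # pull vocals down 2 dB
drums = AudioSegment.from_wav("reckoner_wav/drums 1.wav").apply_gain(-4)    # quieter drums
piano = AudioSegment.from_wav("reckoner_wav/piano 1.wav").pan(-0.3)         # nudge piano to the left
strings = AudioSegment.from_wav("reckoner_wav/strings.wav").reverse()       # reversed string pad

# overlay() keeps the length of the segment it is called on; the stems all
# come from the same song, so any of them works as the base layer.
mix = vocals.overlay(drums).overlay(piano).overlay(strings)
mix.export("reckoner_remix.wav", format="wav")
```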

      -

      To give you some inspiration, here are some examples of remixes of Reckoner with multitracks that other people have made:

      -
        -
      • Reckoner - Radiohead (1080p): This is the official video for Reckoner by Radiohead[^2^]. It features a beautiful animation of a bird flying through different landscapes.
      • -
      • Radiohead - Reckoner (James Holden Remix): This is a remix of Reckoner by James Holden[^3^], an electronic music producer and DJ. He adds a hypnotic beat and some synth sounds to the original song.
      • -
      • Radiohead - Reckoner (Nosaj Thing Remix): This is a remix of Reckoner by Nosaj Thing, an electronic music artist and producer. He creates a glitchy and atmospheric version of the song with chopped vocals and distorted sounds.
      • -
      -

      We hope this article has helped you learn how to remix Radiohead's Reckoner with multitracks. Have fun and share your remixes with us!

      -

      -
      -
      \ No newline at end of file diff --git a/spaces/nicholasKluge/Aira-Demo/README.md b/spaces/nicholasKluge/Aira-Demo/README.md deleted file mode 100644 index 4f4ffc2c394e131e05ed0dce764624bf4196745e..0000000000000000000000000000000000000000 --- a/spaces/nicholasKluge/Aira-Demo/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Aira Demo -emoji: 🤓 -colorFrom: indigo -colorTo: purple -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- -Check Aira-Instruct-124M [here](https://huggingface.co/nicholasKluge/Aira-Instruct-124M). diff --git a/spaces/nightfury/SD-InPainting/clipseg/setup.py b/spaces/nightfury/SD-InPainting/clipseg/setup.py deleted file mode 100644 index 2bf28ffe269cba3033af263db5f98313772818f0..0000000000000000000000000000000000000000 --- a/spaces/nightfury/SD-InPainting/clipseg/setup.py +++ /dev/null @@ -1,30 +0,0 @@ -from setuptools import setup - -with open("README.md", "r", encoding="utf-8") as readme_file: - readme = readme_file.read() - -requirements = [ - "numpy", - "scipy", - "matplotlib", - "torch", - "torchvision", - "opencv-python", - "CLIP @ git+https://github.com/openai/CLIP.git" -] - -setup( - name='clipseg', - packages=['clipseg'], - package_dir={'clipseg': 'models'}, - package_data={'clipseg': [ - "../weights/*.pth", - ]}, - version='0.0.1', - url='https://github.com/timojl/clipseg', - python_requires='>=3.9', - install_requires=requirements, - description='This repository contains the code used in the paper "Image Segmentation Using Text and Image Prompts".', - long_description=readme, - long_description_content_type="text/markdown", -) diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/docs/tutorials/data_loading.md b/spaces/nikitaPDL2023/assignment4/detectron2/docs/tutorials/data_loading.md deleted file mode 100644 index 1d2769fc513abb0981a140f3a6b6432538704261..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/docs/tutorials/data_loading.md +++ /dev/null @@ -1,95 +0,0 @@ - -# Dataloader - -Dataloader is the component that provides data to models. -A dataloader usually (but not necessarily) takes raw information from [datasets](./datasets.md), -and process them into a format needed by the model. - -## How the Existing Dataloader Works - -Detectron2 contains a builtin data loading pipeline. -It's good to understand how it works, in case you need to write a custom one. - -Detectron2 provides two functions -[build_detection_{train,test}_loader](../modules/data.html#detectron2.data.build_detection_train_loader) -that create a default data loader from a given config. -Here is how `build_detection_{train,test}_loader` work: - -1. It takes the name of a registered dataset (e.g., "coco_2017_train") and loads a `list[dict]` representing the dataset items - in a lightweight format. These dataset items are not yet ready to be used by the model (e.g., images are - not loaded into memory, random augmentations have not been applied, etc.). - Details about the dataset format and dataset registration can be found in - [datasets](./datasets.md). -2. Each dict in this list is mapped by a function ("mapper"): - * Users can customize this mapping function by specifying the "mapper" argument in - `build_detection_{train,test}_loader`. The default mapper is [DatasetMapper](../modules/data.html#detectron2.data.DatasetMapper). - * The output format of the mapper can be arbitrary, as long as it is accepted by the consumer of this data loader (usually the model). 
- The outputs of the default mapper, after batching, follow the default model input format documented in - [Use Models](./models.html#model-input-format). - * The role of the mapper is to transform the lightweight representation of a dataset item into a format - that is ready for the model to consume (including, e.g., read images, perform random data augmentation and convert to torch Tensors). - If you would like to perform custom transformations to data, you often want a custom mapper. -3. The outputs of the mapper are batched (simply into a list). -4. This batched data is the output of the data loader. Typically, it's also the input of - `model.forward()`. - - -## Write a Custom Dataloader - -Using a different "mapper" with `build_detection_{train,test}_loader(mapper=)` works for most use cases -of custom data loading. -For example, if you want to resize all images to a fixed size for training, use: - -```python -import detectron2.data.transforms as T -from detectron2.data import DatasetMapper # the default mapper -dataloader = build_detection_train_loader(cfg, - mapper=DatasetMapper(cfg, is_train=True, augmentations=[ - T.Resize((800, 800)) - ])) -# use this dataloader instead of the default -``` -If the arguments of the default [DatasetMapper](../modules/data.html#detectron2.data.DatasetMapper) -does not provide what you need, you may write a custom mapper function and use it instead, e.g.: - -```python -from detectron2.data import detection_utils as utils - # Show how to implement a minimal mapper, similar to the default DatasetMapper -def mapper(dataset_dict): - dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below - # can use other ways to read image - image = utils.read_image(dataset_dict["file_name"], format="BGR") - # See "Data Augmentation" tutorial for details usage - auginput = T.AugInput(image) - transform = T.Resize((800, 800))(auginput) - image = torch.from_numpy(auginput.image.transpose(2, 0, 1)) - annos = [ - utils.transform_instance_annotations(annotation, [transform], image.shape[1:]) - for annotation in dataset_dict.pop("annotations") - ] - return { - # create the format that the model expects - "image": image, - "instances": utils.annotations_to_instances(annos, image.shape[1:]) - } -dataloader = build_detection_train_loader(cfg, mapper=mapper) -``` - -If you want to change not only the mapper (e.g., in order to implement different sampling or batching logic), -`build_detection_train_loader` won't work and you will need to write a different data loader. -The data loader is simply a -python iterator that produces [the format](./models.md) that the model accepts. -You can implement it using any tools you like. - -No matter what to implement, it's recommended to -check out [API documentation of detectron2.data](../modules/data) to learn more about the APIs of -these functions. - -## Use a Custom Dataloader - -If you use [DefaultTrainer](../modules/engine.html#detectron2.engine.defaults.DefaultTrainer), -you can overwrite its `build_{train,test}_loader` method to use your own dataloader. -See the [deeplab dataloader](../../projects/DeepLab/train_net.py) -for an example. - -If you write your own training loop, you can plug in your data loader easily. 
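As a small addition to the dataloader tutorial above, here is a minimal sketch of the DefaultTrainer route it mentions: overriding `build_train_loader` so the trainer picks up a custom mapper. It only uses the detectron2 calls already shown in the tutorial (`DatasetMapper`, `build_detection_train_loader`, and the transforms module); the chosen augmentation is just an example.

```python
# Minimal sketch, using only the APIs shown in the tutorial above:
# a DefaultTrainer subclass whose training dataloader uses a custom DatasetMapper.
import detectron2.data.transforms as T
from detectron2.data import DatasetMapper, build_detection_train_loader
from detectron2.engine import DefaultTrainer


class TrainerWithCustomLoader(DefaultTrainer):
    @classmethod
    def build_train_loader(cls, cfg):
        # The augmentation list here is only an example; swap in whatever you need.
        mapper = DatasetMapper(cfg, is_train=True, augmentations=[T.Resize((800, 800))])
        return build_detection_train_loader(cfg, mapper=mapper)
```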
diff --git a/spaces/nlphuji/whoops-explorer-analysis/app_basic.py b/spaces/nlphuji/whoops-explorer-analysis/app_basic.py deleted file mode 100644 index bf5fa425b2fdbade9a9d22ed0678011b3381124d..0000000000000000000000000000000000000000 --- a/spaces/nlphuji/whoops-explorer-analysis/app_basic.py +++ /dev/null @@ -1,35 +0,0 @@ -from datasets import load_dataset -import gradio as gr -import os -import random - -wmtis = load_dataset("nlphuji/wmtis-identify")['test'] -print(f"Loaded WMTIS identify, first example:") -print(wmtis[0]) -dataset_size = len(wmtis) - 1 - -NORMAL_IMAGE = 'normal_image' -STRANGE_IMAGE = 'strange_image' -def func(index): - example = wmtis[index] - return example['normal_image'], example['normal_hash'], example['strange_image'], example['strange_hash'] - -demo = gr.Blocks() - -with demo: - gr.Markdown("# Slide to iterate WMTIS: Normal vs. Strange Images") - - with gr.Column(): - slider = gr.Slider(minimum=0, maximum=dataset_size) - with gr.Row(): - index = random.choice(range(0, dataset_size)) - with gr.Column(): - i1 = gr.Image(value=wmtis[index]["normal_image"], label='Normal Image') - t1 = gr.Textbox(value=wmtis[index]["normal_hash"], label='Image ID') - with gr.Column(): - i2 = gr.Image(value=wmtis[index]["strange_image"], label='Strange Image') - t2 = gr.Textbox(value=wmtis[index]["strange_hash"], label='Image ID') - - slider.change(func, inputs=[slider], outputs=[i1, t1, i2, t2]) - -demo.launch() \ No newline at end of file diff --git a/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/BrightcoveExperiences.js b/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/BrightcoveExperiences.js deleted file mode 100644 index 5e715a967444e17df1545d1e9024ee3e46ae9597..0000000000000000000000000000000000000000 --- a/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/BrightcoveExperiences.js +++ /dev/null @@ -1,185 +0,0 @@ - -if(brightcove==undefined){var brightcove={};brightcove.getExperience=function(){alert("Please import APIModules_all.js in order to use the API.");};} 
-if(brightcove.experiences==undefined){brightcove.servicesURL='http://c.brightcove.com/services';brightcove.cdnURL='http://admin.brightcove.com';brightcove.secureCDNURL='https://sadmin.brightcove.com';brightcove.secureServicesURL='https://secure.brightcove.com/services';brightcove.USservicesURL='http://c.brightcove.com/services';brightcove.UScdnURL='http://admin.brightcove.com';brightcove.USsecureCDNURL='https://sadmin.brightcove.com';brightcove.USsecureServicesURL='https://secure.brightcove.com/services';brightcove.pubHost='c.$pubcode$.$zoneprefix$$zone$';brightcove.pubSecureHost='secure.$pubcode$.$zoneprefix$$zone$';brightcove.pubSubdomain='ariessaucetown.local';brightcove.experiences={};brightcove.experienceObjects={};brightcove.renderExperienceInProcess=false;brightcove.createExperiencesQueue=[];brightcove.renderExperienceQueue=[];brightcove.timeouts={};brightcove.flashTimeoutInterval=10000;brightcove.htmlTimeoutInterval=10000;brightcove.experienceNum=0;brightcove.majorVersion=9;brightcove.majorRevision=0;brightcove.minorRevision=28;brightcove.performCdnUrl={'development':'//players.brightcove.net/','qa':'//players.qa.brightcove.net/','staging':'//players.staging.brightcove.net/','production':'//players.brightcove.net/'};brightcove.metricsBaseUrl={'development':'//data.aws-qa.rnatest.brightcove.com','qa':'//data.aws-qa.rnatest.brightcove.com','staging':'//data.aws-qa.rnatest.brightcove.com','production':'//metrics.brightcove.com/tracker'};brightcove.analyticsErrors={'BAD_PUBLISHER_ID':-100,'UNEXPECTED_MAPPING_RESPONSE':-101,'MAPPINGS_CALL_FAILURE':-102};brightcove.servlet={AS3:"federated_f9",HTML:"htmlFederated"};brightcove.mappingFileData={};brightcove.isLinkDotBrightcoveURL=window.location.hostname.indexOf('link.brightcove.co')>=0;brightcove.playerType={FLASH:"flash",HTML:"html",FLASH_IFRAME:"flashIFrame",INSTALLER:"installer",NO_SUPPORT:"nosupport"};brightcove.errorCodes={UNKNOWN:0,DOMAIN_RESTRICTED:1,GEO_RESTRICTED:2,INVALID_ID:3,NO_CONTENT:4,UNAVAILABLE_CONTENT:5,UPGRADE_REQUIRED_FOR_VIDEO:6,UPGRADE_REQUIRED_FOR_PLAYER:7,SERVICE_UNAVAILABLE:8};brightcove.defaultParam={};brightcove.defaultParam.width='100%';brightcove.defaultParam.height='100%';brightcove.defaultFlashParam={};brightcove.defaultFlashParam.allowScriptAccess='always';brightcove.defaultFlashParam.allowFullScreen='true';brightcove.defaultFlashParam.seamlessTabbing=false;brightcove.defaultFlashParam.swliveconnect=true;brightcove.defaultFlashParam.wmode='window';brightcove.defaultFlashParam.quality='high';brightcove.defaultFlashParam.bgcolor='#999999';brightcove.hasActiveX=brightcove.isIE=(window.ActiveXObject!=undefined);brightcove.userAgent=navigator.userAgent;brightcove._queuedAPICalls=[];var brightcoveJS=brightcove;brightcove.createExperiences=function(pEvent,pElementID){var experiences=[];var params;var experience;var flashSupport=brightcove.checkFlashSupport();var htmlSupport=brightcove.checkHtmlSupport();if(brightcove.renderExperienceInProcess){function createExperiencesWrapper(pEvent,pElementID){return function(){brightcove.createExperiences(pEvent,pElementID);}} -brightcove.createExperiencesQueue.push(createExperiencesWrapper(pEvent,pElementID));return;} -if(pElementID!=null){experiences.push(document.getElementById(pElementID));}else{experiences=brightcove.collectExperiences();} -if(brightcove.hasActiveX){params=document.getElementsByTagName('param');} -var urlParams=brightcove.cacheUrlParams();var numExperiences=experiences.length;for(var 
i=0;i0){experience.params.flashID=experience.id;}else{experience.id=experience.params.flashID='bcExperienceObj'+(brightcove.experienceNum++);} -experience.params.identifierClassName='BrightcoveExperienceID_'+Math.floor(Math.random()*10000);return experience;};brightcove.copySnippetParams=function(experience,params){if(!brightcove.hasActiveX){params=experience.getElementsByTagName('param');} -var numParams=params.length;var param;for(var j=0;j0){experience.params.videoID=urlParams.titleID;experience.params["@videoPlayer"]=urlParams.titleID;experience.params.autoStart=(experience.params.autoStart!="false"&&urlParams.autoStart!="false");} -if(urlParams.lineupID.length>0){experience.params.lineupID=urlParams.lineupID;}} -return experience;};brightcove.determinePlayerType=function(experience,flashSupport,htmlSupport){if(flashSupport==null&&htmlSupport==false){return brightcove.playerType.NO_SUPPORT;} -if(experience.params.forceHTML){if(window.console){var message="The forceHTML parameter was used for the Brightcove player. This value should ONLY be used for";message+=" development and testing purposes and is not supported in production environments.";console.log(message);} -return brightcove.playerType.HTML;} -if(experience.params.forceFlashIFrame||(brightcove.isMetroIE()&&flashSupport==null)){return brightcove.playerType.FLASH_IFRAME;} -if(flashSupport!=null){if(brightcove.isFlashVersionSufficient(experience,flashSupport)){return brightcove.playerType.FLASH;}else{return brightcove.playerType.INSTALLER;}} -if(htmlSupport){if(brightcove.isSupportedHTMLDevice()||experience.params.htmlFallback){return brightcove.playerType.HTML;}} -return brightcove.playerType.NO_SUPPORT;};brightcove.isFlashVersionSufficient=function(experience,flashSupport){if(flashSupport==null)return false;var setMajorVersion=false;var requestedMajorVersion;var requestedMajorRevision;var requestedMinorRevision;if(experience.params.majorVersion!=undefined){requestedMajorVersion=parseInt(experience.params.majorVersion,10);setMajorVersion=true;}else{requestedMajorVersion=brightcove.majorVersion;} -if(experience.params.majorRevision!=undefined){requestedMajorRevision=parseInt(experience.params.majorRevision,10);}else{if(setMajorVersion){requestedMajorRevision=0;}else{requestedMajorRevision=brightcove.majorRevision;}} -if(experience.params.minorRevision!=undefined){requestedMinorRevision=parseInt(experience.params.minorRevision,10);}else{if(setMajorVersion){requestedMinorRevision=0;}else{requestedMinorRevision=brightcove.minorRevision;}} -return(flashSupport.majorVersion>requestedMajorVersion||(flashSupport.majorVersion==requestedMajorVersion&&flashSupport.majorRevision>requestedMajorRevision)||(flashSupport.majorVersion==requestedMajorVersion&&flashSupport.majorRevision==requestedMajorRevision&&flashSupport.minorRevision>=requestedMinorRevision));};brightcove.generateRequestUrl=function(experience,playerType,secureConnections){var file;if(playerType==brightcove.playerType.INSTALLER){file=brightcove.cdnURL+"/viewer/playerProductInstall.swf";var MMPlayerType=brightcove.hasActiveX?"ActiveX":"PlugIn";document.title=document.title.slice(0,47)+" - Flash Player Installation";var 
MMdoctitle=document.title;file+="?&MMredirectURL="+window.location+'&MMplayerType='+MMPlayerType+'&MMdoctitle='+MMdoctitle;brightcove.reportUpgradeRequired(experience);}else{if(secureConnections){file=brightcove.getPubURL(brightcove.secureServicesURL,brightcove.pubSecureHost,experience.params.pubCode);}else{file=brightcove.getPubURL(brightcove.servicesURL,brightcove.pubHost,experience.params.pubCode);} -var servlet=(playerType==brightcove.playerType.HTML)?brightcove.servlet.HTML:brightcove.servlet.AS3;file+='/viewer/'+servlet+'?'+brightcove.getOverrides();for(var config in experience.params){file+='&'+encodeURIComponent(config)+'='+encodeURIComponent(experience.params[config]);}} -return file;};brightcove.renderInstallGif=function(experience,secureConnections){var cdnURL=secureConnections?brightcove.secureCDNURL:brightcove.cdnURL;var upgradeFlashImage=cdnURL.indexOf('.co.jp')>0?"upgrade_flash_player_kk.gif":"upgrade_flash_player2.gif";var linkHTML="Get Flash Player";return linkHTML;};brightcove.renderExperience=function(experience,file,playerType,secureConnections){var experienceElement;var experienceID=experience.id;var isPubIdInBlacklist=false;var publisherID;var dummyElement;if(brightcove.renderExperienceInProcess){function wrapRenderExperience(experience,file,playerType,secureConnections){return function(){brightcove.renderExperience(experience,file,playerType,secureConnections);}} -brightcove.renderExperienceQueue.push(wrapRenderExperience(experience,file,playerType,secureConnections));return;} -brightcove.renderExperienceInProcess=true;if(!(experience.params.playerKey||experience.params.playerID||experience.params.playerId||experience.params.playerid)){if(window.console){console.log("No playerID or playerKey was found for the Brightcove player, so it can not be rendered.");} -return;} -brightcove.experienceObjects[experienceID]=experience;var unminified=(brightcove.getParameter("unminified")=="true")||(experience.params.unminified==="true");if(experience.params.includeAPI==="true"&&!(brightcove._apiRequested||brightcove.api)){var source="/js/api/";if(unminified){source+="unminified/";} -source+="SmartPlayerAPI.js";var apiInclude=brightcove.createElement('script');apiInclude.type="text/javascript";var cdnURL=secureConnections?brightcove.secureCDNURL:brightcove.cdnURL;apiInclude.src=cdnURL+source;experience.parentNode.appendChild(apiInclude);brightcove._apiRequested=true;} -file+="&startTime="+new Date().getTime();if(experience.params.playerKey){publisherID=brightcove.decodePublisherID(experience.params.playerKey);}else{publisherID=experience.params.publisherID;} -dummyElement=brightcove.createDummyElement(playerType,experience,secureConnections);if(experience.params.enableMapping===true||(experience.params.enableMapping!==false&&!isPubIdInBlacklist)){brightcove.makeMappingFileRequest(publisherID,function(err,data){if(err){if(window.console){console.log(err);}} -brightcove.generateExperienceElement(experience,publisherID,dummyElement,unminified,file,playerType,experienceID,secureConnections,data);});}else{brightcove.generateExperienceElement(experience,publisherID,dummyElement,unminified,file,playerType,experienceID,secureConnections,null);}};brightcove.generateExperienceElement=function(experience,publisherID,dummyElement,unminified,file,playerType,experienceID,secureConnections,data){var playerID;var bcPublisherID;var bcPlayerID;var bcEmbedID;var bcNewSmartPlayerID;var bcForceRefID;var parsedDataObject={};var bcIframe;var replaceElement;var container;var 
timeout=brightcove.flashTimeoutInterval;var cdnURL=secureConnections?brightcove.secureCDNURL:brightcove.cdnURL;var isKKPod=cdnURL.indexOf('.co.jp')>0;var eolExtensionList=[];if(experience.params.enableMapping!==false&&data&&data.statusCode===200&&data.response&&data.response!==""){try{if(experience.params.playerKey){if(window.JSON){parsedDataObject=JSON.parse(data.response)[experience.params.playerKey];}else{parsedDataObject=brightcove.json_parse(data.response)[experience.params.playerKey];}}else{playerID=experience.params.playerId||experience.params.playerID||experience.params.playerid;if(window.JSON){parsedDataObject=JSON.parse(data.response);}else{parsedDataObject=brightcove.json_parse(data.response);}}}catch(ex){if(window.console){console.log('Error: Unable to parse mapping file: '+ex.message);}} -if(!experience.params.playerKey){for(var mappedPlayerKey in parsedDataObject){var playerMapItem;if(parsedDataObject.hasOwnProperty(mappedPlayerKey)){playerMapItem=parsedDataObject[mappedPlayerKey];if(playerMapItem.smart_player_id&&playerMapItem.smart_player_id===playerID){parsedDataObject=playerMapItem;break;}}}} -if(parsedDataObject&&(((!parsedDataObject.hasOwnProperty('enable_mapping')||parsedDataObject.enable_mapping)&&experience.params.enableMapping!==false)||((parsedDataObject.hasOwnProperty('enable_mapping')&&parsedDataObject.enable_mapping===false)&&experience.params.enableMapping===true))){bcPublisherID=parsedDataObject.account_id?parsedDataObject.account_id:publisherID;bcPlayerID=parsedDataObject.player_id;bcEmbedID=parsedDataObject.embed_id||'default';bcNewSmartPlayerID=parsedDataObject.new_smart_player_id;bcForceRefID=parsedDataObject.force_ref_id||false;} -var isInExtensionList=true;if(Array.prototype.indexOf){isInExtensionList=eolExtensionList.indexOf(String(publisherID))!==-1;}else{for(var i=0;i');}else if(bcPublisherID&&bcPlayerID){bcIframe=brightcove.getBCPlayerIframe(experience,bcPublisherID,bcPlayerID,bcEmbedID);experienceElement=brightcove.createIFrame(experience);brightcove.copyNodeProperties(dummyElement,experienceElement);replaceElement=brightcove.getElementByClassNameCrossBrowser(experience.params.identifierClassName);experienceElement.setAttribute('allowFullScreen','');experienceElement.setAttribute('webkitAllowFullScreen','');experienceElement.setAttribute('mozillaAllowFullScreen','');if(replaceElement&&replaceElement.parentNode){replaceElement.parentNode.replaceChild(experienceElement,replaceElement);} -brightcove.experiences[experienceID]=experienceElement;experienceElement.src=bcIframe;}else{var iframeDoc;experienceElement=brightcove.createIFrame(experience);brightcove.copyNodeProperties(dummyElement,experienceElement);replaceElement=brightcove.getElementByClassNameCrossBrowser(experience.params.identifierClassName);if(replaceElement&&replaceElement.parentNode){replaceElement.parentNode.replaceChild(experienceElement,replaceElement);} -brightcove.experiences[experienceID]=experienceElement;iframeDoc=experienceElement.contentDocument||experienceElement.contentWindow.document;iframeDoc.write('');} -brightcove.renderExperienceInProcess=false;if(brightcove.renderExperienceQueue.length>0){brightcove.renderExperienceQueue.shift()();}else if(brightcove.createExperiencesQueue.length>0){brightcove.createExperiencesQueue.shift()();} -brightcove.timeouts[experience.id]=setTimeout(function(){brightcove.handleExperienceTimeout(experienceID);},timeout);};brightcove.copyNode=function(elementFrom){var 
experienceElement=elementFrom.cloneNode(true);brightcove.copyNodeProperties(elementFrom,experienceElement);return experienceElement;};brightcove.copyNodeProperties=function(elementFrom,elementTo){var propertyItem;var propertyList=['name','title','height','width','border','onclick','ondblclick','ondrag','ondragend','ondragenter','ondragleave','ondragover','ondragstart','ondrop','onmousedown','onmousemove','onmouseout','onmouseover','onmouseup','onmousewheel','onscroll','onwheel'];for(propertyItem in propertyList){if(elementFrom[propertyList[propertyItem]]){elementTo[propertyList[propertyItem]]=elementFrom[propertyList[propertyItem]];}} -if(elementTo.className!==elementFrom.className){elementTo.className+=' '+elementFrom.className;}};brightcove.getElementsByClassName=function(selector){var retnode=[];var elem=document.getElementsByTagName('*');for(var i=0;i-1)retnode.push(elem[i]);} -return retnode;};brightcove.getElementByClassNameCrossBrowser=function(selector){var searchElement;if(document.querySelectorAll){searchElement=document.querySelectorAll('.'+selector)[0];}else{searchElement=brightcove.getElementsByClassName(selector)[0];} -return searchElement;};brightcove.createDummyElement=function(playerType,experience,secureConnections){var dummyElement;var containerID;var flashObjectParams;var flashEmbedStr;experience.className+=' '+experience.params.identifierClassName;if(playerType===brightcove.playerType.NO_SUPPORT){containerID='_container'+experience.id;dummyElement=brightcove.createElement('span');if(experience.params.height.charAt(experience.params.height.length-1)=="%"){dummyElement.style.display='block';}else{dummyElement.style.display='inline-block';} -dummyElement.className=experience.className;dummyElement.id=containerID;}else if(playerType===brightcove.playerType.HTML||playerType===brightcove.playerType.FLASH_IFRAME){dummyElement=brightcove.createIFrame(experience);if(experience&&experience.parentNode){experience.parentNode.replaceChild(dummyElement,experience);}}else{if(brightcove.hasActiveX){flashEmbedStr=brightcove.getDummyFlashEmbedString(experience);containerID='_container'+experience.id;dummyElement=brightcove.createFlashEmbed(containerID,experience.params.height);if(experience&&experience.parentNode){experience.parentNode.replaceChild(dummyElement,experience);dummyElement.innerHTML=flashEmbedStr;}}else{flashObjectParams=brightcove.getFlashObjectParams(experience);dummyElement=brightcove.createFlashObject(flashObjectParams);if(experience&&experience.parentNode){experience.parentNode.replaceChild(dummyElement,experience);}}} -return dummyElement;};brightcove.getDummyFlashEmbedString=function(experience){return'' -+'';};brightcove.makeMetricsErrorCall=function(publisherID,errorType){var img=document.createElement('img');var metricsUrl=brightcove.metricsBaseUrl['production'];img.src=metricsUrl+'?'+'account='+publisherID+'&domain=videocloud'+'&platform=as3'+'&event=error'+'&error_code='+errorType;};brightcove.createIFrame=function(experience){var iframeElement=brightcove.createElement('iframe');iframeElement.id=experience.id;iframeElement.width=experience.params.width;iframeElement.height=experience.params.height;iframeElement.className=experience.className;iframeElement.frameborder=0;iframeElement.scrolling="no";iframeElement.style.borderStyle="none";return iframeElement;};brightcove.getFlashEmbedString=function(experience,secureConnections){var options='';var flashParams=experience.flashParams;for(var pOption in flashParams){options+='';} -var 
protocol=secureConnections?"https":"http";return'' -+options -+'';};brightcove.getFlashObjectParams=function(experience,file){var experienceObject={};experienceObject.type='application/x-shockwave-flash';experienceObject.data=file;experienceObject.id=experience.params.flashID;experienceObject.width=experience.params.width;experienceObject.height=experience.params.height;experienceObject.className=experience.className;experienceObject.seamlesstabbing=experience.flashParams.seamlessTabbing;for(var config in experience.flashParams){experienceObject["flashParam_"+config]=experience.flashParams[config];} -return experienceObject;};brightcove.createFlashEmbed=function(experienceId,height){var container=brightcove.createElement('span');if(height.charAt(height.length-1)=="%"){container.style.display='block';}else{container.style.display='inline-block';} -container.id=experienceId;return container;};brightcove.createFlashObject=function(playerConfig){var experienceElement=brightcove.createElement('object');experienceElement.type=playerConfig.type;if(playerConfig.data){experienceElement.data=playerConfig.data;} -experienceElement.id=playerConfig.id;experienceElement.width=playerConfig.width;experienceElement.height=playerConfig.height;experienceElement.className=playerConfig.className;experienceElement.setAttribute("seamlesstabbing",playerConfig.seamlessTabbing);var tempParam;var flashParamPrefix="flashParam_";for(var config in playerConfig){var flashParamInd=config.indexOf(flashParamPrefix);if(flashParamInd==0){tempParam=brightcove.createElement('param');tempParam.name=config.substring(flashParamPrefix.length);tempParam.value=playerConfig[config];experienceElement.appendChild(tempParam);}} -return experienceElement;};brightcove.handleExperienceTimeout=function(pID){brightcove.executeErrorHandlerForExperience(brightcove.experienceObjects[pID],{type:"templateError",errorType:"serviceUnavailable",code:brightcove.errorCodes.SERVICE_UNAVAILABLE,info:pID});};brightcove.reportPlayerLoad=function(pID){var timeout=brightcove.timeouts[pID];if(timeout){clearTimeout(timeout);}};brightcove.reportUpgradeRequired=function(pExperience){brightcove.executeErrorHandlerForExperience(pExperience,{type:"templateError",errorType:"upgradeRequiredForPlayer",code:brightcove.errorCodes.UPGRADE_REQUIRED_FOR_PLAYER,info:pExperience.id});};brightcove.checkFlashSupport=function(){var hasActiveX=(window.ActiveXObject!=undefined);return(hasActiveX)?brightcove.checkFlashSupportIE():brightcove.checkFlashSupportStandard();};brightcove.checkFlashSupportIE=function(){var versions;try{var flash=new ActiveXObject("ShockwaveFlash.ShockwaveFlash.7");var version=flash.GetVariable('$version');versions=/ ([0-9]+),([0-9]+),([0-9]+),/.exec(version);}catch(exception){return null;} -return{majorVersion:versions[1],majorRevision:versions[2],minorRevision:versions[3]};};brightcove.isMetroIE=function(){var version=0;if(navigator.appVersion.indexOf("MSIE")!=-1){var appSplit=navigator.appVersion.split("MSIE");if(appSplit.length>1){version=parseFloat(appSplit[1]);}} -if(version<10||isNaN(version)){return false;} -var activeXSupport=false;try{activeXSupport=!!new ActiveXObject("htmlfile");}catch(e){activeXSupport=false;} -return!activeXSupport;};brightcove.checkFlashSupportStandard=function(){var versions;var majorVersion;var majorRevision;var minorRevision;try{if(typeof navigator.plugins!='undefined'&&navigator.plugins.length>0){if(navigator.plugins["Shockwave Flash 2.0"]||navigator.plugins["Shockwave Flash"]){var 
swfVersion=navigator.plugins["Shockwave Flash 2.0"]?" 2.0":"";var description=navigator.plugins["Shockwave Flash"+swfVersion].description;var filename=navigator.plugins["Shockwave Flash"+swfVersion].filename;if(filename.match){if(filename.toLowerCase().match(/lite/)){throw new Error();}} -versions=description.split(" ");majorVersion=versions[2].split(".")[0];majorRevision=versions[2].split(".")[1];minorRevision=versions[3];if(minorRevision==""){minorRevision=versions[4];} -if(minorRevision[0]=="d"){minorRevision=minorRevision.substring(1);}else if(minorRevision[0]=="r"){minorRevision=minorRevision.substring(1);if(minorRevision.indexOf("d")>0){minorRevision=minorRevision.substring(0,minorRevision.indexOf("d"));}}}else{throw new Error();}}else{return null;}}catch(exception){return null;} -return{majorVersion:majorVersion,majorRevision:majorRevision,minorRevision:minorRevision};};brightcove.checkHtmlSupport=function(){var v=brightcove.createElement('video');var videoSupport=true;if(!brightcove.userAgent.match(new RegExp("android","i"))){videoSupport=!!(v.canPlayType&&v.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"').replace(/no/,''));} -if(brightcove.userAgent.match(/BlackBerry.*Version\/6\.0/)){return false;} -var canvasSupport=!!brightcove.createElement('canvas').getContext;return videoSupport&&canvasSupport;};brightcove.isSupportedHTMLDevice=function(pUAString){var types=["iPad","iPhone","iPod","android","Silk","IEMobile"];var numTypes=types.length;var uaString=pUAString||brightcove.userAgent;for(var i=0;i1){var trace=window;for(var i=0;i0){throw new Error('Invalid string. Length must be a multiple of 4')} -var len=b64.length -placeHolders=b64.charAt(len-2)==='='?2:b64.charAt(len-1)==='='?1:0 -arr=new Arr(b64.length*3/4-placeHolders) -l=placeHolders>0?b64.length-4:b64.length -var L=0 -function push(v){arr[L++]=v} -for(i=0,j=0;i>16) -push((tmp&0xFF00)>>8) -push(tmp&0xFF)} -if(placeHolders===2){tmp=(decode(b64.charAt(i))<<2)|(decode(b64.charAt(i+1))>>4) -push(tmp&0xFF)}else if(placeHolders===1){tmp=(decode(b64.charAt(i))<<10)|(decode(b64.charAt(i+1))<<4)|(decode(b64.charAt(i+2))>>2) -push((tmp>>8)&0xFF) -push(tmp&0xFF)} -return arr} -return{toByteArray:b64ToByteArray}};brightcove.forceRefID=function(experience){var videoID=experience.params.videoID;var videoPlayer=experience.params['@videoPlayer'];var playlistID=experience.params['@videoList'];var lineupID=experience.params.lineupID;var playlistTabs=experience.params['@playlistTabs'];var playlistCombo=experience.params['@playlistCombo'];var playlistVideoFeatured=experience.params['@videoList.featured'];var playlistTabsFeatured=experience.params['@playlistTabs.featured'];var playlistComboFeatured=experience.params['@playlistCombo.featured'];var playlistArray;var playlistJoined;if(playlistTabs){playlistArray=playlistTabs.split(',');}else if(playlistCombo){playlistArray=playlistCombo.split(',');} -if(playlistArray){for(var i=0;i=0)){iframeSource+='videoId='+videoID+'&';}else if(videoID){iframeSource+='videoId=ref:'+videoID+'&';} -if(playlistID&&playlistVideoID){if(playlistVideoID&&(isNaN(playlistVideoID)&&playlistVideoID.indexOf('ref:')<0)){playlistVideoID='ref:'+playlistVideoID;} -iframeSource+='playlistVideoId='+playlistVideoID+'&';} -if(experience.params.language&&experience.params.language==='jp'){iframeSource+='language=ja&';}else if(experience.params.language){iframeSource+='language='+experience.params.language+'&';} 
-if(experience.params.autoStart&&experience.params.autoStart!='false'){iframeSource+='autoplay='+experience.params.autoStart+'&';} -return iframeSource;};if(/KHTML/i.test(navigator.userAgent)){var checkLoad=setInterval(function(){if(/loaded|complete/.test(document.readyState)){clearInterval(checkLoad);brightcove.createExperiencesPostLoad();}},70);document.addEventListener('load',brightcove.createExperiencesPostLoad,false);} -if(typeof document.addEventListener!='undefined'){document.addEventListener('DOMContentLoaded',brightcove.createExperiencesPostLoad,false);document.addEventListener('load',brightcove.createExperiencesPostLoad,false);window.addEventListener("message",brightcove.respondToMessages,false);}else if(typeof window.attachEvent!='undefined'){window.attachEvent('onload',brightcove.createExperiencesPostLoad);}else{alert(brightcove.i18n.BROWSER_TOO_OLD);}} -brightcove.json_parse=(function(){"use strict";var state,stack,container,key,value,escapes={'\\':'\\','"':'"','/':'/','t':'\t','n':'\n','r':'\r','f':'\f','b':'\b'},string={go:function(){state='ok';},firstokey:function(){key=value;state='colon';},okey:function(){key=value;state='colon';},ovalue:function(){state='ocomma';},firstavalue:function(){state='acomma';},avalue:function(){state='acomma';}},number={go:function(){state='ok';},ovalue:function(){state='ocomma';},firstavalue:function(){state='acomma';},avalue:function(){state='acomma';}},action={'{':{go:function(){stack.push({state:'ok'});container={};state='firstokey';},ovalue:function(){stack.push({container:container,state:'ocomma',key:key});container={};state='firstokey';},firstavalue:function(){stack.push({container:container,state:'acomma'});container={};state='firstokey';},avalue:function(){stack.push({container:container,state:'acomma'});container={};state='firstokey';}},'}':{firstokey:function(){var pop=stack.pop();value=container;container=pop.container;key=pop.key;state=pop.state;},ocomma:function(){var pop=stack.pop();container[key]=value;value=container;container=pop.container;key=pop.key;state=pop.state;}},'[':{go:function(){stack.push({state:'ok'});container=[];state='firstavalue';},ovalue:function(){stack.push({container:container,state:'ocomma',key:key});container=[];state='firstavalue';},firstavalue:function(){stack.push({container:container,state:'acomma'});container=[];state='firstavalue';},avalue:function(){stack.push({container:container,state:'acomma'});container=[];state='firstavalue';}},']':{firstavalue:function(){var pop=stack.pop();value=container;container=pop.container;key=pop.key;state=pop.state;},acomma:function(){var pop=stack.pop();container.push(value);value=container;container=pop.container;key=pop.key;state=pop.state;}},':':{colon:function(){if(Object.hasOwnProperty.call(container,key)){throw new SyntaxError('Duplicate key "'+key+'"');} 
-state='ovalue';}},',':{ocomma:function(){container[key]=value;state='okey';},acomma:function(){container.push(value);state='avalue';}},'true':{go:function(){value=true;state='ok';},ovalue:function(){value=true;state='ocomma';},firstavalue:function(){value=true;state='acomma';},avalue:function(){value=true;state='acomma';}},'false':{go:function(){value=false;state='ok';},ovalue:function(){value=false;state='ocomma';},firstavalue:function(){value=false;state='acomma';},avalue:function(){value=false;state='acomma';}},'null':{go:function(){value=null;state='ok';},ovalue:function(){value=null;state='ocomma';},firstavalue:function(){value=null;state='acomma';},avalue:function(){value=null;state='acomma';}}};function debackslashify(text){return text.replace(/\\(?:u(.{4})|([^u]))/g,function(a,b,c){return b?String.fromCharCode(parseInt(b,16)):escapes[c];});} -return function(source,reviver){var r,tx=/^[\x20\t\n\r]*(?:([,:\[\]{}]|true|false|null)|(-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?)|"((?:[^\r\n\t\\\"]|\\(?:["\\\/trnfb]|u[0-9a-fA-F]{4}))*)")/;state='go';stack=[];try{for(;;){r=tx.exec(source);if(!r){break;} -if(r[1]){action[r[1]][state]();}else if(r[2]){value=+r[2];number[state]();}else{value=debackslashify(r[3]);string[state]();} -source=source.slice(r[0].length);}}catch(e){state=e;} -if(state!=='ok'||(/[^\x20\t\n\r]/).test(source)){throw state instanceof SyntaxError?state:new SyntaxError('JSON');} -return typeof reviver==='function'?(function walk(holder,key){var k,v,value=holder[key];if(value&&typeof value==='object'){for(k in value){if(Object.prototype.hasOwnProperty.call(value,k)){v=walk(value,k);if(v!==undefined){value[k]=v;}else{delete value[k];}}}} -return reviver.call(holder,key,value);}({'':value},'')):value;};}()); \ No newline at end of file diff --git a/spaces/noahzhy/KR_LPR_TF/README.md b/spaces/noahzhy/KR_LPR_TF/README.md deleted file mode 100644 index b7e05b66e8ea22e195d83e949c887f331f3216c5..0000000000000000000000000000000000000000 --- a/spaces/noahzhy/KR_LPR_TF/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: KR LPR TF -emoji: 🌍 -colorFrom: red -colorTo: green -sdk: gradio -sdk_version: 4.0.0 -app_file: app.py -pinned: false -license: bsd-2-clause ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/nomic-ai/daily_dialog/index.html b/spaces/nomic-ai/daily_dialog/index.html deleted file mode 100644 index 26c26cd1ebe86b6ab74af1881539433a07d17c14..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/daily_dialog/index.html +++ /dev/null @@ -1,42 +0,0 @@ - - - - daily_dialog - - - - -
      - -
      - - - \ No newline at end of file diff --git a/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/layers/csr_blocksparse_matrix.h b/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/layers/csr_blocksparse_matrix.h deleted file mode 100644 index be51573515e4433758ea3416265504308e2440f7..0000000000000000000000000000000000000000 --- a/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/layers/csr_blocksparse_matrix.h +++ /dev/null @@ -1,835 +0,0 @@ -/* - * Copyright 2021 Google LLC - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#ifndef LYRA_CODEC_SPARSE_MATMUL_LAYERS_CSR_BLOCKSPARSE_MATRIX_H_ -#define LYRA_CODEC_SPARSE_MATMUL_LAYERS_CSR_BLOCKSPARSE_MATRIX_H_ - -#include -#include -#include -#include -#include -#include - -#include "glog/logging.h" -// IWYU pragma: begin_exports -#include "sparse_matmul/compute/kernels_generic.h" -#include "sparse_matmul/compute/matmul.h" -#include "sparse_matmul/compute/thread_bounds.h" -#include "sparse_matmul/layers/masked_sparse_matrix.h" -#include "sparse_matmul/numerics/fixed_types.h" -#include "sparse_matmul/numerics/float16_types.h" -#include "sparse_matmul/os/coop_threads.h" -#include "sparse_matmul/vector/cache_aligned_vector.h" -// IWYU pragma: end_exports -#include "absl/memory/memory.h" - -namespace csrblocksparse { -// CsrBlockSparseMatrix stores a modified block compressed sparse row -// representation of a sparse matrix. The ordering of the weights is modified -// in the 16x1 and 1x1 cases so that a certain number (4 and 8 respectively) -// of columns of weights are stored contiguously before moving on to the next -// row. The 4x4 case stores each block contiguously. -// -// Currently it is constructed from a MaskedSparseMatrix which usees a dense -// binary mask representation. The construction generates the compressed -// representation. Further iterations will support a direct serialization -// of the compressed representation. -// -// MaskedSparseMatrix masked_matrix(rows, cols, existing_mask, existing_values) -// CsrBlockSparseMatrix matrix(masked_matrix) -// -// matrix.SpMV_bias(rhs, bias, &out); -// -// This class is thread compatible. -template -class CsrBlockSparseMatrix { - public: - CsrBlockSparseMatrix() {} - - // Reference used to indicate that this is an input and not an output. 
- CsrBlockSparseMatrix(const uint8_t* const& buffer, const std::size_t& len) { - ReadFromFlatBuffer(buffer, len); - ComputeRHSIndices(); - } - - template - CsrBlockSparseMatrix(const MaskedSparseMatrix& masked_matrix) { - sparsity_ = masked_matrix.sparsity(); - rows_ = masked_matrix.rows(); - cols_ = masked_matrix.cols(); - - DetermineBlockSize(masked_matrix); - - if (block_width_ == 1 && block_height_ == 1) - col_multiple_ = 8; - else - col_multiple_ = 1; - - std::vector weights(masked_matrix.values().begin(), - masked_matrix.values().end()); - - reduced_rows_ = (rows_ + block_height_ - 1) / block_height_; - rows_ = reduced_rows_ * block_height_; - reduced_cols_ = cols_ / block_width_; - - // Calculate the reduced CSR representation of the matrix. - std::vector reduced_mask(reduced_rows_ * reduced_cols_); - std::vector row_offsets = {0}; - int nnz = 0; - const auto& mask = masked_matrix.mask(); - for (int r = 0; r < reduced_rows_; ++r) { - for (int c = 0; c < reduced_cols_; ++c) { - int mask_val = mask[r * block_height_ * cols_ + c * block_width_]; - reduced_mask[r * reduced_cols_ + c] = mask_val; - nnz += mask_val; - } - row_offsets.push_back(nnz); - } - - // Make sure the reduced representation has the correct number of columns. - MakeColumnsMultiple(row_offsets, &reduced_mask, &weights); - - std::vector col_indices; - std::vector weights_csr; - std::vector nnz_per_row; - MaskAndWeightsToCsr(reduced_mask, weights, &nnz_per_row, &col_indices, - &weights_csr); - - // Generate column deltas from |col_indices|. - std::vector col_deltas; - for (int i = 0; i < col_indices.size(); ++i) { - // |col_indices| are used to index the RHS vector which is always float. - int64_t diff = sizeof(RhsType); - if (i == 0) - diff *= block_width_ * (col_indices[i]); - else - diff *= block_width_ * (col_indices[i] - col_indices[i - 1]); - - CHECK(diff < std::numeric_limits::max()) - << "delta between column indices in bytes " << diff - << " exceeded the maximum size of the DeltaType " - << std::numeric_limits::max(); - col_deltas.push_back(static_cast(diff)); - } - - // Because of pre-fetching we need some extra values at the end. - col_deltas.insert(col_deltas.end(), std::max(2, col_multiple_ + 1), 0); - nnz_per_row.insert(nnz_per_row.end(), 2, nnz_per_row.back()); - - weights_ = CacheAlignedVector(weights_csr); - col_deltas_ = CacheAlignedVector(col_deltas); - nnz_per_row_ = CacheAlignedVector(nnz_per_row); - ComputeRHSIndices(); - - num_threads_ = 0; - PrepareForThreads(1); - } - - // Constructor makes a matrix from the given weights, deltas and nnz, taking - // the other parameters from |src_matrix|. |cols| is the number of raw columns - // (NOT blocks) of the new matrix. 
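- // (Descriptive note, not in the original header: |new_deltas| are byte
- // offsets into the rhs, i.e. multiples of sizeof(RhsType), and |new_nnz|
- // holds one count per *block* row, so new_nnz.size() fixes reduced_rows_.
- // This is the constructor that SplitByColumn() and SplitByRow() below use
- // to assemble their slices.)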
- CsrBlockSparseMatrix( - const CsrBlockSparseMatrix& src_matrix, - const std::vector& new_weights, - const std::vector& new_deltas, const std::vector& new_nnz, - int cols) { - num_threads_ = 0; - col_multiple_ = src_matrix.col_multiple_; - block_width_ = src_matrix.block_width_; - block_height_ = src_matrix.block_height_; - reduced_rows_ = new_nnz.size(); - rows_ = reduced_rows_ * block_height_; - cols_ = cols; - reduced_cols_ = cols_ / block_width_; - weights_ = CacheAlignedVector(new_weights); - col_deltas_ = CacheAlignedVector(new_deltas); - nnz_per_row_ = CacheAlignedVector(new_nnz); - sparsity_ = 1.0f - static_cast(new_weights.size()) / (rows_ * cols_); - ComputeRHSIndices(); - name_ = src_matrix.name_; - PrepareForThreads(1); - } - - // Factory method takes a column slice out of *this and returns a sparse - // matrix that takes as inputs [|start_col|, |end_col|) of *this, and - // returns the same number of outputs, but only a partial result. - // If |keep_rhs_size|, then the new matrix takes the same rhs as the current - // matrix, but uses a subset of it, instead of expecting just the reduced rhs. - // If |start_col| > |end_col|, then we slice out the complement of the defined - // interval, ie [0, |end_col|) + [|start_col|, current end). - // NOTE That |start_col| and |end_col| are in raw column coordinates, NOT - // block units. - CsrBlockSparseMatrix SplitByColumn(int start_col, int end_col, - bool keep_rhs_size = false) const { - int weight_index = 0; - int delta_index = 0; - std::vector new_deltas; - std::vector new_weights; - std::vector new_nnz(reduced_rows_); - int col = 0; - int prev_col = keep_rhs_size ? 0 : start_col; - for (int r = 0; r < reduced_rows_; ++r) { - int reduced_col_count = nnz_per_row_[r]; - for (int c = 0; c < reduced_col_count; ++c, ++delta_index) { - col += col_deltas_[delta_index] / sizeof(RhsType); - if ((start_col < end_col && start_col <= col && col < end_col) || - (start_col > end_col && (col < end_col || col >= start_col))) { - ++new_nnz[r]; - new_deltas.push_back((col - prev_col) * sizeof(RhsType)); - prev_col = col; - for (int i = 0; i < block_width_ * block_height_; - ++i, ++weight_index) { - new_weights.push_back(weights_[weight_index]); - } - } else { - weight_index += block_width_ * block_height_; - } - } - } - int new_cols = keep_rhs_size ? cols_ : end_col - start_col; - return CsrBlockSparseMatrix(*this, new_weights, new_deltas, new_nnz, - new_cols); - } - - // Factory method takes a row slice out of *this and returns a sparse - // matrix that takes the sampe inputs as *this, and returns the outputs for - // the range [|start_row|, |end_row|). - // NOTE That |start_row| and |end_row| are in raw column coordinates, NOT - // block units. 
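- // Illustrative sketch (not from the original source): splitting the output
- // rows across two workers, assuming |m| is an existing CsrBlockSparseMatrix
- // whose row count is a multiple of 2 * block_height():
- //   auto top = m.SplitByRow(0, m.rows() / 2);
- //   auto bottom = m.SplitByRow(m.rows() / 2, m.rows());
- // Both halves consume the same rhs; each produces its own slice of the output.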
- CsrBlockSparseMatrix SplitByRow(int start_row, int end_row) const { - int start_reduced = start_row / block_height_; - int end_reduced = end_row / block_height_; - std::vector new_nnz(nnz_per_row_.data() + start_reduced, - nnz_per_row_.data() + end_reduced); - int weight_start = 0; - for (int r = 0; r < start_reduced; ++r) { - weight_start += nnz_per_row_[r]; - } - int weight_end = weight_start; - for (int r = start_reduced; r < end_reduced; ++r) { - weight_end += nnz_per_row_[r]; - } - int delta_start = 0; - for (int i = 0; i < weight_start; ++i) { - delta_start += col_deltas_[i]; - } - std::vector new_deltas(col_deltas_.data() + weight_start, - col_deltas_.data() + weight_end); - new_deltas[0] += delta_start; - int block_size = block_height_ * block_width_; - std::vector new_weights( - weights_.data() + weight_start * block_size, - weights_.data() + weight_end * block_size); - return CsrBlockSparseMatrix(*this, new_weights, new_deltas, new_nnz, cols_); - } - - // Combines adjacent row blocks, doubling the block height. - // This necessarily involves adding zero weights where the blocks don't align - // across adjacent pairs of rows, so use with caution, as the resulting matrix - // is most likely to run slower if very sparse to begin with. - // In the few cases where the blocks do mostly align, the resulting matmul - // could be much faster, as the number of reads of the rhs will be halved. - void DoubleBlockHeight() { - int new_rows = reduced_rows_ / 2; - std::vector new_nnz(new_rows); - std::vector new_rhs_indices; - std::vector new_weights; - int rhs_index1 = 0; - int rhs_index2 = 0; - int block_size = block_height_ * block_width_; - for (int r = 0; r < new_rows; ++r) { - int start_nnz = new_rhs_indices.size(); - rhs_index2 += nnz_per_row_[r * 2]; - int end1 = rhs_index1 + nnz_per_row_[r * 2]; - int end2 = rhs_index2 + nnz_per_row_[r * 2 + 1]; - // Run over a pair of rows with 2 iterators, combining blocks as we go, or - // padding with zeros where the block positions don't match. - while (rhs_index1 < end1 || rhs_index2 < end2) { - int col1 = rhs_index1 < end1 ? rhs_indices_[rhs_index1] : reduced_cols_; - int col2 = rhs_index2 < end2 ? rhs_indices_[rhs_index2] : reduced_cols_; - if (col1 < col2) { - // Need zero weights for row2 to pad out weights block. - new_rhs_indices.push_back(col1); - new_weights.insert(new_weights.end(), - weights_.data() + rhs_index1 * block_size, - weights_.data() + (rhs_index1 + 1) * block_size); - new_weights.insert(new_weights.end(), block_size, - static_cast(0.0f)); - ++rhs_index1; - } else if (col1 > col2) { - // Need zero weights for row1 to pad out weights block. - new_rhs_indices.push_back(col2); - new_weights.insert(new_weights.end(), block_size, - static_cast(0.0f)); - new_weights.insert(new_weights.end(), - weights_.data() + rhs_index2 * block_size, - weights_.data() + (rhs_index2 + 1) * block_size); - ++rhs_index2; - } else { - // Combine weights for both row1 and row2. 
- new_rhs_indices.push_back(col1); - new_weights.insert(new_weights.end(), - weights_.data() + rhs_index1 * block_size, - weights_.data() + (rhs_index1 + 1) * block_size); - new_weights.insert(new_weights.end(), - weights_.data() + rhs_index2 * block_size, - weights_.data() + (rhs_index2 + 1) * block_size); - ++rhs_index1; - ++rhs_index2; - } - } - rhs_index1 = rhs_index2; - new_nnz[r] = new_rhs_indices.size() - start_nnz; - } - block_height_ *= 2; - reduced_rows_ /= 2; - weights_ = CacheAlignedVector(new_weights); - rhs_indices_ = CacheAlignedVector(new_rhs_indices); - nnz_per_row_ = CacheAlignedVector(new_nnz); - sparsity_ = 1.0f - static_cast(new_weights.size()) / (rows_ * cols_); - ComputeColDeltas(); - if (num_threads_ > 0) { - int num_threads = num_threads_; - num_threads_ = 0; - PrepareForThreads(num_threads); - } - } - - // Allocates memory and fills buffer. - // Caller is responsible for the memory de-allocation. - // TODO(b/189958858): Both Read and Write need to eventually handle the - // different possible HalfType and DeltaType values, but punting for now as - // there is only one supported combination. - std::size_t WriteToFlatBuffer(std::string* csr_flatbuffer) { - std::size_t bytes = 0; - bytes += FixedParameterSize(); - bytes += weights_.size() * sizeof(WeightType); - bytes += col_deltas_.size() * sizeof(DeltaType); - bytes += nnz_per_row_.size() * sizeof(int); - - uint8_t* bytes_ptr_ptr = - reinterpret_cast(CHECK_NOTNULL(malloc(bytes))); - - int* int_bytes_ptr = reinterpret_cast(bytes_ptr_ptr); - - *int_bytes_ptr++ = rows_; - *int_bytes_ptr++ = cols_; - *int_bytes_ptr++ = reduced_rows_; - *int_bytes_ptr++ = reduced_cols_; - *int_bytes_ptr++ = block_width_; - *int_bytes_ptr++ = block_height_; - *int_bytes_ptr++ = col_multiple_; - *int_bytes_ptr++ = num_threads_; - *int_bytes_ptr++ = weights_.size(); - *int_bytes_ptr++ = col_deltas_.size(); - *int_bytes_ptr++ = nnz_per_row_.size(); - - float* float_bytes_ptr = reinterpret_cast(int_bytes_ptr); - *float_bytes_ptr++ = sparsity_; - - uint8_t* bytes_ptr = reinterpret_cast(float_bytes_ptr); - - memcpy(bytes_ptr, weights_.data(), weights_.size() * sizeof(WeightType)); - bytes_ptr += weights_.size() * sizeof(WeightType); - - memcpy(bytes_ptr, col_deltas_.data(), - col_deltas_.size() * sizeof(DeltaType)); - bytes_ptr += col_deltas_.size() * sizeof(DeltaType); - - memcpy(bytes_ptr, nnz_per_row_.data(), nnz_per_row_.size() * sizeof(int)); - bytes_ptr += nnz_per_row_.size() * sizeof(int); - - csr_flatbuffer->resize(bytes); - csr_flatbuffer->assign(reinterpret_cast(bytes_ptr_ptr), bytes); - free(bytes_ptr_ptr); - - return bytes; - } - - void ReadFromFlatBuffer(const uint8_t* const& bytes, const std::size_t& len) { - CHECK_GE(len, FixedParameterSize()); - - const int* int_bytes_ptr = reinterpret_cast(bytes); - rows_ = *int_bytes_ptr++; - cols_ = *int_bytes_ptr++; - reduced_rows_ = *int_bytes_ptr++; - reduced_cols_ = *int_bytes_ptr++; - block_width_ = *int_bytes_ptr++; - block_height_ = *int_bytes_ptr++; - col_multiple_ = *int_bytes_ptr++; - int num_threads = *int_bytes_ptr++; - int32_t weights_size = *int_bytes_ptr++; - int32_t col_deltas_size = *int_bytes_ptr++; - int32_t nnz_per_row_size = *int_bytes_ptr++; - - // Make sure negative sizes don't mess things up. 
- weights_size = std::max(0, weights_size); - col_deltas_size = std::max(0, col_deltas_size); - nnz_per_row_size = std::max(0, nnz_per_row_size); - - const float* float_bytes_ptr = - reinterpret_cast(int_bytes_ptr); - sparsity_ = *float_bytes_ptr++; - - std::size_t total_bytes = - FixedParameterSize() + weights_size * sizeof(WeightType) + - col_deltas_size * sizeof(DeltaType) + nnz_per_row_size * sizeof(int); - - CHECK_EQ(total_bytes, len) - << "total bytes: " << total_bytes << ", actual len given: " << len; - - const uint8_t* bytes_ptr = - reinterpret_cast(float_bytes_ptr); - std::vector weights_raw(weights_size); - memcpy(weights_raw.data(), bytes_ptr, weights_size * sizeof(WeightType)); - weights_ = CacheAlignedVector(weights_raw); - bytes_ptr += weights_size * sizeof(WeightType); - - std::vector deltas_raw(col_deltas_size); - memcpy(deltas_raw.data(), bytes_ptr, col_deltas_size * sizeof(DeltaType)); - col_deltas_ = CacheAlignedVector(deltas_raw); - bytes_ptr += col_deltas_size * sizeof(DeltaType); - - std::vector nnz_raw(nnz_per_row_size); - memcpy(nnz_raw.data(), bytes_ptr, nnz_per_row_size * sizeof(int)); - nnz_per_row_ = CacheAlignedVector(nnz_raw); - num_threads_ = 0; - PrepareForThreads(num_threads); - } - - // Multiply a Sparse matrix by a possibly dense matrix. Often the matrix is - // a vector with a small number of columns, hence the term "fat vector". - // 1x1 and 4x4 have specializations for output columns (ie fatness) > 5, - // and often achieve twice as many GFlops when multiplying a right hand side - // that has 5 or more columns. (Best is a multiple of 5). - // 16x1 doesn't have enough registers and just loops over the width 1 kernel. - // - // |rhs| and |out| are COLUMN MAJOR. - - // Fast Tuples WeightType, BiasType, RhsType, OutType are: - // (float, float, float, float) - // (bfloat16, float, float, float) - // and only on ARM64. All other cases use a slow generic implementation. 
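- // Illustrative multi-threaded sketch (not from the original source; the
- // SpinBarrier constructor signature is assumed from coop_threads.h):
- //   int n = m.PrepareForThreads(4);
- //   csrblocksparse::SpinBarrier barrier(n);
- //   // each worker t in [0, n) then calls:
- //   //   m.SpMM_bias(rhs, bias, &out, /*relu=*/false, /*tid=*/t, &barrier);
- // Each thread writes only the row range assigned to it by PrepareForThreads().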
- template - void SpMM_bias(const RhsClass& rhs, const BiasClass& bias, OutClass* out, - bool relu = false, int tid = 0, - SpinBarrier* barrier = nullptr) const { - static_assert(std::is_same::value, - "Rhs types must match"); - CHECK_LT(tid, num_threads_); - CHECK_EQ(rhs.cols(), out->cols()); - CHECK_EQ(rhs.rows(), cols_); - CHECK_GE(out->rows(), rows_); - int cols_to_go = out->cols(); - int rhs_index = *thread_bounds_.OffsetRhsIndices(rhs_indices_.data(), tid); - const RhsType* rhs_ptr = rhs.data() + rhs_index * block_height_; - OutType* out_ptr = thread_bounds_.OffsetOutput(out->data(), tid); - const WeightType* weights_ptr = - thread_bounds_.OffsetWeights(weights_.data(), tid); - const DeltaType* delta_ptr = - thread_bounds_.OffsetRhsIndices(col_deltas_.data(), tid); - int offset = *delta_ptr / sizeof(RhsType); - rhs_ptr -= offset; - const int* nnz_ptr = nnz_per_row_.data() + thread_bounds_.StartRow(tid); - int assigned_rows = - thread_bounds_.StartRow(tid + 1) - thread_bounds_.StartRow(tid); - const BiasType* bias_ptr = thread_bounds_.OffsetBias(bias.data(), tid); - - while (cols_to_go > 0) { - if (block_width_ == 4 && block_height_ == 4) { - if (cols_to_go >= 5) { - detail::SpMM5_4x4( - weights_ptr, delta_ptr, nnz_ptr, rhs_ptr, bias_ptr, out_ptr, - assigned_rows, out->col_stride(), rhs.col_stride(), relu); - } else { - detail::SpMV_4x4( - weights_ptr, delta_ptr, nnz_ptr, rhs_ptr, bias_ptr, out_ptr, - assigned_rows, out->col_stride(), rhs.col_stride(), relu); - } - } else { - if (cols_to_go >= 5) { - detail::SpMM5_1x1( - weights_ptr, delta_ptr, nnz_ptr, rhs_ptr, bias_ptr, out_ptr, - assigned_rows, out->col_stride(), rhs.col_stride(), relu); - } else { - detail::SpMV_1x1( - weights_ptr, delta_ptr, nnz_ptr, rhs_ptr, bias_ptr, out_ptr, - assigned_rows, out->col_stride(), rhs.col_stride(), relu); - } - } - - if (cols_to_go >= 5) { - cols_to_go -= 5; - rhs_ptr += rhs.col_stride() * 5; - out_ptr += out->col_stride() * 5; - } else { - cols_to_go--; - rhs_ptr += rhs.col_stride(); - out_ptr += out->col_stride(); - } - if (barrier) barrier->barrier(); - } - } - template - void MatVec(const MVRhsType* rhs, const MVBiasType* bias, bool relu, int tid, - int replicas, int output_stride, OutType* output) { - CHECK_LT(tid, num_threads_); - CHECK_EQ(block_width_, 4) << "Block width must be 4!"; - if (block_height_ == 8) { - matmul_.MatVec8x4( - thread_bounds_.OffsetWeights(weights_.cast_data(), tid), rhs, - thread_bounds_.OffsetBias(bias, tid), nnz_per_row_.data(), - thread_bounds_.OffsetRhsIndices(rhs_indices_.data(), tid), - thread_bounds_.StartRow(tid), thread_bounds_.StartRow(tid + 1), relu, - replicas, output_stride, thread_bounds_.OffsetOutput(output, tid)); - } else { - CHECK_EQ(block_height_, 4) << "Block height must be 4 or 8!"; - matmul_.MatVec4x4( - thread_bounds_.OffsetWeights(weights_.cast_data(), tid), rhs, - thread_bounds_.OffsetBias(bias, tid), nnz_per_row_.data(), - thread_bounds_.OffsetRhsIndices(rhs_indices_.data(), tid), - thread_bounds_.StartRow(tid), thread_bounds_.StartRow(tid + 1), relu, - replicas, output_stride, thread_bounds_.OffsetOutput(output, tid)); - } - } - - int rows() const { return rows_; } - int cols() const { return cols_; } - int block_height() const { return block_height_; } - int block_width() const { return block_width_; } - float sparsity() const { return sparsity_; } - int num_threads() const { return num_threads_; } - const ThreadBounds& thread_bounds() const { return thread_bounds_; } - const CacheAlignedVector& rhs_indices() const { - return rhs_indices_; 
- } - const std::string& name() const { return name_; } - void set_name(const std::string& name) { name_ = name; } - const std::vector& split_points() const { - return thread_bounds_.row_starts(); - } - - std::size_t bytes() const { - return weights_.size() * sizeof(WeightType) + - col_deltas_.size() * sizeof(DeltaType) + - nnz_per_row_.size() * sizeof(int); - } - - // Multiplies a sparse matrix by a possibly dense matrix, as SpMM_bias above, - // and then samples from the output (softmax distribution) layer. - template - typename std::enable_if::value, int>::type - SpMM_bias_Sample(const RhsClass& rhs, const BiasClass& bias, OutClass* out, - float temperature, int tid, SpinBarrier* barrier, - std::minstd_rand* gen, - CacheAlignedVector* scratch) const { - SpMM_bias(rhs, bias, out, /*relu=*/false, tid, barrier); - return out->Sample(temperature, gen, scratch); - } - // Fixed32 version. - template - typename std::enable_if::value, int>::type - SpMM_bias_Sample(const RhsClass& rhs, const BiasClass& bias, OutClass* out, - float temperature, int tid, SpinBarrier* barrier, - std::minstd_rand* gen, - CacheAlignedVector* scratch) const { - // We don't pass the barrier on, as we have more work to do. - SpMM_bias(rhs, bias, out, /*relu=*/false, tid); - return out->ReducingSample(gen, scratch, tid, temperature, barrier); - } - - void Print() const { - std::cout << "Weights\n"; - weights_.Print(); - std::cout << std::endl; - std::cout << "Deltas\n"; - col_deltas_.Print(); - std::cout << std::endl; - std::cout << "nnz\n"; - nnz_per_row_.Print(); - std::cout << std::endl; - } - - // Split the computation amongst threads by rows based on the number of - // non zeros, with the addition of a constant to account for the work of the - // bias and the horizontal add at the end, and also guarantees that each - // thread writes only whole cache lines, based on the size of OutType. - // The |cache_line_size| arg is used only for testing. Normally it is provided - // through the architecture #defines. - // Each thread gets a contiguous row range (|split_points|). - // Thread t does rows [ split_points[t], split_points[t + 1] ) - // Each thread also needs to know how many non zeros were before it to skip - // (|nnz_to_skip|). And finally it also needs to know what the offset into - // the rhs vector would have been at the split point (|rhs_to_skip|). - // - // Some tricky corner cases where the number of non-zeros doesn't split - // nicely amongst the number of requested threads are not handled and default - // to one thread; these cases are only going to happen in tests and not in - // the matrices that correspond in real models. - // - // Returns the maximum number of threads that can be used; <= |num_threads|. - template - int PrepareForThreads(int num_threads, int cache_line_size = -1) { - CHECK_GT(num_threads, 0); - // we've already prepared for this number of threads, nothing to do - if (num_threads == num_threads_) return num_threads_; - - num_threads_ = num_threads; - thread_bounds_.PrepareForThreads( - block_width_, block_height_, num_threads_, - ReducedRowsPerCacheLine(cache_line_size), reduced_rows_, - nnz_per_row_.data()); - return num_threads_; - } - - // Computes and stores the |rhs_indices_| from the |col_deltas_|. 
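- // (Descriptive note, not in the original header: |col_deltas_| stores the
- // byte step between consecutive rhs reads, while |rhs_indices_| stores the
- // corresponding absolute block-column index, i.e. the cumulative element
- // offset divided by block_width_. Only |col_deltas_| is serialized;
- // |rhs_indices_| is rebuilt after deserialization.)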
- void ComputeRHSIndices() { - std::vector cumulative_deltas = CumulativeColDeltas(); - std::vector rhs_indices(cumulative_deltas.size() + - reduced_rows_); - int total_indices = 0; - int delta_index = 0; - for (int r = 0; r < reduced_rows_; ++r) { - for (int n = 0; n < nnz_per_row_[r]; ++n, ++delta_index) { - rhs_indices[total_indices++] = - cumulative_deltas[delta_index] / block_width_; - } - } - rhs_indices_ = CacheAlignedVector(rhs_indices); - } - - // Computes and stores the |col_deltas_| from the |rhs_indices_|. - void ComputeColDeltas() { - std::vector col_deltas(rhs_indices_.size()); - int prev_index = 0; - for (int i = 0; i < rhs_indices_.size(); ++i) { - int offset = rhs_indices_[i] - prev_index; - prev_index = rhs_indices_[i]; - col_deltas[i] = offset * block_width_ * sizeof(RhsType); - } - col_deltas_ = CacheAlignedVector(col_deltas); - } - - // Computes and returns the inclusive prefix sum of the deltas, ie absolute - // positions. - std::vector CumulativeColDeltas() const { - std::vector cum_col_deltas(col_deltas_.size()); - for (int i = 0; i < col_deltas_.size(); ++i) { - cum_col_deltas[i] = col_deltas_[i] / sizeof(RhsType); - if (i > 0) cum_col_deltas[i] += cum_col_deltas[i - 1]; - } - return cum_col_deltas; - } - - private: - constexpr std::size_t FixedParameterSize() const { - return sizeof(int) // rows - + sizeof(int) // cols - + sizeof(int) // reduced_rows - + sizeof(int) // reduced_cols - + sizeof(int) // block_width - + sizeof(int) // block_height - + sizeof(float) // sparsity - + sizeof(int) // col_multiple - + sizeof(int) // num_threads_ - + sizeof(int) // weights_.size() - + sizeof(int) // col_deltas_.size() - + sizeof(int); // nnz_per_row_.size() - } - // Possible block sizes are only those that are supported by the computation - // default is 1x1, other options are 4x4 and 16x1. - template - void DetermineBlockSize(const MaskedSparseMatrix& masked_matrix) { - const std::vector> kPreferredOrder = {{4, 4}}; - int rows = masked_matrix.rows(); - int cols = masked_matrix.cols(); - - for (const auto& block_size : kPreferredOrder) { - int block_height, block_width; - std::tie(block_height, block_width) = block_size; - if (cols % block_width != 0) continue; - - int reduced_rows = (rows + block_height - 1) / block_height; - int reduced_cols = cols / block_width; - - // For each possible block, confirm that it is either all 0s or all 1s. - bool all_same = true; - const auto& mask = masked_matrix.mask(); - for (int r = 0; r < reduced_rows; ++r) { - for (int c = 0; c < reduced_cols; ++c) { - int val = mask[r * block_height * cols + c * block_width]; - for (int i = 0; i < block_height; ++i) { - for (int j = 0; j < block_width; ++j) { - int index = (r * block_height + i) * cols + c * block_width + j; - if (index < masked_matrix.mask().size()) { - all_same &= (masked_matrix.mask()[index] == val); - } - } - } - } - } - - // If this block configuration is possible, accept it. - if (all_same) { - block_height_ = block_height; - block_width_ = block_width; - return; - } - } - - // No large blocks were found, default to 1x1. - block_height_ = 1; - block_width_ = 1; - } - - // CSR descriptors are for the reduced matrix, weights is the full matrix. - template - void MakeColumnsMultiple(const std::vector& row_offsets, - std::vector* reduced_mask, - std::vector* weights) { - if (col_multiple_ > 0) { - // Make sure each row has a number of columns that is a multiple of - // |col_multiple|. 
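- // (Padding note, not in the original header: this only matters for the 1x1
- // block case, where col_multiple_ is 8 because eight columns of weights are
- // stored contiguously per row; 4x4 blocks use a col_multiple_ of 1 and need
- // no padding.)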
- for (int r = 1; r < row_offsets.size(); ++r) { - int num_row = row_offsets[r] - row_offsets[r - 1]; - int num_needed = col_multiple_ - num_row % col_multiple_; - if (num_needed < col_multiple_) { - // Find gaps in the columns where we can insert a column of 0 weights. - int num_added = 0; - for (int c = 0; c < reduced_cols_; ++c) { - if ((*reduced_mask)[(r - 1) * reduced_cols_ + c] == 0) { - (*reduced_mask)[(r - 1) * reduced_cols_ + c] = 1; - - // Zero out the weights that correspond to this block. - for (int i = 0; i < block_height_; ++i) { - for (int j = 0; j < block_width_; ++j) { - (*weights)[((r - 1) * block_height_ + i) * cols_ + - block_width_ * c + j] = InputType(0.f); - } - } - num_added++; - } - - if (num_added == num_needed) break; - } - } - } - } - } - - // Given the final dense mask and weights, convert to the compressed - // block CSR representation. - template - void MaskAndWeightsToCsr(const std::vector& mask, - const std::vector& weights, - std::vector* nnz_per_row, - std::vector* col_indices, - std::vector* weights_csr) { - std::vector row_offsets = {0}; - int nnz = 0; - // Standard CSR format. - if (block_width_ == 1 && block_height_ == 1) { - for (int r = 0; r < rows_; ++r) { - for (int c = 0; c < cols_; ++c) { - if (mask[r * cols_ + c] == 1) { - nnz++; - col_indices->push_back(c); - weights_csr->push_back(WeightType(weights[r * cols_ + c])); - } - } - row_offsets.push_back(nnz); - } - } else if (block_width_ == 4 && block_height_ == 4) { - // Weights are stored contiguously for each block in this case. - for (int r = 0; r < reduced_rows_; ++r) { - for (int c = 0; c < reduced_cols_; ++c) { - if (mask[r * reduced_cols_ + c] == 1) { - col_indices->push_back(c); - nnz++; - for (int i = 0; i < block_height_; ++i) { - for (int j = 0; j < block_width_; ++j) { - int row_index = (block_height_ * r + i) * cols_; - int w_index = row_index + block_width_ * c + j; - WeightType weight = w_index < weights.size() - ? WeightType(weights[w_index]) - : WeightType(0.0f); - weights_csr->push_back(weight); - } - } - } - } - row_offsets.push_back(nnz); - } - } - for (int i = 1; i < row_offsets.size(); ++i) - nnz_per_row->push_back(row_offsets[i] - row_offsets[i - 1]); - } - - // Returns the number of block rows per cache line. This is the minimum unit - // into which the calculation is broken for threads. - template - int ReducedRowsPerCacheLine(int override_cache_line_size = -1) const { - int line_size = kCacheLineSize; - if (override_cache_line_size >= 1) line_size = override_cache_line_size; - return std::max(line_size / (block_height_ * sizeof(OutType)), 1); - } - - int col_multiple_; - int rows_; - int cols_; - int reduced_rows_; - int reduced_cols_; - float sparsity_; - int block_width_; - int block_height_; - int num_threads_; - std::string name_; - - CacheAlignedVector weights_; - CacheAlignedVector col_deltas_; - CacheAlignedVector nnz_per_row_; - // |thread_bounds_| and |rhs_indices_| don't need to be serialized as they are - // always recalculated from serialized data. - CacheAlignedVector rhs_indices_; - Matmul matmul_; - ThreadBounds thread_bounds_; - static constexpr int kCacheLineSize = 64; -}; - -// Converts a sparse matrix represented with (|mask|, |weights|, |size|) into -// the CSR format, and returns that as a serialized string. 
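- // Minimal usage sketch (illustrative only; the element types of |mask| and
- // |values| and the explicit template arguments are assumptions):
- //   std::vector<int> mask(16, 1);          // 4x4, fully dense
- //   std::vector<float> values(16, 0.5f);
- //   std::string buffer =
- //       ConvertDenseToSparseRepresentation_Int16Deltas<bfloat16, float>(
- //           mask, values, /*rows=*/4, /*cols=*/4);
- // The returned string holds the WriteToFlatBuffer() output and can be handed
- // back to the (buffer, len) constructor above.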
-template -std::string ConvertDenseToSparseRepresentation_Int16Deltas( - const std::vector& mask, const std::vector& weights, - const int rows, const int cols) { - MaskedSparseMatrix masked_weights(rows, cols, mask.data(), - weights.data()); - CsrBlockSparseMatrix - sparse_masked_weights(masked_weights); - std::string buffer; - sparse_masked_weights.WriteToFlatBuffer(&buffer); - return buffer; -} - -} // namespace csrblocksparse -#endif // LYRA_CODEC_SPARSE_MATMUL_LAYERS_CSR_BLOCKSPARSE_MATRIX_H_ diff --git a/spaces/nyvrx/VoiceChat/app.py b/spaces/nyvrx/VoiceChat/app.py deleted file mode 100644 index 050bef935b8db66c392f843e43c04ab777b0a631..0000000000000000000000000000000000000000 --- a/spaces/nyvrx/VoiceChat/app.py +++ /dev/null @@ -1,39 +0,0 @@ -import whisper -import gradio as gr -import os -from pydub import AudioSegment -from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration -from TTS.api import TTS -import torch - -mname = "facebook/blenderbot-400M-distill" -chat_model = BlenderbotForConditionalGeneration.from_pretrained(mname) -tokenizer = BlenderbotTokenizer.from_pretrained(mname) - -device = 'cuda' if torch.cuda.is_available() else 'cpu' - -chat_model = chat_model.to(device) -SR_model = whisper.load_model("base") -SR_model = SR_model.to(device) - -tts_model_name = TTS.list_models()[11] -tts = TTS(tts_model_name) - -def querybot(audio_file): - AudioSegment.from_wav(audio_file).export("file.mp3", format="mp3") - UTTERANCE = SR_model.transcribe("file.mp3")["text"] - - inputs = tokenizer([UTTERANCE], return_tensors="pt").to(device) - reply_ids = chat_model.generate(**inputs) - response = tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0] - tts.tts_to_file(text=response, file_path='tts.mp3') - - output = response - - return output, 'tts.mp3' - - -gr.Interface( - fn=querybot, - inputs=gr.Audio(source="microphone", type="filepath"), - outputs=['text',gr.Audio(type='filepath')]).launch() \ No newline at end of file diff --git a/spaces/omdena/omdena-chatbot/Dockerfile b/spaces/omdena/omdena-chatbot/Dockerfile deleted file mode 100644 index 71a9b4ac60f337e3a2ab4503592f8898af098cad..0000000000000000000000000000000000000000 --- a/spaces/omdena/omdena-chatbot/Dockerfile +++ /dev/null @@ -1,52 +0,0 @@ -FROM python:3.11 - -# Set Poetry Environment Variables -ENV POETRY_VERSION=1.5.1 \ - PYTHONUNBUFFERED=1 \ - PYTHONDONTWRITEBYTECODE=1 - -# Install Curl & Poetry -RUN apt-get update \ - && apt-get install curl -y \ - && curl -sSL https://install.python-poetry.org | python - --version $POETRY_VERSION \ - && apt-get clean - -# Set Environment Variables -ENV PATH=/root/.local/bin:$PATH - -# Set the working directory to /code -WORKDIR /code - -# Copy poetry.lock and pyproject.toml -COPY poetry.lock pyproject.toml /code/ - -# Install dependencies -RUN poetry config virtualenvs.create false \ - && poetry install --no-interaction --no-ansi --without dev,test - -# Set up a new user named "user" with user ID 1000 -RUN useradd -m -u 1000 alaye - -# Switch to the "alaye" user -USER alaye - -# Set Environment Variables -ENV HOME=/home/alaye \ - PATH=/home/alaye/.local/bin:$PATH \ - GRADIO_ALLOW_FLAGGING=never \ - GRADIO_NUM_PORTS=1 \ - GRADIO_SERVER_NAME=0.0.0.0 \ - GRADIO_THEME=huggingface \ - SYSTEM=spaces - -# Set the working directory to the user's home directory -WORKDIR $HOME/omdenabot - -# Copy Project -COPY --chown=alaye . 
$HOME/omdenabot - -# Expose Port -EXPOSE 7860 - -# Run entrypoint -CMD [ "python", "src/web.py"] diff --git a/spaces/ondrejbiza/isa/invariant_slot_attention/configs/objects_room/baseline.py b/spaces/ondrejbiza/isa/invariant_slot_attention/configs/objects_room/baseline.py deleted file mode 100644 index 620fd517e7c698e237faf489267bfc2de7e9d390..0000000000000000000000000000000000000000 --- a/spaces/ondrejbiza/isa/invariant_slot_attention/configs/objects_room/baseline.py +++ /dev/null @@ -1,192 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The Google Research Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -r"""Config for unsupervised training on objects_room.""" - -import ml_collections - - -def get_config(): - """Get the default hyperparameter configuration.""" - config = ml_collections.ConfigDict() - - config.seed = 42 - config.seed_data = True - - config.batch_size = 64 - config.num_train_steps = 500000 # from the original Slot Attention - config.init_checkpoint = ml_collections.ConfigDict() - config.init_checkpoint.xid = 0 # Disabled by default. - config.init_checkpoint.wid = 1 - - config.optimizer_configs = ml_collections.ConfigDict() - config.optimizer_configs.optimizer = "adam" - - config.optimizer_configs.grad_clip = ml_collections.ConfigDict() - config.optimizer_configs.grad_clip.clip_method = "clip_by_global_norm" - config.optimizer_configs.grad_clip.clip_value = 0.05 - - config.lr_configs = ml_collections.ConfigDict() - config.lr_configs.learning_rate_schedule = "compound" - config.lr_configs.factors = "constant * cosine_decay * linear_warmup" - config.lr_configs.warmup_steps = 10000 # from the original Slot Attention - config.lr_configs.steps_per_cycle = config.get_ref("num_train_steps") - # from the original Slot Attention - config.lr_configs.base_learning_rate = 4e-4 - - # TODO(obvis): Implement masked evaluation. - config.eval_pad_last_batch = False # True - config.log_loss_every_steps = 50 - config.eval_every_steps = 5000 - config.checkpoint_every_steps = 5000 - - config.train_metrics_spec = { - "loss": "loss", - "ari": "ari", - "ari_nobg": "ari_nobg", - } - config.eval_metrics_spec = { - "eval_loss": "loss", - "eval_ari": "ari", - "eval_ari_nobg": "ari_nobg", - } - - config.data = ml_collections.ConfigDict({ - "dataset_name": "objects_room", - "shuffle_buffer_size": config.batch_size * 8, - "resolution": (64, 64) - }) - - config.max_instances = 11 - config.num_slots = config.max_instances # Only used for metrics. - config.logging_min_n_colors = config.max_instances - - config.preproc_train = [ - "tfds_image_to_tfds_video", - "video_from_tfds", - "sparse_to_dense_annotation(max_instances=10)", - ] - - config.preproc_eval = [ - "tfds_image_to_tfds_video", - "video_from_tfds", - "sparse_to_dense_annotation(max_instances=10)", - ] - - config.eval_slice_size = 1 - config.eval_slice_keys = ["video", "segmentations_video"] - - # Dictionary of targets and corresponding channels. Losses need to match. 
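- # For example, {"video": 3} means the only reconstruction target is the
- # 3-channel RGB video, which pairs with the "recon_video" loss entry built
- # just below.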
- targets = {"video": 3} - config.losses = {"recon": {"targets": list(targets)}} - config.losses = ml_collections.ConfigDict({ - f"recon_{target}": {"loss_type": "recon", "key": target} - for target in targets}) - - config.model = ml_collections.ConfigDict({ - "module": "invariant_slot_attention.modules.SAVi", - - # Encoder. - "encoder": ml_collections.ConfigDict({ - "module": "invariant_slot_attention.modules.FrameEncoder", - "reduction": "spatial_flatten", - "backbone": ml_collections.ConfigDict({ - "module": "invariant_slot_attention.modules.SimpleCNN", - "features": [64, 64, 64, 64], - "kernel_size": [(5, 5), (5, 5), (5, 5), (5, 5)], - "strides": [(2, 2), (2, 2), (1, 1), (1, 1)] - }), - "pos_emb": ml_collections.ConfigDict({ - "module": "invariant_slot_attention.modules.PositionEmbedding", - "embedding_type": "linear", - "update_type": "project_add", - "output_transform": ml_collections.ConfigDict({ - "module": "invariant_slot_attention.modules.MLP", - "hidden_size": 128, - "layernorm": "pre" - }), - }), - }), - - # Corrector. - "corrector": ml_collections.ConfigDict({ - "module": "invariant_slot_attention.modules.SlotAttention", - "num_iterations": 3, - "qkv_size": 64, - "mlp_size": 128, - }), - - # Predictor. - # Removed since we are running a single frame. - "predictor": ml_collections.ConfigDict({ - "module": "invariant_slot_attention.modules.Identity" - }), - - # Initializer. - "initializer": ml_collections.ConfigDict({ - "module": "invariant_slot_attention.modules.ParamStateInit", - "shape": (11, 64), # (num_slots, slot_size) - }), - - # Decoder. - "decoder": ml_collections.ConfigDict({ - "module": - "invariant_slot_attention.modules.SiameseSpatialBroadcastDecoder", - "resolution": (16, 16), # Update if data resolution or strides change - "backbone": ml_collections.ConfigDict({ - "module": "invariant_slot_attention.modules.CNN", - "features": [64, 64, 64, 64, 64], - "kernel_size": [(5, 5), (5, 5), (5, 5), (5, 5), (5, 5)], - "strides": [(2, 2), (2, 2), (1, 1), (1, 1), (1, 1)], - "max_pool_strides": [(1, 1), (1, 1), (1, 1), (1, 1), (1, 1)], - "layer_transpose": [True, True, False, False, False] - }), - "target_readout": ml_collections.ConfigDict({ - "module": "invariant_slot_attention.modules.Readout", - "keys": list(targets), - "readout_modules": [ml_collections.ConfigDict({ # pylint: disable=g-complex-comprehension - "module": "invariant_slot_attention.modules.MLP", - "num_hidden_layers": 0, - "hidden_size": 0, - "output_size": targets[k]}) for k in targets], - }), - "pos_emb": ml_collections.ConfigDict({ - "module": "invariant_slot_attention.modules.PositionEmbedding", - "embedding_type": "linear", - "update_type": "project_add" - }), - }), - "decode_corrected": True, - "decode_predicted": False, - }) - - # Which video-shaped variables to visualize. - config.debug_var_video_paths = { - "recon_masks": "decoder/alphas_softmaxed/__call__/0", # pylint: disable=line-too-long - } - - # Define which attention matrices to log/visualize. - config.debug_var_attn_paths = { - "corrector_attn": "corrector/InvertedDotProductAttention_0/GeneralizedDotProductAttention_0/attn" # pylint: disable=line-too-long - } - - # Widths of attention matrices (for reshaping to image grid). 
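- # 16 matches the encoder output grid: the 64x64 input passes through two
- # stride-2 convolutions, leaving a 16x16 map of slot-attention inputs.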
- config.debug_var_attn_widths = { - "corrector_attn": 16, - } - - return config - - diff --git a/spaces/openai/openai-detector/detector/train.py b/spaces/openai/openai-detector/detector/train.py deleted file mode 100644 index 748ef4d5e99038c2760d839deb291980cef39389..0000000000000000000000000000000000000000 --- a/spaces/openai/openai-detector/detector/train.py +++ /dev/null @@ -1,305 +0,0 @@ -"""Training code for the detector model""" - -import argparse -import os -import subprocess -import sys -from itertools import count -from multiprocessing import Process - -import torch -import torch.distributed as dist -from torch import nn -from torch.nn.parallel import DistributedDataParallel -from torch.optim import Adam -from torch.utils.data import DataLoader, DistributedSampler, RandomSampler -from tqdm import tqdm -from transformers import * - -from .dataset import Corpus, EncodedDataset -from .download import download -from .utils import summary, distributed - - -def setup_distributed(port=29500): - if not dist.is_available() or not torch.cuda.is_available() or torch.cuda.device_count() <= 1: - return 0, 1 - - if 'MPIR_CVAR_CH3_INTERFACE_HOSTNAME' in os.environ: - from mpi4py import MPI - mpi_rank = MPI.COMM_WORLD.Get_rank() - mpi_size = MPI.COMM_WORLD.Get_size() - - os.environ["MASTER_ADDR"] = '127.0.0.1' - os.environ["MASTER_PORT"] = str(port) - - dist.init_process_group(backend="nccl", world_size=mpi_size, rank=mpi_rank) - return mpi_rank, mpi_size - - dist.init_process_group(backend="nccl", init_method="env://") - return dist.get_rank(), dist.get_world_size() - - -def load_datasets(data_dir, real_dataset, fake_dataset, tokenizer, batch_size, - max_sequence_length, random_sequence_length, epoch_size=None, token_dropout=None, seed=None): - if fake_dataset == 'TWO': - download(real_dataset, 'xl-1542M', 'xl-1542M-nucleus', data_dir=data_dir) - elif fake_dataset == 'THREE': - download(real_dataset, 'xl-1542M', 'xl-1542M-k40', 'xl-1542M-nucleus', data_dir=data_dir) - else: - download(real_dataset, fake_dataset, data_dir=data_dir) - - real_corpus = Corpus(real_dataset, data_dir=data_dir) - - if fake_dataset == "TWO": - real_train, real_valid = real_corpus.train * 2, real_corpus.valid * 2 - fake_corpora = [Corpus(name, data_dir=data_dir) for name in ['xl-1542M', 'xl-1542M-nucleus']] - fake_train = sum([corpus.train for corpus in fake_corpora], []) - fake_valid = sum([corpus.valid for corpus in fake_corpora], []) - elif fake_dataset == "THREE": - real_train, real_valid = real_corpus.train * 3, real_corpus.valid * 3 - fake_corpora = [Corpus(name, data_dir=data_dir) for name in - ['xl-1542M', 'xl-1542M-k40', 'xl-1542M-nucleus']] - fake_train = sum([corpus.train for corpus in fake_corpora], []) - fake_valid = sum([corpus.valid for corpus in fake_corpora], []) - else: - fake_corpus = Corpus(fake_dataset, data_dir=data_dir) - - real_train, real_valid = real_corpus.train, real_corpus.valid - fake_train, fake_valid = fake_corpus.train, fake_corpus.valid - - Sampler = DistributedSampler if distributed() and dist.get_world_size() > 1 else RandomSampler - - min_sequence_length = 10 if random_sequence_length else None - train_dataset = EncodedDataset(real_train, fake_train, tokenizer, max_sequence_length, min_sequence_length, - epoch_size, token_dropout, seed) - train_loader = DataLoader(train_dataset, batch_size, sampler=Sampler(train_dataset), num_workers=0) - - validation_dataset = EncodedDataset(real_valid, fake_valid, tokenizer) - validation_loader = DataLoader(validation_dataset, batch_size=1, 
sampler=Sampler(validation_dataset)) - - return train_loader, validation_loader - - -def accuracy_sum(logits, labels): - if list(logits.shape) == list(labels.shape) + [2]: - # 2-d outputs - classification = (logits[..., 0] < logits[..., 1]).long().flatten() - else: - classification = (logits > 0).long().flatten() - assert classification.shape == labels.shape - return (classification == labels).float().sum().item() - - -def train(model: nn.Module, optimizer, device: str, loader: DataLoader, desc='Train'): - model.train() - - train_accuracy = 0 - train_epoch_size = 0 - train_loss = 0 - - with tqdm(loader, desc=desc, disable=distributed() and dist.get_rank() > 0) as loop: - for texts, masks, labels in loop: - - texts, masks, labels = texts.to(device), masks.to(device), labels.to(device) - batch_size = texts.shape[0] - - optimizer.zero_grad() - loss, logits = model(texts, attention_mask=masks, labels=labels) - loss.backward() - optimizer.step() - - batch_accuracy = accuracy_sum(logits, labels) - train_accuracy += batch_accuracy - train_epoch_size += batch_size - train_loss += loss.item() * batch_size - - loop.set_postfix(loss=loss.item(), acc=train_accuracy / train_epoch_size) - - return { - "train/accuracy": train_accuracy, - "train/epoch_size": train_epoch_size, - "train/loss": train_loss - } - - -def validate(model: nn.Module, device: str, loader: DataLoader, votes=1, desc='Validation'): - model.eval() - - validation_accuracy = 0 - validation_epoch_size = 0 - validation_loss = 0 - - records = [record for v in range(votes) for record in tqdm(loader, desc=f'Preloading data ... {v}', - disable=dist.is_available() and dist.get_rank() > 0)] - records = [[records[v * len(loader) + i] for v in range(votes)] for i in range(len(loader))] - - with tqdm(records, desc=desc, disable=distributed() and dist.get_rank() > 0) as loop, torch.no_grad(): - for example in loop: - losses = [] - logit_votes = [] - - for texts, masks, labels in example: - texts, masks, labels = texts.to(device), masks.to(device), labels.to(device) - batch_size = texts.shape[0] - - loss, logits = model(texts, attention_mask=masks, labels=labels) - losses.append(loss) - logit_votes.append(logits) - - loss = torch.stack(losses).mean(dim=0) - logits = torch.stack(logit_votes).mean(dim=0) - - batch_accuracy = accuracy_sum(logits, labels) - validation_accuracy += batch_accuracy - validation_epoch_size += batch_size - validation_loss += loss.item() * batch_size - - loop.set_postfix(loss=loss.item(), acc=validation_accuracy / validation_epoch_size) - - return { - "validation/accuracy": validation_accuracy, - "validation/epoch_size": validation_epoch_size, - "validation/loss": validation_loss - } - - -def _all_reduce_dict(d, device): - # wrap in tensor and use reduce to gpu0 tensor - output_d = {} - for (key, value) in sorted(d.items()): - tensor_input = torch.tensor([[value]]).to(device) - torch.distributed.all_reduce(tensor_input) - output_d[key] = tensor_input.item() - return output_d - - -def run(max_epochs=None, - device=None, - batch_size=24, - max_sequence_length=128, - random_sequence_length=False, - epoch_size=None, - seed=None, - data_dir='data', - real_dataset='webtext', - fake_dataset='xl-1542M-nucleus', - token_dropout=None, - large=False, - learning_rate=2e-5, - weight_decay=0, - **kwargs): - args = locals() - rank, world_size = setup_distributed() - - if device is None: - device = f'cuda:{rank}' if torch.cuda.is_available() else 'cpu' - - print('rank:', rank, 'world_size:', world_size, 'device:', device) - - import 
torch.distributed as dist - if distributed() and rank > 0: - dist.barrier() - - model_name = 'roberta-large' if large else 'roberta-base' - tokenization_utils.logger.setLevel('ERROR') - tokenizer = RobertaTokenizer.from_pretrained(model_name) - model = RobertaForSequenceClassification.from_pretrained(model_name).to(device) - - if rank == 0: - summary(model) - if distributed(): - dist.barrier() - - if world_size > 1: - model = DistributedDataParallel(model, [rank], output_device=rank, find_unused_parameters=True) - - train_loader, validation_loader = load_datasets(data_dir, real_dataset, fake_dataset, tokenizer, batch_size, - max_sequence_length, random_sequence_length, epoch_size, - token_dropout, seed) - - optimizer = Adam(model.parameters(), lr=learning_rate, weight_decay=weight_decay) - epoch_loop = count(1) if max_epochs is None else range(1, max_epochs + 1) - - logdir = os.environ.get("OPENAI_LOGDIR", "logs") - os.makedirs(logdir, exist_ok=True) - - from torch.utils.tensorboard import SummaryWriter - writer = SummaryWriter(logdir) if rank == 0 else None - best_validation_accuracy = 0 - - for epoch in epoch_loop: - if world_size > 1: - train_loader.sampler.set_epoch(epoch) - validation_loader.sampler.set_epoch(epoch) - - train_metrics = train(model, optimizer, device, train_loader, f'Epoch {epoch}') - validation_metrics = validate(model, device, validation_loader) - - combined_metrics = _all_reduce_dict({**validation_metrics, **train_metrics}, device) - - combined_metrics["train/accuracy"] /= combined_metrics["train/epoch_size"] - combined_metrics["train/loss"] /= combined_metrics["train/epoch_size"] - combined_metrics["validation/accuracy"] /= combined_metrics["validation/epoch_size"] - combined_metrics["validation/loss"] /= combined_metrics["validation/epoch_size"] - - if rank == 0: - for key, value in combined_metrics.items(): - writer.add_scalar(key, value, global_step=epoch) - - if combined_metrics["validation/accuracy"] > best_validation_accuracy: - best_validation_accuracy = combined_metrics["validation/accuracy"] - - model_to_save = model.module if hasattr(model, 'module') else model - torch.save(dict( - epoch=epoch, - model_state_dict=model_to_save.state_dict(), - optimizer_state_dict=optimizer.state_dict(), - args=args - ), - os.path.join(logdir, "best-model.pt") - ) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - - parser.add_argument('--max-epochs', type=int, default=None) - parser.add_argument('--device', type=str, default=None) - parser.add_argument('--batch-size', type=int, default=24) - parser.add_argument('--max-sequence-length', type=int, default=128) - parser.add_argument('--random-sequence-length', action='store_true') - parser.add_argument('--epoch-size', type=int, default=None) - parser.add_argument('--seed', type=int, default=None) - parser.add_argument('--data-dir', type=str, default='data') - parser.add_argument('--real-dataset', type=str, default='webtext') - parser.add_argument('--fake-dataset', type=str, default='xl-1542M-k40') - parser.add_argument('--token-dropout', type=float, default=None) - - parser.add_argument('--large', action='store_true', help='use the roberta-large model instead of roberta-base') - parser.add_argument('--learning-rate', type=float, default=2e-5) - parser.add_argument('--weight-decay', type=float, default=0) - args = parser.parse_args() - - nproc = int(subprocess.check_output([sys.executable, '-c', "import torch;" - "print(torch.cuda.device_count() if torch.cuda.is_available() else 1)"])) - if nproc > 1: - 
print(f'Launching {nproc} processes ...', file=sys.stderr) - - os.environ["MASTER_ADDR"] = '127.0.0.1' - os.environ["MASTER_PORT"] = str(29500) - os.environ['WORLD_SIZE'] = str(nproc) - os.environ['OMP_NUM_THREAD'] = str(1) - subprocesses = [] - - for i in range(nproc): - os.environ['RANK'] = str(i) - os.environ['LOCAL_RANK'] = str(i) - process = Process(target=run, kwargs=vars(args)) - process.start() - subprocesses.append(process) - - for process in subprocesses: - process.join() - else: - run(**vars(args)) diff --git a/spaces/orpatashnik/local-prompt-mixing/src/prompt_mixing.py b/spaces/orpatashnik/local-prompt-mixing/src/prompt_mixing.py deleted file mode 100644 index 98cad351d33b1a86e0b3a6c98bebbe045a86d499..0000000000000000000000000000000000000000 --- a/spaces/orpatashnik/local-prompt-mixing/src/prompt_mixing.py +++ /dev/null @@ -1,86 +0,0 @@ -import torch -from scipy.signal import medfilt2d - -class PromptMixing: - def __init__(self, args, object_of_interest_index, avg_cross_attn=None): - self.object_of_interest_index = object_of_interest_index - self.objects_to_preserve = [args.prompt.split().index(o) + 1 for o in args.objects_to_preserve] - self.obj_pixels_injection_threshold = args.obj_pixels_injection_threshold - - self.start_other_prompt_range = args.start_prompt_range - self.end_other_prompt_range = args.end_prompt_range - - self.start_cross_attn_replace_range = args.num_diffusion_steps - self.end_cross_attn_replace_range = args.num_diffusion_steps - - self.start_self_attn_replace_range = 0 - self.end_self_attn_replace_range = args.end_preserved_obj_self_attn_masking - self.remove_obj_from_self_mask = args.remove_obj_from_self_mask - self.avg_cross_attn = avg_cross_attn - - self.low_resource = args.low_resource - - def get_context_for_v(self, t, context, other_context): - if other_context is not None and \ - self.start_other_prompt_range <= t < self.end_other_prompt_range: - if self.low_resource: - return other_context - else: - v_context = context.clone() - # first half of context is for the uncoditioned image - v_context[v_context.shape[0]//2:] = other_context - return v_context - else: - return context - - def get_cross_attn(self, diffusion_model_wrapper, t, attn, place_in_unet, batch_size): - if self.start_cross_attn_replace_range <= t < self.end_cross_attn_replace_range: - if self.low_resource: - attn[:,:,self.object_of_interest_index] = 0.2 * torch.from_numpy(medfilt2d(attn[:, :, self.object_of_interest_index].cpu().numpy(), kernel_size=3)).to(attn.device) + \ - 0.8 * attn[:, :, self.object_of_interest_index] - else: - # first half of attn maps is for the uncoditioned image - min_h = attn.shape[0] // 2 - attn[min_h:, :, self.object_of_interest_index] = 0.2 * torch.from_numpy(medfilt2d(attn[min_h:, :, self.object_of_interest_index].cpu().numpy(), kernel_size=3)).to(attn.device) + \ - 0.8 * attn[min_h:, :, self.object_of_interest_index] - return attn - - def get_self_attn(self, diffusion_model_wrapper, t, attn, place_in_unet, batch_size): - if attn.shape[1] <= 32 ** 2 and \ - self.avg_cross_attn is not None and \ - self.start_self_attn_replace_range <= t < self.end_self_attn_replace_range: - - key = f"{place_in_unet}_cross" - attn_index = getattr(diffusion_model_wrapper, f'{key}_index') - cr = self.avg_cross_attn[key][attn_index] - setattr(diffusion_model_wrapper, f'{key}_index', attn_index+1) - - if self.low_resource: - attn = self.mask_self_attn_patches(attn, cr, batch_size) - else: - # first half of attn maps is for the uncoditioned image - attn[attn.shape[0]//2:] = 
self.mask_self_attn_patches(attn[attn.shape[0]//2:], cr, batch_size//2) - - return attn - - def mask_self_attn_patches(self, self_attn, cross_attn, batch_size): - h = self_attn.shape[0] // batch_size - tokens = self.objects_to_preserve - obj_token = self.object_of_interest_index - - normalized_cross_attn = cross_attn - cross_attn.min() - normalized_cross_attn /= normalized_cross_attn.max() - - mask = torch.zeros_like(self_attn[0]) - for tk in tokens: - mask_tk_in = torch.unique((normalized_cross_attn[:,:,tk] > self.obj_pixels_injection_threshold).nonzero(as_tuple=True)[1]) - mask[mask_tk_in, :] = 1 - mask[:, mask_tk_in] = 1 - - if self.remove_obj_from_self_mask: - obj_patches = torch.unique((normalized_cross_attn[:,:,obj_token] > self.obj_pixels_injection_threshold).nonzero(as_tuple=True)[1]) - mask[obj_patches, :] = 0 - mask[:, obj_patches] = 0 - - self_attn[h:] = self_attn[h:] * (1 - mask) + self_attn[:h].repeat(batch_size - 1, 1, 1) * mask - return self_attn \ No newline at end of file diff --git a/spaces/os1187/pii-anonymizer/app.py b/spaces/os1187/pii-anonymizer/app.py deleted file mode 100644 index 20d29fec1557d098add26dd5fc9e39ea6ac9d784..0000000000000000000000000000000000000000 --- a/spaces/os1187/pii-anonymizer/app.py +++ /dev/null @@ -1,212 +0,0 @@ - -"""Streamlit app for Presidio + Privy-trained PII models.""" - -import spacy -from spacy_recognizer import CustomSpacyRecognizer -from presidio_analyzer.nlp_engine import NlpEngineProvider -from presidio_anonymizer import AnonymizerEngine -from presidio_analyzer import AnalyzerEngine, RecognizerRegistry -import pandas as pd -from annotated_text import annotated_text -from json import JSONEncoder -import json -import warnings -import streamlit as st -import os -os.environ["TOKENIZERS_PARALLELISM"] = "false" -warnings.filterwarnings('ignore') -# from flair_recognizer import FlairRecognizer - -# Helper methods -@st.cache(allow_output_mutation=True) -def analyzer_engine(): - """Return AnalyzerEngine.""" - - spacy_recognizer = CustomSpacyRecognizer() - - configuration = { - "nlp_engine_name": "spacy", - "models": [ - {"lang_code": "en", "model_name": "en_spacy_pii_distilbert"}], - } - - # Create NLP engine based on configuration - provider = NlpEngineProvider(nlp_configuration=configuration) - nlp_engine = provider.create_engine() - - registry = RecognizerRegistry() - # add rule-based recognizers - registry.load_predefined_recognizers(nlp_engine=nlp_engine) - registry.add_recognizer(spacy_recognizer) - # remove the nlp engine we passed, to use custom label mappings - registry.remove_recognizer("SpacyRecognizer") - - analyzer = AnalyzerEngine(nlp_engine=nlp_engine, - registry=registry, supported_languages=["en"]) - - # uncomment for flair-based NLP recognizer - # flair_recognizer = FlairRecognizer() - # registry.load_predefined_recognizers() - # registry.add_recognizer(flair_recognizer) - # analyzer = AnalyzerEngine(registry=registry, supported_languages=["en"]) - return analyzer - - -@st.cache(allow_output_mutation=True) -def anonymizer_engine(): - """Return AnonymizerEngine.""" - return AnonymizerEngine() - - -def get_supported_entities(): - """Return supported entities from the Analyzer Engine.""" - return analyzer_engine().get_supported_entities() - - -def analyze(**kwargs): - """Analyze input using Analyzer engine and input arguments (kwargs).""" - if "entities" not in kwargs or "All" in kwargs["entities"]: - kwargs["entities"] = None - return analyzer_engine().analyze(**kwargs) - - -def anonymize(text, analyze_results): - 
"""Anonymize identified input using Presidio Abonymizer.""" - if not text: - return - res = anonymizer_engine().anonymize(text, analyze_results) - return res.text - - -def annotate(text, st_analyze_results, st_entities): - tokens = [] - # sort by start index - results = sorted(st_analyze_results, key=lambda x: x.start) - for i, res in enumerate(results): - if i == 0: - tokens.append(text[:res.start]) - - # append entity text and entity type - tokens.append((text[res.start: res.end], res.entity_type)) - - # if another entity coming i.e. we're not at the last results element, add text up to next entity - if i != len(results) - 1: - tokens.append(text[res.end:results[i+1].start]) - # if no more entities coming, add all remaining text - else: - tokens.append(text[res.end:]) - return tokens - - -st.set_page_config(page_title="Privy + Presidio demo (English)", layout="wide") - -# Side bar -st.sidebar.markdown( - """ -Detect and anonymize PII in text using an [NLP model](https://huggingface.co/beki/en_spacy_pii_distilbert) trained on protocol traces (JSON, SQL, XML etc.) generated by -[Privy](https://github.com/pixie-io/pixie/tree/main/src/datagen/pii/privy) and rule-based classifiers from [Presidio](https://aka.ms/presidio). -""" -) - -st_entities = st.sidebar.multiselect( - label="Which entities to look for?", - options=get_supported_entities(), - default=list(get_supported_entities()), -) - -st_threshold = st.sidebar.slider( - label="Acceptance threshold", min_value=0.0, max_value=1.0, value=0.35 -) - -st_return_decision_process = st.sidebar.checkbox( - "Add analysis explanations in json") - -st.sidebar.info( - "Privy is an open source framework for synthetic data generation in protocol trace formats (json, sql, html etc). Presidio is an open source framework for PII detection and anonymization. 
" - "For more info visit [privy](https://github.com/pixie-io/pixie/tree/main/src/datagen/pii/privy) and [aka.ms/presidio](https://aka.ms/presidio)" -) - - -# Main panel -analyzer_load_state = st.info( - "Starting Presidio analyzer and loading Privy-trained PII model...") -engine = analyzer_engine() -analyzer_load_state.empty() - - -st_text = st.text_area( - label="Type in some text", - value="SELECT shipping FROM users WHERE shipping = '201 Thayer St Providence RI 02912'" - "\n\n" - "{user: Willie Porter, ip: 192.168.2.80, email: willie@gmail.com}", - height=200, -) - -button = st.button("Detect PII") - -if 'first_load' not in st.session_state: - st.session_state['first_load'] = True - -# After -st.subheader("Analyzed") -with st.spinner("Analyzing..."): - if button or st.session_state.first_load: - st_analyze_results = analyze( - text=st_text, - entities=st_entities, - language="en", - score_threshold=st_threshold, - return_decision_process=st_return_decision_process, - ) - annotated_tokens = annotate(st_text, st_analyze_results, st_entities) - # annotated_tokens - annotated_text(*annotated_tokens) -# vertical space -st.text("") - -st.subheader("Anonymized") - -with st.spinner("Anonymizing..."): - if button or st.session_state.first_load: - st_anonymize_results = anonymize(st_text, st_analyze_results) - st_anonymize_results - - -# table result -st.subheader("Detailed Findings") -if st_analyze_results: - res_dicts = [r.to_dict() for r in st_analyze_results] - for d in res_dicts: - d['Value'] = st_text[d['start']:d['end']] - df = pd.DataFrame.from_records(res_dicts) - df = df[["entity_type", "Value", "score", "start", "end"]].rename( - { - "entity_type": "Entity type", - "start": "Start", - "end": "End", - "score": "Confidence", - }, - axis=1, - ) - - st.dataframe(df, width=1000) -else: - st.text("No findings") - -st.session_state['first_load'] = True - -# json result - - -class ToDictListEncoder(JSONEncoder): - """Encode dict to json.""" - - def default(self, o): - """Encode to JSON using to_dict.""" - if o: - return o.to_dict() - return [] - - -if st_return_decision_process: - st.json(json.dumps(st_analyze_results, cls=ToDictListEncoder)) diff --git "a/spaces/oskarvanderwal/MT-bias-demo/results/simple_p\303\251k_en_aggregate.html" "b/spaces/oskarvanderwal/MT-bias-demo/results/simple_p\303\251k_en_aggregate.html" deleted file mode 100644 index bc2d49e0538a6349975a82b774c920a32f728035..0000000000000000000000000000000000000000 --- "a/spaces/oskarvanderwal/MT-bias-demo/results/simple_p\303\251k_en_aggregate.html" +++ /dev/null @@ -1,46 +0,0 @@ -
0th instance:
Source Saliency Heatmap
x: Generated tokens, y: Attributed tokens

           ▁He's     ▁a        ▁baker.   </s>
▁Ő         0.461     0.005     0.025     -0.633
▁pék.      0.887     0.463     0.959     0.67
</s>       0.0       0.0       0.0       0.0

0th instance:
Target Saliency Heatmap
x: Generated tokens, y: Attributed tokens

           ▁He's     ▁a        ▁baker.   </s>
▁He's                0.886     0.268     0.204
▁a                             0.088     0.262
▁baker.                                  -0.199
</s>
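As an aid to reading the aggregate scores above, a small illustrative snippet (not part of the deleted Space) that loads the source-side matrix into a pandas DataFrame and asks which Hungarian source token the gendered pronoun in "He's a baker." was most strongly attributed to; the numbers are copied from the 0th-instance table:

```python
import pandas as pd

# Rows = attributed (Hungarian source) tokens, columns = generated (English) tokens.
src_saliency = pd.DataFrame(
    [[0.461, 0.005, 0.025, -0.633],
     [0.887, 0.463, 0.959, 0.670],
     [0.000, 0.000, 0.000, 0.000]],
    index=["▁Ő", "▁pék.", "</s>"],
    columns=["▁He's", "▁a", "▁baker.", "</s>"],
)

# The gender-neutral Hungarian pronoun ▁Ő scores 0.461, while the occupation
# word ▁pék. ("baker") scores 0.887 -- the pronoun choice leans on the occupation.
print(src_saliency["▁He's"].idxmax())  # ▁pék.
```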
      - diff --git a/spaces/p-baleine/metaanalyser/metaanalyser/paper/paper.py b/spaces/p-baleine/metaanalyser/metaanalyser/paper/paper.py deleted file mode 100644 index 8b501b4430e8c7f28f78326bec565cabe86913d6..0000000000000000000000000000000000000000 --- a/spaces/p-baleine/metaanalyser/metaanalyser/paper/paper.py +++ /dev/null @@ -1,291 +0,0 @@ -import arxiv -import datetime -import logging -import re -import tempfile -from collections import Counter -from langchain.base_language import BaseLanguageModel -from langchain.utilities import SerpAPIWrapper -from pdfminer.high_level import extract_text -from pydantic import BaseModel -from tqdm.auto import tqdm -from typing import List, Optional - -from ..memory import memory -from .arxiv_categories import CATEGORY_NAME_ID_MAP - - -logger = logging.getLogger(__name__) - - -class Citation(BaseModel): - - title: str - snippet: str - - -class GoogleScholarItem(BaseModel): - - result_id: str - title: str - link: str - nb_cited: int - citations: List[Citation] - - @property - def mla_citiation(self) -> str: - mla = [c for c in self.citations if c.title == 'MLA'] - - if mla: - return mla[0] - - @classmethod - def from_google_scholar_result(cls, result): - result_id = result["result_id"] - link = result["link"] if "link" in result else "" - nb_cited = ( - result["inline_links"]["cited_by"]["total"] - if "cited_by" in result["inline_links"] else 0 - ) - citations = [ - Citation(title=c["title"], snippet=c["snippet"]) for c in - fetch_google_scholar_cite(result_id)["citations"] - ] - - return cls( - result_id=result_id, - title=result["title"], - link=link, - nb_cited=nb_cited, - citations=citations, - ) - - -class Paper(BaseModel): - """論文を表す、Google Scholar で得られる情報に追加して doi や要約などのフィールドを持つ - - NOTE: serpapi 以外をソースにすることも考えられるが、今は Paper の出自は serpapi の検索結果に限定する - """ - - citation_id: int - google_scholar_item: GoogleScholarItem - entry_id: str - summary: str - published: datetime.datetime - primary_category: str - categories: List[str] - text: str - doi: Optional[str] - - @property - def google_scholar_result_id(self): - return self.google_scholar_item.result_id - - @property - def title(self) -> str: - return self.google_scholar_item.title - - @property - def link(self) -> str: - return self.google_scholar_item.link - - @property - def nb_cited(self) -> int: - return self.google_scholar_item.nb_cited - - @property - def citations(self) -> str: - return self.google_scholar_item.citations - - @property - def mla_citiation(self) -> str: - return self.google_scholar_item.mla_citiation - - @classmethod - def from_google_scholar_result(cls, citation_id, result): - google_scholar_item = GoogleScholarItem.from_google_scholar_result(result) - arxiv_result = fetch_arxiv_result(google_scholar_item.link) - - def get_category(c): - if c not in CATEGORY_NAME_ID_MAP: - logger.warning(f'Category {c} is not found in CATEGORY_NAME_ID_MAP.') - return None - return CATEGORY_NAME_ID_MAP[c] - - primary_category = get_category(arxiv_result.primary_category) - categories = [ - c for c in [get_category(c) for c in arxiv_result.categories] - if c - ] - - return cls( - citation_id=citation_id, - google_scholar_item=google_scholar_item, - entry_id=arxiv_result.entry_id, - summary=arxiv_result.summary, - published=arxiv_result.published, - primary_category=primary_category, - categories=categories, - doi=arxiv_result.doi, - text=get_text_from_arxiv_search_result(arxiv_result), - ) - - def _repr_html_(self): - def get_category_string(): - # 基本的に categories の先頭が primary_category らしい - 
if not self.categories: - return "" - - result = f"{self.categories[0]}" - - if len(self.categories) == 1: - return result - - return f"{result}; " + "; ".join([c for c in self.categories[1:]]) - - return ( - "
      " - f" Title: {self.title}
      " - f" 引用: [{self.citation_id}] {self.mla_citiation.snippet}
      " - f" 被引用数: {self.nb_cited}
      " - f" 発行日: {self.published}
      " - f" カテゴリ: {get_category_string()}
      " - f" 要約: {self.summary}
      " - "
      " - ) - - -def search_on_google_scholar( - query: str, - approved_domains: List[str] = ["arxiv.org"], - n: int = 10, -) -> List[Paper]: - """query で SerpApi の Google Scholar API に問合せた結果を返す。 - approved_domains に指定されたドメインの論文のみを対象とする。 - 最大 n に指定された件数を返却する。 - """ - - def fetch(start=0): - def valid_item(i): - if "link" not in i: - return False - - domain = re.match(r"https?://([^/]+)", i["link"]) - - if not domain or domain.group(1) not in approved_domains: - return False - - return True - - # FIXME: 検索結果に arxiv の文献をなるべく多く含めたいため検索クエリを弄っている - actual_query = " ".join([query, "arxiv"]) if "arxiv" not in query.lower() else query - search_result = fetch_google_scholar(actual_query, start) - - return [i for i in search_result if valid_item(i)] - - result = [] - start = 0 - - while len(result) < n: - # FIXME: 今のままだとそもそも検索結果が全体で n 件以下の場合に無限ループになってしまう - result += fetch(start) - start += 10 - - logger.info("Collecting details...") - - return [ - Paper.from_google_scholar_result(id, i) - for id, i in tqdm(enumerate(result[:n], start=1)) - ] - - -def get_categories_string(papers: List[Paper], n: int = 3) -> str: - categories = Counter(sum([p.categories for p in papers], [])) - common = categories.most_common(n) - - if not common: - return "Artifical Intelligence" - - if len(common) == 1: - return common[0][0] - - if len(common) == 2: - return " and ".join([c[0] for c in common]) - - *lst, last = common - - return ", ".join([c[0] for c in lst]) + f" and {last[0]}" - - -def get_abstract_with_token_limit( - model: BaseLanguageModel, - papers: List[Paper], - limit: int, - separator: str = "\n", -) -> str: - def get_summary(paper: Paper): - summary = paper.summary.replace("\n", " ") - return f""" -Title: {paper.title} -citation_id: {paper.citation_id} -Summry: {summary} -""" - - summaries = [] - total_num_tokens = 0 - idx = 0 - - while idx < len(papers): - summary = get_summary(papers[idx]) - num_tokens = model.get_num_tokens(summary) - - if total_num_tokens + num_tokens > limit: - break - - summaries.append(summary) - total_num_tokens += num_tokens - idx += 1 - - result = separator.join(summaries).strip() - - logger.info( - f'Number of papers: {len(summaries)}, ' - f'number of tokens: {total_num_tokens}, text: {result[:100]}...' 
- ) - - return result - - -@memory.cache -def fetch_google_scholar(query: str, start: int) -> dict: - logger.info(f"Looking for `{query}` on Google Scholar, offset: {start}...") - serpapi = SerpAPIWrapper(params={ - "engine": "google_scholar", - "gl": "us", - "hl": "en", - "start": start, - }) - return serpapi.results(query)["organic_results"] - - -@memory.cache -def fetch_google_scholar_cite(google_scholar_id: str) -> dict: - serpapi = SerpAPIWrapper(params={"engine": "google_scholar_cite"}) - return serpapi.results(google_scholar_id) - - -@memory.cache -def fetch_arxiv_result(arxiv_abs_link: str) -> arxiv.Result: - m = re.match(r"https://arxiv\.org/abs/(.+)", arxiv_abs_link) - assert m is not None, f"{arxiv_abs_link} should be a arxiv link" - arxiv_id = m.group(1) - return next(arxiv.Search(id_list=[arxiv_id]).results()) - - -@memory.cache -def get_text_from_arxiv_search_result( - arxiv_search_result: arxiv.Result -) -> str: - with tempfile.TemporaryDirectory() as d: - file_path = arxiv_search_result.download_pdf(dirpath=d) - return extract_text(file_path) diff --git a/spaces/paulbauriegel/voice-coe-data/app.py b/spaces/paulbauriegel/voice-coe-data/app.py deleted file mode 100644 index b18ca54364cc97c1357116e2936834a58eb46120..0000000000000000000000000000000000000000 --- a/spaces/paulbauriegel/voice-coe-data/app.py +++ /dev/null @@ -1,172 +0,0 @@ -import gradio as gr - - -import os - -HF_TOKEN = os.getenv('HF_TOKEN') -callback = gr.HuggingFaceDatasetSaver(HF_TOKEN, "paulbauriegel/voice-coe-demo") - -sentences = \ - {'en':[ - "In winter, the dry leaves fly around in the air.", - "The good old man broke through the ice with his horse and fell into the cold water.", - "He always eats the eggs without salt and pepper.", - "Good to know. Now I can fix the appointment.", - "We find this approach is particularly effective at learning speech to text translation.", - "Ever wonder what your Representative has been up to?", - "The Wenker sheets are the data basis for Georg Wenker's language atlases", - "At least she gets 7000 dollars in damages" - ], - 'de':[ - "Im Winter fliegen die trocknen Blätter durch die Luft herum.", - "Der gute alte Mann ist mit dem Pferde durch´s Eis gebrochen und in das kalte Wasser gefallen.", - "Er isst die Eier immer ohne Salz und Pfeffer.", - "Gut zu wissen. Jetzt kann ich den Termin vereinbaren.", - "Wir haben festgestellt, dass dieser Ansatz besonders effektiv beim Erlernen der Sprache-zu-Text Übersetzung ist.", - "Haben Sie sich jemals gefragt, was Ihr Abgeordneter so treibt?", - "Die Wenkerbogen stellen die Datengrundlage für Georg Wenkers Sprachatlanten dar", - "Zumindest bekommt sie 7000 Dollar Schmerzensgeld", - ], - 'ru': [ - "Зимой сухие листья кружатся в воздухе.", - "Старик провалился под лед на своем коне и упал в холодную воду.", - "Он всегда ест яйца без соли и перца.", - "Это важная информация.Теперь я могу назначить встречу.", - "Мы считаем этот подход особенно эффективным при обучении переводу речи в текст.", - "Вы когда-нибудь задумывались, чем занимается ваш представитель?", - "Листы Венкера являются основой данных для языковых атласов Георга Венкера.", - "По крайней мере, она получает 7000 долларов в качестве возмещения ущерба." 
- ], - 'sk':[ - "V zime lietajú suché listy vzduchom.", - "Starček prerazil ľad so svojím koňom a spadol do studenej vody.", - "Vajcia vždy konzumuje bez soli a korenia.", - "Je dobré vedieť, že si teraz môžem dohodnúť stretnutie.", - "Zistili sme, že tento prístup je obzvlášť efektívny pri učení sa prekladu reči do textu.", - "Premýšľali ste niekedy, čo chystá váš poslanec?", - "Wenkerove hárky predstavujú základ dát jazykových atlasov Georga Wenkera", - "Aspoň dostane ako bolestné najmenej 7000 dolárov", - ], - 'ar': [ - "في الشتاء ، تتطاير الأوراق الجافة في الهواء.", - "انكسر الجليد بالعجوز الطيب و حصانه وسقطا في الماء البارد.", - "هو يأكل البيض دائما بدون ملح وفلفل.", - "من الجيد أن أعرف. الآن يمكنني تحديد الموعد.", - "نحن نرى أن هذا النهج فعال بشكل خاص في تعلم ترجمة الكلام إلى نص.", - "هل تساءلت يومًا ما الذي كان مندوبك ينوي القيام به؟", - "أوراق وينكر هي البيانات الأساسية لأطالس جورج وينكر اللغوية.", - "على الأقل تحصلت على تعويض قدره 7000 دولار.", - ], - "pl": [ - "Zimą, suche liście fruwają w powietrzu.", - "Lód załamał się i poczciwy staruszek oraz jego koń wpadli do zimnej wody.", - "On zawsze je jajka bez soli i pieprzu.", - "Dobrze wiedzieć. Teraz mogę poprawić zaproszenie.", - "Uważamy, że to podejście jest szczególnie efektywne przy nauce tłumaczenia mowy na tekst.", - "Czy kiedykolwiek zastanawiałeś się o co chodzi twojemu przedstawicielowi?", - "Ankiety Wenkera są podstawą atlasu językowego Georga Wenkera.", - "Ona przynajmniej dostanie 7000 dolarów odszkodowania.", - ], - "hi": [ - "सर्दियों में सूखे पत्ते हवा में इधर-उधर उड़ती हैं।", - "बूढ़े भले आदमी अपने घोड़े के साथ बर्फ तोड़कर ठंडे पानी में गिर गए ।", - "वह हमेशा बिना नमक और काली मिर्च के अंडे खाते है।", - "जानकर अच्छा लगा। अब मैं बैठक फिक्स कर सकता हूं।", - "हम पाते हैं कि यह दृष्टिकोण लिखित भाषा से पठित भाषा सीखने में विशेष रूप से प्रभावी है।", - "क्या आपने कभी सोचा है कि आपका प्रतिनिधि क्या कर रहा है?", - "वेन्कर शीट जॉर्ज वेन्कर की भाषा एटलस के संसूचना का आधार हैं", - "उसे हर्जाने में कम से कम 7000 डॉलर मिलते हैं", - ], - 'el': [ - "Το χειμώνα, τα ξερά φύλλα πετούν στον αέρα.", - "Ο καλός γέρος έσπασε το πάγο και έπεσε μέσα στο κρύο νερό με το άλογό του.", - "Πάντα τρώει τα αυγά χωρίς αλάτι και πιπέρι.", - "Καλώς. Τώρα μπορώ να κλείσω το ραντεβού.", - "Βρίσκουμε αυτή τη πρακτική πιο αποτελεσματική για εκμάθηση μετάφρασης ομιλίας σε κείμενο.", - "Έχεις ποτέ αναρωτηθεί, τι κάνει ο εκπρόσωπός σου?", - "Τα έγγραφα Wenker είναι βασισμένα στα δεδομένα χαρτογράφισης γλωσσών του George Wenker.", - "Τουλάχιστον πήρε 7000 δολάρια σε αποζημιώσεις.", - ]} -with gr.Blocks(title='Voice CoE Data Collection') as demo: - _ = gr.HTML('

      CoE Voice Data Collection

      ') - lang = gr.Dropdown( - sorted(sentences.keys()), - value='en', - interactive=True, - label="Choose your language", - ) - client_ip = gr.Label("", label="User-IP", visible=False) - with gr.Row(): - #outputs = gr.components.Textbox(label=) - label_0 = gr.Label(sentences['en'][0], label="") - audio_0 = gr.Audio(source="microphone", type="filepath", label="Record sample") - with gr.Row(): - #outputs = gr.components.Textbox(label=) - label_1 = gr.Label(sentences['en'][1], label="") - audio_1 = gr.Audio(source="microphone", type="filepath", label="Record sample") - with gr.Row(): - #outputs = gr.components.Textbox(label=) - label_2 = gr.Label(sentences['en'][2], label="") - audio_2 = gr.Audio(source="microphone", type="filepath", label="Record sample") - with gr.Row(): - #outputs = gr.components.Textbox(label=) - label_3 = gr.Label(sentences['en'][3], label="") - audio_3 = gr.Audio(source="microphone", type="filepath", label="Record sample") - with gr.Row(): - #outputs = gr.components.Textbox(label=) - label_4 = gr.Label(sentences['en'][4], label="") - audio_4 = gr.Audio(source="microphone", type="filepath", label="Record sample") - with gr.Row(): - #outputs = gr.components.Textbox(label=) - label_5 = gr.Label(sentences['en'][5], label="") - audio_5 = gr.Audio(source="microphone", type="filepath", label="Record sample") - with gr.Row(): - #outputs = gr.components.Textbox(label=) - label_6 = gr.Label(sentences['en'][6], label="") - audio_6 = gr.Audio(source="microphone", type="filepath", label="Record sample") - with gr.Row(): - #outputs = gr.components.Textbox(label=) - label_7 = gr.Label(sentences['en'][7], label="") - audio_7 = gr.Audio(source="microphone", type="filepath", label="Record sample") - with gr.Row(): - acc = gr.Dropdown( - ["yes", "no", "maybe"], - label="Do you have an accent in the spoken language", - ) - with gr.Row(): - agree = gr.Checkbox(value=False, label='I agree that my data is stored and analysed by the iHub CoE Voice Team') - with gr.Row(): - btn = gr.Button("Submit data") - thx = gr.HTML('') # - - - lang.change(lambda x: {label_0: sentences[x][0], - label_1: sentences[x][1], - label_2: sentences[x][2], - label_3: sentences[x][3], - label_4: sentences[x][4], - label_5: sentences[x][5], - label_6: sentences[x][6], - label_7: sentences[x][7], }, - lang, - [label_0, label_1, label_2, label_3, - label_4, label_5, label_6, label_7]) - - # This needs to be called at some point prior to the first call to callback.flag() - callback.setup([client_ip, lang, audio_0, audio_1, audio_2, audio_3, audio_4, audio_5, audio_6, audio_7, acc], "flagged_data_points") - - # We can choose which components to flag -- in this case, we'll flag all of them - def submit_data(client_ip, lang, audio_0, audio_1, audio_2, audio_3, audio_4, audio_5, audio_6, audio_7, acc, agree, request: gr.Request): - if not agree: - return '

      No data has been submitted

      ' - else: - client_ip_d = {'ip': request.client.host} - callback.flag([client_ip_d, lang, audio_0, audio_1, audio_2, audio_3, audio_4, audio_5, audio_6, audio_7, acc]) - return '

Thank you for submitting your data

      ' - - btn.click(submit_data, - [client_ip, lang, audio_0, audio_1, audio_2, audio_3, audio_4, audio_5, audio_6, audio_7, acc, agree], - thx, - preprocess=False) - -demo.launch() diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/operations/__init__.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/operations/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/colorama/tests/initialise_test.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/colorama/tests/initialise_test.py deleted file mode 100644 index 89f9b07511c8fee74686d9cc434bf66345a46d6d..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/colorama/tests/initialise_test.py +++ /dev/null @@ -1,189 +0,0 @@ -# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file. -import sys -from unittest import TestCase, main, skipUnless - -try: - from unittest.mock import patch, Mock -except ImportError: - from mock import patch, Mock - -from ..ansitowin32 import StreamWrapper -from ..initialise import init, just_fix_windows_console, _wipe_internal_state_for_tests -from .utils import osname, replace_by - -orig_stdout = sys.stdout -orig_stderr = sys.stderr - - -class InitTest(TestCase): - - @skipUnless(sys.stdout.isatty(), "sys.stdout is not a tty") - def setUp(self): - # sanity check - self.assertNotWrapped() - - def tearDown(self): - _wipe_internal_state_for_tests() - sys.stdout = orig_stdout - sys.stderr = orig_stderr - - def assertWrapped(self): - self.assertIsNot(sys.stdout, orig_stdout, 'stdout should be wrapped') - self.assertIsNot(sys.stderr, orig_stderr, 'stderr should be wrapped') - self.assertTrue(isinstance(sys.stdout, StreamWrapper), - 'bad stdout wrapper') - self.assertTrue(isinstance(sys.stderr, StreamWrapper), - 'bad stderr wrapper') - - def assertNotWrapped(self): - self.assertIs(sys.stdout, orig_stdout, 'stdout should not be wrapped') - self.assertIs(sys.stderr, orig_stderr, 'stderr should not be wrapped') - - @patch('colorama.initialise.reset_all') - @patch('colorama.ansitowin32.winapi_test', lambda *_: True) - @patch('colorama.ansitowin32.enable_vt_processing', lambda *_: False) - def testInitWrapsOnWindows(self, _): - with osname("nt"): - init() - self.assertWrapped() - - @patch('colorama.initialise.reset_all') - @patch('colorama.ansitowin32.winapi_test', lambda *_: False) - def testInitDoesntWrapOnEmulatedWindows(self, _): - with osname("nt"): - init() - self.assertNotWrapped() - - def testInitDoesntWrapOnNonWindows(self): - with osname("posix"): - init() - self.assertNotWrapped() - - def testInitDoesntWrapIfNone(self): - with replace_by(None): - init() - # We can't use assertNotWrapped here because replace_by(None) - # changes stdout/stderr already. 
- self.assertIsNone(sys.stdout) - self.assertIsNone(sys.stderr) - - def testInitAutoresetOnWrapsOnAllPlatforms(self): - with osname("posix"): - init(autoreset=True) - self.assertWrapped() - - def testInitWrapOffDoesntWrapOnWindows(self): - with osname("nt"): - init(wrap=False) - self.assertNotWrapped() - - def testInitWrapOffIncompatibleWithAutoresetOn(self): - self.assertRaises(ValueError, lambda: init(autoreset=True, wrap=False)) - - @patch('colorama.win32.SetConsoleTextAttribute') - @patch('colorama.initialise.AnsiToWin32') - def testAutoResetPassedOn(self, mockATW32, _): - with osname("nt"): - init(autoreset=True) - self.assertEqual(len(mockATW32.call_args_list), 2) - self.assertEqual(mockATW32.call_args_list[1][1]['autoreset'], True) - self.assertEqual(mockATW32.call_args_list[0][1]['autoreset'], True) - - @patch('colorama.initialise.AnsiToWin32') - def testAutoResetChangeable(self, mockATW32): - with osname("nt"): - init() - - init(autoreset=True) - self.assertEqual(len(mockATW32.call_args_list), 4) - self.assertEqual(mockATW32.call_args_list[2][1]['autoreset'], True) - self.assertEqual(mockATW32.call_args_list[3][1]['autoreset'], True) - - init() - self.assertEqual(len(mockATW32.call_args_list), 6) - self.assertEqual( - mockATW32.call_args_list[4][1]['autoreset'], False) - self.assertEqual( - mockATW32.call_args_list[5][1]['autoreset'], False) - - - @patch('colorama.initialise.atexit.register') - def testAtexitRegisteredOnlyOnce(self, mockRegister): - init() - self.assertTrue(mockRegister.called) - mockRegister.reset_mock() - init() - self.assertFalse(mockRegister.called) - - -class JustFixWindowsConsoleTest(TestCase): - def _reset(self): - _wipe_internal_state_for_tests() - sys.stdout = orig_stdout - sys.stderr = orig_stderr - - def tearDown(self): - self._reset() - - @patch("colorama.ansitowin32.winapi_test", lambda: True) - def testJustFixWindowsConsole(self): - if sys.platform != "win32": - # just_fix_windows_console should be a no-op - just_fix_windows_console() - self.assertIs(sys.stdout, orig_stdout) - self.assertIs(sys.stderr, orig_stderr) - else: - def fake_std(): - # Emulate stdout=not a tty, stderr=tty - # to check that we handle both cases correctly - stdout = Mock() - stdout.closed = False - stdout.isatty.return_value = False - stdout.fileno.return_value = 1 - sys.stdout = stdout - - stderr = Mock() - stderr.closed = False - stderr.isatty.return_value = True - stderr.fileno.return_value = 2 - sys.stderr = stderr - - for native_ansi in [False, True]: - with patch( - 'colorama.ansitowin32.enable_vt_processing', - lambda *_: native_ansi - ): - self._reset() - fake_std() - - # Regular single-call test - prev_stdout = sys.stdout - prev_stderr = sys.stderr - just_fix_windows_console() - self.assertIs(sys.stdout, prev_stdout) - if native_ansi: - self.assertIs(sys.stderr, prev_stderr) - else: - self.assertIsNot(sys.stderr, prev_stderr) - - # second call without resetting is always a no-op - prev_stdout = sys.stdout - prev_stderr = sys.stderr - just_fix_windows_console() - self.assertIs(sys.stdout, prev_stdout) - self.assertIs(sys.stderr, prev_stderr) - - self._reset() - fake_std() - - # If init() runs first, just_fix_windows_console should be a no-op - init() - prev_stdout = sys.stdout - prev_stderr = sys.stderr - just_fix_windows_console() - self.assertIs(prev_stdout, sys.stdout) - self.assertIs(prev_stderr, sys.stderr) - - -if __name__ == '__main__': - main() diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/_ratio.py 
b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/_ratio.py deleted file mode 100644 index e8a3a674e0070159b956c29c5092b0f72abc969d..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/_ratio.py +++ /dev/null @@ -1,160 +0,0 @@ -import sys -from fractions import Fraction -from math import ceil -from typing import cast, List, Optional, Sequence - -if sys.version_info >= (3, 8): - from typing import Protocol -else: - from pip._vendor.typing_extensions import Protocol # pragma: no cover - - -class Edge(Protocol): - """Any object that defines an edge (such as Layout).""" - - size: Optional[int] = None - ratio: int = 1 - minimum_size: int = 1 - - -def ratio_resolve(total: int, edges: Sequence[Edge]) -> List[int]: - """Divide total space to satisfy size, ratio, and minimum_size, constraints. - - The returned list of integers should add up to total in most cases, unless it is - impossible to satisfy all the constraints. For instance, if there are two edges - with a minimum size of 20 each and `total` is 30 then the returned list will be - greater than total. In practice, this would mean that a Layout object would - clip the rows that would overflow the screen height. - - Args: - total (int): Total number of characters. - edges (List[Edge]): Edges within total space. - - Returns: - List[int]: Number of characters for each edge. - """ - # Size of edge or None for yet to be determined - sizes = [(edge.size or None) for edge in edges] - - _Fraction = Fraction - - # While any edges haven't been calculated - while None in sizes: - # Get flexible edges and index to map these back on to sizes list - flexible_edges = [ - (index, edge) - for index, (size, edge) in enumerate(zip(sizes, edges)) - if size is None - ] - # Remaining space in total - remaining = total - sum(size or 0 for size in sizes) - if remaining <= 0: - # No room for flexible edges - return [ - ((edge.minimum_size or 1) if size is None else size) - for size, edge in zip(sizes, edges) - ] - # Calculate number of characters in a ratio portion - portion = _Fraction( - remaining, sum((edge.ratio or 1) for _, edge in flexible_edges) - ) - - # If any edges will be less than their minimum, replace size with the minimum - for index, edge in flexible_edges: - if portion * edge.ratio <= edge.minimum_size: - sizes[index] = edge.minimum_size - # New fixed size will invalidate calculations, so we need to repeat the process - break - else: - # Distribute flexible space and compensate for rounding error - # Since edge sizes can only be integers we need to add the remainder - # to the following line - remainder = _Fraction(0) - for index, edge in flexible_edges: - size, remainder = divmod(portion * edge.ratio + remainder, 1) - sizes[index] = size - break - # Sizes now contains integers only - return cast(List[int], sizes) - - -def ratio_reduce( - total: int, ratios: List[int], maximums: List[int], values: List[int] -) -> List[int]: - """Divide an integer total in to parts based on ratios. - - Args: - total (int): The total to divide. - ratios (List[int]): A list of integer ratios. - maximums (List[int]): List of maximums values for each slot. - values (List[int]): List of values - - Returns: - List[int]: A list of integers guaranteed to sum to total. 
- """ - ratios = [ratio if _max else 0 for ratio, _max in zip(ratios, maximums)] - total_ratio = sum(ratios) - if not total_ratio: - return values[:] - total_remaining = total - result: List[int] = [] - append = result.append - for ratio, maximum, value in zip(ratios, maximums, values): - if ratio and total_ratio > 0: - distributed = min(maximum, round(ratio * total_remaining / total_ratio)) - append(value - distributed) - total_remaining -= distributed - total_ratio -= ratio - else: - append(value) - return result - - -def ratio_distribute( - total: int, ratios: List[int], minimums: Optional[List[int]] = None -) -> List[int]: - """Distribute an integer total in to parts based on ratios. - - Args: - total (int): The total to divide. - ratios (List[int]): A list of integer ratios. - minimums (List[int]): List of minimum values for each slot. - - Returns: - List[int]: A list of integers guaranteed to sum to total. - """ - if minimums: - ratios = [ratio if _min else 0 for ratio, _min in zip(ratios, minimums)] - total_ratio = sum(ratios) - assert total_ratio > 0, "Sum of ratios must be > 0" - - total_remaining = total - distributed_total: List[int] = [] - append = distributed_total.append - if minimums is None: - _minimums = [0] * len(ratios) - else: - _minimums = minimums - for ratio, minimum in zip(ratios, _minimums): - if total_ratio > 0: - distributed = max(minimum, ceil(ratio * total_remaining / total_ratio)) - else: - distributed = total_remaining - append(distributed) - total_ratio -= ratio - total_remaining -= distributed - return distributed_total - - -if __name__ == "__main__": - from dataclasses import dataclass - - @dataclass - class E: - - size: Optional[int] = None - ratio: int = 1 - minimum_size: int = 1 - - resolved = ratio_resolve(110, [E(None, 1, 1), E(None, 1, 1), E(None, 1, 1)]) - print(sum(resolved)) diff --git a/spaces/plzdontcry/dakubettergpt/src/store/prompt-slice.ts b/spaces/plzdontcry/dakubettergpt/src/store/prompt-slice.ts deleted file mode 100644 index 50731561cbeb1f159d17cfcee2e0f3d629727f4c..0000000000000000000000000000000000000000 --- a/spaces/plzdontcry/dakubettergpt/src/store/prompt-slice.ts +++ /dev/null @@ -1,18 +0,0 @@ -import { StoreSlice } from './store'; -import { Prompt } from '@type/prompt'; -import defaultPrompts from '@constants/prompt'; - -export interface PromptSlice { - prompts: Prompt[]; - setPrompts: (commandPrompt: Prompt[]) => void; -} - -export const createPromptSlice: StoreSlice = (set, get) => ({ - prompts: defaultPrompts, - setPrompts: (prompts: Prompt[]) => { - set((prev: PromptSlice) => ({ - ...prev, - prompts: prompts, - })); - }, -}); diff --git a/spaces/power2/JoJoGan-powerhow2/e4e/datasets/images_dataset.py b/spaces/power2/JoJoGan-powerhow2/e4e/datasets/images_dataset.py deleted file mode 100644 index 00c54c7db944569a749af4c6f0c4d99fcc37f9cc..0000000000000000000000000000000000000000 --- a/spaces/power2/JoJoGan-powerhow2/e4e/datasets/images_dataset.py +++ /dev/null @@ -1,33 +0,0 @@ -from torch.utils.data import Dataset -from PIL import Image -from utils import data_utils - - -class ImagesDataset(Dataset): - - def __init__(self, source_root, target_root, opts, target_transform=None, source_transform=None): - self.source_paths = sorted(data_utils.make_dataset(source_root)) - self.target_paths = sorted(data_utils.make_dataset(target_root)) - self.source_transform = source_transform - self.target_transform = target_transform - self.opts = opts - - def __len__(self): - return len(self.source_paths) - - def __getitem__(self, index): - 
from_path = self.source_paths[index] - from_im = Image.open(from_path) - from_im = from_im.convert('RGB') - - to_path = self.target_paths[index] - to_im = Image.open(to_path).convert('RGB') - if self.target_transform: - to_im = self.target_transform(to_im) - - if self.source_transform: - from_im = self.source_transform(from_im) - else: - from_im = to_im - - return from_im, to_im diff --git a/spaces/prerna9811/Chord/portaudio/src/common/pa_ringbuffer.c b/spaces/prerna9811/Chord/portaudio/src/common/pa_ringbuffer.c deleted file mode 100644 index b978d54f195c3a898b5fab79159072b28d6a1a1b..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/src/common/pa_ringbuffer.c +++ /dev/null @@ -1,237 +0,0 @@ -/* - * $Id$ - * Portable Audio I/O Library - * Ring Buffer utility. - * - * Author: Phil Burk, http://www.softsynth.com - * modified for SMP safety on Mac OS X by Bjorn Roche - * modified for SMP safety on Linux by Leland Lucius - * also, allowed for const where possible - * modified for multiple-byte-sized data elements by Sven Fischer - * - * Note that this is safe only for a single-thread reader and a - * single-thread writer. - * - * This program uses the PortAudio Portable Audio Library. - * For more information see: http://www.portaudio.com - * Copyright (c) 1999-2000 Ross Bencina and Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -/** - @file - @ingroup common_src -*/ - -#include -#include -#include -#include "pa_ringbuffer.h" -#include -#include "pa_memorybarrier.h" - -/*************************************************************************** - * Initialize FIFO. - * elementCount must be power of 2, returns -1 if not. - */ -ring_buffer_size_t PaUtil_InitializeRingBuffer( PaUtilRingBuffer *rbuf, ring_buffer_size_t elementSizeBytes, ring_buffer_size_t elementCount, void *dataPtr ) -{ - if( ((elementCount-1) & elementCount) != 0) return -1; /* Not Power of two. 
*/ - rbuf->bufferSize = elementCount; - rbuf->buffer = (char *)dataPtr; - PaUtil_FlushRingBuffer( rbuf ); - rbuf->bigMask = (elementCount*2)-1; - rbuf->smallMask = (elementCount)-1; - rbuf->elementSizeBytes = elementSizeBytes; - return 0; -} - -/*************************************************************************** -** Return number of elements available for reading. */ -ring_buffer_size_t PaUtil_GetRingBufferReadAvailable( const PaUtilRingBuffer *rbuf ) -{ - return ( (rbuf->writeIndex - rbuf->readIndex) & rbuf->bigMask ); -} -/*************************************************************************** -** Return number of elements available for writing. */ -ring_buffer_size_t PaUtil_GetRingBufferWriteAvailable( const PaUtilRingBuffer *rbuf ) -{ - return ( rbuf->bufferSize - PaUtil_GetRingBufferReadAvailable(rbuf)); -} - -/*************************************************************************** -** Clear buffer. Should only be called when buffer is NOT being read or written. */ -void PaUtil_FlushRingBuffer( PaUtilRingBuffer *rbuf ) -{ - rbuf->writeIndex = rbuf->readIndex = 0; -} - -/*************************************************************************** -** Get address of region(s) to which we can write data. -** If the region is contiguous, size2 will be zero. -** If non-contiguous, size2 will be the size of second region. -** Returns room available to be written or elementCount, whichever is smaller. -*/ -ring_buffer_size_t PaUtil_GetRingBufferWriteRegions( PaUtilRingBuffer *rbuf, ring_buffer_size_t elementCount, - void **dataPtr1, ring_buffer_size_t *sizePtr1, - void **dataPtr2, ring_buffer_size_t *sizePtr2 ) -{ - ring_buffer_size_t index; - ring_buffer_size_t available = PaUtil_GetRingBufferWriteAvailable( rbuf ); - if( elementCount > available ) elementCount = available; - /* Check to see if write is not contiguous. */ - index = rbuf->writeIndex & rbuf->smallMask; - if( (index + elementCount) > rbuf->bufferSize ) - { - /* Write data in two blocks that wrap the buffer. */ - ring_buffer_size_t firstHalf = rbuf->bufferSize - index; - *dataPtr1 = &rbuf->buffer[index*rbuf->elementSizeBytes]; - *sizePtr1 = firstHalf; - *dataPtr2 = &rbuf->buffer[0]; - *sizePtr2 = elementCount - firstHalf; - } - else - { - *dataPtr1 = &rbuf->buffer[index*rbuf->elementSizeBytes]; - *sizePtr1 = elementCount; - *dataPtr2 = NULL; - *sizePtr2 = 0; - } - - if( available ) - PaUtil_FullMemoryBarrier(); /* (write-after-read) => full barrier */ - - return elementCount; -} - - -/*************************************************************************** -*/ -ring_buffer_size_t PaUtil_AdvanceRingBufferWriteIndex( PaUtilRingBuffer *rbuf, ring_buffer_size_t elementCount ) -{ - /* ensure that previous writes are seen before we update the write index - (write after write) - */ - PaUtil_WriteMemoryBarrier(); - return rbuf->writeIndex = (rbuf->writeIndex + elementCount) & rbuf->bigMask; -} - -/*************************************************************************** -** Get address of region(s) from which we can read data. -** If the region is contiguous, size2 will be zero. -** If non-contiguous, size2 will be the size of second region. -** Returns room available to be read or elementCount, whichever is smaller. 
-*/ -ring_buffer_size_t PaUtil_GetRingBufferReadRegions( PaUtilRingBuffer *rbuf, ring_buffer_size_t elementCount, - void **dataPtr1, ring_buffer_size_t *sizePtr1, - void **dataPtr2, ring_buffer_size_t *sizePtr2 ) -{ - ring_buffer_size_t index; - ring_buffer_size_t available = PaUtil_GetRingBufferReadAvailable( rbuf ); /* doesn't use memory barrier */ - if( elementCount > available ) elementCount = available; - /* Check to see if read is not contiguous. */ - index = rbuf->readIndex & rbuf->smallMask; - if( (index + elementCount) > rbuf->bufferSize ) - { - /* Write data in two blocks that wrap the buffer. */ - ring_buffer_size_t firstHalf = rbuf->bufferSize - index; - *dataPtr1 = &rbuf->buffer[index*rbuf->elementSizeBytes]; - *sizePtr1 = firstHalf; - *dataPtr2 = &rbuf->buffer[0]; - *sizePtr2 = elementCount - firstHalf; - } - else - { - *dataPtr1 = &rbuf->buffer[index*rbuf->elementSizeBytes]; - *sizePtr1 = elementCount; - *dataPtr2 = NULL; - *sizePtr2 = 0; - } - - if( available ) - PaUtil_ReadMemoryBarrier(); /* (read-after-read) => read barrier */ - - return elementCount; -} -/*************************************************************************** -*/ -ring_buffer_size_t PaUtil_AdvanceRingBufferReadIndex( PaUtilRingBuffer *rbuf, ring_buffer_size_t elementCount ) -{ - /* ensure that previous reads (copies out of the ring buffer) are always completed before updating (writing) the read index. - (write-after-read) => full barrier - */ - PaUtil_FullMemoryBarrier(); - return rbuf->readIndex = (rbuf->readIndex + elementCount) & rbuf->bigMask; -} - -/*************************************************************************** -** Return elements written. */ -ring_buffer_size_t PaUtil_WriteRingBuffer( PaUtilRingBuffer *rbuf, const void *data, ring_buffer_size_t elementCount ) -{ - ring_buffer_size_t size1, size2, numWritten; - void *data1, *data2; - numWritten = PaUtil_GetRingBufferWriteRegions( rbuf, elementCount, &data1, &size1, &data2, &size2 ); - if( size2 > 0 ) - { - - memcpy( data1, data, size1*rbuf->elementSizeBytes ); - data = ((char *)data) + size1*rbuf->elementSizeBytes; - memcpy( data2, data, size2*rbuf->elementSizeBytes ); - } - else - { - memcpy( data1, data, size1*rbuf->elementSizeBytes ); - } - PaUtil_AdvanceRingBufferWriteIndex( rbuf, numWritten ); - return numWritten; -} - -/*************************************************************************** -** Return elements read. 
*/ -ring_buffer_size_t PaUtil_ReadRingBuffer( PaUtilRingBuffer *rbuf, void *data, ring_buffer_size_t elementCount ) -{ - ring_buffer_size_t size1, size2, numRead; - void *data1, *data2; - numRead = PaUtil_GetRingBufferReadRegions( rbuf, elementCount, &data1, &size1, &data2, &size2 ); - if( size2 > 0 ) - { - memcpy( data, data1, size1*rbuf->elementSizeBytes ); - data = ((char *)data) + size1*rbuf->elementSizeBytes; - memcpy( data, data2, size2*rbuf->elementSizeBytes ); - } - else - { - memcpy( data, data1, size1*rbuf->elementSizeBytes ); - } - PaUtil_AdvanceRingBufferReadIndex( rbuf, numRead ); - return numRead; -} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/flagging.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/flagging.py deleted file mode 100644 index 513c8aef76da8cf31bf5b468fef98378c8401a49..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/flagging.py +++ /dev/null @@ -1,496 +0,0 @@ -from __future__ import annotations - -import csv -import datetime -import json -import os -import time -import uuid -from abc import ABC, abstractmethod -from collections import OrderedDict -from pathlib import Path -from typing import TYPE_CHECKING, Any - -import filelock -import huggingface_hub -from gradio_client import utils as client_utils -from gradio_client.documentation import document, set_documentation_group - -import gradio as gr -from gradio import utils - -if TYPE_CHECKING: - from gradio.components import Component - -set_documentation_group("flagging") - - -class FlaggingCallback(ABC): - """ - An abstract class for defining the methods that any FlaggingCallback should have. - """ - - @abstractmethod - def setup(self, components: list[Component], flagging_dir: str): - """ - This method should be overridden and ensure that everything is set up correctly for flag(). - This method gets called once at the beginning of the Interface.launch() method. - Parameters: - components: Set of components that will provide flagged data. - flagging_dir: A string, typically containing the path to the directory where the flagging file should be storied (provided as an argument to Interface.__init__()). - """ - pass - - @abstractmethod - def flag( - self, - flag_data: list[Any], - flag_option: str = "", - username: str | None = None, - ) -> int: - """ - This method should be overridden by the FlaggingCallback subclass and may contain optional additional arguments. - This gets called every time the button is pressed. - Parameters: - interface: The Interface object that is being used to launch the flagging interface. - flag_data: The data to be flagged. - flag_option (optional): In the case that flagging_options are provided, the flag option that is being used. - username (optional): The username of the user that is flagging the data, if logged in. - Returns: - (int) The total number of samples that have been flagged. - """ - pass - - -@document() -class SimpleCSVLogger(FlaggingCallback): - """ - A simplified implementation of the FlaggingCallback abstract class - provided for illustrative purposes. Each flagged sample (both the input and output data) - is logged to a CSV file on the machine running the gradio app. 
- Example: - import gradio as gr - def image_classifier(inp): - return {'cat': 0.3, 'dog': 0.7} - demo = gr.Interface(fn=image_classifier, inputs="image", outputs="label", - flagging_callback=SimpleCSVLogger()) - """ - - def __init__(self): - pass - - def setup(self, components: list[Component], flagging_dir: str | Path): - self.components = components - self.flagging_dir = flagging_dir - os.makedirs(flagging_dir, exist_ok=True) - - def flag( - self, - flag_data: list[Any], - flag_option: str = "", - username: str | None = None, - ) -> int: - flagging_dir = self.flagging_dir - log_filepath = Path(flagging_dir) / "log.csv" - - csv_data = [] - for component, sample in zip(self.components, flag_data): - save_dir = Path( - flagging_dir - ) / client_utils.strip_invalid_filename_characters(component.label or "") - save_dir.mkdir(exist_ok=True) - csv_data.append( - component.flag( - sample, - save_dir, - ) - ) - - with open(log_filepath, "a", newline="") as csvfile: - writer = csv.writer(csvfile) - writer.writerow(utils.sanitize_list_for_csv(csv_data)) - - with open(log_filepath) as csvfile: - line_count = len(list(csv.reader(csvfile))) - 1 - return line_count - - -@document() -class CSVLogger(FlaggingCallback): - """ - The default implementation of the FlaggingCallback abstract class. Each flagged - sample (both the input and output data) is logged to a CSV file with headers on the machine running the gradio app. - Example: - import gradio as gr - def image_classifier(inp): - return {'cat': 0.3, 'dog': 0.7} - demo = gr.Interface(fn=image_classifier, inputs="image", outputs="label", - flagging_callback=CSVLogger()) - Guides: using-flagging - """ - - def __init__(self): - pass - - def setup( - self, - components: list[Component], - flagging_dir: str | Path, - ): - self.components = components - self.flagging_dir = flagging_dir - os.makedirs(flagging_dir, exist_ok=True) - - def flag( - self, - flag_data: list[Any], - flag_option: str = "", - username: str | None = None, - ) -> int: - flagging_dir = self.flagging_dir - log_filepath = Path(flagging_dir) / "log.csv" - is_new = not Path(log_filepath).exists() - headers = [ - getattr(component, "label", None) or f"component {idx}" - for idx, component in enumerate(self.components) - ] + [ - "flag", - "username", - "timestamp", - ] - - csv_data = [] - for idx, (component, sample) in enumerate(zip(self.components, flag_data)): - save_dir = Path( - flagging_dir - ) / client_utils.strip_invalid_filename_characters( - getattr(component, "label", None) or f"component {idx}" - ) - save_dir.mkdir(exist_ok=True) - if utils.is_update(sample): - csv_data.append(str(sample)) - else: - csv_data.append( - component.flag(sample, flag_dir=save_dir) - if sample is not None - else "" - ) - csv_data.append(flag_option) - csv_data.append(username if username is not None else "") - csv_data.append(str(datetime.datetime.now())) - - with open(log_filepath, "a", newline="", encoding="utf-8") as csvfile: - writer = csv.writer(csvfile) - if is_new: - writer.writerow(utils.sanitize_list_for_csv(headers)) - writer.writerow(utils.sanitize_list_for_csv(csv_data)) - - with open(log_filepath, encoding="utf-8") as csvfile: - line_count = len(list(csv.reader(csvfile))) - 1 - return line_count - - -@document() -class HuggingFaceDatasetSaver(FlaggingCallback): - """ - A callback that saves each flagged sample (both the input and output data) to a HuggingFace dataset. 
- - Example: - import gradio as gr - hf_writer = gr.HuggingFaceDatasetSaver(HF_API_TOKEN, "image-classification-mistakes") - def image_classifier(inp): - return {'cat': 0.3, 'dog': 0.7} - demo = gr.Interface(fn=image_classifier, inputs="image", outputs="label", - allow_flagging="manual", flagging_callback=hf_writer) - Guides: using-flagging - """ - - def __init__( - self, - hf_token: str, - dataset_name: str, - private: bool = False, - info_filename: str = "dataset_info.json", - separate_dirs: bool = False, - ): - """ - Parameters: - hf_token: The HuggingFace token to use to create (and write the flagged sample to) the HuggingFace dataset (defaults to the registered one). - dataset_name: The repo_id of the dataset to save the data to, e.g. "image-classifier-1" or "username/image-classifier-1". - private: Whether the dataset should be private (defaults to False). - info_filename: The name of the file to save the dataset info (defaults to "dataset_infos.json"). - separate_dirs: If True, each flagged item will be saved in a separate directory. This makes the flagging more robust to concurrent editing, but may be less convenient to use. - """ - self.hf_token = hf_token - self.dataset_id = dataset_name # TODO: rename parameter (but ensure backward compatibility somehow) - self.dataset_private = private - self.info_filename = info_filename - self.separate_dirs = separate_dirs - - def setup(self, components: list[Component], flagging_dir: str): - """ - Params: - flagging_dir (str): local directory where the dataset is cloned, - updated, and pushed from. - """ - # Setup dataset on the Hub - self.dataset_id = huggingface_hub.create_repo( - repo_id=self.dataset_id, - token=self.hf_token, - private=self.dataset_private, - repo_type="dataset", - exist_ok=True, - ).repo_id - path_glob = "**/*.jsonl" if self.separate_dirs else "data.csv" - huggingface_hub.metadata_update( - repo_id=self.dataset_id, - repo_type="dataset", - metadata={ - "configs": [ - { - "config_name": "default", - "data_files": [{"split": "train", "path": path_glob}], - } - ] - }, - overwrite=True, - token=self.hf_token, - ) - - # Setup flagging dir - self.components = components - self.dataset_dir = ( - Path(flagging_dir).absolute() / self.dataset_id.split("/")[-1] - ) - self.dataset_dir.mkdir(parents=True, exist_ok=True) - self.infos_file = self.dataset_dir / self.info_filename - - # Download remote files to local - remote_files = [self.info_filename] - if not self.separate_dirs: - # No separate dirs => means all data is in the same CSV file => download it to get its current content - remote_files.append("data.csv") - - for filename in remote_files: - try: - huggingface_hub.hf_hub_download( - repo_id=self.dataset_id, - repo_type="dataset", - filename=filename, - local_dir=self.dataset_dir, - token=self.hf_token, - ) - except huggingface_hub.utils.EntryNotFoundError: - pass - - def flag( - self, - flag_data: list[Any], - flag_option: str = "", - username: str | None = None, - ) -> int: - if self.separate_dirs: - # JSONL files to support dataset preview on the Hub - unique_id = str(uuid.uuid4()) - components_dir = self.dataset_dir / unique_id - data_file = components_dir / "metadata.jsonl" - path_in_repo = unique_id # upload in sub folder (safer for concurrency) - else: - # Unique CSV file - components_dir = self.dataset_dir - data_file = components_dir / "data.csv" - path_in_repo = None # upload at root level - - return self._flag_in_dir( - data_file=data_file, - components_dir=components_dir, - path_in_repo=path_in_repo, - 
flag_data=flag_data, - flag_option=flag_option, - username=username or "", - ) - - def _flag_in_dir( - self, - data_file: Path, - components_dir: Path, - path_in_repo: str | None, - flag_data: list[Any], - flag_option: str = "", - username: str = "", - ) -> int: - # Deserialize components (write images/audio to files) - features, row = self._deserialize_components( - components_dir, flag_data, flag_option, username - ) - - # Write generic info to dataset_infos.json + upload - with filelock.FileLock(str(self.infos_file) + ".lock"): - if not self.infos_file.exists(): - self.infos_file.write_text( - json.dumps({"flagged": {"features": features}}) - ) - - huggingface_hub.upload_file( - repo_id=self.dataset_id, - repo_type="dataset", - token=self.hf_token, - path_in_repo=self.infos_file.name, - path_or_fileobj=self.infos_file, - ) - - headers = list(features.keys()) - - if not self.separate_dirs: - with filelock.FileLock(components_dir / ".lock"): - sample_nb = self._save_as_csv(data_file, headers=headers, row=row) - sample_name = str(sample_nb) - huggingface_hub.upload_folder( - repo_id=self.dataset_id, - repo_type="dataset", - commit_message=f"Flagged sample #{sample_name}", - path_in_repo=path_in_repo, - ignore_patterns="*.lock", - folder_path=components_dir, - token=self.hf_token, - ) - else: - sample_name = self._save_as_jsonl(data_file, headers=headers, row=row) - sample_nb = len( - [path for path in self.dataset_dir.iterdir() if path.is_dir()] - ) - huggingface_hub.upload_folder( - repo_id=self.dataset_id, - repo_type="dataset", - commit_message=f"Flagged sample #{sample_name}", - path_in_repo=path_in_repo, - ignore_patterns="*.lock", - folder_path=components_dir, - token=self.hf_token, - ) - - return sample_nb - - @staticmethod - def _save_as_csv(data_file: Path, headers: list[str], row: list[Any]) -> int: - """Save data as CSV and return the sample name (row number).""" - is_new = not data_file.exists() - - with data_file.open("a", newline="", encoding="utf-8") as csvfile: - writer = csv.writer(csvfile) - - # Write CSV headers if new file - if is_new: - writer.writerow(utils.sanitize_list_for_csv(headers)) - - # Write CSV row for flagged sample - writer.writerow(utils.sanitize_list_for_csv(row)) - - with data_file.open(encoding="utf-8") as csvfile: - return sum(1 for _ in csv.reader(csvfile)) - 1 - - @staticmethod - def _save_as_jsonl(data_file: Path, headers: list[str], row: list[Any]) -> str: - """Save data as JSONL and return the sample name (uuid).""" - Path.mkdir(data_file.parent, parents=True, exist_ok=True) - with open(data_file, "w") as f: - json.dump(dict(zip(headers, row)), f) - return data_file.parent.name - - def _deserialize_components( - self, - data_dir: Path, - flag_data: list[Any], - flag_option: str = "", - username: str = "", - ) -> tuple[dict[Any, Any], list[Any]]: - """Deserialize components and return the corresponding row for the flagged sample. - - Images/audio are saved to disk as individual files. 
- """ - # Components that can have a preview on dataset repos - file_preview_types = {gr.Audio: "Audio", gr.Image: "Image"} - - # Generate the row corresponding to the flagged sample - features = OrderedDict() - row = [] - for component, sample in zip(self.components, flag_data): - # Get deserialized object (will save sample to disk if applicable -file, audio, image,...-) - label = component.label or "" - save_dir = data_dir / client_utils.strip_invalid_filename_characters(label) - save_dir.mkdir(exist_ok=True, parents=True) - deserialized = component.flag(sample, save_dir) - - # Add deserialized object to row - features[label] = {"dtype": "string", "_type": "Value"} - try: - assert Path(deserialized).exists() - row.append(str(Path(deserialized).relative_to(self.dataset_dir))) - except (AssertionError, TypeError, ValueError): - deserialized = "" if deserialized is None else str(deserialized) - row.append(deserialized) - - # If component is eligible for a preview, add the URL of the file - # Be mindful that images and audio can be None - if isinstance(component, tuple(file_preview_types)): # type: ignore - for _component, _type in file_preview_types.items(): - if isinstance(component, _component): - features[label + " file"] = {"_type": _type} - break - if deserialized: - path_in_repo = str( # returned filepath is absolute, we want it relative to compute URL - Path(deserialized).relative_to(self.dataset_dir) - ).replace( - "\\", "/" - ) - row.append( - huggingface_hub.hf_hub_url( - repo_id=self.dataset_id, - filename=path_in_repo, - repo_type="dataset", - ) - ) - else: - row.append("") - features["flag"] = {"dtype": "string", "_type": "Value"} - features["username"] = {"dtype": "string", "_type": "Value"} - row.append(flag_option) - row.append(username) - return features, row - - -class FlagMethod: - """ - Helper class that contains the flagging options and calls the flagging method. Also - provides visual feedback to the user when flag is clicked. - """ - - def __init__( - self, - flagging_callback: FlaggingCallback, - label: str, - value: str, - visual_feedback: bool = True, - ): - self.flagging_callback = flagging_callback - self.label = label - self.value = value - self.__name__ = "Flag" - self.visual_feedback = visual_feedback - - def __call__(self, request: gr.Request, *flag_data): - try: - self.flagging_callback.flag( - list(flag_data), flag_option=self.value, username=request.username - ) - except Exception as e: - print(f"Error while flagging: {e}") - if self.visual_feedback: - return "Error!" 
- if not self.visual_feedback: - return - time.sleep(0.8) # to provide enough time for the user to observe button change - return self.reset() - - def reset(self): - return gr.Button(value=self.label, interactive=True) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/node/dev/node_modules/esbuild-wasm/lib/browser.d.ts b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/node/dev/node_modules/esbuild-wasm/lib/browser.d.ts deleted file mode 100644 index 872cb027a213a14aa9c98084456213e8ded1163b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/node/dev/node_modules/esbuild-wasm/lib/browser.d.ts +++ /dev/null @@ -1,660 +0,0 @@ -export type Platform = 'browser' | 'node' | 'neutral' -export type Format = 'iife' | 'cjs' | 'esm' -export type Loader = 'base64' | 'binary' | 'copy' | 'css' | 'dataurl' | 'default' | 'empty' | 'file' | 'js' | 'json' | 'jsx' | 'local-css' | 'text' | 'ts' | 'tsx' -export type LogLevel = 'verbose' | 'debug' | 'info' | 'warning' | 'error' | 'silent' -export type Charset = 'ascii' | 'utf8' -export type Drop = 'console' | 'debugger' - -interface CommonOptions { - /** Documentation: https://esbuild.github.io/api/#sourcemap */ - sourcemap?: boolean | 'linked' | 'inline' | 'external' | 'both' - /** Documentation: https://esbuild.github.io/api/#legal-comments */ - legalComments?: 'none' | 'inline' | 'eof' | 'linked' | 'external' - /** Documentation: https://esbuild.github.io/api/#source-root */ - sourceRoot?: string - /** Documentation: https://esbuild.github.io/api/#sources-content */ - sourcesContent?: boolean - - /** Documentation: https://esbuild.github.io/api/#format */ - format?: Format - /** Documentation: https://esbuild.github.io/api/#global-name */ - globalName?: string - /** Documentation: https://esbuild.github.io/api/#target */ - target?: string | string[] - /** Documentation: https://esbuild.github.io/api/#supported */ - supported?: Record - /** Documentation: https://esbuild.github.io/api/#platform */ - platform?: Platform - - /** Documentation: https://esbuild.github.io/api/#mangle-props */ - mangleProps?: RegExp - /** Documentation: https://esbuild.github.io/api/#mangle-props */ - reserveProps?: RegExp - /** Documentation: https://esbuild.github.io/api/#mangle-props */ - mangleQuoted?: boolean - /** Documentation: https://esbuild.github.io/api/#mangle-props */ - mangleCache?: Record - /** Documentation: https://esbuild.github.io/api/#drop */ - drop?: Drop[] - /** Documentation: https://esbuild.github.io/api/#drop-labels */ - dropLabels?: string[] - /** Documentation: https://esbuild.github.io/api/#minify */ - minify?: boolean - /** Documentation: https://esbuild.github.io/api/#minify */ - minifyWhitespace?: boolean - /** Documentation: https://esbuild.github.io/api/#minify */ - minifyIdentifiers?: boolean - /** Documentation: https://esbuild.github.io/api/#minify */ - minifySyntax?: boolean - /** Documentation: https://esbuild.github.io/api/#line-limit */ - lineLimit?: number - /** Documentation: https://esbuild.github.io/api/#charset */ - charset?: Charset - /** Documentation: https://esbuild.github.io/api/#tree-shaking */ - treeShaking?: boolean - /** Documentation: https://esbuild.github.io/api/#ignore-annotations */ - ignoreAnnotations?: boolean - - /** Documentation: https://esbuild.github.io/api/#jsx */ - jsx?: 'transform' | 'preserve' | 'automatic' - /** Documentation: https://esbuild.github.io/api/#jsx-factory */ - jsxFactory?: string 
- /** Documentation: https://esbuild.github.io/api/#jsx-fragment */ - jsxFragment?: string - /** Documentation: https://esbuild.github.io/api/#jsx-import-source */ - jsxImportSource?: string - /** Documentation: https://esbuild.github.io/api/#jsx-development */ - jsxDev?: boolean - /** Documentation: https://esbuild.github.io/api/#jsx-side-effects */ - jsxSideEffects?: boolean - - /** Documentation: https://esbuild.github.io/api/#define */ - define?: { [key: string]: string } - /** Documentation: https://esbuild.github.io/api/#pure */ - pure?: string[] - /** Documentation: https://esbuild.github.io/api/#keep-names */ - keepNames?: boolean - - /** Documentation: https://esbuild.github.io/api/#color */ - color?: boolean - /** Documentation: https://esbuild.github.io/api/#log-level */ - logLevel?: LogLevel - /** Documentation: https://esbuild.github.io/api/#log-limit */ - logLimit?: number - /** Documentation: https://esbuild.github.io/api/#log-override */ - logOverride?: Record - - /** Documentation: https://esbuild.github.io/api/#tsconfig-raw */ - tsconfigRaw?: string | TsconfigRaw -} - -export interface TsconfigRaw { - compilerOptions?: { - alwaysStrict?: boolean - baseUrl?: boolean - experimentalDecorators?: boolean - importsNotUsedAsValues?: 'remove' | 'preserve' | 'error' - jsx?: 'preserve' | 'react-native' | 'react' | 'react-jsx' | 'react-jsxdev' - jsxFactory?: string - jsxFragmentFactory?: string - jsxImportSource?: string - paths?: Record - preserveValueImports?: boolean - strict?: boolean - target?: string - useDefineForClassFields?: boolean - verbatimModuleSyntax?: boolean - } -} - -export interface BuildOptions extends CommonOptions { - /** Documentation: https://esbuild.github.io/api/#bundle */ - bundle?: boolean - /** Documentation: https://esbuild.github.io/api/#splitting */ - splitting?: boolean - /** Documentation: https://esbuild.github.io/api/#preserve-symlinks */ - preserveSymlinks?: boolean - /** Documentation: https://esbuild.github.io/api/#outfile */ - outfile?: string - /** Documentation: https://esbuild.github.io/api/#metafile */ - metafile?: boolean - /** Documentation: https://esbuild.github.io/api/#outdir */ - outdir?: string - /** Documentation: https://esbuild.github.io/api/#outbase */ - outbase?: string - /** Documentation: https://esbuild.github.io/api/#external */ - external?: string[] - /** Documentation: https://esbuild.github.io/api/#packages */ - packages?: 'external' - /** Documentation: https://esbuild.github.io/api/#alias */ - alias?: Record - /** Documentation: https://esbuild.github.io/api/#loader */ - loader?: { [ext: string]: Loader } - /** Documentation: https://esbuild.github.io/api/#resolve-extensions */ - resolveExtensions?: string[] - /** Documentation: https://esbuild.github.io/api/#main-fields */ - mainFields?: string[] - /** Documentation: https://esbuild.github.io/api/#conditions */ - conditions?: string[] - /** Documentation: https://esbuild.github.io/api/#write */ - write?: boolean - /** Documentation: https://esbuild.github.io/api/#allow-overwrite */ - allowOverwrite?: boolean - /** Documentation: https://esbuild.github.io/api/#tsconfig */ - tsconfig?: string - /** Documentation: https://esbuild.github.io/api/#out-extension */ - outExtension?: { [ext: string]: string } - /** Documentation: https://esbuild.github.io/api/#public-path */ - publicPath?: string - /** Documentation: https://esbuild.github.io/api/#entry-names */ - entryNames?: string - /** Documentation: https://esbuild.github.io/api/#chunk-names */ - chunkNames?: string - /** 
Documentation: https://esbuild.github.io/api/#asset-names */ - assetNames?: string - /** Documentation: https://esbuild.github.io/api/#inject */ - inject?: string[] - /** Documentation: https://esbuild.github.io/api/#banner */ - banner?: { [type: string]: string } - /** Documentation: https://esbuild.github.io/api/#footer */ - footer?: { [type: string]: string } - /** Documentation: https://esbuild.github.io/api/#entry-points */ - entryPoints?: string[] | Record | { in: string, out: string }[] - /** Documentation: https://esbuild.github.io/api/#stdin */ - stdin?: StdinOptions - /** Documentation: https://esbuild.github.io/plugins/ */ - plugins?: Plugin[] - /** Documentation: https://esbuild.github.io/api/#working-directory */ - absWorkingDir?: string - /** Documentation: https://esbuild.github.io/api/#node-paths */ - nodePaths?: string[]; // The "NODE_PATH" variable from Node.js -} - -export interface StdinOptions { - contents: string | Uint8Array - resolveDir?: string - sourcefile?: string - loader?: Loader -} - -export interface Message { - id: string - pluginName: string - text: string - location: Location | null - notes: Note[] - - /** - * Optional user-specified data that is passed through unmodified. You can - * use this to stash the original error, for example. - */ - detail: any -} - -export interface Note { - text: string - location: Location | null -} - -export interface Location { - file: string - namespace: string - /** 1-based */ - line: number - /** 0-based, in bytes */ - column: number - /** in bytes */ - length: number - lineText: string - suggestion: string -} - -export interface OutputFile { - path: string - contents: Uint8Array - hash: string - /** "contents" as text (changes automatically with "contents") */ - readonly text: string -} - -export interface BuildResult { - errors: Message[] - warnings: Message[] - /** Only when "write: false" */ - outputFiles: OutputFile[] | (ProvidedOptions['write'] extends false ? never : undefined) - /** Only when "metafile: true" */ - metafile: Metafile | (ProvidedOptions['metafile'] extends true ? never : undefined) - /** Only when "mangleCache" is present */ - mangleCache: Record | (ProvidedOptions['mangleCache'] extends Object ? never : undefined) -} - -export interface BuildFailure extends Error { - errors: Message[] - warnings: Message[] -} - -/** Documentation: https://esbuild.github.io/api/#serve-arguments */ -export interface ServeOptions { - port?: number - host?: string - servedir?: string - keyfile?: string - certfile?: string - fallback?: string - onRequest?: (args: ServeOnRequestArgs) => void -} - -export interface ServeOnRequestArgs { - remoteAddress: string - method: string - path: string - status: number - /** The time to generate the response, not to send it */ - timeInMS: number -} - -/** Documentation: https://esbuild.github.io/api/#serve-return-values */ -export interface ServeResult { - port: number - host: string -} - -export interface TransformOptions extends CommonOptions { - /** Documentation: https://esbuild.github.io/api/#sourcefile */ - sourcefile?: string - /** Documentation: https://esbuild.github.io/api/#loader */ - loader?: Loader - /** Documentation: https://esbuild.github.io/api/#banner */ - banner?: string - /** Documentation: https://esbuild.github.io/api/#footer */ - footer?: string -} - -export interface TransformResult { - code: string - map: string - warnings: Message[] - /** Only when "mangleCache" is present */ - mangleCache: Record | (ProvidedOptions['mangleCache'] extends Object ? 
never : undefined) - /** Only when "legalComments" is "external" */ - legalComments: string | (ProvidedOptions['legalComments'] extends 'external' ? never : undefined) -} - -export interface TransformFailure extends Error { - errors: Message[] - warnings: Message[] -} - -export interface Plugin { - name: string - setup: (build: PluginBuild) => (void | Promise) -} - -export interface PluginBuild { - /** Documentation: https://esbuild.github.io/plugins/#build-options */ - initialOptions: BuildOptions - - /** Documentation: https://esbuild.github.io/plugins/#resolve */ - resolve(path: string, options?: ResolveOptions): Promise - - /** Documentation: https://esbuild.github.io/plugins/#on-start */ - onStart(callback: () => - (OnStartResult | null | void | Promise)): void - - /** Documentation: https://esbuild.github.io/plugins/#on-end */ - onEnd(callback: (result: BuildResult) => - (OnEndResult | null | void | Promise)): void - - /** Documentation: https://esbuild.github.io/plugins/#on-resolve */ - onResolve(options: OnResolveOptions, callback: (args: OnResolveArgs) => - (OnResolveResult | null | undefined | Promise)): void - - /** Documentation: https://esbuild.github.io/plugins/#on-load */ - onLoad(options: OnLoadOptions, callback: (args: OnLoadArgs) => - (OnLoadResult | null | undefined | Promise)): void - - /** Documentation: https://esbuild.github.io/plugins/#on-dispose */ - onDispose(callback: () => void): void - - // This is a full copy of the esbuild library in case you need it - esbuild: { - context: typeof context, - build: typeof build, - buildSync: typeof buildSync, - transform: typeof transform, - transformSync: typeof transformSync, - formatMessages: typeof formatMessages, - formatMessagesSync: typeof formatMessagesSync, - analyzeMetafile: typeof analyzeMetafile, - analyzeMetafileSync: typeof analyzeMetafileSync, - initialize: typeof initialize, - version: typeof version, - } -} - -/** Documentation: https://esbuild.github.io/plugins/#resolve-options */ -export interface ResolveOptions { - pluginName?: string - importer?: string - namespace?: string - resolveDir?: string - kind?: ImportKind - pluginData?: any -} - -/** Documentation: https://esbuild.github.io/plugins/#resolve-results */ -export interface ResolveResult { - errors: Message[] - warnings: Message[] - - path: string - external: boolean - sideEffects: boolean - namespace: string - suffix: string - pluginData: any -} - -export interface OnStartResult { - errors?: PartialMessage[] - warnings?: PartialMessage[] -} - -export interface OnEndResult { - errors?: PartialMessage[] - warnings?: PartialMessage[] -} - -/** Documentation: https://esbuild.github.io/plugins/#on-resolve-options */ -export interface OnResolveOptions { - filter: RegExp - namespace?: string -} - -/** Documentation: https://esbuild.github.io/plugins/#on-resolve-arguments */ -export interface OnResolveArgs { - path: string - importer: string - namespace: string - resolveDir: string - kind: ImportKind - pluginData: any -} - -export type ImportKind = - | 'entry-point' - - // JS - | 'import-statement' - | 'require-call' - | 'dynamic-import' - | 'require-resolve' - - // CSS - | 'import-rule' - | 'composes-from' - | 'url-token' - -/** Documentation: https://esbuild.github.io/plugins/#on-resolve-results */ -export interface OnResolveResult { - pluginName?: string - - errors?: PartialMessage[] - warnings?: PartialMessage[] - - path?: string - external?: boolean - sideEffects?: boolean - namespace?: string - suffix?: string - pluginData?: any - - watchFiles?: 
string[] - watchDirs?: string[] -} - -/** Documentation: https://esbuild.github.io/plugins/#on-load-options */ -export interface OnLoadOptions { - filter: RegExp - namespace?: string -} - -/** Documentation: https://esbuild.github.io/plugins/#on-load-arguments */ -export interface OnLoadArgs { - path: string - namespace: string - suffix: string - pluginData: any -} - -/** Documentation: https://esbuild.github.io/plugins/#on-load-results */ -export interface OnLoadResult { - pluginName?: string - - errors?: PartialMessage[] - warnings?: PartialMessage[] - - contents?: string | Uint8Array - resolveDir?: string - loader?: Loader - pluginData?: any - - watchFiles?: string[] - watchDirs?: string[] -} - -export interface PartialMessage { - id?: string - pluginName?: string - text?: string - location?: Partial | null - notes?: PartialNote[] - detail?: any -} - -export interface PartialNote { - text?: string - location?: Partial | null -} - -/** Documentation: https://esbuild.github.io/api/#metafile */ -export interface Metafile { - inputs: { - [path: string]: { - bytes: number - imports: { - path: string - kind: ImportKind - external?: boolean - original?: string - }[] - format?: 'cjs' | 'esm' - } - } - outputs: { - [path: string]: { - bytes: number - inputs: { - [path: string]: { - bytesInOutput: number - } - } - imports: { - path: string - kind: ImportKind | 'file-loader' - external?: boolean - }[] - exports: string[] - entryPoint?: string - cssBundle?: string - } - } -} - -export interface FormatMessagesOptions { - kind: 'error' | 'warning' - color?: boolean - terminalWidth?: number -} - -export interface AnalyzeMetafileOptions { - color?: boolean - verbose?: boolean -} - -export interface WatchOptions { -} - -export interface BuildContext { - /** Documentation: https://esbuild.github.io/api/#rebuild */ - rebuild(): Promise> - - /** Documentation: https://esbuild.github.io/api/#watch */ - watch(options?: WatchOptions): Promise - - /** Documentation: https://esbuild.github.io/api/#serve */ - serve(options?: ServeOptions): Promise - - cancel(): Promise - dispose(): Promise -} - -// This is a TypeScript type-level function which replaces any keys in "In" -// that aren't in "Out" with "never". We use this to reject properties with -// typos in object literals. See: https://stackoverflow.com/questions/49580725 -type SameShape = In & { [Key in Exclude]: never } - -/** - * This function invokes the "esbuild" command-line tool for you. It returns a - * promise that either resolves with a "BuildResult" object or rejects with a - * "BuildFailure" object. - * - * - Works in node: yes - * - Works in browser: yes - * - * Documentation: https://esbuild.github.io/api/#build - */ -export declare function build(options: SameShape): Promise> - -/** - * This is the advanced long-running form of "build" that supports additional - * features such as watch mode and a local development server. - * - * - Works in node: yes - * - Works in browser: no - * - * Documentation: https://esbuild.github.io/api/#build - */ -export declare function context(options: SameShape): Promise> - -/** - * This function transforms a single JavaScript file. It can be used to minify - * JavaScript, convert TypeScript/JSX to JavaScript, or convert newer JavaScript - * to older JavaScript. It returns a promise that is either resolved with a - * "TransformResult" object or rejected with a "TransformFailure" object. 
- * - * - Works in node: yes - * - Works in browser: yes - * - * Documentation: https://esbuild.github.io/api/#transform - */ -export declare function transform(input: string | Uint8Array, options?: SameShape): Promise> - -/** - * Converts log messages to formatted message strings suitable for printing in - * the terminal. This allows you to reuse the built-in behavior of esbuild's - * log message formatter. This is a batch-oriented API for efficiency. - * - * - Works in node: yes - * - Works in browser: yes - */ -export declare function formatMessages(messages: PartialMessage[], options: FormatMessagesOptions): Promise - -/** - * Pretty-prints an analysis of the metafile JSON to a string. This is just for - * convenience to be able to match esbuild's pretty-printing exactly. If you want - * to customize it, you can just inspect the data in the metafile yourself. - * - * - Works in node: yes - * - Works in browser: yes - * - * Documentation: https://esbuild.github.io/api/#analyze - */ -export declare function analyzeMetafile(metafile: Metafile | string, options?: AnalyzeMetafileOptions): Promise - -/** - * A synchronous version of "build". - * - * - Works in node: yes - * - Works in browser: no - * - * Documentation: https://esbuild.github.io/api/#build - */ -export declare function buildSync(options: SameShape): BuildResult - -/** - * A synchronous version of "transform". - * - * - Works in node: yes - * - Works in browser: no - * - * Documentation: https://esbuild.github.io/api/#transform - */ -export declare function transformSync(input: string | Uint8Array, options?: SameShape): TransformResult - -/** - * A synchronous version of "formatMessages". - * - * - Works in node: yes - * - Works in browser: no - */ -export declare function formatMessagesSync(messages: PartialMessage[], options: FormatMessagesOptions): string[] - -/** - * A synchronous version of "analyzeMetafile". - * - * - Works in node: yes - * - Works in browser: no - * - * Documentation: https://esbuild.github.io/api/#analyze - */ -export declare function analyzeMetafileSync(metafile: Metafile | string, options?: AnalyzeMetafileOptions): string - -/** - * This configures the browser-based version of esbuild. It is necessary to - * call this first and wait for the returned promise to be resolved before - * making other API calls when using esbuild in the browser. - * - * - Works in node: yes - * - Works in browser: yes ("options" is required) - * - * Documentation: https://esbuild.github.io/api/#browser - */ -export declare function initialize(options: InitializeOptions): Promise - -export interface InitializeOptions { - /** - * The URL of the "esbuild.wasm" file. This must be provided when running - * esbuild in the browser. - */ - wasmURL?: string | URL - - /** - * The result of calling "new WebAssembly.Module(buffer)" where "buffer" - * is a typed array or ArrayBuffer containing the binary code of the - * "esbuild.wasm" file. - * - * You can use this as an alternative to "wasmURL" for environments where it's - * not possible to download the WebAssembly module. - */ - wasmModule?: WebAssembly.Module - - /** - * By default esbuild runs the WebAssembly-based browser API in a web worker - * to avoid blocking the UI thread. This can be disabled by setting "worker" - * to false. 
- */ - worker?: boolean -} - -export let version: string diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/keras_mixin.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/keras_mixin.py deleted file mode 100644 index 32ea4091e0c3f19abc09d81456e9df9d52454da2..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/keras_mixin.py +++ /dev/null @@ -1,481 +0,0 @@ -import collections.abc as collections -import json -import os -import warnings -from pathlib import Path -from shutil import copytree -from typing import Any, Dict, List, Optional, Union - -from huggingface_hub import ModelHubMixin, snapshot_download -from huggingface_hub.utils import ( - get_tf_version, - is_graphviz_available, - is_pydot_available, - is_tf_available, - yaml_dump, -) - -from .constants import CONFIG_NAME -from .hf_api import HfApi -from .utils import SoftTemporaryDirectory, logging, validate_hf_hub_args - - -logger = logging.get_logger(__name__) - -if is_tf_available(): - import tensorflow as tf # type: ignore - - -def _flatten_dict(dictionary, parent_key=""): - """Flatten a nested dictionary. - Reference: https://stackoverflow.com/a/6027615/10319735 - - Args: - dictionary (`dict`): - The nested dictionary to be flattened. - parent_key (`str`): - The parent key to be prefixed to the children keys. - Necessary for recursing over the nested dictionary. - - Returns: - The flattened dictionary. - """ - items = [] - for key, value in dictionary.items(): - new_key = f"{parent_key}.{key}" if parent_key else key - if isinstance(value, collections.MutableMapping): - items.extend( - _flatten_dict( - value, - new_key, - ).items() - ) - else: - items.append((new_key, value)) - return dict(items) - - -def _create_hyperparameter_table(model): - """Parse hyperparameter dictionary into a markdown table.""" - if model.optimizer is not None: - optimizer_params = model.optimizer.get_config() - # flatten the configuration - optimizer_params = _flatten_dict(optimizer_params) - optimizer_params["training_precision"] = tf.keras.mixed_precision.global_policy().name - table = "| Hyperparameters | Value |\n| :-- | :-- |\n" - for key, value in optimizer_params.items(): - table += f"| {key} | {value} |\n" - else: - table = None - return table - - -def _plot_network(model, save_directory): - tf.keras.utils.plot_model( - model, - to_file=f"{save_directory}/model.png", - show_shapes=False, - show_dtype=False, - show_layer_names=True, - rankdir="TB", - expand_nested=False, - dpi=96, - layer_range=None, - ) - - -def _create_model_card( - model, - repo_dir: Path, - plot_model: bool = True, - metadata: Optional[dict] = None, -): - """ - Creates a model card for the repository. 
- """ - hyperparameters = _create_hyperparameter_table(model) - if plot_model and is_graphviz_available() and is_pydot_available(): - _plot_network(model, repo_dir) - if metadata is None: - metadata = {} - readme_path = f"{repo_dir}/README.md" - metadata["library_name"] = "keras" - model_card: str = "---\n" - model_card += yaml_dump(metadata, default_flow_style=False) - model_card += "---\n" - model_card += "\n## Model description\n\nMore information needed\n" - model_card += "\n## Intended uses & limitations\n\nMore information needed\n" - model_card += "\n## Training and evaluation data\n\nMore information needed\n" - if hyperparameters is not None: - model_card += "\n## Training procedure\n" - model_card += "\n### Training hyperparameters\n" - model_card += "\nThe following hyperparameters were used during training:\n\n" - model_card += hyperparameters - model_card += "\n" - if plot_model and os.path.exists(f"{repo_dir}/model.png"): - model_card += "\n ## Model Plot\n" - model_card += "\n
      " - model_card += "\nView Model Plot\n" - path_to_plot = "./model.png" - model_card += f"\n![Model Image]({path_to_plot})\n" - model_card += "\n
      " - - if os.path.exists(readme_path): - with open(readme_path, "r", encoding="utf8") as f: - readme = f.read() - else: - readme = model_card - with open(readme_path, "w", encoding="utf-8") as f: - f.write(readme) - - -def save_pretrained_keras( - model, - save_directory: Union[str, Path], - config: Optional[Dict[str, Any]] = None, - include_optimizer: bool = False, - plot_model: bool = True, - tags: Optional[Union[list, str]] = None, - **model_save_kwargs, -): - """ - Saves a Keras model to save_directory in SavedModel format. Use this if - you're using the Functional or Sequential APIs. - - Args: - model (`Keras.Model`): - The [Keras - model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) - you'd like to save. The model must be compiled and built. - save_directory (`str` or `Path`): - Specify directory in which you want to save the Keras model. - config (`dict`, *optional*): - Configuration object to be saved alongside the model weights. - include_optimizer(`bool`, *optional*, defaults to `False`): - Whether or not to include optimizer in serialization. - plot_model (`bool`, *optional*, defaults to `True`): - Setting this to `True` will plot the model and put it in the model - card. Requires graphviz and pydot to be installed. - tags (Union[`str`,`list`], *optional*): - List of tags that are related to model or string of a single tag. See example tags - [here](https://github.com/huggingface/hub-docs/blame/main/modelcard.md). - model_save_kwargs(`dict`, *optional*): - model_save_kwargs will be passed to - [`tf.keras.models.save_model()`](https://www.tensorflow.org/api_docs/python/tf/keras/models/save_model). - """ - if is_tf_available(): - import tensorflow as tf - else: - raise ImportError("Called a Tensorflow-specific function but could not import it.") - - if not model.built: - raise ValueError("Model should be built before trying to save") - - save_directory = Path(save_directory) - save_directory.mkdir(parents=True, exist_ok=True) - - # saving config - if config: - if not isinstance(config, dict): - raise RuntimeError(f"Provided config to save_pretrained_keras should be a dict. Got: '{type(config)}'") - - with (save_directory / CONFIG_NAME).open("w") as f: - json.dump(config, f) - - metadata = {} - if isinstance(tags, list): - metadata["tags"] = tags - elif isinstance(tags, str): - metadata["tags"] = [tags] - - task_name = model_save_kwargs.pop("task_name", None) - if task_name is not None: - warnings.warn( - "`task_name` input argument is deprecated. Pass `tags` instead.", - FutureWarning, - ) - if "tags" in metadata: - metadata["tags"].append(task_name) - else: - metadata["tags"] = [task_name] - - if model.history is not None: - if model.history.history != {}: - path = save_directory / "history.json" - if path.exists(): - warnings.warn( - "`history.json` file already exists, it will be overwritten by the history of this version.", - UserWarning, - ) - with path.open("w", encoding="utf-8") as f: - json.dump(model.history.history, f, indent=2, sort_keys=True) - - _create_model_card(model, save_directory, plot_model, metadata) - tf.keras.models.save_model(model, save_directory, include_optimizer=include_optimizer, **model_save_kwargs) - - -def from_pretrained_keras(*args, **kwargs) -> "KerasModelHubMixin": - r""" - Instantiate a pretrained Keras model from a pre-trained model from the Hub. - The model is expected to be in `SavedModel` format. 
- - Args: - pretrained_model_name_or_path (`str` or `os.PathLike`): - Can be either: - - A string, the `model id` of a pretrained model hosted inside a - model repo on huggingface.co. Valid model ids can be located - at the root-level, like `bert-base-uncased`, or namespaced - under a user or organization name, like - `dbmdz/bert-base-german-cased`. - - You can add `revision` by appending `@` at the end of model_id - simply like this: `dbmdz/bert-base-german-cased@main` Revision - is the specific model version to use. It can be a branch name, - a tag name, or a commit id, since we use a git-based system - for storing models and other artifacts on huggingface.co, so - `revision` can be any identifier allowed by git. - - A path to a `directory` containing model weights saved using - [`~transformers.PreTrainedModel.save_pretrained`], e.g., - `./my_model_directory/`. - - `None` if you are both providing the configuration and state - dictionary (resp. with keyword arguments `config` and - `state_dict`). - force_download (`bool`, *optional*, defaults to `False`): - Whether to force the (re-)download of the model weights and - configuration files, overriding the cached versions if they exist. - resume_download (`bool`, *optional*, defaults to `False`): - Whether to delete incompletely received files. Will attempt to - resume the download if such a file exists. - proxies (`Dict[str, str]`, *optional*): - A dictionary of proxy servers to use by protocol or endpoint, e.g., - `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The - proxies are used on each request. - token (`str` or `bool`, *optional*): - The token to use as HTTP bearer authorization for remote files. If - `True`, will use the token generated when running `transformers-cli - login` (stored in `~/.huggingface`). - cache_dir (`Union[str, os.PathLike]`, *optional*): - Path to a directory in which a downloaded pretrained model - configuration should be cached if the standard cache should not be - used. - local_files_only(`bool`, *optional*, defaults to `False`): - Whether to only look at local files (i.e., do not try to download - the model). - model_kwargs (`Dict`, *optional*): - model_kwargs will be passed to the model during initialization - - - - Passing `token=True` is required when you want to use a private - model. - - - """ - return KerasModelHubMixin.from_pretrained(*args, **kwargs) - - -@validate_hf_hub_args -def push_to_hub_keras( - model, - repo_id: str, - *, - config: Optional[dict] = None, - commit_message: str = "Push Keras model using huggingface_hub.", - private: bool = False, - api_endpoint: Optional[str] = None, - token: Optional[str] = None, - branch: Optional[str] = None, - create_pr: Optional[bool] = None, - allow_patterns: Optional[Union[List[str], str]] = None, - ignore_patterns: Optional[Union[List[str], str]] = None, - delete_patterns: Optional[Union[List[str], str]] = None, - log_dir: Optional[str] = None, - include_optimizer: bool = False, - tags: Optional[Union[list, str]] = None, - plot_model: bool = True, - **model_save_kwargs, -): - """ - Upload model checkpoint to the Hub. - - Use `allow_patterns` and `ignore_patterns` to precisely filter which files should be pushed to the hub. Use - `delete_patterns` to delete existing remote files in the same commit. See [`upload_folder`] reference for more - details. - - Args: - model (`Keras.Model`): - The [Keras model](`https://www.tensorflow.org/api_docs/python/tf/keras/Model`) you'd like to push to the - Hub. The model must be compiled and built. 
- repo_id (`str`): - ID of the repository to push to (example: `"username/my-model"`). - commit_message (`str`, *optional*, defaults to "Add Keras model"): - Message to commit while pushing. - private (`bool`, *optional*, defaults to `False`): - Whether the repository created should be private. - api_endpoint (`str`, *optional*): - The API endpoint to use when pushing the model to the hub. - token (`str`, *optional*): - The token to use as HTTP bearer authorization for remote files. If - not set, will use the token set when logging in with - `huggingface-cli login` (stored in `~/.huggingface`). - branch (`str`, *optional*): - The git branch on which to push the model. This defaults to - the default branch as specified in your repository, which - defaults to `"main"`. - create_pr (`boolean`, *optional*): - Whether or not to create a Pull Request from `branch` with that commit. - Defaults to `False`. - config (`dict`, *optional*): - Configuration object to be saved alongside the model weights. - allow_patterns (`List[str]` or `str`, *optional*): - If provided, only files matching at least one pattern are pushed. - ignore_patterns (`List[str]` or `str`, *optional*): - If provided, files matching any of the patterns are not pushed. - delete_patterns (`List[str]` or `str`, *optional*): - If provided, remote files matching any of the patterns will be deleted from the repo. - log_dir (`str`, *optional*): - TensorBoard logging directory to be pushed. The Hub automatically - hosts and displays a TensorBoard instance if log files are included - in the repository. - include_optimizer (`bool`, *optional*, defaults to `False`): - Whether or not to include optimizer during serialization. - tags (Union[`list`, `str`], *optional*): - List of tags that are related to model or string of a single tag. See example tags - [here](https://github.com/huggingface/hub-docs/blame/main/modelcard.md). - plot_model (`bool`, *optional*, defaults to `True`): - Setting this to `True` will plot the model and put it in the model - card. Requires graphviz and pydot to be installed. - model_save_kwargs(`dict`, *optional*): - model_save_kwargs will be passed to - [`tf.keras.models.save_model()`](https://www.tensorflow.org/api_docs/python/tf/keras/models/save_model). - - Returns: - The url of the commit of your model in the given repository. 
- """ - api = HfApi(endpoint=api_endpoint) - repo_id = api.create_repo(repo_id=repo_id, token=token, private=private, exist_ok=True).repo_id - - # Push the files to the repo in a single commit - with SoftTemporaryDirectory() as tmp: - saved_path = Path(tmp) / repo_id - save_pretrained_keras( - model, - saved_path, - config=config, - include_optimizer=include_optimizer, - tags=tags, - plot_model=plot_model, - **model_save_kwargs, - ) - - # If `log_dir` provided, delete remote logs and upload new ones - if log_dir is not None: - delete_patterns = ( - [] - if delete_patterns is None - else ( - [delete_patterns] # convert `delete_patterns` to a list - if isinstance(delete_patterns, str) - else delete_patterns - ) - ) - delete_patterns.append("logs/*") - copytree(log_dir, saved_path / "logs") - - return api.upload_folder( - repo_type="model", - repo_id=repo_id, - folder_path=saved_path, - commit_message=commit_message, - token=token, - revision=branch, - create_pr=create_pr, - allow_patterns=allow_patterns, - ignore_patterns=ignore_patterns, - delete_patterns=delete_patterns, - ) - - -class KerasModelHubMixin(ModelHubMixin): - """ - Implementation of [`ModelHubMixin`] to provide model Hub upload/download - capabilities to Keras models. - - - ```python - >>> import tensorflow as tf - >>> from huggingface_hub import KerasModelHubMixin - - - >>> class MyModel(tf.keras.Model, KerasModelHubMixin): - ... def __init__(self, **kwargs): - ... super().__init__() - ... self.config = kwargs.pop("config", None) - ... self.dummy_inputs = ... - ... self.layer = ... - - ... def call(self, *args): - ... return ... - - - >>> # Initialize and compile the model as you normally would - >>> model = MyModel() - >>> model.compile(...) - >>> # Build the graph by training it or passing dummy inputs - >>> _ = model(model.dummy_inputs) - >>> # Save model weights to local directory - >>> model.save_pretrained("my-awesome-model") - >>> # Push model weights to the Hub - >>> model.push_to_hub("my-awesome-model") - >>> # Download and initialize weights from the Hub - >>> model = MyModel.from_pretrained("username/super-cool-model") - ``` - """ - - def _save_pretrained(self, save_directory): - save_pretrained_keras(self, save_directory) - - @classmethod - def _from_pretrained( - cls, - model_id, - revision, - cache_dir, - force_download, - proxies, - resume_download, - local_files_only, - token, - **model_kwargs, - ): - """Here we just call [`from_pretrained_keras`] function so both the mixin and - functional APIs stay in sync. - - TODO - Some args above aren't used since we are calling - snapshot_download instead of hf_hub_download. - """ - if is_tf_available(): - import tensorflow as tf - else: - raise ImportError("Called a TensorFlow-specific function but could not import it.") - - # TODO - Figure out what to do about these config values. Config is not going to be needed to load model - cfg = model_kwargs.pop("config", None) - - # Root is either a local filepath matching model_id or a cached snapshot - if not os.path.isdir(model_id): - storage_folder = snapshot_download( - repo_id=model_id, - revision=revision, - cache_dir=cache_dir, - library_name="keras", - library_version=get_tf_version(), - ) - else: - storage_folder = model_id - - model = tf.keras.models.load_model(storage_folder, **model_kwargs) - - # For now, we add a new attribute, config, to store the config loaded from the hub/a local dir. 
- model.config = cfg - - return model diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/templates/datasetcard_template.md b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/templates/datasetcard_template.md deleted file mode 100644 index f8cb4c80bfe647627589bb2a0b58273c8478cd00..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/templates/datasetcard_template.md +++ /dev/null @@ -1,143 +0,0 @@ ---- -# For reference on dataset card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1 -# Doc / guide: https://huggingface.co/docs/hub/datasets-cards -{{ card_data }} ---- - -# Dataset Card for {{ pretty_name | default("Dataset Name", true) }} - - - -{{ dataset_summary | default("", true) }} - -## Dataset Details - -### Dataset Description - - - -{{ dataset_description | default("", true) }} - -- **Curated by:** {{ curators | default("[More Information Needed]", true)}} -- **Funded by [optional]:** {{ funded_by | default("[More Information Needed]", true)}} -- **Shared by [optional]:** {{ shared_by | default("[More Information Needed]", true)}} -- **Language(s) (NLP):** {{ language | default("[More Information Needed]", true)}} -- **License:** {{ license | default("[More Information Needed]", true)}} - -### Dataset Sources [optional] - - - -- **Repository:** {{ repo | default("[More Information Needed]", true)}} -- **Paper [optional]:** {{ paper | default("[More Information Needed]", true)}} -- **Demo [optional]:** {{ demo | default("[More Information Needed]", true)}} - -## Uses - - - -### Direct Use - - - -{{ direct_use | default("[More Information Needed]", true)}} - -### Out-of-Scope Use - - - -{{ out_of_scope_use | default("[More Information Needed]", true)}} - -## Dataset Structure - - - -{{ dataset_structure | default("[More Information Needed]", true)}} - -## Dataset Creation - -### Curation Rationale - - - -{{ curation_rationale_section | default("[More Information Needed]", true)}} - -### Source Data - - - -#### Data Collection and Processing - - - -{{ data_collection_and_processing_section | default("[More Information Needed]", true)}} - -#### Who are the source data producers? - - - -{{ source_data_producers_section | default("[More Information Needed]", true)}} - -### Annotations [optional] - - - -#### Annotation process - - - -{{ annotation_process_section | default("[More Information Needed]", true)}} - -#### Who are the annotators? - - - -{{ who_are_annotators_section | default("[More Information Needed]", true)}} - -#### Personal and Sensitive Information - - - -{{ personal_and_sensitive_information | default("[More Information Needed]", true)}} - -## Bias, Risks, and Limitations - - - -{{ bias_risks_limitations | default("[More Information Needed]", true)}} - -### Recommendations - - - -{{ bias_recommendations | default("Users should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.", true)}} - -## Citation [optional] - - - -**BibTeX:** - -{{ citation_bibtex | default("[More Information Needed]", true)}} - -**APA:** - -{{ citation_apa | default("[More Information Needed]", true)}} - -## Glossary [optional] - - - -{{ glossary | default("[More Information Needed]", true)}} - -## More Information [optional] - -{{ more_information | default("[More Information Needed]", true)}} - -## Dataset Card Authors [optional] - -{{ dataset_card_authors | default("[More Information Needed]", true)}} - -## Dataset Card Contact - -{{ dataset_card_contact | default("[More Information Needed]", true)}} \ No newline at end of file diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/tests/test_ufunc.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/tests/test_ufunc.py deleted file mode 100644 index 02c437021fe9537e0dd8acf4503374c5ee15c45f..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/tests/test_ufunc.py +++ /dev/null @@ -1,2994 +0,0 @@ -import warnings -import itertools -import sys -import ctypes as ct - -import pytest -from pytest import param - -import numpy as np -import numpy.core._umath_tests as umt -import numpy.linalg._umath_linalg as uml -import numpy.core._operand_flag_tests as opflag_tests -import numpy.core._rational_tests as _rational_tests -from numpy.testing import ( - assert_, assert_equal, assert_raises, assert_array_equal, - assert_almost_equal, assert_array_almost_equal, assert_no_warnings, - assert_allclose, HAS_REFCOUNT, suppress_warnings, IS_WASM, IS_PYPY, - ) -from numpy.testing._private.utils import requires_memory -from numpy.compat import pickle - - -UNARY_UFUNCS = [obj for obj in np.core.umath.__dict__.values() - if isinstance(obj, np.ufunc)] -UNARY_OBJECT_UFUNCS = [uf for uf in UNARY_UFUNCS if "O->O" in uf.types] - - -class TestUfuncKwargs: - def test_kwarg_exact(self): - assert_raises(TypeError, np.add, 1, 2, castingx='safe') - assert_raises(TypeError, np.add, 1, 2, dtypex=int) - assert_raises(TypeError, np.add, 1, 2, extobjx=[4096]) - assert_raises(TypeError, np.add, 1, 2, outx=None) - assert_raises(TypeError, np.add, 1, 2, sigx='ii->i') - assert_raises(TypeError, np.add, 1, 2, signaturex='ii->i') - assert_raises(TypeError, np.add, 1, 2, subokx=False) - assert_raises(TypeError, np.add, 1, 2, wherex=[True]) - - def test_sig_signature(self): - assert_raises(TypeError, np.add, 1, 2, sig='ii->i', - signature='ii->i') - - def test_sig_dtype(self): - assert_raises(TypeError, np.add, 1, 2, sig='ii->i', - dtype=int) - assert_raises(TypeError, np.add, 1, 2, signature='ii->i', - dtype=int) - - def test_extobj_refcount(self): - # Should not segfault with USE_DEBUG. - assert_raises(TypeError, np.add, 1, 2, extobj=[4096], parrot=True) - - -class TestUfuncGenericLoops: - """Test generic loops. 
- - The loops to be tested are: - - PyUFunc_ff_f_As_dd_d - PyUFunc_ff_f - PyUFunc_dd_d - PyUFunc_gg_g - PyUFunc_FF_F_As_DD_D - PyUFunc_DD_D - PyUFunc_FF_F - PyUFunc_GG_G - PyUFunc_OO_O - PyUFunc_OO_O_method - PyUFunc_f_f_As_d_d - PyUFunc_d_d - PyUFunc_f_f - PyUFunc_g_g - PyUFunc_F_F_As_D_D - PyUFunc_F_F - PyUFunc_D_D - PyUFunc_G_G - PyUFunc_O_O - PyUFunc_O_O_method - PyUFunc_On_Om - - Where: - - f -- float - d -- double - g -- long double - F -- complex float - D -- complex double - G -- complex long double - O -- python object - - It is difficult to assure that each of these loops is entered from the - Python level as the special cased loops are a moving target and the - corresponding types are architecture dependent. We probably need to - define C level testing ufuncs to get at them. For the time being, I've - just looked at the signatures registered in the build directory to find - relevant functions. - - """ - np_dtypes = [ - (np.single, np.single), (np.single, np.double), - (np.csingle, np.csingle), (np.csingle, np.cdouble), - (np.double, np.double), (np.longdouble, np.longdouble), - (np.cdouble, np.cdouble), (np.clongdouble, np.clongdouble)] - - @pytest.mark.parametrize('input_dtype,output_dtype', np_dtypes) - def test_unary_PyUFunc(self, input_dtype, output_dtype, f=np.exp, x=0, y=1): - xs = np.full(10, input_dtype(x), dtype=output_dtype) - ys = f(xs)[::2] - assert_allclose(ys, y) - assert_equal(ys.dtype, output_dtype) - - def f2(x, y): - return x**y - - @pytest.mark.parametrize('input_dtype,output_dtype', np_dtypes) - def test_binary_PyUFunc(self, input_dtype, output_dtype, f=f2, x=0, y=1): - xs = np.full(10, input_dtype(x), dtype=output_dtype) - ys = f(xs, xs)[::2] - assert_allclose(ys, y) - assert_equal(ys.dtype, output_dtype) - - # class to use in testing object method loops - class foo: - def conjugate(self): - return np.bool_(1) - - def logical_xor(self, obj): - return np.bool_(1) - - def test_unary_PyUFunc_O_O(self): - x = np.ones(10, dtype=object) - assert_(np.all(np.abs(x) == 1)) - - def test_unary_PyUFunc_O_O_method_simple(self, foo=foo): - x = np.full(10, foo(), dtype=object) - assert_(np.all(np.conjugate(x) == True)) - - def test_binary_PyUFunc_OO_O(self): - x = np.ones(10, dtype=object) - assert_(np.all(np.add(x, x) == 2)) - - def test_binary_PyUFunc_OO_O_method(self, foo=foo): - x = np.full(10, foo(), dtype=object) - assert_(np.all(np.logical_xor(x, x))) - - def test_binary_PyUFunc_On_Om_method(self, foo=foo): - x = np.full((10, 2, 3), foo(), dtype=object) - assert_(np.all(np.logical_xor(x, x))) - - def test_python_complex_conjugate(self): - # The conjugate ufunc should fall back to calling the method: - arr = np.array([1+2j, 3-4j], dtype="O") - assert isinstance(arr[0], complex) - res = np.conjugate(arr) - assert res.dtype == np.dtype("O") - assert_array_equal(res, np.array([1-2j, 3+4j], dtype="O")) - - @pytest.mark.parametrize("ufunc", UNARY_OBJECT_UFUNCS) - def test_unary_PyUFunc_O_O_method_full(self, ufunc): - """Compare the result of the object loop with non-object one""" - val = np.float64(np.pi/4) - - class MyFloat(np.float64): - def __getattr__(self, attr): - try: - return super().__getattr__(attr) - except AttributeError: - return lambda: getattr(np.core.umath, attr)(val) - - # Use 0-D arrays, to ensure the same element call - num_arr = np.array(val, dtype=np.float64) - obj_arr = np.array(MyFloat(val), dtype="O") - - with np.errstate(all="raise"): - try: - res_num = ufunc(num_arr) - except Exception as exc: - with assert_raises(type(exc)): - ufunc(obj_arr) - 
else: - res_obj = ufunc(obj_arr) - assert_array_almost_equal(res_num.astype("O"), res_obj) - - -def _pickleable_module_global(): - pass - - -class TestUfunc: - def test_pickle(self): - for proto in range(2, pickle.HIGHEST_PROTOCOL + 1): - assert_(pickle.loads(pickle.dumps(np.sin, - protocol=proto)) is np.sin) - - # Check that ufunc not defined in the top level numpy namespace - # such as numpy.core._rational_tests.test_add can also be pickled - res = pickle.loads(pickle.dumps(_rational_tests.test_add, - protocol=proto)) - assert_(res is _rational_tests.test_add) - - def test_pickle_withstring(self): - astring = (b"cnumpy.core\n_ufunc_reconstruct\np0\n" - b"(S'numpy.core.umath'\np1\nS'cos'\np2\ntp3\nRp4\n.") - assert_(pickle.loads(astring) is np.cos) - - @pytest.mark.skipif(IS_PYPY, reason="'is' check does not work on PyPy") - def test_pickle_name_is_qualname(self): - # This tests that a simplification of our ufunc pickle code will - # lead to allowing qualnames as names. Future ufuncs should - # possible add a specific qualname, or a hook into pickling instead - # (dask+numba may benefit). - _pickleable_module_global.ufunc = umt._pickleable_module_global_ufunc - obj = pickle.loads(pickle.dumps(_pickleable_module_global.ufunc)) - assert obj is umt._pickleable_module_global_ufunc - - def test_reduceat_shifting_sum(self): - L = 6 - x = np.arange(L) - idx = np.array(list(zip(np.arange(L - 2), np.arange(L - 2) + 2))).ravel() - assert_array_equal(np.add.reduceat(x, idx)[::2], [1, 3, 5, 7]) - - def test_all_ufunc(self): - """Try to check presence and results of all ufuncs. - - The list of ufuncs comes from generate_umath.py and is as follows: - - ===== ==== ============= =============== ======================== - done args function types notes - ===== ==== ============= =============== ======================== - n 1 conjugate nums + O - n 1 absolute nums + O complex -> real - n 1 negative nums + O - n 1 sign nums + O -> int - n 1 invert bool + ints + O flts raise an error - n 1 degrees real + M cmplx raise an error - n 1 radians real + M cmplx raise an error - n 1 arccos flts + M - n 1 arccosh flts + M - n 1 arcsin flts + M - n 1 arcsinh flts + M - n 1 arctan flts + M - n 1 arctanh flts + M - n 1 cos flts + M - n 1 sin flts + M - n 1 tan flts + M - n 1 cosh flts + M - n 1 sinh flts + M - n 1 tanh flts + M - n 1 exp flts + M - n 1 expm1 flts + M - n 1 log flts + M - n 1 log10 flts + M - n 1 log1p flts + M - n 1 sqrt flts + M real x < 0 raises error - n 1 ceil real + M - n 1 trunc real + M - n 1 floor real + M - n 1 fabs real + M - n 1 rint flts + M - n 1 isnan flts -> bool - n 1 isinf flts -> bool - n 1 isfinite flts -> bool - n 1 signbit real -> bool - n 1 modf real -> (frac, int) - n 1 logical_not bool + nums + M -> bool - n 2 left_shift ints + O flts raise an error - n 2 right_shift ints + O flts raise an error - n 2 add bool + nums + O boolean + is || - n 2 subtract bool + nums + O boolean - is ^ - n 2 multiply bool + nums + O boolean * is & - n 2 divide nums + O - n 2 floor_divide nums + O - n 2 true_divide nums + O bBhH -> f, iIlLqQ -> d - n 2 fmod nums + M - n 2 power nums + O - n 2 greater bool + nums + O -> bool - n 2 greater_equal bool + nums + O -> bool - n 2 less bool + nums + O -> bool - n 2 less_equal bool + nums + O -> bool - n 2 equal bool + nums + O -> bool - n 2 not_equal bool + nums + O -> bool - n 2 logical_and bool + nums + M -> bool - n 2 logical_or bool + nums + M -> bool - n 2 logical_xor bool + nums + M -> bool - n 2 maximum bool + nums + O - n 2 minimum bool + nums + O - 
n 2 bitwise_and bool + ints + O flts raise an error - n 2 bitwise_or bool + ints + O flts raise an error - n 2 bitwise_xor bool + ints + O flts raise an error - n 2 arctan2 real + M - n 2 remainder ints + real + O - n 2 hypot real + M - ===== ==== ============= =============== ======================== - - Types other than those listed will be accepted, but they are cast to - the smallest compatible type for which the function is defined. The - casting rules are: - - bool -> int8 -> float32 - ints -> double - - """ - pass - - # from include/numpy/ufuncobject.h - size_inferred = 2 - can_ignore = 4 - def test_signature0(self): - # the arguments to test_signature are: nin, nout, core_signature - enabled, num_dims, ixs, flags, sizes = umt.test_signature( - 2, 1, "(i),(i)->()") - assert_equal(enabled, 1) - assert_equal(num_dims, (1, 1, 0)) - assert_equal(ixs, (0, 0)) - assert_equal(flags, (self.size_inferred,)) - assert_equal(sizes, (-1,)) - - def test_signature1(self): - # empty core signature; treat as plain ufunc (with trivial core) - enabled, num_dims, ixs, flags, sizes = umt.test_signature( - 2, 1, "(),()->()") - assert_equal(enabled, 0) - assert_equal(num_dims, (0, 0, 0)) - assert_equal(ixs, ()) - assert_equal(flags, ()) - assert_equal(sizes, ()) - - def test_signature2(self): - # more complicated names for variables - enabled, num_dims, ixs, flags, sizes = umt.test_signature( - 2, 1, "(i1,i2),(J_1)->(_kAB)") - assert_equal(enabled, 1) - assert_equal(num_dims, (2, 1, 1)) - assert_equal(ixs, (0, 1, 2, 3)) - assert_equal(flags, (self.size_inferred,)*4) - assert_equal(sizes, (-1, -1, -1, -1)) - - def test_signature3(self): - enabled, num_dims, ixs, flags, sizes = umt.test_signature( - 2, 1, "(i1, i12), (J_1)->(i12, i2)") - assert_equal(enabled, 1) - assert_equal(num_dims, (2, 1, 2)) - assert_equal(ixs, (0, 1, 2, 1, 3)) - assert_equal(flags, (self.size_inferred,)*4) - assert_equal(sizes, (-1, -1, -1, -1)) - - def test_signature4(self): - # matrix_multiply signature from _umath_tests - enabled, num_dims, ixs, flags, sizes = umt.test_signature( - 2, 1, "(n,k),(k,m)->(n,m)") - assert_equal(enabled, 1) - assert_equal(num_dims, (2, 2, 2)) - assert_equal(ixs, (0, 1, 1, 2, 0, 2)) - assert_equal(flags, (self.size_inferred,)*3) - assert_equal(sizes, (-1, -1, -1)) - - def test_signature5(self): - # matmul signature from _umath_tests - enabled, num_dims, ixs, flags, sizes = umt.test_signature( - 2, 1, "(n?,k),(k,m?)->(n?,m?)") - assert_equal(enabled, 1) - assert_equal(num_dims, (2, 2, 2)) - assert_equal(ixs, (0, 1, 1, 2, 0, 2)) - assert_equal(flags, (self.size_inferred | self.can_ignore, - self.size_inferred, - self.size_inferred | self.can_ignore)) - assert_equal(sizes, (-1, -1, -1)) - - def test_signature6(self): - enabled, num_dims, ixs, flags, sizes = umt.test_signature( - 1, 1, "(3)->()") - assert_equal(enabled, 1) - assert_equal(num_dims, (1, 0)) - assert_equal(ixs, (0,)) - assert_equal(flags, (0,)) - assert_equal(sizes, (3,)) - - def test_signature7(self): - enabled, num_dims, ixs, flags, sizes = umt.test_signature( - 3, 1, "(3),(03,3),(n)->(9)") - assert_equal(enabled, 1) - assert_equal(num_dims, (1, 2, 1, 1)) - assert_equal(ixs, (0, 0, 0, 1, 2)) - assert_equal(flags, (0, self.size_inferred, 0)) - assert_equal(sizes, (3, -1, 9)) - - def test_signature8(self): - enabled, num_dims, ixs, flags, sizes = umt.test_signature( - 3, 1, "(3?),(3?,3?),(n)->(9)") - assert_equal(enabled, 1) - assert_equal(num_dims, (1, 2, 1, 1)) - assert_equal(ixs, (0, 0, 0, 1, 2)) - assert_equal(flags, (self.can_ignore, 
self.size_inferred, 0)) - assert_equal(sizes, (3, -1, 9)) - - def test_signature9(self): - enabled, num_dims, ixs, flags, sizes = umt.test_signature( - 1, 1, "( 3) -> ( )") - assert_equal(enabled, 1) - assert_equal(num_dims, (1, 0)) - assert_equal(ixs, (0,)) - assert_equal(flags, (0,)) - assert_equal(sizes, (3,)) - - def test_signature10(self): - enabled, num_dims, ixs, flags, sizes = umt.test_signature( - 3, 1, "( 3? ) , (3? , 3?) ,(n )-> ( 9)") - assert_equal(enabled, 1) - assert_equal(num_dims, (1, 2, 1, 1)) - assert_equal(ixs, (0, 0, 0, 1, 2)) - assert_equal(flags, (self.can_ignore, self.size_inferred, 0)) - assert_equal(sizes, (3, -1, 9)) - - def test_signature_failure_extra_parenthesis(self): - with assert_raises(ValueError): - umt.test_signature(2, 1, "((i)),(i)->()") - - def test_signature_failure_mismatching_parenthesis(self): - with assert_raises(ValueError): - umt.test_signature(2, 1, "(i),)i(->()") - - def test_signature_failure_signature_missing_input_arg(self): - with assert_raises(ValueError): - umt.test_signature(2, 1, "(i),->()") - - def test_signature_failure_signature_missing_output_arg(self): - with assert_raises(ValueError): - umt.test_signature(2, 2, "(i),(i)->()") - - def test_get_signature(self): - assert_equal(umt.inner1d.signature, "(i),(i)->()") - - def test_forced_sig(self): - a = 0.5*np.arange(3, dtype='f8') - assert_equal(np.add(a, 0.5), [0.5, 1, 1.5]) - with pytest.warns(DeprecationWarning): - assert_equal(np.add(a, 0.5, sig='i', casting='unsafe'), [0, 0, 1]) - assert_equal(np.add(a, 0.5, sig='ii->i', casting='unsafe'), [0, 0, 1]) - with pytest.warns(DeprecationWarning): - assert_equal(np.add(a, 0.5, sig=('i4',), casting='unsafe'), - [0, 0, 1]) - assert_equal(np.add(a, 0.5, sig=('i4', 'i4', 'i4'), - casting='unsafe'), [0, 0, 1]) - - b = np.zeros((3,), dtype='f8') - np.add(a, 0.5, out=b) - assert_equal(b, [0.5, 1, 1.5]) - b[:] = 0 - with pytest.warns(DeprecationWarning): - np.add(a, 0.5, sig='i', out=b, casting='unsafe') - assert_equal(b, [0, 0, 1]) - b[:] = 0 - np.add(a, 0.5, sig='ii->i', out=b, casting='unsafe') - assert_equal(b, [0, 0, 1]) - b[:] = 0 - with pytest.warns(DeprecationWarning): - np.add(a, 0.5, sig=('i4',), out=b, casting='unsafe') - assert_equal(b, [0, 0, 1]) - b[:] = 0 - np.add(a, 0.5, sig=('i4', 'i4', 'i4'), out=b, casting='unsafe') - assert_equal(b, [0, 0, 1]) - - def test_signature_all_None(self): - # signature all None, is an acceptable alternative (since 1.21) - # to not providing a signature. - res1 = np.add([3], [4], sig=(None, None, None)) - res2 = np.add([3], [4]) - assert_array_equal(res1, res2) - res1 = np.maximum([3], [4], sig=(None, None, None)) - res2 = np.maximum([3], [4]) - assert_array_equal(res1, res2) - - with pytest.raises(TypeError): - # special case, that would be deprecated anyway, so errors: - np.add(3, 4, signature=(None,)) - - def test_signature_dtype_type(self): - # Since that will be the normal behaviour (past NumPy 1.21) - # we do support the types already: - float_dtype = type(np.dtype(np.float64)) - np.add(3, 4, signature=(float_dtype, float_dtype, None)) - - @pytest.mark.parametrize("get_kwarg", [ - lambda dt: dict(dtype=dt), - lambda dt: dict(signature=(dt, None, None))]) - def test_signature_dtype_instances_allowed(self, get_kwarg): - # We allow certain dtype instances when there is a clear singleton - # and the given one is equivalent; mainly for backcompat. - int64 = np.dtype("int64") - int64_2 = pickle.loads(pickle.dumps(int64)) - # Relies on pickling behavior, if assert fails just remove test... 
- assert int64 is not int64_2 - - assert np.add(1, 2, **get_kwarg(int64_2)).dtype == int64 - td = np.timedelta64(2, "s") - assert np.add(td, td, **get_kwarg("m8")).dtype == "m8[s]" - - @pytest.mark.parametrize("get_kwarg", [ - param(lambda x: dict(dtype=x), id="dtype"), - param(lambda x: dict(signature=(x, None, None)), id="signature")]) - def test_signature_dtype_instances_not_allowed(self, get_kwarg): - msg = "The `dtype` and `signature` arguments to ufuncs" - - with pytest.raises(TypeError, match=msg): - np.add(3, 5, **get_kwarg(np.dtype("int64").newbyteorder())) - with pytest.raises(TypeError, match=msg): - np.add(3, 5, **get_kwarg(np.dtype("m8[ns]"))) - with pytest.raises(TypeError, match=msg): - np.add(3, 5, **get_kwarg("m8[ns]")) - - @pytest.mark.parametrize("casting", ["unsafe", "same_kind", "safe"]) - def test_partial_signature_mismatch(self, casting): - # If the second argument matches already, no need to specify it: - res = np.ldexp(np.float32(1.), np.int_(2), dtype="d") - assert res.dtype == "d" - res = np.ldexp(np.float32(1.), np.int_(2), signature=(None, None, "d")) - assert res.dtype == "d" - - # ldexp only has a loop for long input as second argument, overriding - # the output cannot help with that (no matter the casting) - with pytest.raises(TypeError): - np.ldexp(1., np.uint64(3), dtype="d") - with pytest.raises(TypeError): - np.ldexp(1., np.uint64(3), signature=(None, None, "d")) - - def test_partial_signature_mismatch_with_cache(self): - with pytest.raises(TypeError): - np.add(np.float16(1), np.uint64(2), sig=("e", "d", None)) - # Ensure e,d->None is in the dispatching cache (double loop) - np.add(np.float16(1), np.float64(2)) - # The error must still be raised: - with pytest.raises(TypeError): - np.add(np.float16(1), np.uint64(2), sig=("e", "d", None)) - - def test_use_output_signature_for_all_arguments(self): - # Test that providing only `dtype=` or `signature=(None, None, dtype)` - # is sufficient if falling back to a homogeneous signature works. - # In this case, the `intp, intp -> intp` loop is chosen. - res = np.power(1.5, 2.8, dtype=np.intp, casting="unsafe") - assert res == 1 # the cast happens first. - res = np.power(1.5, 2.8, signature=(None, None, np.intp), - casting="unsafe") - assert res == 1 - with pytest.raises(TypeError): - # the unsafe casting would normally cause errors though: - np.power(1.5, 2.8, dtype=np.intp) - - def test_signature_errors(self): - with pytest.raises(TypeError, - match="the signature object to ufunc must be a string or"): - np.add(3, 4, signature=123.) 
# neither a string nor a tuple - - with pytest.raises(ValueError): - # bad symbols that do not translate to dtypes - np.add(3, 4, signature="%^->#") - - with pytest.raises(ValueError): - np.add(3, 4, signature=b"ii-i") # incomplete and byte string - - with pytest.raises(ValueError): - np.add(3, 4, signature="ii>i") # incomplete string - - with pytest.raises(ValueError): - np.add(3, 4, signature=(None, "f8")) # bad length - - with pytest.raises(UnicodeDecodeError): - np.add(3, 4, signature=b"\xff\xff->i") - - def test_forced_dtype_times(self): - # Signatures only set the type numbers (not the actual loop dtypes) - # so using `M` in a signature/dtype should generally work: - a = np.array(['2010-01-02', '1999-03-14', '1833-03'], dtype='>M8[D]') - np.maximum(a, a, dtype="M") - np.maximum.reduce(a, dtype="M") - - arr = np.arange(10, dtype="m8[s]") - np.add(arr, arr, dtype="m") - np.maximum(arr, arr, dtype="m") - - @pytest.mark.parametrize("ufunc", [np.add, np.sqrt]) - def test_cast_safety(self, ufunc): - """Basic test for the safest casts, because ufuncs inner loops can - indicate a cast-safety as well (which is normally always "no"). - """ - def call_ufunc(arr, **kwargs): - return ufunc(*(arr,) * ufunc.nin, **kwargs) - - arr = np.array([1., 2., 3.], dtype=np.float32) - arr_bs = arr.astype(arr.dtype.newbyteorder()) - expected = call_ufunc(arr) - # Normally, a "no" cast: - res = call_ufunc(arr, casting="no") - assert_array_equal(expected, res) - # Byte-swapping is not allowed with "no" though: - with pytest.raises(TypeError): - call_ufunc(arr_bs, casting="no") - - # But is allowed with "equiv": - res = call_ufunc(arr_bs, casting="equiv") - assert_array_equal(expected, res) - - # Casting to float64 is safe, but not equiv: - with pytest.raises(TypeError): - call_ufunc(arr_bs, dtype=np.float64, casting="equiv") - - # but it is safe cast: - res = call_ufunc(arr_bs, dtype=np.float64, casting="safe") - expected = call_ufunc(arr.astype(np.float64)) # upcast - assert_array_equal(expected, res) - - def test_true_divide(self): - a = np.array(10) - b = np.array(20) - tgt = np.array(0.5) - - for tc in 'bhilqBHILQefdgFDG': - dt = np.dtype(tc) - aa = a.astype(dt) - bb = b.astype(dt) - - # Check result value and dtype. - for x, y in itertools.product([aa, -aa], [bb, -bb]): - - # Check with no output type specified - if tc in 'FDG': - tgt = complex(x)/complex(y) - else: - tgt = float(x)/float(y) - - res = np.true_divide(x, y) - rtol = max(np.finfo(res).resolution, 1e-15) - assert_allclose(res, tgt, rtol=rtol) - - if tc in 'bhilqBHILQ': - assert_(res.dtype.name == 'float64') - else: - assert_(res.dtype.name == dt.name ) - - # Check with output type specified. This also checks for the - # incorrect casts in issue gh-3484 because the unary '-' does - # not change types, even for unsigned types, Hence casts in the - # ufunc from signed to unsigned and vice versa will lead to - # errors in the values. 
- for tcout in 'bhilqBHILQ': - dtout = np.dtype(tcout) - assert_raises(TypeError, np.true_divide, x, y, dtype=dtout) - - for tcout in 'efdg': - dtout = np.dtype(tcout) - if tc in 'FDG': - # Casting complex to float is not allowed - assert_raises(TypeError, np.true_divide, x, y, dtype=dtout) - else: - tgt = float(x)/float(y) - rtol = max(np.finfo(dtout).resolution, 1e-15) - # The value of tiny for double double is NaN - with suppress_warnings() as sup: - sup.filter(UserWarning) - if not np.isnan(np.finfo(dtout).tiny): - atol = max(np.finfo(dtout).tiny, 3e-308) - else: - atol = 3e-308 - # Some test values result in invalid for float16 - # and the cast to it may overflow to inf. - with np.errstate(invalid='ignore', over='ignore'): - res = np.true_divide(x, y, dtype=dtout) - if not np.isfinite(res) and tcout == 'e': - continue - assert_allclose(res, tgt, rtol=rtol, atol=atol) - assert_(res.dtype.name == dtout.name) - - for tcout in 'FDG': - dtout = np.dtype(tcout) - tgt = complex(x)/complex(y) - rtol = max(np.finfo(dtout).resolution, 1e-15) - # The value of tiny for double double is NaN - with suppress_warnings() as sup: - sup.filter(UserWarning) - if not np.isnan(np.finfo(dtout).tiny): - atol = max(np.finfo(dtout).tiny, 3e-308) - else: - atol = 3e-308 - res = np.true_divide(x, y, dtype=dtout) - if not np.isfinite(res): - continue - assert_allclose(res, tgt, rtol=rtol, atol=atol) - assert_(res.dtype.name == dtout.name) - - # Check booleans - a = np.ones((), dtype=np.bool_) - res = np.true_divide(a, a) - assert_(res == 1.0) - assert_(res.dtype.name == 'float64') - res = np.true_divide(~a, a) - assert_(res == 0.0) - assert_(res.dtype.name == 'float64') - - def test_sum_stability(self): - a = np.ones(500, dtype=np.float32) - assert_almost_equal((a / 10.).sum() - a.size / 10., 0, 4) - - a = np.ones(500, dtype=np.float64) - assert_almost_equal((a / 10.).sum() - a.size / 10., 0, 13) - - @pytest.mark.skipif(IS_WASM, reason="fp errors don't work in wasm") - def test_sum(self): - for dt in (int, np.float16, np.float32, np.float64, np.longdouble): - for v in (0, 1, 2, 7, 8, 9, 15, 16, 19, 127, - 128, 1024, 1235): - # warning if sum overflows, which it does in float16 - with warnings.catch_warnings(record=True) as w: - warnings.simplefilter("always", RuntimeWarning) - - tgt = dt(v * (v + 1) / 2) - overflow = not np.isfinite(tgt) - assert_equal(len(w), 1 * overflow) - - d = np.arange(1, v + 1, dtype=dt) - - assert_almost_equal(np.sum(d), tgt) - assert_equal(len(w), 2 * overflow) - - assert_almost_equal(np.sum(d[::-1]), tgt) - assert_equal(len(w), 3 * overflow) - - d = np.ones(500, dtype=dt) - assert_almost_equal(np.sum(d[::2]), 250.) - assert_almost_equal(np.sum(d[1::2]), 250.) - assert_almost_equal(np.sum(d[::3]), 167.) - assert_almost_equal(np.sum(d[1::3]), 167.) - assert_almost_equal(np.sum(d[::-2]), 250.) - assert_almost_equal(np.sum(d[-1::-2]), 250.) - assert_almost_equal(np.sum(d[::-3]), 167.) - assert_almost_equal(np.sum(d[-1::-3]), 167.) - # sum with first reduction entry != 0 - d = np.ones((1,), dtype=dt) - d += d - assert_almost_equal(d, 2.) 
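The float32 checks in `test_sum_stability` and `test_sum` above rely on NumPy summing arrays pairwise, which keeps rounding error far smaller than naive accumulation. A minimal standalone sketch of that effect (the variable names here are illustrative and are not part of the deleted test file):

```python
import numpy as np

# 500 copies of 0.1 stored as float32; the exact sum is 50.0.
values = np.full(500, 0.1, dtype=np.float32)

# Naive left-to-right accumulation rounds after every addition,
# so its error tends to grow linearly with the number of terms.
naive = np.float32(0.0)
for v in values:
    naive = np.float32(naive + v)

# ndarray.sum() uses pairwise summation, whose error grows only
# roughly logarithmically with the number of terms.
pairwise = values.sum()

print(abs(float(naive) - 50.0), abs(float(pairwise) - 50.0))
```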
- - def test_sum_complex(self): - for dt in (np.complex64, np.complex128, np.clongdouble): - for v in (0, 1, 2, 7, 8, 9, 15, 16, 19, 127, - 128, 1024, 1235): - tgt = dt(v * (v + 1) / 2) - dt((v * (v + 1) / 2) * 1j) - d = np.empty(v, dtype=dt) - d.real = np.arange(1, v + 1) - d.imag = -np.arange(1, v + 1) - assert_almost_equal(np.sum(d), tgt) - assert_almost_equal(np.sum(d[::-1]), tgt) - - d = np.ones(500, dtype=dt) + 1j - assert_almost_equal(np.sum(d[::2]), 250. + 250j) - assert_almost_equal(np.sum(d[1::2]), 250. + 250j) - assert_almost_equal(np.sum(d[::3]), 167. + 167j) - assert_almost_equal(np.sum(d[1::3]), 167. + 167j) - assert_almost_equal(np.sum(d[::-2]), 250. + 250j) - assert_almost_equal(np.sum(d[-1::-2]), 250. + 250j) - assert_almost_equal(np.sum(d[::-3]), 167. + 167j) - assert_almost_equal(np.sum(d[-1::-3]), 167. + 167j) - # sum with first reduction entry != 0 - d = np.ones((1,), dtype=dt) + 1j - d += d - assert_almost_equal(d, 2. + 2j) - - def test_sum_initial(self): - # Integer, single axis - assert_equal(np.sum([3], initial=2), 5) - - # Floating point - assert_almost_equal(np.sum([0.2], initial=0.1), 0.3) - - # Multiple non-adjacent axes - assert_equal(np.sum(np.ones((2, 3, 5), dtype=np.int64), axis=(0, 2), initial=2), - [12, 12, 12]) - - def test_sum_where(self): - # More extensive tests done in test_reduction_with_where. - assert_equal(np.sum([[1., 2.], [3., 4.]], where=[True, False]), 4.) - assert_equal(np.sum([[1., 2.], [3., 4.]], axis=0, initial=5., - where=[True, False]), [9., 5.]) - - def test_inner1d(self): - a = np.arange(6).reshape((2, 3)) - assert_array_equal(umt.inner1d(a, a), np.sum(a*a, axis=-1)) - a = np.arange(6) - assert_array_equal(umt.inner1d(a, a), np.sum(a*a)) - - def test_broadcast(self): - msg = "broadcast" - a = np.arange(4).reshape((2, 1, 2)) - b = np.arange(4).reshape((1, 2, 2)) - assert_array_equal(umt.inner1d(a, b), np.sum(a*b, axis=-1), err_msg=msg) - msg = "extend & broadcast loop dimensions" - b = np.arange(4).reshape((2, 2)) - assert_array_equal(umt.inner1d(a, b), np.sum(a*b, axis=-1), err_msg=msg) - # Broadcast in core dimensions should fail - a = np.arange(8).reshape((4, 2)) - b = np.arange(4).reshape((4, 1)) - assert_raises(ValueError, umt.inner1d, a, b) - # Extend core dimensions should fail - a = np.arange(8).reshape((4, 2)) - b = np.array(7) - assert_raises(ValueError, umt.inner1d, a, b) - # Broadcast should fail - a = np.arange(2).reshape((2, 1, 1)) - b = np.arange(3).reshape((3, 1, 1)) - assert_raises(ValueError, umt.inner1d, a, b) - - # Writing to a broadcasted array with overlap should warn, gh-2705 - a = np.arange(2) - b = np.arange(4).reshape((2, 2)) - u, v = np.broadcast_arrays(a, b) - assert_equal(u.strides[0], 0) - x = u + v - with warnings.catch_warnings(record=True) as w: - warnings.simplefilter("always") - u += v - assert_equal(len(w), 1) - assert_(x[0, 0] != u[0, 0]) - - # Output reduction should not be allowed. - # See gh-15139 - a = np.arange(6).reshape(3, 2) - b = np.ones(2) - out = np.empty(()) - assert_raises(ValueError, umt.inner1d, a, b, out) - out2 = np.empty(3) - c = umt.inner1d(a, b, out2) - assert_(c is out2) - - def test_out_broadcasts(self): - # For ufuncs and gufuncs (not for reductions), we currently allow - # the output to cause broadcasting of the input arrays. - # both along dimensions with shape 1 and dimensions which do not - # exist at all in the inputs. 
- arr = np.arange(3).reshape(1, 3) - out = np.empty((5, 4, 3)) - np.add(arr, arr, out=out) - assert (out == np.arange(3) * 2).all() - - # The same holds for gufuncs (gh-16484) - umt.inner1d(arr, arr, out=out) - # the result would be just a scalar `5`, but is broadcast fully: - assert (out == 5).all() - - @pytest.mark.parametrize(["arr", "out"], [ - ([2], np.empty(())), - ([1, 2], np.empty(1)), - (np.ones((4, 3)), np.empty((4, 1)))], - ids=["(1,)->()", "(2,)->(1,)", "(4, 3)->(4, 1)"]) - def test_out_broadcast_errors(self, arr, out): - # Output is (currently) allowed to broadcast inputs, but it cannot be - # smaller than the actual result. - with pytest.raises(ValueError, match="non-broadcastable"): - np.positive(arr, out=out) - - with pytest.raises(ValueError, match="non-broadcastable"): - np.add(np.ones(()), arr, out=out) - - def test_type_cast(self): - msg = "type cast" - a = np.arange(6, dtype='short').reshape((2, 3)) - assert_array_equal(umt.inner1d(a, a), np.sum(a*a, axis=-1), - err_msg=msg) - msg = "type cast on one argument" - a = np.arange(6).reshape((2, 3)) - b = a + 0.1 - assert_array_almost_equal(umt.inner1d(a, b), np.sum(a*b, axis=-1), - err_msg=msg) - - def test_endian(self): - msg = "big endian" - a = np.arange(6, dtype='>i4').reshape((2, 3)) - assert_array_equal(umt.inner1d(a, a), np.sum(a*a, axis=-1), - err_msg=msg) - msg = "little endian" - a = np.arange(6, dtype='<i4').reshape((2, 3)) - assert_array_equal(umt.inner1d(a, a), np.sum(a*a, axis=-1), - err_msg=msg) - - def test_axes_argument(self): - # inner1d signature: '(i),(i)->()' - inner1d = umt.inner1d - a = np.arange(27.).reshape((3, 3, 3)) - b = np.arange(10., 19.).reshape((3, 1, 3)) - # basic tests on inputs (outputs tested below with matrix_multiply). - c = inner1d(a, b) - assert_array_equal(c, (a * b).sum(-1)) - # default - c = inner1d(a, b, axes=[(-1,), (-1,), ()]) - assert_array_equal(c, (a * b).sum(-1)) - # integers ok for single axis. - c = inner1d(a, b, axes=[-1, -1, ()]) - assert_array_equal(c, (a * b).sum(-1)) - # mix fine - c = inner1d(a, b, axes=[(-1,), -1, ()]) - assert_array_equal(c, (a * b).sum(-1)) - # can omit last axis. - c = inner1d(a, b, axes=[-1, -1]) - assert_array_equal(c, (a * b).sum(-1)) - # can pass in other types of integer (with __index__ protocol) - c = inner1d(a, b, axes=[np.int8(-1), np.array(-1, dtype=np.int32)]) - assert_array_equal(c, (a * b).sum(-1)) - # swap some axes - c = inner1d(a, b, axes=[0, 0]) - assert_array_equal(c, (a * b).sum(0)) - c = inner1d(a, b, axes=[0, 2]) - assert_array_equal(c, (a.transpose(1, 2, 0) * b).sum(-1)) - # Check errors for improperly constructed axes arguments. - # should have list. - assert_raises(TypeError, inner1d, a, b, axes=-1) - # needs enough elements - assert_raises(ValueError, inner1d, a, b, axes=[-1]) - # should pass in indices. - assert_raises(TypeError, inner1d, a, b, axes=[-1.0, -1.0]) - assert_raises(TypeError, inner1d, a, b, axes=[(-1.0,), -1]) - assert_raises(TypeError, inner1d, a, b, axes=[None, 1]) - # cannot pass an index unless there is only one dimension - # (output is wrong in this case) - assert_raises(np.AxisError, inner1d, a, b, axes=[-1, -1, -1]) - # or pass in generally the wrong number of axes - assert_raises(np.AxisError, inner1d, a, b, axes=[-1, -1, (-1,)]) - assert_raises(np.AxisError, inner1d, a, b, axes=[-1, (-2, -1), ()]) - # axes need to have same length. - assert_raises(ValueError, inner1d, a, b, axes=[0, 1]) - - # matrix_multiply signature: '(m,n),(n,p)->(m,p)' - mm = umt.matrix_multiply - a = np.arange(12).reshape((2, 3, 2)) - b = np.arange(8).reshape((2, 2, 2, 1)) + 1 - # Sanity check. - c = mm(a, b) - assert_array_equal(c, np.matmul(a, b)) - # Default axes. 
- c = mm(a, b, axes=[(-2, -1), (-2, -1), (-2, -1)]) - assert_array_equal(c, np.matmul(a, b)) - # Default with explicit axes. - c = mm(a, b, axes=[(1, 2), (2, 3), (2, 3)]) - assert_array_equal(c, np.matmul(a, b)) - # swap some axes. - c = mm(a, b, axes=[(0, -1), (1, 2), (-2, -1)]) - assert_array_equal(c, np.matmul(a.transpose(1, 0, 2), - b.transpose(0, 3, 1, 2))) - # Default with output array. - c = np.empty((2, 2, 3, 1)) - d = mm(a, b, out=c, axes=[(1, 2), (2, 3), (2, 3)]) - assert_(c is d) - assert_array_equal(c, np.matmul(a, b)) - # Transposed output array - c = np.empty((1, 2, 2, 3)) - d = mm(a, b, out=c, axes=[(-2, -1), (-2, -1), (3, 0)]) - assert_(c is d) - assert_array_equal(c, np.matmul(a, b).transpose(3, 0, 1, 2)) - # Check errors for improperly constructed axes arguments. - # wrong argument - assert_raises(TypeError, mm, a, b, axis=1) - # axes should be list - assert_raises(TypeError, mm, a, b, axes=1) - assert_raises(TypeError, mm, a, b, axes=((-2, -1), (-2, -1), (-2, -1))) - # list needs to have right length - assert_raises(ValueError, mm, a, b, axes=[]) - assert_raises(ValueError, mm, a, b, axes=[(-2, -1)]) - # list should not contain None, or lists - assert_raises(TypeError, mm, a, b, axes=[None, None, None]) - assert_raises(TypeError, - mm, a, b, axes=[[-2, -1], [-2, -1], [-2, -1]]) - assert_raises(TypeError, - mm, a, b, axes=[(-2, -1), (-2, -1), [-2, -1]]) - assert_raises(TypeError, mm, a, b, axes=[(-2, -1), (-2, -1), None]) - # single integers are AxisErrors if more are required - assert_raises(np.AxisError, mm, a, b, axes=[-1, -1, -1]) - assert_raises(np.AxisError, mm, a, b, axes=[(-2, -1), (-2, -1), -1]) - # tuples should not have duplicated values - assert_raises(ValueError, mm, a, b, axes=[(-2, -1), (-2, -1), (-2, -2)]) - # arrays should have enough axes. - z = np.zeros((2, 2)) - assert_raises(ValueError, mm, z, z[0]) - assert_raises(ValueError, mm, z, z, out=z[:, 0]) - assert_raises(ValueError, mm, z[1], z, axes=[0, 1]) - assert_raises(ValueError, mm, z, z, out=z[0], axes=[0, 1]) - # Regular ufuncs should not accept axes. - assert_raises(TypeError, np.add, 1., 1., axes=[0]) - # should be able to deal with bad unrelated kwargs. - assert_raises(TypeError, mm, z, z, axes=[0, 1], parrot=True) - - def test_axis_argument(self): - # inner1d signature: '(i),(i)->()' - inner1d = umt.inner1d - a = np.arange(27.).reshape((3, 3, 3)) - b = np.arange(10., 19.).reshape((3, 1, 3)) - c = inner1d(a, b) - assert_array_equal(c, (a * b).sum(-1)) - c = inner1d(a, b, axis=-1) - assert_array_equal(c, (a * b).sum(-1)) - out = np.zeros_like(c) - d = inner1d(a, b, axis=-1, out=out) - assert_(d is out) - assert_array_equal(d, c) - c = inner1d(a, b, axis=0) - assert_array_equal(c, (a * b).sum(0)) - # Sanity checks on innerwt and cumsum. - a = np.arange(6).reshape((2, 3)) - b = np.arange(10, 16).reshape((2, 3)) - w = np.arange(20, 26).reshape((2, 3)) - assert_array_equal(umt.innerwt(a, b, w, axis=0), - np.sum(a * b * w, axis=0)) - assert_array_equal(umt.cumsum(a, axis=0), np.cumsum(a, axis=0)) - assert_array_equal(umt.cumsum(a, axis=-1), np.cumsum(a, axis=-1)) - out = np.empty_like(a) - b = umt.cumsum(a, out=out, axis=0) - assert_(out is b) - assert_array_equal(b, np.cumsum(a, axis=0)) - b = umt.cumsum(a, out=out, axis=1) - assert_(out is b) - assert_array_equal(b, np.cumsum(a, axis=-1)) - # Check errors. - # Cannot pass in both axis and axes. - assert_raises(TypeError, inner1d, a, b, axis=0, axes=[0, 0]) - # Not an integer. 
- assert_raises(TypeError, inner1d, a, b, axis=[0]) - # more than 1 core dimensions. - mm = umt.matrix_multiply - assert_raises(TypeError, mm, a, b, axis=1) - # Output wrong size in axis. - out = np.empty((1, 2, 3), dtype=a.dtype) - assert_raises(ValueError, umt.cumsum, a, out=out, axis=0) - # Regular ufuncs should not accept axis. - assert_raises(TypeError, np.add, 1., 1., axis=0) - - def test_keepdims_argument(self): - # inner1d signature: '(i),(i)->()' - inner1d = umt.inner1d - a = np.arange(27.).reshape((3, 3, 3)) - b = np.arange(10., 19.).reshape((3, 1, 3)) - c = inner1d(a, b) - assert_array_equal(c, (a * b).sum(-1)) - c = inner1d(a, b, keepdims=False) - assert_array_equal(c, (a * b).sum(-1)) - c = inner1d(a, b, keepdims=True) - assert_array_equal(c, (a * b).sum(-1, keepdims=True)) - out = np.zeros_like(c) - d = inner1d(a, b, keepdims=True, out=out) - assert_(d is out) - assert_array_equal(d, c) - # Now combined with axis and axes. - c = inner1d(a, b, axis=-1, keepdims=False) - assert_array_equal(c, (a * b).sum(-1, keepdims=False)) - c = inner1d(a, b, axis=-1, keepdims=True) - assert_array_equal(c, (a * b).sum(-1, keepdims=True)) - c = inner1d(a, b, axis=0, keepdims=False) - assert_array_equal(c, (a * b).sum(0, keepdims=False)) - c = inner1d(a, b, axis=0, keepdims=True) - assert_array_equal(c, (a * b).sum(0, keepdims=True)) - c = inner1d(a, b, axes=[(-1,), (-1,), ()], keepdims=False) - assert_array_equal(c, (a * b).sum(-1)) - c = inner1d(a, b, axes=[(-1,), (-1,), (-1,)], keepdims=True) - assert_array_equal(c, (a * b).sum(-1, keepdims=True)) - c = inner1d(a, b, axes=[0, 0], keepdims=False) - assert_array_equal(c, (a * b).sum(0)) - c = inner1d(a, b, axes=[0, 0, 0], keepdims=True) - assert_array_equal(c, (a * b).sum(0, keepdims=True)) - c = inner1d(a, b, axes=[0, 2], keepdims=False) - assert_array_equal(c, (a.transpose(1, 2, 0) * b).sum(-1)) - c = inner1d(a, b, axes=[0, 2], keepdims=True) - assert_array_equal(c, (a.transpose(1, 2, 0) * b).sum(-1, - keepdims=True)) - c = inner1d(a, b, axes=[0, 2, 2], keepdims=True) - assert_array_equal(c, (a.transpose(1, 2, 0) * b).sum(-1, - keepdims=True)) - c = inner1d(a, b, axes=[0, 2, 0], keepdims=True) - assert_array_equal(c, (a * b.transpose(2, 0, 1)).sum(0, keepdims=True)) - # Hardly useful, but should work. - c = inner1d(a, b, axes=[0, 2, 1], keepdims=True) - assert_array_equal(c, (a.transpose(1, 0, 2) * b.transpose(0, 2, 1)) - .sum(1, keepdims=True)) - # Check with two core dimensions. - a = np.eye(3) * np.arange(4.)[:, np.newaxis, np.newaxis] - expected = uml.det(a) - c = uml.det(a, keepdims=False) - assert_array_equal(c, expected) - c = uml.det(a, keepdims=True) - assert_array_equal(c, expected[:, np.newaxis, np.newaxis]) - a = np.eye(3) * np.arange(4.)[:, np.newaxis, np.newaxis] - expected_s, expected_l = uml.slogdet(a) - cs, cl = uml.slogdet(a, keepdims=False) - assert_array_equal(cs, expected_s) - assert_array_equal(cl, expected_l) - cs, cl = uml.slogdet(a, keepdims=True) - assert_array_equal(cs, expected_s[:, np.newaxis, np.newaxis]) - assert_array_equal(cl, expected_l[:, np.newaxis, np.newaxis]) - # Sanity check on innerwt. - a = np.arange(6).reshape((2, 3)) - b = np.arange(10, 16).reshape((2, 3)) - w = np.arange(20, 26).reshape((2, 3)) - assert_array_equal(umt.innerwt(a, b, w, keepdims=True), - np.sum(a * b * w, axis=-1, keepdims=True)) - assert_array_equal(umt.innerwt(a, b, w, axis=0, keepdims=True), - np.sum(a * b * w, axis=0, keepdims=True)) - # Check errors. 
- # Not a boolean - assert_raises(TypeError, inner1d, a, b, keepdims='true') - # More than 1 core dimension, and core output dimensions. - mm = umt.matrix_multiply - assert_raises(TypeError, mm, a, b, keepdims=True) - assert_raises(TypeError, mm, a, b, keepdims=False) - # Regular ufuncs should not accept keepdims. - assert_raises(TypeError, np.add, 1., 1., keepdims=False) - - def test_innerwt(self): - a = np.arange(6).reshape((2, 3)) - b = np.arange(10, 16).reshape((2, 3)) - w = np.arange(20, 26).reshape((2, 3)) - assert_array_equal(umt.innerwt(a, b, w), np.sum(a*b*w, axis=-1)) - a = np.arange(100, 124).reshape((2, 3, 4)) - b = np.arange(200, 224).reshape((2, 3, 4)) - w = np.arange(300, 324).reshape((2, 3, 4)) - assert_array_equal(umt.innerwt(a, b, w), np.sum(a*b*w, axis=-1)) - - def test_innerwt_empty(self): - """Test generalized ufunc with zero-sized operands""" - a = np.array([], dtype='f8') - b = np.array([], dtype='f8') - w = np.array([], dtype='f8') - assert_array_equal(umt.innerwt(a, b, w), np.sum(a*b*w, axis=-1)) - - def test_cross1d(self): - """Test with fixed-sized signature.""" - a = np.eye(3) - assert_array_equal(umt.cross1d(a, a), np.zeros((3, 3))) - out = np.zeros((3, 3)) - result = umt.cross1d(a[0], a, out) - assert_(result is out) - assert_array_equal(result, np.vstack((np.zeros(3), a[2], -a[1]))) - assert_raises(ValueError, umt.cross1d, np.eye(4), np.eye(4)) - assert_raises(ValueError, umt.cross1d, a, np.arange(4.)) - # Wrong output core dimension. - assert_raises(ValueError, umt.cross1d, a, np.arange(3.), np.zeros((3, 4))) - # Wrong output broadcast dimension (see gh-15139). - assert_raises(ValueError, umt.cross1d, a, np.arange(3.), np.zeros(3)) - - def test_can_ignore_signature(self): - # Comparing the effects of ? in signature: - # matrix_multiply: (m,n),(n,p)->(m,p) # all must be there. - # matmul: (m?,n),(n,p?)->(m?,p?) # allow missing m, p. - mat = np.arange(12).reshape((2, 3, 2)) - single_vec = np.arange(2) - col_vec = single_vec[:, np.newaxis] - col_vec_array = np.arange(8).reshape((2, 2, 2, 1)) + 1 - # matrix @ single column vector with proper dimension - mm_col_vec = umt.matrix_multiply(mat, col_vec) - # matmul does the same thing - matmul_col_vec = umt.matmul(mat, col_vec) - assert_array_equal(matmul_col_vec, mm_col_vec) - # matrix @ vector without dimension making it a column vector. - # matrix multiply fails -> missing core dim. - assert_raises(ValueError, umt.matrix_multiply, mat, single_vec) - # matmul mimicker passes, and returns a vector. - matmul_col = umt.matmul(mat, single_vec) - assert_array_equal(matmul_col, mm_col_vec.squeeze()) - # Now with a column array: same as for column vector, - # broadcasting sensibly. 
- mm_col_vec = umt.matrix_multiply(mat, col_vec_array) - matmul_col_vec = umt.matmul(mat, col_vec_array) - assert_array_equal(matmul_col_vec, mm_col_vec) - # As above, but for row vector - single_vec = np.arange(3) - row_vec = single_vec[np.newaxis, :] - row_vec_array = np.arange(24).reshape((4, 2, 1, 1, 3)) + 1 - # row vector @ matrix - mm_row_vec = umt.matrix_multiply(row_vec, mat) - matmul_row_vec = umt.matmul(row_vec, mat) - assert_array_equal(matmul_row_vec, mm_row_vec) - # single row vector @ matrix - assert_raises(ValueError, umt.matrix_multiply, single_vec, mat) - matmul_row = umt.matmul(single_vec, mat) - assert_array_equal(matmul_row, mm_row_vec.squeeze()) - # row vector array @ matrix - mm_row_vec = umt.matrix_multiply(row_vec_array, mat) - matmul_row_vec = umt.matmul(row_vec_array, mat) - assert_array_equal(matmul_row_vec, mm_row_vec) - # Now for vector combinations - # row vector @ column vector - col_vec = row_vec.T - col_vec_array = row_vec_array.swapaxes(-2, -1) - mm_row_col_vec = umt.matrix_multiply(row_vec, col_vec) - matmul_row_col_vec = umt.matmul(row_vec, col_vec) - assert_array_equal(matmul_row_col_vec, mm_row_col_vec) - # single row vector @ single col vector - assert_raises(ValueError, umt.matrix_multiply, single_vec, single_vec) - matmul_row_col = umt.matmul(single_vec, single_vec) - assert_array_equal(matmul_row_col, mm_row_col_vec.squeeze()) - # row vector array @ matrix - mm_row_col_array = umt.matrix_multiply(row_vec_array, col_vec_array) - matmul_row_col_array = umt.matmul(row_vec_array, col_vec_array) - assert_array_equal(matmul_row_col_array, mm_row_col_array) - # Finally, check that things are *not* squeezed if one gives an - # output. - out = np.zeros_like(mm_row_col_array) - out = umt.matrix_multiply(row_vec_array, col_vec_array, out=out) - assert_array_equal(out, mm_row_col_array) - out[:] = 0 - out = umt.matmul(row_vec_array, col_vec_array, out=out) - assert_array_equal(out, mm_row_col_array) - # And check one cannot put missing dimensions back. - out = np.zeros_like(mm_row_col_vec) - assert_raises(ValueError, umt.matrix_multiply, single_vec, single_vec, - out) - # But fine for matmul, since it is just a broadcast. 
- out = umt.matmul(single_vec, single_vec, out) - assert_array_equal(out, mm_row_col_vec.squeeze()) - - def test_matrix_multiply(self): - self.compare_matrix_multiply_results(np.int64) - self.compare_matrix_multiply_results(np.double) - - def test_matrix_multiply_umath_empty(self): - res = umt.matrix_multiply(np.ones((0, 10)), np.ones((10, 0))) - assert_array_equal(res, np.zeros((0, 0))) - res = umt.matrix_multiply(np.ones((10, 0)), np.ones((0, 10))) - assert_array_equal(res, np.zeros((10, 10))) - - def compare_matrix_multiply_results(self, tp): - d1 = np.array(np.random.rand(2, 3, 4), dtype=tp) - d2 = np.array(np.random.rand(2, 3, 4), dtype=tp) - msg = "matrix multiply on type %s" % d1.dtype.name - - def permute_n(n): - if n == 1: - return ([0],) - ret = () - base = permute_n(n-1) - for perm in base: - for i in range(n): - new = perm + [n-1] - new[n-1] = new[i] - new[i] = n-1 - ret += (new,) - return ret - - def slice_n(n): - if n == 0: - return ((),) - ret = () - base = slice_n(n-1) - for sl in base: - ret += (sl+(slice(None),),) - ret += (sl+(slice(0, 1),),) - return ret - - def broadcastable(s1, s2): - return s1 == s2 or s1 == 1 or s2 == 1 - - permute_3 = permute_n(3) - slice_3 = slice_n(3) + ((slice(None, None, -1),)*3,) - - ref = True - for p1 in permute_3: - for p2 in permute_3: - for s1 in slice_3: - for s2 in slice_3: - a1 = d1.transpose(p1)[s1] - a2 = d2.transpose(p2)[s2] - ref = ref and a1.base is not None - ref = ref and a2.base is not None - if (a1.shape[-1] == a2.shape[-2] and - broadcastable(a1.shape[0], a2.shape[0])): - assert_array_almost_equal( - umt.matrix_multiply(a1, a2), - np.sum(a2[..., np.newaxis].swapaxes(-3, -1) * - a1[..., np.newaxis,:], axis=-1), - err_msg=msg + ' %s %s' % (str(a1.shape), - str(a2.shape))) - - assert_equal(ref, True, err_msg="reference check") - - def test_euclidean_pdist(self): - a = np.arange(12, dtype=float).reshape(4, 3) - out = np.empty((a.shape[0] * (a.shape[0] - 1) // 2,), dtype=a.dtype) - umt.euclidean_pdist(a, out) - b = np.sqrt(np.sum((a[:, None] - a)**2, axis=-1)) - b = b[~np.tri(a.shape[0], dtype=bool)] - assert_almost_equal(out, b) - # An output array is required to determine p with signature (n,d)->(p) - assert_raises(ValueError, umt.euclidean_pdist, a) - - def test_cumsum(self): - a = np.arange(10) - result = umt.cumsum(a) - assert_array_equal(result, a.cumsum()) - - def test_object_logical(self): - a = np.array([3, None, True, False, "test", ""], dtype=object) - assert_equal(np.logical_or(a, None), - np.array([x or None for x in a], dtype=object)) - assert_equal(np.logical_or(a, True), - np.array([x or True for x in a], dtype=object)) - assert_equal(np.logical_or(a, 12), - np.array([x or 12 for x in a], dtype=object)) - assert_equal(np.logical_or(a, "blah"), - np.array([x or "blah" for x in a], dtype=object)) - - assert_equal(np.logical_and(a, None), - np.array([x and None for x in a], dtype=object)) - assert_equal(np.logical_and(a, True), - np.array([x and True for x in a], dtype=object)) - assert_equal(np.logical_and(a, 12), - np.array([x and 12 for x in a], dtype=object)) - assert_equal(np.logical_and(a, "blah"), - np.array([x and "blah" for x in a], dtype=object)) - - assert_equal(np.logical_not(a), - np.array([not x for x in a], dtype=object)) - - assert_equal(np.logical_or.reduce(a), 3) - assert_equal(np.logical_and.reduce(a), None) - - def test_object_comparison(self): - class HasComparisons: - def __eq__(self, other): - return '==' - - arr0d = np.array(HasComparisons()) - assert_equal(arr0d == arr0d, True) - 
assert_equal(np.equal(arr0d, arr0d), True) # normal behavior is a cast - - arr1d = np.array([HasComparisons()]) - assert_equal(arr1d == arr1d, np.array([True])) - assert_equal(np.equal(arr1d, arr1d), np.array([True])) # normal behavior is a cast - assert_equal(np.equal(arr1d, arr1d, dtype=object), np.array(['=='])) - - def test_object_array_reduction(self): - # Reductions on object arrays - a = np.array(['a', 'b', 'c'], dtype=object) - assert_equal(np.sum(a), 'abc') - assert_equal(np.max(a), 'c') - assert_equal(np.min(a), 'a') - a = np.array([True, False, True], dtype=object) - assert_equal(np.sum(a), 2) - assert_equal(np.prod(a), 0) - assert_equal(np.any(a), True) - assert_equal(np.all(a), False) - assert_equal(np.max(a), True) - assert_equal(np.min(a), False) - assert_equal(np.array([[1]], dtype=object).sum(), 1) - assert_equal(np.array([[[1, 2]]], dtype=object).sum((0, 1)), [1, 2]) - assert_equal(np.array([1], dtype=object).sum(initial=1), 2) - assert_equal(np.array([[1], [2, 3]], dtype=object) - .sum(initial=[0], where=[False, True]), [0, 2, 3]) - - def test_object_array_accumulate_inplace(self): - # Checks that in-place accumulates work, see also gh-7402 - arr = np.ones(4, dtype=object) - arr[:] = [[1] for i in range(4)] - # Twice reproduced also for tuples: - np.add.accumulate(arr, out=arr) - np.add.accumulate(arr, out=arr) - assert_array_equal(arr, - np.array([[1]*i for i in [1, 3, 6, 10]], dtype=object), - ) - - # And the same if the axis argument is used - arr = np.ones((2, 4), dtype=object) - arr[0, :] = [[2] for i in range(4)] - np.add.accumulate(arr, out=arr, axis=-1) - np.add.accumulate(arr, out=arr, axis=-1) - assert_array_equal(arr[0, :], - np.array([[2]*i for i in [1, 3, 6, 10]], dtype=object), - ) - - def test_object_array_accumulate_failure(self): - # Typical accumulation on object works as expected: - res = np.add.accumulate(np.array([1, 0, 2], dtype=object)) - assert_array_equal(res, np.array([1, 1, 3], dtype=object)) - # But errors are propagated from the inner-loop if they occur: - with pytest.raises(TypeError): - np.add.accumulate([1, None, 2]) - - def test_object_array_reduceat_inplace(self): - # Checks that in-place reduceats work, see also gh-7465 - arr = np.empty(4, dtype=object) - arr[:] = [[1] for i in range(4)] - out = np.empty(4, dtype=object) - out[:] = [[1] for i in range(4)] - np.add.reduceat(arr, np.arange(4), out=arr) - np.add.reduceat(arr, np.arange(4), out=arr) - assert_array_equal(arr, out) - - # And the same if the axis argument is used - arr = np.ones((2, 4), dtype=object) - arr[0, :] = [[2] for i in range(4)] - out = np.ones((2, 4), dtype=object) - out[0, :] = [[2] for i in range(4)] - np.add.reduceat(arr, np.arange(4), out=arr, axis=-1) - np.add.reduceat(arr, np.arange(4), out=arr, axis=-1) - assert_array_equal(arr, out) - - def test_object_array_reduceat_failure(self): - # Reduceat works as expected when no invalid operation occurs (None is - # not involved in an operation here) - res = np.add.reduceat(np.array([1, None, 2], dtype=object), [1, 2]) - assert_array_equal(res, np.array([None, 2], dtype=object)) - # But errors when None would be involved in an operation: - with pytest.raises(TypeError): - np.add.reduceat([1, None, 2], [0, 2]) - - def test_zerosize_reduction(self): - # Test with default dtype and object dtype - for a in [[], np.array([], dtype=object)]: - assert_equal(np.sum(a), 0) - assert_equal(np.prod(a), 1) - assert_equal(np.any(a), False) - assert_equal(np.all(a), True) - assert_raises(ValueError, np.max, a) - 
assert_raises(ValueError, np.min, a) - - def test_axis_out_of_bounds(self): - a = np.array([False, False]) - assert_raises(np.AxisError, a.all, axis=1) - a = np.array([False, False]) - assert_raises(np.AxisError, a.all, axis=-2) - - a = np.array([False, False]) - assert_raises(np.AxisError, a.any, axis=1) - a = np.array([False, False]) - assert_raises(np.AxisError, a.any, axis=-2) - - def test_scalar_reduction(self): - # The functions 'sum', 'prod', etc allow specifying axis=0 - # even for scalars - assert_equal(np.sum(3, axis=0), 3) - assert_equal(np.prod(3.5, axis=0), 3.5) - assert_equal(np.any(True, axis=0), True) - assert_equal(np.all(False, axis=0), False) - assert_equal(np.max(3, axis=0), 3) - assert_equal(np.min(2.5, axis=0), 2.5) - - # Check scalar behaviour for ufuncs without an identity - assert_equal(np.power.reduce(3), 3) - - # Make sure that scalars are coming out from this operation - assert_(type(np.prod(np.float32(2.5), axis=0)) is np.float32) - assert_(type(np.sum(np.float32(2.5), axis=0)) is np.float32) - assert_(type(np.max(np.float32(2.5), axis=0)) is np.float32) - assert_(type(np.min(np.float32(2.5), axis=0)) is np.float32) - - # check if scalars/0-d arrays get cast - assert_(type(np.any(0, axis=0)) is np.bool_) - - # assert that 0-d arrays get wrapped - class MyArray(np.ndarray): - pass - a = np.array(1).view(MyArray) - assert_(type(np.any(a)) is MyArray) - - def test_casting_out_param(self): - # Test that it's possible to do casts on output - a = np.ones((200, 100), np.int64) - b = np.ones((200, 100), np.int64) - c = np.ones((200, 100), np.float64) - np.add(a, b, out=c) - assert_equal(c, 2) - - a = np.zeros(65536) - b = np.zeros(65536, dtype=np.float32) - np.subtract(a, 0, out=b) - assert_equal(b, 0) - - def test_where_param(self): - # Test that the where= ufunc parameter works with regular arrays - a = np.arange(7) - b = np.ones(7) - c = np.zeros(7) - np.add(a, b, out=c, where=(a % 2 == 1)) - assert_equal(c, [0, 2, 0, 4, 0, 6, 0]) - - a = np.arange(4).reshape(2, 2) + 2 - np.power(a, [2, 3], out=a, where=[[0, 1], [1, 0]]) - assert_equal(a, [[2, 27], [16, 5]]) - # Broadcasting the where= parameter - np.subtract(a, 2, out=a, where=[True, False]) - assert_equal(a, [[0, 27], [14, 5]]) - - def test_where_param_buffer_output(self): - # This test is temporarily skipped because it requires - # adding masking features to the nditer to work properly - - # With casting on output - a = np.ones(10, np.int64) - b = np.ones(10, np.int64) - c = 1.5 * np.ones(10, np.float64) - np.add(a, b, out=c, where=[1, 0, 0, 1, 0, 0, 1, 1, 1, 0]) - assert_equal(c, [2, 1.5, 1.5, 2, 1.5, 1.5, 2, 2, 2, 1.5]) - - def test_where_param_alloc(self): - # With casting and allocated output - a = np.array([1], dtype=np.int64) - m = np.array([True], dtype=bool) - assert_equal(np.sqrt(a, where=m), [1]) - - # No casting and allocated output - a = np.array([1], dtype=np.float64) - m = np.array([True], dtype=bool) - assert_equal(np.sqrt(a, where=m), [1]) - - def test_where_with_broadcasting(self): - # See gh-17198 - a = np.random.random((5000, 4)) - b = np.random.random((5000, 1)) - - where = a > 0.3 - out = np.full_like(a, 0) - np.less(a, b, where=where, out=out) - b_where = np.broadcast_to(b, a.shape)[where] - assert_array_equal((a[where] < b_where), out[where].astype(bool)) - assert not out[~where].any() # outside mask, out remains all 0 - - def check_identityless_reduction(self, a): - # np.minimum.reduce is an identityless reduction - - # Verify that it sees the zero at various positions - a[...] 
= 1 - a[1, 0, 0] = 0 - assert_equal(np.minimum.reduce(a, axis=None), 0) - assert_equal(np.minimum.reduce(a, axis=(0, 1)), [0, 1, 1, 1]) - assert_equal(np.minimum.reduce(a, axis=(0, 2)), [0, 1, 1]) - assert_equal(np.minimum.reduce(a, axis=(1, 2)), [1, 0]) - assert_equal(np.minimum.reduce(a, axis=0), - [[0, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]) - assert_equal(np.minimum.reduce(a, axis=1), - [[1, 1, 1, 1], [0, 1, 1, 1]]) - assert_equal(np.minimum.reduce(a, axis=2), - [[1, 1, 1], [0, 1, 1]]) - assert_equal(np.minimum.reduce(a, axis=()), a) - - a[...] = 1 - a[0, 1, 0] = 0 - assert_equal(np.minimum.reduce(a, axis=None), 0) - assert_equal(np.minimum.reduce(a, axis=(0, 1)), [0, 1, 1, 1]) - assert_equal(np.minimum.reduce(a, axis=(0, 2)), [1, 0, 1]) - assert_equal(np.minimum.reduce(a, axis=(1, 2)), [0, 1]) - assert_equal(np.minimum.reduce(a, axis=0), - [[1, 1, 1, 1], [0, 1, 1, 1], [1, 1, 1, 1]]) - assert_equal(np.minimum.reduce(a, axis=1), - [[0, 1, 1, 1], [1, 1, 1, 1]]) - assert_equal(np.minimum.reduce(a, axis=2), - [[1, 0, 1], [1, 1, 1]]) - assert_equal(np.minimum.reduce(a, axis=()), a) - - a[...] = 1 - a[0, 0, 1] = 0 - assert_equal(np.minimum.reduce(a, axis=None), 0) - assert_equal(np.minimum.reduce(a, axis=(0, 1)), [1, 0, 1, 1]) - assert_equal(np.minimum.reduce(a, axis=(0, 2)), [0, 1, 1]) - assert_equal(np.minimum.reduce(a, axis=(1, 2)), [0, 1]) - assert_equal(np.minimum.reduce(a, axis=0), - [[1, 0, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]) - assert_equal(np.minimum.reduce(a, axis=1), - [[1, 0, 1, 1], [1, 1, 1, 1]]) - assert_equal(np.minimum.reduce(a, axis=2), - [[0, 1, 1], [1, 1, 1]]) - assert_equal(np.minimum.reduce(a, axis=()), a) - - @requires_memory(6 * 1024**3) - def test_identityless_reduction_huge_array(self): - # Regression test for gh-20921 (copying identity incorrectly failed) - arr = np.zeros((2, 2**31), 'uint8') - arr[:, 0] = [1, 3] - arr[:, -1] = [4, 1] - res = np.maximum.reduce(arr, axis=0) - del arr - assert res[0] == 3 - assert res[-1] == 4 - - def test_identityless_reduction_corder(self): - a = np.empty((2, 3, 4), order='C') - self.check_identityless_reduction(a) - - def test_identityless_reduction_forder(self): - a = np.empty((2, 3, 4), order='F') - self.check_identityless_reduction(a) - - def test_identityless_reduction_otherorder(self): - a = np.empty((2, 4, 3), order='C').swapaxes(1, 2) - self.check_identityless_reduction(a) - - def test_identityless_reduction_noncontig(self): - a = np.empty((3, 5, 4), order='C').swapaxes(1, 2) - a = a[1:, 1:, 1:] - self.check_identityless_reduction(a) - - def test_identityless_reduction_noncontig_unaligned(self): - a = np.empty((3*4*5*8 + 1,), dtype='i1') - a = a[1:].view(dtype='f8') - a.shape = (3, 4, 5) - a = a[1:, 1:, 1:] - self.check_identityless_reduction(a) - - def test_reduce_identity_depends_on_loop(self): - """ - The type of the result should always depend on the selected loop, not - necessarily the output (only relevant for object arrays). - """ - # For an object loop, the default value 0 with type int is used: - assert type(np.add.reduce([], dtype=object)) is int - out = np.array(None, dtype=object) - # When the loop is float64 but `out` is object this does not happen, - # the result is float64 cast to object (which gives Python `float`). - np.add.reduce([], out=out, dtype=np.float64) - assert type(out[()]) is float - - def test_initial_reduction(self): - # np.minimum.reduce is an identityless reduction - - # For cases like np.maximum(np.abs(...), initial=0) - # More generally, a supremum over non-negative numbers. 
- assert_equal(np.maximum.reduce([], initial=0), 0) - - # For cases like reduction of an empty array over the reals. - assert_equal(np.minimum.reduce([], initial=np.inf), np.inf) - assert_equal(np.maximum.reduce([], initial=-np.inf), -np.inf) - - # Random tests - assert_equal(np.minimum.reduce([5], initial=4), 4) - assert_equal(np.maximum.reduce([4], initial=5), 5) - assert_equal(np.maximum.reduce([5], initial=4), 5) - assert_equal(np.minimum.reduce([4], initial=5), 4) - - # Check initial=None raises ValueError for both types of ufunc reductions - assert_raises(ValueError, np.minimum.reduce, [], initial=None) - assert_raises(ValueError, np.add.reduce, [], initial=None) - # Also in the somewhat special object case: - with pytest.raises(ValueError): - np.add.reduce([], initial=None, dtype=object) - - # Check that np._NoValue gives default behavior. - assert_equal(np.add.reduce([], initial=np._NoValue), 0) - - # Check that initial kwarg behaves as intended for dtype=object - a = np.array([10], dtype=object) - res = np.add.reduce(a, initial=5) - assert_equal(res, 15) - - def test_empty_reduction_and_idenity(self): - arr = np.zeros((0, 5)) - # OK, since the reduction itself is *not* empty, the result is - assert np.true_divide.reduce(arr, axis=1).shape == (0,) - # Not OK, the reduction itself is empty and we have no idenity - with pytest.raises(ValueError): - np.true_divide.reduce(arr, axis=0) - - # Test that an empty reduction fails also if the result is empty - arr = np.zeros((0, 0, 5)) - with pytest.raises(ValueError): - np.true_divide.reduce(arr, axis=1) - - # Division reduction makes sense with `initial=1` (empty or not): - res = np.true_divide.reduce(arr, axis=1, initial=1) - assert_array_equal(res, np.ones((0, 5))) - - @pytest.mark.parametrize('axis', (0, 1, None)) - @pytest.mark.parametrize('where', (np.array([False, True, True]), - np.array([[True], [False], [True]]), - np.array([[True, False, False], - [False, True, False], - [False, True, True]]))) - def test_reduction_with_where(self, axis, where): - a = np.arange(9.).reshape(3, 3) - a_copy = a.copy() - a_check = np.zeros_like(a) - np.positive(a, out=a_check, where=where) - - res = np.add.reduce(a, axis=axis, where=where) - check = a_check.sum(axis) - assert_equal(res, check) - # Check we do not overwrite elements of a internally. 
- assert_array_equal(a, a_copy) - - @pytest.mark.parametrize(('axis', 'where'), - ((0, np.array([True, False, True])), - (1, [True, True, False]), - (None, True))) - @pytest.mark.parametrize('initial', (-np.inf, 5.)) - def test_reduction_with_where_and_initial(self, axis, where, initial): - a = np.arange(9.).reshape(3, 3) - a_copy = a.copy() - a_check = np.full(a.shape, -np.inf) - np.positive(a, out=a_check, where=where) - - res = np.maximum.reduce(a, axis=axis, where=where, initial=initial) - check = a_check.max(axis, initial=initial) - assert_equal(res, check) - - def test_reduction_where_initial_needed(self): - a = np.arange(9.).reshape(3, 3) - m = [False, True, False] - assert_raises(ValueError, np.maximum.reduce, a, where=m) - - def test_identityless_reduction_nonreorderable(self): - a = np.array([[8.0, 2.0, 2.0], [1.0, 0.5, 0.25]]) - - res = np.divide.reduce(a, axis=0) - assert_equal(res, [8.0, 4.0, 8.0]) - - res = np.divide.reduce(a, axis=1) - assert_equal(res, [2.0, 8.0]) - - res = np.divide.reduce(a, axis=()) - assert_equal(res, a) - - assert_raises(ValueError, np.divide.reduce, a, axis=(0, 1)) - - def test_reduce_zero_axis(self): - # If we have a n x m array and do a reduction with axis=1, then we are - # doing n reductions, and each reduction takes an m-element array. For - # a reduction operation without an identity, then: - # n > 0, m > 0: fine - # n = 0, m > 0: fine, doing 0 reductions of m-element arrays - # n > 0, m = 0: can't reduce a 0-element array, ValueError - # n = 0, m = 0: can't reduce a 0-element array, ValueError (for - # consistency with the above case) - # This test doesn't actually look at return values, it just checks to - # make sure that error we get an error in exactly those cases where we - # expect one, and assumes the calculations themselves are done - # correctly. - - def ok(f, *args, **kwargs): - f(*args, **kwargs) - - def err(f, *args, **kwargs): - assert_raises(ValueError, f, *args, **kwargs) - - def t(expect, func, n, m): - expect(func, np.zeros((n, m)), axis=1) - expect(func, np.zeros((m, n)), axis=0) - expect(func, np.zeros((n // 2, n // 2, m)), axis=2) - expect(func, np.zeros((n // 2, m, n // 2)), axis=1) - expect(func, np.zeros((n, m // 2, m // 2)), axis=(1, 2)) - expect(func, np.zeros((m // 2, n, m // 2)), axis=(0, 2)) - expect(func, np.zeros((m // 3, m // 3, m // 3, - n // 2, n // 2)), - axis=(0, 1, 2)) - # Check what happens if the inner (resp. outer) dimensions are a - # mix of zero and non-zero: - expect(func, np.zeros((10, m, n)), axis=(0, 1)) - expect(func, np.zeros((10, n, m)), axis=(0, 2)) - expect(func, np.zeros((m, 10, n)), axis=0) - expect(func, np.zeros((10, m, n)), axis=1) - expect(func, np.zeros((10, n, m)), axis=2) - - # np.maximum is just an arbitrary ufunc with no reduction identity - assert_equal(np.maximum.identity, None) - t(ok, np.maximum.reduce, 30, 30) - t(ok, np.maximum.reduce, 0, 30) - t(err, np.maximum.reduce, 30, 0) - t(err, np.maximum.reduce, 0, 0) - err(np.maximum.reduce, []) - np.maximum.reduce(np.zeros((0, 0)), axis=()) - - # all of the combinations are fine for a reduction that has an - # identity - t(ok, np.add.reduce, 30, 30) - t(ok, np.add.reduce, 0, 30) - t(ok, np.add.reduce, 30, 0) - t(ok, np.add.reduce, 0, 0) - np.add.reduce([]) - np.add.reduce(np.zeros((0, 0)), axis=()) - - # OTOH, accumulate always makes sense for any combination of n and m, - # because it maps an m-element array to an m-element array. These - # tests are simpler because accumulate doesn't accept multiple axes. 
- for uf in (np.maximum, np.add): - uf.accumulate(np.zeros((30, 0)), axis=0) - uf.accumulate(np.zeros((0, 30)), axis=0) - uf.accumulate(np.zeros((30, 30)), axis=0) - uf.accumulate(np.zeros((0, 0)), axis=0) - - def test_safe_casting(self): - # In old versions of numpy, in-place operations used the 'unsafe' - # casting rules. In versions >= 1.10, 'same_kind' is the - # default and an exception is raised instead of a warning. - # when 'same_kind' is not satisfied. - a = np.array([1, 2, 3], dtype=int) - # Non-in-place addition is fine - assert_array_equal(assert_no_warnings(np.add, a, 1.1), - [2.1, 3.1, 4.1]) - assert_raises(TypeError, np.add, a, 1.1, out=a) - - def add_inplace(a, b): - a += b - - assert_raises(TypeError, add_inplace, a, 1.1) - # Make sure that explicitly overriding the exception is allowed: - assert_no_warnings(np.add, a, 1.1, out=a, casting="unsafe") - assert_array_equal(a, [2, 3, 4]) - - def test_ufunc_custom_out(self): - # Test ufunc with built in input types and custom output type - - a = np.array([0, 1, 2], dtype='i8') - b = np.array([0, 1, 2], dtype='i8') - c = np.empty(3, dtype=_rational_tests.rational) - - # Output must be specified so numpy knows what - # ufunc signature to look for - result = _rational_tests.test_add(a, b, c) - target = np.array([0, 2, 4], dtype=_rational_tests.rational) - assert_equal(result, target) - - # The new resolution means that we can (usually) find custom loops - # as long as they match exactly: - result = _rational_tests.test_add(a, b) - assert_equal(result, target) - - # This works even more generally, so long the default common-dtype - # promoter works out: - result = _rational_tests.test_add(a, b.astype(np.uint16), out=c) - assert_equal(result, target) - - # But, it can be fooled, e.g. (use scalars, which forces legacy - # type resolution to kick in, which then fails): - with assert_raises(TypeError): - _rational_tests.test_add(a, np.uint16(2)) - - def test_operand_flags(self): - a = np.arange(16, dtype='l').reshape(4, 4) - b = np.arange(9, dtype='l').reshape(3, 3) - opflag_tests.inplace_add(a[:-1, :-1], b) - assert_equal(a, np.array([[0, 2, 4, 3], [7, 9, 11, 7], - [14, 16, 18, 11], [12, 13, 14, 15]], dtype='l')) - - a = np.array(0) - opflag_tests.inplace_add(a, 3) - assert_equal(a, 3) - opflag_tests.inplace_add(a, [3, 4]) - assert_equal(a, 10) - - def test_struct_ufunc(self): - import numpy.core._struct_ufunc_tests as struct_ufunc - - a = np.array([(1, 2, 3)], dtype='u8,u8,u8') - b = np.array([(1, 2, 3)], dtype='u8,u8,u8') - - result = struct_ufunc.add_triplet(a, b) - assert_equal(result, np.array([(2, 4, 6)], dtype='u8,u8,u8')) - assert_raises(RuntimeError, struct_ufunc.register_fail) - - def test_custom_ufunc(self): - a = np.array( - [_rational_tests.rational(1, 2), - _rational_tests.rational(1, 3), - _rational_tests.rational(1, 4)], - dtype=_rational_tests.rational) - b = np.array( - [_rational_tests.rational(1, 2), - _rational_tests.rational(1, 3), - _rational_tests.rational(1, 4)], - dtype=_rational_tests.rational) - - result = _rational_tests.test_add_rationals(a, b) - expected = np.array( - [_rational_tests.rational(1), - _rational_tests.rational(2, 3), - _rational_tests.rational(1, 2)], - dtype=_rational_tests.rational) - assert_equal(result, expected) - - def test_custom_ufunc_forced_sig(self): - # gh-9351 - looking for a non-first userloop would previously hang - with assert_raises(TypeError): - np.multiply(_rational_tests.rational(1), 1, - signature=(_rational_tests.rational, int, None)) - - def 
test_custom_array_like(self): - - class MyThing: - __array_priority__ = 1000 - - rmul_count = 0 - getitem_count = 0 - - def __init__(self, shape): - self.shape = shape - - def __len__(self): - return self.shape[0] - - def __getitem__(self, i): - MyThing.getitem_count += 1 - if not isinstance(i, tuple): - i = (i,) - if len(i) > self.ndim: - raise IndexError("boo") - - return MyThing(self.shape[len(i):]) - - def __rmul__(self, other): - MyThing.rmul_count += 1 - return self - - np.float64(5)*MyThing((3, 3)) - assert_(MyThing.rmul_count == 1, MyThing.rmul_count) - assert_(MyThing.getitem_count <= 2, MyThing.getitem_count) - - @pytest.mark.parametrize("a", ( - np.arange(10, dtype=int), - np.arange(10, dtype=_rational_tests.rational), - )) - def test_ufunc_at_basic(self, a): - - aa = a.copy() - np.add.at(aa, [2, 5, 2], 1) - assert_equal(aa, [0, 1, 4, 3, 4, 6, 6, 7, 8, 9]) - - with pytest.raises(ValueError): - # missing second operand - np.add.at(aa, [2, 5, 3]) - - aa = a.copy() - np.negative.at(aa, [2, 5, 3]) - assert_equal(aa, [0, 1, -2, -3, 4, -5, 6, 7, 8, 9]) - - aa = a.copy() - b = np.array([100, 100, 100]) - np.add.at(aa, [2, 5, 2], b) - assert_equal(aa, [0, 1, 202, 3, 4, 105, 6, 7, 8, 9]) - - with pytest.raises(ValueError): - # extraneous second operand - np.negative.at(a, [2, 5, 3], [1, 2, 3]) - - with pytest.raises(ValueError): - # second operand cannot be converted to an array - np.add.at(a, [2, 5, 3], [[1, 2], 1]) - - # ufuncs with indexed loops for performance in ufunc.at - indexed_ufuncs = [np.add, np.subtract, np.multiply, np.floor_divide, - np.maximum, np.minimum, np.fmax, np.fmin] - - @pytest.mark.parametrize( - "typecode", np.typecodes['AllInteger'] + np.typecodes['Float']) - @pytest.mark.parametrize("ufunc", indexed_ufuncs) - def test_ufunc_at_inner_loops(self, typecode, ufunc): - if ufunc is np.divide and typecode in np.typecodes['AllInteger']: - # Avoid divide-by-zero and inf for integer divide - a = np.ones(100, dtype=typecode) - indx = np.random.randint(100, size=30, dtype=np.intp) - vals = np.arange(1, 31, dtype=typecode) - else: - a = np.ones(1000, dtype=typecode) - indx = np.random.randint(1000, size=3000, dtype=np.intp) - vals = np.arange(3000, dtype=typecode) - atag = a.copy() - # Do the calculation twice and compare the answers - with warnings.catch_warnings(record=True) as w_at: - warnings.simplefilter('always') - ufunc.at(a, indx, vals) - with warnings.catch_warnings(record=True) as w_loop: - warnings.simplefilter('always') - for i, v in zip(indx, vals): - # Make sure all the work happens inside the ufunc - # in order to duplicate error/warning handling - ufunc(atag[i], v, out=atag[i:i+1], casting="unsafe") - assert_equal(atag, a) - # If w_loop warned, make sure w_at warned as well - if len(w_loop) > 0: - # - assert len(w_at) > 0 - assert w_at[0].category == w_loop[0].category - assert str(w_at[0].message)[:10] == str(w_loop[0].message)[:10] - - @pytest.mark.parametrize("typecode", np.typecodes['Complex']) - @pytest.mark.parametrize("ufunc", [np.add, np.subtract, np.multiply]) - def test_ufunc_at_inner_loops_complex(self, typecode, ufunc): - a = np.ones(10, dtype=typecode) - indx = np.concatenate([np.ones(6, dtype=np.intp), - np.full(18, 4, dtype=np.intp)]) - value = a.dtype.type(1j) - ufunc.at(a, indx, value) - expected = np.ones_like(a) - if ufunc is np.multiply: - expected[1] = expected[4] = -1 - else: - expected[1] += 6 * (value if ufunc is np.add else -value) - expected[4] += 18 * (value if ufunc is np.add else -value) - - assert_array_equal(a, expected) - - 
def test_ufunc_at_ellipsis(self): - # Make sure the indexed loop check does not choke on iters - # with subspaces - arr = np.zeros(5) - np.add.at(arr, slice(None), np.ones(5)) - assert_array_equal(arr, np.ones(5)) - - def test_ufunc_at_negative(self): - arr = np.ones(5, dtype=np.int32) - indx = np.arange(5) - umt.indexed_negative.at(arr, indx) - # If it is [-1, -1, -1, -100, 0] then the regular strided loop was used - assert np.all(arr == [-1, -1, -1, -200, -1]) - - def test_ufunc_at_large(self): - # issue gh-23457 - indices = np.zeros(8195, dtype=np.int16) - b = np.zeros(8195, dtype=float) - b[0] = 10 - b[1] = 5 - b[8192:] = 100 - a = np.zeros(1, dtype=float) - np.add.at(a, indices, b) - assert a[0] == b.sum() - - def test_cast_index_fastpath(self): - arr = np.zeros(10) - values = np.ones(100000) - # index must be cast, which may be buffered in chunks: - index = np.zeros(len(values), dtype=np.uint8) - np.add.at(arr, index, values) - assert arr[0] == len(values) - - @pytest.mark.parametrize("value", [ - np.ones(1), np.ones(()), np.float64(1.), 1.]) - def test_ufunc_at_scalar_value_fastpath(self, value): - arr = np.zeros(1000) - # index must be cast, which may be buffered in chunks: - index = np.repeat(np.arange(1000), 2) - np.add.at(arr, index, value) - assert_array_equal(arr, np.full_like(arr, 2 * value)) - - def test_ufunc_at_multiD(self): - a = np.arange(9).reshape(3, 3) - b = np.array([[100, 100, 100], [200, 200, 200], [300, 300, 300]]) - np.add.at(a, (slice(None), [1, 2, 1]), b) - assert_equal(a, [[0, 201, 102], [3, 404, 205], [6, 607, 308]]) - - a = np.arange(27).reshape(3, 3, 3) - b = np.array([100, 200, 300]) - np.add.at(a, (slice(None), slice(None), [1, 2, 1]), b) - assert_equal(a, - [[[0, 401, 202], - [3, 404, 205], - [6, 407, 208]], - - [[9, 410, 211], - [12, 413, 214], - [15, 416, 217]], - - [[18, 419, 220], - [21, 422, 223], - [24, 425, 226]]]) - - a = np.arange(9).reshape(3, 3) - b = np.array([[100, 100, 100], [200, 200, 200], [300, 300, 300]]) - np.add.at(a, ([1, 2, 1], slice(None)), b) - assert_equal(a, [[0, 1, 2], [403, 404, 405], [206, 207, 208]]) - - a = np.arange(27).reshape(3, 3, 3) - b = np.array([100, 200, 300]) - np.add.at(a, (slice(None), [1, 2, 1], slice(None)), b) - assert_equal(a, - [[[0, 1, 2], - [203, 404, 605], - [106, 207, 308]], - - [[9, 10, 11], - [212, 413, 614], - [115, 216, 317]], - - [[18, 19, 20], - [221, 422, 623], - [124, 225, 326]]]) - - a = np.arange(9).reshape(3, 3) - b = np.array([100, 200, 300]) - np.add.at(a, (0, [1, 2, 1]), b) - assert_equal(a, [[0, 401, 202], [3, 4, 5], [6, 7, 8]]) - - a = np.arange(27).reshape(3, 3, 3) - b = np.array([100, 200, 300]) - np.add.at(a, ([1, 2, 1], 0, slice(None)), b) - assert_equal(a, - [[[0, 1, 2], - [3, 4, 5], - [6, 7, 8]], - - [[209, 410, 611], - [12, 13, 14], - [15, 16, 17]], - - [[118, 219, 320], - [21, 22, 23], - [24, 25, 26]]]) - - a = np.arange(27).reshape(3, 3, 3) - b = np.array([100, 200, 300]) - np.add.at(a, (slice(None), slice(None), slice(None)), b) - assert_equal(a, - [[[100, 201, 302], - [103, 204, 305], - [106, 207, 308]], - - [[109, 210, 311], - [112, 213, 314], - [115, 216, 317]], - - [[118, 219, 320], - [121, 222, 323], - [124, 225, 326]]]) - - def test_ufunc_at_0D(self): - a = np.array(0) - np.add.at(a, (), 1) - assert_equal(a, 1) - - assert_raises(IndexError, np.add.at, a, 0, 1) - assert_raises(IndexError, np.add.at, a, [], 1) - - def test_ufunc_at_dtypes(self): - # Test mixed dtypes - a = np.arange(10) - np.power.at(a, [1, 2, 3, 2], 3.5) - assert_equal(a, np.array([0, 1, 4414, 46, 4, 5, 6, 
7, 8, 9])) - - def test_ufunc_at_boolean(self): - # Test boolean indexing and boolean ufuncs - a = np.arange(10) - index = a % 2 == 0 - np.equal.at(a, index, [0, 2, 4, 6, 8]) - assert_equal(a, [1, 1, 1, 3, 1, 5, 1, 7, 1, 9]) - - # Test unary operator - a = np.arange(10, dtype='u4') - np.invert.at(a, [2, 5, 2]) - assert_equal(a, [0, 1, 2, 3, 4, 5 ^ 0xffffffff, 6, 7, 8, 9]) - - def test_ufunc_at_advanced(self): - # Test empty subspace - orig = np.arange(4) - a = orig[:, None][:, 0:0] - np.add.at(a, [0, 1], 3) - assert_array_equal(orig, np.arange(4)) - - # Test with swapped byte order - index = np.array([1, 2, 1], np.dtype('i').newbyteorder()) - values = np.array([1, 2, 3, 4], np.dtype('f').newbyteorder()) - np.add.at(values, index, 3) - assert_array_equal(values, [1, 8, 6, 4]) - - # Test exception thrown - values = np.array(['a', 1], dtype=object) - assert_raises(TypeError, np.add.at, values, [0, 1], 1) - assert_array_equal(values, np.array(['a', 1], dtype=object)) - - # Test multiple output ufuncs raise error, gh-5665 - assert_raises(ValueError, np.modf.at, np.arange(10), [1]) - - # Test maximum - a = np.array([1, 2, 3]) - np.maximum.at(a, [0], 0) - assert_equal(a, np.array([1, 2, 3])) - - @pytest.mark.parametrize("dtype", - np.typecodes['AllInteger'] + np.typecodes['Float']) - @pytest.mark.parametrize("ufunc", - [np.add, np.subtract, np.divide, np.minimum, np.maximum]) - def test_at_negative_indexes(self, dtype, ufunc): - a = np.arange(0, 10).astype(dtype) - indxs = np.array([-1, 1, -1, 2]).astype(np.intp) - vals = np.array([1, 5, 2, 10], dtype=a.dtype) - - expected = a.copy() - for i, v in zip(indxs, vals): - expected[i] = ufunc(expected[i], v) - - ufunc.at(a, indxs, vals) - assert_array_equal(a, expected) - assert np.all(indxs == [-1, 1, -1, 2]) - - def test_at_not_none_signature(self): - # Test ufuncs with non-trivial signature raise a TypeError - a = np.ones((2, 2, 2)) - b = np.ones((1, 2, 2)) - assert_raises(TypeError, np.matmul.at, a, [0], b) - - a = np.array([[[1, 2], [3, 4]]]) - assert_raises(TypeError, np.linalg._umath_linalg.det.at, a, [0]) - - def test_at_no_loop_for_op(self): - # str dtype does not have a ufunc loop for np.add - arr = np.ones(10, dtype=str) - with pytest.raises(np.core._exceptions._UFuncNoLoopError): - np.add.at(arr, [0, 1], [0, 1]) - - def test_at_output_casting(self): - arr = np.array([-1]) - np.equal.at(arr, [0], [0]) - assert arr[0] == 0 - - def test_at_broadcast_failure(self): - arr = np.arange(5) - with pytest.raises(ValueError): - np.add.at(arr, [0, 1], [1, 2, 3]) - - - def test_reduce_arguments(self): - f = np.add.reduce - d = np.ones((5,2), dtype=int) - o = np.ones((2,), dtype=d.dtype) - r = o * 5 - assert_equal(f(d), r) - # a, axis=0, dtype=None, out=None, keepdims=False - assert_equal(f(d, axis=0), r) - assert_equal(f(d, 0), r) - assert_equal(f(d, 0, dtype=None), r) - assert_equal(f(d, 0, dtype='i'), r) - assert_equal(f(d, 0, 'i'), r) - assert_equal(f(d, 0, None), r) - assert_equal(f(d, 0, None, out=None), r) - assert_equal(f(d, 0, None, out=o), r) - assert_equal(f(d, 0, None, o), r) - assert_equal(f(d, 0, None, None), r) - assert_equal(f(d, 0, None, None, keepdims=False), r) - assert_equal(f(d, 0, None, None, True), r.reshape((1,) + r.shape)) - assert_equal(f(d, 0, None, None, False, 0), r) - assert_equal(f(d, 0, None, None, False, initial=0), r) - assert_equal(f(d, 0, None, None, False, 0, True), r) - assert_equal(f(d, 0, None, None, False, 0, where=True), r) - # multiple keywords - assert_equal(f(d, axis=0, dtype=None, out=None, keepdims=False), 
r) - assert_equal(f(d, 0, dtype=None, out=None, keepdims=False), r) - assert_equal(f(d, 0, None, out=None, keepdims=False), r) - assert_equal(f(d, 0, None, out=None, keepdims=False, initial=0, - where=True), r) - - # too little - assert_raises(TypeError, f) - # too much - assert_raises(TypeError, f, d, 0, None, None, False, 0, True, 1) - # invalid axis - assert_raises(TypeError, f, d, "invalid") - assert_raises(TypeError, f, d, axis="invalid") - assert_raises(TypeError, f, d, axis="invalid", dtype=None, - keepdims=True) - # invalid dtype - assert_raises(TypeError, f, d, 0, "invalid") - assert_raises(TypeError, f, d, dtype="invalid") - assert_raises(TypeError, f, d, dtype="invalid", out=None) - # invalid out - assert_raises(TypeError, f, d, 0, None, "invalid") - assert_raises(TypeError, f, d, out="invalid") - assert_raises(TypeError, f, d, out="invalid", dtype=None) - # keepdims boolean, no invalid value - # assert_raises(TypeError, f, d, 0, None, None, "invalid") - # assert_raises(TypeError, f, d, keepdims="invalid", axis=0, dtype=None) - # invalid mix - assert_raises(TypeError, f, d, 0, keepdims="invalid", dtype="invalid", - out=None) - - # invalid keyword - assert_raises(TypeError, f, d, axis=0, dtype=None, invalid=0) - assert_raises(TypeError, f, d, invalid=0) - assert_raises(TypeError, f, d, 0, keepdims=True, invalid="invalid", - out=None) - assert_raises(TypeError, f, d, axis=0, dtype=None, keepdims=True, - out=None, invalid=0) - assert_raises(TypeError, f, d, axis=0, dtype=None, - out=None, invalid=0) - - def test_structured_equal(self): - # https://github.com/numpy/numpy/issues/4855 - - class MyA(np.ndarray): - def __array_ufunc__(self, ufunc, method, *inputs, **kwargs): - return getattr(ufunc, method)(*(input.view(np.ndarray) - for input in inputs), **kwargs) - a = np.arange(12.).reshape(4,3) - ra = a.view(dtype=('f8,f8,f8')).squeeze() - mra = ra.view(MyA) - - target = np.array([ True, False, False, False], dtype=bool) - assert_equal(np.all(target == (mra == ra[0])), True) - - def test_scalar_equal(self): - # Scalar comparisons should always work, without deprecation warnings. - # even when the ufunc fails. - a = np.array(0.) - b = np.array('a') - assert_(a != b) - assert_(b != a) - assert_(not (a == b)) - assert_(not (b == a)) - - def test_NotImplemented_not_returned(self): - # See gh-5964 and gh-2091. Some of these functions are not operator - # related and were fixed for other reasons in the past. 
- binary_funcs = [ - np.power, np.add, np.subtract, np.multiply, np.divide, - np.true_divide, np.floor_divide, np.bitwise_and, np.bitwise_or, - np.bitwise_xor, np.left_shift, np.right_shift, np.fmax, - np.fmin, np.fmod, np.hypot, np.logaddexp, np.logaddexp2, - np.maximum, np.minimum, np.mod, - np.greater, np.greater_equal, np.less, np.less_equal, - np.equal, np.not_equal] - - a = np.array('1') - b = 1 - c = np.array([1., 2.]) - for f in binary_funcs: - assert_raises(TypeError, f, a, b) - assert_raises(TypeError, f, c, a) - - @pytest.mark.parametrize("ufunc", - [np.logical_and, np.logical_or]) # logical_xor object loop is bad - @pytest.mark.parametrize("signature", - [(None, None, object), (object, None, None), - (None, object, None)]) - def test_logical_ufuncs_object_signatures(self, ufunc, signature): - a = np.array([True, None, False], dtype=object) - res = ufunc(a, a, signature=signature) - assert res.dtype == object - - @pytest.mark.parametrize("ufunc", - [np.logical_and, np.logical_or, np.logical_xor]) - @pytest.mark.parametrize("signature", - [(bool, None, object), (object, None, bool), - (None, object, bool)]) - def test_logical_ufuncs_mixed_object_signatures(self, ufunc, signature): - # Most mixed signatures fail (except those with bool out, e.g. `OO->?`) - a = np.array([True, None, False]) - with pytest.raises(TypeError): - ufunc(a, a, signature=signature) - - @pytest.mark.parametrize("ufunc", - [np.logical_and, np.logical_or, np.logical_xor]) - def test_logical_ufuncs_support_anything(self, ufunc): - # The logical ufuncs support even input that can't be promoted: - a = np.array(b'1', dtype="V3") - c = np.array([1., 2.]) - assert_array_equal(ufunc(a, c), ufunc([True, True], True)) - assert ufunc.reduce(a) == True - # check that the output has no effect: - out = np.zeros(2, dtype=np.int32) - expected = ufunc([True, True], True).astype(out.dtype) - assert_array_equal(ufunc(a, c, out=out), expected) - out = np.zeros((), dtype=np.int32) - assert ufunc.reduce(a, out=out) == True - # Last check, test reduction when out and a match (the complexity here - # is that the "i,i->?" may seem right, but should not match. - a = np.array([3], dtype="i") - out = np.zeros((), dtype=a.dtype) - assert ufunc.reduce(a, out=out) == 1 - - @pytest.mark.parametrize("ufunc", - [np.logical_and, np.logical_or, np.logical_xor]) - def test_logical_ufuncs_reject_string(self, ufunc): - """ - Logical ufuncs are normally well defined by working with the boolean - equivalent, i.e. casting all inputs to bools should work. - - However, casting strings to bools is *currently* weird, because it - actually uses `bool(int(str))`. Thus we explicitly reject strings. - This test should succeed (and can probably just be removed) as soon as - string to bool casts are well defined in NumPy. 
- """ - with pytest.raises(TypeError, match="contain a loop with signature"): - ufunc(["1"], ["3"]) - with pytest.raises(TypeError, match="contain a loop with signature"): - ufunc.reduce(["1", "2", "0"]) - - @pytest.mark.parametrize("ufunc", - [np.logical_and, np.logical_or, np.logical_xor]) - def test_logical_ufuncs_out_cast_check(self, ufunc): - a = np.array('1') - c = np.array([1., 2.]) - out = a.copy() - with pytest.raises(TypeError): - # It would be safe, but not equiv casting: - ufunc(a, c, out=out, casting="equiv") - - def test_reducelike_byteorder_resolution(self): - # See gh-20699, byte-order changes need some extra care in the type - # resolution to make the following succeed: - arr_be = np.arange(10, dtype=">i8") - arr_le = np.arange(10, dtype="i - if 'O' in typ or '?' in typ: - continue - inp, out = typ.split('->') - args = [np.ones((3, 3), t) for t in inp] - with warnings.catch_warnings(record=True): - warnings.filterwarnings("always") - res = ufunc(*args) - if isinstance(res, tuple): - outs = tuple(out) - assert len(res) == len(outs) - for r, t in zip(res, outs): - assert r.dtype == np.dtype(t) - else: - assert res.dtype == np.dtype(out) - -@pytest.mark.parametrize('ufunc', [getattr(np, x) for x in dir(np) - if isinstance(getattr(np, x), np.ufunc)]) -@np._no_nep50_warning() -def test_ufunc_noncontiguous(ufunc): - ''' - Check that contiguous and non-contiguous calls to ufuncs - have the same results for values in range(9) - ''' - for typ in ufunc.types: - # types is a list of strings like ii->i - if any(set('O?mM') & set(typ)): - # bool, object, datetime are too irregular for this simple test - continue - inp, out = typ.split('->') - args_c = [np.empty(6, t) for t in inp] - args_n = [np.empty(18, t)[::3] for t in inp] - for a in args_c: - a.flat = range(1,7) - for a in args_n: - a.flat = range(1,7) - with warnings.catch_warnings(record=True): - warnings.filterwarnings("always") - res_c = ufunc(*args_c) - res_n = ufunc(*args_n) - if len(out) == 1: - res_c = (res_c,) - res_n = (res_n,) - for c_ar, n_ar in zip(res_c, res_n): - dt = c_ar.dtype - if np.issubdtype(dt, np.floating): - # for floating point results allow a small fuss in comparisons - # since different algorithms (libm vs. intrinsics) can be used - # for different input strides - res_eps = np.finfo(dt).eps - tol = 2*res_eps - assert_allclose(res_c, res_n, atol=tol, rtol=tol) - else: - assert_equal(c_ar, n_ar) - - -@pytest.mark.parametrize('ufunc', [np.sign, np.equal]) -def test_ufunc_warn_with_nan(ufunc): - # issue gh-15127 - # test that calling certain ufuncs with a non-standard `nan` value does not - # emit a warning - # `b` holds a 64 bit signaling nan: the most significant bit of the - # significand is zero. - b = np.array([0x7ff0000000000001], 'i8').view('f8') - assert np.isnan(b) - if ufunc.nin == 1: - ufunc(b) - elif ufunc.nin == 2: - ufunc(b, b.copy()) - else: - raise ValueError('ufunc with more than 2 inputs') - - -@pytest.mark.skipif(not HAS_REFCOUNT, reason="Python lacks refcounts") -def test_ufunc_out_casterrors(): - # Tests that casting errors are correctly reported and buffers are - # cleared. 
- # The following array can be added to itself as an object array, but - # the result cannot be cast to an integer output: - value = 123 # relies on python cache (leak-check will still find it) - arr = np.array([value] * int(np.BUFSIZE * 1.5) + - ["string"] + - [value] * int(1.5 * np.BUFSIZE), dtype=object) - out = np.ones(len(arr), dtype=np.intp) - - count = sys.getrefcount(value) - with pytest.raises(ValueError): - # Output casting failure: - np.add(arr, arr, out=out, casting="unsafe") - - assert count == sys.getrefcount(value) - # output is unchanged after the error, this shows that the iteration - # was aborted (this is not necessarily defined behaviour) - assert out[-1] == 1 - - with pytest.raises(ValueError): - # Input casting failure: - np.add(arr, arr, out=out, dtype=np.intp, casting="unsafe") - - assert count == sys.getrefcount(value) - # output is unchanged after the error, this shows that the iteration - # was aborted (this is not necessarily defined behaviour) - assert out[-1] == 1 - - -@pytest.mark.parametrize("bad_offset", [0, int(np.BUFSIZE * 1.5)]) -def test_ufunc_input_casterrors(bad_offset): - value = 123 - arr = np.array([value] * bad_offset + - ["string"] + - [value] * int(1.5 * np.BUFSIZE), dtype=object) - with pytest.raises(ValueError): - # Force cast inputs, but the buffered cast of `arr` to intp fails: - np.add(arr, arr, dtype=np.intp, casting="unsafe") - - -@pytest.mark.skipif(IS_WASM, reason="fp errors don't work in wasm") -@pytest.mark.parametrize("bad_offset", [0, int(np.BUFSIZE * 1.5)]) -def test_ufunc_input_floatingpoint_error(bad_offset): - value = 123 - arr = np.array([value] * bad_offset + - [np.nan] + - [value] * int(1.5 * np.BUFSIZE)) - with np.errstate(invalid="raise"), pytest.raises(FloatingPointError): - # Force cast inputs, but the buffered cast of `arr` to intp fails: - np.add(arr, arr, dtype=np.intp, casting="unsafe") - - -def test_trivial_loop_invalid_cast(): - # This tests the fast-path "invalid cast", see gh-19904. - with pytest.raises(TypeError, - match="cast ufunc 'add' input 0"): - # the void dtype definitely cannot cast to double: - np.add(np.array(1, "i,i"), 3, signature="dd->d") - - -@pytest.mark.skipif(not HAS_REFCOUNT, reason="Python lacks refcounts") -@pytest.mark.parametrize("offset", - [0, np.BUFSIZE//2, int(1.5*np.BUFSIZE)]) -def test_reduce_casterrors(offset): - # Test reporting of casting errors in reductions, we test various - # offsets to where the casting error will occur, since these may occur - # at different places during the reduction procedure. For example - # the first item may be special. - value = 123 # relies on python cache (leak-check will still find it) - arr = np.array([value] * offset + - ["string"] + - [value] * int(1.5 * np.BUFSIZE), dtype=object) - out = np.array(-1, dtype=np.intp) - - count = sys.getrefcount(value) - with pytest.raises(ValueError, match="invalid literal"): - # This is an unsafe cast, but we currently always allow that. - # Note that the double loop is picked, but the cast fails. - # `initial=None` disables the use of an identity here to test failures - # while copying the first values path (not used when identity exists). - np.add.reduce(arr, dtype=np.intp, out=out, initial=None) - assert count == sys.getrefcount(value) - # If an error occurred during casting, the operation is done at most until - # the error occurs (the result of which would be `value * offset`) and -1 - # if the error happened immediately. 
- # This does not define behaviour, the output is invalid and thus undefined - assert out[()] < value * offset - - -def test_object_reduce_cleanup_on_failure(): - # Test cleanup, including of the initial value (manually provided or not) - with pytest.raises(TypeError): - np.add.reduce([1, 2, None], initial=4) - - with pytest.raises(TypeError): - np.add.reduce([1, 2, None]) - - -@pytest.mark.skipif(IS_WASM, reason="fp errors don't work in wasm") -@pytest.mark.parametrize("method", - [np.add.accumulate, np.add.reduce, - pytest.param(lambda x: np.add.reduceat(x, [0]), id="reduceat"), - pytest.param(lambda x: np.log.at(x, [2]), id="at")]) -def test_ufunc_methods_floaterrors(method): - # adding inf and -inf (or log(-inf) creates an invalid float and warns - arr = np.array([np.inf, 0, -np.inf]) - with np.errstate(all="warn"): - with pytest.warns(RuntimeWarning, match="invalid value"): - method(arr) - - arr = np.array([np.inf, 0, -np.inf]) - with np.errstate(all="raise"): - with pytest.raises(FloatingPointError): - method(arr) - - -def _check_neg_zero(value): - if value != 0.0: - return False - if not np.signbit(value.real): - return False - if value.dtype.kind == "c": - return np.signbit(value.imag) - return True - -@pytest.mark.parametrize("dtype", np.typecodes["AllFloat"]) -def test_addition_negative_zero(dtype): - dtype = np.dtype(dtype) - if dtype.kind == "c": - neg_zero = dtype.type(complex(-0.0, -0.0)) - else: - neg_zero = dtype.type(-0.0) - - arr = np.array(neg_zero) - arr2 = np.array(neg_zero) - - assert _check_neg_zero(arr + arr2) - # In-place ops may end up on a different path (reduce path) see gh-21211 - arr += arr2 - assert _check_neg_zero(arr) - - -@pytest.mark.parametrize("dtype", np.typecodes["AllFloat"]) -@pytest.mark.parametrize("use_initial", [True, False]) -def test_addition_reduce_negative_zero(dtype, use_initial): - dtype = np.dtype(dtype) - if dtype.kind == "c": - neg_zero = dtype.type(complex(-0.0, -0.0)) - else: - neg_zero = dtype.type(-0.0) - - kwargs = {} - if use_initial: - kwargs["initial"] = neg_zero - else: - pytest.xfail("-0. propagation in sum currently requires initial") - - # Test various length, in case SIMD paths or chunking play a role. - # 150 extends beyond the pairwise blocksize; probably not important. 
- for i in range(0, 150): - arr = np.array([neg_zero] * i, dtype=dtype) - res = np.sum(arr, **kwargs) - if i > 0 or use_initial: - assert _check_neg_zero(res) - else: - # `sum([])` should probably be 0.0 and not -0.0 like `sum([-0.0])` - assert not np.signbit(res.real) - assert not np.signbit(res.imag) - -class TestLowlevelAPIAccess: - def test_resolve_dtypes_basic(self): - # Basic test for dtype resolution: - i4 = np.dtype("i4") - f4 = np.dtype("f4") - f8 = np.dtype("f8") - - r = np.add.resolve_dtypes((i4, f4, None)) - assert r == (f8, f8, f8) - - # Signature uses the same logic to parse as ufunc (less strict) - # the following is "same-kind" casting so works: - r = np.add.resolve_dtypes(( - i4, i4, None), signature=(None, None, "f4")) - assert r == (f4, f4, f4) - - # Check NEP 50 "weak" promotion also: - r = np.add.resolve_dtypes((f4, int, None)) - assert r == (f4, f4, f4) - - with pytest.raises(TypeError): - np.add.resolve_dtypes((i4, f4, None), casting="no") - - def test_weird_dtypes(self): - S0 = np.dtype("S0") - # S0 is often converted by NumPy to S1, but not here: - r = np.equal.resolve_dtypes((S0, S0, None)) - assert r == (S0, S0, np.dtype(bool)) - - # Subarray dtypes are weird and may not work fully, we preserve them - # leading to a TypeError (currently no equal loop for void/structured) - dts = np.dtype("10i") - with pytest.raises(TypeError): - np.equal.resolve_dtypes((dts, dts, None)) - - def test_resolve_dtypes_reduction(self): - i4 = np.dtype("i4") - with pytest.raises(NotImplementedError): - np.add.resolve_dtypes((i4, i4, i4), reduction=True) - - @pytest.mark.parametrize("dtypes", [ - (np.dtype("i"), np.dtype("i")), - (None, np.dtype("i"), np.dtype("f")), - (np.dtype("i"), None, np.dtype("f")), - ("i4", "i4", None)]) - def test_resolve_dtypes_errors(self, dtypes): - with pytest.raises(TypeError): - np.add.resolve_dtypes(dtypes) - - def test_resolve_dtypes_reduction(self): - i2 = np.dtype("i2") - long_ = np.dtype("long") - # Check special addition resolution: - res = np.add.resolve_dtypes((None, i2, None), reduction=True) - assert res == (long_, long_, long_) - - def test_resolve_dtypes_reduction_errors(self): - i2 = np.dtype("i2") - - with pytest.raises(TypeError): - np.add.resolve_dtypes((None, i2, i2)) - - with pytest.raises(TypeError): - np.add.signature((None, None, "i4")) - - @pytest.mark.skipif(not hasattr(ct, "pythonapi"), - reason="`ctypes.pythonapi` required for capsule unpacking.") - def test_loop_access(self): - # This is a basic test for the full strided loop access - data_t = ct.ARRAY(ct.c_char_p, 2) - dim_t = ct.ARRAY(ct.c_ssize_t, 1) - strides_t = ct.ARRAY(ct.c_ssize_t, 2) - strided_loop_t = ct.CFUNCTYPE( - ct.c_int, ct.c_void_p, data_t, dim_t, strides_t, ct.c_void_p) - - class call_info_t(ct.Structure): - _fields_ = [ - ("strided_loop", strided_loop_t), - ("context", ct.c_void_p), - ("auxdata", ct.c_void_p), - ("requires_pyapi", ct.c_byte), - ("no_floatingpoint_errors", ct.c_byte), - ] - - i4 = np.dtype("i4") - dt, call_info_obj = np.negative._resolve_dtypes_and_context((i4, i4)) - assert dt == (i4, i4) # can be used without casting - - # Fill in the rest of the information: - np.negative._get_strided_loop(call_info_obj) - - ct.pythonapi.PyCapsule_GetPointer.restype = ct.c_void_p - call_info = ct.pythonapi.PyCapsule_GetPointer( - ct.py_object(call_info_obj), - ct.c_char_p(b"numpy_1.24_ufunc_call_info")) - - call_info = ct.cast(call_info, ct.POINTER(call_info_t)).contents - - arr = np.arange(10, dtype=i4) - call_info.strided_loop( - call_info.context, - 
data_t(arr.ctypes.data, arr.ctypes.data), - arr.ctypes.shape, # is a C-array with 10 here - strides_t(arr.ctypes.strides[0], arr.ctypes.strides[0]), - call_info.auxdata) - - # We just directly called the negative inner-loop in-place: - assert_array_equal(arr, -np.arange(10, dtype=i4)) - - @pytest.mark.parametrize("strides", [1, (1, 2, 3), (1, "2")]) - def test__get_strided_loop_errors_bad_strides(self, strides): - i4 = np.dtype("i4") - dt, call_info = np.negative._resolve_dtypes_and_context((i4, i4)) - - with pytest.raises(TypeError, match="fixed_strides.*tuple.*or None"): - np.negative._get_strided_loop(call_info, fixed_strides=strides) - - def test__get_strided_loop_errors_bad_call_info(self): - i4 = np.dtype("i4") - dt, call_info = np.negative._resolve_dtypes_and_context((i4, i4)) - - with pytest.raises(ValueError, match="PyCapsule"): - np.negative._get_strided_loop("not the capsule!") - - with pytest.raises(TypeError, match=".*incompatible context"): - np.add._get_strided_loop(call_info) - - np.negative._get_strided_loop(call_info) - with pytest.raises(TypeError): - # cannot call it a second time: - np.negative._get_strided_loop(call_info) - - def test_long_arrays(self): - t = np.zeros((1029, 917), dtype=np.single) - t[0][0] = 1 - t[28][414] = 1 - tc = np.cos(t) - assert_equal(tc[0][0], tc[28][414]) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/assumed_shape/foo_use.f90 b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/assumed_shape/foo_use.f90 deleted file mode 100644 index 337465ac540440fc8e8e10d23757af202e8a52a4..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/assumed_shape/foo_use.f90 +++ /dev/null @@ -1,19 +0,0 @@ -subroutine sum_with_use(x, res) - use precision - - implicit none - - real(kind=rk), intent(in) :: x(:) - real(kind=rk), intent(out) :: res - - integer :: i - - !print *, "size(x) = ", size(x) - - res = 0.0 - - do i = 1, size(x) - res = res + x(i) - enddo - - end subroutine diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/fft/tests/test_helper.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/fft/tests/test_helper.py deleted file mode 100644 index 3fb700bb3d00760b0dd0020b52f1c60549d7706e..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/fft/tests/test_helper.py +++ /dev/null @@ -1,167 +0,0 @@ -"""Test functions for fftpack.helper module - -Copied from fftpack.helper by Pearu Peterson, October 2005 - -""" -import numpy as np -from numpy.testing import assert_array_almost_equal -from numpy import fft, pi - - -class TestFFTShift: - - def test_definition(self): - x = [0, 1, 2, 3, 4, -4, -3, -2, -1] - y = [-4, -3, -2, -1, 0, 1, 2, 3, 4] - assert_array_almost_equal(fft.fftshift(x), y) - assert_array_almost_equal(fft.ifftshift(y), x) - x = [0, 1, 2, 3, 4, -5, -4, -3, -2, -1] - y = [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4] - assert_array_almost_equal(fft.fftshift(x), y) - assert_array_almost_equal(fft.ifftshift(y), x) - - def test_inverse(self): - for n in [1, 4, 9, 100, 211]: - x = np.random.random((n,)) - assert_array_almost_equal(fft.ifftshift(fft.fftshift(x)), x) - - def test_axes_keyword(self): - freqs = [[0, 1, 2], [3, 4, -4], [-3, -2, -1]] - shifted = [[-1, -3, -2], [2, 0, 1], [-4, 3, 4]] - assert_array_almost_equal(fft.fftshift(freqs, axes=(0, 1)), shifted) - 
assert_array_almost_equal(fft.fftshift(freqs, axes=0), - fft.fftshift(freqs, axes=(0,))) - assert_array_almost_equal(fft.ifftshift(shifted, axes=(0, 1)), freqs) - assert_array_almost_equal(fft.ifftshift(shifted, axes=0), - fft.ifftshift(shifted, axes=(0,))) - - assert_array_almost_equal(fft.fftshift(freqs), shifted) - assert_array_almost_equal(fft.ifftshift(shifted), freqs) - - def test_uneven_dims(self): - """ Test 2D input, which has uneven dimension sizes """ - freqs = [ - [0, 1], - [2, 3], - [4, 5] - ] - - # shift in dimension 0 - shift_dim0 = [ - [4, 5], - [0, 1], - [2, 3] - ] - assert_array_almost_equal(fft.fftshift(freqs, axes=0), shift_dim0) - assert_array_almost_equal(fft.ifftshift(shift_dim0, axes=0), freqs) - assert_array_almost_equal(fft.fftshift(freqs, axes=(0,)), shift_dim0) - assert_array_almost_equal(fft.ifftshift(shift_dim0, axes=[0]), freqs) - - # shift in dimension 1 - shift_dim1 = [ - [1, 0], - [3, 2], - [5, 4] - ] - assert_array_almost_equal(fft.fftshift(freqs, axes=1), shift_dim1) - assert_array_almost_equal(fft.ifftshift(shift_dim1, axes=1), freqs) - - # shift in both dimensions - shift_dim_both = [ - [5, 4], - [1, 0], - [3, 2] - ] - assert_array_almost_equal(fft.fftshift(freqs, axes=(0, 1)), shift_dim_both) - assert_array_almost_equal(fft.ifftshift(shift_dim_both, axes=(0, 1)), freqs) - assert_array_almost_equal(fft.fftshift(freqs, axes=[0, 1]), shift_dim_both) - assert_array_almost_equal(fft.ifftshift(shift_dim_both, axes=[0, 1]), freqs) - - # axes=None (default) shift in all dimensions - assert_array_almost_equal(fft.fftshift(freqs, axes=None), shift_dim_both) - assert_array_almost_equal(fft.ifftshift(shift_dim_both, axes=None), freqs) - assert_array_almost_equal(fft.fftshift(freqs), shift_dim_both) - assert_array_almost_equal(fft.ifftshift(shift_dim_both), freqs) - - def test_equal_to_original(self): - """ Test that the new (>=v1.15) implementation (see #10073) is equal to the original (<=v1.14) """ - from numpy.core import asarray, concatenate, arange, take - - def original_fftshift(x, axes=None): - """ How fftshift was implemented in v1.14""" - tmp = asarray(x) - ndim = tmp.ndim - if axes is None: - axes = list(range(ndim)) - elif isinstance(axes, int): - axes = (axes,) - y = tmp - for k in axes: - n = tmp.shape[k] - p2 = (n + 1) // 2 - mylist = concatenate((arange(p2, n), arange(p2))) - y = take(y, mylist, k) - return y - - def original_ifftshift(x, axes=None): - """ How ifftshift was implemented in v1.14 """ - tmp = asarray(x) - ndim = tmp.ndim - if axes is None: - axes = list(range(ndim)) - elif isinstance(axes, int): - axes = (axes,) - y = tmp - for k in axes: - n = tmp.shape[k] - p2 = n - (n + 1) // 2 - mylist = concatenate((arange(p2, n), arange(p2))) - y = take(y, mylist, k) - return y - - # create possible 2d array combinations and try all possible keywords - # compare output to original functions - for i in range(16): - for j in range(16): - for axes_keyword in [0, 1, None, (0,), (0, 1)]: - inp = np.random.rand(i, j) - - assert_array_almost_equal(fft.fftshift(inp, axes_keyword), - original_fftshift(inp, axes_keyword)) - - assert_array_almost_equal(fft.ifftshift(inp, axes_keyword), - original_ifftshift(inp, axes_keyword)) - - -class TestFFTFreq: - - def test_definition(self): - x = [0, 1, 2, 3, 4, -4, -3, -2, -1] - assert_array_almost_equal(9*fft.fftfreq(9), x) - assert_array_almost_equal(9*pi*fft.fftfreq(9, pi), x) - x = [0, 1, 2, 3, 4, -5, -4, -3, -2, -1] - assert_array_almost_equal(10*fft.fftfreq(10), x) - 
assert_array_almost_equal(10*pi*fft.fftfreq(10, pi), x) - - -class TestRFFTFreq: - - def test_definition(self): - x = [0, 1, 2, 3, 4] - assert_array_almost_equal(9*fft.rfftfreq(9), x) - assert_array_almost_equal(9*pi*fft.rfftfreq(9, pi), x) - x = [0, 1, 2, 3, 4, 5] - assert_array_almost_equal(10*fft.rfftfreq(10), x) - assert_array_almost_equal(10*pi*fft.rfftfreq(10, pi), x) - - -class TestIRFFTN: - - def test_not_last_axis_success(self): - ar, ai = np.random.random((2, 16, 8, 32)) - a = ar + 1j*ai - - axes = (-2,) - - # Should not raise error - fft.irfftn(a, axes=axes) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexing/test_datetime.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexing/test_datetime.py deleted file mode 100644 index 6510612ba6f877d46ee53fea05977d58ca4ef13d..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexing/test_datetime.py +++ /dev/null @@ -1,188 +0,0 @@ -import re - -import pytest - -import pandas as pd -from pandas import ( - DataFrame, - Index, - Series, - Timestamp, - date_range, -) -import pandas._testing as tm - - -class TestDatetimeIndex: - def test_get_loc_naive_dti_aware_str_deprecated(self): - # GH#46903 - ts = Timestamp("20130101")._value - dti = pd.DatetimeIndex([ts + 50 + i for i in range(100)]) - ser = Series(range(100), index=dti) - - key = "2013-01-01 00:00:00.000000050+0000" - msg = re.escape(repr(key)) - with pytest.raises(KeyError, match=msg): - ser[key] - - with pytest.raises(KeyError, match=msg): - dti.get_loc(key) - - def test_indexing_with_datetime_tz(self): - # GH#8260 - # support datetime64 with tz - - idx = Index(date_range("20130101", periods=3, tz="US/Eastern"), name="foo") - dr = date_range("20130110", periods=3) - df = DataFrame({"A": idx, "B": dr}) - df["C"] = idx - df.iloc[1, 1] = pd.NaT - df.iloc[1, 2] = pd.NaT - - expected = Series( - [Timestamp("2013-01-02 00:00:00-0500", tz="US/Eastern"), pd.NaT, pd.NaT], - index=list("ABC"), - dtype="object", - name=1, - ) - - # indexing - result = df.iloc[1] - tm.assert_series_equal(result, expected) - result = df.loc[1] - tm.assert_series_equal(result, expected) - - def test_indexing_fast_xs(self): - # indexing - fast_xs - df = DataFrame({"a": date_range("2014-01-01", periods=10, tz="UTC")}) - result = df.iloc[5] - expected = Series( - [Timestamp("2014-01-06 00:00:00+0000", tz="UTC")], index=["a"], name=5 - ) - tm.assert_series_equal(result, expected) - - result = df.loc[5] - tm.assert_series_equal(result, expected) - - # indexing - boolean - result = df[df.a > df.a[3]] - expected = df.iloc[4:] - tm.assert_frame_equal(result, expected) - - def test_consistency_with_tz_aware_scalar(self): - # xef gh-12938 - # various ways of indexing the same tz-aware scalar - df = Series([Timestamp("2016-03-30 14:35:25", tz="Europe/Brussels")]).to_frame() - - df = pd.concat([df, df]).reset_index(drop=True) - expected = Timestamp("2016-03-30 14:35:25+0200", tz="Europe/Brussels") - - result = df[0][0] - assert result == expected - - result = df.iloc[0, 0] - assert result == expected - - result = df.loc[0, 0] - assert result == expected - - result = df.iat[0, 0] - assert result == expected - - result = df.at[0, 0] - assert result == expected - - result = df[0].loc[0] - assert result == expected - - result = df[0].at[0] - assert result == expected - - def test_indexing_with_datetimeindex_tz(self, indexer_sl): - # GH 12050 - # indexing on a series with a 
datetimeindex with tz - index = date_range("2015-01-01", periods=2, tz="utc") - - ser = Series(range(2), index=index, dtype="int64") - - # list-like indexing - - for sel in (index, list(index)): - # getitem - result = indexer_sl(ser)[sel] - expected = ser.copy() - if sel is not index: - expected.index = expected.index._with_freq(None) - tm.assert_series_equal(result, expected) - - # setitem - result = ser.copy() - indexer_sl(result)[sel] = 1 - expected = Series(1, index=index) - tm.assert_series_equal(result, expected) - - # single element indexing - - # getitem - assert indexer_sl(ser)[index[1]] == 1 - - # setitem - result = ser.copy() - indexer_sl(result)[index[1]] = 5 - expected = Series([0, 5], index=index) - tm.assert_series_equal(result, expected) - - def test_nanosecond_getitem_setitem_with_tz(self): - # GH 11679 - data = ["2016-06-28 08:30:00.123456789"] - index = pd.DatetimeIndex(data, dtype="datetime64[ns, America/Chicago]") - df = DataFrame({"a": [10]}, index=index) - result = df.loc[df.index[0]] - expected = Series(10, index=["a"], name=df.index[0]) - tm.assert_series_equal(result, expected) - - result = df.copy() - result.loc[df.index[0], "a"] = -1 - expected = DataFrame(-1, index=index, columns=["a"]) - tm.assert_frame_equal(result, expected) - - def test_getitem_str_slice_millisecond_resolution(self, frame_or_series): - # GH#33589 - - keys = [ - "2017-10-25T16:25:04.151", - "2017-10-25T16:25:04.252", - "2017-10-25T16:50:05.237", - "2017-10-25T16:50:05.238", - ] - obj = frame_or_series( - [1, 2, 3, 4], - index=[Timestamp(x) for x in keys], - ) - result = obj[keys[1] : keys[2]] - expected = frame_or_series( - [2, 3], - index=[ - Timestamp(keys[1]), - Timestamp(keys[2]), - ], - ) - tm.assert_equal(result, expected) - - def test_getitem_pyarrow_index(self, frame_or_series): - # GH 53644 - pytest.importorskip("pyarrow") - obj = frame_or_series( - range(5), - index=date_range("2020", freq="D", periods=5).astype( - "timestamp[us][pyarrow]" - ), - ) - result = obj.loc[obj.index[:-3]] - expected = frame_or_series( - range(2), - index=date_range("2020", freq="D", periods=2).astype( - "timestamp[us][pyarrow]" - ), - ) - tm.assert_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tools/test_to_time.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tools/test_to_time.py deleted file mode 100644 index 5046fd9d0edc17ba9fc4558d3dcfbf5ecf778b07..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tools/test_to_time.py +++ /dev/null @@ -1,70 +0,0 @@ -from datetime import time -import locale - -import numpy as np -import pytest - -from pandas.compat import PY311 - -from pandas import Series -import pandas._testing as tm -from pandas.core.tools.times import to_time - -# The tests marked with this are locale-dependent. -# They pass, except when the machine locale is zh_CN or it_IT. 
-fails_on_non_english = pytest.mark.xfail( - locale.getlocale()[0] in ("zh_CN", "it_IT"), - reason="fail on a CI build with LC_ALL=zh_CN.utf8/it_IT.utf8", - strict=False, -) - - -class TestToTime: - @pytest.mark.parametrize( - "time_string", - [ - "14:15", - "1415", - pytest.param("2:15pm", marks=fails_on_non_english), - pytest.param("0215pm", marks=fails_on_non_english), - "14:15:00", - "141500", - pytest.param("2:15:00pm", marks=fails_on_non_english), - pytest.param("021500pm", marks=fails_on_non_english), - time(14, 15), - ], - ) - def test_parsers_time(self, time_string): - # GH#11818 - assert to_time(time_string) == time(14, 15) - - def test_odd_format(self): - new_string = "14.15" - msg = r"Cannot convert arg \['14\.15'\] to a time" - if not PY311: - with pytest.raises(ValueError, match=msg): - to_time(new_string) - assert to_time(new_string, format="%H.%M") == time(14, 15) - - def test_arraylike(self): - arg = ["14:15", "20:20"] - expected_arr = [time(14, 15), time(20, 20)] - assert to_time(arg) == expected_arr - assert to_time(arg, format="%H:%M") == expected_arr - assert to_time(arg, infer_time_format=True) == expected_arr - assert to_time(arg, format="%I:%M%p", errors="coerce") == [None, None] - - res = to_time(arg, format="%I:%M%p", errors="ignore") - tm.assert_numpy_array_equal(res, np.array(arg, dtype=np.object_)) - - msg = "Cannot convert.+to a time with given format" - with pytest.raises(ValueError, match=msg): - to_time(arg, format="%I:%M%p", errors="raise") - - tm.assert_series_equal( - to_time(Series(arg, name="test")), Series(expected_arr, name="test") - ) - - res = to_time(np.array(arg)) - assert isinstance(res, list) - assert res == expected_arr diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/html5lib/treeadapters/genshi.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/html5lib/treeadapters/genshi.py deleted file mode 100644 index 61d5fb6ac42ca4152f056d996af0cb0b0d2ddc35..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/html5lib/treeadapters/genshi.py +++ /dev/null @@ -1,54 +0,0 @@ -from __future__ import absolute_import, division, unicode_literals - -from genshi.core import QName, Attrs -from genshi.core import START, END, TEXT, COMMENT, DOCTYPE - - -def to_genshi(walker): - """Convert a tree to a genshi tree - - :arg walker: the treewalker to use to walk the tree to convert it - - :returns: generator of genshi nodes - - """ - text = [] - for token in walker: - type = token["type"] - if type in ("Characters", "SpaceCharacters"): - text.append(token["data"]) - elif text: - yield TEXT, "".join(text), (None, -1, -1) - text = [] - - if type in ("StartTag", "EmptyTag"): - if token["namespace"]: - name = "{%s}%s" % (token["namespace"], token["name"]) - else: - name = token["name"] - attrs = Attrs([(QName("{%s}%s" % attr if attr[0] is not None else attr[1]), value) - for attr, value in token["data"].items()]) - yield (START, (QName(name), attrs), (None, -1, -1)) - if type == "EmptyTag": - type = "EndTag" - - if type == "EndTag": - if token["namespace"]: - name = "{%s}%s" % (token["namespace"], token["name"]) - else: - name = token["name"] - - yield END, QName(name), (None, -1, -1) - - elif type == "Comment": - yield COMMENT, token["data"], (None, -1, -1) - - elif type == "Doctype": - yield DOCTYPE, (token["name"], token["publicId"], - token["systemId"]), (None, -1, -1) - - else: - pass # FIXME: What to do? 
- - if text: - yield TEXT, "".join(text), (None, -1, -1) diff --git a/spaces/propilot/propilot-calling-functions/app.py b/spaces/propilot/propilot-calling-functions/app.py deleted file mode 100644 index 29f2758cb94c1f8973f4c6f479557e2c91440c34..0000000000000000000000000000000000000000 --- a/spaces/propilot/propilot-calling-functions/app.py +++ /dev/null @@ -1,47 +0,0 @@ -import openai -import streamlit as st -from streamlit_chat import message -import os -from dotenv import load_dotenv - -from chat_settings import ( - get_initial_message, - get_chatgpt_response, - update_chat, -) - - -# Carga las claves -load_dotenv() -openai.api_key = os.getenv("OPENAI_API_KEY") -LLM = "gpt-3.5-turbo-0613" - - -# Streamlit Application -def main(): - st.title("ProPilot - OpenAI Demo Function Calling") - st.markdown( - """ - Demo of OpenAI function calling using gpt-3.5-turbo-0613. ProPilot - QuePlan - """ - ) - - if 'messages' not in st.session_state: - st.session_state['messages'] = get_initial_message() - - query = st.text_input("Ingresa tu texto") - - if st.button("Enviar") and query: - st.session_state['messages'] = update_chat(st.session_state['messages'], "user", query) - chatgpt_response = get_chatgpt_response(st.session_state['messages'], LLM) - st.session_state['messages'] = update_chat(st.session_state['messages'], "assistant", chatgpt_response) - - if st.session_state['messages']: - for i, msg in enumerate(st.session_state['messages']): - if msg['role'] == 'user': - message(msg['content'], is_user=True, key=str(i)) - else: - message(msg['content'], key=str(i)) - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/qingxu98/gpt-academic/crazy_functions/vt_fns/vt_state.py b/spaces/qingxu98/gpt-academic/crazy_functions/vt_fns/vt_state.py deleted file mode 100644 index 18187286383ce2f3e881510852cf3aba7e6c43d1..0000000000000000000000000000000000000000 --- a/spaces/qingxu98/gpt-academic/crazy_functions/vt_fns/vt_state.py +++ /dev/null @@ -1,28 +0,0 @@ -import pickle - -class VoidTerminalState(): - def __init__(self): - self.reset_state() - - def reset_state(self): - self.has_provided_explaination = False - - def lock_plugin(self, chatbot): - chatbot._cookies['lock_plugin'] = 'crazy_functions.虚空终端->虚空终端' - chatbot._cookies['plugin_state'] = pickle.dumps(self) - - def unlock_plugin(self, chatbot): - self.reset_state() - chatbot._cookies['lock_plugin'] = None - chatbot._cookies['plugin_state'] = pickle.dumps(self) - - def set_state(self, chatbot, key, value): - setattr(self, key, value) - chatbot._cookies['plugin_state'] = pickle.dumps(self) - - def get_state(chatbot): - state = chatbot._cookies.get('plugin_state', None) - if state is not None: state = pickle.loads(state) - else: state = VoidTerminalState() - state.chatbot = chatbot - return state \ No newline at end of file diff --git a/spaces/quantumiracle-git/OpenBiDexHand/robotinder-data/data_parser.py b/spaces/quantumiracle-git/OpenBiDexHand/robotinder-data/data_parser.py deleted file mode 100644 index 6d3ad39be42f388634560bb137b0c7a6891e8d43..0000000000000000000000000000000000000000 --- a/spaces/quantumiracle-git/OpenBiDexHand/robotinder-data/data_parser.py +++ /dev/null @@ -1,27 +0,0 @@ -import json - -from os import listdir -from os.path import isfile, join, isdir -mypath = './' -onlyfolders = [f for f in listdir(mypath) if isdir(join(mypath, f))] - -idx = 0 -data_info = {'idx': [], 'env': [], 'name': []} -for folder in onlyfolders: - print(folder) - fs = [join(mypath, folder, f) for f in listdir(join(mypath, 
folder)) if isfile(join(mypath, folder, f))] - # print(fs) - for f in fs: - if f.endswith(".gif"): - idx += 1 - data_info['idx'].append(idx) - data_info['env'].append(folder) - data_info['name'].append(f.split('/')[-1]) - -with open('data_info.json', 'w') as f: - json.dump(data_info, f) - -with open('data_info.json', 'r') as f: - data_info = json.load(f) -print(data_info) - \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/CODE_OF_CONDUCT.md b/spaces/quidiaMuxgu/Expedit-SAM/CODE_OF_CONDUCT.md deleted file mode 100644 index 08b500a221857ec3f451338e80b4a9ab1173a1af..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,80 +0,0 @@ -# Code of Conduct - -## Our Pledge - -In the interest of fostering an open and welcoming environment, we as -contributors and maintainers pledge to make participation in our project and -our community a harassment-free experience for everyone, regardless of age, body -size, disability, ethnicity, sex characteristics, gender identity and expression, -level of experience, education, socio-economic status, nationality, personal -appearance, race, religion, or sexual identity and orientation. - -## Our Standards - -Examples of behavior that contributes to creating a positive environment -include: - -* Using welcoming and inclusive language -* Being respectful of differing viewpoints and experiences -* Gracefully accepting constructive criticism -* Focusing on what is best for the community -* Showing empathy towards other community members - -Examples of unacceptable behavior by participants include: - -* The use of sexualized language or imagery and unwelcome sexual attention or - advances -* Trolling, insulting/derogatory comments, and personal or political attacks -* Public or private harassment -* Publishing others' private information, such as a physical or electronic - address, without explicit permission -* Other conduct which could reasonably be considered inappropriate in a - professional setting - -## Our Responsibilities - -Project maintainers are responsible for clarifying the standards of acceptable -behavior and are expected to take appropriate and fair corrective action in -response to any instances of unacceptable behavior. - -Project maintainers have the right and responsibility to remove, edit, or -reject comments, commits, code, wiki edits, issues, and other contributions -that are not aligned to this Code of Conduct, or to ban temporarily or -permanently any contributor for other behaviors that they deem inappropriate, -threatening, offensive, or harmful. - -## Scope - -This Code of Conduct applies within all project spaces, and it also applies when -an individual is representing the project or its community in public spaces. -Examples of representing a project or community include using an official -project e-mail address, posting via an official social media account, or acting -as an appointed representative at an online or offline event. Representation of -a project may be further defined and clarified by project maintainers. - -This Code of Conduct also applies outside the project spaces when there is a -reasonable belief that an individual's behavior may have a negative impact on -the project or its community. - -## Enforcement - -Instances of abusive, harassing, or otherwise unacceptable behavior may be -reported by contacting the project team at . 
All -complaints will be reviewed and investigated and will result in a response that -is deemed necessary and appropriate to the circumstances. The project team is -obligated to maintain confidentiality with regard to the reporter of an incident. -Further details of specific enforcement policies may be posted separately. - -Project maintainers who do not follow or enforce the Code of Conduct in good -faith may face temporary or permanent repercussions as determined by other -members of the project's leadership. - -## Attribution - -This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, -available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html - -[homepage]: https://www.contributor-covenant.org - -For answers to common questions about this code of conduct, see -https://www.contributor-covenant.org/faq diff --git a/spaces/rabiyulfahim/dalle-mini/html2canvas.js b/spaces/rabiyulfahim/dalle-mini/html2canvas.js deleted file mode 100644 index dd1606d8698aae0ed4877058d6a218fda3a515cd..0000000000000000000000000000000000000000 --- a/spaces/rabiyulfahim/dalle-mini/html2canvas.js +++ /dev/null @@ -1,7756 +0,0 @@ -/*! - * html2canvas 1.4.1 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ -(function (global, factory) { - typeof exports === 'object' && typeof module !== 'undefined' ? module.exports = factory() : - typeof define === 'function' && define.amd ? define(factory) : - (global = typeof globalThis !== 'undefined' ? globalThis : global || self, global.html2canvas = factory()); -}(this, (function () { 'use strict'; - - /*! ***************************************************************************** - Copyright (c) Microsoft Corporation. - - Permission to use, copy, modify, and/or distribute this software for any - purpose with or without fee is hereby granted. - - THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH - REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY - AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, - INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM - LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR - OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR - PERFORMANCE OF THIS SOFTWARE. - ***************************************************************************** */ - /* global Reflect, Promise */ - - var extendStatics = function(d, b) { - extendStatics = Object.setPrototypeOf || - ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) || - function (d, b) { for (var p in b) if (Object.prototype.hasOwnProperty.call(b, p)) d[p] = b[p]; }; - return extendStatics(d, b); - }; - - function __extends(d, b) { - if (typeof b !== "function" && b !== null) - throw new TypeError("Class extends value " + String(b) + " is not a constructor or null"); - extendStatics(d, b); - function __() { this.constructor = d; } - d.prototype = b === null ? Object.create(b) : (__.prototype = b.prototype, new __()); - } - - var __assign = function() { - __assign = Object.assign || function __assign(t) { - for (var s, i = 1, n = arguments.length; i < n; i++) { - s = arguments[i]; - for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p)) t[p] = s[p]; - } - return t; - }; - return __assign.apply(this, arguments); - }; - - function __awaiter(thisArg, _arguments, P, generator) { - function adopt(value) { return value instanceof P ? 
value : new P(function (resolve) { resolve(value); }); } - return new (P || (P = Promise))(function (resolve, reject) { - function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } } - function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } } - function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); } - step((generator = generator.apply(thisArg, _arguments || [])).next()); - }); - } - - function __generator(thisArg, body) { - var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g; - return g = { next: verb(0), "throw": verb(1), "return": verb(2) }, typeof Symbol === "function" && (g[Symbol.iterator] = function() { return this; }), g; - function verb(n) { return function (v) { return step([n, v]); }; } - function step(op) { - if (f) throw new TypeError("Generator is already executing."); - while (_) try { - if (f = 1, y && (t = op[0] & 2 ? y["return"] : op[0] ? y["throw"] || ((t = y["return"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t; - if (y = 0, t) op = [op[0] & 2, t.value]; - switch (op[0]) { - case 0: case 1: t = op; break; - case 4: _.label++; return { value: op[1], done: false }; - case 5: _.label++; y = op[1]; op = [0]; continue; - case 7: op = _.ops.pop(); _.trys.pop(); continue; - default: - if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; } - if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; } - if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; } - if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; } - if (t[2]) _.ops.pop(); - _.trys.pop(); continue; - } - op = body.call(thisArg, _); - } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; } - if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true }; - } - } - - function __spreadArray(to, from, pack) { - if (pack || arguments.length === 2) for (var i = 0, l = from.length, ar; i < l; i++) { - if (ar || !(i in from)) { - if (!ar) ar = Array.prototype.slice.call(from, 0, i); - ar[i] = from[i]; - } - } - return to.concat(ar || from); - } - - var Bounds = /** @class */ (function () { - function Bounds(left, top, width, height) { - this.left = left; - this.top = top; - this.width = width; - this.height = height; - } - Bounds.prototype.add = function (x, y, w, h) { - return new Bounds(this.left + x, this.top + y, this.width + w, this.height + h); - }; - Bounds.fromClientRect = function (context, clientRect) { - return new Bounds(clientRect.left + context.windowBounds.left, clientRect.top + context.windowBounds.top, clientRect.width, clientRect.height); - }; - Bounds.fromDOMRectList = function (context, domRectList) { - var domRect = Array.from(domRectList).find(function (rect) { return rect.width !== 0; }); - return domRect - ? 
new Bounds(domRect.left + context.windowBounds.left, domRect.top + context.windowBounds.top, domRect.width, domRect.height) - : Bounds.EMPTY; - }; - Bounds.EMPTY = new Bounds(0, 0, 0, 0); - return Bounds; - }()); - var parseBounds = function (context, node) { - return Bounds.fromClientRect(context, node.getBoundingClientRect()); - }; - var parseDocumentSize = function (document) { - var body = document.body; - var documentElement = document.documentElement; - if (!body || !documentElement) { - throw new Error("Unable to get document size"); - } - var width = Math.max(Math.max(body.scrollWidth, documentElement.scrollWidth), Math.max(body.offsetWidth, documentElement.offsetWidth), Math.max(body.clientWidth, documentElement.clientWidth)); - var height = Math.max(Math.max(body.scrollHeight, documentElement.scrollHeight), Math.max(body.offsetHeight, documentElement.offsetHeight), Math.max(body.clientHeight, documentElement.clientHeight)); - return new Bounds(0, 0, width, height); - }; - - /* - * css-line-break 2.1.0 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var toCodePoints$1 = function (str) { - var codePoints = []; - var i = 0; - var length = str.length; - while (i < length) { - var value = str.charCodeAt(i++); - if (value >= 0xd800 && value <= 0xdbff && i < length) { - var extra = str.charCodeAt(i++); - if ((extra & 0xfc00) === 0xdc00) { - codePoints.push(((value & 0x3ff) << 10) + (extra & 0x3ff) + 0x10000); - } - else { - codePoints.push(value); - i--; - } - } - else { - codePoints.push(value); - } - } - return codePoints; - }; - var fromCodePoint$1 = function () { - var codePoints = []; - for (var _i = 0; _i < arguments.length; _i++) { - codePoints[_i] = arguments[_i]; - } - if (String.fromCodePoint) { - return String.fromCodePoint.apply(String, codePoints); - } - var length = codePoints.length; - if (!length) { - return ''; - } - var codeUnits = []; - var index = -1; - var result = ''; - while (++index < length) { - var codePoint = codePoints[index]; - if (codePoint <= 0xffff) { - codeUnits.push(codePoint); - } - else { - codePoint -= 0x10000; - codeUnits.push((codePoint >> 10) + 0xd800, (codePoint % 0x400) + 0xdc00); - } - if (index + 1 === length || codeUnits.length > 0x4000) { - result += String.fromCharCode.apply(String, codeUnits); - codeUnits.length = 0; - } - } - return result; - }; - var chars$2 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$2 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$2 = 0; i$2 < chars$2.length; i$2++) { - lookup$2[chars$2.charCodeAt(i$2)] = i$2; - } - - /* - * utrie 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars$1$1 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$1$1 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$1$1 = 0; i$1$1 < chars$1$1.length; i$1$1++) { - lookup$1$1[chars$1$1.charCodeAt(i$1$1)] = i$1$1; - } - var decode$1 = function (base64) { - var bufferLength = base64.length * 0.75, len = base64.length, i, p = 0, encoded1, encoded2, encoded3, encoded4; - if (base64[base64.length - 1] === '=') { - bufferLength--; - if (base64[base64.length - 2] === '=') { - bufferLength--; - } - } - var buffer = typeof ArrayBuffer !== 'undefined' && - typeof Uint8Array !== 'undefined' && - typeof Uint8Array.prototype.slice !== 'undefined' - ? 
new ArrayBuffer(bufferLength) - : new Array(bufferLength); - var bytes = Array.isArray(buffer) ? buffer : new Uint8Array(buffer); - for (i = 0; i < len; i += 4) { - encoded1 = lookup$1$1[base64.charCodeAt(i)]; - encoded2 = lookup$1$1[base64.charCodeAt(i + 1)]; - encoded3 = lookup$1$1[base64.charCodeAt(i + 2)]; - encoded4 = lookup$1$1[base64.charCodeAt(i + 3)]; - bytes[p++] = (encoded1 << 2) | (encoded2 >> 4); - bytes[p++] = ((encoded2 & 15) << 4) | (encoded3 >> 2); - bytes[p++] = ((encoded3 & 3) << 6) | (encoded4 & 63); - } - return buffer; - }; - var polyUint16Array$1 = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 2) { - bytes.push((buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - var polyUint32Array$1 = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 4) { - bytes.push((buffer[i + 3] << 24) | (buffer[i + 2] << 16) | (buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - - /** Shift size for getting the index-2 table offset. */ - var UTRIE2_SHIFT_2$1 = 5; - /** Shift size for getting the index-1 table offset. */ - var UTRIE2_SHIFT_1$1 = 6 + 5; - /** - * Shift size for shifting left the index array values. - * Increases possible data size with 16-bit index values at the cost - * of compactability. - * This requires data blocks to be aligned by UTRIE2_DATA_GRANULARITY. - */ - var UTRIE2_INDEX_SHIFT$1 = 2; - /** - * Difference between the two shift sizes, - * for getting an index-1 offset from an index-2 offset. 6=11-5 - */ - var UTRIE2_SHIFT_1_2$1 = UTRIE2_SHIFT_1$1 - UTRIE2_SHIFT_2$1; - /** - * The part of the index-2 table for U+D800..U+DBFF stores values for - * lead surrogate code _units_ not code _points_. - * Values for lead surrogate code _points_ are indexed with this portion of the table. - * Length=32=0x20=0x400>>UTRIE2_SHIFT_2. (There are 1024=0x400 lead surrogates.) - */ - var UTRIE2_LSCP_INDEX_2_OFFSET$1 = 0x10000 >> UTRIE2_SHIFT_2$1; - /** Number of entries in a data block. 32=0x20 */ - var UTRIE2_DATA_BLOCK_LENGTH$1 = 1 << UTRIE2_SHIFT_2$1; - /** Mask for getting the lower bits for the in-data-block offset. */ - var UTRIE2_DATA_MASK$1 = UTRIE2_DATA_BLOCK_LENGTH$1 - 1; - var UTRIE2_LSCP_INDEX_2_LENGTH$1 = 0x400 >> UTRIE2_SHIFT_2$1; - /** Count the lengths of both BMP pieces. 2080=0x820 */ - var UTRIE2_INDEX_2_BMP_LENGTH$1 = UTRIE2_LSCP_INDEX_2_OFFSET$1 + UTRIE2_LSCP_INDEX_2_LENGTH$1; - /** - * The 2-byte UTF-8 version of the index-2 table follows at offset 2080=0x820. - * Length 32=0x20 for lead bytes C0..DF, regardless of UTRIE2_SHIFT_2. - */ - var UTRIE2_UTF8_2B_INDEX_2_OFFSET$1 = UTRIE2_INDEX_2_BMP_LENGTH$1; - var UTRIE2_UTF8_2B_INDEX_2_LENGTH$1 = 0x800 >> 6; /* U+0800 is the first code point after 2-byte UTF-8 */ - /** - * The index-1 table, only used for supplementary code points, at offset 2112=0x840. - * Variable length, for code points up to highStart, where the last single-value range starts. - * Maximum length 512=0x200=0x100000>>UTRIE2_SHIFT_1. - * (For 0x100000 supplementary code points U+10000..U+10ffff.) - * - * The part of the index-2 table for supplementary code points starts - * after this index-1 table. - * - * Both the index-1 table and the following part of the index-2 table - * are omitted completely if there is only BMP data. - */ - var UTRIE2_INDEX_1_OFFSET$1 = UTRIE2_UTF8_2B_INDEX_2_OFFSET$1 + UTRIE2_UTF8_2B_INDEX_2_LENGTH$1; - /** - * Number of index-1 entries for the BMP. 
32=0x20 - * This part of the index-1 table is omitted from the serialized form. - */ - var UTRIE2_OMITTED_BMP_INDEX_1_LENGTH$1 = 0x10000 >> UTRIE2_SHIFT_1$1; - /** Number of entries in an index-2 block. 64=0x40 */ - var UTRIE2_INDEX_2_BLOCK_LENGTH$1 = 1 << UTRIE2_SHIFT_1_2$1; - /** Mask for getting the lower bits for the in-index-2-block offset. */ - var UTRIE2_INDEX_2_MASK$1 = UTRIE2_INDEX_2_BLOCK_LENGTH$1 - 1; - var slice16$1 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint16Array(Array.prototype.slice.call(view, start, end)); - }; - var slice32$1 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint32Array(Array.prototype.slice.call(view, start, end)); - }; - var createTrieFromBase64$1 = function (base64, _byteLength) { - var buffer = decode$1(base64); - var view32 = Array.isArray(buffer) ? polyUint32Array$1(buffer) : new Uint32Array(buffer); - var view16 = Array.isArray(buffer) ? polyUint16Array$1(buffer) : new Uint16Array(buffer); - var headerLength = 24; - var index = slice16$1(view16, headerLength / 2, view32[4] / 2); - var data = view32[5] === 2 - ? slice16$1(view16, (headerLength + view32[4]) / 2) - : slice32$1(view32, Math.ceil((headerLength + view32[4]) / 4)); - return new Trie$1(view32[0], view32[1], view32[2], view32[3], index, data); - }; - var Trie$1 = /** @class */ (function () { - function Trie(initialValue, errorValue, highStart, highValueIndex, index, data) { - this.initialValue = initialValue; - this.errorValue = errorValue; - this.highStart = highStart; - this.highValueIndex = highValueIndex; - this.index = index; - this.data = data; - } - /** - * Get the value for a code point as stored in the Trie. - * - * @param codePoint the code point - * @return the value - */ - Trie.prototype.get = function (codePoint) { - var ix; - if (codePoint >= 0) { - if (codePoint < 0x0d800 || (codePoint > 0x0dbff && codePoint <= 0x0ffff)) { - // Ordinary BMP code point, excluding leading surrogates. - // BMP uses a single level lookup. BMP index starts at offset 0 in the Trie2 index. - // 16 bit data is stored in the index array itself. - ix = this.index[codePoint >> UTRIE2_SHIFT_2$1]; - ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1); - return this.data[ix]; - } - if (codePoint <= 0xffff) { - // Lead Surrogate Code Point. A Separate index section is stored for - // lead surrogate code units and code points. - // The main index has the code unit data. - // For this function, we need the code point data. - // Note: this expression could be refactored for slightly improved efficiency, but - // surrogate code points will be so rare in practice that it's not worth it. - ix = this.index[UTRIE2_LSCP_INDEX_2_OFFSET$1 + ((codePoint - 0xd800) >> UTRIE2_SHIFT_2$1)]; - ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1); - return this.data[ix]; - } - if (codePoint < this.highStart) { - // Supplemental code point, use two-level lookup. - ix = UTRIE2_INDEX_1_OFFSET$1 - UTRIE2_OMITTED_BMP_INDEX_1_LENGTH$1 + (codePoint >> UTRIE2_SHIFT_1$1); - ix = this.index[ix]; - ix += (codePoint >> UTRIE2_SHIFT_2$1) & UTRIE2_INDEX_2_MASK$1; - ix = this.index[ix]; - ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1); - return this.data[ix]; - } - if (codePoint <= 0x10ffff) { - return this.data[this.highValueIndex]; - } - } - // Fall through. The code point is outside of the legal range of 0..0x10ffff. 
- return this.errorValue; - }; - return Trie; - }()); - - /* - * base64-arraybuffer 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars$3 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$3 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$3 = 0; i$3 < chars$3.length; i$3++) { - lookup$3[chars$3.charCodeAt(i$3)] = i$3; - } - - var base64$1 = 'KwAAAAAAAAAACA4AUD0AADAgAAACAAAAAAAIABAAGABAAEgAUABYAGAAaABgAGgAYgBqAF8AZwBgAGgAcQB5AHUAfQCFAI0AlQCdAKIAqgCyALoAYABoAGAAaABgAGgAwgDKAGAAaADGAM4A0wDbAOEA6QDxAPkAAQEJAQ8BFwF1AH0AHAEkASwBNAE6AUIBQQFJAVEBWQFhAWgBcAF4ATAAgAGGAY4BlQGXAZ8BpwGvAbUBvQHFAc0B0wHbAeMB6wHxAfkBAQIJAvEBEQIZAiECKQIxAjgCQAJGAk4CVgJeAmQCbAJ0AnwCgQKJApECmQKgAqgCsAK4ArwCxAIwAMwC0wLbAjAA4wLrAvMC+AIAAwcDDwMwABcDHQMlAy0DNQN1AD0DQQNJA0kDSQNRA1EDVwNZA1kDdQB1AGEDdQBpA20DdQN1AHsDdQCBA4kDkQN1AHUAmQOhA3UAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AKYDrgN1AHUAtgO+A8YDzgPWAxcD3gPjA+sD8wN1AHUA+wMDBAkEdQANBBUEHQQlBCoEFwMyBDgEYABABBcDSARQBFgEYARoBDAAcAQzAXgEgASIBJAEdQCXBHUAnwSnBK4EtgS6BMIEyAR1AHUAdQB1AHUAdQCVANAEYABgAGAAYABgAGAAYABgANgEYADcBOQEYADsBPQE/AQEBQwFFAUcBSQFLAU0BWQEPAVEBUsFUwVbBWAAYgVgAGoFcgV6BYIFigWRBWAAmQWfBaYFYABgAGAAYABgAKoFYACxBbAFuQW6BcEFwQXHBcEFwQXPBdMF2wXjBeoF8gX6BQIGCgYSBhoGIgYqBjIGOgZgAD4GRgZMBmAAUwZaBmAAYABgAGAAYABgAGAAYABgAGAAYABgAGIGYABpBnAGYABgAGAAYABgAGAAYABgAGAAYAB4Bn8GhQZgAGAAYAB1AHcDFQSLBmAAYABgAJMGdQA9A3UAmwajBqsGqwaVALMGuwbDBjAAywbSBtIG1QbSBtIG0gbSBtIG0gbdBuMG6wbzBvsGAwcLBxMHAwcbByMHJwcsBywHMQcsB9IGOAdAB0gHTgfSBkgHVgfSBtIG0gbSBtIG0gbSBtIG0gbSBiwHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAdgAGAALAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsByw
HLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAdbB2MHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsB2kH0gZwB64EdQB1AHUAdQB1AHUAdQB1AHUHfQdgAIUHjQd1AHUAlQedB2AAYAClB6sHYACzB7YHvgfGB3UAzgfWBzMB3gfmB1EB7gf1B/0HlQENAQUIDQh1ABUIHQglCBcDLQg1CD0IRQhNCEEDUwh1AHUAdQBbCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIcAh3CHoIMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIgggwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAALAcsBywHLAcsBywHLAcsBywHLAcsB4oILAcsB44I0gaWCJ4Ipgh1AHUAqgiyCHUAdQB1AHUAdQB1AHUAdQB1AHUAtwh8AXUAvwh1AMUIyQjRCNkI4AjoCHUAdQB1AO4I9gj+CAYJDgkTCS0HGwkjCYIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiAAIAAAAFAAYABgAGIAXwBgAHEAdQBFAJUAogCyAKAAYABgAEIA4ABGANMA4QDxAMEBDwE1AFwBLAE6AQEBUQF4QkhCmEKoQrhCgAHIQsAB0MLAAcABwAHAAeDC6ABoAHDCwMMAAcABwAHAAdDDGMMAAcAB6MM4wwjDWMNow3jDaABoAGgAaABoAGgAaABoAGgAaABoAGgAaABoAGgAaABoAGgAaABoAEjDqABWw6bDqABpg6gAaABoAHcDvwOPA+gAaABfA/8DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DpcPAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAA
cABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcAB9cPKwkyCToJMAB1AHUAdQBCCUoJTQl1AFUJXAljCWcJawkwADAAMAAwAHMJdQB2CX4JdQCECYoJjgmWCXUAngkwAGAAYABxAHUApgn3A64JtAl1ALkJdQDACTAAMAAwADAAdQB1AHUAdQB1AHUAdQB1AHUAowYNBMUIMAAwADAAMADICcsJ0wnZCRUE4QkwAOkJ8An4CTAAMAB1AAAKvwh1AAgKDwoXCh8KdQAwACcKLgp1ADYKqAmICT4KRgowADAAdQB1AE4KMAB1AFYKdQBeCnUAZQowADAAMAAwADAAMAAwADAAMAAVBHUAbQowADAAdQC5CXUKMAAwAHwBxAijBogEMgF9CoQKiASMCpQKmgqIBKIKqgquCogEDQG2Cr4KxgrLCjAAMADTCtsKCgHjCusK8Qr5CgELMAAwADAAMAB1AIsECQsRC3UANAEZCzAAMAAwADAAMAB1ACELKQswAHUANAExCzkLdQBBC0kLMABRC1kLMAAwADAAMAAwADAAdQBhCzAAMAAwAGAAYABpC3ELdwt/CzAAMACHC4sLkwubC58Lpwt1AK4Ltgt1APsDMAAwADAAMAAwADAAMAAwAL4LwwvLC9IL1wvdCzAAMADlC+kL8Qv5C/8LSQswADAAMAAwADAAMAAwADAAMAAHDDAAMAAwADAAMAAODBYMHgx1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1ACYMMAAwADAAdQB1AHUALgx1AHUAdQB1AHUAdQA2DDAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AD4MdQBGDHUAdQB1AHUAdQB1AEkMdQB1AHUAdQB1AFAMMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQBYDHUAdQB1AF8MMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUA+wMVBGcMMAAwAHwBbwx1AHcMfwyHDI8MMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAYABgAJcMMAAwADAAdQB1AJ8MlQClDDAAMACtDCwHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsB7UMLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AA0EMAC9DDAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAsBywHLAcsBywHLAcsBywHLQcwAMEMyAwsBywHLAcsBywHLAcsBywHLAcsBywHzAwwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAHUAdQB1ANQM2QzhDDAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMABgAGAAYABgAGAAYABgAOkMYADxDGAA+AwADQYNYABhCWAAYAAODTAAMAAwADAAFg1gAGAAHg37AzAAMAAwADAAYABgACYNYAAsDTQNPA1gAEMNPg1LDWAAYABgAGAAYABgAGAAYABgAGAAUg1aDYsGVglhDV0NcQBnDW0NdQ15DWAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAlQCBDZUAiA2PDZcNMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAnw2nDTAAMAAwADAAMAAwAHUArw23DTAAMAAwADAAMAAwADAAMAAwADAAMAB1AL8NMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAB1AHUAdQB1AHUAdQDHDTAAYABgAM8NMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAA1w11ANwNMAAwAD0B5A0wADAAMAAwADAAMADsDfQN/A0EDgwOFA4wABsOMAAwADAAMAAwADAAMAAwANIG0gbSBtIG0gbSBtIG0gYjDigOwQUuDsEFMw7SBjoO0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGQg5KDlIOVg7SBtIGXg5lDm0OdQ7SBtIGfQ6EDooOjQ6UDtIGmg6hDtIG0gaoDqwO0ga0DrwO0gZgAGAAYADEDmAAYAAkBtIGzA5gANIOYADaDokO0gbSBt8O5w7SBu8O0gb1DvwO0gZgAGAAxA7SBtIG0gbSBtIGYABgAGAAYAAED2AAsAUMD9IG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGFA8sBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAccD9IGLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHJA8sBywHLAcsBywHLAccDywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsB
ywPLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAc0D9IG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAccD9IG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGFA8sBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHPA/SBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gYUD0QPlQCVAJUAMAAwADAAMACVAJUAlQCVAJUAlQCVAEwPMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAA//8EAAQABAAEAAQABAAEAAQABAANAAMAAQABAAIABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQACgATABcAHgAbABoAHgAXABYAEgAeABsAGAAPABgAHABLAEsASwBLAEsASwBLAEsASwBLABgAGAAeAB4AHgATAB4AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQABYAGwASAB4AHgAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAWAA0AEQAeAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAFAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAJABYAGgAbABsAGwAeAB0AHQAeAE8AFwAeAA0AHgAeABoAGwBPAE8ADgBQAB0AHQAdAE8ATwAXAE8ATwBPABYAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAFAAUABQAFAAUABQAFAAUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAB4AHgAeAFAATwBAAE8ATwBPAEAATwBQAFAATwBQAB4AHgAeAB4AHgAeAB0AHQAdAB0AHgAdAB4ADgBQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgBQAB4AUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAJAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAkACQAJAAkACQAJAAkABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgAeAFAAHgAeAB4AKwArAFAAUABQAFAAGABQACsAKwArACsAHgAeAFAAHgBQAFAAUAArAFAAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAUAAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAYAA0AKwArAB4AHgAbACsABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQADQAEAB4ABAAEAB4ABAAEABMABAArACsAKwArACsAKwArACsAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAKwArACsAKwBWAFYAVgBWAB4A
HgArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AGgAaABoAGAAYAB4AHgAEAAQABAAEAAQABAAEAAQABAAEAAQAEwAEACsAEwATAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABLAEsASwBLAEsASwBLAEsASwBLABoAGQAZAB4AUABQAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQABMAUAAEAAQABAAEAAQABAAEAB4AHgAEAAQABAAEAAQABABQAFAABAAEAB4ABAAEAAQABABQAFAASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUAAeAB4AUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAFAABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQAUABQAB4AHgAYABMAUAArACsABAAbABsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAFAABAAEAAQABAAEAFAABAAEAAQAUAAEAAQABAAEAAQAKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAArACsAHgArAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAB4ABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAUAAEAAQABAAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAABAAEAA0ADQBLAEsASwBLAEsASwBLAEsASwBLAB4AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAArAFAAUABQAFAAUABQAFAAUAArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUAArACsAKwBQAFAAUABQACsAKwAEAFAABAAEAAQABAAEAAQABAArACsABAAEACsAKwAEAAQABABQACsAKwArACsAKwArACsAKwAEACsAKwArACsAUABQACsAUABQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAFAAUAAaABoAUABQAFAAUABQAEwAHgAbAFAAHgAEACsAKwAEAAQABAArAFAAUABQAFAAUABQACsAKwArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQACsAUABQACsAKwAEACsABAAEAAQABAAEACsAKwArACsABAAEACsAKwAEAAQABAArACsAKwAEACsAKwArACsAKwArACsAUABQAFAAUAArAFAAKwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLAAQABABQAFAAUAAEAB4AKwArACsAKwArACsAKwArACsAKwAEAAQABAArAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQACsAKwAEAFAABAAEAAQABAAEAAQABAAEACsABAAEAAQAKwAEAAQABAArACsAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAB4AGwArACsAKwArACsAKwArAFAABAAEAAQABAAEAAQAKwAEAAQABAArAFAAUABQAFAAUABQAFAAUAArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAArACsABAAEACsAKwAEAAQABAArACsAKwArACsAKwArAAQABAAEACsAKwArACsAUABQACsAUABQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAB4AUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArAAQAUAArAFAAUABQAFAAUABQACsAKwArAFAAUABQACsAUABQAFAAUAArACsAKwBQAFAAKwBQACsAUABQACsAKwArAFAAUAArACsAKwBQAFAAUAArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArAAQABAAEAAQABAArACsAKwAEAAQABAArAAQABAAEAAQAKwArAFAAKwArACsAKwArACsABAArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAUABQAFAAHgAeAB4AHgAeAB4AGwAeACsAKwArACsAKwAEAAQABAAEAAQAUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUAB
QAFAAUABQAFAAKwArACsAUAAEAAQABAAEAAQABAAEACsABAAEAAQAKwAEAAQABAAEACsAKwArACsAKwArACsABAAEACsAUABQAFAAKwArACsAKwArAFAAUAAEAAQAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAKwAOAFAAUABQAFAAUABQAFAAHgBQAAQABAAEAA4AUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAKwArAAQAUAAEAAQABAAEAAQABAAEACsABAAEAAQAKwAEAAQABAAEACsAKwArACsAKwArACsABAAEACsAKwArACsAKwArACsAUAArAFAAUAAEAAQAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwBQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAFAABAAEAAQABAAEAAQABAArAAQABAAEACsABAAEAAQABABQAB4AKwArACsAKwBQAFAAUAAEAFAAUABQAFAAUABQAFAAUABQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAFAAUABQAFAAUABQAFAAUABQABoAUABQAFAAUABQAFAAKwAEAAQABAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQACsAUAArACsAUABQAFAAUABQAFAAUAArACsAKwAEACsAKwArACsABAAEAAQABAAEAAQAKwAEACsABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArAAQABAAeACsAKwArACsAKwArACsAKwArACsAKwArAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAAqAFwAXAAqACoAKgAqACoAKgAqACsAKwArACsAGwBcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAeAEsASwBLAEsASwBLAEsASwBLAEsADQANACsAKwArACsAKwBcAFwAKwBcACsAXABcAFwAXABcACsAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACsAXAArAFwAXABcAFwAXABcAFwAXABcAFwAKgBcAFwAKgAqACoAKgAqACoAKgAqACoAXAArACsAXABcAFwAXABcACsAXAArACoAKgAqACoAKgAqACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwBcAFwAXABcAFAADgAOAA4ADgAeAA4ADgAJAA4ADgANAAkAEwATABMAEwATAAkAHgATAB4AHgAeAAQABAAeAB4AHgAeAB4AHgBLAEsASwBLAEsASwBLAEsASwBLAFAAUABQAFAAUABQAFAAUABQAFAADQAEAB4ABAAeAAQAFgARABYAEQAEAAQAUABQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQADQAEAAQABAAEAAQADQAEAAQAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABAArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArAA0ADQAeAB4AHgAeAB4AHgAEAB4AHgAeAB4AHgAeACsAHgAeAA4ADgANAA4AHgAeAB4AHgAeAAkACQArACsAKwArACsAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgBcAEsASwBLAEsASwBLAEsASwBLAEsADQANAB4AHgAeAB4AXABcAFwAXABcAFwAKgAqACoAKgBcAFwAXABcACoAKgAqAFwAKgAqACoAXABcACoAKgAqACoAKgAqACoAXABcAFwAKgAqACoAKgBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAqACoAKgAqAFwAKgBLAEsASwBLAEsASwBLAEsASwBLACoAKgAqACoAKgAqAFAAUABQAFAAUABQACsAUAArACsAKwArACsAUAArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgBQAFAAUABQAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAUAArACsAUABQAFAAUABQAFAAUAArAFAAKwBQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAKwArAFAAUABQAFAAUABQAFAAKwBQACsAUABQAFAAUAArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsABAAEAAQAHgANAB4AHgAeAB4AHgAeAB4AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArAC
sAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUAArACsADQBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAANAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAWABEAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAA0ADQANAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAAQABAAEACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAANAA0AKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUAArAAQABAArACsAKwArACsAKwArACsAKwArACsAKwBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqAA0ADQAVAFwADQAeAA0AGwBcACoAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwAeAB4AEwATAA0ADQAOAB4AEwATAB4ABAAEAAQACQArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUAAEAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAArACsAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAHgArACsAKwATABMASwBLAEsASwBLAEsASwBLAEsASwBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAArACsAXABcAFwAXABcACsAKwArACsAKwArACsAKwArACsAKwBcAFwAXABcAFwAXABcAFwAXABcAFwAXAArACsAKwArAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAXAArACsAKwAqACoAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAArACsAHgAeAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAqACoAKwAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKwArAAQASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArACoAKgAqACoAKgAqACoAXAAqACoAKgAqACoAKgArACsABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsABAAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABABQAFAAUABQAFAAUABQACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwANAA0AHgANAA0ADQANAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAEAAQABAAEAAQAHgAeAB4AHgAeAB4AHgAeAB4AKwArACsABAAEAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwAeAB4AHgAeAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArAA0ADQANAA0ADQBLAEsASwBLAEsASwBLAEsASwBLACsAKwArAFAAUABQAEsASwBLAEsASwBLAEsASwBLAEsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAA0ADQBQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUAAeAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArAAQABAAEAB4ABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAAQAUABQAFAAUABQAFAABABQAFAABAAEAAQAUAArACsAKwArACsABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsABAAEAAQABAAEAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAKwBQACsAUAArAFAAKwAeAB4AHgAeAB4AHgAeAB4AH
gAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArAB4AHgAeAB4AHgAeAB4AHgBQAB4AHgAeAFAAUABQACsAHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQACsAKwAeAB4AHgAeAB4AHgArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArAFAAUABQACsAHgAeAB4AHgAeAB4AHgAOAB4AKwANAA0ADQANAA0ADQANAAkADQANAA0ACAAEAAsABAAEAA0ACQANAA0ADAAdAB0AHgAXABcAFgAXABcAFwAWABcAHQAdAB4AHgAUABQAFAANAAEAAQAEAAQABAAEAAQACQAaABoAGgAaABoAGgAaABoAHgAXABcAHQAVABUAHgAeAB4AHgAeAB4AGAAWABEAFQAVABUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ADQAeAA0ADQANAA0AHgANAA0ADQAHAB4AHgAeAB4AKwAEAAQABAAEAAQABAAEAAQABAAEAFAAUAArACsATwBQAFAAUABQAFAAHgAeAB4AFgARAE8AUABPAE8ATwBPAFAAUABQAFAAUAAeAB4AHgAWABEAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArABsAGwAbABsAGwAbABsAGgAbABsAGwAbABsAGwAbABsAGwAbABsAGwAbABsAGgAbABsAGwAbABoAGwAbABoAGwAbABsAGwAbABsAGwAbABsAGwAbABsAGwAbABsAGwAbAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAHgAeAFAAGgAeAB0AHgBQAB4AGgAeAB4AHgAeAB4AHgAeAB4AHgBPAB4AUAAbAB4AHgBQAFAAUABQAFAAHgAeAB4AHQAdAB4AUAAeAFAAHgBQAB4AUABPAFAAUAAeAB4AHgAeAB4AHgAeAFAAUABQAFAAUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAAHgBQAFAAUABQAE8ATwBQAFAAUABQAFAATwBQAFAATwBQAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAFAAUABQAFAATwBPAE8ATwBPAE8ATwBPAE8ATwBQAFAAUABQAFAAUABQAFAAUAAeAB4AUABQAFAAUABPAB4AHgArACsAKwArAB0AHQAdAB0AHQAdAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB4AHQAdAB4AHgAeAB0AHQAeAB4AHQAeAB4AHgAdAB4AHQAbABsAHgAdAB4AHgAeAB4AHQAeAB4AHQAdAB0AHQAeAB4AHQAeAB0AHgAdAB0AHQAdAB0AHQAeAB0AHgAeAB4AHgAeAB0AHQAdAB0AHgAeAB4AHgAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB4AHgAeAB0AHgAeAB4AHgAeAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHgAeAB0AHQAdAB0AHgAeAB0AHQAeAB4AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHQAeAB4AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHQAeAB4AHgAdAB4AHgAeAB4AHgAeAB4AHQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AFAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeABYAEQAWABEAHgAeAB4AHgAeAB4AHQAeAB4AHgAeAB4AHgAeACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAWABEAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAFAAHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHgAeAB4AHgAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAeAB4AHQAdAB0AHQAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHQAeAB0AHQAdAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB0AHQAeAB4AHQAdAB4AHgAeAB4AHQAdAB4AHgAeAB4AHQAdAB0AHgAeAB0AHgAeAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlAB4AHQAdAB4AHgAdAB4AHgAeAB4AHQAdAB4AHgAeAB4AJQAlAB0AHQAlAB4AJQAlACUAIAAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAeAB4AHgAeAB0AHgAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHgAdAB0AHQAeAB0AJQAdAB0AHgAdAB0AHgAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAdAB0AHQAdACUAHgAlACUAJQAdACUAJQAdAB0AHQAlACUAHQAdACUAHQAdACUAJQAlAB4AHQAe
AB4AHgAeAB0AHQAlAB0AHQAdAB0AHQAdACUAJQAlACUAJQAdACUAJQAgACUAHQAdACUAJQAlACUAJQAlACUAJQAeAB4AHgAlACUAIAAgACAAIAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAeAB4AFwAXABcAFwAXABcAHgATABMAJQAeAB4AHgAWABEAFgARABYAEQAWABEAFgARABYAEQAWABEATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeABYAEQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAWABEAFgARABYAEQAWABEAFgARAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AFgARABYAEQAWABEAFgARABYAEQAWABEAFgARABYAEQAWABEAFgARABYAEQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAWABEAFgARAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AFgARAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AUABQAFAAUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAEAAQABAAeAB4AKwArACsAKwArABMADQANAA0AUAATAA0AUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAUAANACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAA0ADQANAA0ADQANAA0ADQAeAA0AFgANAB4AHgAXABcAHgAeABcAFwAWABEAFgARABYAEQAWABEADQANAA0ADQATAFAADQANAB4ADQANAB4AHgAeAB4AHgAMAAwADQANAA0AHgANAA0AFgANAA0ADQANAA0ADQANAA0AHgANAB4ADQANAB4AHgAeACsAKwArACsAKwArACsAKwArACsAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwArACsAKwArACsAKwArACsAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAlACUAJQAlACUAJQAlACUAJQAlACUAJQArACsAKwArAA0AEQARACUAJQBHAFcAVwAWABEAFgARABYAEQAWABEAFgARACUAJQAWABEAFgARABYAEQAWABEAFQAWABEAEQAlAFcAVwBXAFcAVwBXAFcAVwBXAAQABAAEAAQABAAEACUAVwBXAFcAVwA2ACUAJQBXAFcAVwBHAEcAJQAlACUAKwBRAFcAUQBXAFEAVwBRAFcAUQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFEAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBRAFcAUQBXAFEAVwBXAFcAVwBXAFcAUQBXAFcAVwBXAFcAVwBRAFEAKwArAAQABAAVABUARwBHAFcAFQBRAFcAUQBXAFEAVwBRAFcAUQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFEAVwBRAFcAUQBXAFcAVwBXAFcAVwBRAFcAVwBXAFcAVwBXAFEAUQBXAFcAVwBXABUAUQBHAEcAVwArACsAKwArACsAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAKwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAKwAlACUAVwBXAFcAVwAlACUAJQAlACUAJQAlACUAJQAlACsAKwArACsAKwArACsAKwArACsAKwArAFEAUQBRAFEAUQBRAFEAUQBRAFEAUQBRAFEAUQBRAFEAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQArAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBPAE8ATwBPAE8ATwBPAE8AJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACUAJQAlAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAEcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAADQATAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABLAEsASwBLAEsASwBLAEsASwBLAFAAUAArACs
AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAABAAEAAQABAAeAAQABAAEAAQABAAEAAQABAAEAAQAHgBQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AUABQAAQABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAeAA0ADQANAA0ADQArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAB4AHgAeAB4AHgAeAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAHgAeAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAeAB4AUABQAFAAUABQAFAAUABQAFAAUABQAAQAUABQAFAABABQAFAAUABQAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAeAB4AHgAeAAQAKwArACsAUABQAFAAUABQAFAAHgAeABoAHgArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAADgAOABMAEwArACsAKwArACsAKwArACsABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwANAA0ASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArACsAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAFAAUAAeAB4AHgBQAA4AUABQAAQAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAA0ADQBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArACsAKwArACsAKwArAB4AWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYACsAKwArAAQAHgAeAB4AHgAeAB4ADQANAA0AHgAeAB4AHgArAFAASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArAB4AHgBcAFwAXABcAFwAKgBcAFwAXABcAFwAXABcAFwAXABcAEsASwBLAEsASwBLAEsASwBLAEsAXABcAFwAXABcACsAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwArAFAAUABQAAQAUABQAFAAUABQAFAAUABQAAQABAArACsASwBLAEsASwBLAEsASwBLAEsASwArACsAHgANAA0ADQBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAKgAqACoAXAAqACoAKgBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAAqAFwAKgAqACoAXABcACoAKgBcAFwAXABcAFwAKgAqAFwAKgBcACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFwAXABcACoAKgBQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAA0ADQBQAFAAUAAEAAQAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUAArACsAUABQAFAAUABQAFAAKwArAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQADQAEAAQAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAVABVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBUAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVACsAKwArACsAKwArACsAKwArACsAKwArAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAKwArACsAKwBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAKwArACsAKwAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAKwArACsAKwArAFYABABWAFYAVgBWAFYAVgBWAFYAVgBWAB4AVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgArAFYAVgBWAFYAVgArAFYAKwBWAFYAKwBWAFYAKwBWAFYAVgBWAFYAVgBWAFYAVgBWAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAEQAWAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUA
BQAFAAUABQAFAAUABQAFAAUAAaAB4AKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAGAARABEAGAAYABMAEwAWABEAFAArACsAKwArACsAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACUAJQAlACUAJQAWABEAFgARABYAEQAWABEAFgARABYAEQAlACUAFgARACUAJQAlACUAJQAlACUAEQAlABEAKwAVABUAEwATACUAFgARABYAEQAWABEAJQAlACUAJQAlACUAJQAlACsAJQAbABoAJQArACsAKwArAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAcAKwATACUAJQAbABoAJQAlABYAEQAlACUAEQAlABEAJQBXAFcAVwBXAFcAVwBXAFcAVwBXABUAFQAlACUAJQATACUAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXABYAJQARACUAJQAlAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwAWACUAEQAlABYAEQARABYAEQARABUAVwBRAFEAUQBRAFEAUQBRAFEAUQBRAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAEcARwArACsAVwBXAFcAVwBXAFcAKwArAFcAVwBXAFcAVwBXACsAKwBXAFcAVwBXAFcAVwArACsAVwBXAFcAKwArACsAGgAbACUAJQAlABsAGwArAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwAEAAQABAAQAB0AKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsADQANAA0AKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAB4AHgAeAB4AHgAeAB4AHgAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAAQAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAA0AUABQAFAAUAArACsAKwArAFAAUABQAFAAUABQAFAAUAANAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwAeACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAKwArAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUAArACsAKwBQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwANAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAB4AUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAUABQAFAAUABQAAQABAAEACsABAAEACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAKwBQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEACsAKwArACsABABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArA
A0ADQANAA0ADQANAA0ADQAeACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAArACsAKwArAFAAUABQAFAAUAANAA0ADQANAA0ADQAUACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsADQANAA0ADQANAA0ADQBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAAQABAAEAAQAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArAAQABAANACsAKwBQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAB4AHgAeAB4AHgArACsAKwArACsAKwAEAAQABAAEAAQABAAEAA0ADQAeAB4AHgAeAB4AKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgANAA0ADQANACsAKwArACsAKwArACsAKwArACsAKwAeACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsASwBLAEsASwBLAEsASwBLAEsASwANAA0ADQANAFAABAAEAFAAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAeAA4AUAArACsAKwArACsAKwArACsAKwAEAFAAUABQAFAADQANAB4ADQAEAAQABAAEAB4ABAAEAEsASwBLAEsASwBLAEsASwBLAEsAUAAOAFAADQANAA0AKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAANAA0AHgANAA0AHgAEACsAUABQAFAAUABQAFAAUAArAFAAKwBQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAA0AKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsABAAEAAQABAArAFAAUABQAFAAUABQAFAAUAArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQACsABAAEAFAABAAEAAQABAAEAAQABAArACsABAAEACsAKwAEAAQABAArACsAUAArACsAKwArACsAKwAEACsAKwArACsAKwBQAFAAUABQAFAABAAEACsAKwAEAAQABAAEAAQABAAEACsAKwArAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwArACsABAAEAAQABAAEAAQABABQAFAAUABQAA0ADQANAA0AHgBLAEsASwBLAEsASwBLAEsASwBLAA0ADQArAB4ABABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAFAAUAAeAFAAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAArACsABAAEAAQABAAEAAQABAAEAAQADgANAA0AEwATAB4AHgAeAA0ADQANAA0ADQANAA0ADQANAA0ADQANAA0ADQANAFAAUABQAFAABAAEACsAKwAEAA0ADQAeAFAAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAFAAKwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAKwArACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwBcAFwADQANAA0AKgBQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeACsA
KwArACsASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAKwArAFAAKwArAFAAUABQAFAAUABQAFAAUAArAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQAKwAEAAQAKwArAAQABAAEAAQAUAAEAFAABAAEAA0ADQANACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAArACsABAAEAAQABAAEAAQABABQAA4AUAAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAFAABAAEAAQABAAOAB4ADQANAA0ADQAOAB4ABAArACsAKwArACsAKwArACsAUAAEAAQABAAEAAQABAAEAAQABAAEAAQAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAA0ADQANAFAADgAOAA4ADQANACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEACsABAAEAAQABAAEAAQABAAEAFAADQANAA0ADQANACsAKwArACsAKwArACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwAOABMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAArACsAKwAEACsABAAEACsABAAEAAQABAAEAAQABABQAAQAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAKwBQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQAKwAEAAQAKwAEAAQABAAEAAQAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAaABoAGgAaAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsADQANAA0ADQANACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAASABIAEgAQwBDAEMAUABQAFAAUABDAFAAUABQAEgAQwBIAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAASABDAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwAJAAkACQAJAAkACQAJABYAEQArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABIAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwANAA0AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEAAQABAANACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAA0ADQANAB4AHgAeAB4AHgAeAFAAUABQAFAADQAeACsAKwArACsAKwArACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwArAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAANAA0AHgAeACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwAEAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAARwBHABUARwAJACsAKwArACsAKwArACsAKwArACsAKwAEAAQAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFcAVwB
XAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACsAKwArACsAKwArACsAKwBXAFcAVwBXAFcAVwBXAFcAVwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUQBRAFEAKwArACsAKwArACsAKwArACsAKwArACsAKwBRAFEAUQBRACsAKwArACsAKwArACsAKwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArACsAHgAEAAQADQAEAAQABAAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArAB4AHgAeAB4AHgAeAB4AKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAAQABAAEAAQABAAeAB4AHgAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAB4AHgAEAAQABAAEAAQABAAEAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQAHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwBQAFAAKwArAFAAKwArAFAAUAArACsAUABQAFAAUAArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAUAArAFAAUABQAFAAUABQAFAAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwBQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAHgAeAFAAUABQAFAAUAArAFAAKwArACsAUABQAFAAUABQAFAAUAArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeACsAKwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgAeAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4ABAAeAB4AHgAeAB4AHgAeAB4AHgAeAAQAHgAeAA0ADQANAA0AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAAQAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArAAQABAAEAAQABAAEAAQAKwAEAAQAKwAEAAQABAAEAAQAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwAEAAQABAAEAAQABAAEAFAAUABQAFAAUABQAFAAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwBQAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArABsAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwArAB4AHgAeAB4ABAAEAAQABAAEAAQABABQACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArABYAFgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAGgBQAFAAUAAaAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAKwBQACsAKwBQACsAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAKwBQACsAUAArACsAKwArACsAKwBQACsAKwArACsAUAArAFAAKwBQACsAUABQAFAAKwBQAFAAKwBQACsAKwBQACsAUAArAF
AAKwBQACsAUAArAFAAUAArAFAAKwArAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQAFAAUAArAFAAUABQAFAAKwBQACsAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAUABQAFAAKwBQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8AJQAlACUAHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHgAeAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB4AHgAeACUAJQAlAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlAB4AHgAlACUAJQAlACUAHgAlACUAJQAlACUAIAAgACAAJQAlACAAJQAlACAAIAAgACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACEAIQAhACEAIQAlACUAIAAgACUAJQAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlACUAIAAlACUAJQAlACAAIAAgACUAIAAgACAAJQAlACUAJQAlACUAJQAgACUAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAlAB4AJQAeACUAJQAlACUAJQAgACUAJQAlACUAHgAlAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAgACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACAAIAAgACUAJQAlACAAIAAgACAAIAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeABcAFwAXABUAFQAVAB4AHgAeAB4AJQAlACUAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAgACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAgACUAJQAgACUAJQAlACUAJQAlACUAJQAgACAAIAAgACAAIAAgACAAJQAlACUAJQAlACUAIAAlACUAJQAlACUAJQAlACUAJQAgACAAIAAgACAAIAAgACAAIAAgACUAJQAgACAAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAgACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAlACAAIAAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAgACAAIAAlACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwArAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwAlACUAJQAlACUAJQAlACUAJQAlACUAVwBXACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAK
wArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAA=='; - - var LETTER_NUMBER_MODIFIER = 50; - // Non-tailorable Line Breaking Classes - var BK = 1; // Cause a line break (after) - var CR$1 = 2; // Cause a line break (after), except between CR and LF - var LF$1 = 3; // Cause a line break (after) - var CM = 4; // Prohibit a line break between the character and the preceding character - var NL = 5; // Cause a line break (after) - var WJ = 7; // Prohibit line breaks before and after - var ZW = 8; // Provide a break opportunity - var GL = 9; // Prohibit line breaks before and after - var SP = 10; // Enable indirect line breaks - var ZWJ$1 = 11; // Prohibit line breaks within joiner sequences - // Break Opportunities - var B2 = 12; // Provide a line break opportunity before and after the character - var BA = 13; // Generally provide a line break opportunity after the character - var BB = 14; // Generally provide a line break opportunity before the character - var HY = 15; // Provide a line break opportunity after the character, except in numeric context - var CB = 16; // Provide a line break opportunity contingent on additional information - // Characters Prohibiting Certain Breaks - var CL = 17; // Prohibit line breaks before - var CP = 18; // Prohibit line breaks before - var EX = 19; // Prohibit line breaks before - var IN = 20; // Allow only indirect line breaks between pairs - var NS = 21; // Allow only indirect line breaks before - var OP = 22; // Prohibit line breaks after - var QU = 23; // Act like they are both opening and closing - // Numeric Context - var IS = 24; // Prevent breaks after any and before numeric - var NU = 25; // Form numeric expressions for line breaking purposes - var PO = 26; // Do not break following a numeric expression - var PR = 27; // Do not break in front of a numeric expression - var SY = 28; // Prevent a break before; and allow a break after - // Other Characters - var AI = 29; // Act like AL when the resolvedEAW is N; otherwise; act as ID - var AL = 30; // Are alphabetic characters or symbols that are used with alphabetic characters - var CJ = 31; // Treat as NS or ID for strict or normal breaking. - var EB = 32; // Do not break from following Emoji Modifier - var EM = 33; // Do not break from preceding Emoji Base - var H2 = 34; // Form Korean syllable blocks - var H3 = 35; // Form Korean syllable blocks - var HL = 36; // Do not break around a following hyphen; otherwise act as Alphabetic - var ID = 37; // Break before or after; except in some numeric context - var JL = 38; // Form Korean syllable blocks - var JV = 39; // Form Korean syllable blocks - var JT = 40; // Form Korean syllable blocks - var RI$1 = 41; // Keep pairs together. 
For pairs; break before and after other classes - var SA = 42; // Provide a line break opportunity contingent on additional, language-specific context analysis - var XX = 43; // Have as yet unknown line breaking behavior or unassigned code positions - var ea_OP = [0x2329, 0xff08]; - var BREAK_MANDATORY = '!'; - var BREAK_NOT_ALLOWED$1 = '×'; - var BREAK_ALLOWED$1 = '÷'; - var UnicodeTrie$1 = createTrieFromBase64$1(base64$1); - var ALPHABETICS = [AL, HL]; - var HARD_LINE_BREAKS = [BK, CR$1, LF$1, NL]; - var SPACE$1 = [SP, ZW]; - var PREFIX_POSTFIX = [PR, PO]; - var LINE_BREAKS = HARD_LINE_BREAKS.concat(SPACE$1); - var KOREAN_SYLLABLE_BLOCK = [JL, JV, JT, H2, H3]; - var HYPHEN = [HY, BA]; - var codePointsToCharacterClasses = function (codePoints, lineBreak) { - if (lineBreak === void 0) { lineBreak = 'strict'; } - var types = []; - var indices = []; - var categories = []; - codePoints.forEach(function (codePoint, index) { - var classType = UnicodeTrie$1.get(codePoint); - if (classType > LETTER_NUMBER_MODIFIER) { - categories.push(true); - classType -= LETTER_NUMBER_MODIFIER; - } - else { - categories.push(false); - } - if (['normal', 'auto', 'loose'].indexOf(lineBreak) !== -1) { - // U+2010, – U+2013, 〜 U+301C, ゠ U+30A0 - if ([0x2010, 0x2013, 0x301c, 0x30a0].indexOf(codePoint) !== -1) { - indices.push(index); - return types.push(CB); - } - } - if (classType === CM || classType === ZWJ$1) { - // LB10 Treat any remaining combining mark or ZWJ as AL. - if (index === 0) { - indices.push(index); - return types.push(AL); - } - // LB9 Do not break a combining character sequence; treat it as if it has the line breaking class of - // the base character in all of the following rules. Treat ZWJ as if it were CM. - var prev = types[index - 1]; - if (LINE_BREAKS.indexOf(prev) === -1) { - indices.push(indices[index - 1]); - return types.push(prev); - } - indices.push(index); - return types.push(AL); - } - indices.push(index); - if (classType === CJ) { - return types.push(lineBreak === 'strict' ? NS : ID); - } - if (classType === SA) { - return types.push(AL); - } - if (classType === AI) { - return types.push(AL); - } - // For supplementary characters, a useful default is to treat characters in the range 10000..1FFFD as AL - // and characters in the ranges 20000..2FFFD and 30000..3FFFD as ID, until the implementation can be revised - // to take into account the actual line breaking properties for these characters. - if (classType === XX) { - if ((codePoint >= 0x20000 && codePoint <= 0x2fffd) || (codePoint >= 0x30000 && codePoint <= 0x3fffd)) { - return types.push(ID); - } - else { - return types.push(AL); - } - } - types.push(classType); - }); - return [indices, types, categories]; - }; - var isAdjacentWithSpaceIgnored = function (a, b, currentIndex, classTypes) { - var current = classTypes[currentIndex]; - if (Array.isArray(a) ? a.indexOf(current) !== -1 : a === current) { - var i = currentIndex; - while (i <= classTypes.length) { - i++; - var next = classTypes[i]; - if (next === b) { - return true; - } - if (next !== SP) { - break; - } - } - } - if (current === SP) { - var i = currentIndex; - while (i > 0) { - i--; - var prev = classTypes[i]; - if (Array.isArray(a) ? 
a.indexOf(prev) !== -1 : a === prev) { - var n = currentIndex; - while (n <= classTypes.length) { - n++; - var next = classTypes[n]; - if (next === b) { - return true; - } - if (next !== SP) { - break; - } - } - } - if (prev !== SP) { - break; - } - } - } - return false; - }; - var previousNonSpaceClassType = function (currentIndex, classTypes) { - var i = currentIndex; - while (i >= 0) { - var type = classTypes[i]; - if (type === SP) { - i--; - } - else { - return type; - } - } - return 0; - }; - var _lineBreakAtIndex = function (codePoints, classTypes, indicies, index, forbiddenBreaks) { - if (indicies[index] === 0) { - return BREAK_NOT_ALLOWED$1; - } - var currentIndex = index - 1; - if (Array.isArray(forbiddenBreaks) && forbiddenBreaks[currentIndex] === true) { - return BREAK_NOT_ALLOWED$1; - } - var beforeIndex = currentIndex - 1; - var afterIndex = currentIndex + 1; - var current = classTypes[currentIndex]; - // LB4 Always break after hard line breaks. - // LB5 Treat CR followed by LF, as well as CR, LF, and NL as hard line breaks. - var before = beforeIndex >= 0 ? classTypes[beforeIndex] : 0; - var next = classTypes[afterIndex]; - if (current === CR$1 && next === LF$1) { - return BREAK_NOT_ALLOWED$1; - } - if (HARD_LINE_BREAKS.indexOf(current) !== -1) { - return BREAK_MANDATORY; - } - // LB6 Do not break before hard line breaks. - if (HARD_LINE_BREAKS.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB7 Do not break before spaces or zero width space. - if (SPACE$1.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB8 Break before any character following a zero-width space, even if one or more spaces intervene. - if (previousNonSpaceClassType(currentIndex, classTypes) === ZW) { - return BREAK_ALLOWED$1; - } - // LB8a Do not break after a zero width joiner. - if (UnicodeTrie$1.get(codePoints[currentIndex]) === ZWJ$1) { - return BREAK_NOT_ALLOWED$1; - } - // zwj emojis - if ((current === EB || current === EM) && UnicodeTrie$1.get(codePoints[afterIndex]) === ZWJ$1) { - return BREAK_NOT_ALLOWED$1; - } - // LB11 Do not break before or after Word joiner and related characters. - if (current === WJ || next === WJ) { - return BREAK_NOT_ALLOWED$1; - } - // LB12 Do not break after NBSP and related characters. - if (current === GL) { - return BREAK_NOT_ALLOWED$1; - } - // LB12a Do not break before NBSP and related characters, except after spaces and hyphens. - if ([SP, BA, HY].indexOf(current) === -1 && next === GL) { - return BREAK_NOT_ALLOWED$1; - } - // LB13 Do not break before ‘]’ or ‘!’ or ‘;’ or ‘/’, even after spaces. - if ([CL, CP, EX, IS, SY].indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB14 Do not break after ‘[’, even after spaces. - if (previousNonSpaceClassType(currentIndex, classTypes) === OP) { - return BREAK_NOT_ALLOWED$1; - } - // LB15 Do not break within ‘”[’, even with intervening spaces. - if (isAdjacentWithSpaceIgnored(QU, OP, currentIndex, classTypes)) { - return BREAK_NOT_ALLOWED$1; - } - // LB16 Do not break between closing punctuation and a nonstarter (lb=NS), even with intervening spaces. - if (isAdjacentWithSpaceIgnored([CL, CP], NS, currentIndex, classTypes)) { - return BREAK_NOT_ALLOWED$1; - } - // LB17 Do not break within ‘——’, even with intervening spaces. - if (isAdjacentWithSpaceIgnored(B2, B2, currentIndex, classTypes)) { - return BREAK_NOT_ALLOWED$1; - } - // LB18 Break after spaces. - if (current === SP) { - return BREAK_ALLOWED$1; - } - // LB19 Do not break before or after quotation marks, such as ‘ ” ’. 
- if (current === QU || next === QU) { - return BREAK_NOT_ALLOWED$1; - } - // LB20 Break before and after unresolved CB. - if (next === CB || current === CB) { - return BREAK_ALLOWED$1; - } - // LB21 Do not break before hyphen-minus, other hyphens, fixed-width spaces, small kana, and other non-starters, or after acute accents. - if ([BA, HY, NS].indexOf(next) !== -1 || current === BB) { - return BREAK_NOT_ALLOWED$1; - } - // LB21a Don't break after Hebrew + Hyphen. - if (before === HL && HYPHEN.indexOf(current) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB21b Don’t break between Solidus and Hebrew letters. - if (current === SY && next === HL) { - return BREAK_NOT_ALLOWED$1; - } - // LB22 Do not break before ellipsis. - if (next === IN) { - return BREAK_NOT_ALLOWED$1; - } - // LB23 Do not break between digits and letters. - if ((ALPHABETICS.indexOf(next) !== -1 && current === NU) || (ALPHABETICS.indexOf(current) !== -1 && next === NU)) { - return BREAK_NOT_ALLOWED$1; - } - // LB23a Do not break between numeric prefixes and ideographs, or between ideographs and numeric postfixes. - if ((current === PR && [ID, EB, EM].indexOf(next) !== -1) || - ([ID, EB, EM].indexOf(current) !== -1 && next === PO)) { - return BREAK_NOT_ALLOWED$1; - } - // LB24 Do not break between numeric prefix/postfix and letters, or between letters and prefix/postfix. - if ((ALPHABETICS.indexOf(current) !== -1 && PREFIX_POSTFIX.indexOf(next) !== -1) || - (PREFIX_POSTFIX.indexOf(current) !== -1 && ALPHABETICS.indexOf(next) !== -1)) { - return BREAK_NOT_ALLOWED$1; - } - // LB25 Do not break between the following pairs of classes relevant to numbers: - if ( - // (PR | PO) × ( OP | HY )? NU - ([PR, PO].indexOf(current) !== -1 && - (next === NU || ([OP, HY].indexOf(next) !== -1 && classTypes[afterIndex + 1] === NU))) || - // ( OP | HY ) × NU - ([OP, HY].indexOf(current) !== -1 && next === NU) || - // NU × (NU | SY | IS) - (current === NU && [NU, SY, IS].indexOf(next) !== -1)) { - return BREAK_NOT_ALLOWED$1; - } - // NU (NU | SY | IS)* × (NU | SY | IS | CL | CP) - if ([NU, SY, IS, CL, CP].indexOf(next) !== -1) { - var prevIndex = currentIndex; - while (prevIndex >= 0) { - var type = classTypes[prevIndex]; - if (type === NU) { - return BREAK_NOT_ALLOWED$1; - } - else if ([SY, IS].indexOf(type) !== -1) { - prevIndex--; - } - else { - break; - } - } - } - // NU (NU | SY | IS)* (CL | CP)? × (PO | PR)) - if ([PR, PO].indexOf(next) !== -1) { - var prevIndex = [CL, CP].indexOf(current) !== -1 ? beforeIndex : currentIndex; - while (prevIndex >= 0) { - var type = classTypes[prevIndex]; - if (type === NU) { - return BREAK_NOT_ALLOWED$1; - } - else if ([SY, IS].indexOf(type) !== -1) { - prevIndex--; - } - else { - break; - } - } - } - // LB26 Do not break a Korean syllable. - if ((JL === current && [JL, JV, H2, H3].indexOf(next) !== -1) || - ([JV, H2].indexOf(current) !== -1 && [JV, JT].indexOf(next) !== -1) || - ([JT, H3].indexOf(current) !== -1 && next === JT)) { - return BREAK_NOT_ALLOWED$1; - } - // LB27 Treat a Korean Syllable Block the same as ID. - if ((KOREAN_SYLLABLE_BLOCK.indexOf(current) !== -1 && [IN, PO].indexOf(next) !== -1) || - (KOREAN_SYLLABLE_BLOCK.indexOf(next) !== -1 && current === PR)) { - return BREAK_NOT_ALLOWED$1; - } - // LB28 Do not break between alphabetics (“at”). - if (ALPHABETICS.indexOf(current) !== -1 && ALPHABETICS.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB29 Do not break between numeric punctuation and alphabetics (“e.g.”). 
- if (current === IS && ALPHABETICS.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB30 Do not break between letters, numbers, or ordinary symbols and opening or closing parentheses. - if ((ALPHABETICS.concat(NU).indexOf(current) !== -1 && - next === OP && - ea_OP.indexOf(codePoints[afterIndex]) === -1) || - (ALPHABETICS.concat(NU).indexOf(next) !== -1 && current === CP)) { - return BREAK_NOT_ALLOWED$1; - } - // LB30a Break between two regional indicator symbols if and only if there are an even number of regional - // indicators preceding the position of the break. - if (current === RI$1 && next === RI$1) { - var i = indicies[currentIndex]; - var count = 1; - while (i > 0) { - i--; - if (classTypes[i] === RI$1) { - count++; - } - else { - break; - } - } - if (count % 2 !== 0) { - return BREAK_NOT_ALLOWED$1; - } - } - // LB30b Do not break between an emoji base and an emoji modifier. - if (current === EB && next === EM) { - return BREAK_NOT_ALLOWED$1; - } - return BREAK_ALLOWED$1; - }; - var cssFormattedClasses = function (codePoints, options) { - if (!options) { - options = { lineBreak: 'normal', wordBreak: 'normal' }; - } - var _a = codePointsToCharacterClasses(codePoints, options.lineBreak), indicies = _a[0], classTypes = _a[1], isLetterNumber = _a[2]; - if (options.wordBreak === 'break-all' || options.wordBreak === 'break-word') { - classTypes = classTypes.map(function (type) { return ([NU, AL, SA].indexOf(type) !== -1 ? ID : type); }); - } - var forbiddenBreakpoints = options.wordBreak === 'keep-all' - ? isLetterNumber.map(function (letterNumber, i) { - return letterNumber && codePoints[i] >= 0x4e00 && codePoints[i] <= 0x9fff; - }) - : undefined; - return [indicies, classTypes, forbiddenBreakpoints]; - }; - var Break = /** @class */ (function () { - function Break(codePoints, lineBreak, start, end) { - this.codePoints = codePoints; - this.required = lineBreak === BREAK_MANDATORY; - this.start = start; - this.end = end; - } - Break.prototype.slice = function () { - return fromCodePoint$1.apply(void 0, this.codePoints.slice(this.start, this.end)); - }; - return Break; - }()); - var LineBreaker = function (str, options) { - var codePoints = toCodePoints$1(str); - var _a = cssFormattedClasses(codePoints, options), indicies = _a[0], classTypes = _a[1], forbiddenBreakpoints = _a[2]; - var length = codePoints.length; - var lastEnd = 0; - var nextIndex = 0; - return { - next: function () { - if (nextIndex >= length) { - return { done: true, value: null }; - } - var lineBreak = BREAK_NOT_ALLOWED$1; - while (nextIndex < length && - (lineBreak = _lineBreakAtIndex(codePoints, classTypes, indicies, ++nextIndex, forbiddenBreakpoints)) === - BREAK_NOT_ALLOWED$1) { } - if (lineBreak !== BREAK_NOT_ALLOWED$1 || nextIndex === length) { - var value = new Break(codePoints, lineBreak, lastEnd, nextIndex); - lastEnd = nextIndex; - return { value: value, done: false }; - } - return { done: true, value: null }; - }, - }; - }; - - // https://www.w3.org/TR/css-syntax-3 - var FLAG_UNRESTRICTED = 1 << 0; - var FLAG_ID = 1 << 1; - var FLAG_INTEGER = 1 << 2; - var FLAG_NUMBER = 1 << 3; - var LINE_FEED = 0x000a; - var SOLIDUS = 0x002f; - var REVERSE_SOLIDUS = 0x005c; - var CHARACTER_TABULATION = 0x0009; - var SPACE = 0x0020; - var QUOTATION_MARK = 0x0022; - var EQUALS_SIGN = 0x003d; - var NUMBER_SIGN = 0x0023; - var DOLLAR_SIGN = 0x0024; - var PERCENTAGE_SIGN = 0x0025; - var APOSTROPHE = 0x0027; - var LEFT_PARENTHESIS = 0x0028; - var RIGHT_PARENTHESIS = 0x0029; - var LOW_LINE = 0x005f; - var 
HYPHEN_MINUS = 0x002d; - var EXCLAMATION_MARK = 0x0021; - var LESS_THAN_SIGN = 0x003c; - var GREATER_THAN_SIGN = 0x003e; - var COMMERCIAL_AT = 0x0040; - var LEFT_SQUARE_BRACKET = 0x005b; - var RIGHT_SQUARE_BRACKET = 0x005d; - var CIRCUMFLEX_ACCENT = 0x003d; - var LEFT_CURLY_BRACKET = 0x007b; - var QUESTION_MARK = 0x003f; - var RIGHT_CURLY_BRACKET = 0x007d; - var VERTICAL_LINE = 0x007c; - var TILDE = 0x007e; - var CONTROL = 0x0080; - var REPLACEMENT_CHARACTER = 0xfffd; - var ASTERISK = 0x002a; - var PLUS_SIGN = 0x002b; - var COMMA = 0x002c; - var COLON = 0x003a; - var SEMICOLON = 0x003b; - var FULL_STOP = 0x002e; - var NULL = 0x0000; - var BACKSPACE = 0x0008; - var LINE_TABULATION = 0x000b; - var SHIFT_OUT = 0x000e; - var INFORMATION_SEPARATOR_ONE = 0x001f; - var DELETE = 0x007f; - var EOF = -1; - var ZERO = 0x0030; - var a = 0x0061; - var e = 0x0065; - var f = 0x0066; - var u = 0x0075; - var z = 0x007a; - var A = 0x0041; - var E = 0x0045; - var F = 0x0046; - var U = 0x0055; - var Z = 0x005a; - var isDigit = function (codePoint) { return codePoint >= ZERO && codePoint <= 0x0039; }; - var isSurrogateCodePoint = function (codePoint) { return codePoint >= 0xd800 && codePoint <= 0xdfff; }; - var isHex = function (codePoint) { - return isDigit(codePoint) || (codePoint >= A && codePoint <= F) || (codePoint >= a && codePoint <= f); - }; - var isLowerCaseLetter = function (codePoint) { return codePoint >= a && codePoint <= z; }; - var isUpperCaseLetter = function (codePoint) { return codePoint >= A && codePoint <= Z; }; - var isLetter = function (codePoint) { return isLowerCaseLetter(codePoint) || isUpperCaseLetter(codePoint); }; - var isNonASCIICodePoint = function (codePoint) { return codePoint >= CONTROL; }; - var isWhiteSpace = function (codePoint) { - return codePoint === LINE_FEED || codePoint === CHARACTER_TABULATION || codePoint === SPACE; - }; - var isNameStartCodePoint = function (codePoint) { - return isLetter(codePoint) || isNonASCIICodePoint(codePoint) || codePoint === LOW_LINE; - }; - var isNameCodePoint = function (codePoint) { - return isNameStartCodePoint(codePoint) || isDigit(codePoint) || codePoint === HYPHEN_MINUS; - }; - var isNonPrintableCodePoint = function (codePoint) { - return ((codePoint >= NULL && codePoint <= BACKSPACE) || - codePoint === LINE_TABULATION || - (codePoint >= SHIFT_OUT && codePoint <= INFORMATION_SEPARATOR_ONE) || - codePoint === DELETE); - }; - var isValidEscape = function (c1, c2) { - if (c1 !== REVERSE_SOLIDUS) { - return false; - } - return c2 !== LINE_FEED; - }; - var isIdentifierStart = function (c1, c2, c3) { - if (c1 === HYPHEN_MINUS) { - return isNameStartCodePoint(c2) || isValidEscape(c2, c3); - } - else if (isNameStartCodePoint(c1)) { - return true; - } - else if (c1 === REVERSE_SOLIDUS && isValidEscape(c1, c2)) { - return true; - } - return false; - }; - var isNumberStart = function (c1, c2, c3) { - if (c1 === PLUS_SIGN || c1 === HYPHEN_MINUS) { - if (isDigit(c2)) { - return true; - } - return c2 === FULL_STOP && isDigit(c3); - } - if (c1 === FULL_STOP) { - return isDigit(c2); - } - return isDigit(c1); - }; - var stringToNumber = function (codePoints) { - var c = 0; - var sign = 1; - if (codePoints[c] === PLUS_SIGN || codePoints[c] === HYPHEN_MINUS) { - if (codePoints[c] === HYPHEN_MINUS) { - sign = -1; - } - c++; - } - var integers = []; - while (isDigit(codePoints[c])) { - integers.push(codePoints[c++]); - } - var int = integers.length ? 
parseInt(fromCodePoint$1.apply(void 0, integers), 10) : 0; - if (codePoints[c] === FULL_STOP) { - c++; - } - var fraction = []; - while (isDigit(codePoints[c])) { - fraction.push(codePoints[c++]); - } - var fracd = fraction.length; - var frac = fracd ? parseInt(fromCodePoint$1.apply(void 0, fraction), 10) : 0; - if (codePoints[c] === E || codePoints[c] === e) { - c++; - } - var expsign = 1; - if (codePoints[c] === PLUS_SIGN || codePoints[c] === HYPHEN_MINUS) { - if (codePoints[c] === HYPHEN_MINUS) { - expsign = -1; - } - c++; - } - var exponent = []; - while (isDigit(codePoints[c])) { - exponent.push(codePoints[c++]); - } - var exp = exponent.length ? parseInt(fromCodePoint$1.apply(void 0, exponent), 10) : 0; - return sign * (int + frac * Math.pow(10, -fracd)) * Math.pow(10, expsign * exp); - }; - var LEFT_PARENTHESIS_TOKEN = { - type: 2 /* LEFT_PARENTHESIS_TOKEN */ - }; - var RIGHT_PARENTHESIS_TOKEN = { - type: 3 /* RIGHT_PARENTHESIS_TOKEN */ - }; - var COMMA_TOKEN = { type: 4 /* COMMA_TOKEN */ }; - var SUFFIX_MATCH_TOKEN = { type: 13 /* SUFFIX_MATCH_TOKEN */ }; - var PREFIX_MATCH_TOKEN = { type: 8 /* PREFIX_MATCH_TOKEN */ }; - var COLUMN_TOKEN = { type: 21 /* COLUMN_TOKEN */ }; - var DASH_MATCH_TOKEN = { type: 9 /* DASH_MATCH_TOKEN */ }; - var INCLUDE_MATCH_TOKEN = { type: 10 /* INCLUDE_MATCH_TOKEN */ }; - var LEFT_CURLY_BRACKET_TOKEN = { - type: 11 /* LEFT_CURLY_BRACKET_TOKEN */ - }; - var RIGHT_CURLY_BRACKET_TOKEN = { - type: 12 /* RIGHT_CURLY_BRACKET_TOKEN */ - }; - var SUBSTRING_MATCH_TOKEN = { type: 14 /* SUBSTRING_MATCH_TOKEN */ }; - var BAD_URL_TOKEN = { type: 23 /* BAD_URL_TOKEN */ }; - var BAD_STRING_TOKEN = { type: 1 /* BAD_STRING_TOKEN */ }; - var CDO_TOKEN = { type: 25 /* CDO_TOKEN */ }; - var CDC_TOKEN = { type: 24 /* CDC_TOKEN */ }; - var COLON_TOKEN = { type: 26 /* COLON_TOKEN */ }; - var SEMICOLON_TOKEN = { type: 27 /* SEMICOLON_TOKEN */ }; - var LEFT_SQUARE_BRACKET_TOKEN = { - type: 28 /* LEFT_SQUARE_BRACKET_TOKEN */ - }; - var RIGHT_SQUARE_BRACKET_TOKEN = { - type: 29 /* RIGHT_SQUARE_BRACKET_TOKEN */ - }; - var WHITESPACE_TOKEN = { type: 31 /* WHITESPACE_TOKEN */ }; - var EOF_TOKEN = { type: 32 /* EOF_TOKEN */ }; - var Tokenizer = /** @class */ (function () { - function Tokenizer() { - this._value = []; - } - Tokenizer.prototype.write = function (chunk) { - this._value = this._value.concat(toCodePoints$1(chunk)); - }; - Tokenizer.prototype.read = function () { - var tokens = []; - var token = this.consumeToken(); - while (token !== EOF_TOKEN) { - tokens.push(token); - token = this.consumeToken(); - } - return tokens; - }; - Tokenizer.prototype.consumeToken = function () { - var codePoint = this.consumeCodePoint(); - switch (codePoint) { - case QUOTATION_MARK: - return this.consumeStringToken(QUOTATION_MARK); - case NUMBER_SIGN: - var c1 = this.peekCodePoint(0); - var c2 = this.peekCodePoint(1); - var c3 = this.peekCodePoint(2); - if (isNameCodePoint(c1) || isValidEscape(c2, c3)) { - var flags = isIdentifierStart(c1, c2, c3) ? 
FLAG_ID : FLAG_UNRESTRICTED; - var value = this.consumeName(); - return { type: 5 /* HASH_TOKEN */, value: value, flags: flags }; - } - break; - case DOLLAR_SIGN: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return SUFFIX_MATCH_TOKEN; - } - break; - case APOSTROPHE: - return this.consumeStringToken(APOSTROPHE); - case LEFT_PARENTHESIS: - return LEFT_PARENTHESIS_TOKEN; - case RIGHT_PARENTHESIS: - return RIGHT_PARENTHESIS_TOKEN; - case ASTERISK: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return SUBSTRING_MATCH_TOKEN; - } - break; - case PLUS_SIGN: - if (isNumberStart(codePoint, this.peekCodePoint(0), this.peekCodePoint(1))) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - break; - case COMMA: - return COMMA_TOKEN; - case HYPHEN_MINUS: - var e1 = codePoint; - var e2 = this.peekCodePoint(0); - var e3 = this.peekCodePoint(1); - if (isNumberStart(e1, e2, e3)) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - if (isIdentifierStart(e1, e2, e3)) { - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - } - if (e2 === HYPHEN_MINUS && e3 === GREATER_THAN_SIGN) { - this.consumeCodePoint(); - this.consumeCodePoint(); - return CDC_TOKEN; - } - break; - case FULL_STOP: - if (isNumberStart(codePoint, this.peekCodePoint(0), this.peekCodePoint(1))) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - break; - case SOLIDUS: - if (this.peekCodePoint(0) === ASTERISK) { - this.consumeCodePoint(); - while (true) { - var c = this.consumeCodePoint(); - if (c === ASTERISK) { - c = this.consumeCodePoint(); - if (c === SOLIDUS) { - return this.consumeToken(); - } - } - if (c === EOF) { - return this.consumeToken(); - } - } - } - break; - case COLON: - return COLON_TOKEN; - case SEMICOLON: - return SEMICOLON_TOKEN; - case LESS_THAN_SIGN: - if (this.peekCodePoint(0) === EXCLAMATION_MARK && - this.peekCodePoint(1) === HYPHEN_MINUS && - this.peekCodePoint(2) === HYPHEN_MINUS) { - this.consumeCodePoint(); - this.consumeCodePoint(); - return CDO_TOKEN; - } - break; - case COMMERCIAL_AT: - var a1 = this.peekCodePoint(0); - var a2 = this.peekCodePoint(1); - var a3 = this.peekCodePoint(2); - if (isIdentifierStart(a1, a2, a3)) { - var value = this.consumeName(); - return { type: 7 /* AT_KEYWORD_TOKEN */, value: value }; - } - break; - case LEFT_SQUARE_BRACKET: - return LEFT_SQUARE_BRACKET_TOKEN; - case REVERSE_SOLIDUS: - if (isValidEscape(codePoint, this.peekCodePoint(0))) { - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - } - break; - case RIGHT_SQUARE_BRACKET: - return RIGHT_SQUARE_BRACKET_TOKEN; - case CIRCUMFLEX_ACCENT: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return PREFIX_MATCH_TOKEN; - } - break; - case LEFT_CURLY_BRACKET: - return LEFT_CURLY_BRACKET_TOKEN; - case RIGHT_CURLY_BRACKET: - return RIGHT_CURLY_BRACKET_TOKEN; - case u: - case U: - var u1 = this.peekCodePoint(0); - var u2 = this.peekCodePoint(1); - if (u1 === PLUS_SIGN && (isHex(u2) || u2 === QUESTION_MARK)) { - this.consumeCodePoint(); - this.consumeUnicodeRangeToken(); - } - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - case VERTICAL_LINE: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return DASH_MATCH_TOKEN; - } - if (this.peekCodePoint(0) === VERTICAL_LINE) { - this.consumeCodePoint(); - return COLUMN_TOKEN; - } - break; - case TILDE: - if 
(this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return INCLUDE_MATCH_TOKEN; - } - break; - case EOF: - return EOF_TOKEN; - } - if (isWhiteSpace(codePoint)) { - this.consumeWhiteSpace(); - return WHITESPACE_TOKEN; - } - if (isDigit(codePoint)) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - if (isNameStartCodePoint(codePoint)) { - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - } - return { type: 6 /* DELIM_TOKEN */, value: fromCodePoint$1(codePoint) }; - }; - Tokenizer.prototype.consumeCodePoint = function () { - var value = this._value.shift(); - return typeof value === 'undefined' ? -1 : value; - }; - Tokenizer.prototype.reconsumeCodePoint = function (codePoint) { - this._value.unshift(codePoint); - }; - Tokenizer.prototype.peekCodePoint = function (delta) { - if (delta >= this._value.length) { - return -1; - } - return this._value[delta]; - }; - Tokenizer.prototype.consumeUnicodeRangeToken = function () { - var digits = []; - var codePoint = this.consumeCodePoint(); - while (isHex(codePoint) && digits.length < 6) { - digits.push(codePoint); - codePoint = this.consumeCodePoint(); - } - var questionMarks = false; - while (codePoint === QUESTION_MARK && digits.length < 6) { - digits.push(codePoint); - codePoint = this.consumeCodePoint(); - questionMarks = true; - } - if (questionMarks) { - var start_1 = parseInt(fromCodePoint$1.apply(void 0, digits.map(function (digit) { return (digit === QUESTION_MARK ? ZERO : digit); })), 16); - var end = parseInt(fromCodePoint$1.apply(void 0, digits.map(function (digit) { return (digit === QUESTION_MARK ? F : digit); })), 16); - return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start_1, end: end }; - } - var start = parseInt(fromCodePoint$1.apply(void 0, digits), 16); - if (this.peekCodePoint(0) === HYPHEN_MINUS && isHex(this.peekCodePoint(1))) { - this.consumeCodePoint(); - codePoint = this.consumeCodePoint(); - var endDigits = []; - while (isHex(codePoint) && endDigits.length < 6) { - endDigits.push(codePoint); - codePoint = this.consumeCodePoint(); - } - var end = parseInt(fromCodePoint$1.apply(void 0, endDigits), 16); - return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start, end: end }; - } - else { - return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start, end: start }; - } - }; - Tokenizer.prototype.consumeIdentLikeToken = function () { - var value = this.consumeName(); - if (value.toLowerCase() === 'url' && this.peekCodePoint(0) === LEFT_PARENTHESIS) { - this.consumeCodePoint(); - return this.consumeUrlToken(); - } - else if (this.peekCodePoint(0) === LEFT_PARENTHESIS) { - this.consumeCodePoint(); - return { type: 19 /* FUNCTION_TOKEN */, value: value }; - } - return { type: 20 /* IDENT_TOKEN */, value: value }; - }; - Tokenizer.prototype.consumeUrlToken = function () { - var value = []; - this.consumeWhiteSpace(); - if (this.peekCodePoint(0) === EOF) { - return { type: 22 /* URL_TOKEN */, value: '' }; - } - var next = this.peekCodePoint(0); - if (next === APOSTROPHE || next === QUOTATION_MARK) { - var stringToken = this.consumeStringToken(this.consumeCodePoint()); - if (stringToken.type === 0 /* STRING_TOKEN */) { - this.consumeWhiteSpace(); - if (this.peekCodePoint(0) === EOF || this.peekCodePoint(0) === RIGHT_PARENTHESIS) { - this.consumeCodePoint(); - return { type: 22 /* URL_TOKEN */, value: stringToken.value }; - } - } - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - while (true) { - var codePoint = this.consumeCodePoint(); - if 
(codePoint === EOF || codePoint === RIGHT_PARENTHESIS) { - return { type: 22 /* URL_TOKEN */, value: fromCodePoint$1.apply(void 0, value) }; - } - else if (isWhiteSpace(codePoint)) { - this.consumeWhiteSpace(); - if (this.peekCodePoint(0) === EOF || this.peekCodePoint(0) === RIGHT_PARENTHESIS) { - this.consumeCodePoint(); - return { type: 22 /* URL_TOKEN */, value: fromCodePoint$1.apply(void 0, value) }; - } - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - else if (codePoint === QUOTATION_MARK || - codePoint === APOSTROPHE || - codePoint === LEFT_PARENTHESIS || - isNonPrintableCodePoint(codePoint)) { - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - else if (codePoint === REVERSE_SOLIDUS) { - if (isValidEscape(codePoint, this.peekCodePoint(0))) { - value.push(this.consumeEscapedCodePoint()); - } - else { - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - } - else { - value.push(codePoint); - } - } - }; - Tokenizer.prototype.consumeWhiteSpace = function () { - while (isWhiteSpace(this.peekCodePoint(0))) { - this.consumeCodePoint(); - } - }; - Tokenizer.prototype.consumeBadUrlRemnants = function () { - while (true) { - var codePoint = this.consumeCodePoint(); - if (codePoint === RIGHT_PARENTHESIS || codePoint === EOF) { - return; - } - if (isValidEscape(codePoint, this.peekCodePoint(0))) { - this.consumeEscapedCodePoint(); - } - } - }; - Tokenizer.prototype.consumeStringSlice = function (count) { - var SLICE_STACK_SIZE = 50000; - var value = ''; - while (count > 0) { - var amount = Math.min(SLICE_STACK_SIZE, count); - value += fromCodePoint$1.apply(void 0, this._value.splice(0, amount)); - count -= amount; - } - this._value.shift(); - return value; - }; - Tokenizer.prototype.consumeStringToken = function (endingCodePoint) { - var value = ''; - var i = 0; - do { - var codePoint = this._value[i]; - if (codePoint === EOF || codePoint === undefined || codePoint === endingCodePoint) { - value += this.consumeStringSlice(i); - return { type: 0 /* STRING_TOKEN */, value: value }; - } - if (codePoint === LINE_FEED) { - this._value.splice(0, i); - return BAD_STRING_TOKEN; - } - if (codePoint === REVERSE_SOLIDUS) { - var next = this._value[i + 1]; - if (next !== EOF && next !== undefined) { - if (next === LINE_FEED) { - value += this.consumeStringSlice(i); - i = -1; - this._value.shift(); - } - else if (isValidEscape(codePoint, next)) { - value += this.consumeStringSlice(i); - value += fromCodePoint$1(this.consumeEscapedCodePoint()); - i = -1; - } - } - } - i++; - } while (true); - }; - Tokenizer.prototype.consumeNumber = function () { - var repr = []; - var type = FLAG_INTEGER; - var c1 = this.peekCodePoint(0); - if (c1 === PLUS_SIGN || c1 === HYPHEN_MINUS) { - repr.push(this.consumeCodePoint()); - } - while (isDigit(this.peekCodePoint(0))) { - repr.push(this.consumeCodePoint()); - } - c1 = this.peekCodePoint(0); - var c2 = this.peekCodePoint(1); - if (c1 === FULL_STOP && isDigit(c2)) { - repr.push(this.consumeCodePoint(), this.consumeCodePoint()); - type = FLAG_NUMBER; - while (isDigit(this.peekCodePoint(0))) { - repr.push(this.consumeCodePoint()); - } - } - c1 = this.peekCodePoint(0); - c2 = this.peekCodePoint(1); - var c3 = this.peekCodePoint(2); - if ((c1 === E || c1 === e) && (((c2 === PLUS_SIGN || c2 === HYPHEN_MINUS) && isDigit(c3)) || isDigit(c2))) { - repr.push(this.consumeCodePoint(), this.consumeCodePoint()); - type = FLAG_NUMBER; - while (isDigit(this.peekCodePoint(0))) { - repr.push(this.consumeCodePoint()); - } - } - return [stringToNumber(repr), 
type]; - }; - Tokenizer.prototype.consumeNumericToken = function () { - var _a = this.consumeNumber(), number = _a[0], flags = _a[1]; - var c1 = this.peekCodePoint(0); - var c2 = this.peekCodePoint(1); - var c3 = this.peekCodePoint(2); - if (isIdentifierStart(c1, c2, c3)) { - var unit = this.consumeName(); - return { type: 15 /* DIMENSION_TOKEN */, number: number, flags: flags, unit: unit }; - } - if (c1 === PERCENTAGE_SIGN) { - this.consumeCodePoint(); - return { type: 16 /* PERCENTAGE_TOKEN */, number: number, flags: flags }; - } - return { type: 17 /* NUMBER_TOKEN */, number: number, flags: flags }; - }; - Tokenizer.prototype.consumeEscapedCodePoint = function () { - var codePoint = this.consumeCodePoint(); - if (isHex(codePoint)) { - var hex = fromCodePoint$1(codePoint); - while (isHex(this.peekCodePoint(0)) && hex.length < 6) { - hex += fromCodePoint$1(this.consumeCodePoint()); - } - if (isWhiteSpace(this.peekCodePoint(0))) { - this.consumeCodePoint(); - } - var hexCodePoint = parseInt(hex, 16); - if (hexCodePoint === 0 || isSurrogateCodePoint(hexCodePoint) || hexCodePoint > 0x10ffff) { - return REPLACEMENT_CHARACTER; - } - return hexCodePoint; - } - if (codePoint === EOF) { - return REPLACEMENT_CHARACTER; - } - return codePoint; - }; - Tokenizer.prototype.consumeName = function () { - var result = ''; - while (true) { - var codePoint = this.consumeCodePoint(); - if (isNameCodePoint(codePoint)) { - result += fromCodePoint$1(codePoint); - } - else if (isValidEscape(codePoint, this.peekCodePoint(0))) { - result += fromCodePoint$1(this.consumeEscapedCodePoint()); - } - else { - this.reconsumeCodePoint(codePoint); - return result; - } - } - }; - return Tokenizer; - }()); - - var Parser = /** @class */ (function () { - function Parser(tokens) { - this._tokens = tokens; - } - Parser.create = function (value) { - var tokenizer = new Tokenizer(); - tokenizer.write(value); - return new Parser(tokenizer.read()); - }; - Parser.parseValue = function (value) { - return Parser.create(value).parseComponentValue(); - }; - Parser.parseValues = function (value) { - return Parser.create(value).parseComponentValues(); - }; - Parser.prototype.parseComponentValue = function () { - var token = this.consumeToken(); - while (token.type === 31 /* WHITESPACE_TOKEN */) { - token = this.consumeToken(); - } - if (token.type === 32 /* EOF_TOKEN */) { - throw new SyntaxError("Error parsing CSS component value, unexpected EOF"); - } - this.reconsumeToken(token); - var value = this.consumeComponentValue(); - do { - token = this.consumeToken(); - } while (token.type === 31 /* WHITESPACE_TOKEN */); - if (token.type === 32 /* EOF_TOKEN */) { - return value; - } - throw new SyntaxError("Error parsing CSS component value, multiple values found when expecting only one"); - }; - Parser.prototype.parseComponentValues = function () { - var values = []; - while (true) { - var value = this.consumeComponentValue(); - if (value.type === 32 /* EOF_TOKEN */) { - return values; - } - values.push(value); - values.push(); - } - }; - Parser.prototype.consumeComponentValue = function () { - var token = this.consumeToken(); - switch (token.type) { - case 11 /* LEFT_CURLY_BRACKET_TOKEN */: - case 28 /* LEFT_SQUARE_BRACKET_TOKEN */: - case 2 /* LEFT_PARENTHESIS_TOKEN */: - return this.consumeSimpleBlock(token.type); - case 19 /* FUNCTION_TOKEN */: - return this.consumeFunction(token); - } - return token; - }; - Parser.prototype.consumeSimpleBlock = function (type) { - var block = { type: type, values: [] }; - var token = 
this.consumeToken(); - while (true) { - if (token.type === 32 /* EOF_TOKEN */ || isEndingTokenFor(token, type)) { - return block; - } - this.reconsumeToken(token); - block.values.push(this.consumeComponentValue()); - token = this.consumeToken(); - } - }; - Parser.prototype.consumeFunction = function (functionToken) { - var cssFunction = { - name: functionToken.value, - values: [], - type: 18 /* FUNCTION */ - }; - while (true) { - var token = this.consumeToken(); - if (token.type === 32 /* EOF_TOKEN */ || token.type === 3 /* RIGHT_PARENTHESIS_TOKEN */) { - return cssFunction; - } - this.reconsumeToken(token); - cssFunction.values.push(this.consumeComponentValue()); - } - }; - Parser.prototype.consumeToken = function () { - var token = this._tokens.shift(); - return typeof token === 'undefined' ? EOF_TOKEN : token; - }; - Parser.prototype.reconsumeToken = function (token) { - this._tokens.unshift(token); - }; - return Parser; - }()); - var isDimensionToken = function (token) { return token.type === 15 /* DIMENSION_TOKEN */; }; - var isNumberToken = function (token) { return token.type === 17 /* NUMBER_TOKEN */; }; - var isIdentToken = function (token) { return token.type === 20 /* IDENT_TOKEN */; }; - var isStringToken = function (token) { return token.type === 0 /* STRING_TOKEN */; }; - var isIdentWithValue = function (token, value) { - return isIdentToken(token) && token.value === value; - }; - var nonWhiteSpace = function (token) { return token.type !== 31 /* WHITESPACE_TOKEN */; }; - var nonFunctionArgSeparator = function (token) { - return token.type !== 31 /* WHITESPACE_TOKEN */ && token.type !== 4 /* COMMA_TOKEN */; - }; - var parseFunctionArgs = function (tokens) { - var args = []; - var arg = []; - tokens.forEach(function (token) { - if (token.type === 4 /* COMMA_TOKEN */) { - if (arg.length === 0) { - throw new Error("Error parsing function args, zero tokens for arg"); - } - args.push(arg); - arg = []; - return; - } - if (token.type !== 31 /* WHITESPACE_TOKEN */) { - arg.push(token); - } - }); - if (arg.length) { - args.push(arg); - } - return args; - }; - var isEndingTokenFor = function (token, type) { - if (type === 11 /* LEFT_CURLY_BRACKET_TOKEN */ && token.type === 12 /* RIGHT_CURLY_BRACKET_TOKEN */) { - return true; - } - if (type === 28 /* LEFT_SQUARE_BRACKET_TOKEN */ && token.type === 29 /* RIGHT_SQUARE_BRACKET_TOKEN */) { - return true; - } - return type === 2 /* LEFT_PARENTHESIS_TOKEN */ && token.type === 3 /* RIGHT_PARENTHESIS_TOKEN */; - }; - - var isLength = function (token) { - return token.type === 17 /* NUMBER_TOKEN */ || token.type === 15 /* DIMENSION_TOKEN */; - }; - - var isLengthPercentage = function (token) { - return token.type === 16 /* PERCENTAGE_TOKEN */ || isLength(token); - }; - var parseLengthPercentageTuple = function (tokens) { - return tokens.length > 1 ? [tokens[0], tokens[1]] : [tokens[0]]; - }; - var ZERO_LENGTH = { - type: 17 /* NUMBER_TOKEN */, - number: 0, - flags: FLAG_INTEGER - }; - var FIFTY_PERCENT = { - type: 16 /* PERCENTAGE_TOKEN */, - number: 50, - flags: FLAG_INTEGER - }; - var HUNDRED_PERCENT = { - type: 16 /* PERCENTAGE_TOKEN */, - number: 100, - flags: FLAG_INTEGER - }; - var getAbsoluteValueForTuple = function (tuple, width, height) { - var x = tuple[0], y = tuple[1]; - return [getAbsoluteValue(x, width), getAbsoluteValue(typeof y !== 'undefined' ? 
y : x, height)]; - }; - var getAbsoluteValue = function (token, parent) { - if (token.type === 16 /* PERCENTAGE_TOKEN */) { - return (token.number / 100) * parent; - } - if (isDimensionToken(token)) { - switch (token.unit) { - case 'rem': - case 'em': - return 16 * token.number; // TODO use correct font-size - case 'px': - default: - return token.number; - } - } - return token.number; - }; - - var DEG = 'deg'; - var GRAD = 'grad'; - var RAD = 'rad'; - var TURN = 'turn'; - var angle = { - name: 'angle', - parse: function (_context, value) { - if (value.type === 15 /* DIMENSION_TOKEN */) { - switch (value.unit) { - case DEG: - return (Math.PI * value.number) / 180; - case GRAD: - return (Math.PI / 200) * value.number; - case RAD: - return value.number; - case TURN: - return Math.PI * 2 * value.number; - } - } - throw new Error("Unsupported angle type"); - } - }; - var isAngle = function (value) { - if (value.type === 15 /* DIMENSION_TOKEN */) { - if (value.unit === DEG || value.unit === GRAD || value.unit === RAD || value.unit === TURN) { - return true; - } - } - return false; - }; - var parseNamedSide = function (tokens) { - var sideOrCorner = tokens - .filter(isIdentToken) - .map(function (ident) { return ident.value; }) - .join(' '); - switch (sideOrCorner) { - case 'to bottom right': - case 'to right bottom': - case 'left top': - case 'top left': - return [ZERO_LENGTH, ZERO_LENGTH]; - case 'to top': - case 'bottom': - return deg(0); - case 'to bottom left': - case 'to left bottom': - case 'right top': - case 'top right': - return [ZERO_LENGTH, HUNDRED_PERCENT]; - case 'to right': - case 'left': - return deg(90); - case 'to top left': - case 'to left top': - case 'right bottom': - case 'bottom right': - return [HUNDRED_PERCENT, HUNDRED_PERCENT]; - case 'to bottom': - case 'top': - return deg(180); - case 'to top right': - case 'to right top': - case 'left bottom': - case 'bottom left': - return [HUNDRED_PERCENT, ZERO_LENGTH]; - case 'to left': - case 'right': - return deg(270); - } - return 0; - }; - var deg = function (deg) { return (Math.PI * deg) / 180; }; - - var color$1 = { - name: 'color', - parse: function (context, value) { - if (value.type === 18 /* FUNCTION */) { - var colorFunction = SUPPORTED_COLOR_FUNCTIONS[value.name]; - if (typeof colorFunction === 'undefined') { - throw new Error("Attempting to parse an unsupported color function \"" + value.name + "\""); - } - return colorFunction(context, value.values); - } - if (value.type === 5 /* HASH_TOKEN */) { - if (value.value.length === 3) { - var r = value.value.substring(0, 1); - var g = value.value.substring(1, 2); - var b = value.value.substring(2, 3); - return pack(parseInt(r + r, 16), parseInt(g + g, 16), parseInt(b + b, 16), 1); - } - if (value.value.length === 4) { - var r = value.value.substring(0, 1); - var g = value.value.substring(1, 2); - var b = value.value.substring(2, 3); - var a = value.value.substring(3, 4); - return pack(parseInt(r + r, 16), parseInt(g + g, 16), parseInt(b + b, 16), parseInt(a + a, 16) / 255); - } - if (value.value.length === 6) { - var r = value.value.substring(0, 2); - var g = value.value.substring(2, 4); - var b = value.value.substring(4, 6); - return pack(parseInt(r, 16), parseInt(g, 16), parseInt(b, 16), 1); - } - if (value.value.length === 8) { - var r = value.value.substring(0, 2); - var g = value.value.substring(2, 4); - var b = value.value.substring(4, 6); - var a = value.value.substring(6, 8); - return pack(parseInt(r, 16), parseInt(g, 16), parseInt(b, 16), parseInt(a, 16) / 255); - } 
- } - if (value.type === 20 /* IDENT_TOKEN */) { - var namedColor = COLORS[value.value.toUpperCase()]; - if (typeof namedColor !== 'undefined') { - return namedColor; - } - } - return COLORS.TRANSPARENT; - } - }; - var isTransparent = function (color) { return (0xff & color) === 0; }; - var asString = function (color) { - var alpha = 0xff & color; - var blue = 0xff & (color >> 8); - var green = 0xff & (color >> 16); - var red = 0xff & (color >> 24); - return alpha < 255 ? "rgba(" + red + "," + green + "," + blue + "," + alpha / 255 + ")" : "rgb(" + red + "," + green + "," + blue + ")"; - }; - var pack = function (r, g, b, a) { - return ((r << 24) | (g << 16) | (b << 8) | (Math.round(a * 255) << 0)) >>> 0; - }; - var getTokenColorValue = function (token, i) { - if (token.type === 17 /* NUMBER_TOKEN */) { - return token.number; - } - if (token.type === 16 /* PERCENTAGE_TOKEN */) { - var max = i === 3 ? 1 : 255; - return i === 3 ? (token.number / 100) * max : Math.round((token.number / 100) * max); - } - return 0; - }; - var rgb = function (_context, args) { - var tokens = args.filter(nonFunctionArgSeparator); - if (tokens.length === 3) { - var _a = tokens.map(getTokenColorValue), r = _a[0], g = _a[1], b = _a[2]; - return pack(r, g, b, 1); - } - if (tokens.length === 4) { - var _b = tokens.map(getTokenColorValue), r = _b[0], g = _b[1], b = _b[2], a = _b[3]; - return pack(r, g, b, a); - } - return 0; - }; - function hue2rgb(t1, t2, hue) { - if (hue < 0) { - hue += 1; - } - if (hue >= 1) { - hue -= 1; - } - if (hue < 1 / 6) { - return (t2 - t1) * hue * 6 + t1; - } - else if (hue < 1 / 2) { - return t2; - } - else if (hue < 2 / 3) { - return (t2 - t1) * 6 * (2 / 3 - hue) + t1; - } - else { - return t1; - } - } - var hsl = function (context, args) { - var tokens = args.filter(nonFunctionArgSeparator); - var hue = tokens[0], saturation = tokens[1], lightness = tokens[2], alpha = tokens[3]; - var h = (hue.type === 17 /* NUMBER_TOKEN */ ? deg(hue.number) : angle.parse(context, hue)) / (Math.PI * 2); - var s = isLengthPercentage(saturation) ? saturation.number / 100 : 0; - var l = isLengthPercentage(lightness) ? lightness.number / 100 : 0; - var a = typeof alpha !== 'undefined' && isLengthPercentage(alpha) ? getAbsoluteValue(alpha, 1) : 1; - if (s === 0) { - return pack(l * 255, l * 255, l * 255, 1); - } - var t2 = l <= 0.5 ? 
l * (s + 1) : l + s - l * s; - var t1 = l * 2 - t2; - var r = hue2rgb(t1, t2, h + 1 / 3); - var g = hue2rgb(t1, t2, h); - var b = hue2rgb(t1, t2, h - 1 / 3); - return pack(r * 255, g * 255, b * 255, a); - }; - var SUPPORTED_COLOR_FUNCTIONS = { - hsl: hsl, - hsla: hsl, - rgb: rgb, - rgba: rgb - }; - var parseColor = function (context, value) { - return color$1.parse(context, Parser.create(value).parseComponentValue()); - }; - var COLORS = { - ALICEBLUE: 0xf0f8ffff, - ANTIQUEWHITE: 0xfaebd7ff, - AQUA: 0x00ffffff, - AQUAMARINE: 0x7fffd4ff, - AZURE: 0xf0ffffff, - BEIGE: 0xf5f5dcff, - BISQUE: 0xffe4c4ff, - BLACK: 0x000000ff, - BLANCHEDALMOND: 0xffebcdff, - BLUE: 0x0000ffff, - BLUEVIOLET: 0x8a2be2ff, - BROWN: 0xa52a2aff, - BURLYWOOD: 0xdeb887ff, - CADETBLUE: 0x5f9ea0ff, - CHARTREUSE: 0x7fff00ff, - CHOCOLATE: 0xd2691eff, - CORAL: 0xff7f50ff, - CORNFLOWERBLUE: 0x6495edff, - CORNSILK: 0xfff8dcff, - CRIMSON: 0xdc143cff, - CYAN: 0x00ffffff, - DARKBLUE: 0x00008bff, - DARKCYAN: 0x008b8bff, - DARKGOLDENROD: 0xb886bbff, - DARKGRAY: 0xa9a9a9ff, - DARKGREEN: 0x006400ff, - DARKGREY: 0xa9a9a9ff, - DARKKHAKI: 0xbdb76bff, - DARKMAGENTA: 0x8b008bff, - DARKOLIVEGREEN: 0x556b2fff, - DARKORANGE: 0xff8c00ff, - DARKORCHID: 0x9932ccff, - DARKRED: 0x8b0000ff, - DARKSALMON: 0xe9967aff, - DARKSEAGREEN: 0x8fbc8fff, - DARKSLATEBLUE: 0x483d8bff, - DARKSLATEGRAY: 0x2f4f4fff, - DARKSLATEGREY: 0x2f4f4fff, - DARKTURQUOISE: 0x00ced1ff, - DARKVIOLET: 0x9400d3ff, - DEEPPINK: 0xff1493ff, - DEEPSKYBLUE: 0x00bfffff, - DIMGRAY: 0x696969ff, - DIMGREY: 0x696969ff, - DODGERBLUE: 0x1e90ffff, - FIREBRICK: 0xb22222ff, - FLORALWHITE: 0xfffaf0ff, - FORESTGREEN: 0x228b22ff, - FUCHSIA: 0xff00ffff, - GAINSBORO: 0xdcdcdcff, - GHOSTWHITE: 0xf8f8ffff, - GOLD: 0xffd700ff, - GOLDENROD: 0xdaa520ff, - GRAY: 0x808080ff, - GREEN: 0x008000ff, - GREENYELLOW: 0xadff2fff, - GREY: 0x808080ff, - HONEYDEW: 0xf0fff0ff, - HOTPINK: 0xff69b4ff, - INDIANRED: 0xcd5c5cff, - INDIGO: 0x4b0082ff, - IVORY: 0xfffff0ff, - KHAKI: 0xf0e68cff, - LAVENDER: 0xe6e6faff, - LAVENDERBLUSH: 0xfff0f5ff, - LAWNGREEN: 0x7cfc00ff, - LEMONCHIFFON: 0xfffacdff, - LIGHTBLUE: 0xadd8e6ff, - LIGHTCORAL: 0xf08080ff, - LIGHTCYAN: 0xe0ffffff, - LIGHTGOLDENRODYELLOW: 0xfafad2ff, - LIGHTGRAY: 0xd3d3d3ff, - LIGHTGREEN: 0x90ee90ff, - LIGHTGREY: 0xd3d3d3ff, - LIGHTPINK: 0xffb6c1ff, - LIGHTSALMON: 0xffa07aff, - LIGHTSEAGREEN: 0x20b2aaff, - LIGHTSKYBLUE: 0x87cefaff, - LIGHTSLATEGRAY: 0x778899ff, - LIGHTSLATEGREY: 0x778899ff, - LIGHTSTEELBLUE: 0xb0c4deff, - LIGHTYELLOW: 0xffffe0ff, - LIME: 0x00ff00ff, - LIMEGREEN: 0x32cd32ff, - LINEN: 0xfaf0e6ff, - MAGENTA: 0xff00ffff, - MAROON: 0x800000ff, - MEDIUMAQUAMARINE: 0x66cdaaff, - MEDIUMBLUE: 0x0000cdff, - MEDIUMORCHID: 0xba55d3ff, - MEDIUMPURPLE: 0x9370dbff, - MEDIUMSEAGREEN: 0x3cb371ff, - MEDIUMSLATEBLUE: 0x7b68eeff, - MEDIUMSPRINGGREEN: 0x00fa9aff, - MEDIUMTURQUOISE: 0x48d1ccff, - MEDIUMVIOLETRED: 0xc71585ff, - MIDNIGHTBLUE: 0x191970ff, - MINTCREAM: 0xf5fffaff, - MISTYROSE: 0xffe4e1ff, - MOCCASIN: 0xffe4b5ff, - NAVAJOWHITE: 0xffdeadff, - NAVY: 0x000080ff, - OLDLACE: 0xfdf5e6ff, - OLIVE: 0x808000ff, - OLIVEDRAB: 0x6b8e23ff, - ORANGE: 0xffa500ff, - ORANGERED: 0xff4500ff, - ORCHID: 0xda70d6ff, - PALEGOLDENROD: 0xeee8aaff, - PALEGREEN: 0x98fb98ff, - PALETURQUOISE: 0xafeeeeff, - PALEVIOLETRED: 0xdb7093ff, - PAPAYAWHIP: 0xffefd5ff, - PEACHPUFF: 0xffdab9ff, - PERU: 0xcd853fff, - PINK: 0xffc0cbff, - PLUM: 0xdda0ddff, - POWDERBLUE: 0xb0e0e6ff, - PURPLE: 0x800080ff, - REBECCAPURPLE: 0x663399ff, - RED: 0xff0000ff, - ROSYBROWN: 0xbc8f8fff, - ROYALBLUE: 0x4169e1ff, - 
SADDLEBROWN: 0x8b4513ff, - SALMON: 0xfa8072ff, - SANDYBROWN: 0xf4a460ff, - SEAGREEN: 0x2e8b57ff, - SEASHELL: 0xfff5eeff, - SIENNA: 0xa0522dff, - SILVER: 0xc0c0c0ff, - SKYBLUE: 0x87ceebff, - SLATEBLUE: 0x6a5acdff, - SLATEGRAY: 0x708090ff, - SLATEGREY: 0x708090ff, - SNOW: 0xfffafaff, - SPRINGGREEN: 0x00ff7fff, - STEELBLUE: 0x4682b4ff, - TAN: 0xd2b48cff, - TEAL: 0x008080ff, - THISTLE: 0xd8bfd8ff, - TOMATO: 0xff6347ff, - TRANSPARENT: 0x00000000, - TURQUOISE: 0x40e0d0ff, - VIOLET: 0xee82eeff, - WHEAT: 0xf5deb3ff, - WHITE: 0xffffffff, - WHITESMOKE: 0xf5f5f5ff, - YELLOW: 0xffff00ff, - YELLOWGREEN: 0x9acd32ff - }; - - var backgroundClip = { - name: 'background-clip', - initialValue: 'border-box', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.map(function (token) { - if (isIdentToken(token)) { - switch (token.value) { - case 'padding-box': - return 1 /* PADDING_BOX */; - case 'content-box': - return 2 /* CONTENT_BOX */; - } - } - return 0 /* BORDER_BOX */; - }); - } - }; - - var backgroundColor = { - name: "background-color", - initialValue: 'transparent', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var parseColorStop = function (context, args) { - var color = color$1.parse(context, args[0]); - var stop = args[1]; - return stop && isLengthPercentage(stop) ? { color: color, stop: stop } : { color: color, stop: null }; - }; - var processColorStops = function (stops, lineLength) { - var first = stops[0]; - var last = stops[stops.length - 1]; - if (first.stop === null) { - first.stop = ZERO_LENGTH; - } - if (last.stop === null) { - last.stop = HUNDRED_PERCENT; - } - var processStops = []; - var previous = 0; - for (var i = 0; i < stops.length; i++) { - var stop_1 = stops[i].stop; - if (stop_1 !== null) { - var absoluteValue = getAbsoluteValue(stop_1, lineLength); - if (absoluteValue > previous) { - processStops.push(absoluteValue); - } - else { - processStops.push(previous); - } - previous = absoluteValue; - } - else { - processStops.push(null); - } - } - var gapBegin = null; - for (var i = 0; i < processStops.length; i++) { - var stop_2 = processStops[i]; - if (stop_2 === null) { - if (gapBegin === null) { - gapBegin = i; - } - } - else if (gapBegin !== null) { - var gapLength = i - gapBegin; - var beforeGap = processStops[gapBegin - 1]; - var gapValue = (stop_2 - beforeGap) / (gapLength + 1); - for (var g = 1; g <= gapLength; g++) { - processStops[gapBegin + g - 1] = gapValue * g; - } - gapBegin = null; - } - } - return stops.map(function (_a, i) { - var color = _a.color; - return { color: color, stop: Math.max(Math.min(1, processStops[i] / lineLength), 0) }; - }); - }; - var getAngleFromCorner = function (corner, width, height) { - var centerX = width / 2; - var centerY = height / 2; - var x = getAbsoluteValue(corner[0], width) - centerX; - var y = centerY - getAbsoluteValue(corner[1], height); - return (Math.atan2(y, x) + Math.PI * 2) % (Math.PI * 2); - }; - var calculateGradientDirection = function (angle, width, height) { - var radian = typeof angle === 'number' ? 
angle : getAngleFromCorner(angle, width, height); - var lineLength = Math.abs(width * Math.sin(radian)) + Math.abs(height * Math.cos(radian)); - var halfWidth = width / 2; - var halfHeight = height / 2; - var halfLineLength = lineLength / 2; - var yDiff = Math.sin(radian - Math.PI / 2) * halfLineLength; - var xDiff = Math.cos(radian - Math.PI / 2) * halfLineLength; - return [lineLength, halfWidth - xDiff, halfWidth + xDiff, halfHeight - yDiff, halfHeight + yDiff]; - }; - var distance = function (a, b) { return Math.sqrt(a * a + b * b); }; - var findCorner = function (width, height, x, y, closest) { - var corners = [ - [0, 0], - [0, height], - [width, 0], - [width, height] - ]; - return corners.reduce(function (stat, corner) { - var cx = corner[0], cy = corner[1]; - var d = distance(x - cx, y - cy); - if (closest ? d < stat.optimumDistance : d > stat.optimumDistance) { - return { - optimumCorner: corner, - optimumDistance: d - }; - } - return stat; - }, { - optimumDistance: closest ? Infinity : -Infinity, - optimumCorner: null - }).optimumCorner; - }; - var calculateRadius = function (gradient, x, y, width, height) { - var rx = 0; - var ry = 0; - switch (gradient.size) { - case 0 /* CLOSEST_SIDE */: - // The ending shape is sized so that that it exactly meets the side of the gradient box closest to the gradient’s center. - // If the shape is an ellipse, it exactly meets the closest side in each dimension. - if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.min(Math.abs(x), Math.abs(x - width), Math.abs(y), Math.abs(y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - rx = Math.min(Math.abs(x), Math.abs(x - width)); - ry = Math.min(Math.abs(y), Math.abs(y - height)); - } - break; - case 2 /* CLOSEST_CORNER */: - // The ending shape is sized so that that it passes through the corner of the gradient box closest to the gradient’s center. - // If the shape is an ellipse, the ending shape is given the same aspect-ratio it would have if closest-side were specified. - if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.min(distance(x, y), distance(x, y - height), distance(x - width, y), distance(x - width, y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - // Compute the ratio ry/rx (which is to be the same as for "closest-side") - var c = Math.min(Math.abs(y), Math.abs(y - height)) / Math.min(Math.abs(x), Math.abs(x - width)); - var _a = findCorner(width, height, x, y, true), cx = _a[0], cy = _a[1]; - rx = distance(cx - x, (cy - y) / c); - ry = c * rx; - } - break; - case 1 /* FARTHEST_SIDE */: - // Same as closest-side, except the ending shape is sized based on the farthest side(s) - if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.max(Math.abs(x), Math.abs(x - width), Math.abs(y), Math.abs(y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - rx = Math.max(Math.abs(x), Math.abs(x - width)); - ry = Math.max(Math.abs(y), Math.abs(y - height)); - } - break; - case 3 /* FARTHEST_CORNER */: - // Same as closest-corner, except the ending shape is sized based on the farthest corner. - // If the shape is an ellipse, the ending shape is given the same aspect ratio it would have if farthest-side were specified. 
- if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.max(distance(x, y), distance(x, y - height), distance(x - width, y), distance(x - width, y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - // Compute the ratio ry/rx (which is to be the same as for "farthest-side") - var c = Math.max(Math.abs(y), Math.abs(y - height)) / Math.max(Math.abs(x), Math.abs(x - width)); - var _b = findCorner(width, height, x, y, false), cx = _b[0], cy = _b[1]; - rx = distance(cx - x, (cy - y) / c); - ry = c * rx; - } - break; - } - if (Array.isArray(gradient.size)) { - rx = getAbsoluteValue(gradient.size[0], width); - ry = gradient.size.length === 2 ? getAbsoluteValue(gradient.size[1], height) : rx; - } - return [rx, ry]; - }; - - var linearGradient = function (context, tokens) { - var angle$1 = deg(180); - var stops = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - if (i === 0) { - var firstToken = arg[0]; - if (firstToken.type === 20 /* IDENT_TOKEN */ && firstToken.value === 'to') { - angle$1 = parseNamedSide(arg); - return; - } - else if (isAngle(firstToken)) { - angle$1 = angle.parse(context, firstToken); - return; - } - } - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - }); - return { angle: angle$1, stops: stops, type: 1 /* LINEAR_GRADIENT */ }; - }; - - var prefixLinearGradient = function (context, tokens) { - var angle$1 = deg(180); - var stops = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - if (i === 0) { - var firstToken = arg[0]; - if (firstToken.type === 20 /* IDENT_TOKEN */ && - ['top', 'left', 'right', 'bottom'].indexOf(firstToken.value) !== -1) { - angle$1 = parseNamedSide(arg); - return; - } - else if (isAngle(firstToken)) { - angle$1 = (angle.parse(context, firstToken) + deg(270)) % deg(360); - return; - } - } - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - }); - return { - angle: angle$1, - stops: stops, - type: 1 /* LINEAR_GRADIENT */ - }; - }; - - var webkitGradient = function (context, tokens) { - var angle = deg(180); - var stops = []; - var type = 1 /* LINEAR_GRADIENT */; - var shape = 0 /* CIRCLE */; - var size = 3 /* FARTHEST_CORNER */; - var position = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - var firstToken = arg[0]; - if (i === 0) { - if (isIdentToken(firstToken) && firstToken.value === 'linear') { - type = 1 /* LINEAR_GRADIENT */; - return; - } - else if (isIdentToken(firstToken) && firstToken.value === 'radial') { - type = 2 /* RADIAL_GRADIENT */; - return; - } - } - if (firstToken.type === 18 /* FUNCTION */) { - if (firstToken.name === 'from') { - var color = color$1.parse(context, firstToken.values[0]); - stops.push({ stop: ZERO_LENGTH, color: color }); - } - else if (firstToken.name === 'to') { - var color = color$1.parse(context, firstToken.values[0]); - stops.push({ stop: HUNDRED_PERCENT, color: color }); - } - else if (firstToken.name === 'color-stop') { - var values = firstToken.values.filter(nonFunctionArgSeparator); - if (values.length === 2) { - var color = color$1.parse(context, values[1]); - var stop_1 = values[0]; - if (isNumberToken(stop_1)) { - stops.push({ - stop: { type: 16 /* PERCENTAGE_TOKEN */, number: stop_1.number * 100, flags: stop_1.flags }, - color: color - }); - } - } - } - } - }); - return type === 1 /* LINEAR_GRADIENT */ - ? 
{ - angle: (angle + deg(180)) % deg(360), - stops: stops, - type: type - } - : { size: size, shape: shape, stops: stops, position: position, type: type }; - }; - - var CLOSEST_SIDE = 'closest-side'; - var FARTHEST_SIDE = 'farthest-side'; - var CLOSEST_CORNER = 'closest-corner'; - var FARTHEST_CORNER = 'farthest-corner'; - var CIRCLE = 'circle'; - var ELLIPSE = 'ellipse'; - var COVER = 'cover'; - var CONTAIN = 'contain'; - var radialGradient = function (context, tokens) { - var shape = 0 /* CIRCLE */; - var size = 3 /* FARTHEST_CORNER */; - var stops = []; - var position = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - var isColorStop = true; - if (i === 0) { - var isAtPosition_1 = false; - isColorStop = arg.reduce(function (acc, token) { - if (isAtPosition_1) { - if (isIdentToken(token)) { - switch (token.value) { - case 'center': - position.push(FIFTY_PERCENT); - return acc; - case 'top': - case 'left': - position.push(ZERO_LENGTH); - return acc; - case 'right': - case 'bottom': - position.push(HUNDRED_PERCENT); - return acc; - } - } - else if (isLengthPercentage(token) || isLength(token)) { - position.push(token); - } - } - else if (isIdentToken(token)) { - switch (token.value) { - case CIRCLE: - shape = 0 /* CIRCLE */; - return false; - case ELLIPSE: - shape = 1 /* ELLIPSE */; - return false; - case 'at': - isAtPosition_1 = true; - return false; - case CLOSEST_SIDE: - size = 0 /* CLOSEST_SIDE */; - return false; - case COVER: - case FARTHEST_SIDE: - size = 1 /* FARTHEST_SIDE */; - return false; - case CONTAIN: - case CLOSEST_CORNER: - size = 2 /* CLOSEST_CORNER */; - return false; - case FARTHEST_CORNER: - size = 3 /* FARTHEST_CORNER */; - return false; - } - } - else if (isLength(token) || isLengthPercentage(token)) { - if (!Array.isArray(size)) { - size = []; - } - size.push(token); - return false; - } - return acc; - }, isColorStop); - } - if (isColorStop) { - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - } - }); - return { size: size, shape: shape, stops: stops, position: position, type: 2 /* RADIAL_GRADIENT */ }; - }; - - var prefixRadialGradient = function (context, tokens) { - var shape = 0 /* CIRCLE */; - var size = 3 /* FARTHEST_CORNER */; - var stops = []; - var position = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - var isColorStop = true; - if (i === 0) { - isColorStop = arg.reduce(function (acc, token) { - if (isIdentToken(token)) { - switch (token.value) { - case 'center': - position.push(FIFTY_PERCENT); - return false; - case 'top': - case 'left': - position.push(ZERO_LENGTH); - return false; - case 'right': - case 'bottom': - position.push(HUNDRED_PERCENT); - return false; - } - } - else if (isLengthPercentage(token) || isLength(token)) { - position.push(token); - return false; - } - return acc; - }, isColorStop); - } - else if (i === 1) { - isColorStop = arg.reduce(function (acc, token) { - if (isIdentToken(token)) { - switch (token.value) { - case CIRCLE: - shape = 0 /* CIRCLE */; - return false; - case ELLIPSE: - shape = 1 /* ELLIPSE */; - return false; - case CONTAIN: - case CLOSEST_SIDE: - size = 0 /* CLOSEST_SIDE */; - return false; - case FARTHEST_SIDE: - size = 1 /* FARTHEST_SIDE */; - return false; - case CLOSEST_CORNER: - size = 2 /* CLOSEST_CORNER */; - return false; - case COVER: - case FARTHEST_CORNER: - size = 3 /* FARTHEST_CORNER */; - return false; - } - } - else if (isLength(token) || isLengthPercentage(token)) { - if (!Array.isArray(size)) { - size = []; - } - size.push(token); - return false; - 
} - return acc; - }, isColorStop); - } - if (isColorStop) { - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - } - }); - return { size: size, shape: shape, stops: stops, position: position, type: 2 /* RADIAL_GRADIENT */ }; - }; - - var isLinearGradient = function (background) { - return background.type === 1 /* LINEAR_GRADIENT */; - }; - var isRadialGradient = function (background) { - return background.type === 2 /* RADIAL_GRADIENT */; - }; - var image = { - name: 'image', - parse: function (context, value) { - if (value.type === 22 /* URL_TOKEN */) { - var image_1 = { url: value.value, type: 0 /* URL */ }; - context.cache.addImage(value.value); - return image_1; - } - if (value.type === 18 /* FUNCTION */) { - var imageFunction = SUPPORTED_IMAGE_FUNCTIONS[value.name]; - if (typeof imageFunction === 'undefined') { - throw new Error("Attempting to parse an unsupported image function \"" + value.name + "\""); - } - return imageFunction(context, value.values); - } - throw new Error("Unsupported image type " + value.type); - } - }; - function isSupportedImage(value) { - return (!(value.type === 20 /* IDENT_TOKEN */ && value.value === 'none') && - (value.type !== 18 /* FUNCTION */ || !!SUPPORTED_IMAGE_FUNCTIONS[value.name])); - } - var SUPPORTED_IMAGE_FUNCTIONS = { - 'linear-gradient': linearGradient, - '-moz-linear-gradient': prefixLinearGradient, - '-ms-linear-gradient': prefixLinearGradient, - '-o-linear-gradient': prefixLinearGradient, - '-webkit-linear-gradient': prefixLinearGradient, - 'radial-gradient': radialGradient, - '-moz-radial-gradient': prefixRadialGradient, - '-ms-radial-gradient': prefixRadialGradient, - '-o-radial-gradient': prefixRadialGradient, - '-webkit-radial-gradient': prefixRadialGradient, - '-webkit-gradient': webkitGradient - }; - - var backgroundImage = { - name: 'background-image', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (context, tokens) { - if (tokens.length === 0) { - return []; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return []; - } - return tokens - .filter(function (value) { return nonFunctionArgSeparator(value) && isSupportedImage(value); }) - .map(function (value) { return image.parse(context, value); }); - } - }; - - var backgroundOrigin = { - name: 'background-origin', - initialValue: 'border-box', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.map(function (token) { - if (isIdentToken(token)) { - switch (token.value) { - case 'padding-box': - return 1 /* PADDING_BOX */; - case 'content-box': - return 2 /* CONTENT_BOX */; - } - } - return 0 /* BORDER_BOX */; - }); - } - }; - - var backgroundPosition = { - name: 'background-position', - initialValue: '0% 0%', - type: 1 /* LIST */, - prefix: false, - parse: function (_context, tokens) { - return parseFunctionArgs(tokens) - .map(function (values) { return values.filter(isLengthPercentage); }) - .map(parseLengthPercentageTuple); - } - }; - - var backgroundRepeat = { - name: 'background-repeat', - initialValue: 'repeat', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return parseFunctionArgs(tokens) - .map(function (values) { - return values - .filter(isIdentToken) - .map(function (token) { return token.value; }) - .join(' '); - }) - .map(parseBackgroundRepeat); - } - }; - var parseBackgroundRepeat = function (value) { - switch (value) { - case 'no-repeat': - return 1 /* NO_REPEAT */; - case 'repeat-x': - 
case 'repeat no-repeat': - return 2 /* REPEAT_X */; - case 'repeat-y': - case 'no-repeat repeat': - return 3 /* REPEAT_Y */; - case 'repeat': - default: - return 0 /* REPEAT */; - } - }; - - var BACKGROUND_SIZE; - (function (BACKGROUND_SIZE) { - BACKGROUND_SIZE["AUTO"] = "auto"; - BACKGROUND_SIZE["CONTAIN"] = "contain"; - BACKGROUND_SIZE["COVER"] = "cover"; - })(BACKGROUND_SIZE || (BACKGROUND_SIZE = {})); - var backgroundSize = { - name: 'background-size', - initialValue: '0', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return parseFunctionArgs(tokens).map(function (values) { return values.filter(isBackgroundSizeInfoToken); }); - } - }; - var isBackgroundSizeInfoToken = function (value) { - return isIdentToken(value) || isLengthPercentage(value); - }; - - var borderColorForSide = function (side) { return ({ - name: "border-" + side + "-color", - initialValue: 'transparent', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }); }; - var borderTopColor = borderColorForSide('top'); - var borderRightColor = borderColorForSide('right'); - var borderBottomColor = borderColorForSide('bottom'); - var borderLeftColor = borderColorForSide('left'); - - var borderRadiusForSide = function (side) { return ({ - name: "border-radius-" + side, - initialValue: '0 0', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return parseLengthPercentageTuple(tokens.filter(isLengthPercentage)); - } - }); }; - var borderTopLeftRadius = borderRadiusForSide('top-left'); - var borderTopRightRadius = borderRadiusForSide('top-right'); - var borderBottomRightRadius = borderRadiusForSide('bottom-right'); - var borderBottomLeftRadius = borderRadiusForSide('bottom-left'); - - var borderStyleForSide = function (side) { return ({ - name: "border-" + side + "-style", - initialValue: 'solid', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, style) { - switch (style) { - case 'none': - return 0 /* NONE */; - case 'dashed': - return 2 /* DASHED */; - case 'dotted': - return 3 /* DOTTED */; - case 'double': - return 4 /* DOUBLE */; - } - return 1 /* SOLID */; - } - }); }; - var borderTopStyle = borderStyleForSide('top'); - var borderRightStyle = borderStyleForSide('right'); - var borderBottomStyle = borderStyleForSide('bottom'); - var borderLeftStyle = borderStyleForSide('left'); - - var borderWidthForSide = function (side) { return ({ - name: "border-" + side + "-width", - initialValue: '0', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isDimensionToken(token)) { - return token.number; - } - return 0; - } - }); }; - var borderTopWidth = borderWidthForSide('top'); - var borderRightWidth = borderWidthForSide('right'); - var borderBottomWidth = borderWidthForSide('bottom'); - var borderLeftWidth = borderWidthForSide('left'); - - var color = { - name: "color", - initialValue: 'transparent', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var direction = { - name: 'direction', - initialValue: 'ltr', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, direction) { - switch (direction) { - case 'rtl': - return 1 /* RTL */; - case 'ltr': - default: - return 0 /* LTR */; - } - } - }; - - var display = { - name: 'display', - initialValue: 'inline-block', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.filter(isIdentToken).reduce(function (bit, token) { - return bit | parseDisplayValue(token.value); - }, 0 /* 
NONE */); - } - }; - var parseDisplayValue = function (display) { - switch (display) { - case 'block': - case '-webkit-box': - return 2 /* BLOCK */; - case 'inline': - return 4 /* INLINE */; - case 'run-in': - return 8 /* RUN_IN */; - case 'flow': - return 16 /* FLOW */; - case 'flow-root': - return 32 /* FLOW_ROOT */; - case 'table': - return 64 /* TABLE */; - case 'flex': - case '-webkit-flex': - return 128 /* FLEX */; - case 'grid': - case '-ms-grid': - return 256 /* GRID */; - case 'ruby': - return 512 /* RUBY */; - case 'subgrid': - return 1024 /* SUBGRID */; - case 'list-item': - return 2048 /* LIST_ITEM */; - case 'table-row-group': - return 4096 /* TABLE_ROW_GROUP */; - case 'table-header-group': - return 8192 /* TABLE_HEADER_GROUP */; - case 'table-footer-group': - return 16384 /* TABLE_FOOTER_GROUP */; - case 'table-row': - return 32768 /* TABLE_ROW */; - case 'table-cell': - return 65536 /* TABLE_CELL */; - case 'table-column-group': - return 131072 /* TABLE_COLUMN_GROUP */; - case 'table-column': - return 262144 /* TABLE_COLUMN */; - case 'table-caption': - return 524288 /* TABLE_CAPTION */; - case 'ruby-base': - return 1048576 /* RUBY_BASE */; - case 'ruby-text': - return 2097152 /* RUBY_TEXT */; - case 'ruby-base-container': - return 4194304 /* RUBY_BASE_CONTAINER */; - case 'ruby-text-container': - return 8388608 /* RUBY_TEXT_CONTAINER */; - case 'contents': - return 16777216 /* CONTENTS */; - case 'inline-block': - return 33554432 /* INLINE_BLOCK */; - case 'inline-list-item': - return 67108864 /* INLINE_LIST_ITEM */; - case 'inline-table': - return 134217728 /* INLINE_TABLE */; - case 'inline-flex': - return 268435456 /* INLINE_FLEX */; - case 'inline-grid': - return 536870912 /* INLINE_GRID */; - } - return 0 /* NONE */; - }; - - var float = { - name: 'float', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, float) { - switch (float) { - case 'left': - return 1 /* LEFT */; - case 'right': - return 2 /* RIGHT */; - case 'inline-start': - return 3 /* INLINE_START */; - case 'inline-end': - return 4 /* INLINE_END */; - } - return 0 /* NONE */; - } - }; - - var letterSpacing = { - name: 'letter-spacing', - initialValue: '0', - prefix: false, - type: 0 /* VALUE */, - parse: function (_context, token) { - if (token.type === 20 /* IDENT_TOKEN */ && token.value === 'normal') { - return 0; - } - if (token.type === 17 /* NUMBER_TOKEN */) { - return token.number; - } - if (token.type === 15 /* DIMENSION_TOKEN */) { - return token.number; - } - return 0; - } - }; - - var LINE_BREAK; - (function (LINE_BREAK) { - LINE_BREAK["NORMAL"] = "normal"; - LINE_BREAK["STRICT"] = "strict"; - })(LINE_BREAK || (LINE_BREAK = {})); - var lineBreak = { - name: 'line-break', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, lineBreak) { - switch (lineBreak) { - case 'strict': - return LINE_BREAK.STRICT; - case 'normal': - default: - return LINE_BREAK.NORMAL; - } - } - }; - - var lineHeight = { - name: 'line-height', - initialValue: 'normal', - prefix: false, - type: 4 /* TOKEN_VALUE */ - }; - var computeLineHeight = function (token, fontSize) { - if (isIdentToken(token) && token.value === 'normal') { - return 1.2 * fontSize; - } - else if (token.type === 17 /* NUMBER_TOKEN */) { - return fontSize * token.number; - } - else if (isLengthPercentage(token)) { - return getAbsoluteValue(token, fontSize); - } - return fontSize; - }; - - var listStyleImage = { - name: 'list-style-image', - initialValue: 
'none', - type: 0 /* VALUE */, - prefix: false, - parse: function (context, token) { - if (token.type === 20 /* IDENT_TOKEN */ && token.value === 'none') { - return null; - } - return image.parse(context, token); - } - }; - - var listStylePosition = { - name: 'list-style-position', - initialValue: 'outside', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, position) { - switch (position) { - case 'inside': - return 0 /* INSIDE */; - case 'outside': - default: - return 1 /* OUTSIDE */; - } - } - }; - - var listStyleType = { - name: 'list-style-type', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, type) { - switch (type) { - case 'disc': - return 0 /* DISC */; - case 'circle': - return 1 /* CIRCLE */; - case 'square': - return 2 /* SQUARE */; - case 'decimal': - return 3 /* DECIMAL */; - case 'cjk-decimal': - return 4 /* CJK_DECIMAL */; - case 'decimal-leading-zero': - return 5 /* DECIMAL_LEADING_ZERO */; - case 'lower-roman': - return 6 /* LOWER_ROMAN */; - case 'upper-roman': - return 7 /* UPPER_ROMAN */; - case 'lower-greek': - return 8 /* LOWER_GREEK */; - case 'lower-alpha': - return 9 /* LOWER_ALPHA */; - case 'upper-alpha': - return 10 /* UPPER_ALPHA */; - case 'arabic-indic': - return 11 /* ARABIC_INDIC */; - case 'armenian': - return 12 /* ARMENIAN */; - case 'bengali': - return 13 /* BENGALI */; - case 'cambodian': - return 14 /* CAMBODIAN */; - case 'cjk-earthly-branch': - return 15 /* CJK_EARTHLY_BRANCH */; - case 'cjk-heavenly-stem': - return 16 /* CJK_HEAVENLY_STEM */; - case 'cjk-ideographic': - return 17 /* CJK_IDEOGRAPHIC */; - case 'devanagari': - return 18 /* DEVANAGARI */; - case 'ethiopic-numeric': - return 19 /* ETHIOPIC_NUMERIC */; - case 'georgian': - return 20 /* GEORGIAN */; - case 'gujarati': - return 21 /* GUJARATI */; - case 'gurmukhi': - return 22 /* GURMUKHI */; - case 'hebrew': - return 22 /* HEBREW */; - case 'hiragana': - return 23 /* HIRAGANA */; - case 'hiragana-iroha': - return 24 /* HIRAGANA_IROHA */; - case 'japanese-formal': - return 25 /* JAPANESE_FORMAL */; - case 'japanese-informal': - return 26 /* JAPANESE_INFORMAL */; - case 'kannada': - return 27 /* KANNADA */; - case 'katakana': - return 28 /* KATAKANA */; - case 'katakana-iroha': - return 29 /* KATAKANA_IROHA */; - case 'khmer': - return 30 /* KHMER */; - case 'korean-hangul-formal': - return 31 /* KOREAN_HANGUL_FORMAL */; - case 'korean-hanja-formal': - return 32 /* KOREAN_HANJA_FORMAL */; - case 'korean-hanja-informal': - return 33 /* KOREAN_HANJA_INFORMAL */; - case 'lao': - return 34 /* LAO */; - case 'lower-armenian': - return 35 /* LOWER_ARMENIAN */; - case 'malayalam': - return 36 /* MALAYALAM */; - case 'mongolian': - return 37 /* MONGOLIAN */; - case 'myanmar': - return 38 /* MYANMAR */; - case 'oriya': - return 39 /* ORIYA */; - case 'persian': - return 40 /* PERSIAN */; - case 'simp-chinese-formal': - return 41 /* SIMP_CHINESE_FORMAL */; - case 'simp-chinese-informal': - return 42 /* SIMP_CHINESE_INFORMAL */; - case 'tamil': - return 43 /* TAMIL */; - case 'telugu': - return 44 /* TELUGU */; - case 'thai': - return 45 /* THAI */; - case 'tibetan': - return 46 /* TIBETAN */; - case 'trad-chinese-formal': - return 47 /* TRAD_CHINESE_FORMAL */; - case 'trad-chinese-informal': - return 48 /* TRAD_CHINESE_INFORMAL */; - case 'upper-armenian': - return 49 /* UPPER_ARMENIAN */; - case 'disclosure-open': - return 50 /* DISCLOSURE_OPEN */; - case 'disclosure-closed': - return 51 /* DISCLOSURE_CLOSED */; - case 
'none': - default: - return -1 /* NONE */; - } - } - }; - - var marginForSide = function (side) { return ({ - name: "margin-" + side, - initialValue: '0', - prefix: false, - type: 4 /* TOKEN_VALUE */ - }); }; - var marginTop = marginForSide('top'); - var marginRight = marginForSide('right'); - var marginBottom = marginForSide('bottom'); - var marginLeft = marginForSide('left'); - - var overflow = { - name: 'overflow', - initialValue: 'visible', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.filter(isIdentToken).map(function (overflow) { - switch (overflow.value) { - case 'hidden': - return 1 /* HIDDEN */; - case 'scroll': - return 2 /* SCROLL */; - case 'clip': - return 3 /* CLIP */; - case 'auto': - return 4 /* AUTO */; - case 'visible': - default: - return 0 /* VISIBLE */; - } - }); - } - }; - - var overflowWrap = { - name: 'overflow-wrap', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, overflow) { - switch (overflow) { - case 'break-word': - return "break-word" /* BREAK_WORD */; - case 'normal': - default: - return "normal" /* NORMAL */; - } - } - }; - - var paddingForSide = function (side) { return ({ - name: "padding-" + side, - initialValue: '0', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'length-percentage' - }); }; - var paddingTop = paddingForSide('top'); - var paddingRight = paddingForSide('right'); - var paddingBottom = paddingForSide('bottom'); - var paddingLeft = paddingForSide('left'); - - var textAlign = { - name: 'text-align', - initialValue: 'left', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, textAlign) { - switch (textAlign) { - case 'right': - return 2 /* RIGHT */; - case 'center': - case 'justify': - return 1 /* CENTER */; - case 'left': - default: - return 0 /* LEFT */; - } - } - }; - - var position = { - name: 'position', - initialValue: 'static', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, position) { - switch (position) { - case 'relative': - return 1 /* RELATIVE */; - case 'absolute': - return 2 /* ABSOLUTE */; - case 'fixed': - return 3 /* FIXED */; - case 'sticky': - return 4 /* STICKY */; - } - return 0 /* STATIC */; - } - }; - - var textShadow = { - name: 'text-shadow', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (context, tokens) { - if (tokens.length === 1 && isIdentWithValue(tokens[0], 'none')) { - return []; - } - return parseFunctionArgs(tokens).map(function (values) { - var shadow = { - color: COLORS.TRANSPARENT, - offsetX: ZERO_LENGTH, - offsetY: ZERO_LENGTH, - blur: ZERO_LENGTH - }; - var c = 0; - for (var i = 0; i < values.length; i++) { - var token = values[i]; - if (isLength(token)) { - if (c === 0) { - shadow.offsetX = token; - } - else if (c === 1) { - shadow.offsetY = token; - } - else { - shadow.blur = token; - } - c++; - } - else { - shadow.color = color$1.parse(context, token); - } - } - return shadow; - }); - } - }; - - var textTransform = { - name: 'text-transform', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, textTransform) { - switch (textTransform) { - case 'uppercase': - return 2 /* UPPERCASE */; - case 'lowercase': - return 1 /* LOWERCASE */; - case 'capitalize': - return 3 /* CAPITALIZE */; - } - return 0 /* NONE */; - } - }; - - var transform$1 = { - name: 'transform', - initialValue: 'none', - prefix: true, - type: 0 /* VALUE */, - parse: function (_context, token) { - if 
(token.type === 20 /* IDENT_TOKEN */ && token.value === 'none') { - return null; - } - if (token.type === 18 /* FUNCTION */) { - var transformFunction = SUPPORTED_TRANSFORM_FUNCTIONS[token.name]; - if (typeof transformFunction === 'undefined') { - throw new Error("Attempting to parse an unsupported transform function \"" + token.name + "\""); - } - return transformFunction(token.values); - } - return null; - } - }; - var matrix = function (args) { - var values = args.filter(function (arg) { return arg.type === 17 /* NUMBER_TOKEN */; }).map(function (arg) { return arg.number; }); - return values.length === 6 ? values : null; - }; - // doesn't support 3D transforms at the moment - var matrix3d = function (args) { - var values = args.filter(function (arg) { return arg.type === 17 /* NUMBER_TOKEN */; }).map(function (arg) { return arg.number; }); - var a1 = values[0], b1 = values[1]; values[2]; values[3]; var a2 = values[4], b2 = values[5]; values[6]; values[7]; values[8]; values[9]; values[10]; values[11]; var a4 = values[12], b4 = values[13]; values[14]; values[15]; - return values.length === 16 ? [a1, b1, a2, b2, a4, b4] : null; - }; - var SUPPORTED_TRANSFORM_FUNCTIONS = { - matrix: matrix, - matrix3d: matrix3d - }; - - var DEFAULT_VALUE = { - type: 16 /* PERCENTAGE_TOKEN */, - number: 50, - flags: FLAG_INTEGER - }; - var DEFAULT = [DEFAULT_VALUE, DEFAULT_VALUE]; - var transformOrigin = { - name: 'transform-origin', - initialValue: '50% 50%', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - var origins = tokens.filter(isLengthPercentage); - if (origins.length !== 2) { - return DEFAULT; - } - return [origins[0], origins[1]]; - } - }; - - var visibility = { - name: 'visible', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, visibility) { - switch (visibility) { - case 'hidden': - return 1 /* HIDDEN */; - case 'collapse': - return 2 /* COLLAPSE */; - case 'visible': - default: - return 0 /* VISIBLE */; - } - } - }; - - var WORD_BREAK; - (function (WORD_BREAK) { - WORD_BREAK["NORMAL"] = "normal"; - WORD_BREAK["BREAK_ALL"] = "break-all"; - WORD_BREAK["KEEP_ALL"] = "keep-all"; - })(WORD_BREAK || (WORD_BREAK = {})); - var wordBreak = { - name: 'word-break', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, wordBreak) { - switch (wordBreak) { - case 'break-all': - return WORD_BREAK.BREAK_ALL; - case 'keep-all': - return WORD_BREAK.KEEP_ALL; - case 'normal': - default: - return WORD_BREAK.NORMAL; - } - } - }; - - var zIndex = { - name: 'z-index', - initialValue: 'auto', - prefix: false, - type: 0 /* VALUE */, - parse: function (_context, token) { - if (token.type === 20 /* IDENT_TOKEN */) { - return { auto: true, order: 0 }; - } - if (isNumberToken(token)) { - return { auto: false, order: token.number }; - } - throw new Error("Invalid z-index number parsed"); - } - }; - - var time = { - name: 'time', - parse: function (_context, value) { - if (value.type === 15 /* DIMENSION_TOKEN */) { - switch (value.unit.toLowerCase()) { - case 's': - return 1000 * value.number; - case 'ms': - return value.number; - } - } - throw new Error("Unsupported time type"); - } - }; - - var opacity = { - name: 'opacity', - initialValue: '1', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isNumberToken(token)) { - return token.number; - } - return 1; - } - }; - - var textDecorationColor = { - name: "text-decoration-color", - initialValue: 'transparent', - 
prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var textDecorationLine = { - name: 'text-decoration-line', - initialValue: 'none', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens - .filter(isIdentToken) - .map(function (token) { - switch (token.value) { - case 'underline': - return 1 /* UNDERLINE */; - case 'overline': - return 2 /* OVERLINE */; - case 'line-through': - return 3 /* LINE_THROUGH */; - case 'none': - return 4 /* BLINK */; - } - return 0 /* NONE */; - }) - .filter(function (line) { return line !== 0 /* NONE */; }); - } - }; - - var fontFamily = { - name: "font-family", - initialValue: '', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - var accumulator = []; - var results = []; - tokens.forEach(function (token) { - switch (token.type) { - case 20 /* IDENT_TOKEN */: - case 0 /* STRING_TOKEN */: - accumulator.push(token.value); - break; - case 17 /* NUMBER_TOKEN */: - accumulator.push(token.number.toString()); - break; - case 4 /* COMMA_TOKEN */: - results.push(accumulator.join(' ')); - accumulator.length = 0; - break; - } - }); - if (accumulator.length) { - results.push(accumulator.join(' ')); - } - return results.map(function (result) { return (result.indexOf(' ') === -1 ? result : "'" + result + "'"); }); - } - }; - - var fontSize = { - name: "font-size", - initialValue: '0', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'length' - }; - - var fontWeight = { - name: 'font-weight', - initialValue: 'normal', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isNumberToken(token)) { - return token.number; - } - if (isIdentToken(token)) { - switch (token.value) { - case 'bold': - return 700; - case 'normal': - default: - return 400; - } - } - return 400; - } - }; - - var fontVariant = { - name: 'font-variant', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (_context, tokens) { - return tokens.filter(isIdentToken).map(function (token) { return token.value; }); - } - }; - - var fontStyle = { - name: 'font-style', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, overflow) { - switch (overflow) { - case 'oblique': - return "oblique" /* OBLIQUE */; - case 'italic': - return "italic" /* ITALIC */; - case 'normal': - default: - return "normal" /* NORMAL */; - } - } - }; - - var contains = function (bit, value) { return (bit & value) !== 0; }; - - var content = { - name: 'content', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return []; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return []; - } - return tokens; - } - }; - - var counterIncrement = { - name: 'counter-increment', - initialValue: 'none', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return null; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return null; - } - var increments = []; - var filtered = tokens.filter(nonWhiteSpace); - for (var i = 0; i < filtered.length; i++) { - var counter = filtered[i]; - var next = filtered[i + 1]; - if (counter.type === 20 /* IDENT_TOKEN */) { - var increment = next && isNumberToken(next) ? 
next.number : 1; - increments.push({ counter: counter.value, increment: increment }); - } - } - return increments; - } - }; - - var counterReset = { - name: 'counter-reset', - initialValue: 'none', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return []; - } - var resets = []; - var filtered = tokens.filter(nonWhiteSpace); - for (var i = 0; i < filtered.length; i++) { - var counter = filtered[i]; - var next = filtered[i + 1]; - if (isIdentToken(counter) && counter.value !== 'none') { - var reset = next && isNumberToken(next) ? next.number : 0; - resets.push({ counter: counter.value, reset: reset }); - } - } - return resets; - } - }; - - var duration = { - name: 'duration', - initialValue: '0s', - prefix: false, - type: 1 /* LIST */, - parse: function (context, tokens) { - return tokens.filter(isDimensionToken).map(function (token) { return time.parse(context, token); }); - } - }; - - var quotes = { - name: 'quotes', - initialValue: 'none', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return null; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return null; - } - var quotes = []; - var filtered = tokens.filter(isStringToken); - if (filtered.length % 2 !== 0) { - return null; - } - for (var i = 0; i < filtered.length; i += 2) { - var open_1 = filtered[i].value; - var close_1 = filtered[i + 1].value; - quotes.push({ open: open_1, close: close_1 }); - } - return quotes; - } - }; - var getQuote = function (quotes, depth, open) { - if (!quotes) { - return ''; - } - var quote = quotes[Math.min(depth, quotes.length - 1)]; - if (!quote) { - return ''; - } - return open ? quote.open : quote.close; - }; - - var paintOrder = { - name: 'paint-order', - initialValue: 'normal', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - var DEFAULT_VALUE = [0 /* FILL */, 1 /* STROKE */, 2 /* MARKERS */]; - var layers = []; - tokens.filter(isIdentToken).forEach(function (token) { - switch (token.value) { - case 'stroke': - layers.push(1 /* STROKE */); - break; - case 'fill': - layers.push(0 /* FILL */); - break; - case 'markers': - layers.push(2 /* MARKERS */); - break; - } - }); - DEFAULT_VALUE.forEach(function (value) { - if (layers.indexOf(value) === -1) { - layers.push(value); - } - }); - return layers; - } - }; - - var webkitTextStrokeColor = { - name: "-webkit-text-stroke-color", - initialValue: 'currentcolor', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var webkitTextStrokeWidth = { - name: "-webkit-text-stroke-width", - initialValue: '0', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isDimensionToken(token)) { - return token.number; - } - return 0; - } - }; - - var CSSParsedDeclaration = /** @class */ (function () { - function CSSParsedDeclaration(context, declaration) { - var _a, _b; - this.animationDuration = parse(context, duration, declaration.animationDuration); - this.backgroundClip = parse(context, backgroundClip, declaration.backgroundClip); - this.backgroundColor = parse(context, backgroundColor, declaration.backgroundColor); - this.backgroundImage = parse(context, backgroundImage, declaration.backgroundImage); - this.backgroundOrigin = parse(context, backgroundOrigin, declaration.backgroundOrigin); - this.backgroundPosition = parse(context, backgroundPosition, declaration.backgroundPosition); - this.backgroundRepeat = 
parse(context, backgroundRepeat, declaration.backgroundRepeat); - this.backgroundSize = parse(context, backgroundSize, declaration.backgroundSize); - this.borderTopColor = parse(context, borderTopColor, declaration.borderTopColor); - this.borderRightColor = parse(context, borderRightColor, declaration.borderRightColor); - this.borderBottomColor = parse(context, borderBottomColor, declaration.borderBottomColor); - this.borderLeftColor = parse(context, borderLeftColor, declaration.borderLeftColor); - this.borderTopLeftRadius = parse(context, borderTopLeftRadius, declaration.borderTopLeftRadius); - this.borderTopRightRadius = parse(context, borderTopRightRadius, declaration.borderTopRightRadius); - this.borderBottomRightRadius = parse(context, borderBottomRightRadius, declaration.borderBottomRightRadius); - this.borderBottomLeftRadius = parse(context, borderBottomLeftRadius, declaration.borderBottomLeftRadius); - this.borderTopStyle = parse(context, borderTopStyle, declaration.borderTopStyle); - this.borderRightStyle = parse(context, borderRightStyle, declaration.borderRightStyle); - this.borderBottomStyle = parse(context, borderBottomStyle, declaration.borderBottomStyle); - this.borderLeftStyle = parse(context, borderLeftStyle, declaration.borderLeftStyle); - this.borderTopWidth = parse(context, borderTopWidth, declaration.borderTopWidth); - this.borderRightWidth = parse(context, borderRightWidth, declaration.borderRightWidth); - this.borderBottomWidth = parse(context, borderBottomWidth, declaration.borderBottomWidth); - this.borderLeftWidth = parse(context, borderLeftWidth, declaration.borderLeftWidth); - this.color = parse(context, color, declaration.color); - this.direction = parse(context, direction, declaration.direction); - this.display = parse(context, display, declaration.display); - this.float = parse(context, float, declaration.cssFloat); - this.fontFamily = parse(context, fontFamily, declaration.fontFamily); - this.fontSize = parse(context, fontSize, declaration.fontSize); - this.fontStyle = parse(context, fontStyle, declaration.fontStyle); - this.fontVariant = parse(context, fontVariant, declaration.fontVariant); - this.fontWeight = parse(context, fontWeight, declaration.fontWeight); - this.letterSpacing = parse(context, letterSpacing, declaration.letterSpacing); - this.lineBreak = parse(context, lineBreak, declaration.lineBreak); - this.lineHeight = parse(context, lineHeight, declaration.lineHeight); - this.listStyleImage = parse(context, listStyleImage, declaration.listStyleImage); - this.listStylePosition = parse(context, listStylePosition, declaration.listStylePosition); - this.listStyleType = parse(context, listStyleType, declaration.listStyleType); - this.marginTop = parse(context, marginTop, declaration.marginTop); - this.marginRight = parse(context, marginRight, declaration.marginRight); - this.marginBottom = parse(context, marginBottom, declaration.marginBottom); - this.marginLeft = parse(context, marginLeft, declaration.marginLeft); - this.opacity = parse(context, opacity, declaration.opacity); - var overflowTuple = parse(context, overflow, declaration.overflow); - this.overflowX = overflowTuple[0]; - this.overflowY = overflowTuple[overflowTuple.length > 1 ? 
1 : 0]; - this.overflowWrap = parse(context, overflowWrap, declaration.overflowWrap); - this.paddingTop = parse(context, paddingTop, declaration.paddingTop); - this.paddingRight = parse(context, paddingRight, declaration.paddingRight); - this.paddingBottom = parse(context, paddingBottom, declaration.paddingBottom); - this.paddingLeft = parse(context, paddingLeft, declaration.paddingLeft); - this.paintOrder = parse(context, paintOrder, declaration.paintOrder); - this.position = parse(context, position, declaration.position); - this.textAlign = parse(context, textAlign, declaration.textAlign); - this.textDecorationColor = parse(context, textDecorationColor, (_a = declaration.textDecorationColor) !== null && _a !== void 0 ? _a : declaration.color); - this.textDecorationLine = parse(context, textDecorationLine, (_b = declaration.textDecorationLine) !== null && _b !== void 0 ? _b : declaration.textDecoration); - this.textShadow = parse(context, textShadow, declaration.textShadow); - this.textTransform = parse(context, textTransform, declaration.textTransform); - this.transform = parse(context, transform$1, declaration.transform); - this.transformOrigin = parse(context, transformOrigin, declaration.transformOrigin); - this.visibility = parse(context, visibility, declaration.visibility); - this.webkitTextStrokeColor = parse(context, webkitTextStrokeColor, declaration.webkitTextStrokeColor); - this.webkitTextStrokeWidth = parse(context, webkitTextStrokeWidth, declaration.webkitTextStrokeWidth); - this.wordBreak = parse(context, wordBreak, declaration.wordBreak); - this.zIndex = parse(context, zIndex, declaration.zIndex); - } - CSSParsedDeclaration.prototype.isVisible = function () { - return this.display > 0 && this.opacity > 0 && this.visibility === 0 /* VISIBLE */; - }; - CSSParsedDeclaration.prototype.isTransparent = function () { - return isTransparent(this.backgroundColor); - }; - CSSParsedDeclaration.prototype.isTransformed = function () { - return this.transform !== null; - }; - CSSParsedDeclaration.prototype.isPositioned = function () { - return this.position !== 0 /* STATIC */; - }; - CSSParsedDeclaration.prototype.isPositionedWithZIndex = function () { - return this.isPositioned() && !this.zIndex.auto; - }; - CSSParsedDeclaration.prototype.isFloating = function () { - return this.float !== 0 /* NONE */; - }; - CSSParsedDeclaration.prototype.isInlineLevel = function () { - return (contains(this.display, 4 /* INLINE */) || - contains(this.display, 33554432 /* INLINE_BLOCK */) || - contains(this.display, 268435456 /* INLINE_FLEX */) || - contains(this.display, 536870912 /* INLINE_GRID */) || - contains(this.display, 67108864 /* INLINE_LIST_ITEM */) || - contains(this.display, 134217728 /* INLINE_TABLE */)); - }; - return CSSParsedDeclaration; - }()); - var CSSParsedPseudoDeclaration = /** @class */ (function () { - function CSSParsedPseudoDeclaration(context, declaration) { - this.content = parse(context, content, declaration.content); - this.quotes = parse(context, quotes, declaration.quotes); - } - return CSSParsedPseudoDeclaration; - }()); - var CSSParsedCounterDeclaration = /** @class */ (function () { - function CSSParsedCounterDeclaration(context, declaration) { - this.counterIncrement = parse(context, counterIncrement, declaration.counterIncrement); - this.counterReset = parse(context, counterReset, declaration.counterReset); - } - return CSSParsedCounterDeclaration; - }()); - // eslint-disable-next-line @typescript-eslint/no-explicit-any - var parse = function (context, descriptor, 
style) { - var tokenizer = new Tokenizer(); - var value = style !== null && typeof style !== 'undefined' ? style.toString() : descriptor.initialValue; - tokenizer.write(value); - var parser = new Parser(tokenizer.read()); - switch (descriptor.type) { - case 2 /* IDENT_VALUE */: - var token = parser.parseComponentValue(); - return descriptor.parse(context, isIdentToken(token) ? token.value : descriptor.initialValue); - case 0 /* VALUE */: - return descriptor.parse(context, parser.parseComponentValue()); - case 1 /* LIST */: - return descriptor.parse(context, parser.parseComponentValues()); - case 4 /* TOKEN_VALUE */: - return parser.parseComponentValue(); - case 3 /* TYPE_VALUE */: - switch (descriptor.format) { - case 'angle': - return angle.parse(context, parser.parseComponentValue()); - case 'color': - return color$1.parse(context, parser.parseComponentValue()); - case 'image': - return image.parse(context, parser.parseComponentValue()); - case 'length': - var length_1 = parser.parseComponentValue(); - return isLength(length_1) ? length_1 : ZERO_LENGTH; - case 'length-percentage': - var value_1 = parser.parseComponentValue(); - return isLengthPercentage(value_1) ? value_1 : ZERO_LENGTH; - case 'time': - return time.parse(context, parser.parseComponentValue()); - } - break; - } - }; - - var elementDebuggerAttribute = 'data-html2canvas-debug'; - var getElementDebugType = function (element) { - var attribute = element.getAttribute(elementDebuggerAttribute); - switch (attribute) { - case 'all': - return 1 /* ALL */; - case 'clone': - return 2 /* CLONE */; - case 'parse': - return 3 /* PARSE */; - case 'render': - return 4 /* RENDER */; - default: - return 0 /* NONE */; - } - }; - var isDebugging = function (element, type) { - var elementType = getElementDebugType(element); - return elementType === 1 /* ALL */ || type === elementType; - }; - - var ElementContainer = /** @class */ (function () { - function ElementContainer(context, element) { - this.context = context; - this.textNodes = []; - this.elements = []; - this.flags = 0; - if (isDebugging(element, 3 /* PARSE */)) { - debugger; - } - this.styles = new CSSParsedDeclaration(context, window.getComputedStyle(element, null)); - if (isHTMLElementNode(element)) { - if (this.styles.animationDuration.some(function (duration) { return duration > 0; })) { - element.style.animationDuration = '0s'; - } - if (this.styles.transform !== null) { - // getBoundingClientRect takes transforms into account - element.style.transform = 'none'; - } - } - this.bounds = parseBounds(this.context, element); - if (isDebugging(element, 4 /* RENDER */)) { - this.flags |= 16 /* DEBUG_RENDER */; - } - } - return ElementContainer; - }()); - - /* - * text-segmentation 1.0.3 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var base64 = 
'AAAAAAAAAAAAEA4AGBkAAFAaAAACAAAAAAAIABAAGAAwADgACAAQAAgAEAAIABAACAAQAAgAEAAIABAACAAQAAgAEAAIABAAQABIAEQATAAIABAACAAQAAgAEAAIABAAVABcAAgAEAAIABAACAAQAGAAaABwAHgAgACIAI4AlgAIABAAmwCjAKgAsAC2AL4AvQDFAMoA0gBPAVYBWgEIAAgACACMANoAYgFkAWwBdAF8AX0BhQGNAZUBlgGeAaMBlQGWAasBswF8AbsBwwF0AcsBYwHTAQgA2wG/AOMBdAF8AekB8QF0AfkB+wHiAHQBfAEIAAMC5gQIAAsCEgIIAAgAFgIeAggAIgIpAggAMQI5AkACygEIAAgASAJQAlgCYAIIAAgACAAKBQoFCgUTBRMFGQUrBSsFCAAIAAgACAAIAAgACAAIAAgACABdAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABoAmgCrwGvAQgAbgJ2AggAHgEIAAgACADnAXsCCAAIAAgAgwIIAAgACAAIAAgACACKAggAkQKZAggAPADJAAgAoQKkAqwCsgK6AsICCADJAggA0AIIAAgACAAIANYC3gIIAAgACAAIAAgACABAAOYCCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAkASoB+QIEAAgACAA8AEMCCABCBQgACABJBVAFCAAIAAgACAAIAAgACAAIAAgACABTBVoFCAAIAFoFCABfBWUFCAAIAAgACAAIAAgAbQUIAAgACAAIAAgACABzBXsFfQWFBYoFigWKBZEFigWKBYoFmAWfBaYFrgWxBbkFCAAIAAgACAAIAAgACAAIAAgACAAIAMEFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAMgFCADQBQgACAAIAAgACAAIAAgACAAIAAgACAAIAO4CCAAIAAgAiQAIAAgACABAAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAD0AggACAD8AggACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIANYFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACA
AIAAgACAAIAAgACAAIAAgACAAIAAMDvwAIAAgAJAIIAAgACAAIAAgACAAIAAgACwMTAwgACAB9BOsEGwMjAwgAKwMyAwsFYgE3A/MEPwMIAEUDTQNRAwgAWQOsAGEDCAAIAAgACAAIAAgACABpAzQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFIQUoBSwFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABtAwgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABMAEwACAAIAAgACAAIABgACAAIAAgACAC/AAgACAAyAQgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACACAAIAAwAAgACAAIAAgACAAIAAgACAAIAAAARABIAAgACAAIABQASAAIAAgAIABwAEAAjgCIABsAqAC2AL0AigDQAtwC+IJIQqVAZUBWQqVAZUBlQGVAZUBlQGrC5UBlQGVAZUBlQGVAZUBlQGVAXsKlQGVAbAK6wsrDGUMpQzlDJUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAfAKAAuZA64AtwCJALoC6ADwAAgAuACgA/oEpgO6AqsD+AAIAAgAswMIAAgACAAIAIkAuwP5AfsBwwPLAwgACAAIAAgACADRA9kDCAAIAOED6QMIAAgACAAIAAgACADuA/YDCAAIAP4DyQAIAAgABgQIAAgAXQAOBAgACAAIAAgACAAIABMECAAIAAgACAAIAAgACAD8AAQBCAAIAAgAGgQiBCoECAExBAgAEAEIAAgACAAIAAgACAAIAAgACAAIAAgACAA4BAgACABABEYECAAIAAgATAQYAQgAVAQIAAgACAAIAAgACAAIAAgACAAIAFoECAAIAAgACAAIA
AgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAOQEIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAB+BAcACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAEABhgSMBAgACAAIAAgAlAQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAwAEAAQABAADAAMAAwADAAQABAAEAAQABAAEAAQABHATAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAdQMIAAgACAAIAAgACAAIAMkACAAIAAgAfQMIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACACFA4kDCAAIAAgACAAIAOcBCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAIcDCAAIAAgACAAIAAgACAAIAAgACAAIAJEDCAAIAAgACADFAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABgBAgAZgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAbAQCBXIECAAIAHkECAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABAAJwEQACjBKoEsgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAC6BMIECAAIAAgACAAIAAgACABmBAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAxwQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAGYECAAIAAgAzgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAigWKBYoFigWKBYoFigWKBd0FXwUIAOIF6gXxBYoF3gT5BQAGCAaKBYoFigWKBYoFigWKBYoFigWKBYoFigXWBIoFigWKBYoFigWKBYoFigWKBYsFEAaKBYoFigWKBYoFigWKBRQGCACKBYoFigWKBQgACAAIANEECAAIABgGigUgBggAJgYIAC4GMwaKBYoF0wQ3Bj4GigWKBYoFigWKBYoFigWKBYoFigWKBYoFigUIAAgACAAIAAgACAAIAAgAigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWLBf///////wQABAAEAAQABAAEAAQABAAEAAQAAwAEAAQAAgAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAQADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAUAAAAFAAUAAAAFAAUAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUAAQAAAAUABQAFAAUABQAFAAAAAAAFAAUAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAFAAUAAQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUABQAFAAAABwAHAAcAAAAHAAcABwAFAAEAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAcABwAFAAUAAAAAAAEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAAAAQABAAAAAAAAAAAAAAAFAAUABQAFAAAABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABwAHAAcAAAAHAAcAAAAAAAUABQAHAAUAAQAHAAEABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABwABAAUABQAFAAUAAAAAAAAAAAAAAAEAAQABAAEAAQABAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABQANAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEAAQABAAEAAQABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAABQAHAAUABQAFAAAAAAAAAAcABQAFAAUABQAFAAQABAAEAAQABAAEAAQABAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUAAAAFAAUABQAFAAUAAAAFAAUABQAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAAAAAAAAAAAAUABQAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAUAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABwAHAAcABwAFAAcABwAAAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAUABwAHAAUABQAFAAUAAAAAAAcABwAAAAAABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAABQAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAABwAHAAcABQAFAAAAAAAAAAAABQAFAAAAAAAFAAUABQAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAFAAUABQAFAAUAAAAFAAUABwAAAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAFAAUABwAFAAUABQAFAAAAAAAHAAcAAAAAAAcABwAFAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABwAAAAAAAAAHAAcABwAAAAcABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAABQAHAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAcABwAAAAUABQAFAAAABQAFAAUABQAAAAAAAAAAAAAAAAA
AAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABQAHAAcABQAHAAcAAAAFAAcABwAAAAcABwAFAAUAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAFAAcABwAFAAUABQAAAAUAAAAHAAcABwAHAAcABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAHAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAABwAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAUAAAAFAAAAAAAAAAAABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUABQAFAAUAAAAFAAUAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABwAFAAUABQAFAAUABQAAAAUABQAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABQAFAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABQAFAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAHAAUABQAFAAUABQAFAAUABwAHAAcABwAHAAcABwAHAAUABwAHAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABwAHAAcABwAFAAUABwAHAAcAAAAAAAAAAAAHAAcABQAHAAcABwAHAAcABwAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAHAAUABQAFAAUABQAFAAUAAAAFAAAABQAAAAAABQAFAAUABQAFAAUABQAFAAcABwAHAAcABwAHAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAUABQAFAAUABQAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABwAFAAcABwAHAAcABwAFAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAUABQAFAAUABwAHAAUABQAHAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABQAFAAcABwAHAAUABwAFAAUABQAHAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAUABQAFAAUABQAFAAUABQAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAcABQAFAAUABQAFAAUABQAAAAAAAAAAAAUAAAAAAAAAAAAAAAAABQAAAAAABwAFAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUAAAAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAABQAAAAAAAAAFAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAUABQAHAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAA
UABQAFAAUABQAHAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABwAFAAUABQAFAAcABwAFAAUABwAHAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAcABwAFAAUABwAHAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAFAAUABQAAAAAABQAFAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAFAAcABwAAAAAAAAAAAAAABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAFAAcABwAFAAcABwAAAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAFAAUABQAAAAUABQAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABwAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABQAFAAUABQAFAAUABQAFAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAHAAcABQAHAAUABQAAAAAAAAAAAAAAAAAFAAAABwAHAAcABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAcABwAAAAAABwAHAAAAAAAHAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABwAHAAUABQAFAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABQAFAAUABQAFAAUABwAFAAcABwAFAAcABQAFAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABQAFAAUABQAAAAAABwAHAAcABwAFAAUABwAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAHAAUABQAFAAUABQAFAAUABQAHAAcABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAFAAcABwAFAAUABQAFAAUABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAcABwAFAAUABQAFAAcABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABQAHAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAAAAAAFAAUABwAHAAcABwAFAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABwAHAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAHAAUABQAFAAUABQAFAAUABwAFAAUABwAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAAAAAAAABQAAAAUABQAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAHAAcAAAAFAAUAAAAHAAcABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAAAAAAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAUABQAFAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAABQAFAAUABQAFAAUABQAAAAUABQAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAFAAUABQAFAAUADgAOAA4ADgAOAA4ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAAAAAAAAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAMAAwADAAMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkAAAAAAAAAAAAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAAAAAAAAAAAAsADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwACwAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAADgAOAA4AAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAAAA4ADgAOAA4ADgAOAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAA4AAAAOAAAAAAAAAAAAAAAAAA4AAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAADgAAAAAAAAAAAA4AAAAOAAAAAAAAAAAADgAOAA4AAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4AAAAAAA4ADgAOAA4ADgAOAA4ADgAOAAAADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4AAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAOAA4ADgAOAA4ADgAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAAAAAAA='; - - /* - * utrie 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars$1 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$1 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$1 = 0; i$1 < chars$1.length; i$1++) { - lookup$1[chars$1.charCodeAt(i$1)] = i$1; - } - var decode = function (base64) { - var bufferLength = base64.length * 0.75, len = base64.length, i, p = 0, encoded1, encoded2, encoded3, encoded4; - if (base64[base64.length - 1] === '=') { - bufferLength--; - if (base64[base64.length - 2] === '=') { - bufferLength--; - } - } - var buffer = typeof ArrayBuffer !== 'undefined' && - typeof Uint8Array !== 'undefined' && - typeof Uint8Array.prototype.slice !== 'undefined' - ? new ArrayBuffer(bufferLength) - : new Array(bufferLength); - var bytes = Array.isArray(buffer) ? buffer : new Uint8Array(buffer); - for (i = 0; i < len; i += 4) { - encoded1 = lookup$1[base64.charCodeAt(i)]; - encoded2 = lookup$1[base64.charCodeAt(i + 1)]; - encoded3 = lookup$1[base64.charCodeAt(i + 2)]; - encoded4 = lookup$1[base64.charCodeAt(i + 3)]; - bytes[p++] = (encoded1 << 2) | (encoded2 >> 4); - bytes[p++] = ((encoded2 & 15) << 4) | (encoded3 >> 2); - bytes[p++] = ((encoded3 & 3) << 6) | (encoded4 & 63); - } - return buffer; - }; - var polyUint16Array = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 2) { - bytes.push((buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - var polyUint32Array = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 4) { - bytes.push((buffer[i + 3] << 24) | (buffer[i + 2] << 16) | (buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - - /** Shift size for getting the index-2 table offset. */ - var UTRIE2_SHIFT_2 = 5; - /** Shift size for getting the index-1 table offset. */ - var UTRIE2_SHIFT_1 = 6 + 5; - /** - * Shift size for shifting left the index array values. - * Increases possible data size with 16-bit index values at the cost - * of compactability. - * This requires data blocks to be aligned by UTRIE2_DATA_GRANULARITY. 
- */ - var UTRIE2_INDEX_SHIFT = 2; - /** - * Difference between the two shift sizes, - * for getting an index-1 offset from an index-2 offset. 6=11-5 - */ - var UTRIE2_SHIFT_1_2 = UTRIE2_SHIFT_1 - UTRIE2_SHIFT_2; - /** - * The part of the index-2 table for U+D800..U+DBFF stores values for - * lead surrogate code _units_ not code _points_. - * Values for lead surrogate code _points_ are indexed with this portion of the table. - * Length=32=0x20=0x400>>UTRIE2_SHIFT_2. (There are 1024=0x400 lead surrogates.) - */ - var UTRIE2_LSCP_INDEX_2_OFFSET = 0x10000 >> UTRIE2_SHIFT_2; - /** Number of entries in a data block. 32=0x20 */ - var UTRIE2_DATA_BLOCK_LENGTH = 1 << UTRIE2_SHIFT_2; - /** Mask for getting the lower bits for the in-data-block offset. */ - var UTRIE2_DATA_MASK = UTRIE2_DATA_BLOCK_LENGTH - 1; - var UTRIE2_LSCP_INDEX_2_LENGTH = 0x400 >> UTRIE2_SHIFT_2; - /** Count the lengths of both BMP pieces. 2080=0x820 */ - var UTRIE2_INDEX_2_BMP_LENGTH = UTRIE2_LSCP_INDEX_2_OFFSET + UTRIE2_LSCP_INDEX_2_LENGTH; - /** - * The 2-byte UTF-8 version of the index-2 table follows at offset 2080=0x820. - * Length 32=0x20 for lead bytes C0..DF, regardless of UTRIE2_SHIFT_2. - */ - var UTRIE2_UTF8_2B_INDEX_2_OFFSET = UTRIE2_INDEX_2_BMP_LENGTH; - var UTRIE2_UTF8_2B_INDEX_2_LENGTH = 0x800 >> 6; /* U+0800 is the first code point after 2-byte UTF-8 */ - /** - * The index-1 table, only used for supplementary code points, at offset 2112=0x840. - * Variable length, for code points up to highStart, where the last single-value range starts. - * Maximum length 512=0x200=0x100000>>UTRIE2_SHIFT_1. - * (For 0x100000 supplementary code points U+10000..U+10ffff.) - * - * The part of the index-2 table for supplementary code points starts - * after this index-1 table. - * - * Both the index-1 table and the following part of the index-2 table - * are omitted completely if there is only BMP data. - */ - var UTRIE2_INDEX_1_OFFSET = UTRIE2_UTF8_2B_INDEX_2_OFFSET + UTRIE2_UTF8_2B_INDEX_2_LENGTH; - /** - * Number of index-1 entries for the BMP. 32=0x20 - * This part of the index-1 table is omitted from the serialized form. - */ - var UTRIE2_OMITTED_BMP_INDEX_1_LENGTH = 0x10000 >> UTRIE2_SHIFT_1; - /** Number of entries in an index-2 block. 64=0x40 */ - var UTRIE2_INDEX_2_BLOCK_LENGTH = 1 << UTRIE2_SHIFT_1_2; - /** Mask for getting the lower bits for the in-index-2-block offset. */ - var UTRIE2_INDEX_2_MASK = UTRIE2_INDEX_2_BLOCK_LENGTH - 1; - var slice16 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint16Array(Array.prototype.slice.call(view, start, end)); - }; - var slice32 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint32Array(Array.prototype.slice.call(view, start, end)); - }; - var createTrieFromBase64 = function (base64, _byteLength) { - var buffer = decode(base64); - var view32 = Array.isArray(buffer) ? polyUint32Array(buffer) : new Uint32Array(buffer); - var view16 = Array.isArray(buffer) ? polyUint16Array(buffer) : new Uint16Array(buffer); - var headerLength = 24; - var index = slice16(view16, headerLength / 2, view32[4] / 2); - var data = view32[5] === 2 - ? 
slice16(view16, (headerLength + view32[4]) / 2) - : slice32(view32, Math.ceil((headerLength + view32[4]) / 4)); - return new Trie(view32[0], view32[1], view32[2], view32[3], index, data); - }; - var Trie = /** @class */ (function () { - function Trie(initialValue, errorValue, highStart, highValueIndex, index, data) { - this.initialValue = initialValue; - this.errorValue = errorValue; - this.highStart = highStart; - this.highValueIndex = highValueIndex; - this.index = index; - this.data = data; - } - /** - * Get the value for a code point as stored in the Trie. - * - * @param codePoint the code point - * @return the value - */ - Trie.prototype.get = function (codePoint) { - var ix; - if (codePoint >= 0) { - if (codePoint < 0x0d800 || (codePoint > 0x0dbff && codePoint <= 0x0ffff)) { - // Ordinary BMP code point, excluding leading surrogates. - // BMP uses a single level lookup. BMP index starts at offset 0 in the Trie2 index. - // 16 bit data is stored in the index array itself. - ix = this.index[codePoint >> UTRIE2_SHIFT_2]; - ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK); - return this.data[ix]; - } - if (codePoint <= 0xffff) { - // Lead Surrogate Code Point. A Separate index section is stored for - // lead surrogate code units and code points. - // The main index has the code unit data. - // For this function, we need the code point data. - // Note: this expression could be refactored for slightly improved efficiency, but - // surrogate code points will be so rare in practice that it's not worth it. - ix = this.index[UTRIE2_LSCP_INDEX_2_OFFSET + ((codePoint - 0xd800) >> UTRIE2_SHIFT_2)]; - ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK); - return this.data[ix]; - } - if (codePoint < this.highStart) { - // Supplemental code point, use two-level lookup. - ix = UTRIE2_INDEX_1_OFFSET - UTRIE2_OMITTED_BMP_INDEX_1_LENGTH + (codePoint >> UTRIE2_SHIFT_1); - ix = this.index[ix]; - ix += (codePoint >> UTRIE2_SHIFT_2) & UTRIE2_INDEX_2_MASK; - ix = this.index[ix]; - ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK); - return this.data[ix]; - } - if (codePoint <= 0x10ffff) { - return this.data[this.highValueIndex]; - } - } - // Fall through. The code point is outside of the legal range of 0..0x10ffff. - return this.errorValue; - }; - return Trie; - }()); - - /* - * base64-arraybuffer 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup = typeof Uint8Array === 'undefined' ? 
[] : new Uint8Array(256); - for (var i = 0; i < chars.length; i++) { - lookup[chars.charCodeAt(i)] = i; - } - - var Prepend = 1; - var CR = 2; - var LF = 3; - var Control = 4; - var Extend = 5; - var SpacingMark = 7; - var L = 8; - var V = 9; - var T = 10; - var LV = 11; - var LVT = 12; - var ZWJ = 13; - var Extended_Pictographic = 14; - var RI = 15; - var toCodePoints = function (str) { - var codePoints = []; - var i = 0; - var length = str.length; - while (i < length) { - var value = str.charCodeAt(i++); - if (value >= 0xd800 && value <= 0xdbff && i < length) { - var extra = str.charCodeAt(i++); - if ((extra & 0xfc00) === 0xdc00) { - codePoints.push(((value & 0x3ff) << 10) + (extra & 0x3ff) + 0x10000); - } - else { - codePoints.push(value); - i--; - } - } - else { - codePoints.push(value); - } - } - return codePoints; - }; - var fromCodePoint = function () { - var codePoints = []; - for (var _i = 0; _i < arguments.length; _i++) { - codePoints[_i] = arguments[_i]; - } - if (String.fromCodePoint) { - return String.fromCodePoint.apply(String, codePoints); - } - var length = codePoints.length; - if (!length) { - return ''; - } - var codeUnits = []; - var index = -1; - var result = ''; - while (++index < length) { - var codePoint = codePoints[index]; - if (codePoint <= 0xffff) { - codeUnits.push(codePoint); - } - else { - codePoint -= 0x10000; - codeUnits.push((codePoint >> 10) + 0xd800, (codePoint % 0x400) + 0xdc00); - } - if (index + 1 === length || codeUnits.length > 0x4000) { - result += String.fromCharCode.apply(String, codeUnits); - codeUnits.length = 0; - } - } - return result; - }; - var UnicodeTrie = createTrieFromBase64(base64); - var BREAK_NOT_ALLOWED = '×'; - var BREAK_ALLOWED = '÷'; - var codePointToClass = function (codePoint) { return UnicodeTrie.get(codePoint); }; - var _graphemeBreakAtIndex = function (_codePoints, classTypes, index) { - var prevIndex = index - 2; - var prev = classTypes[prevIndex]; - var current = classTypes[index - 1]; - var next = classTypes[index]; - // GB3 Do not break between a CR and LF - if (current === CR && next === LF) { - return BREAK_NOT_ALLOWED; - } - // GB4 Otherwise, break before and after controls. - if (current === CR || current === LF || current === Control) { - return BREAK_ALLOWED; - } - // GB5 - if (next === CR || next === LF || next === Control) { - return BREAK_ALLOWED; - } - // Do not break Hangul syllable sequences. - // GB6 - if (current === L && [L, V, LV, LVT].indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED; - } - // GB7 - if ((current === LV || current === V) && (next === V || next === T)) { - return BREAK_NOT_ALLOWED; - } - // GB8 - if ((current === LVT || current === T) && next === T) { - return BREAK_NOT_ALLOWED; - } - // GB9 Do not break before extending characters or ZWJ. - if (next === ZWJ || next === Extend) { - return BREAK_NOT_ALLOWED; - } - // Do not break before SpacingMarks, or after Prepend characters. - // GB9a - if (next === SpacingMark) { - return BREAK_NOT_ALLOWED; - } - // GB9a - if (current === Prepend) { - return BREAK_NOT_ALLOWED; - } - // GB11 Do not break within emoji modifier sequences or emoji zwj sequences. - if (current === ZWJ && next === Extended_Pictographic) { - while (prev === Extend) { - prev = classTypes[--prevIndex]; - } - if (prev === Extended_Pictographic) { - return BREAK_NOT_ALLOWED; - } - } - // GB12 Do not break within emoji flag sequences. - // That is, do not break between regional indicator (RI) symbols - // if there is an odd number of RI characters before the break point. 
- if (current === RI && next === RI) { - var countRI = 0; - while (prev === RI) { - countRI++; - prev = classTypes[--prevIndex]; - } - if (countRI % 2 === 0) { - return BREAK_NOT_ALLOWED; - } - } - return BREAK_ALLOWED; - }; - var GraphemeBreaker = function (str) { - var codePoints = toCodePoints(str); - var length = codePoints.length; - var index = 0; - var lastEnd = 0; - var classTypes = codePoints.map(codePointToClass); - return { - next: function () { - if (index >= length) { - return { done: true, value: null }; - } - var graphemeBreak = BREAK_NOT_ALLOWED; - while (index < length && - (graphemeBreak = _graphemeBreakAtIndex(codePoints, classTypes, ++index)) === BREAK_NOT_ALLOWED) { } - if (graphemeBreak !== BREAK_NOT_ALLOWED || index === length) { - var value = fromCodePoint.apply(null, codePoints.slice(lastEnd, index)); - lastEnd = index; - return { value: value, done: false }; - } - return { done: true, value: null }; - }, - }; - }; - var splitGraphemes = function (str) { - var breaker = GraphemeBreaker(str); - var graphemes = []; - var bk; - while (!(bk = breaker.next()).done) { - if (bk.value) { - graphemes.push(bk.value.slice()); - } - } - return graphemes; - }; - - var testRangeBounds = function (document) { - var TEST_HEIGHT = 123; - if (document.createRange) { - var range = document.createRange(); - if (range.getBoundingClientRect) { - var testElement = document.createElement('boundtest'); - testElement.style.height = TEST_HEIGHT + "px"; - testElement.style.display = 'block'; - document.body.appendChild(testElement); - range.selectNode(testElement); - var rangeBounds = range.getBoundingClientRect(); - var rangeHeight = Math.round(rangeBounds.height); - document.body.removeChild(testElement); - if (rangeHeight === TEST_HEIGHT) { - return true; - } - } - } - return false; - }; - var testIOSLineBreak = function (document) { - var testElement = document.createElement('boundtest'); - testElement.style.width = '50px'; - testElement.style.display = 'block'; - testElement.style.fontSize = '12px'; - testElement.style.letterSpacing = '0px'; - testElement.style.wordSpacing = '0px'; - document.body.appendChild(testElement); - var range = document.createRange(); - testElement.innerHTML = typeof ''.repeat === 'function' ? 
'👨'.repeat(10) : ''; - var node = testElement.firstChild; - var textList = toCodePoints$1(node.data).map(function (i) { return fromCodePoint$1(i); }); - var offset = 0; - var prev = {}; - // ios 13 does not handle range getBoundingClientRect line changes correctly #2177 - var supports = textList.every(function (text, i) { - range.setStart(node, offset); - range.setEnd(node, offset + text.length); - var rect = range.getBoundingClientRect(); - offset += text.length; - var boundAhead = rect.x > prev.x || rect.y > prev.y; - prev = rect; - if (i === 0) { - return true; - } - return boundAhead; - }); - document.body.removeChild(testElement); - return supports; - }; - var testCORS = function () { return typeof new Image().crossOrigin !== 'undefined'; }; - var testResponseType = function () { return typeof new XMLHttpRequest().responseType === 'string'; }; - var testSVG = function (document) { - var img = new Image(); - var canvas = document.createElement('canvas'); - var ctx = canvas.getContext('2d'); - if (!ctx) { - return false; - } - img.src = "data:image/svg+xml,"; - try { - ctx.drawImage(img, 0, 0); - canvas.toDataURL(); - } - catch (e) { - return false; - } - return true; - }; - var isGreenPixel = function (data) { - return data[0] === 0 && data[1] === 255 && data[2] === 0 && data[3] === 255; - }; - var testForeignObject = function (document) { - var canvas = document.createElement('canvas'); - var size = 100; - canvas.width = size; - canvas.height = size; - var ctx = canvas.getContext('2d'); - if (!ctx) { - return Promise.reject(false); - } - ctx.fillStyle = 'rgb(0, 255, 0)'; - ctx.fillRect(0, 0, size, size); - var img = new Image(); - var greenImageSrc = canvas.toDataURL(); - img.src = greenImageSrc; - var svg = createForeignObjectSVG(size, size, 0, 0, img); - ctx.fillStyle = 'red'; - ctx.fillRect(0, 0, size, size); - return loadSerializedSVG$1(svg) - .then(function (img) { - ctx.drawImage(img, 0, 0); - var data = ctx.getImageData(0, 0, size, size).data; - ctx.fillStyle = 'red'; - ctx.fillRect(0, 0, size, size); - var node = document.createElement('div'); - node.style.backgroundImage = "url(" + greenImageSrc + ")"; - node.style.height = size + "px"; - // Firefox 55 does not render inline tags - return isGreenPixel(data) - ? 
loadSerializedSVG$1(createForeignObjectSVG(size, size, 0, 0, node)) - : Promise.reject(false); - }) - .then(function (img) { - ctx.drawImage(img, 0, 0); - // Edge does not render background-images - return isGreenPixel(ctx.getImageData(0, 0, size, size).data); - }) - .catch(function () { return false; }); - }; - var createForeignObjectSVG = function (width, height, x, y, node) { - var xmlns = 'http://www.w3.org/2000/svg'; - var svg = document.createElementNS(xmlns, 'svg'); - var foreignObject = document.createElementNS(xmlns, 'foreignObject'); - svg.setAttributeNS(null, 'width', width.toString()); - svg.setAttributeNS(null, 'height', height.toString()); - foreignObject.setAttributeNS(null, 'width', '100%'); - foreignObject.setAttributeNS(null, 'height', '100%'); - foreignObject.setAttributeNS(null, 'x', x.toString()); - foreignObject.setAttributeNS(null, 'y', y.toString()); - foreignObject.setAttributeNS(null, 'externalResourcesRequired', 'true'); - svg.appendChild(foreignObject); - foreignObject.appendChild(node); - return svg; - }; - var loadSerializedSVG$1 = function (svg) { - return new Promise(function (resolve, reject) { - var img = new Image(); - img.onload = function () { return resolve(img); }; - img.onerror = reject; - img.src = "data:image/svg+xml;charset=utf-8," + encodeURIComponent(new XMLSerializer().serializeToString(svg)); - }); - }; - var FEATURES = { - get SUPPORT_RANGE_BOUNDS() { - var value = testRangeBounds(document); - Object.defineProperty(FEATURES, 'SUPPORT_RANGE_BOUNDS', { value: value }); - return value; - }, - get SUPPORT_WORD_BREAKING() { - var value = FEATURES.SUPPORT_RANGE_BOUNDS && testIOSLineBreak(document); - Object.defineProperty(FEATURES, 'SUPPORT_WORD_BREAKING', { value: value }); - return value; - }, - get SUPPORT_SVG_DRAWING() { - var value = testSVG(document); - Object.defineProperty(FEATURES, 'SUPPORT_SVG_DRAWING', { value: value }); - return value; - }, - get SUPPORT_FOREIGNOBJECT_DRAWING() { - var value = typeof Array.from === 'function' && typeof window.fetch === 'function' - ? 
testForeignObject(document) - : Promise.resolve(false); - Object.defineProperty(FEATURES, 'SUPPORT_FOREIGNOBJECT_DRAWING', { value: value }); - return value; - }, - get SUPPORT_CORS_IMAGES() { - var value = testCORS(); - Object.defineProperty(FEATURES, 'SUPPORT_CORS_IMAGES', { value: value }); - return value; - }, - get SUPPORT_RESPONSE_TYPE() { - var value = testResponseType(); - Object.defineProperty(FEATURES, 'SUPPORT_RESPONSE_TYPE', { value: value }); - return value; - }, - get SUPPORT_CORS_XHR() { - var value = 'withCredentials' in new XMLHttpRequest(); - Object.defineProperty(FEATURES, 'SUPPORT_CORS_XHR', { value: value }); - return value; - }, - get SUPPORT_NATIVE_TEXT_SEGMENTATION() { - // eslint-disable-next-line @typescript-eslint/no-explicit-any - var value = !!(typeof Intl !== 'undefined' && Intl.Segmenter); - Object.defineProperty(FEATURES, 'SUPPORT_NATIVE_TEXT_SEGMENTATION', { value: value }); - return value; - } - }; - - var TextBounds = /** @class */ (function () { - function TextBounds(text, bounds) { - this.text = text; - this.bounds = bounds; - } - return TextBounds; - }()); - var parseTextBounds = function (context, value, styles, node) { - var textList = breakText(value, styles); - var textBounds = []; - var offset = 0; - textList.forEach(function (text) { - if (styles.textDecorationLine.length || text.trim().length > 0) { - if (FEATURES.SUPPORT_RANGE_BOUNDS) { - var clientRects = createRange(node, offset, text.length).getClientRects(); - if (clientRects.length > 1) { - var subSegments = segmentGraphemes(text); - var subOffset_1 = 0; - subSegments.forEach(function (subSegment) { - textBounds.push(new TextBounds(subSegment, Bounds.fromDOMRectList(context, createRange(node, subOffset_1 + offset, subSegment.length).getClientRects()))); - subOffset_1 += subSegment.length; - }); - } - else { - textBounds.push(new TextBounds(text, Bounds.fromDOMRectList(context, clientRects))); - } - } - else { - var replacementNode = node.splitText(text.length); - textBounds.push(new TextBounds(text, getWrapperBounds(context, node))); - node = replacementNode; - } - } - else if (!FEATURES.SUPPORT_RANGE_BOUNDS) { - node = node.splitText(text.length); - } - offset += text.length; - }); - return textBounds; - }; - var getWrapperBounds = function (context, node) { - var ownerDocument = node.ownerDocument; - if (ownerDocument) { - var wrapper = ownerDocument.createElement('html2canvaswrapper'); - wrapper.appendChild(node.cloneNode(true)); - var parentNode = node.parentNode; - if (parentNode) { - parentNode.replaceChild(wrapper, node); - var bounds = parseBounds(context, wrapper); - if (wrapper.firstChild) { - parentNode.replaceChild(wrapper.firstChild, wrapper); - } - return bounds; - } - } - return Bounds.EMPTY; - }; - var createRange = function (node, offset, length) { - var ownerDocument = node.ownerDocument; - if (!ownerDocument) { - throw new Error('Node has no owner document'); - } - var range = ownerDocument.createRange(); - range.setStart(node, offset); - range.setEnd(node, offset + length); - return range; - }; - var segmentGraphemes = function (value) { - if (FEATURES.SUPPORT_NATIVE_TEXT_SEGMENTATION) { - // eslint-disable-next-line @typescript-eslint/no-explicit-any - var segmenter = new Intl.Segmenter(void 0, { granularity: 'grapheme' }); - // eslint-disable-next-line @typescript-eslint/no-explicit-any - return Array.from(segmenter.segment(value)).map(function (segment) { return segment.segment; }); - } - return splitGraphemes(value); - }; - var segmentWords = function (value, 
styles) { - if (FEATURES.SUPPORT_NATIVE_TEXT_SEGMENTATION) { - // eslint-disable-next-line @typescript-eslint/no-explicit-any - var segmenter = new Intl.Segmenter(void 0, { - granularity: 'word' - }); - // eslint-disable-next-line @typescript-eslint/no-explicit-any - return Array.from(segmenter.segment(value)).map(function (segment) { return segment.segment; }); - } - return breakWords(value, styles); - }; - var breakText = function (value, styles) { - return styles.letterSpacing !== 0 ? segmentGraphemes(value) : segmentWords(value, styles); - }; - // https://drafts.csswg.org/css-text/#word-separator - var wordSeparators = [0x0020, 0x00a0, 0x1361, 0x10100, 0x10101, 0x1039, 0x1091]; - var breakWords = function (str, styles) { - var breaker = LineBreaker(str, { - lineBreak: styles.lineBreak, - wordBreak: styles.overflowWrap === "break-word" /* BREAK_WORD */ ? 'break-word' : styles.wordBreak - }); - var words = []; - var bk; - var _loop_1 = function () { - if (bk.value) { - var value = bk.value.slice(); - var codePoints = toCodePoints$1(value); - var word_1 = ''; - codePoints.forEach(function (codePoint) { - if (wordSeparators.indexOf(codePoint) === -1) { - word_1 += fromCodePoint$1(codePoint); - } - else { - if (word_1.length) { - words.push(word_1); - } - words.push(fromCodePoint$1(codePoint)); - word_1 = ''; - } - }); - if (word_1.length) { - words.push(word_1); - } - } - }; - while (!(bk = breaker.next()).done) { - _loop_1(); - } - return words; - }; - - var TextContainer = /** @class */ (function () { - function TextContainer(context, node, styles) { - this.text = transform(node.data, styles.textTransform); - this.textBounds = parseTextBounds(context, this.text, styles, node); - } - return TextContainer; - }()); - var transform = function (text, transform) { - switch (transform) { - case 1 /* LOWERCASE */: - return text.toLowerCase(); - case 3 /* CAPITALIZE */: - return text.replace(CAPITALIZE, capitalize); - case 2 /* UPPERCASE */: - return text.toUpperCase(); - default: - return text; - } - }; - var CAPITALIZE = /(^|\s|:|-|\(|\))([a-z])/g; - var capitalize = function (m, p1, p2) { - if (m.length > 0) { - return p1 + p2.toUpperCase(); - } - return m; - }; - - var ImageElementContainer = /** @class */ (function (_super) { - __extends(ImageElementContainer, _super); - function ImageElementContainer(context, img) { - var _this = _super.call(this, context, img) || this; - _this.src = img.currentSrc || img.src; - _this.intrinsicWidth = img.naturalWidth; - _this.intrinsicHeight = img.naturalHeight; - _this.context.cache.addImage(_this.src); - return _this; - } - return ImageElementContainer; - }(ElementContainer)); - - var CanvasElementContainer = /** @class */ (function (_super) { - __extends(CanvasElementContainer, _super); - function CanvasElementContainer(context, canvas) { - var _this = _super.call(this, context, canvas) || this; - _this.canvas = canvas; - _this.intrinsicWidth = canvas.width; - _this.intrinsicHeight = canvas.height; - return _this; - } - return CanvasElementContainer; - }(ElementContainer)); - - var SVGElementContainer = /** @class */ (function (_super) { - __extends(SVGElementContainer, _super); - function SVGElementContainer(context, img) { - var _this = _super.call(this, context, img) || this; - var s = new XMLSerializer(); - var bounds = parseBounds(context, img); - img.setAttribute('width', bounds.width + "px"); - img.setAttribute('height', bounds.height + "px"); - _this.svg = "data:image/svg+xml," + encodeURIComponent(s.serializeToString(img)); - 
_this.intrinsicWidth = img.width.baseVal.value; - _this.intrinsicHeight = img.height.baseVal.value; - _this.context.cache.addImage(_this.svg); - return _this; - } - return SVGElementContainer; - }(ElementContainer)); - - var LIElementContainer = /** @class */ (function (_super) { - __extends(LIElementContainer, _super); - function LIElementContainer(context, element) { - var _this = _super.call(this, context, element) || this; - _this.value = element.value; - return _this; - } - return LIElementContainer; - }(ElementContainer)); - - var OLElementContainer = /** @class */ (function (_super) { - __extends(OLElementContainer, _super); - function OLElementContainer(context, element) { - var _this = _super.call(this, context, element) || this; - _this.start = element.start; - _this.reversed = typeof element.reversed === 'boolean' && element.reversed === true; - return _this; - } - return OLElementContainer; - }(ElementContainer)); - - var CHECKBOX_BORDER_RADIUS = [ - { - type: 15 /* DIMENSION_TOKEN */, - flags: 0, - unit: 'px', - number: 3 - } - ]; - var RADIO_BORDER_RADIUS = [ - { - type: 16 /* PERCENTAGE_TOKEN */, - flags: 0, - number: 50 - } - ]; - var reformatInputBounds = function (bounds) { - if (bounds.width > bounds.height) { - return new Bounds(bounds.left + (bounds.width - bounds.height) / 2, bounds.top, bounds.height, bounds.height); - } - else if (bounds.width < bounds.height) { - return new Bounds(bounds.left, bounds.top + (bounds.height - bounds.width) / 2, bounds.width, bounds.width); - } - return bounds; - }; - var getInputValue = function (node) { - var value = node.type === PASSWORD ? new Array(node.value.length + 1).join('\u2022') : node.value; - return value.length === 0 ? node.placeholder || '' : value; - }; - var CHECKBOX = 'checkbox'; - var RADIO = 'radio'; - var PASSWORD = 'password'; - var INPUT_COLOR = 0x2a2a2aff; - var InputElementContainer = /** @class */ (function (_super) { - __extends(InputElementContainer, _super); - function InputElementContainer(context, input) { - var _this = _super.call(this, context, input) || this; - _this.type = input.type.toLowerCase(); - _this.checked = input.checked; - _this.value = getInputValue(input); - if (_this.type === CHECKBOX || _this.type === RADIO) { - _this.styles.backgroundColor = 0xdededeff; - _this.styles.borderTopColor = - _this.styles.borderRightColor = - _this.styles.borderBottomColor = - _this.styles.borderLeftColor = - 0xa5a5a5ff; - _this.styles.borderTopWidth = - _this.styles.borderRightWidth = - _this.styles.borderBottomWidth = - _this.styles.borderLeftWidth = - 1; - _this.styles.borderTopStyle = - _this.styles.borderRightStyle = - _this.styles.borderBottomStyle = - _this.styles.borderLeftStyle = - 1 /* SOLID */; - _this.styles.backgroundClip = [0 /* BORDER_BOX */]; - _this.styles.backgroundOrigin = [0 /* BORDER_BOX */]; - _this.bounds = reformatInputBounds(_this.bounds); - } - switch (_this.type) { - case CHECKBOX: - _this.styles.borderTopRightRadius = - _this.styles.borderTopLeftRadius = - _this.styles.borderBottomRightRadius = - _this.styles.borderBottomLeftRadius = - CHECKBOX_BORDER_RADIUS; - break; - case RADIO: - _this.styles.borderTopRightRadius = - _this.styles.borderTopLeftRadius = - _this.styles.borderBottomRightRadius = - _this.styles.borderBottomLeftRadius = - RADIO_BORDER_RADIUS; - break; - } - return _this; - } - return InputElementContainer; - }(ElementContainer)); - - var SelectElementContainer = /** @class */ (function (_super) { - __extends(SelectElementContainer, _super); - function 
SelectElementContainer(context, element) { - var _this = _super.call(this, context, element) || this; - var option = element.options[element.selectedIndex || 0]; - _this.value = option ? option.text || '' : ''; - return _this; - } - return SelectElementContainer; - }(ElementContainer)); - - var TextareaElementContainer = /** @class */ (function (_super) { - __extends(TextareaElementContainer, _super); - function TextareaElementContainer(context, element) { - var _this = _super.call(this, context, element) || this; - _this.value = element.value; - return _this; - } - return TextareaElementContainer; - }(ElementContainer)); - - var IFrameElementContainer = /** @class */ (function (_super) { - __extends(IFrameElementContainer, _super); - function IFrameElementContainer(context, iframe) { - var _this = _super.call(this, context, iframe) || this; - _this.src = iframe.src; - _this.width = parseInt(iframe.width, 10) || 0; - _this.height = parseInt(iframe.height, 10) || 0; - _this.backgroundColor = _this.styles.backgroundColor; - try { - if (iframe.contentWindow && - iframe.contentWindow.document && - iframe.contentWindow.document.documentElement) { - _this.tree = parseTree(context, iframe.contentWindow.document.documentElement); - // http://www.w3.org/TR/css3-background/#special-backgrounds - var documentBackgroundColor = iframe.contentWindow.document.documentElement - ? parseColor(context, getComputedStyle(iframe.contentWindow.document.documentElement).backgroundColor) - : COLORS.TRANSPARENT; - var bodyBackgroundColor = iframe.contentWindow.document.body - ? parseColor(context, getComputedStyle(iframe.contentWindow.document.body).backgroundColor) - : COLORS.TRANSPARENT; - _this.backgroundColor = isTransparent(documentBackgroundColor) - ? isTransparent(bodyBackgroundColor) - ? 
_this.styles.backgroundColor - : bodyBackgroundColor - : documentBackgroundColor; - } - } - catch (e) { } - return _this; - } - return IFrameElementContainer; - }(ElementContainer)); - - var LIST_OWNERS = ['OL', 'UL', 'MENU']; - var parseNodeTree = function (context, node, parent, root) { - for (var childNode = node.firstChild, nextNode = void 0; childNode; childNode = nextNode) { - nextNode = childNode.nextSibling; - if (isTextNode(childNode) && childNode.data.trim().length > 0) { - parent.textNodes.push(new TextContainer(context, childNode, parent.styles)); - } - else if (isElementNode(childNode)) { - if (isSlotElement(childNode) && childNode.assignedNodes) { - childNode.assignedNodes().forEach(function (childNode) { return parseNodeTree(context, childNode, parent, root); }); - } - else { - var container = createContainer(context, childNode); - if (container.styles.isVisible()) { - if (createsRealStackingContext(childNode, container, root)) { - container.flags |= 4 /* CREATES_REAL_STACKING_CONTEXT */; - } - else if (createsStackingContext(container.styles)) { - container.flags |= 2 /* CREATES_STACKING_CONTEXT */; - } - if (LIST_OWNERS.indexOf(childNode.tagName) !== -1) { - container.flags |= 8 /* IS_LIST_OWNER */; - } - parent.elements.push(container); - childNode.slot; - if (childNode.shadowRoot) { - parseNodeTree(context, childNode.shadowRoot, container, root); - } - else if (!isTextareaElement(childNode) && - !isSVGElement(childNode) && - !isSelectElement(childNode)) { - parseNodeTree(context, childNode, container, root); - } - } - } - } - } - }; - var createContainer = function (context, element) { - if (isImageElement(element)) { - return new ImageElementContainer(context, element); - } - if (isCanvasElement(element)) { - return new CanvasElementContainer(context, element); - } - if (isSVGElement(element)) { - return new SVGElementContainer(context, element); - } - if (isLIElement(element)) { - return new LIElementContainer(context, element); - } - if (isOLElement(element)) { - return new OLElementContainer(context, element); - } - if (isInputElement(element)) { - return new InputElementContainer(context, element); - } - if (isSelectElement(element)) { - return new SelectElementContainer(context, element); - } - if (isTextareaElement(element)) { - return new TextareaElementContainer(context, element); - } - if (isIFrameElement(element)) { - return new IFrameElementContainer(context, element); - } - return new ElementContainer(context, element); - }; - var parseTree = function (context, element) { - var container = createContainer(context, element); - container.flags |= 4 /* CREATES_REAL_STACKING_CONTEXT */; - parseNodeTree(context, element, container, container); - return container; - }; - var createsRealStackingContext = function (node, container, root) { - return (container.styles.isPositionedWithZIndex() || - container.styles.opacity < 1 || - container.styles.isTransformed() || - (isBodyElement(node) && root.styles.isTransparent())); - }; - var createsStackingContext = function (styles) { return styles.isPositioned() || styles.isFloating(); }; - var isTextNode = function (node) { return node.nodeType === Node.TEXT_NODE; }; - var isElementNode = function (node) { return node.nodeType === Node.ELEMENT_NODE; }; - var isHTMLElementNode = function (node) { - return isElementNode(node) && typeof node.style !== 'undefined' && !isSVGElementNode(node); - }; - var isSVGElementNode = function (element) { - return typeof element.className === 'object'; - }; - var isLIElement = function 
(node) { return node.tagName === 'LI'; };
- var isOLElement = function (node) { return node.tagName === 'OL'; };
- var isInputElement = function (node) { return node.tagName === 'INPUT'; };
- var isHTMLElement = function (node) { return node.tagName === 'HTML'; };
- var isSVGElement = function (node) { return node.tagName === 'svg'; };
- var isBodyElement = function (node) { return node.tagName === 'BODY'; };
- var isCanvasElement = function (node) { return node.tagName === 'CANVAS'; };
- var isVideoElement = function (node) { return node.tagName === 'VIDEO'; };
- var isImageElement = function (node) { return node.tagName === 'IMG'; };
- var isIFrameElement = function (node) { return node.tagName === 'IFRAME'; };
- var isStyleElement = function (node) { return node.tagName === 'STYLE'; };
- var isScriptElement = function (node) { return node.tagName === 'SCRIPT'; };
- var isTextareaElement = function (node) { return node.tagName === 'TEXTAREA'; };
- var isSelectElement = function (node) { return node.tagName === 'SELECT'; };
- var isSlotElement = function (node) { return node.tagName === 'SLOT'; };
- // https://html.spec.whatwg.org/multipage/custom-elements.html#valid-custom-element-name
- var isCustomElement = function (node) { return node.tagName.indexOf('-') > 0; };
-
- var CounterState = /** @class */ (function () {
- function CounterState() {
- this.counters = {};
- }
- CounterState.prototype.getCounterValue = function (name) {
- var counter = this.counters[name];
- if (counter && counter.length) {
- return counter[counter.length - 1];
- }
- return 1;
- };
- CounterState.prototype.getCounterValues = function (name) {
- var counter = this.counters[name];
- return counter ? counter : [];
- };
- CounterState.prototype.pop = function (counters) {
- var _this = this;
- counters.forEach(function (counter) { return _this.counters[counter].pop(); });
- };
- CounterState.prototype.parse = function (style) {
- var _this = this;
- var counterIncrement = style.counterIncrement;
- var counterReset = style.counterReset;
- var canReset = true;
- if (counterIncrement !== null) {
- counterIncrement.forEach(function (entry) {
- var counter = _this.counters[entry.counter];
- if (counter && entry.increment !== 0) {
- canReset = false;
- if (!counter.length) {
- counter.push(1);
- }
- counter[Math.max(0, counter.length - 1)] += entry.increment;
- }
- });
- }
- var counterNames = [];
- if (canReset) {
- counterReset.forEach(function (entry) {
- var counter = _this.counters[entry.counter];
- counterNames.push(entry.counter);
- if (!counter) {
- counter = _this.counters[entry.counter] = [];
- }
- counter.push(entry.reset);
- });
- }
- return counterNames;
- };
- return CounterState;
- }());
- var ROMAN_UPPER = {
- integers: [1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1],
- values: ['M', 'CM', 'D', 'CD', 'C', 'XC', 'L', 'XL', 'X', 'IX', 'V', 'IV', 'I']
- };
- var ARMENIAN = {
- integers: [
- 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 900, 800, 700, 600, 500, 400, 300, 200, 100, 90, 80, 70,
- 60, 50, 40, 30, 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1
- ],
- values: [
- 'Ք',
- 'Փ',
- 'Ւ',
- 'Ց',
- 'Ր',
- 'Տ',
- 'Վ',
- 'Ս',
- 'Ռ',
- 'Ջ',
- 'Պ',
- 'Չ',
- 'Ո',
- 'Շ',
- 'Ն',
- 'Յ',
- 'Մ',
- 'Ճ',
- 'Ղ',
- 'Ձ',
- 'Հ',
- 'Կ',
- 'Ծ',
- 'Խ',
- 'Լ',
- 'Ի',
- 'Ժ',
- 'Թ',
- 'Ը',
- 'Է',
- 'Զ',
- 'Ե',
- 'Դ',
- 'Գ',
- 'Բ',
- 'Ա'
- ]
- };
- var HEBREW = {
- integers: [
- 10000, 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 400, 300, 200, 100, 90, 80, 70, 60, 50, 40, 30, 20,
- 19, 18, 17, 16, 15, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1
- ],
- values: [
- 'י׳',
- 'ט׳',
- 'ח׳',
- 'ז׳',
- 'ו׳',
- 'ה׳',
- 'ד׳',
- 'ג׳',
- 'ב׳',
- 'א׳',
- 'ת',
- 'ש',
- 'ר',
- 'ק',
- 'צ',
- 'פ',
- 'ע',
- 'ס',
- 'נ',
- 'מ',
- 'ל',
- 'כ',
- 'יט',
- 'יח',
- 'יז',
- 'טז',
- 'טו',
- 'י',
- 'ט',
- 'ח',
- 'ז',
- 'ו',
- 'ה',
- 'ד',
- 'ג',
- 'ב',
- 'א'
- ]
- };
- var GEORGIAN = {
- integers: [
- 10000, 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 900, 800, 700, 600, 500, 400, 300, 200, 100, 90,
- 80, 70, 60, 50, 40, 30, 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1
- ],
- values: [
- 'ჵ',
- 'ჰ',
- 'ჯ',
- 'ჴ',
- 'ხ',
- 'ჭ',
- 'წ',
- 'ძ',
- 'ც',
- 'ჩ',
- 'შ',
- 'ყ',
- 'ღ',
- 'ქ',
- 'ფ',
- 'ჳ',
- 'ტ',
- 'ს',
- 'რ',
- 'ჟ',
- 'პ',
- 'ო',
- 'ჲ',
- 'ნ',
- 'მ',
- 'ლ',
- 'კ',
- 'ი',
- 'თ',
- 'ჱ',
- 'ზ',
- 'ვ',
- 'ე',
- 'დ',
- 'გ',
- 'ბ',
- 'ა'
- ]
- };
- var createAdditiveCounter = function (value, min, max, symbols, fallback, suffix) {
- if (value < min || value > max) {
- return createCounterText(value, fallback, suffix.length > 0);
- }
- return (symbols.integers.reduce(function (string, integer, index) {
- while (value >= integer) {
- value -= integer;
- string += symbols.values[index];
- }
- return string;
- }, '') + suffix);
- };
- var createCounterStyleWithSymbolResolver = function (value, codePointRangeLength, isNumeric, resolver) {
- var string = '';
- do {
- if (!isNumeric) {
- value--;
- }
- string = resolver(value) + string;
- value /= codePointRangeLength;
- } while (value * codePointRangeLength >= codePointRangeLength);
- return string;
- };
- var createCounterStyleFromRange = function (value, codePointRangeStart, codePointRangeEnd, isNumeric, suffix) {
- var codePointRangeLength = codePointRangeEnd - codePointRangeStart + 1;
- return ((value < 0 ? '-' : '') +
- (createCounterStyleWithSymbolResolver(Math.abs(value), codePointRangeLength, isNumeric, function (codePoint) {
- return fromCodePoint$1(Math.floor(codePoint % codePointRangeLength) + codePointRangeStart);
- }) +
- suffix));
- };
- var createCounterStyleFromSymbols = function (value, symbols, suffix) {
- if (suffix === void 0) { suffix = '. '; }
- var codePointRangeLength = symbols.length;
- return (createCounterStyleWithSymbolResolver(Math.abs(value), codePointRangeLength, false, function (codePoint) { return symbols[Math.floor(codePoint % codePointRangeLength)]; }) + suffix);
- };
- var CJK_ZEROS = 1 << 0;
- var CJK_TEN_COEFFICIENTS = 1 << 1;
- var CJK_TEN_HIGH_COEFFICIENTS = 1 << 2;
- var CJK_HUNDRED_COEFFICIENTS = 1 << 3;
- var createCJKCounter = function (value, numbers, multipliers, negativeSign, suffix, flags) {
- if (value < -9999 || value > 9999) {
- return createCounterText(value, 4 /* CJK_DECIMAL */, suffix.length > 0);
- }
- var tmp = Math.abs(value);
- var string = suffix;
- if (tmp === 0) {
- return numbers[0] + string;
- }
- for (var digit = 0; tmp > 0 && digit <= 4; digit++) {
- var coefficient = tmp % 10;
- if (coefficient === 0 && contains(flags, CJK_ZEROS) && string !== '') {
- string = numbers[coefficient] + string;
- }
- else if (coefficient > 1 ||
- (coefficient === 1 && digit === 0) ||
- (coefficient === 1 && digit === 1 && contains(flags, CJK_TEN_COEFFICIENTS)) ||
- (coefficient === 1 && digit === 1 && contains(flags, CJK_TEN_HIGH_COEFFICIENTS) && value > 100) ||
- (coefficient === 1 && digit > 1 && contains(flags, CJK_HUNDRED_COEFFICIENTS))) {
- string = numbers[coefficient] + (digit > 0 ? multipliers[digit - 1] : '') + string;
- }
- else if (coefficient === 1 && digit > 0) {
- string = multipliers[digit - 1] + string;
- }
- tmp = Math.floor(tmp / 10);
- }
- return (value < 0 ? negativeSign : '') + string;
- };
- var CHINESE_INFORMAL_MULTIPLIERS = '十百千萬';
- var CHINESE_FORMAL_MULTIPLIERS = '拾佰仟萬';
- var JAPANESE_NEGATIVE = 'マイナス';
- var KOREAN_NEGATIVE = '마이너스';
- var createCounterText = function (value, type, appendSuffix) {
- var defaultSuffix = appendSuffix ? '. ' : '';
- var cjkSuffix = appendSuffix ? '、' : '';
- var koreanSuffix = appendSuffix ? ', ' : '';
- var spaceSuffix = appendSuffix ? ' ' : '';
- switch (type) {
- case 0 /* DISC */:
- return '•' + spaceSuffix;
- case 1 /* CIRCLE */:
- return '◦' + spaceSuffix;
- case 2 /* SQUARE */:
- return '◾' + spaceSuffix;
- case 5 /* DECIMAL_LEADING_ZERO */:
- var string = createCounterStyleFromRange(value, 48, 57, true, defaultSuffix);
- return string.length < 4 ? "0" + string : string;
- case 4 /* CJK_DECIMAL */:
- return createCounterStyleFromSymbols(value, '〇一二三四五六七八九', cjkSuffix);
- case 6 /* LOWER_ROMAN */:
- return createAdditiveCounter(value, 1, 3999, ROMAN_UPPER, 3 /* DECIMAL */, defaultSuffix).toLowerCase();
- case 7 /* UPPER_ROMAN */:
- return createAdditiveCounter(value, 1, 3999, ROMAN_UPPER, 3 /* DECIMAL */, defaultSuffix);
- case 8 /* LOWER_GREEK */:
- return createCounterStyleFromRange(value, 945, 969, false, defaultSuffix);
- case 9 /* LOWER_ALPHA */:
- return createCounterStyleFromRange(value, 97, 122, false, defaultSuffix);
- case 10 /* UPPER_ALPHA */:
- return createCounterStyleFromRange(value, 65, 90, false, defaultSuffix);
- case 11 /* ARABIC_INDIC */:
- return createCounterStyleFromRange(value, 1632, 1641, true, defaultSuffix);
- case 12 /* ARMENIAN */:
- case 49 /* UPPER_ARMENIAN */:
- return createAdditiveCounter(value, 1, 9999, ARMENIAN, 3 /* DECIMAL */, defaultSuffix);
- case 35 /* LOWER_ARMENIAN */:
- return createAdditiveCounter(value, 1, 9999, ARMENIAN, 3 /* DECIMAL */, defaultSuffix).toLowerCase();
- case 13 /* BENGALI */:
- return createCounterStyleFromRange(value, 2534, 2543, true, defaultSuffix);
- case 14 /* CAMBODIAN */:
- case 30 /* KHMER */:
- return createCounterStyleFromRange(value, 6112, 6121, true, defaultSuffix);
- case 15 /* CJK_EARTHLY_BRANCH */:
- return createCounterStyleFromSymbols(value, '子丑寅卯辰巳午未申酉戌亥', cjkSuffix);
- case 16 /* CJK_HEAVENLY_STEM */:
- return createCounterStyleFromSymbols(value, '甲乙丙丁戊己庚辛壬癸', cjkSuffix);
- case 17 /* CJK_IDEOGRAPHIC */:
- case 48 /* TRAD_CHINESE_INFORMAL */:
- return createCJKCounter(value, '零一二三四五六七八九', CHINESE_INFORMAL_MULTIPLIERS, '負', cjkSuffix, CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS);
- case 47 /* TRAD_CHINESE_FORMAL */:
- return createCJKCounter(value, '零壹貳參肆伍陸柒捌玖', CHINESE_FORMAL_MULTIPLIERS, '負', cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS);
- case 42 /* SIMP_CHINESE_INFORMAL */:
- return createCJKCounter(value, '零一二三四五六七八九', CHINESE_INFORMAL_MULTIPLIERS, '负', cjkSuffix, CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS);
- case 41 /* SIMP_CHINESE_FORMAL */:
- return createCJKCounter(value, '零壹贰叁肆伍陆柒捌玖', CHINESE_FORMAL_MULTIPLIERS, '负', cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS);
- case 26 /* JAPANESE_INFORMAL */:
- return createCJKCounter(value, '〇一二三四五六七八九', '十百千万', JAPANESE_NEGATIVE, cjkSuffix, 0);
- case 25 /* JAPANESE_FORMAL */:
- return createCJKCounter(value, '零壱弐参四伍六七八九', '拾百千万', JAPANESE_NEGATIVE, cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS);
- case 31 /* KOREAN_HANGUL_FORMAL */:
- return createCJKCounter(value, '영일이삼사오육칠팔구', '십백천만', KOREAN_NEGATIVE, koreanSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS);
- case 33 /* KOREAN_HANJA_INFORMAL */:
- return createCJKCounter(value, '零一二三四五六七八九', '十百千萬', KOREAN_NEGATIVE, koreanSuffix, 0);
- case 32 /* KOREAN_HANJA_FORMAL */:
- return createCJKCounter(value, '零壹貳參四五六七八九', '拾百千', KOREAN_NEGATIVE, koreanSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS);
- case 18 /* DEVANAGARI */:
- return createCounterStyleFromRange(value, 0x966, 0x96f, true, defaultSuffix);
- case 20 /* GEORGIAN */:
- return createAdditiveCounter(value, 1, 19999, GEORGIAN, 3 /* DECIMAL */, defaultSuffix);
- case 21 /* GUJARATI */:
- return createCounterStyleFromRange(value, 0xae6, 0xaef, true, defaultSuffix);
- case 22 /* GURMUKHI */:
- return createCounterStyleFromRange(value, 0xa66, 0xa6f, true, defaultSuffix);
- case 22 /* HEBREW */:
- return createAdditiveCounter(value, 1, 10999, HEBREW, 3 /* DECIMAL */, defaultSuffix);
- case 23 /* HIRAGANA */:
- return createCounterStyleFromSymbols(value, 'あいうえおかきくけこさしすせそたちつてとなにぬねのはひふへほまみむめもやゆよらりるれろわゐゑをん');
- case 24 /* HIRAGANA_IROHA */:
- return createCounterStyleFromSymbols(value, 'いろはにほへとちりぬるをわかよたれそつねならむうゐのおくやまけふこえてあさきゆめみしゑひもせす');
- case 27 /* KANNADA */:
- return createCounterStyleFromRange(value, 0xce6, 0xcef, true, defaultSuffix);
- case 28 /* KATAKANA */:
- return createCounterStyleFromSymbols(value, 'アイウエオカキクケコサシスセソタチツテトナニヌネノハヒフヘホマミムメモヤユヨラリルレロワヰヱヲン', cjkSuffix);
- case 29 /* KATAKANA_IROHA */:
- return createCounterStyleFromSymbols(value, 'イロハニホヘトチリヌルヲワカヨタレソツネナラムウヰノオクヤマケフコエテアサキユメミシヱヒモセス', cjkSuffix);
- case 34 /* LAO */:
- return createCounterStyleFromRange(value, 0xed0, 0xed9, true, defaultSuffix);
- case 37 /* MONGOLIAN */:
- return createCounterStyleFromRange(value, 0x1810, 0x1819, true, defaultSuffix);
- case 38 /* MYANMAR */:
- return createCounterStyleFromRange(value, 0x1040, 0x1049, true, defaultSuffix);
- case 39 /* ORIYA */:
- return createCounterStyleFromRange(value, 0xb66, 0xb6f, true, defaultSuffix);
- case 40 /* PERSIAN */:
- return createCounterStyleFromRange(value, 0x6f0, 0x6f9, true, defaultSuffix);
- case 43 /* TAMIL */:
- return createCounterStyleFromRange(value, 0xbe6, 0xbef, true, defaultSuffix);
- case 44 /* TELUGU */:
- return createCounterStyleFromRange(value, 0xc66, 0xc6f, true, defaultSuffix);
- case 45 /* THAI */:
- return createCounterStyleFromRange(value, 0xe50, 0xe59, true, defaultSuffix);
- case 46 /* TIBETAN */:
- return createCounterStyleFromRange(value, 0xf20, 0xf29, true, defaultSuffix);
- case 3 /* DECIMAL */:
- default:
- return createCounterStyleFromRange(value, 48, 57, true, defaultSuffix);
- }
- };
-
- var IGNORE_ATTRIBUTE = 'data-html2canvas-ignore';
- var DocumentCloner = /** @class */ (function () {
- function DocumentCloner(context, element, options) {
- this.context = context;
- this.options = options;
- this.scrolledElements = [];
- this.referenceElement = element;
- this.counters = new CounterState();
- this.quoteDepth = 0;
- if (!element.ownerDocument) {
- throw new Error('Cloned element does not have an owner document');
- }
- this.documentElement = this.cloneNode(element.ownerDocument.documentElement, false);
- }
- DocumentCloner.prototype.toIFrame = function (ownerDocument, windowSize) {
- var _this = this;
- var iframe = createIFrameContainer(ownerDocument, windowSize);
- if (!iframe.contentWindow) {
- return Promise.reject("Unable to find iframe window");
- }
- var scrollX = ownerDocument.defaultView.pageXOffset;
- var scrollY = ownerDocument.defaultView.pageYOffset;
- var cloneWindow = iframe.contentWindow;
- var documentClone = cloneWindow.document;
- /* Chrome doesn't detect relative background-images assigned in inline