diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Bobby Fischer Teaches Chess How to Download the EPUB Version from Forum 6.md b/spaces/1gistliPinn/ChatGPT4/Examples/Bobby Fischer Teaches Chess How to Download the EPUB Version from Forum 6.md deleted file mode 100644 index 03e4e7c87bd85285421cdaebd0bea1b988228389..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Bobby Fischer Teaches Chess How to Download the EPUB Version from Forum 6.md +++ /dev/null @@ -1,6 +0,0 @@ -

bobby fischer teaches chess epub download forum 6


Download Zip ⚹⚹⚹ https://imgfil.com/2uy0pz



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Design Transformer Indrajit Dasgupta Pdf Download [HOT].md b/spaces/1gistliPinn/ChatGPT4/Examples/Design Transformer Indrajit Dasgupta Pdf Download [HOT].md deleted file mode 100644 index bf31f1a3fb676526c4ab027d411a9447975e7c9b..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Design Transformer Indrajit Dasgupta Pdf Download [HOT].md +++ /dev/null @@ -1,6 +0,0 @@ -

design transformer indrajit dasgupta pdf download


Download Filehttps://imgfil.com/2uxXzE



-
-tidisvirbwork/design-transformer-indrajit-dasgupta-pdf-download ... This repository has no tags. If you need specific material, you'll have to find it elsewhere. Here we only briefly label the parts we particularly like. If you want to learn more about these design techniques and how they help create holistic design, find Jim Quick's book, Design Thinking for Successful Product Development. And don't forget "Research Design and Design Analysis": the book is for those who want to better understand how academics and 8a78ff9644
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/DigiDNA IMazing 2.3.5 With Crack TOP.md b/spaces/1gistliPinn/ChatGPT4/Examples/DigiDNA IMazing 2.3.5 With Crack TOP.md deleted file mode 100644 index 57ed42acf259ba42b514feed9f2de4cf3d628710..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/DigiDNA IMazing 2.3.5 With Crack TOP.md +++ /dev/null @@ -1,52 +0,0 @@ - -

DigiDNA iMazing 2.3.5 With Crack: How to Transfer and Manage Your iOS Data Easily

-

If you are looking for a reliable and powerful software to transfer and manage your iOS data, you may want to check out DigiDNA iMazing 2.3.5 with crack. This software allows you to connect your iPhone, iPad, or iPod touch to your computer and access your data without iTunes or iCloud. You can also backup, restore, and clone your devices, as well as transfer music, photos, videos, messages, contacts, and more.

-

In this article, we will show you how to download and install DigiDNA iMazing 2.3.5 with crack, as well as some of its key features and benefits.

-

DigiDNA iMazing 2.3.5 With Crack


Download · https://imgfil.com/2uy1dO



-

How to Download and Install DigiDNA iMazing 2.3.5 With Crack

-

To download and install DigiDNA iMazing 2.3.5 with crack, you need to follow these steps:

-
    -
  1. Click on the link below to download the setup file and the crack file.
  2. -
  3. Extract the files using WinRAR or any other extraction tool.
  4. -
  5. Run the setup file and follow the instructions to install the software.
  6. -
  7. Copy the crack file and paste it into the installation folder.
  8. -
  9. Launch the software and enjoy its full features.
  10. -
-

Download DigiDNA iMazing 2.3.5 With Crack Here

-

Key Features and Benefits of DigiDNA iMazing 2.3.5 With Crack

-

DigiDNA iMazing 2.3.5 with crack is a versatile and user-friendly software that offers many features and benefits for iOS users. Some of them are:

- -

Conclusion

-

DigiDNA iMazing 2.3.5 with crack is a powerful and reliable software that can help you transfer and manage your iOS data easily and efficiently. It is compatible with Windows and Mac OS, and supports all iOS devices from iPhone 4s to iPhone Xs Max, iPad 1 to iPad Pro, and iPod touch 1 to iPod touch 6. If you want to try this software for free, you can download it from the link below.

-

Download DigiDNA iMazing 2.3.5 With Crack Here

- -

How to Use DigiDNA iMazing 2.3.5 With Crack

-

Using DigiDNA iMazing 2.3.5 with crack is very easy and intuitive. Here are some steps to get you started:

-
    -
  1. Connect your iOS device to your computer using a USB cable or Wi-Fi.
  2. -
  3. Launch DigiDNA iMazing 2.3.5 and select your device from the sidebar.
  4. -
  5. Choose the action you want to perform from the main window or the toolbar.
  6. -
  7. Follow the on-screen instructions and confirm your choices.
  8. -
  9. Wait for the process to complete and check the results.
  10. -
-

You can also customize the settings and preferences of DigiDNA iMazing 2.3.5 from the menu bar. You can change the backup location, backup encryption, backup frequency, device name, device icon, and more.

-

Frequently Asked Questions About DigiDNA iMazing 2.3.5 With Crack

-

Here are some common questions and answers about DigiDNA iMazing 2.3.5 with crack:

-

-

Is DigiDNA iMazing 2.3.5 with crack safe to use?

-

DigiDNA iMazing 2.3.5 with crack is safe to use as long as you download it from a trusted source and scan it with an antivirus program before installing it. However, using cracked software is illegal and may violate the terms and conditions of the original software developer. We do not recommend or endorse using cracked software and we are not responsible for any consequences that may arise from doing so.

-

Does DigiDNA iMazing 2.3.5 with crack require an internet connection?

-

DigiDNA iMazing 2.3.5 with crack does not require an internet connection to function properly. However, you may need an internet connection to download and install the software, update the software, or access some online features such as iCloud or App Store.

-

Can I use DigiDNA iMazing 2.3.5 with crack on multiple devices?

-

DigiDNA iMazing 2.3.5 with crack can be used on multiple devices as long as they are connected to the same computer. You can also transfer your license to another computer by deactivating it on the old computer and activating it on the new computer.

d5da3c52bf
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Durood E Tanjeena Pdf Free 485.md b/spaces/1gistliPinn/ChatGPT4/Examples/Durood E Tanjeena Pdf Free 485.md deleted file mode 100644 index 2f382cfaba261de0f729d3849f98412e594a04f2..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Durood E Tanjeena Pdf Free 485.md +++ /dev/null @@ -1,38 +0,0 @@ -
-

How to Download and Recite Durood E Tanjeena PDF for Free

-

Durood E Tanjeena is a powerful prayer that is often recited by Muslims in times of difficulty. It is said to be very beneficial and is believed to bring blessings and protection from Allah. In this article, we will show you how to download and recite Durood E Tanjeena PDF for free.

-

What is Durood E Tanjeena?

-

Durood E Tanjeena is a supplication that can be translated to "Praise be to Allah, the Highest." It is a short prayer that consists of 11 words in Arabic. The prayer is as follows:

-

durood e tanjeena pdf free 485


Download File ……… https://imgfil.com/2uy0Pm



-
-

اللهم صل على محمد وعلى آل محمد كما صليت على إبراهيم وعلى آل إبراهيم إنك حميد مجيد

-

Allahumma salli ala Muhammad wa ala ali Muhammad kama sallayta ala Ibrahim wa ala ali Ibrahim innaka Hamidun Majid

-
-

The meaning of Durood E Tanjeena can be translated to "O Allah, send blessings upon Muhammad and upon the family of Muhammad, as You sent blessings upon Ibrahim and upon the family of Ibrahim. Verily, You are Praiseworthy and Glorious."

-

How to Download Durood E Tanjeena PDF for Free?

-

If you want to download Durood E Tanjeena PDF for free, you can use one of the following links:

- -

These links will allow you to download Durood E Tanjeena PDF with clear Arabic text and Urdu translation. You can also read online or print the PDF file for your convenience.

-

How to Recite Durood E Tanjeena?

-

There are different ways to recite Durood E Tanjeena, depending on your intention and situation. Here are some common methods:

- -

When reciting Durood E Tanjeena, you should have faith and sincerity in your heart. You should also send salutations upon the Prophet Muhammad (peace be upon him) before and after reciting it.

-

What are the Benefits of Durood E Tanjeena?

-

Durood E Tanjeena has many benefits for those who recite it regularly. Some of these benefits include:

-

-

-

top 25 telugu dialogue free download, telugu ever green punch dialogues free download,best punch telugu dialogues download, telugu dialogue ringtones (adsbygoogle = window.adsbygoogle || []).push();

  • Bujjigadu Ringtone


    Download


    Your browser does not support the audio element.
  • Desa Muduru Ringtone


    Download


    Your browser does not support the audio element.
  • Dookudu Dialogue Ringtone


    Download


    Your browser does not support the audio element.
  • Geetanjali Ringtone


    Download


    Your browser does not support the audio element.
  • Indra Dialogue Ringtone


    Download


    Your browser does not support the audio element.
  • Jalsa Dialogue Ringtone


    Download


    Your browser does not support the audio element.
  • Kumara Samy Ringtone


    Download


    Your browser does not support the audio element.
  • Leader Dialogue Ringtone


    Download


    Your browser does not support the audio element.
  • Narasimha Dialogue Ringtone


    Download


    Your browser does not support the audio element.
  • NTR Dialogue Ringtone


    Download


    Your browser does not support the audio element.
  • Punch Dialogue Ringtone


    Download


    Your browser does not support the audio element.
  • Rebel Dialogue Ringtone


    Download


    Your browser does not support the audio element.
  • Sheeva Ringtone


    Download


    Your browser does not support the audio element.
  • Soundu Seyyathu Ringtone


    Download


    Your browser does not support the audio element.
  • Telugu Dialogue Ringtone


    Download


    Your browser does not support the audio element.
  • -

    Best Tamil Dialogues Free Download


    DOWNLOAD ☆☆☆ https://ssurll.com/2uzyQp



    -

    vadivelu famous comedy dialogue ringtone free download,vadivelu famous comedy mobile dialogue ringtones free download,vadivelu famous comedy punch dialogues ringtones free download,vadivelu famous comedy love dialogues ringtones free download. (adsbygoogle = window.adsbygoogle || []).push();

  • Vadivel Song LaLe Lala lala Dialogue Ringtone


    Download


    Your browser does not support the audio element.
  • Vadivelu Kaipillai Dialogue Ringtone


    Download


    Your browser does not support the audio element.
  • Vadivelu Periya Rowdy Dialogue Ringtone


    Download


    Your browser does not support the audio element.
  • Vadivelu Stamina Dialogue Ringtone


    Download


    Your browser does not support the audio element.
  • Vadivelu Winner Dialogue Ringtone


    Download


    Your browser does not support the audio element.
  • Varum Anna Varadhu Dialogue Ringtone


    Download


    Your browser does not support the audio element.
  • Vela Solliye Kolraanga Dialogue Ringtone


    Download


    Your browser does not support the audio element.
  • Vela Vetti Vadivelu Dialogue Ringtone


    Download


    Your browser does not support the audio element.
  • Venneer Kududa Vadivelu Dialogue Ringtone


    Download


    Your browser does not support the audio element.
  • aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Betternet VPN For Windows Premium V8.3.5 Download LINK.md b/spaces/contluForse/HuggingGPT/assets/Betternet VPN For Windows Premium V8.3.5 Download LINK.md deleted file mode 100644 index 76d38fae0476fa37e5fa7175ee53cb836a3af658..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Betternet VPN For Windows Premium V8.3.5 Download LINK.md +++ /dev/null @@ -1,5 +0,0 @@ - -

    One should always opt for a VPN service provider which offers unblocked VPN connectivity along with the one-click installation of this mod on HTC Touch. As this application is designed for downloading and running on the rooted platform, one should ensure that the VPN server on your phone is not vulnerable to attack due to cracked apk or malicious programs. Most VPNs protect against hacking and break your windows password by installing the latest update on their website.

    -

    Betternet VPN for Windows Premium v8.3.5 download


    Download Zip ★★★★★ https://ssurll.com/2uzvHl



    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Carnes Lord Aristotle Politics Pdf Download.md b/spaces/contluForse/HuggingGPT/assets/Carnes Lord Aristotle Politics Pdf Download.md deleted file mode 100644 index 6def4a45c31369abc38330d79d00cc24499c2f6a..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Carnes Lord Aristotle Politics Pdf Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Carnes Lord Aristotle Politics Pdf Download


    Download File ✸✸✸ https://ssurll.com/2uzyTd



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/contluForse/HuggingGPT/assets/Cytomic - The Glue 1.2.1 VST.RTAS WIN.OSX x86 x64 20 Learn How to Use this Plugin in Different Genres and Styles.md b/spaces/contluForse/HuggingGPT/assets/Cytomic - The Glue 1.2.1 VST.RTAS WIN.OSX x86 x64 20 Learn How to Use this Plugin in Different Genres and Styles.md deleted file mode 100644 index e65225a31a731d33afd1c40b283a41a73419bf6d..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Cytomic - The Glue 1.2.1 VST.RTAS WIN.OSX x86 x64 20 Learn How to Use this Plugin in Different Genres and Styles.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Cytomic - The Glue 1.2.1 VST.RTAS WIN.OSX x86 x64 20


    Download Filehttps://ssurll.com/2uzxOM



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/contluForse/HuggingGPT/assets/Download English subtitle of Trap movie - The ultimate resource for finding and downloading the subtitle.md b/spaces/contluForse/HuggingGPT/assets/Download English subtitle of Trap movie - The ultimate resource for finding and downloading the subtitle.md deleted file mode 100644 index 9d9d5281be1a8f9c8e83a53dbd1ab0f1d3a1870f..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Download English subtitle of Trap movie - The ultimate resource for finding and downloading the subtitle.md +++ /dev/null @@ -1,7 +0,0 @@ -
    -

    YoTurkish is the most favorite website for watching turkish series with english subtitles for free online. With a large database and great features, we are confident that YoTurkish.com & YoTurkish.app is the best website that you just can't miss.

    -

    Some countries show English-language movies and TV shows with subtitles.
    Other countries dub them.
    The former kind seem to have much better knowledge of English by their population, as measured e.g. by being able to talk to a random stranger on the street in English.

    -

    download english subtitle of Trap movie


    Download Zip --->>> https://ssurll.com/2uzwK0



    -

    Watch online streaming dan Nonton film Time Trap 2017 BluRay 480p & 720p mp4 mkv hindi dubbed, eng sub, sub indo, nonton online streaming film Time Trap 2017 full hd movies free download film gratis via google drive, openload, uptobox, upfile direct link download on index movies, world4ufree, bolly4u, downloadhub, tamilrockers, rarbg, torrent, yify, eztv, erosnow, mkvcage, pahe.in, ganool, filmywap, bioskopkeren, layarkaca21, indoxxi, dunia21, Lk21, 123movies, 300mbfilms, subscene, 300mb movies, Tv21, Televisi21, 9xmovie, khatrimaza, moviesbaba, hdmovie8, Mkvking, Mkvking.com .

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Download Free Directx 7.0 For Windows 7 Ultimate 64 Bit HOT.md b/spaces/contluForse/HuggingGPT/assets/Download Free Directx 7.0 For Windows 7 Ultimate 64 Bit HOT.md deleted file mode 100644 index d380e0ff7537188c7ef31ab20a1c5554f02b2c56..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Download Free Directx 7.0 For Windows 7 Ultimate 64 Bit HOT.md +++ /dev/null @@ -1,46 +0,0 @@ - -

    Download Free DirectX 7.0 for Windows 7 Ultimate 64 Bit

    -

    Are you looking for a way to enhance your gaming and multimedia experience on your Windows 7 Ultimate 64 bit system? If so, you might want to download and install DirectX 7.0, a collection of application programming interfaces (APIs) that enable high-performance graphics, sound, and input for various applications.

    -

    download free directx 7.0 for windows 7 ultimate 64 bit


    Download ⚹⚹⚹ https://ssurll.com/2uzxoB



    -

    DirectX 7.0 was released in 1999 by Microsoft and it introduced many new features and improvements, such as hardware acceleration, Direct3D 7.0, DirectDraw 7.0, DirectSound 3D, DirectMusic, DirectInput, and DirectPlay. DirectX 7.0 also supported older versions of DirectX, such as DirectX 3 and DirectX 5.

    -

    However, DirectX 7.0 is not included in Windows 7 Ultimate 64 bit by default. You need to download and install it manually from a reliable source. In this article, we will show you how to download free DirectX 7.0 for Windows 7 Ultimate 64 bit and how to install it correctly.

    -

    Why Download Free DirectX 7.0 for Windows 7 Ultimate 64 Bit?

    -

    DirectX 7.0 is a legacy version of DirectX that is still compatible with some older games and applications that require it. By downloading free DirectX 7.0 for Windows 7 Ultimate 64 bit, you can enjoy these games and applications without any issues or errors.

    -

    Some of the benefits of downloading free DirectX 7.0 for Windows 7 Ultimate 64 bit are:

    - -

    How to Download Free DirectX 7.0 for Windows 7 Ultimate 64 Bit

    -

    There are many websites that offer free downloads of DirectX 7.0 for Windows 7 Ultimate 64 bit, but not all of them are safe and trustworthy. Some of them may contain viruses, malware, or unwanted programs that can harm your computer or compromise your privacy.

    -

    -

    Therefore, we recommend you to download free DirectX 7.0 for Windows 7 Ultimate 64 bit from the official Microsoft website or from a reputable archive website. Here are the links to download free DirectX 7.0 for Windows 7 Ultimate 64 bit from these sources:

    - -

    Both links will provide you with a web installer that will download and install the necessary files for DirectX

    -

    How to Install Free DirectX 7.0 for Windows 7 Ultimate 64 Bit

    -

    After you download free DirectX 7.0 for Windows 7 Ultimate 64 bit from one of the links above, you need to run the web installer and follow the instructions on the screen.

    -

    The web installer will check your system requirements and download the appropriate files for your Windows version and language. The installation process may take several minutes depending on your internet speed and system performance.

    -

    Once the installation is complete, you will need to restart your computer to apply the changes. You can then check if DirectX 7.0 is installed correctly by using the DxDiag tool.

    -

    How to Check if DirectX 7.0 is Installed Correctly

    -

    To check if DirectX 7.0 is installed correctly on your Windows 7 Ultimate 64 bit system, you can use the DxDiag tool, which reports detailed information about the DirectX components and drivers installed on your system.

    -

    To use the DxDiag tool, follow these steps:

    -
      -
    1. Click on Start and type dxdiag in the search box.
    2. -
    3. Click on dxdiag.exe from the search results.
    4. -
    5. Wait for the tool to collect information about your system.
    6. -
    7. Click on the System tab and check the DirectX Version field. It should say DirectX 7 or higher.
    8. -
    9. Click on the Display tab and check the DirectDraw Acceleration, Direct3D Acceleration, and AGP Texture Acceleration fields. They should say Enabled or Not Available depending on your hardware capabilities.
    10. -
    11. Click on the Sound tab and check the DirectSound Acceleration field. It should say Enabled or Not Available depending on your hardware capabilities.
    12. -
    13. Click on Exit to close the tool.
    14. -
    -

    If you see any errors or problems with DirectX 7.0, you can try to troubleshoot them by following the instructions from Microsoft's website or from the installation program.

    -

    Conclusion

    -

    In this article, we have shown you how to download free DirectX 7.0 for Windows 7 Ultimate 64 bit and how to install it correctly. We have also shown you how to check if DirectX 7.0 is installed correctly and how to troubleshoot any issues that may arise.

    -

    By downloading free DirectX 7.0 for Windows 7 Ultimate 64 bit, you can enjoy a better gaming and multimedia experience on your system without spending any money or compromising your security. We hope you found this article helpful and informative.

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/European Gays Barely..md b/spaces/contluForse/HuggingGPT/assets/European Gays Barely..md deleted file mode 100644 index e9370b30339fc84fc77e3a3226d563ea1827b492..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/European Gays Barely..md +++ /dev/null @@ -1,5 +0,0 @@ - -

    Henry L. Minton's Departing from Deviance is an importantcontribution to queer studies generally and to the historiography of thelesbian and gay civil rights movement specifically. The book climaxes withthe American Psychiatric Association's 1973 removal of homosexualityfrom its list of mental illnesses. To get to that point, Minton traces afascinating history, beginning at the turn of the twentieth century, of thepioneering role that lesbian and gay activists and advocates played as theyinfluenced the production and publication of a number of significantpsychological and sexological studies. Prior to the 1960s it was, largelyspeaking, only through studies such as these that gay and lesbian activistscould find a voice. In a scornful and dismissive society, the medicalprofession provided the legitimating sponsorship for them to do so. Thesescientific studies, Minton persuasively argues, became emancipatory for thegays and lesbians who influenced, conducted, and participated in them.Additionally, the author explores the significant influence that moremainstream sexologists and psychologists, for example, Evelyn Hooker andAlfred C. Kinsey, had within the scientific fields on homosexual rights.Finally, Minton considers the immediate impact that lesbian and gay activistshad on the American Psychiatric Association in the early 1970s, when afteryears of groundwork by those noted above they helped to push thatorganization into reevaluating its position on homosexuality.

    -

    European, gays barely.


    DOWNLOADhttps://ssurll.com/2uzy9R



    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cooelf/Multimodal-CoT/timm/optim/rmsprop_tf.py b/spaces/cooelf/Multimodal-CoT/timm/optim/rmsprop_tf.py deleted file mode 100644 index 5115555cd26040e3af297a6e79e7bd5e4d202623..0000000000000000000000000000000000000000 --- a/spaces/cooelf/Multimodal-CoT/timm/optim/rmsprop_tf.py +++ /dev/null @@ -1,136 +0,0 @@ -""" RMSProp modified to behave like Tensorflow impl - -Originally cut & paste from PyTorch RMSProp -https://github.com/pytorch/pytorch/blob/063946d2b3f3f1e953a2a3b54e0b34f1393de295/torch/optim/rmsprop.py -Licensed under BSD-Clause 3 (ish), https://github.com/pytorch/pytorch/blob/master/LICENSE - -Modifications Copyright 2020 Ross Wightman -""" - -import torch -from torch.optim import Optimizer - - -class RMSpropTF(Optimizer): - """Implements RMSprop algorithm (TensorFlow style epsilon) - - NOTE: This is a direct cut-and-paste of PyTorch RMSprop with eps applied before sqrt - and a few other modifications to closer match Tensorflow for matching hyper-params. - - Noteworthy changes include: - 1. Epsilon applied inside square-root - 2. square_avg initialized to ones - 3. LR scaling of update accumulated in momentum buffer - - Proposed by G. Hinton in his - `course `_. - - The centered version first appears in `Generating Sequences - With Recurrent Neural Networks `_. - - Arguments: - params (iterable): iterable of parameters to optimize or dicts defining - parameter groups - lr (float, optional): learning rate (default: 1e-2) - momentum (float, optional): momentum factor (default: 0) - alpha (float, optional): smoothing (decay) constant (default: 0.9) - eps (float, optional): term added to the denominator to improve - numerical stability (default: 1e-10) - centered (bool, optional) : if ``True``, compute the centered RMSProp, - the gradient is normalized by an estimation of its variance - weight_decay (float, optional): weight decay (L2 penalty) (default: 0) - decoupled_decay (bool, optional): decoupled weight decay as per https://arxiv.org/abs/1711.05101 - lr_in_momentum (bool, optional): learning rate scaling is included in the momentum buffer - update as per defaults in Tensorflow - - """ - - def __init__(self, params, lr=1e-2, alpha=0.9, eps=1e-10, weight_decay=0, momentum=0., centered=False, - decoupled_decay=False, lr_in_momentum=True): - if not 0.0 <= lr: - raise ValueError("Invalid learning rate: {}".format(lr)) - if not 0.0 <= eps: - raise ValueError("Invalid epsilon value: {}".format(eps)) - if not 0.0 <= momentum: - raise ValueError("Invalid momentum value: {}".format(momentum)) - if not 0.0 <= weight_decay: - raise ValueError("Invalid weight_decay value: {}".format(weight_decay)) - if not 0.0 <= alpha: - raise ValueError("Invalid alpha value: {}".format(alpha)) - - defaults = dict(lr=lr, momentum=momentum, alpha=alpha, eps=eps, centered=centered, weight_decay=weight_decay, - decoupled_decay=decoupled_decay, lr_in_momentum=lr_in_momentum) - super(RMSpropTF, self).__init__(params, defaults) - - def __setstate__(self, state): - super(RMSpropTF, self).__setstate__(state) - for group in self.param_groups: - group.setdefault('momentum', 0) - group.setdefault('centered', False) - - def step(self, closure=None): - """Performs a single optimization step. - - Arguments: - closure (callable, optional): A closure that reevaluates the model - and returns the loss. - """ - loss = None - if closure is not None: - loss = closure() - - for group in self.param_groups: - for p in group['params']: - if p.grad is None: - continue - grad = p.grad.data - if grad.is_sparse: - raise RuntimeError('RMSprop does not support sparse gradients') - state = self.state[p] - - # State initialization - if len(state) == 0: - state['step'] = 0 - state['square_avg'] = torch.ones_like(p.data) # PyTorch inits to zero - if group['momentum'] > 0: - state['momentum_buffer'] = torch.zeros_like(p.data) - if group['centered']: - state['grad_avg'] = torch.zeros_like(p.data) - - square_avg = state['square_avg'] - one_minus_alpha = 1. - group['alpha'] - - state['step'] += 1 - - if group['weight_decay'] != 0: - if 'decoupled_decay' in group and group['decoupled_decay']: - p.data.add_(-group['weight_decay'], p.data) - else: - grad = grad.add(group['weight_decay'], p.data) - - # Tensorflow order of ops for updating squared avg - square_avg.add_(one_minus_alpha, grad.pow(2) - square_avg) - # square_avg.mul_(alpha).addcmul_(1 - alpha, grad, grad) # PyTorch original - - if group['centered']: - grad_avg = state['grad_avg'] - grad_avg.add_(one_minus_alpha, grad - grad_avg) - # grad_avg.mul_(alpha).add_(1 - alpha, grad) # PyTorch original - avg = square_avg.addcmul(-1, grad_avg, grad_avg).add(group['eps']).sqrt_() # eps moved in sqrt - else: - avg = square_avg.add(group['eps']).sqrt_() # eps moved in sqrt - - if group['momentum'] > 0: - buf = state['momentum_buffer'] - # Tensorflow accumulates the LR scaling in the momentum buffer - if 'lr_in_momentum' in group and group['lr_in_momentum']: - buf.mul_(group['momentum']).addcdiv_(group['lr'], grad, avg) - p.data.add_(-buf) - else: - # PyTorch scales the param update by LR - buf.mul_(group['momentum']).addcdiv_(grad, avg) - p.data.add_(-group['lr'], buf) - else: - p.data.addcdiv_(-group['lr'], grad, avg) - - return loss diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/midas/midas/blocks.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/midas/midas/blocks.py deleted file mode 100644 index 2145d18fa98060a618536d9a64fe6589e9be4f78..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/midas/midas/blocks.py +++ /dev/null @@ -1,342 +0,0 @@ -import torch -import torch.nn as nn - -from .vit import ( - _make_pretrained_vitb_rn50_384, - _make_pretrained_vitl16_384, - _make_pretrained_vitb16_384, - forward_vit, -) - -def _make_encoder(backbone, features, use_pretrained, groups=1, expand=False, exportable=True, hooks=None, use_vit_only=False, use_readout="ignore",): - if backbone == "vitl16_384": - pretrained = _make_pretrained_vitl16_384( - use_pretrained, hooks=hooks, use_readout=use_readout - ) - scratch = _make_scratch( - [256, 512, 1024, 1024], features, groups=groups, expand=expand - ) # ViT-L/16 - 85.0% Top1 (backbone) - elif backbone == "vitb_rn50_384": - pretrained = _make_pretrained_vitb_rn50_384( - use_pretrained, - hooks=hooks, - use_vit_only=use_vit_only, - use_readout=use_readout, - ) - scratch = _make_scratch( - [256, 512, 768, 768], features, groups=groups, expand=expand - ) # ViT-H/16 - 85.0% Top1 (backbone) - elif backbone == "vitb16_384": - pretrained = _make_pretrained_vitb16_384( - use_pretrained, hooks=hooks, use_readout=use_readout - ) - scratch = _make_scratch( - [96, 192, 384, 768], features, groups=groups, expand=expand - ) # ViT-B/16 - 84.6% Top1 (backbone) - elif backbone == "resnext101_wsl": - pretrained = _make_pretrained_resnext101_wsl(use_pretrained) - scratch = _make_scratch([256, 512, 1024, 2048], features, groups=groups, expand=expand) # efficientnet_lite3 - elif backbone == "efficientnet_lite3": - pretrained = _make_pretrained_efficientnet_lite3(use_pretrained, exportable=exportable) - scratch = _make_scratch([32, 48, 136, 384], features, groups=groups, expand=expand) # efficientnet_lite3 - else: - print(f"Backbone '{backbone}' not implemented") - assert False - - return pretrained, scratch - - -def _make_scratch(in_shape, out_shape, groups=1, expand=False): - scratch = nn.Module() - - out_shape1 = out_shape - out_shape2 = out_shape - out_shape3 = out_shape - out_shape4 = out_shape - if expand==True: - out_shape1 = out_shape - out_shape2 = out_shape*2 - out_shape3 = out_shape*4 - out_shape4 = out_shape*8 - - scratch.layer1_rn = nn.Conv2d( - in_shape[0], out_shape1, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - scratch.layer2_rn = nn.Conv2d( - in_shape[1], out_shape2, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - scratch.layer3_rn = nn.Conv2d( - in_shape[2], out_shape3, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - scratch.layer4_rn = nn.Conv2d( - in_shape[3], out_shape4, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - - return scratch - - -def _make_pretrained_efficientnet_lite3(use_pretrained, exportable=False): - efficientnet = torch.hub.load( - "rwightman/gen-efficientnet-pytorch", - "tf_efficientnet_lite3", - pretrained=use_pretrained, - exportable=exportable - ) - return _make_efficientnet_backbone(efficientnet) - - -def _make_efficientnet_backbone(effnet): - pretrained = nn.Module() - - pretrained.layer1 = nn.Sequential( - effnet.conv_stem, effnet.bn1, effnet.act1, *effnet.blocks[0:2] - ) - pretrained.layer2 = nn.Sequential(*effnet.blocks[2:3]) - pretrained.layer3 = nn.Sequential(*effnet.blocks[3:5]) - pretrained.layer4 = nn.Sequential(*effnet.blocks[5:9]) - - return pretrained - - -def _make_resnet_backbone(resnet): - pretrained = nn.Module() - pretrained.layer1 = nn.Sequential( - resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool, resnet.layer1 - ) - - pretrained.layer2 = resnet.layer2 - pretrained.layer3 = resnet.layer3 - pretrained.layer4 = resnet.layer4 - - return pretrained - - -def _make_pretrained_resnext101_wsl(use_pretrained): - resnet = torch.hub.load("facebookresearch/WSL-Images", "resnext101_32x8d_wsl") - return _make_resnet_backbone(resnet) - - - -class Interpolate(nn.Module): - """Interpolation module. - """ - - def __init__(self, scale_factor, mode, align_corners=False): - """Init. - - Args: - scale_factor (float): scaling - mode (str): interpolation mode - """ - super(Interpolate, self).__init__() - - self.interp = nn.functional.interpolate - self.scale_factor = scale_factor - self.mode = mode - self.align_corners = align_corners - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input - - Returns: - tensor: interpolated data - """ - - x = self.interp( - x, scale_factor=self.scale_factor, mode=self.mode, align_corners=self.align_corners - ) - - return x - - -class ResidualConvUnit(nn.Module): - """Residual convolution module. - """ - - def __init__(self, features): - """Init. - - Args: - features (int): number of features - """ - super().__init__() - - self.conv1 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True - ) - - self.conv2 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True - ) - - self.relu = nn.ReLU(inplace=True) - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input - - Returns: - tensor: output - """ - out = self.relu(x) - out = self.conv1(out) - out = self.relu(out) - out = self.conv2(out) - - return out + x - - -class FeatureFusionBlock(nn.Module): - """Feature fusion block. - """ - - def __init__(self, features): - """Init. - - Args: - features (int): number of features - """ - super(FeatureFusionBlock, self).__init__() - - self.resConfUnit1 = ResidualConvUnit(features) - self.resConfUnit2 = ResidualConvUnit(features) - - def forward(self, *xs): - """Forward pass. - - Returns: - tensor: output - """ - output = xs[0] - - if len(xs) == 2: - output += self.resConfUnit1(xs[1]) - - output = self.resConfUnit2(output) - - output = nn.functional.interpolate( - output, scale_factor=2, mode="bilinear", align_corners=True - ) - - return output - - - - -class ResidualConvUnit_custom(nn.Module): - """Residual convolution module. - """ - - def __init__(self, features, activation, bn): - """Init. - - Args: - features (int): number of features - """ - super().__init__() - - self.bn = bn - - self.groups=1 - - self.conv1 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups - ) - - self.conv2 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups - ) - - if self.bn==True: - self.bn1 = nn.BatchNorm2d(features) - self.bn2 = nn.BatchNorm2d(features) - - self.activation = activation - - self.skip_add = nn.quantized.FloatFunctional() - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input - - Returns: - tensor: output - """ - - out = self.activation(x) - out = self.conv1(out) - if self.bn==True: - out = self.bn1(out) - - out = self.activation(out) - out = self.conv2(out) - if self.bn==True: - out = self.bn2(out) - - if self.groups > 1: - out = self.conv_merge(out) - - return self.skip_add.add(out, x) - - # return out + x - - -class FeatureFusionBlock_custom(nn.Module): - """Feature fusion block. - """ - - def __init__(self, features, activation, deconv=False, bn=False, expand=False, align_corners=True): - """Init. - - Args: - features (int): number of features - """ - super(FeatureFusionBlock_custom, self).__init__() - - self.deconv = deconv - self.align_corners = align_corners - - self.groups=1 - - self.expand = expand - out_features = features - if self.expand==True: - out_features = features//2 - - self.out_conv = nn.Conv2d(features, out_features, kernel_size=1, stride=1, padding=0, bias=True, groups=1) - - self.resConfUnit1 = ResidualConvUnit_custom(features, activation, bn) - self.resConfUnit2 = ResidualConvUnit_custom(features, activation, bn) - - self.skip_add = nn.quantized.FloatFunctional() - - def forward(self, *xs): - """Forward pass. - - Returns: - tensor: output - """ - output = xs[0] - - if len(xs) == 2: - res = self.resConfUnit1(xs[1]) - output = self.skip_add.add(output, res) - # output += res - - output = self.resConfUnit2(output) - - output = nn.functional.interpolate( - output, scale_factor=2, mode="bilinear", align_corners=self.align_corners - ) - - output = self.out_conv(output) - - return output - diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/exp/upernet_global_small/run.sh b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/exp/upernet_global_small/run.sh deleted file mode 100644 index 9fb22edfa7a32624ea08a63fe7d720c40db3b696..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/exp/upernet_global_small/run.sh +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/env bash - -work_path=$(dirname $0) -PYTHONPATH="$(dirname $0)/../../":$PYTHONPATH \ -python -m torch.distributed.launch --nproc_per_node=8 \ - tools/train.py ${work_path}/config.py \ - --launcher pytorch \ - --options model.backbone.pretrained_path='your_model_path/uniformer_small_in1k.pth' \ - --work-dir ${work_path}/ckpt \ - 2>&1 | tee -a ${work_path}/log.txt diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/bricks/hsigmoid.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/bricks/hsigmoid.py deleted file mode 100644 index 30b1a3d6580cf0360710426fbea1f05acdf07b4b..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/bricks/hsigmoid.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn - -from .registry import ACTIVATION_LAYERS - - -@ACTIVATION_LAYERS.register_module() -class HSigmoid(nn.Module): - """Hard Sigmoid Module. Apply the hard sigmoid function: - Hsigmoid(x) = min(max((x + bias) / divisor, min_value), max_value) - Default: Hsigmoid(x) = min(max((x + 1) / 2, 0), 1) - - Args: - bias (float): Bias of the input feature map. Default: 1.0. - divisor (float): Divisor of the input feature map. Default: 2.0. - min_value (float): Lower bound value. Default: 0.0. - max_value (float): Upper bound value. Default: 1.0. - - Returns: - Tensor: The output tensor. - """ - - def __init__(self, bias=1.0, divisor=2.0, min_value=0.0, max_value=1.0): - super(HSigmoid, self).__init__() - self.bias = bias - self.divisor = divisor - assert self.divisor != 0 - self.min_value = min_value - self.max_value = max_value - - def forward(self, x): - x = (x + self.bias) / self.divisor - - return x.clamp_(self.min_value, self.max_value) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/ops/roi_pool.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/ops/roi_pool.py deleted file mode 100644 index d339d8f2941eabc1cbe181a9c6c5ab5ff4ff4e5f..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/ops/roi_pool.py +++ /dev/null @@ -1,86 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', - ['roi_pool_forward', 'roi_pool_backward']) - - -class RoIPoolFunction(Function): - - @staticmethod - def symbolic(g, input, rois, output_size, spatial_scale): - return g.op( - 'MaxRoiPool', - input, - rois, - pooled_shape_i=output_size, - spatial_scale_f=spatial_scale) - - @staticmethod - def forward(ctx, input, rois, output_size, spatial_scale=1.0): - ctx.output_size = _pair(output_size) - ctx.spatial_scale = spatial_scale - ctx.input_shape = input.size() - - assert rois.size(1) == 5, 'RoI must be (idx, x1, y1, x2, y2)!' - - output_shape = (rois.size(0), input.size(1), ctx.output_size[0], - ctx.output_size[1]) - output = input.new_zeros(output_shape) - argmax = input.new_zeros(output_shape, dtype=torch.int) - - ext_module.roi_pool_forward( - input, - rois, - output, - argmax, - pooled_height=ctx.output_size[0], - pooled_width=ctx.output_size[1], - spatial_scale=ctx.spatial_scale) - - ctx.save_for_backward(rois, argmax) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - rois, argmax = ctx.saved_tensors - grad_input = grad_output.new_zeros(ctx.input_shape) - - ext_module.roi_pool_backward( - grad_output, - rois, - argmax, - grad_input, - pooled_height=ctx.output_size[0], - pooled_width=ctx.output_size[1], - spatial_scale=ctx.spatial_scale) - - return grad_input, None, None, None - - -roi_pool = RoIPoolFunction.apply - - -class RoIPool(nn.Module): - - def __init__(self, output_size, spatial_scale=1.0): - super(RoIPool, self).__init__() - - self.output_size = _pair(output_size) - self.spatial_scale = float(spatial_scale) - - def forward(self, input, rois): - return roi_pool(input, rois, self.output_size, self.spatial_scale) - - def __repr__(self): - s = self.__class__.__name__ - s += f'(output_size={self.output_size}, ' - s += f'spatial_scale={self.spatial_scale})' - return s diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/datasets/pipelines/test_time_aug.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/datasets/pipelines/test_time_aug.py deleted file mode 100644 index 6a1611a04d9d927223c9afbe5bf68af04d62937a..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/datasets/pipelines/test_time_aug.py +++ /dev/null @@ -1,133 +0,0 @@ -import warnings - -import annotator.uniformer.mmcv as mmcv - -from ..builder import PIPELINES -from .compose import Compose - - -@PIPELINES.register_module() -class MultiScaleFlipAug(object): - """Test-time augmentation with multiple scales and flipping. - - An example configuration is as followed: - - .. code-block:: - - img_scale=(2048, 1024), - img_ratios=[0.5, 1.0], - flip=True, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ] - - After MultiScaleFLipAug with above configuration, the results are wrapped - into lists of the same length as followed: - - .. code-block:: - - dict( - img=[...], - img_shape=[...], - scale=[(1024, 512), (1024, 512), (2048, 1024), (2048, 1024)] - flip=[False, True, False, True] - ... - ) - - Args: - transforms (list[dict]): Transforms to apply in each augmentation. - img_scale (None | tuple | list[tuple]): Images scales for resizing. - img_ratios (float | list[float]): Image ratios for resizing - flip (bool): Whether apply flip augmentation. Default: False. - flip_direction (str | list[str]): Flip augmentation directions, - options are "horizontal" and "vertical". If flip_direction is list, - multiple flip augmentations will be applied. - It has no effect when flip == False. Default: "horizontal". - """ - - def __init__(self, - transforms, - img_scale, - img_ratios=None, - flip=False, - flip_direction='horizontal'): - self.transforms = Compose(transforms) - if img_ratios is not None: - img_ratios = img_ratios if isinstance(img_ratios, - list) else [img_ratios] - assert mmcv.is_list_of(img_ratios, float) - if img_scale is None: - # mode 1: given img_scale=None and a range of image ratio - self.img_scale = None - assert mmcv.is_list_of(img_ratios, float) - elif isinstance(img_scale, tuple) and mmcv.is_list_of( - img_ratios, float): - assert len(img_scale) == 2 - # mode 2: given a scale and a range of image ratio - self.img_scale = [(int(img_scale[0] * ratio), - int(img_scale[1] * ratio)) - for ratio in img_ratios] - else: - # mode 3: given multiple scales - self.img_scale = img_scale if isinstance(img_scale, - list) else [img_scale] - assert mmcv.is_list_of(self.img_scale, tuple) or self.img_scale is None - self.flip = flip - self.img_ratios = img_ratios - self.flip_direction = flip_direction if isinstance( - flip_direction, list) else [flip_direction] - assert mmcv.is_list_of(self.flip_direction, str) - if not self.flip and self.flip_direction != ['horizontal']: - warnings.warn( - 'flip_direction has no effect when flip is set to False') - if (self.flip - and not any([t['type'] == 'RandomFlip' for t in transforms])): - warnings.warn( - 'flip has no effect when RandomFlip is not in transforms') - - def __call__(self, results): - """Call function to apply test time augment transforms on results. - - Args: - results (dict): Result dict contains the data to transform. - - Returns: - dict[str: list]: The augmented data, where each value is wrapped - into a list. - """ - - aug_data = [] - if self.img_scale is None and mmcv.is_list_of(self.img_ratios, float): - h, w = results['img'].shape[:2] - img_scale = [(int(w * ratio), int(h * ratio)) - for ratio in self.img_ratios] - else: - img_scale = self.img_scale - flip_aug = [False, True] if self.flip else [False] - for scale in img_scale: - for flip in flip_aug: - for direction in self.flip_direction: - _results = results.copy() - _results['scale'] = scale - _results['flip'] = flip - _results['flip_direction'] = direction - data = self.transforms(_results) - aug_data.append(data) - # list of dict to dict of list - aug_data_dict = {key: [] for key in aug_data[0]} - for data in aug_data: - for key, val in data.items(): - aug_data_dict[key].append(val) - return aug_data_dict - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(transforms={self.transforms}, ' - repr_str += f'img_scale={self.img_scale}, flip={self.flip})' - repr_str += f'flip_direction={self.flip_direction}' - return repr_str diff --git a/spaces/cymic/Talking_Head_Anime_3/tha3/util.py b/spaces/cymic/Talking_Head_Anime_3/tha3/util.py deleted file mode 100644 index a161150f4ce39d7217703be2668ae3f9db21cabe..0000000000000000000000000000000000000000 --- a/spaces/cymic/Talking_Head_Anime_3/tha3/util.py +++ /dev/null @@ -1,281 +0,0 @@ -import math -import os -from typing import List - -import PIL.Image -import numpy -import torch -from matplotlib import cm -from torch import Tensor - - -def is_power2(x): - return x != 0 and ((x & (x - 1)) == 0) - - -def numpy_srgb_to_linear(x): - x = numpy.clip(x, 0.0, 1.0) - return numpy.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4) - - -def numpy_linear_to_srgb(x): - x = numpy.clip(x, 0.0, 1.0) - return numpy.where(x <= 0.003130804953560372, x * 12.92, 1.055 * (x ** (1.0 / 2.4)) - 0.055) - - -def torch_srgb_to_linear(x: torch.Tensor): - x = torch.clip(x, 0.0, 1.0) - return torch.where(torch.le(x, 0.04045), x / 12.92, ((x + 0.055) / 1.055) ** 2.4) - - -def torch_linear_to_srgb(x): - x = torch.clip(x, 0.0, 1.0) - return torch.where(torch.le(x, 0.003130804953560372), x * 12.92, 1.055 * (x ** (1.0 / 2.4)) - 0.055) - - -def image_linear_to_srgb(image): - assert image.shape[2] == 3 or image.shape[2] == 4 - if image.shape[2] == 3: - return numpy_linear_to_srgb(image) - else: - height, width, _ = image.shape - rgb_image = numpy_linear_to_srgb(image[:, :, 0:3]) - a_image = image[:, :, 3:4] - return numpy.concatenate((rgb_image, a_image), axis=2) - - -def image_srgb_to_linear(image): - assert image.shape[2] == 3 or image.shape[2] == 4 - if image.shape[2] == 3: - return numpy_srgb_to_linear(image) - else: - height, width, _ = image.shape - rgb_image = numpy_srgb_to_linear(image[:, :, 0:3]) - a_image = image[:, :, 3:4] - return numpy.concatenate((rgb_image, a_image), axis=2) - - -def save_rng_state(file_name): - rng_state = torch.get_rng_state() - torch_save(rng_state, file_name) - - -def load_rng_state(file_name): - rng_state = torch_load(file_name) - torch.set_rng_state(rng_state) - - -def grid_change_to_numpy_image(torch_image, num_channels=3): - height = torch_image.shape[1] - width = torch_image.shape[2] - size_image = (torch_image[0, :, :] ** 2 + torch_image[1, :, :] ** 2).sqrt().view(height, width, 1).numpy() - hsv = cm.get_cmap('hsv') - angle_image = hsv(((torch.atan2( - torch_image[0, :, :].view(height * width), - torch_image[1, :, :].view(height * width)).view(height, width) + math.pi) / (2 * math.pi)).numpy()) * 3 - numpy_image = size_image * angle_image[:, :, 0:3] - rgb_image = numpy_linear_to_srgb(numpy_image) - if num_channels == 3: - return rgb_image - elif num_channels == 4: - return numpy.concatenate([rgb_image, numpy.ones_like(size_image)], axis=2) - else: - raise RuntimeError("Unsupported num_channels: " + str(num_channels)) - - -def rgb_to_numpy_image(torch_image: Tensor, min_pixel_value=-1.0, max_pixel_value=1.0): - assert torch_image.dim() == 3 - assert torch_image.shape[0] == 3 - height = torch_image.shape[1] - width = torch_image.shape[2] - - reshaped_image = torch_image.numpy().reshape(3, height * width).transpose().reshape(height, width, 3) - numpy_image = (reshaped_image - min_pixel_value) / (max_pixel_value - min_pixel_value) - return numpy_linear_to_srgb(numpy_image) - - -def rgba_to_numpy_image_greenscreen(torch_image: Tensor, - min_pixel_value=-1.0, - max_pixel_value=1.0, - include_alpha=False): - height = torch_image.shape[1] - width = torch_image.shape[2] - - numpy_image = (torch_image.numpy().reshape(4, height * width).transpose().reshape(height, width, - 4) - min_pixel_value) \ - / (max_pixel_value - min_pixel_value) - rgb_image = numpy_linear_to_srgb(numpy_image[:, :, 0:3]) - a_image = numpy_image[:, :, 3] - rgb_image[:, :, 0:3] = rgb_image[:, :, 0:3] * a_image.reshape(a_image.shape[0], a_image.shape[1], 1) - rgb_image[:, :, 1] = rgb_image[:, :, 1] + (1 - a_image) - - if not include_alpha: - return rgb_image - else: - return numpy.concatenate((rgb_image, numpy.ones_like(numpy_image[:, :, 3:4])), axis=2) - - -def rgba_to_numpy_image(torch_image: Tensor, min_pixel_value=-1.0, max_pixel_value=1.0): - assert torch_image.dim() == 3 - assert torch_image.shape[0] == 4 - height = torch_image.shape[1] - width = torch_image.shape[2] - - reshaped_image = torch_image.numpy().reshape(4, height * width).transpose().reshape(height, width, 4) - numpy_image = (reshaped_image - min_pixel_value) / (max_pixel_value - min_pixel_value) - rgb_image = numpy_linear_to_srgb(numpy_image[:, :, 0:3]) - a_image = numpy.clip(numpy_image[:, :, 3], 0.0, 1.0) - rgba_image = numpy.concatenate((rgb_image, a_image.reshape(height, width, 1)), axis=2) - return rgba_image - - -def extract_numpy_image_from_filelike_with_pytorch_layout(file, has_alpha=True, scale=2.0, offset=-1.0): - try: - pil_image = PIL.Image.open(file) - except Exception as e: - raise RuntimeError(file) - return extract_numpy_image_from_PIL_image_with_pytorch_layout(pil_image, has_alpha, scale, offset) - - -def extract_numpy_image_from_PIL_image_with_pytorch_layout(pil_image, has_alpha=True, scale=2.0, offset=-1.0): - if has_alpha: - num_channel = 4 - else: - num_channel = 3 - image_size = pil_image.width - - # search for transparent pixels(alpha==0) and change them to [0 0 0 0] to avoid the color influence to the model - for i, px in enumerate(pil_image.getdata()): - if px[3] <= 0: - y = i // image_size - x = i % image_size - pil_image.putpixel((x, y), (0, 0, 0, 0)) - - raw_image = numpy.asarray(pil_image) - image = (raw_image / 255.0).reshape(image_size, image_size, num_channel) - image[:, :, 0:3] = numpy_srgb_to_linear(image[:, :, 0:3]) - image = image \ - .reshape(image_size * image_size, num_channel) \ - .transpose() \ - .reshape(num_channel, image_size, image_size) * scale + offset - return image - - -def extract_pytorch_image_from_filelike(file, has_alpha=True, scale=2.0, offset=-1.0): - try: - pil_image = PIL.Image.open(file) - except Exception as e: - raise RuntimeError(file) - image = extract_numpy_image_from_PIL_image_with_pytorch_layout(pil_image, has_alpha, scale, offset) - return torch.from_numpy(image).float() - - -def extract_pytorch_image_from_PIL_image(pil_image, has_alpha=True, scale=2.0, offset=-1.0): - image = extract_numpy_image_from_PIL_image_with_pytorch_layout(pil_image, has_alpha, scale, offset) - return torch.from_numpy(image).float() - - -def extract_numpy_image_from_filelike(file): - pil_image = PIL.Image.open(file) - image_width = pil_image.width - image_height = pil_image.height - if pil_image.mode == "RGBA": - image = (numpy.asarray(pil_image) / 255.0).reshape(image_height, image_width, 4) - else: - image = (numpy.asarray(pil_image) / 255.0).reshape(image_height, image_width, 3) - image[:, :, 0:3] = numpy_srgb_to_linear(image[:, :, 0:3]) - return image - - -def convert_avs_to_avi(avs_file, avi_file): - os.makedirs(os.path.dirname(avi_file), exist_ok=True) - - file = open("temp.vdub", "w") - file.write("VirtualDub.Open(\"%s\");" % avs_file) - file.write("VirtualDub.video.SetCompression(\"cvid\", 0, 10000, 0);") - file.write("VirtualDub.SaveAVI(\"%s\");" % avi_file) - file.write("VirtualDub.Close();") - file.close() - - os.system("C:\\ProgramData\\chocolatey\\lib\\virtualdub\\tools\\vdub64.exe /i temp.vdub") - - os.remove("temp.vdub") - - -def convert_avi_to_mp4(avi_file, mp4_file): - os.makedirs(os.path.dirname(mp4_file), exist_ok=True) - os.system("ffmpeg -y -i %s -c:v libx264 -preset slow -crf 22 -c:a libfaac -b:a 128k %s" % \ - (avi_file, mp4_file)) - - -def convert_avi_to_webm(avi_file, webm_file): - os.makedirs(os.path.dirname(webm_file), exist_ok=True) - os.system("ffmpeg -y -i %s -vcodec libvpx -qmin 0 -qmax 50 -crf 10 -b:v 1M -acodec libvorbis %s" % \ - (avi_file, webm_file)) - - -def convert_mp4_to_webm(mp4_file, webm_file): - os.makedirs(os.path.dirname(webm_file), exist_ok=True) - os.system("ffmpeg -y -i %s -vcodec libvpx -qmin 0 -qmax 50 -crf 10 -b:v 1M -acodec libvorbis %s" % \ - (mp4_file, webm_file)) - - -def create_parent_dir(file_name): - os.makedirs(os.path.dirname(file_name), exist_ok=True) - - -def run_command(command_parts: List[str]): - command = " ".join(command_parts) - os.system(command) - - -def save_pytorch_image(image, file_name): - if image.shape[0] == 1: - image = image.squeeze() - if image.shape[0] == 4: - numpy_image = rgba_to_numpy_image(image.detach().cpu()) - pil_image = PIL.Image.fromarray(numpy.uint8(numpy.rint(numpy_image * 255.0)), mode='RGBA') - else: - numpy_image = rgb_to_numpy_image(image.detach().cpu()) - pil_image = PIL.Image.fromarray(numpy.uint8(numpy.rint(numpy_image * 255.0)), mode='RGB') - os.makedirs(os.path.dirname(file_name), exist_ok=True) - pil_image.save(file_name) - - -def torch_load(file_name): - with open(file_name, 'rb') as f: - return torch.load(f) - - -def torch_save(content, file_name): - os.makedirs(os.path.dirname(file_name), exist_ok=True) - with open(file_name, 'wb') as f: - torch.save(content, f) - - -def resize_PIL_image(pil_image, size=(256, 256)): - w, h = pil_image.size - d = min(w, h) - r = ((w - d) // 2, (h - d) // 2, (w + d) // 2, (h + d) // 2) - return pil_image.resize(size, resample=PIL.Image.LANCZOS, box=r) - - -def extract_PIL_image_from_filelike(file): - return PIL.Image.open(file) - - -def convert_output_image_from_torch_to_numpy(output_image): - if output_image.shape[2] == 2: - h, w, c = output_image.shape - output_image = torch.transpose(output_image.reshape(h * w, c), 0, 1).reshape(c, h, w) - if output_image.shape[0] == 4: - numpy_image = rgba_to_numpy_image(output_image) - elif output_image.shape[0] == 1: - c, h, w = output_image.shape - alpha_image = torch.cat([output_image.repeat(3, 1, 1) * 2.0 - 1.0, torch.ones(1, h, w)], dim=0) - numpy_image = rgba_to_numpy_image(alpha_image) - elif output_image.shape[0] == 2: - numpy_image = grid_change_to_numpy_image(output_image, num_channels=4) - else: - raise RuntimeError("Unsupported # image channels: %d" % output_image.shape[0]) - return numpy_image diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/networks.py b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/networks.py deleted file mode 100644 index ead9cdcb8720b845c233de79dc8a8d1668492108..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/networks.py +++ /dev/null @@ -1,521 +0,0 @@ -"""This script defines deep neural networks for Deep3DFaceRecon_pytorch -""" - -import os -import numpy as np -import torch.nn.functional as F -from torch.nn import init -import functools -from torch.optim import lr_scheduler -import torch -from torch import Tensor -import torch.nn as nn -try: - from torch.hub import load_state_dict_from_url -except ImportError: - from torch.utils.model_zoo import load_url as load_state_dict_from_url -from typing import Type, Any, Callable, Union, List, Optional -from .arcface_torch.backbones import get_model -from kornia.geometry import warp_affine - -def resize_n_crop(image, M, dsize=112): - # image: (b, c, h, w) - # M : (b, 2, 3) - return warp_affine(image, M, dsize=(dsize, dsize), align_corners=True) - -def filter_state_dict(state_dict, remove_name='fc'): - new_state_dict = {} - for key in state_dict: - if remove_name in key: - continue - new_state_dict[key] = state_dict[key] - return new_state_dict - -def get_scheduler(optimizer, opt): - """Return a learning rate scheduler - - Parameters: - optimizer -- the optimizer of the network - opt (option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions.  - opt.lr_policy is the name of learning rate policy: linear | step | plateau | cosine - - For other schedulers (step, plateau, and cosine), we use the default PyTorch schedulers. - See https://pytorch.org/docs/stable/optim.html for more details. - """ - if opt.lr_policy == 'linear': - def lambda_rule(epoch): - lr_l = 1.0 - max(0, epoch + opt.epoch_count - opt.n_epochs) / float(opt.n_epochs + 1) - return lr_l - scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_rule) - elif opt.lr_policy == 'step': - scheduler = lr_scheduler.StepLR(optimizer, step_size=opt.lr_decay_epochs, gamma=0.2) - elif opt.lr_policy == 'plateau': - scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.2, threshold=0.01, patience=5) - elif opt.lr_policy == 'cosine': - scheduler = lr_scheduler.CosineAnnealingLR(optimizer, T_max=opt.n_epochs, eta_min=0) - else: - return NotImplementedError('learning rate policy [%s] is not implemented', opt.lr_policy) - return scheduler - - -def define_net_recon(net_recon, use_last_fc=False, init_path=None): - return ReconNetWrapper(net_recon, use_last_fc=use_last_fc, init_path=init_path) - -def define_net_recog(net_recog, pretrained_path=None): - net = RecogNetWrapper(net_recog=net_recog, pretrained_path=pretrained_path) - net.eval() - return net - -class ReconNetWrapper(nn.Module): - fc_dim=257 - def __init__(self, net_recon, use_last_fc=False, init_path=None): - super(ReconNetWrapper, self).__init__() - self.use_last_fc = use_last_fc - if net_recon not in func_dict: - return NotImplementedError('network [%s] is not implemented', net_recon) - func, last_dim = func_dict[net_recon] - backbone = func(use_last_fc=use_last_fc, num_classes=self.fc_dim) - if init_path and os.path.isfile(init_path): - state_dict = filter_state_dict(torch.load(init_path, map_location='cpu')) - backbone.load_state_dict(state_dict) - print("loading init net_recon %s from %s" %(net_recon, init_path)) - self.backbone = backbone - if not use_last_fc: - self.final_layers = nn.ModuleList([ - conv1x1(last_dim, 80, bias=True), # id layer - conv1x1(last_dim, 64, bias=True), # exp layer - conv1x1(last_dim, 80, bias=True), # tex layer - conv1x1(last_dim, 3, bias=True), # angle layer - conv1x1(last_dim, 27, bias=True), # gamma layer - conv1x1(last_dim, 2, bias=True), # tx, ty - conv1x1(last_dim, 1, bias=True) # tz - ]) - for m in self.final_layers: - nn.init.constant_(m.weight, 0.) - nn.init.constant_(m.bias, 0.) - - def forward(self, x): - x = self.backbone(x) - if not self.use_last_fc: - output = [] - for layer in self.final_layers: - output.append(layer(x)) - x = torch.flatten(torch.cat(output, dim=1), 1) - return x - - -class RecogNetWrapper(nn.Module): - def __init__(self, net_recog, pretrained_path=None, input_size=112): - super(RecogNetWrapper, self).__init__() - net = get_model(name=net_recog, fp16=False) - if pretrained_path: - state_dict = torch.load(pretrained_path, map_location='cpu') - net.load_state_dict(state_dict) - print("loading pretrained net_recog %s from %s" %(net_recog, pretrained_path)) - for param in net.parameters(): - param.requires_grad = False - self.net = net - self.preprocess = lambda x: 2 * x - 1 - self.input_size=input_size - - def forward(self, image, M): - image = self.preprocess(resize_n_crop(image, M, self.input_size)) - id_feature = F.normalize(self.net(image), dim=-1, p=2) - return id_feature - - -# adapted from https://github.com/pytorch/vision/edit/master/torchvision/models/resnet.py -__all__ = ['ResNet', 'resnet18', 'resnet34', 'resnet50', 'resnet101', - 'resnet152', 'resnext50_32x4d', 'resnext101_32x8d', - 'wide_resnet50_2', 'wide_resnet101_2'] - - -model_urls = { - 'resnet18': 'https://download.pytorch.org/models/resnet18-f37072fd.pth', - 'resnet34': 'https://download.pytorch.org/models/resnet34-b627a593.pth', - 'resnet50': 'https://download.pytorch.org/models/resnet50-0676ba61.pth', - 'resnet101': 'https://download.pytorch.org/models/resnet101-63fe2227.pth', - 'resnet152': 'https://download.pytorch.org/models/resnet152-394f9c45.pth', - 'resnext50_32x4d': 'https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth', - 'resnext101_32x8d': 'https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth', - 'wide_resnet50_2': 'https://download.pytorch.org/models/wide_resnet50_2-95faca4d.pth', - 'wide_resnet101_2': 'https://download.pytorch.org/models/wide_resnet101_2-32ee1156.pth', -} - - -def conv3x3(in_planes: int, out_planes: int, stride: int = 1, groups: int = 1, dilation: int = 1) -> nn.Conv2d: - """3x3 convolution with padding""" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=dilation, groups=groups, bias=False, dilation=dilation) - - -def conv1x1(in_planes: int, out_planes: int, stride: int = 1, bias: bool = False) -> nn.Conv2d: - """1x1 convolution""" - return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=bias) - - -class BasicBlock(nn.Module): - expansion: int = 1 - - def __init__( - self, - inplanes: int, - planes: int, - stride: int = 1, - downsample: Optional[nn.Module] = None, - groups: int = 1, - base_width: int = 64, - dilation: int = 1, - norm_layer: Optional[Callable[..., nn.Module]] = None - ) -> None: - super(BasicBlock, self).__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - if groups != 1 or base_width != 64: - raise ValueError('BasicBlock only supports groups=1 and base_width=64') - if dilation > 1: - raise NotImplementedError("Dilation > 1 not supported in BasicBlock") - # Both self.conv1 and self.downsample layers downsample the input when stride != 1 - self.conv1 = conv3x3(inplanes, planes, stride) - self.bn1 = norm_layer(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = norm_layer(planes) - self.downsample = downsample - self.stride = stride - - def forward(self, x: Tensor) -> Tensor: - identity = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - # Bottleneck in torchvision places the stride for downsampling at 3x3 convolution(self.conv2) - # while original implementation places the stride at the first 1x1 convolution(self.conv1) - # according to "Deep residual learning for image recognition"https://arxiv.org/abs/1512.03385. - # This variant is also known as ResNet V1.5 and improves accuracy according to - # https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch. - - expansion: int = 4 - - def __init__( - self, - inplanes: int, - planes: int, - stride: int = 1, - downsample: Optional[nn.Module] = None, - groups: int = 1, - base_width: int = 64, - dilation: int = 1, - norm_layer: Optional[Callable[..., nn.Module]] = None - ) -> None: - super(Bottleneck, self).__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - width = int(planes * (base_width / 64.)) * groups - # Both self.conv2 and self.downsample layers downsample the input when stride != 1 - self.conv1 = conv1x1(inplanes, width) - self.bn1 = norm_layer(width) - self.conv2 = conv3x3(width, width, stride, groups, dilation) - self.bn2 = norm_layer(width) - self.conv3 = conv1x1(width, planes * self.expansion) - self.bn3 = norm_layer(planes * self.expansion) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def forward(self, x: Tensor) -> Tensor: - identity = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu(out) - - return out - - -class ResNet(nn.Module): - - def __init__( - self, - block: Type[Union[BasicBlock, Bottleneck]], - layers: List[int], - num_classes: int = 1000, - zero_init_residual: bool = False, - use_last_fc: bool = False, - groups: int = 1, - width_per_group: int = 64, - replace_stride_with_dilation: Optional[List[bool]] = None, - norm_layer: Optional[Callable[..., nn.Module]] = None - ) -> None: - super(ResNet, self).__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - self._norm_layer = norm_layer - - self.inplanes = 64 - self.dilation = 1 - if replace_stride_with_dilation is None: - # each element in the tuple indicates if we should replace - # the 2x2 stride with a dilated convolution instead - replace_stride_with_dilation = [False, False, False] - if len(replace_stride_with_dilation) != 3: - raise ValueError("replace_stride_with_dilation should be None " - "or a 3-element tuple, got {}".format(replace_stride_with_dilation)) - self.use_last_fc = use_last_fc - self.groups = groups - self.base_width = width_per_group - self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3, - bias=False) - self.bn1 = norm_layer(self.inplanes) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - self.layer1 = self._make_layer(block, 64, layers[0]) - self.layer2 = self._make_layer(block, 128, layers[1], stride=2, - dilate=replace_stride_with_dilation[0]) - self.layer3 = self._make_layer(block, 256, layers[2], stride=2, - dilate=replace_stride_with_dilation[1]) - self.layer4 = self._make_layer(block, 512, layers[3], stride=2, - dilate=replace_stride_with_dilation[2]) - self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) - - if self.use_last_fc: - self.fc = nn.Linear(512 * block.expansion, num_classes) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - - - - # Zero-initialize the last BN in each residual branch, - # so that the residual branch starts with zeros, and each residual block behaves like an identity. - # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677 - if zero_init_residual: - for m in self.modules(): - if isinstance(m, Bottleneck): - nn.init.constant_(m.bn3.weight, 0) # type: ignore[arg-type] - elif isinstance(m, BasicBlock): - nn.init.constant_(m.bn2.weight, 0) # type: ignore[arg-type] - - def _make_layer(self, block: Type[Union[BasicBlock, Bottleneck]], planes: int, blocks: int, - stride: int = 1, dilate: bool = False) -> nn.Sequential: - norm_layer = self._norm_layer - downsample = None - previous_dilation = self.dilation - if dilate: - self.dilation *= stride - stride = 1 - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - conv1x1(self.inplanes, planes * block.expansion, stride), - norm_layer(planes * block.expansion), - ) - - layers = [] - layers.append(block(self.inplanes, planes, stride, downsample, self.groups, - self.base_width, previous_dilation, norm_layer)) - self.inplanes = planes * block.expansion - for _ in range(1, blocks): - layers.append(block(self.inplanes, planes, groups=self.groups, - base_width=self.base_width, dilation=self.dilation, - norm_layer=norm_layer)) - - return nn.Sequential(*layers) - - def _forward_impl(self, x: Tensor) -> Tensor: - # See note [TorchScript super()] - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - - x = self.avgpool(x) - if self.use_last_fc: - x = torch.flatten(x, 1) - x = self.fc(x) - return x - - def forward(self, x: Tensor) -> Tensor: - return self._forward_impl(x) - - -def _resnet( - arch: str, - block: Type[Union[BasicBlock, Bottleneck]], - layers: List[int], - pretrained: bool, - progress: bool, - **kwargs: Any -) -> ResNet: - model = ResNet(block, layers, **kwargs) - if pretrained: - state_dict = load_state_dict_from_url(model_urls[arch], - progress=progress) - model.load_state_dict(state_dict) - return model - - -def resnet18(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""ResNet-18 model from - `"Deep Residual Learning for Image Recognition" `_. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _resnet('resnet18', BasicBlock, [2, 2, 2, 2], pretrained, progress, - **kwargs) - - -def resnet34(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""ResNet-34 model from - `"Deep Residual Learning for Image Recognition" `_. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _resnet('resnet34', BasicBlock, [3, 4, 6, 3], pretrained, progress, - **kwargs) - - -def resnet50(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""ResNet-50 model from - `"Deep Residual Learning for Image Recognition" `_. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _resnet('resnet50', Bottleneck, [3, 4, 6, 3], pretrained, progress, - **kwargs) - - -def resnet101(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""ResNet-101 model from - `"Deep Residual Learning for Image Recognition" `_. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _resnet('resnet101', Bottleneck, [3, 4, 23, 3], pretrained, progress, - **kwargs) - - -def resnet152(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""ResNet-152 model from - `"Deep Residual Learning for Image Recognition" `_. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _resnet('resnet152', Bottleneck, [3, 8, 36, 3], pretrained, progress, - **kwargs) - - -def resnext50_32x4d(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""ResNeXt-50 32x4d model from - `"Aggregated Residual Transformation for Deep Neural Networks" `_. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - kwargs['groups'] = 32 - kwargs['width_per_group'] = 4 - return _resnet('resnext50_32x4d', Bottleneck, [3, 4, 6, 3], - pretrained, progress, **kwargs) - - -def resnext101_32x8d(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""ResNeXt-101 32x8d model from - `"Aggregated Residual Transformation for Deep Neural Networks" `_. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - kwargs['groups'] = 32 - kwargs['width_per_group'] = 8 - return _resnet('resnext101_32x8d', Bottleneck, [3, 4, 23, 3], - pretrained, progress, **kwargs) - - -def wide_resnet50_2(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""Wide ResNet-50-2 model from - `"Wide Residual Networks" `_. - - The model is the same as ResNet except for the bottleneck number of channels - which is twice larger in every block. The number of channels in outer 1x1 - convolutions is the same, e.g. last block in ResNet-50 has 2048-512-2048 - channels, and in Wide ResNet-50-2 has 2048-1024-2048. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - kwargs['width_per_group'] = 64 * 2 - return _resnet('wide_resnet50_2', Bottleneck, [3, 4, 6, 3], - pretrained, progress, **kwargs) - - -def wide_resnet101_2(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""Wide ResNet-101-2 model from - `"Wide Residual Networks" `_. - - The model is the same as ResNet except for the bottleneck number of channels - which is twice larger in every block. The number of channels in outer 1x1 - convolutions is the same, e.g. last block in ResNet-50 has 2048-512-2048 - channels, and in Wide ResNet-50-2 has 2048-1024-2048. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - kwargs['width_per_group'] = 64 * 2 - return _resnet('wide_resnet101_2', Bottleneck, [3, 4, 23, 3], - pretrained, progress, **kwargs) - - -func_dict = { - 'resnet18': (resnet18, 512), - 'resnet50': (resnet50, 2048) -} diff --git a/spaces/damo-vilab/MS-Vid2Vid-XL-demo/app.py b/spaces/damo-vilab/MS-Vid2Vid-XL-demo/app.py deleted file mode 100644 index 22899779df5bc65d95fc756dd8efeaaaaadc80ff..0000000000000000000000000000000000000000 --- a/spaces/damo-vilab/MS-Vid2Vid-XL-demo/app.py +++ /dev/null @@ -1,72 +0,0 @@ -#!/usr/bin/env python - -import os -import pathlib -import tempfile - -import cv2 -import gradio as gr -import torch -from huggingface_hub import snapshot_download -from modelscope.outputs import OutputKeys -from modelscope.pipelines import pipeline - -DESCRIPTION = "# ModelScope-Vid2Vid-XL" -if not torch.cuda.is_available(): - DESCRIPTION += "\n

    Running on CPU 🥶 This demo does not work on CPU.

    " - -if torch.cuda.is_available(): - model_cache_dir = os.getenv("MODEL_CACHE_DIR", "./models") - model_dir = pathlib.Path(model_cache_dir) / "MS-Vid2Vid-XL" - snapshot_download(repo_id="damo-vilab/MS-Vid2Vid-XL", repo_type="model", local_dir=model_dir) - pipe = pipeline(task="video-to-video", model=model_dir.as_posix(), model_revision="v1.1.0", device="cuda:0") - - -def check_input_video(video_path: str) -> None: - cap = cv2.VideoCapture(video_path) - n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) - width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) - height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) - cap.release() - if n_frames != 32 or width != 448 or height != 256: - raise gr.Error( - f"Input video must be 32 frames of size 448x256. Your video is {n_frames} frames of size {width}x{height}." - ) - - -def video_to_video(video_path: str, text: str) -> str: - check_input_video(video_path) - p_input = {"video_path": video_path, "text": text} - output_file = tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) - pipe(p_input, output_video=output_file.name)[OutputKeys.OUTPUT_VIDEO] - return output_file.name - - -with gr.Blocks(css="style.css") as demo: - gr.Markdown(DESCRIPTION) - gr.DuplicateButton( - value="Duplicate Space for private use", - elem_id="duplicate-button", - visible=os.getenv("SHOW_DUPLICATE_BUTTON") == "1", - ) - with gr.Group(): - input_video = gr.Video(label="Input video") - text_description = gr.Textbox(label="Text description") - run_button = gr.Button() - output_video = gr.Video(label="Output video") - - gr.on( - triggers=[text_description.submit, run_button.click], - fn=check_input_video, - inputs=input_video, - queue=False, - api_name=False, - ).success( - fn=video_to_video, - inputs=[input_video, text_description], - outputs=output_video, - api_name="run", - ) - -if __name__ == "__main__": - demo.queue(max_size=10).launch() diff --git a/spaces/dandan4272/hand_gesture_rec/model/stgcn/Utils.py b/spaces/dandan4272/hand_gesture_rec/model/stgcn/Utils.py deleted file mode 100644 index 582504c67b985a265b4e5c942a01e482869ab488..0000000000000000000000000000000000000000 --- a/spaces/dandan4272/hand_gesture_rec/model/stgcn/Utils.py +++ /dev/null @@ -1,199 +0,0 @@ -### Reference from: https://github.com/yysijie/st-gcn/blob/master/net/utils/graph.py - -import os -import torch -import numpy as np - - -class Graph: - """The Graph to model the skeletons extracted by the Alpha-Pose. - Args: - - strategy: (string) must be one of the follow candidates - - uniform: Uniform Labeling, - - distance: Distance Partitioning, - - spatial: Spatial Configuration, - For more information, please refer to the section 'Partition Strategies' - in our paper (https://arxiv.org/abs/1801.07455). - - layout: (string) must be one of the follow candidates - - coco_cut: Is COCO format but cut 4 joints (L-R ears, L-R eyes) out. - - max_hop: (int) the maximal distance between two connected nodes. - - dilation: (int) controls the spacing between the kernel points. - """ - def __init__(self, - layout='coco_cut', - # strategy='uniform', - strategy='spatial', - - max_hop=1, - dilation=1): - self.max_hop = max_hop - self.dilation = dilation - - self.get_edge(layout) - self.hop_dis = get_hop_distance(self.num_node, self.edge, max_hop) - self.get_adjacency(strategy) - - # [(4, 3), (3, 2), (7, 6), (6, 5), (13, 12), (12, - # 11), - # (10, 9), (9, 8), (11, 5), (8, 2), (5, 1), (2, 1), - # (0, 1), (15, 0), (14, 0), (17, 15), (16, 14)] - - # def get_edge(self, layout): - # if layout == 'coco_cut': - # self.num_node = 14 - # self_link = [(i, i) for i in range(self.num_node)] - # neighbor_link = [(6, 4), (4, 2), (2, 13), (13, 1), (5, 3), (3, 1), (12, 10), - # (10, 8), (8, 2), (11, 9), (9, 7), (7, 1), (13, 0)] - # self.edge = self_link + neighbor_link - # self.center = 13 - # else: - # raise ValueError('This layout is not supported!') - - def get_edge(self, layout): - if layout == 'coco_cut': - self.num_node = 21 - self_link = [(i, i) for i in range(self.num_node)] - # neighbor_link = [(4, 3), (3, 2), (2, 1), (8, 7), (7, 6), (6, 5), (12, 11), - # (11, 10), (10, 9), (16, 15), (15, 14), (14, 13), (20, 19),(19,18), - # (18,17),(4,2),(8,5),(12,9),(16,13),(20,17)] - # neighbor_link = [(4, 3), (3, 2), (5,2),(2, 1), (8, 7), (7, 6), (6, 5), (12, 11), - # (11, 10), (10, 9), (16, 15), (15, 14), (14, 13), (20, 19),(19,18), - # (18,17),(5,9),(9,13),(13,17),(1,0),(17,0)] - - neighbor_link_1 = [(0,1), (1, 2),(2, 3), (3, 4), (0, 5), (5, 6), (6, 7), - (7, 8), (0, 9), (9,10), (10,11), (11,12), (0,13),(13,14), - (14,15),(15,16),(0,17),(17,18),(18,19),(19,20)] - neighbor_link = [(4, 3), (3, 2),(2, 1), (8, 7), (7, 6), (6, 5), (12, 11), - (11, 10), (10, 9), (16, 15), (15, 14), (14, 13), (20, 19),(19,18), - (18,17),(2,5),(5,9),(9,13),(13,17),(1,0),(17,0),(4,0),(8,0),(12,0),(16,0),(20,0)] - - # neighbor_link=[(3, 4), (0, 5), (17, 18), (0, 17), (13, 14), (13, 17), (18, 19), (5, 6), (5, 9), (14, 15), (0, 1), - # (9, 10), (1, 2), (9, 13), (10, 11), (19, 20), (6, 7), (15, 16), (2, 3), (11, 12), (7, 8)] - - # neighbor_link = [(4, 3), (3, 2),(2, 1), (8, 7), (7, 6), (6, 5), (12, 11), - # (11, 10), (10, 9), (16, 15), (15, 14), (14, 13), (20, 19),(19,18), - # (18,17),(2,5),(5,9),(9,13),(13,17),(1,0),(17,0)] - - # neighbor_link = [(4, 3), (3, 2), (2, 1), (8, 7), (7, 6), (6, 5), (12, 11), - # (11, 10), (10, 9), (16, 15), (15, 14), (14, 13), (20, 19),(19,18), - # (18,17),(5,9),(9,13),(13,17),(1,0),(5,0),(17,0)] - - # neighbor_link = [(4, 3), (3, 2), (7, 6), (6, 5), (13, 12), (12, - # 11), - # (10, 9), (9, 8), (11, 5), (8, 2), (5, 1), (2, 1), - # (0, 1), (15, 0), (14, 0), (17, 15), (16, 14)] - - self.edge = self_link + neighbor_link_1 - self.center = 0 - else: - raise ValueError('This layout is not supported!') - - def get_adjacency(self, strategy): - valid_hop = range(0, self.max_hop + 1, self.dilation) - adjacency = np.zeros((self.num_node, self.num_node)) - for hop in valid_hop: - adjacency[self.hop_dis == hop] = 1 - normalize_adjacency = normalize_digraph(adjacency) - - if strategy == 'uniform': - A = np.zeros((1, self.num_node, self.num_node)) - A[0] = normalize_adjacency - self.A = A - elif strategy == 'distance': - A = np.zeros((len(valid_hop), self.num_node, self.num_node)) - for i, hop in enumerate(valid_hop): - A[i][self.hop_dis == hop] = normalize_adjacency[self.hop_dis == - hop] - self.A = A - elif strategy == 'spatial': - A = [] - for hop in valid_hop: - a_root = np.zeros((self.num_node, self.num_node)) - a_close = np.zeros((self.num_node, self.num_node)) - a_further = np.zeros((self.num_node, self.num_node)) - for i in range(self.num_node): - for j in range(self.num_node): - if self.hop_dis[j, i] == hop: - if self.hop_dis[j, self.center] == self.hop_dis[i, self.center]: - a_root[j, i] = normalize_adjacency[j, i] - elif self.hop_dis[j, self.center] > self.hop_dis[i, self.center]: - a_close[j, i] = normalize_adjacency[j, i] - else: - a_further[j, i] = normalize_adjacency[j, i] - if hop == 0: - A.append(a_root) - else: - A.append(a_root + a_close) - A.append(a_further) - A = np.stack(A) - self.A = A - #self.A = np.swapaxes(np.swapaxes(A, 0, 1), 1, 2) - else: - raise ValueError("This strategy is not supported!") - - -def get_hop_distance(num_node, edge, max_hop=1): - A = np.zeros((num_node, num_node)) - for i, j in edge: - A[j, i] = 1 - A[i, j] = 1 - - # compute hop steps - hop_dis = np.zeros((num_node, num_node)) + np.inf - transfer_mat = [np.linalg.matrix_power(A, d) for d in range(max_hop + 1)] - arrive_mat = (np.stack(transfer_mat) > 0) - for d in range(max_hop, -1, -1): - hop_dis[arrive_mat[d]] = d - return hop_dis - - -def normalize_digraph(A): - Dl = np.sum(A, 0) - num_node = A.shape[0] - Dn = np.zeros((num_node, num_node)) - for i in range(num_node): - if Dl[i] > 0: - Dn[i, i] = Dl[i]**(-1) - AD = np.dot(A, Dn) - return AD - - -def normalize_undigraph(A): - Dl = np.sum(A, 0) - num_node = A.shape[0] - Dn = np.zeros((num_node, num_node)) - for i in range(num_node): - if Dl[i] > 0: - Dn[i, i] = Dl[i]**(-0.5) - DAD = np.dot(np.dot(Dn, A), Dn) - return DAD - - -def normalize_points_with_size(xy, width, height, flip=False): - """Normalize scale points in image with size of image to (0-1). - xy : (frames, parts, xy) or (parts, xy) - """ - if xy.ndim == 2: - xy = np.expand_dims(xy, 0) - # print(xy[:, :, 1].min(), xy[:, :, 1].max()) - xy[:, :, 0] /= width - xy[:, :, 1] /= height - print('preprocess') - # print(xy[:, :, 0].min(), xy[:, :, 0].max()) - # print(xy[:, :, 1].min(), xy[:, :, 1].max()) - if flip: - xy[:, :, 0] = 1 - xy[:, :, 0] - return xy - - -def scale_pose(xy): - """Normalize pose points by scale with max/min value of each pose. - xy : (frames, parts, xy) or (parts, xy) - """ - if xy.ndim == 2: - xy = np.expand_dims(xy, 0) - xy_min = np.nanmin(xy, axis=1) - xy_max = np.nanmax(xy, axis=1) - for i in range(xy.shape[0]): - xy[i] = ((xy[i] - xy_min[i]) / (xy_max[i] - xy_min[i])) * 2 - 1 - return xy.squeeze() diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-aef15a25.js b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-aef15a25.js deleted file mode 100644 index 9399883f87ad7d84309bdca42939b4c4be3fa491..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-aef15a25.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as P,e as Q,s as U,I as V,F as j,o as J,m as O,g as S,G as E,h as z,J as ue,ay as _e,w as B,u as C,k as D,H as I,C as se,am as fe,t as W,x as X,az as oe,ap as Y,Y as M,j as H,p as A,B as ce,V as Z,ae as y,N,O as T,Q as p,R as x,T as q,E as R,P as he,r as me,v as be}from"./index-9e76ffee.js";import{B as $}from"./Button-30a08c0b.js";import{B as de}from"./BlockTitle-af232cbc.js";import"./Info-77722665.js";function K(l,e,i){const n=l.slice();return n[13]=e[i],n[15]=i,n}function ge(l){let e;return{c(){e=W(l[3])},m(i,n){z(i,e,n)},p(i,n){n&8&&X(e,i[3])},d(i){i&&D(e)}}}function L(l,e){let i,n,s,o,h=!1,b,a,t=e[13]+"",f,d,u,m,r,g;function v(){return e[11](e[13],e[15])}return m=oe(e[10][0]),{key:l,first:null,c(){i=O("label"),n=O("input"),b=J(),a=O("span"),f=W(t),d=J(),n.disabled=e[2],S(n,"type","radio"),S(n,"name",s="radio-"+e[6]),n.__value=o=e[13],Y(n,n.__value),S(n,"class","svelte-1p9xokt"),S(a,"class","ml-2 svelte-1p9xokt"),S(i,"data-testid",u=`${e[13]}-radio-label`),S(i,"class","svelte-1p9xokt"),M(i,"disabled",e[2]),M(i,"selected",e[0]===e[13]),m.p(n),this.first=i},m(k,w){z(k,i,w),H(i,n),n.checked=n.__value===e[0],H(i,b),H(i,a),H(a,f),H(i,d),r||(g=[A(n,"change",e[9]),A(n,"input",v)],r=!0)},p(k,w){e=k,w&4&&(n.disabled=e[2]),w&64&&s!==(s="radio-"+e[6])&&S(n,"name",s),w&2&&o!==(o=e[13])&&(n.__value=o,Y(n,n.__value),h=!0),(h||w&3)&&(n.checked=n.__value===e[0]),w&2&&t!==(t=e[13]+"")&&X(f,t),w&2&&u!==(u=`${e[13]}-radio-label`)&&S(i,"data-testid",u),w&4&&M(i,"disabled",e[2]),w&3&&M(i,"selected",e[0]===e[13])},d(k){k&&D(i),m.r(),r=!1,ce(g)}}}function ve(l){let e,i,n,s=[],o=new Map,h;e=new de({props:{show_label:l[5],info:l[4],$$slots:{default:[ge]},$$scope:{ctx:l}}});let b=V(l[1]);const a=t=>t[15];for(let t=0;t{i(8,s=!1)});const m=[[]];function r(){n=this.__value,i(0,n)}const g=(v,k)=>d("select",{value:v,index:k});return l.$$set=v=>{"value"in v&&i(0,n=v.value),"value_is_output"in v&&i(8,s=v.value_is_output),"choices"in v&&i(1,o=v.choices),"disabled"in v&&i(2,h=v.disabled),"label"in v&&i(3,b=v.label),"info"in v&&i(4,a=v.info),"show_label"in v&&i(5,t=v.show_label),"elem_id"in v&&i(6,f=v.elem_id)},l.$$.update=()=>{l.$$.dirty&1&&u()},[n,o,h,b,a,t,f,d,s,r,m,g]}class ee extends P{constructor(e){super(),Q(this,e,re,ve,U,{value:0,value_is_output:8,choices:1,disabled:2,label:3,info:4,show_label:5,elem_id:6})}}function we(l){let e,i,n,s,o,h;const b=[l[12]];let a={};for(let u=0;uT(n,"value",t)),N.push(()=>T(n,"value_is_output",f)),n.$on("change",l[15]),n.$on("input",l[16]),n.$on("select",l[17]),{c(){j(e.$$.fragment),i=J(),j(n.$$.fragment)},m(u,m){E(e,u,m),z(u,i,m),E(n,u,m),h=!0},p(u,m){const r=m&4096?p(b,[x(u[12])]):{};e.$set(r);const g={};m&4&&(g.label=u[2]),m&8&&(g.info=u[3]),m&16&&(g.elem_id=u[4]),m&256&&(g.show_label=u[8]),m&128&&(g.choices=u[7]),!s&&m&1&&(s=!0,g.value=u[0],q(()=>s=!1)),!o&&m&2&&(o=!0,g.value_is_output=u[1],q(()=>o=!1)),n.$set(g)},i(u){h||(B(e.$$.fragment,u),B(n.$$.fragment,u),h=!0)},o(u){C(e.$$.fragment,u),C(n.$$.fragment,u),h=!1},d(u){u&&D(i),I(e,u),I(n,u)}}}function ke(l){let e,i;return e=new $({props:{visible:l[6],type:"fieldset",elem_id:l[4],elem_classes:l[5],container:l[9],scale:l[10],min_width:l[11],$$slots:{default:[we]},$$scope:{ctx:l}}}),{c(){j(e.$$.fragment)},m(n,s){E(e,n,s),i=!0},p(n,[s]){const o={};s&64&&(o.visible=n[6]),s&16&&(o.elem_id=n[4]),s&32&&(o.elem_classes=n[5]),s&512&&(o.container=n[9]),s&1024&&(o.scale=n[10]),s&2048&&(o.min_width=n[11]),s&266655&&(o.$$scope={dirty:s,ctx:n}),e.$set(o)},i(n){i||(B(e.$$.fragment,n),i=!0)},o(n){C(e.$$.fragment,n),i=!1},d(n){I(e,n)}}}function Re(l,e,i){let{label:n="Radio"}=e,{info:s=void 0}=e,{elem_id:o=""}=e,{elem_classes:h=[]}=e,{visible:b=!0}=e,{value:a=null}=e,{value_is_output:t=!1}=e,{choices:f=[]}=e,{show_label:d}=e,{container:u=!1}=e,{scale:m=null}=e,{min_width:r=void 0}=e,{loading_status:g}=e;function v(_){a=_,i(0,a)}function k(_){t=_,i(1,t)}function w(_){R.call(this,l,_)}function F(_){R.call(this,l,_)}function G(_){R.call(this,l,_)}return l.$$set=_=>{"label"in _&&i(2,n=_.label),"info"in _&&i(3,s=_.info),"elem_id"in _&&i(4,o=_.elem_id),"elem_classes"in _&&i(5,h=_.elem_classes),"visible"in _&&i(6,b=_.visible),"value"in _&&i(0,a=_.value),"value_is_output"in _&&i(1,t=_.value_is_output),"choices"in _&&i(7,f=_.choices),"show_label"in _&&i(8,d=_.show_label),"container"in _&&i(9,u=_.container),"scale"in _&&i(10,m=_.scale),"min_width"in _&&i(11,r=_.min_width),"loading_status"in _&&i(12,g=_.loading_status)},[a,t,n,s,o,h,b,f,d,u,m,r,g,v,k,w,F,G]}let Be=class extends P{constructor(e){super(),Q(this,e,Re,ke,U,{label:2,info:3,elem_id:4,elem_classes:5,visible:6,value:0,value_is_output:1,choices:7,show_label:8,container:9,scale:10,min_width:11,loading_status:12})}};function Ce(l){let e,i,n,s,o,h;const b=[l[12]];let a={};for(let u=0;uT(n,"value",t)),N.push(()=>T(n,"value_is_output",f)),n.$on("change",l[15]),n.$on("input",l[16]),n.$on("select",l[17]),{c(){j(e.$$.fragment),i=J(),j(n.$$.fragment)},m(u,m){E(e,u,m),z(u,i,m),E(n,u,m),h=!0},p(u,m){const r=m&4096?p(b,[x(u[12])]):{};e.$set(r);const g={};m&4&&(g.label=u[2]),m&8&&(g.info=u[3]),m&16&&(g.elem_id=u[4]),m&256&&(g.show_label=u[8]),m&128&&(g.choices=u[7]),!s&&m&1&&(s=!0,g.value=u[0],q(()=>s=!1)),!o&&m&2&&(o=!0,g.value_is_output=u[1],q(()=>o=!1)),n.$set(g)},i(u){h||(B(e.$$.fragment,u),B(n.$$.fragment,u),h=!0)},o(u){C(e.$$.fragment,u),C(n.$$.fragment,u),h=!1},d(u){u&&D(i),I(e,u),I(n,u)}}}function Se(l){let e,i;return e=new $({props:{visible:l[6],type:"fieldset",elem_id:l[4],elem_classes:l[5],container:l[9],scale:l[10],min_width:l[11],$$slots:{default:[Ce]},$$scope:{ctx:l}}}),{c(){j(e.$$.fragment)},m(n,s){E(e,n,s),i=!0},p(n,[s]){const o={};s&64&&(o.visible=n[6]),s&16&&(o.elem_id=n[4]),s&32&&(o.elem_classes=n[5]),s&512&&(o.container=n[9]),s&1024&&(o.scale=n[10]),s&2048&&(o.min_width=n[11]),s&266655&&(o.$$scope={dirty:s,ctx:n}),e.$set(o)},i(n){i||(B(e.$$.fragment,n),i=!0)},o(n){C(e.$$.fragment,n),i=!1},d(n){I(e,n)}}}function je(l,e,i){let{label:n="Radio"}=e,{info:s=void 0}=e,{elem_id:o=""}=e,{elem_classes:h=[]}=e,{visible:b=!0}=e,{value:a=null}=e,{value_is_output:t=!1}=e,{choices:f=[]}=e,{show_label:d}=e,{container:u=!1}=e,{scale:m=null}=e,{min_width:r=void 0}=e,{loading_status:g}=e;function v(_){a=_,i(0,a)}function k(_){t=_,i(1,t)}function w(_){R.call(this,l,_)}function F(_){R.call(this,l,_)}function G(_){R.call(this,l,_)}return l.$$set=_=>{"label"in _&&i(2,n=_.label),"info"in _&&i(3,s=_.info),"elem_id"in _&&i(4,o=_.elem_id),"elem_classes"in _&&i(5,h=_.elem_classes),"visible"in _&&i(6,b=_.visible),"value"in _&&i(0,a=_.value),"value_is_output"in _&&i(1,t=_.value_is_output),"choices"in _&&i(7,f=_.choices),"show_label"in _&&i(8,d=_.show_label),"container"in _&&i(9,u=_.container),"scale"in _&&i(10,m=_.scale),"min_width"in _&&i(11,r=_.min_width),"loading_status"in _&&i(12,g=_.loading_status)},[a,t,n,s,o,h,b,f,d,u,m,r,g,v,k,w,F,G]}class Ee extends P{constructor(e){super(),Q(this,e,je,Se,U,{label:2,info:3,elem_id:4,elem_classes:5,visible:6,value:0,value_is_output:1,choices:7,show_label:8,container:9,scale:10,min_width:11,loading_status:12})}}function Ie(l){let e,i,n,s;function o(a){l[19](a)}function h(a){l[20](a)}let b={label:l[2],info:l[3],elem_id:l[4],elem_classes:l[5],visible:l[6],choices:l[7],show_label:l[9],container:l[10],scale:l[11],min_width:l[12],loading_status:l[13]};return l[0]!==void 0&&(b.value=l[0]),l[1]!==void 0&&(b.value_is_output=l[1]),e=new Ee({props:b}),N.push(()=>T(e,"value",o)),N.push(()=>T(e,"value_is_output",h)),e.$on("change",l[21]),e.$on("input",l[22]),e.$on("select",l[23]),{c(){j(e.$$.fragment)},m(a,t){E(e,a,t),s=!0},p(a,t){const f={};t&4&&(f.label=a[2]),t&8&&(f.info=a[3]),t&16&&(f.elem_id=a[4]),t&32&&(f.elem_classes=a[5]),t&64&&(f.visible=a[6]),t&128&&(f.choices=a[7]),t&512&&(f.show_label=a[9]),t&1024&&(f.container=a[10]),t&2048&&(f.scale=a[11]),t&4096&&(f.min_width=a[12]),t&8192&&(f.loading_status=a[13]),!i&&t&1&&(i=!0,f.value=a[0],q(()=>i=!1)),!n&&t&2&&(n=!0,f.value_is_output=a[1],q(()=>n=!1)),e.$set(f)},i(a){s||(B(e.$$.fragment,a),s=!0)},o(a){C(e.$$.fragment,a),s=!1},d(a){I(e,a)}}}function Ne(l){let e,i,n,s;function o(a){l[14](a)}function h(a){l[15](a)}let b={label:l[2],info:l[3],elem_id:l[4],elem_classes:l[5],visible:l[6],choices:l[7],show_label:l[9],container:l[10],scale:l[11],min_width:l[12],loading_status:l[13]};return l[0]!==void 0&&(b.value=l[0]),l[1]!==void 0&&(b.value_is_output=l[1]),e=new Be({props:b}),N.push(()=>T(e,"value",o)),N.push(()=>T(e,"value_is_output",h)),e.$on("change",l[16]),e.$on("input",l[17]),e.$on("select",l[18]),{c(){j(e.$$.fragment)},m(a,t){E(e,a,t),s=!0},p(a,t){const f={};t&4&&(f.label=a[2]),t&8&&(f.info=a[3]),t&16&&(f.elem_id=a[4]),t&32&&(f.elem_classes=a[5]),t&64&&(f.visible=a[6]),t&128&&(f.choices=a[7]),t&512&&(f.show_label=a[9]),t&1024&&(f.container=a[10]),t&2048&&(f.scale=a[11]),t&4096&&(f.min_width=a[12]),t&8192&&(f.loading_status=a[13]),!i&&t&1&&(i=!0,f.value=a[0],q(()=>i=!1)),!n&&t&2&&(n=!0,f.value_is_output=a[1],q(()=>n=!1)),e.$set(f)},i(a){s||(B(e.$$.fragment,a),s=!0)},o(a){C(e.$$.fragment,a),s=!1},d(a){I(e,a)}}}function Te(l){let e,i,n,s;const o=[Ne,Ie],h=[];function b(a,t){return a[8]==="static"?0:1}return e=b(l),i=h[e]=o[e](l),{c(){i.c(),n=he()},m(a,t){h[e].m(a,t),z(a,n,t),s=!0},p(a,[t]){let f=e;e=b(a),e===f?h[e].p(a,t):(me(),C(h[f],1,1,()=>{h[f]=null}),be(),i=h[e],i?i.p(a,t):(i=h[e]=o[e](a),i.c()),B(i,1),i.m(n.parentNode,n))},i(a){s||(B(i),s=!0)},o(a){C(i),s=!1},d(a){a&&D(n),h[e].d(a)}}}function qe(l,e,i){let{label:n="Radio"}=e,{info:s=void 0}=e,{elem_id:o=""}=e,{elem_classes:h=[]}=e,{visible:b=!0}=e,{value:a=null}=e,{value_is_output:t=!1}=e,{choices:f=[]}=e,{mode:d}=e,{show_label:u}=e,{container:m=!1}=e,{scale:r=null}=e,{min_width:g=void 0}=e,{loading_status:v}=e;function k(c){a=c,i(0,a)}function w(c){t=c,i(1,t)}function F(c){R.call(this,l,c)}function G(c){R.call(this,l,c)}function _(c){R.call(this,l,c)}function le(c){a=c,i(0,a)}function ie(c){t=c,i(1,t)}function ne(c){R.call(this,l,c)}function ae(c){R.call(this,l,c)}function te(c){R.call(this,l,c)}return l.$$set=c=>{"label"in c&&i(2,n=c.label),"info"in c&&i(3,s=c.info),"elem_id"in c&&i(4,o=c.elem_id),"elem_classes"in c&&i(5,h=c.elem_classes),"visible"in c&&i(6,b=c.visible),"value"in c&&i(0,a=c.value),"value_is_output"in c&&i(1,t=c.value_is_output),"choices"in c&&i(7,f=c.choices),"mode"in c&&i(8,d=c.mode),"show_label"in c&&i(9,u=c.show_label),"container"in c&&i(10,m=c.container),"scale"in c&&i(11,r=c.scale),"min_width"in c&&i(12,g=c.min_width),"loading_status"in c&&i(13,v=c.loading_status)},[a,t,n,s,o,h,b,f,d,u,m,r,g,v,k,w,F,G,_,le,ie,ne,ae,te]}class ze extends P{constructor(e){super(),Q(this,e,qe,Te,U,{label:2,info:3,elem_id:4,elem_classes:5,visible:6,value:0,value_is_output:1,choices:7,mode:8,show_label:9,container:10,scale:11,min_width:12,loading_status:13})}}const Me=ze,Oe=["static","dynamic"];export{Me as Component,Oe as modes}; -//# sourceMappingURL=index-aef15a25.js.map diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/models/vq_model.py b/spaces/declare-lab/tango/diffusers/src/diffusers/models/vq_model.py deleted file mode 100644 index 65f734dccb2dd48174a48134294b597a2c0b8ea4..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/models/vq_model.py +++ /dev/null @@ -1,156 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import torch -import torch.nn as nn - -from ..configuration_utils import ConfigMixin, register_to_config -from ..utils import BaseOutput -from .modeling_utils import ModelMixin -from .vae import Decoder, DecoderOutput, Encoder, VectorQuantizer - - -@dataclass -class VQEncoderOutput(BaseOutput): - """ - Output of VQModel encoding method. - - Args: - latents (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Encoded output sample of the model. Output of the last layer of the model. - """ - - latents: torch.FloatTensor - - -class VQModel(ModelMixin, ConfigMixin): - r"""VQ-VAE model from the paper Neural Discrete Representation Learning by Aaron van den Oord, Oriol Vinyals and Koray - Kavukcuoglu. - - This model inherits from [`ModelMixin`]. Check the superclass documentation for the generic methods the library - implements for all the model (such as downloading or saving, etc.) - - Parameters: - in_channels (int, *optional*, defaults to 3): Number of channels in the input image. - out_channels (int, *optional*, defaults to 3): Number of channels in the output. - down_block_types (`Tuple[str]`, *optional*, defaults to : - obj:`("DownEncoderBlock2D",)`): Tuple of downsample block types. - up_block_types (`Tuple[str]`, *optional*, defaults to : - obj:`("UpDecoderBlock2D",)`): Tuple of upsample block types. - block_out_channels (`Tuple[int]`, *optional*, defaults to : - obj:`(64,)`): Tuple of block output channels. - act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use. - latent_channels (`int`, *optional*, defaults to `3`): Number of channels in the latent space. - sample_size (`int`, *optional*, defaults to `32`): TODO - num_vq_embeddings (`int`, *optional*, defaults to `256`): Number of codebook vectors in the VQ-VAE. - vq_embed_dim (`int`, *optional*): Hidden dim of codebook vectors in the VQ-VAE. - scaling_factor (`float`, *optional*, defaults to `0.18215`): - The component-wise standard deviation of the trained latent space computed using the first batch of the - training set. This is used to scale the latent space to have unit variance when training the diffusion - model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the - diffusion model. When decoding, the latents are scaled back to the original scale with the formula: `z = 1 - / scaling_factor * z`. For more details, refer to sections 4.3.2 and D.1 of the [High-Resolution Image - Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) paper. - """ - - @register_to_config - def __init__( - self, - in_channels: int = 3, - out_channels: int = 3, - down_block_types: Tuple[str] = ("DownEncoderBlock2D",), - up_block_types: Tuple[str] = ("UpDecoderBlock2D",), - block_out_channels: Tuple[int] = (64,), - layers_per_block: int = 1, - act_fn: str = "silu", - latent_channels: int = 3, - sample_size: int = 32, - num_vq_embeddings: int = 256, - norm_num_groups: int = 32, - vq_embed_dim: Optional[int] = None, - scaling_factor: float = 0.18215, - ): - super().__init__() - - # pass init params to Encoder - self.encoder = Encoder( - in_channels=in_channels, - out_channels=latent_channels, - down_block_types=down_block_types, - block_out_channels=block_out_channels, - layers_per_block=layers_per_block, - act_fn=act_fn, - norm_num_groups=norm_num_groups, - double_z=False, - ) - - vq_embed_dim = vq_embed_dim if vq_embed_dim is not None else latent_channels - - self.quant_conv = nn.Conv2d(latent_channels, vq_embed_dim, 1) - self.quantize = VectorQuantizer(num_vq_embeddings, vq_embed_dim, beta=0.25, remap=None, sane_index_shape=False) - self.post_quant_conv = nn.Conv2d(vq_embed_dim, latent_channels, 1) - - # pass init params to Decoder - self.decoder = Decoder( - in_channels=latent_channels, - out_channels=out_channels, - up_block_types=up_block_types, - block_out_channels=block_out_channels, - layers_per_block=layers_per_block, - act_fn=act_fn, - norm_num_groups=norm_num_groups, - ) - - def encode(self, x: torch.FloatTensor, return_dict: bool = True) -> VQEncoderOutput: - h = self.encoder(x) - h = self.quant_conv(h) - - if not return_dict: - return (h,) - - return VQEncoderOutput(latents=h) - - def decode( - self, h: torch.FloatTensor, force_not_quantize: bool = False, return_dict: bool = True - ) -> Union[DecoderOutput, torch.FloatTensor]: - # also go through quantization layer - if not force_not_quantize: - quant, emb_loss, info = self.quantize(h) - else: - quant = h - quant = self.post_quant_conv(quant) - dec = self.decoder(quant) - - if not return_dict: - return (dec,) - - return DecoderOutput(sample=dec) - - def forward(self, sample: torch.FloatTensor, return_dict: bool = True) -> Union[DecoderOutput, torch.FloatTensor]: - r""" - Args: - sample (`torch.FloatTensor`): Input sample. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`DecoderOutput`] instead of a plain tuple. - """ - x = sample - h = self.encode(x).latents - dec = self.decode(h).sample - - if not return_dict: - return (dec,) - - return DecoderOutput(sample=dec) diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_upscale.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_upscale.py deleted file mode 100644 index b91262551b0f2fefad50d85782cea5e2dda884ac..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_upscale.py +++ /dev/null @@ -1,290 +0,0 @@ -from logging import getLogger -from typing import Any, Callable, List, Optional, Union - -import numpy as np -import PIL -import torch - -from ...schedulers import DDPMScheduler -from ..onnx_utils import ORT_TO_NP_TYPE, OnnxRuntimeModel -from ..pipeline_utils import ImagePipelineOutput -from . import StableDiffusionUpscalePipeline - - -logger = getLogger(__name__) - - -NUM_LATENT_CHANNELS = 4 -NUM_UNET_INPUT_CHANNELS = 7 - -ORT_TO_PT_TYPE = { - "float16": torch.float16, - "float32": torch.float32, -} - - -def preprocess(image): - if isinstance(image, torch.Tensor): - return image - elif isinstance(image, PIL.Image.Image): - image = [image] - - if isinstance(image[0], PIL.Image.Image): - w, h = image[0].size - w, h = (x - x % 64 for x in (w, h)) # resize to integer multiple of 32 - - image = [np.array(i.resize((w, h)))[None, :] for i in image] - image = np.concatenate(image, axis=0) - image = np.array(image).astype(np.float32) / 255.0 - image = image.transpose(0, 3, 1, 2) - image = 2.0 * image - 1.0 - image = torch.from_numpy(image) - elif isinstance(image[0], torch.Tensor): - image = torch.cat(image, dim=0) - - return image - - -class OnnxStableDiffusionUpscalePipeline(StableDiffusionUpscalePipeline): - def __init__( - self, - vae: OnnxRuntimeModel, - text_encoder: OnnxRuntimeModel, - tokenizer: Any, - unet: OnnxRuntimeModel, - low_res_scheduler: DDPMScheduler, - scheduler: Any, - max_noise_level: int = 350, - ): - super().__init__(vae, text_encoder, tokenizer, unet, low_res_scheduler, scheduler, max_noise_level) - - def __call__( - self, - prompt: Union[str, List[str]], - image: Union[torch.FloatTensor, PIL.Image.Image, List[PIL.Image.Image]], - num_inference_steps: int = 75, - guidance_scale: float = 9.0, - noise_level: int = 20, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: Optional[int] = 1, - ): - # 1. Check inputs - self.check_inputs(prompt, image, noise_level, callback_steps) - - # 2. Define call parameters - batch_size = 1 if isinstance(prompt, str) else len(prompt) - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - text_embeddings = self._encode_prompt( - prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt - ) - - latents_dtype = ORT_TO_PT_TYPE[str(text_embeddings.dtype)] - - # 4. Preprocess image - image = preprocess(image) - image = image.cpu() - - # 5. set timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 5. Add noise to image - noise_level = torch.tensor([noise_level], dtype=torch.long, device=device) - noise = torch.randn(image.shape, generator=generator, device=device, dtype=latents_dtype) - image = self.low_res_scheduler.add_noise(image, noise, noise_level) - - batch_multiplier = 2 if do_classifier_free_guidance else 1 - image = np.concatenate([image] * batch_multiplier * num_images_per_prompt) - noise_level = np.concatenate([noise_level] * image.shape[0]) - - # 6. Prepare latent variables - height, width = image.shape[2:] - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - NUM_LATENT_CHANNELS, - height, - width, - latents_dtype, - device, - generator, - latents, - ) - - # 7. Check that sizes of image and latents match - num_channels_image = image.shape[1] - if NUM_LATENT_CHANNELS + num_channels_image != NUM_UNET_INPUT_CHANNELS: - raise ValueError( - "Incorrect configuration settings! The config of `pipeline.unet` expects" - f" {NUM_UNET_INPUT_CHANNELS} but received `num_channels_latents`: {NUM_LATENT_CHANNELS} +" - f" `num_channels_image`: {num_channels_image} " - f" = {NUM_LATENT_CHANNELS+num_channels_image}. Please verify the config of" - " `pipeline.unet` or your `image` input." - ) - - # 8. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - timestep_dtype = next( - (input.type for input in self.unet.model.get_inputs() if input.name == "timestep"), "tensor(float)" - ) - timestep_dtype = ORT_TO_NP_TYPE[timestep_dtype] - - # 9. Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = np.concatenate([latents] * 2) if do_classifier_free_guidance else latents - - # concat latents, mask, masked_image_latents in the channel dimension - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - latent_model_input = np.concatenate([latent_model_input, image], axis=1) - - # timestep to tensor - timestep = np.array([t], dtype=timestep_dtype) - - # predict the noise residual - noise_pred = self.unet( - sample=latent_model_input, - timestep=timestep, - encoder_hidden_states=text_embeddings, - class_labels=noise_level.astype(np.int64), - )[0] - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = np.split(noise_pred, 2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step( - torch.from_numpy(noise_pred), t, latents, **extra_step_kwargs - ).prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # 10. Post-processing - image = self.decode_latents(latents.float()) - - # 11. Convert to PIL - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image,) - - return ImagePipelineOutput(images=image) - - def decode_latents(self, latents): - latents = 1 / 0.08333 * latents - image = self.vae(latent_sample=latents)[0] - image = np.clip(image / 2 + 0.5, 0, 1) - image = image.transpose((0, 2, 3, 1)) - return image - - def _encode_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt): - batch_size = len(prompt) if isinstance(prompt, list) else 1 - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(text_input_ids, untruncated_ids): - removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - # if hasattr(text_inputs, "attention_mask"): - # attention_mask = text_inputs.attention_mask.to(device) - # else: - # attention_mask = None - - # no positional arguments to text_encoder - text_embeddings = self.text_encoder( - input_ids=text_input_ids.int().to(device), - # attention_mask=attention_mask, - ) - text_embeddings = text_embeddings[0] - - bs_embed, seq_len, _ = text_embeddings.shape - # duplicate text embeddings for each generation per prompt, using mps friendly method - text_embeddings = text_embeddings.repeat(1, num_images_per_prompt) - text_embeddings = text_embeddings.reshape(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." - ) - else: - uncond_tokens = negative_prompt - - max_length = text_input_ids.shape[-1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - - # if hasattr(uncond_input, "attention_mask"): - # attention_mask = uncond_input.attention_mask.to(device) - # else: - # attention_mask = None - - uncond_embeddings = self.text_encoder( - input_ids=uncond_input.input_ids.int().to(device), - # attention_mask=attention_mask, - ) - uncond_embeddings = uncond_embeddings[0] - - seq_len = uncond_embeddings.shape[1] - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt) - uncond_embeddings = uncond_embeddings.reshape(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = np.concatenate([uncond_embeddings, text_embeddings]) - - return text_embeddings diff --git a/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_instruction_pix2pix.py b/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_instruction_pix2pix.py deleted file mode 100644 index 25b0c6ea1432972a6303423ea8517420a6ab9499..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_instruction_pix2pix.py +++ /dev/null @@ -1,350 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import gc -import random -import unittest - -import numpy as np -import torch -from PIL import Image -from transformers import CLIPTextConfig, CLIPTextModel, CLIPTokenizer - -from diffusers import ( - AutoencoderKL, - DDIMScheduler, - EulerAncestralDiscreteScheduler, - LMSDiscreteScheduler, - PNDMScheduler, - StableDiffusionInstructPix2PixPipeline, - UNet2DConditionModel, -) -from diffusers.utils import floats_tensor, load_image, slow, torch_device -from diffusers.utils.testing_utils import require_torch_gpu - -from ...pipeline_params import TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS, TEXT_GUIDED_IMAGE_VARIATION_PARAMS -from ...test_pipelines_common import PipelineTesterMixin - - -torch.backends.cuda.matmul.allow_tf32 = False - - -class StableDiffusionInstructPix2PixPipelineFastTests(PipelineTesterMixin, unittest.TestCase): - pipeline_class = StableDiffusionInstructPix2PixPipeline - params = TEXT_GUIDED_IMAGE_VARIATION_PARAMS - {"height", "width", "cross_attention_kwargs"} - batch_params = TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS - - def get_dummy_components(self): - torch.manual_seed(0) - unet = UNet2DConditionModel( - block_out_channels=(32, 64), - layers_per_block=2, - sample_size=32, - in_channels=8, - out_channels=4, - down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), - up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), - cross_attention_dim=32, - ) - scheduler = PNDMScheduler(skip_prk_steps=True) - torch.manual_seed(0) - vae = AutoencoderKL( - block_out_channels=[32, 64], - in_channels=3, - out_channels=3, - down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], - up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], - latent_channels=4, - ) - torch.manual_seed(0) - text_encoder_config = CLIPTextConfig( - bos_token_id=0, - eos_token_id=2, - hidden_size=32, - intermediate_size=37, - layer_norm_eps=1e-05, - num_attention_heads=4, - num_hidden_layers=5, - pad_token_id=1, - vocab_size=1000, - ) - text_encoder = CLIPTextModel(text_encoder_config) - tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") - - components = { - "unet": unet, - "scheduler": scheduler, - "vae": vae, - "text_encoder": text_encoder, - "tokenizer": tokenizer, - "safety_checker": None, - "feature_extractor": None, - } - return components - - def get_dummy_inputs(self, device, seed=0): - image = floats_tensor((1, 3, 32, 32), rng=random.Random(seed)).to(device) - image = image.cpu().permute(0, 2, 3, 1)[0] - image = Image.fromarray(np.uint8(image)).convert("RGB") - if str(device).startswith("mps"): - generator = torch.manual_seed(seed) - else: - generator = torch.Generator(device=device).manual_seed(seed) - inputs = { - "prompt": "A painting of a squirrel eating a burger", - "image": image, - "generator": generator, - "num_inference_steps": 2, - "guidance_scale": 6.0, - "image_guidance_scale": 1, - "output_type": "numpy", - } - return inputs - - def test_stable_diffusion_pix2pix_default_case(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - sd_pipe = StableDiffusionInstructPix2PixPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - image = sd_pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1] - assert image.shape == (1, 32, 32, 3) - expected_slice = np.array([0.7318, 0.3723, 0.4662, 0.623, 0.5770, 0.5014, 0.4281, 0.5550, 0.4813]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - def test_stable_diffusion_pix2pix_negative_prompt(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - sd_pipe = StableDiffusionInstructPix2PixPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - negative_prompt = "french fries" - output = sd_pipe(**inputs, negative_prompt=negative_prompt) - image = output.images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 32, 32, 3) - expected_slice = np.array([0.7323, 0.3688, 0.4611, 0.6255, 0.5746, 0.5017, 0.433, 0.5553, 0.4827]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - def test_stable_diffusion_pix2pix_multiple_init_images(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - sd_pipe = StableDiffusionInstructPix2PixPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - inputs["prompt"] = [inputs["prompt"]] * 2 - - image = np.array(inputs["image"]).astype(np.float32) / 255.0 - image = torch.from_numpy(image).unsqueeze(0).to(device) - image = image.permute(0, 3, 1, 2) - inputs["image"] = image.repeat(2, 1, 1, 1) - - image = sd_pipe(**inputs).images - image_slice = image[-1, -3:, -3:, -1] - - assert image.shape == (2, 32, 32, 3) - expected_slice = np.array([0.606, 0.5712, 0.5099, 0.598, 0.5805, 0.7205, 0.6793, 0.554, 0.5607]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - def test_stable_diffusion_pix2pix_euler(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - components["scheduler"] = EulerAncestralDiscreteScheduler( - beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear" - ) - sd_pipe = StableDiffusionInstructPix2PixPipeline(**components) - sd_pipe = sd_pipe.to(device) - sd_pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - image = sd_pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1] - - slice = [round(x, 4) for x in image_slice.flatten().tolist()] - print(",".join([str(x) for x in slice])) - - assert image.shape == (1, 32, 32, 3) - expected_slice = np.array([0.726, 0.3902, 0.4868, 0.585, 0.5672, 0.511, 0.3906, 0.551, 0.4846]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - -@slow -@require_torch_gpu -class StableDiffusionInstructPix2PixPipelineSlowTests(unittest.TestCase): - def tearDown(self): - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - def get_inputs(self, seed=0): - generator = torch.manual_seed(seed) - image = load_image( - "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_pix2pix/example.jpg" - ) - inputs = { - "prompt": "turn him into a cyborg", - "image": image, - "generator": generator, - "num_inference_steps": 3, - "guidance_scale": 7.5, - "image_guidance_scale": 1.0, - "output_type": "numpy", - } - return inputs - - def test_stable_diffusion_pix2pix_default(self): - pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( - "timbrooks/instruct-pix2pix", safety_checker=None - ) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - - inputs = self.get_inputs() - image = pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1].flatten() - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.5902, 0.6015, 0.6027, 0.5983, 0.6092, 0.6061, 0.5765, 0.5785, 0.5555]) - - assert np.abs(expected_slice - image_slice).max() < 1e-3 - - def test_stable_diffusion_pix2pix_k_lms(self): - pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( - "timbrooks/instruct-pix2pix", safety_checker=None - ) - pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - - inputs = self.get_inputs() - image = pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1].flatten() - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.6578, 0.6817, 0.6972, 0.6761, 0.6856, 0.6916, 0.6428, 0.6516, 0.6301]) - - assert np.abs(expected_slice - image_slice).max() < 1e-3 - - def test_stable_diffusion_pix2pix_ddim(self): - pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( - "timbrooks/instruct-pix2pix", safety_checker=None - ) - pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - - inputs = self.get_inputs() - image = pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1].flatten() - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.3828, 0.3834, 0.3818, 0.3792, 0.3865, 0.3752, 0.3792, 0.3847, 0.3753]) - - assert np.abs(expected_slice - image_slice).max() < 1e-3 - - def test_stable_diffusion_pix2pix_intermediate_state(self): - number_of_steps = 0 - - def callback_fn(step: int, timestep: int, latents: torch.FloatTensor) -> None: - callback_fn.has_been_called = True - nonlocal number_of_steps - number_of_steps += 1 - if step == 1: - latents = latents.detach().cpu().numpy() - assert latents.shape == (1, 4, 64, 64) - latents_slice = latents[0, -3:, -3:, -1] - expected_slice = np.array([-0.2463, -0.4644, -0.9756, 1.5176, 1.4414, 0.7866, 0.9897, 0.8521, 0.7983]) - - assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2 - elif step == 2: - latents = latents.detach().cpu().numpy() - assert latents.shape == (1, 4, 64, 64) - latents_slice = latents[0, -3:, -3:, -1] - expected_slice = np.array([-0.2644, -0.4626, -0.9653, 1.5176, 1.4551, 0.7686, 0.9805, 0.8452, 0.8115]) - - assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2 - - callback_fn.has_been_called = False - - pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( - "timbrooks/instruct-pix2pix", safety_checker=None, torch_dtype=torch.float16 - ) - pipe = pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - - inputs = self.get_inputs() - pipe(**inputs, callback=callback_fn, callback_steps=1) - assert callback_fn.has_been_called - assert number_of_steps == 3 - - def test_stable_diffusion_pipeline_with_sequential_cpu_offloading(self): - torch.cuda.empty_cache() - torch.cuda.reset_max_memory_allocated() - torch.cuda.reset_peak_memory_stats() - - pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( - "timbrooks/instruct-pix2pix", safety_checker=None, torch_dtype=torch.float16 - ) - pipe = pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing(1) - pipe.enable_sequential_cpu_offload() - - inputs = self.get_inputs() - _ = pipe(**inputs) - - mem_bytes = torch.cuda.max_memory_allocated() - # make sure that less than 2.2 GB is allocated - assert mem_bytes < 2.2 * 10**9 - - def test_stable_diffusion_pix2pix_pipeline_multiple_of_8(self): - inputs = self.get_inputs() - # resize to resolution that is divisible by 8 but not 16 or 32 - inputs["image"] = inputs["image"].resize((504, 504)) - - model_id = "timbrooks/instruct-pix2pix" - pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( - model_id, - safety_checker=None, - ) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - - output = pipe(**inputs) - image = output.images[0] - - image_slice = image[255:258, 383:386, -1] - - assert image.shape == (504, 504, 3) - expected_slice = np.array([0.2726, 0.2529, 0.2664, 0.2655, 0.2641, 0.2642, 0.2591, 0.2649, 0.2590]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 5e-3 diff --git a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/data/__init__.py b/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/data/__init__.py deleted file mode 100644 index 9a9761c518a1b07c5996165869742af0a52c82bc..0000000000000000000000000000000000000000 --- a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/data/__init__.py +++ /dev/null @@ -1,116 +0,0 @@ -"""This package includes all the modules related to data loading and preprocessing - - To add a custom dataset class called 'dummy', you need to add a file called 'dummy_dataset.py' and define a subclass 'DummyDataset' inherited from BaseDataset. - You need to implement four functions: - -- <__init__>: initialize the class, first call BaseDataset.__init__(self, opt). - -- <__len__>: return the size of dataset. - -- <__getitem__>: get a data point from data loader. - -- : (optionally) add dataset-specific options and set default options. - -Now you can use the dataset class by specifying flag '--dataset_mode dummy'. -See our template dataset class 'template_dataset.py' for more details. -""" -import numpy as np -import importlib -import torch.utils.data -from face3d.data.base_dataset import BaseDataset - - -def find_dataset_using_name(dataset_name): - """Import the module "data/[dataset_name]_dataset.py". - - In the file, the class called DatasetNameDataset() will - be instantiated. It has to be a subclass of BaseDataset, - and it is case-insensitive. - """ - dataset_filename = "data." + dataset_name + "_dataset" - datasetlib = importlib.import_module(dataset_filename) - - dataset = None - target_dataset_name = dataset_name.replace('_', '') + 'dataset' - for name, cls in datasetlib.__dict__.items(): - if name.lower() == target_dataset_name.lower() \ - and issubclass(cls, BaseDataset): - dataset = cls - - if dataset is None: - raise NotImplementedError("In %s.py, there should be a subclass of BaseDataset with class name that matches %s in lowercase." % (dataset_filename, target_dataset_name)) - - return dataset - - -def get_option_setter(dataset_name): - """Return the static method of the dataset class.""" - dataset_class = find_dataset_using_name(dataset_name) - return dataset_class.modify_commandline_options - - -def create_dataset(opt, rank=0): - """Create a dataset given the option. - - This function wraps the class CustomDatasetDataLoader. - This is the main interface between this package and 'train.py'/'test.py' - - Example: - >>> from data import create_dataset - >>> dataset = create_dataset(opt) - """ - data_loader = CustomDatasetDataLoader(opt, rank=rank) - dataset = data_loader.load_data() - return dataset - -class CustomDatasetDataLoader(): - """Wrapper class of Dataset class that performs multi-threaded data loading""" - - def __init__(self, opt, rank=0): - """Initialize this class - - Step 1: create a dataset instance given the name [dataset_mode] - Step 2: create a multi-threaded data loader. - """ - self.opt = opt - dataset_class = find_dataset_using_name(opt.dataset_mode) - self.dataset = dataset_class(opt) - self.sampler = None - print("rank %d %s dataset [%s] was created" % (rank, self.dataset.name, type(self.dataset).__name__)) - if opt.use_ddp and opt.isTrain: - world_size = opt.world_size - self.sampler = torch.utils.data.distributed.DistributedSampler( - self.dataset, - num_replicas=world_size, - rank=rank, - shuffle=not opt.serial_batches - ) - self.dataloader = torch.utils.data.DataLoader( - self.dataset, - sampler=self.sampler, - num_workers=int(opt.num_threads / world_size), - batch_size=int(opt.batch_size / world_size), - drop_last=True) - else: - self.dataloader = torch.utils.data.DataLoader( - self.dataset, - batch_size=opt.batch_size, - shuffle=(not opt.serial_batches) and opt.isTrain, - num_workers=int(opt.num_threads), - drop_last=True - ) - - def set_epoch(self, epoch): - self.dataset.current_epoch = epoch - if self.sampler is not None: - self.sampler.set_epoch(epoch) - - def load_data(self): - return self - - def __len__(self): - """Return the number of data in the dataset""" - return min(len(self.dataset), self.opt.max_dataset_size) - - def __iter__(self): - """Return a batch of data""" - for i, data in enumerate(self.dataloader): - if i * self.opt.batch_size >= self.opt.max_dataset_size: - break - yield data diff --git a/spaces/deepwisdom/MetaGPT/tests/metagpt/management/test_skill_manager.py b/spaces/deepwisdom/MetaGPT/tests/metagpt/management/test_skill_manager.py deleted file mode 100644 index b0be858a1fd908548d301a51a1eeeb26c4551335..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/tests/metagpt/management/test_skill_manager.py +++ /dev/null @@ -1,36 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/6/6 12:38 -@Author : alexanderwu -@File : test_skill_manager.py -""" -from metagpt.actions import WritePRD, WriteTest -from metagpt.logs import logger -from metagpt.management.skill_manager import SkillManager - - -def test_skill_manager(): - manager = SkillManager() - logger.info(manager._store) - - write_prd = WritePRD("WritePRD") - write_prd.desc = "基于老板或其他人的需求进行PRD的撰写,包括用户故事、需求分解等" - write_test = WriteTest("WriteTest") - write_test.desc = "进行测试用例的撰写" - manager.add_skill(write_prd) - manager.add_skill(write_test) - - skill = manager.get_skill("WriteTest") - logger.info(skill) - - rsp = manager.retrieve_skill("写PRD") - logger.info(rsp) - assert rsp[0] == "WritePRD" - - rsp = manager.retrieve_skill("写测试用例") - logger.info(rsp) - assert rsp[0] == 'WriteTest' - - rsp = manager.retrieve_skill_scored("写PRD") - logger.info(rsp) diff --git a/spaces/diacanFperku/AutoGPT/PS3 Emulator BIOS V1.9.4.rar (51.73 KB [REPACK].md b/spaces/diacanFperku/AutoGPT/PS3 Emulator BIOS V1.9.4.rar (51.73 KB [REPACK].md deleted file mode 100644 index f30b55cc02e45f793874db2eca80e2165d9280c0..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/PS3 Emulator BIOS V1.9.4.rar (51.73 KB [REPACK].md +++ /dev/null @@ -1,6 +0,0 @@ -

    PS3 Emulator BIOS v1.9.4.rar (51.73 KB


    Download ✶✶✶ https://gohhs.com/2uFUGz



    -
    -PS3 Emulator BIOS V1.9.4.rar 51.73 KB >> . /2017/06/14/biologia-na-czasie-3-zakres-rozszerzony-pdf-free.. 27. Jan 2018 . Download Ps3 Emulator Bios ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/PornHub Premium Accounts 8 October 2019.md b/spaces/diacanFperku/AutoGPT/PornHub Premium Accounts 8 October 2019.md deleted file mode 100644 index 25b7fff01f81440d891626e7728b331b4d970ee0..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/PornHub Premium Accounts 8 October 2019.md +++ /dev/null @@ -1,6 +0,0 @@ -

    PornHub Premium Accounts 8 October 2019


    DOWNLOAD ►►► https://gohhs.com/2uFV3o



    -
    - d5da3c52bf
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/Premam [2015] Malayalam 720p BDRip X264 AC3 5.1 1.4GB 53.md b/spaces/diacanFperku/AutoGPT/Premam [2015] Malayalam 720p BDRip X264 AC3 5.1 1.4GB 53.md deleted file mode 100644 index 57f1075ddfba958b4621014131b3d435f45c2d81..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Premam [2015] Malayalam 720p BDRip X264 AC3 5.1 1.4GB 53.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Premam [2015] Malayalam 720p BDRip x264 AC3 5.1 1.4GB 53


    Download Filehttps://gohhs.com/2uFVHC



    -
    - d5da3c52bf
    -
    -
    -

    diff --git a/spaces/digitalxingtong/Bufeiyan-b-Bert-VITS2/modules.py b/spaces/digitalxingtong/Bufeiyan-b-Bert-VITS2/modules.py deleted file mode 100644 index 92e0f32a51c472bfd1659a50a95a95d195281d2b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Bufeiyan-b-Bert-VITS2/modules.py +++ /dev/null @@ -1,452 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform -from attentions import Encoder - -LRELU_SLOPE = 0.1 - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x -class TransformerCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - n_layers, - n_heads, - p_dropout=0, - filter_channels=0, - mean_only=False, - wn_sharing_parameter=None, - gin_channels = 0 - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = Encoder(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = gin_channels) if wn_sharing_parameter is None else wn_sharing_parameter - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/dineshreddy/WALT/mmdet/core/utils/__init__.py b/spaces/dineshreddy/WALT/mmdet/core/utils/__init__.py deleted file mode 100644 index 5c51dac6d648f41d5c5f46dbf703f19469a7bb6c..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/core/utils/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -from .dist_utils import DistOptimizerHook, allreduce_grads, reduce_mean -from .misc import mask2ndarray, multi_apply, unmap - -__all__ = [ - 'allreduce_grads', 'DistOptimizerHook', 'reduce_mean', 'multi_apply', - 'unmap', 'mask2ndarray' -] diff --git a/spaces/doevent/blip/train_nlvr.py b/spaces/doevent/blip/train_nlvr.py deleted file mode 100644 index 84b247bda2334c1fd894b6c11d33ef48c8e7df28..0000000000000000000000000000000000000000 --- a/spaces/doevent/blip/train_nlvr.py +++ /dev/null @@ -1,213 +0,0 @@ -''' - * Copyright (c) 2022, salesforce.com, inc. - * All rights reserved. - * SPDX-License-Identifier: BSD-3-Clause - * For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause - * By Junnan Li -''' -import argparse -import os -import ruamel_yaml as yaml -import numpy as np -import random -import time -import datetime -import json -from pathlib import Path -import json -import pickle - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.utils.data import DataLoader -import torch.backends.cudnn as cudnn -import torch.distributed as dist - -from models.blip_nlvr import blip_nlvr - -import utils -from utils import cosine_lr_schedule, warmup_lr_schedule -from data import create_dataset, create_sampler, create_loader - -def train(model, data_loader, optimizer, epoch, device, config): - # train - model.train() - - metric_logger = utils.MetricLogger(delimiter=" ") - metric_logger.add_meter('lr', utils.SmoothedValue(window_size=50, fmt='{value:.6f}')) - metric_logger.add_meter('loss', utils.SmoothedValue(window_size=50, fmt='{value:.4f}')) - - header = 'Train Epoch: [{}]'.format(epoch) - print_freq = 50 - step_size = 10 - - for i,(image0, image1, text, targets) in enumerate(metric_logger.log_every(data_loader, print_freq, header)): - - images = torch.cat([image0, image1], dim=0) - images, targets = images.to(device), targets.to(device) - - loss = model(images, text, targets=targets, train=True) - - optimizer.zero_grad() - loss.backward() - optimizer.step() - - metric_logger.update(lr=optimizer.param_groups[0]["lr"]) - metric_logger.update(loss=loss.item()) - - # gather the stats from all processes - metric_logger.synchronize_between_processes() - print("Averaged stats:", metric_logger.global_avg()) - return {k: "{:.4f}".format(meter.global_avg) for k, meter in metric_logger.meters.items()} - - -@torch.no_grad() -def evaluate(model, data_loader, device, config): - # test - model.eval() - - metric_logger = utils.MetricLogger(delimiter=" ") - - header = 'Evaluation:' - print_freq = 50 - - for image0, image1, text, targets in metric_logger.log_every(data_loader, print_freq, header): - images = torch.cat([image0, image1], dim=0) - images, targets = images.to(device), targets.to(device) - - prediction = model(images, text, targets=targets, train=False) - - _, pred_class = prediction.max(1) - accuracy = (targets==pred_class).sum() / targets.size(0) - - metric_logger.meters['acc'].update(accuracy.item(), n=image0.size(0)) - - # gather the stats from all processes - metric_logger.synchronize_between_processes() - - print("Averaged stats:", metric_logger.global_avg()) - return {k: "{:.4f}".format(meter.global_avg) for k, meter in metric_logger.meters.items()} - - - -def main(args, config): - utils.init_distributed_mode(args) - - device = torch.device(args.device) - - # fix the seed for reproducibility - seed = args.seed + utils.get_rank() - torch.manual_seed(seed) - np.random.seed(seed) - random.seed(seed) - cudnn.benchmark = True - - #### Dataset #### - print("Creating dataset") - datasets = create_dataset('nlvr', config) - - if args.distributed: - num_tasks = utils.get_world_size() - global_rank = utils.get_rank() - samplers = create_sampler(datasets, [True,False,False], num_tasks, global_rank) - else: - samplers = [None, None, None] - - batch_size=[config['batch_size_train'],config['batch_size_test'],config['batch_size_test']] - train_loader, val_loader, test_loader = create_loader(datasets,samplers,batch_size=batch_size, - num_workers=[4,4,4],is_trains=[True,False,False], - collate_fns=[None,None,None]) - - #### Model #### - print("Creating model") - model = blip_nlvr(pretrained=config['pretrained'], image_size=config['image_size'], - vit=config['vit'], vit_grad_ckpt=config['vit_grad_ckpt'], vit_ckpt_layer=config['vit_ckpt_layer']) - - model = model.to(device) - - model_without_ddp = model - if args.distributed: - model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu]) - model_without_ddp = model.module - - optimizer = torch.optim.AdamW(params=model.parameters(), lr=config['init_lr'], weight_decay=config['weight_decay']) - - print("Start training") - start_time = time.time() - best = 0 - best_epoch = 0 - - for epoch in range(0, config['max_epoch']): - if not args.evaluate: - if args.distributed: - train_loader.sampler.set_epoch(epoch) - - cosine_lr_schedule(optimizer, epoch, config['max_epoch'], config['init_lr'], config['min_lr']) - - train_stats = train(model, train_loader, optimizer, epoch, device, config) - - val_stats = evaluate(model, val_loader, device, config) - test_stats = evaluate(model, test_loader, device, config) - - if utils.is_main_process(): - if args.evaluate: - log_stats = {**{f'val_{k}': v for k, v in val_stats.items()}, - **{f'test_{k}': v for k, v in test_stats.items()}, - } - with open(os.path.join(args.output_dir, "log.txt"),"a") as f: - f.write(json.dumps(log_stats) + "\n") - - else: - log_stats = {**{f'train_{k}': v for k, v in train_stats.items()}, - **{f'val_{k}': v for k, v in val_stats.items()}, - **{f'test_{k}': v for k, v in test_stats.items()}, - 'epoch': epoch, - } - - if float(val_stats['acc'])>best: - save_obj = { - 'model': model_without_ddp.state_dict(), - 'optimizer': optimizer.state_dict(), - 'config': config, - 'epoch': epoch, - } - torch.save(save_obj, os.path.join(args.output_dir, 'checkpoint_best.pth')) - best = float(val_stats['acc']) - best_epoch = epoch - - with open(os.path.join(args.output_dir, "log.txt"),"a") as f: - f.write(json.dumps(log_stats) + "\n") - if args.evaluate: - break - - dist.barrier() - - if utils.is_main_process(): - with open(os.path.join(args.output_dir, "log.txt"),"a") as f: - f.write("best epoch: %d"%best_epoch) - - total_time = time.time() - start_time - total_time_str = str(datetime.timedelta(seconds=int(total_time))) - print('Training time {}'.format(total_time_str)) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--config', default='./configs/nlvr.yaml') - parser.add_argument('--output_dir', default='output/NLVR') - parser.add_argument('--evaluate', action='store_true') - parser.add_argument('--device', default='cuda') - parser.add_argument('--seed', default=42, type=int) - parser.add_argument('--world_size', default=1, type=int, help='number of distributed processes') - parser.add_argument('--dist_url', default='env://', help='url used to set up distributed training') - parser.add_argument('--distributed', default=True, type=bool) - args = parser.parse_args() - - config = yaml.load(open(args.config, 'r'), Loader=yaml.Loader) - - Path(args.output_dir).mkdir(parents=True, exist_ok=True) - - yaml.dump(config, open(os.path.join(args.output_dir, 'config.yaml'), 'w')) - - main(args, config) \ No newline at end of file diff --git a/spaces/dteam/chatgpt-dteam/bin_public/utils/utils_db.py b/spaces/dteam/chatgpt-dteam/bin_public/utils/utils_db.py deleted file mode 100644 index 7f115752a1f41c74b1713f2f254fad9b3c7a4fa8..0000000000000000000000000000000000000000 --- a/spaces/dteam/chatgpt-dteam/bin_public/utils/utils_db.py +++ /dev/null @@ -1,93 +0,0 @@ -import psycopg2 -import datetime -from bin_public.config.presets import * -from dateutil import tz -import os - - -def current_time(type): - if type == 'ymd': - return datetime.datetime.now().strftime("%Y-%m-%d") - if type == 'ymdhms': - return datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S") - - -# hologres 基础函数:查询 -def holo_query_func(run_sql, is_query=0): - conn = psycopg2.connect(host=os.environ['HOST'], - port=os.environ['PORT'], - dbname=os.environ['DBNAME'], - user=os.environ['AK'], - password=os.environ['SK']) - cur = conn.cursor() - cur.execute(run_sql) - if is_query: - data = cur.fetchall() - cur.close() - conn.close() - if is_query: - return data - -def holo_query_account_mapping(invite_code): - run_sql = f""" - select end_date, status, mapping_ak - from s_account_invite_code - where invite_code = '{invite_code}' - order by gmt_modify desc - limit 1 - """ - data = holo_query_func(run_sql, is_query=1) - # 数据库中查不到,则返回no_invite_code_msg - if len(data) == 0: - status_text = standard_error_msg + no_invite_code_msg - return status_text, None - # 数据库中查到,判断是否可用 - if len(data) == 1: - end_date = data[0][0] - status = data[0][1] - mapping_ak = data[0][2] - if end_date < datetime.datetime.now().strftime("%Y%m%d") or status != '1': - status_text = standard_error_msg + no_useful_invite_code_msg - return status_text, None - return 'Success status: ready', mapping_ak - - -def key_preprocessing(keyTxt): - invite_code = keyTxt - # 这里先用这个逻辑,到时候等实际的邀请码来了就改一下这个函数就行 - if keyTxt.startswith("dteam_"): - status_display, keyTxt = holo_query_account_mapping(keyTxt) - yield status_display, keyTxt, invite_code - return - else: - if len(keyTxt) != 51: - status_display = standard_error_msg + no_apikey_msg - yield status_display, keyTxt, invite_code - return - yield 'Success status: ready', keyTxt, invite_code - return - - -def holo_query_insert_chat_message(invite_code, prompt, response, all_token_cnt, history): - run_sql = f""" - insert into s_account_chat_message( - gmt_create - ,invite_code - ,prompt - ,response - ,all_token_cnt - ,history - ,chat_seq - ,log_timestamp - ) - select - '{datetime.datetime.now().replace(tzinfo=tz.gettz('Asina/Shanghai')).strftime("%Y-%m-%d %H:%M:%S")}' as gmt_create - ,'{str(invite_code).replace("'", '"')}' as invite_code - ,'{str(prompt).replace("'", '"')}' as prompt - ,'{str(response).replace("'", '"')}' as response - ,'{str(all_token_cnt).replace("'", '"')}' as all_token_cnt - ,'{str(history).replace("'", '"')}' as history - ,'{len(history)}' as chat_seq - ,localtimestamp as log_timestamp - """ - holo_query_func(run_sql, is_query=0) \ No newline at end of file diff --git a/spaces/dvitel/codebleu/weighted_ngram_match.py b/spaces/dvitel/codebleu/weighted_ngram_match.py deleted file mode 100644 index c9abe3a2a5107425d2bbddfefa252c5b78030f93..0000000000000000000000000000000000000000 --- a/spaces/dvitel/codebleu/weighted_ngram_match.py +++ /dev/null @@ -1,558 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT license. - -# Natural Language Toolkit: BLEU Score -# -# Copyright (C) 2001-2020 NLTK Project -# Authors: Chin Yee Lee, Hengfeng Li, Ruxin Hou, Calvin Tanujaya Lim -# Contributors: Björn Mattsson, Dmitrijs Milajevs, Liling Tan -# URL: -# For license information, see LICENSE.TXT - -"""BLEU score implementation.""" - -import math -import sys -from fractions import Fraction -import warnings -from collections import Counter - -from .utils import ngrams -import pdb - - -def sentence_bleu( - references, - hypothesis, - weights=(0.25, 0.25, 0.25, 0.25), - smoothing_function=None, - auto_reweigh=False, -): - """ - Calculate BLEU score (Bilingual Evaluation Understudy) from - Papineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. - "BLEU: a method for automatic evaluation of machine translation." - In Proceedings of ACL. http://www.aclweb.org/anthology/P02-1040.pdf - >>> hypothesis1 = ['It', 'is', 'a', 'guide', 'to', 'action', 'which', - ... 'ensures', 'that', 'the', 'military', 'always', - ... 'obeys', 'the', 'commands', 'of', 'the', 'party'] - >>> hypothesis2 = ['It', 'is', 'to', 'insure', 'the', 'troops', - ... 'forever', 'hearing', 'the', 'activity', 'guidebook', - ... 'that', 'party', 'direct'] - >>> reference1 = ['It', 'is', 'a', 'guide', 'to', 'action', 'that', - ... 'ensures', 'that', 'the', 'military', 'will', 'forever', - ... 'heed', 'Party', 'commands'] - >>> reference2 = ['It', 'is', 'the', 'guiding', 'principle', 'which', - ... 'guarantees', 'the', 'military', 'forces', 'always', - ... 'being', 'under', 'the', 'command', 'of', 'the', - ... 'Party'] - >>> reference3 = ['It', 'is', 'the', 'practical', 'guide', 'for', 'the', - ... 'army', 'always', 'to', 'heed', 'the', 'directions', - ... 'of', 'the', 'party'] - >>> sentence_bleu([reference1, reference2, reference3], hypothesis1) # doctest: +ELLIPSIS - 0.5045... - If there is no ngrams overlap for any order of n-grams, BLEU returns the - value 0. This is because the precision for the order of n-grams without - overlap is 0, and the geometric mean in the final BLEU score computation - multiplies the 0 with the precision of other n-grams. This results in 0 - (independently of the precision of the othe n-gram orders). The following - example has zero 3-gram and 4-gram overlaps: - >>> round(sentence_bleu([reference1, reference2, reference3], hypothesis2),4) # doctest: +ELLIPSIS - 0.0 - To avoid this harsh behaviour when no ngram overlaps are found a smoothing - function can be used. - >>> chencherry = SmoothingFunction() - >>> sentence_bleu([reference1, reference2, reference3], hypothesis2, - ... smoothing_function=chencherry.method1) # doctest: +ELLIPSIS - 0.0370... - The default BLEU calculates a score for up to 4-grams using uniform - weights (this is called BLEU-4). To evaluate your translations with - higher/lower order ngrams, use customized weights. E.g. when accounting - for up to 5-grams with uniform weights (this is called BLEU-5) use: - >>> weights = (1./5., 1./5., 1./5., 1./5., 1./5.) - >>> sentence_bleu([reference1, reference2, reference3], hypothesis1, weights) # doctest: +ELLIPSIS - 0.3920... - :param references: reference sentences - :type references: list(list(str)) - :param hypothesis: a hypothesis sentence - :type hypothesis: list(str) - :param weights: weights for unigrams, bigrams, trigrams and so on - :type weights: list(float) - :param smoothing_function: - :type smoothing_function: SmoothingFunction - :param auto_reweigh: Option to re-normalize the weights uniformly. - :type auto_reweigh: bool - :return: The sentence-level BLEU score. - :rtype: float - """ - return corpus_bleu( - [references], [hypothesis], weights, smoothing_function, auto_reweigh - ) - - -def corpus_bleu( - list_of_references, - hypotheses, - weights=(0.25, 0.25, 0.25, 0.25), - smoothing_function=None, - auto_reweigh=False, -): - """ - Calculate a single corpus-level BLEU score (aka. system-level BLEU) for all - the hypotheses and their respective references. - Instead of averaging the sentence level BLEU scores (i.e. marco-average - precision), the original BLEU metric (Papineni et al. 2002) accounts for - the micro-average precision (i.e. summing the numerators and denominators - for each hypothesis-reference(s) pairs before the division). - >>> hyp1 = ['It', 'is', 'a', 'guide', 'to', 'action', 'which', - ... 'ensures', 'that', 'the', 'military', 'always', - ... 'obeys', 'the', 'commands', 'of', 'the', 'party'] - >>> ref1a = ['It', 'is', 'a', 'guide', 'to', 'action', 'that', - ... 'ensures', 'that', 'the', 'military', 'will', 'forever', - ... 'heed', 'Party', 'commands'] - >>> ref1b = ['It', 'is', 'the', 'guiding', 'principle', 'which', - ... 'guarantees', 'the', 'military', 'forces', 'always', - ... 'being', 'under', 'the', 'command', 'of', 'the', 'Party'] - >>> ref1c = ['It', 'is', 'the', 'practical', 'guide', 'for', 'the', - ... 'army', 'always', 'to', 'heed', 'the', 'directions', - ... 'of', 'the', 'party'] - >>> hyp2 = ['he', 'read', 'the', 'book', 'because', 'he', 'was', - ... 'interested', 'in', 'world', 'history'] - >>> ref2a = ['he', 'was', 'interested', 'in', 'world', 'history', - ... 'because', 'he', 'read', 'the', 'book'] - >>> list_of_references = [[ref1a, ref1b, ref1c], [ref2a]] - >>> hypotheses = [hyp1, hyp2] - >>> corpus_bleu(list_of_references, hypotheses) # doctest: +ELLIPSIS - 0.5920... - The example below show that corpus_bleu() is different from averaging - sentence_bleu() for hypotheses - >>> score1 = sentence_bleu([ref1a, ref1b, ref1c], hyp1) - >>> score2 = sentence_bleu([ref2a], hyp2) - >>> (score1 + score2) / 2 # doctest: +ELLIPSIS - 0.6223... - :param list_of_references: a corpus of lists of reference sentences, w.r.t. hypotheses - :type list_of_references: list(list(list(str))) - :param hypotheses: a list of hypothesis sentences - :type hypotheses: list(list(str)) - :param weights: weights for unigrams, bigrams, trigrams and so on - :type weights: list(float) - :param smoothing_function: - :type smoothing_function: SmoothingFunction - :param auto_reweigh: Option to re-normalize the weights uniformly. - :type auto_reweigh: bool - :return: The corpus-level BLEU score. - :rtype: float - """ - # Before proceeding to compute BLEU, perform sanity checks. - - p_numerators = Counter() # Key = ngram order, and value = no. of ngram matches. - p_denominators = Counter() # Key = ngram order, and value = no. of ngram in ref. - hyp_lengths, ref_lengths = 0, 0 - - assert len(list_of_references) == len(hypotheses), ( - "The number of hypotheses and their reference(s) should be the " "same " - ) - - # Iterate through each hypothesis and their corresponding references. - for references, hypothesis in zip(list_of_references, hypotheses): - # For each order of ngram, calculate the numerator and - # denominator for the corpus-level modified precision. - for i, _ in enumerate(weights, start=1): - p_i_numeraotr, p_i_denominator = modified_recall(references, hypothesis, i) - p_numerators[i] += p_i_numeraotr - p_denominators[i] += p_i_denominator - - # Calculate the hypothesis length and the closest reference length. - # Adds them to the corpus-level hypothesis and reference counts. - hyp_len = len(hypothesis) - hyp_lengths += hyp_len - ref_lengths += closest_ref_length(references, hyp_len) - - # Calculate corpus-level brevity penalty. - bp = brevity_penalty(ref_lengths, hyp_lengths) - - # Uniformly re-weighting based on maximum hypothesis lengths if largest - # order of n-grams < 4 and weights is set at default. - if auto_reweigh: - if hyp_lengths < 4 and weights == (0.25, 0.25, 0.25, 0.25): - weights = (1 / hyp_lengths,) * hyp_lengths - - # Collects the various recall values for the different ngram orders. - p_n = [ - (p_numerators[i], p_denominators[i]) - for i, _ in enumerate(weights, start=1) - ] - - # Returns 0 if there's no matching n-grams - # We only need to check for p_numerators[1] == 0, since if there's - # no unigrams, there won't be any higher order ngrams. - if p_numerators[1] == 0: - return 0 - - # If there's no smoothing, set use method0 from SmoothinFunction class. - if not smoothing_function: - smoothing_function = SmoothingFunction().method1 - # Smoothen the modified precision. - # Note: smoothing_function() may convert values into floats; - # it tries to retain the Fraction object as much as the - # smoothing method allows. - p_n = smoothing_function( - p_n, references=references, hypothesis=hypothesis, hyp_len=hyp_lengths - ) - # pdb.set_trace() - s = (w_i * math.log(p_i[0]/p_i[1]) for w_i, p_i in zip(weights, p_n)) - s = bp * math.exp(math.fsum(s)) - return s - - -def modified_recall(references, hypothesis, n): - """ - Calculate modified ngram recall. - :param references: A list of reference translations. - :type references: list(list(str)) - :param hypothesis: A hypothesis translation. - :type hypothesis: list(str) - :param n: The ngram order. - :type n: int - :return: BLEU's modified precision for the nth order ngram. - :rtype: Fraction - """ - # Extracts all ngrams in hypothesis - # Set an empty Counter if hypothesis is empty. - # pdb.set_trace() - numerator = 0 - denominator = 0 - - counts = Counter(ngrams(hypothesis, n)) if len(hypothesis) >= n else Counter() - # Extract a union of references' counts. - # max_counts = reduce(or_, [Counter(ngrams(ref, n)) for ref in references]) - max_counts = {} - for reference_and_weights in references: - reference = reference_and_weights[0] - weights = reference_and_weights[1] - reference_counts = ( - Counter(ngrams(reference, n)) if len(reference) >= n else Counter() - ) - # for ngram in reference_counts: - # max_counts[ngram] = max(max_counts.get(ngram, 0), counts[ngram]) - clipped_counts = { - ngram: min(count, counts[ngram]) for ngram, count in reference_counts.items() - } - # reweight - if n == 1 and len(weights) == len(reference_counts): - def weighted_sum(weights, counts): - sum_counts = 0 - for ngram, count in counts.items(): - sum_counts += count * (weights[ngram[0]] if ngram[0] in weights else 1) - return sum_counts - - numerator += weighted_sum(weights, clipped_counts) - denominator += max(1, weighted_sum(weights, reference_counts)) - - else: - numerator += sum(clipped_counts.values()) - denominator += max(1, sum(reference_counts.values())) - - # # Assigns the intersection between hypothesis and references' counts. - # clipped_counts = { - # ngram: min(count, max_counts[ngram]) for ngram, count in counts.items() - # } - - # numerator += sum(clipped_counts.values()) - # # Ensures that denominator is minimum 1 to avoid ZeroDivisionError. - # # Usually this happens when the ngram order is > len(reference). - # denominator += max(1, sum(counts.values())) - - #return Fraction(numerator, denominator, _normalize=False) - return numerator, denominator - - -def closest_ref_length(references, hyp_len): - """ - This function finds the reference that is the closest length to the - hypothesis. The closest reference length is referred to as *r* variable - from the brevity penalty formula in Papineni et. al. (2002) - :param references: A list of reference translations. - :type references: list(list(str)) - :param hyp_len: The length of the hypothesis. - :type hyp_len: int - :return: The length of the reference that's closest to the hypothesis. - :rtype: int - """ - ref_lens = (len(reference) for reference in references) - closest_ref_len = min( - ref_lens, key=lambda ref_len: (abs(ref_len - hyp_len), ref_len) - ) - return closest_ref_len - - -def brevity_penalty(closest_ref_len, hyp_len): - """ - Calculate brevity penalty. - As the modified n-gram precision still has the problem from the short - length sentence, brevity penalty is used to modify the overall BLEU - score according to length. - An example from the paper. There are three references with length 12, 15 - and 17. And a concise hypothesis of the length 12. The brevity penalty is 1. - >>> reference1 = list('aaaaaaaaaaaa') # i.e. ['a'] * 12 - >>> reference2 = list('aaaaaaaaaaaaaaa') # i.e. ['a'] * 15 - >>> reference3 = list('aaaaaaaaaaaaaaaaa') # i.e. ['a'] * 17 - >>> hypothesis = list('aaaaaaaaaaaa') # i.e. ['a'] * 12 - >>> references = [reference1, reference2, reference3] - >>> hyp_len = len(hypothesis) - >>> closest_ref_len = closest_ref_length(references, hyp_len) - >>> brevity_penalty(closest_ref_len, hyp_len) - 1.0 - In case a hypothesis translation is shorter than the references, penalty is - applied. - >>> references = [['a'] * 28, ['a'] * 28] - >>> hypothesis = ['a'] * 12 - >>> hyp_len = len(hypothesis) - >>> closest_ref_len = closest_ref_length(references, hyp_len) - >>> brevity_penalty(closest_ref_len, hyp_len) - 0.2635971381157267 - The length of the closest reference is used to compute the penalty. If the - length of a hypothesis is 12, and the reference lengths are 13 and 2, the - penalty is applied because the hypothesis length (12) is less then the - closest reference length (13). - >>> references = [['a'] * 13, ['a'] * 2] - >>> hypothesis = ['a'] * 12 - >>> hyp_len = len(hypothesis) - >>> closest_ref_len = closest_ref_length(references, hyp_len) - >>> brevity_penalty(closest_ref_len, hyp_len) # doctest: +ELLIPSIS - 0.9200... - The brevity penalty doesn't depend on reference order. More importantly, - when two reference sentences are at the same distance, the shortest - reference sentence length is used. - >>> references = [['a'] * 13, ['a'] * 11] - >>> hypothesis = ['a'] * 12 - >>> hyp_len = len(hypothesis) - >>> closest_ref_len = closest_ref_length(references, hyp_len) - >>> bp1 = brevity_penalty(closest_ref_len, hyp_len) - >>> hyp_len = len(hypothesis) - >>> closest_ref_len = closest_ref_length(reversed(references), hyp_len) - >>> bp2 = brevity_penalty(closest_ref_len, hyp_len) - >>> bp1 == bp2 == 1 - True - A test example from mteval-v13a.pl (starting from the line 705): - >>> references = [['a'] * 11, ['a'] * 8] - >>> hypothesis = ['a'] * 7 - >>> hyp_len = len(hypothesis) - >>> closest_ref_len = closest_ref_length(references, hyp_len) - >>> brevity_penalty(closest_ref_len, hyp_len) # doctest: +ELLIPSIS - 0.8668... - >>> references = [['a'] * 11, ['a'] * 8, ['a'] * 6, ['a'] * 7] - >>> hypothesis = ['a'] * 7 - >>> hyp_len = len(hypothesis) - >>> closest_ref_len = closest_ref_length(references, hyp_len) - >>> brevity_penalty(closest_ref_len, hyp_len) - 1.0 - :param hyp_len: The length of the hypothesis for a single sentence OR the - sum of all the hypotheses' lengths for a corpus - :type hyp_len: int - :param closest_ref_len: The length of the closest reference for a single - hypothesis OR the sum of all the closest references for every hypotheses. - :type closest_ref_len: int - :return: BLEU's brevity penalty. - :rtype: float - """ - if hyp_len > closest_ref_len: - return 1 - # If hypothesis is empty, brevity penalty = 0 should result in BLEU = 0.0 - elif hyp_len == 0: - return 0 - else: - return math.exp(1 - closest_ref_len / hyp_len) - - -class SmoothingFunction: - """ - This is an implementation of the smoothing techniques - for segment-level BLEU scores that was presented in - Boxing Chen and Collin Cherry (2014) A Systematic Comparison of - Smoothing Techniques for Sentence-Level BLEU. In WMT14. - http://acl2014.org/acl2014/W14-33/pdf/W14-3346.pdf - """ - - def __init__(self, epsilon=0.1, alpha=5, k=5): - """ - This will initialize the parameters required for the various smoothing - techniques, the default values are set to the numbers used in the - experiments from Chen and Cherry (2014). - >>> hypothesis1 = ['It', 'is', 'a', 'guide', 'to', 'action', 'which', 'ensures', - ... 'that', 'the', 'military', 'always', 'obeys', 'the', - ... 'commands', 'of', 'the', 'party'] - >>> reference1 = ['It', 'is', 'a', 'guide', 'to', 'action', 'that', 'ensures', - ... 'that', 'the', 'military', 'will', 'forever', 'heed', - ... 'Party', 'commands'] - >>> chencherry = SmoothingFunction() - >>> print(sentence_bleu([reference1], hypothesis1)) # doctest: +ELLIPSIS - 0.4118... - >>> print(sentence_bleu([reference1], hypothesis1, smoothing_function=chencherry.method0)) # doctest: +ELLIPSIS - 0.4118... - >>> print(sentence_bleu([reference1], hypothesis1, smoothing_function=chencherry.method1)) # doctest: +ELLIPSIS - 0.4118... - >>> print(sentence_bleu([reference1], hypothesis1, smoothing_function=chencherry.method2)) # doctest: +ELLIPSIS - 0.4489... - >>> print(sentence_bleu([reference1], hypothesis1, smoothing_function=chencherry.method3)) # doctest: +ELLIPSIS - 0.4118... - >>> print(sentence_bleu([reference1], hypothesis1, smoothing_function=chencherry.method4)) # doctest: +ELLIPSIS - 0.4118... - >>> print(sentence_bleu([reference1], hypothesis1, smoothing_function=chencherry.method5)) # doctest: +ELLIPSIS - 0.4905... - >>> print(sentence_bleu([reference1], hypothesis1, smoothing_function=chencherry.method6)) # doctest: +ELLIPSIS - 0.4135... - >>> print(sentence_bleu([reference1], hypothesis1, smoothing_function=chencherry.method7)) # doctest: +ELLIPSIS - 0.4905... - :param epsilon: the epsilon value use in method 1 - :type epsilon: float - :param alpha: the alpha value use in method 6 - :type alpha: int - :param k: the k value use in method 4 - :type k: int - """ - self.epsilon = epsilon - self.alpha = alpha - self.k = k - - def method0(self, p_n, *args, **kwargs): - """ - No smoothing. - """ - p_n_new = [] - for i, p_i in enumerate(p_n): - if p_i[0] != 0: - p_n_new.append(p_i) - else: - _msg = str( - "\nThe hypothesis contains 0 counts of {}-gram overlaps.\n" - "Therefore the BLEU score evaluates to 0, independently of\n" - "how many N-gram overlaps of lower order it contains.\n" - "Consider using lower n-gram order or use " - "SmoothingFunction()" - ).format(i + 1) - warnings.warn(_msg) - # When numerator==0 where denonminator==0 or !=0, the result - # for the precision score should be equal to 0 or undefined. - # Due to BLEU geometric mean computation in logarithm space, - # we we need to take the return sys.float_info.min such that - # math.log(sys.float_info.min) returns a 0 precision score. - p_n_new.append(sys.float_info.min) - return p_n_new - - def method1(self, p_n, *args, **kwargs): - """ - Smoothing method 1: Add *epsilon* counts to precision with 0 counts. - """ - return [ - ((p_i[0] + self.epsilon), p_i[1]) - if p_i[0] == 0 - else p_i - for p_i in p_n - ] - - def method2(self, p_n, *args, **kwargs): - """ - Smoothing method 2: Add 1 to both numerator and denominator from - Chin-Yew Lin and Franz Josef Och (2004) Automatic evaluation of - machine translation quality using longest common subsequence and - skip-bigram statistics. In ACL04. - """ - return [ - (p_i[0] + 1, p_i[1] + 1) - for p_i in p_n - ] - - def method3(self, p_n, *args, **kwargs): - """ - Smoothing method 3: NIST geometric sequence smoothing - The smoothing is computed by taking 1 / ( 2^k ), instead of 0, for each - precision score whose matching n-gram count is null. - k is 1 for the first 'n' value for which the n-gram match count is null/ - For example, if the text contains: - - one 2-gram match - - and (consequently) two 1-gram matches - the n-gram count for each individual precision score would be: - - n=1 => prec_count = 2 (two unigrams) - - n=2 => prec_count = 1 (one bigram) - - n=3 => prec_count = 1/2 (no trigram, taking 'smoothed' value of 1 / ( 2^k ), with k=1) - - n=4 => prec_count = 1/4 (no fourgram, taking 'smoothed' value of 1 / ( 2^k ), with k=2) - """ - incvnt = 1 # From the mteval-v13a.pl, it's referred to as k. - for i, p_i in enumerate(p_n): - if p_i.numerator == 0: - p_n[i] = 1 / (2 ** incvnt * p_i.denominator) - incvnt += 1 - return p_n - - def method4(self, p_n, references, hypothesis, hyp_len=None, *args, **kwargs): - """ - Smoothing method 4: - Shorter translations may have inflated precision values due to having - smaller denominators; therefore, we give them proportionally - smaller smoothed counts. Instead of scaling to 1/(2^k), Chen and Cherry - suggests dividing by 1/ln(len(T)), where T is the length of the translation. - """ - hyp_len = hyp_len if hyp_len else len(hypothesis) - for i, p_i in enumerate(p_n): - if p_i.numerator == 0 and hyp_len != 0: - incvnt = i + 1 * self.k / math.log( - hyp_len - ) # Note that this K is different from the K from NIST. - p_n[i] = incvnt / p_i.denominator - return p_n - - def method5(self, p_n, references, hypothesis, hyp_len=None, *args, **kwargs): - """ - Smoothing method 5: - The matched counts for similar values of n should be similar. To a - calculate the n-gram matched count, it averages the n−1, n and n+1 gram - matched counts. - """ - hyp_len = hyp_len if hyp_len else len(hypothesis) - m = {} - # Requires an precision value for an addition ngram order. - p_n_plus1 = p_n + [modified_precision(references, hypothesis, 5)] - m[-1] = p_n[0] + 1 - for i, p_i in enumerate(p_n): - p_n[i] = (m[i - 1] + p_i + p_n_plus1[i + 1]) / 3 - m[i] = p_n[i] - return p_n - - def method6(self, p_n, references, hypothesis, hyp_len=None, *args, **kwargs): - """ - Smoothing method 6: - Interpolates the maximum likelihood estimate of the precision *p_n* with - a prior estimate *pi0*. The prior is estimated by assuming that the ratio - between pn and pn−1 will be the same as that between pn−1 and pn−2; from - Gao and He (2013) Training MRF-Based Phrase Translation Models using - Gradient Ascent. In NAACL. - """ - hyp_len = hyp_len if hyp_len else len(hypothesis) - # This smoothing only works when p_1 and p_2 is non-zero. - # Raise an error with an appropriate message when the input is too short - # to use this smoothing technique. - assert p_n[2], "This smoothing method requires non-zero precision for bigrams." - for i, p_i in enumerate(p_n): - if i in [0, 1]: # Skips the first 2 orders of ngrams. - continue - else: - pi0 = 0 if p_n[i - 2] == 0 else p_n[i - 1] ** 2 / p_n[i - 2] - # No. of ngrams in translation that matches the reference. - m = p_i.numerator - # No. of ngrams in translation. - l = sum(1 for _ in ngrams(hypothesis, i + 1)) - # Calculates the interpolated precision. - p_n[i] = (m + self.alpha * pi0) / (l + self.alpha) - return p_n - - def method7(self, p_n, references, hypothesis, hyp_len=None, *args, **kwargs): - """ - Smoothing method 7: - Interpolates methods 4 and 5. - """ - hyp_len = hyp_len if hyp_len else len(hypothesis) - p_n = self.method4(p_n, references, hypothesis, hyp_len) - p_n = self.method5(p_n, references, hypothesis, hyp_len) - return p_n diff --git a/spaces/elyza/ELYZA-japanese-Llama-2-7b-instruct-demo/model.py b/spaces/elyza/ELYZA-japanese-Llama-2-7b-instruct-demo/model.py deleted file mode 100644 index 22b94729f980587209155594eb5ce767682ccb78..0000000000000000000000000000000000000000 --- a/spaces/elyza/ELYZA-japanese-Llama-2-7b-instruct-demo/model.py +++ /dev/null @@ -1,77 +0,0 @@ -import logging -from threading import Thread -from typing import Iterator - -import torch -from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer - -model_id = 'elyza/ELYZA-japanese-Llama-2-7b-instruct' -if torch.cuda.is_available(): - model = AutoModelForCausalLM.from_pretrained( - model_id, - torch_dtype=torch.bfloat16, - device_map='auto', - use_auth_token=True, - use_cache=True, - ) -else: - model = None -tokenizer = AutoTokenizer.from_pretrained(model_id) - - -def get_prompt(message: str, chat_history: list[tuple[str, str]], - system_prompt: str) -> str: - texts = [f'[INST] <>\n{system_prompt}\n<>\n\n'] - # The first user input is _not_ stripped - do_strip = False - for user_input, response in chat_history: - user_input = user_input.strip() if do_strip else user_input - do_strip = True - texts.append(f'{user_input} [/INST] {response.strip()} [INST] ') - message = message.strip() if do_strip else message - texts.append(f'{message} [/INST]') - return ''.join(texts) - - -def get_input_token_length(message: str, chat_history: list[tuple[str, str]], system_prompt: str) -> int: - prompt = get_prompt(message, chat_history, system_prompt) - input_ids = tokenizer([prompt], return_tensors='np', add_special_tokens=False)['input_ids'] - return input_ids.shape[-1] - - -def run(message: str, - chat_history: list[tuple[str, str]], - system_prompt: str, - max_new_tokens: int = 1024, - temperature: float = 0.8, - top_p: float = 0.95, - top_k: int = 50, - do_sample: bool = False, - repetition_penalty: float = 1.2) -> Iterator[str]: - prompt = get_prompt(message, chat_history, system_prompt) - inputs = tokenizer([prompt], return_tensors='pt', add_special_tokens=False).to('cuda') - - streamer = TextIteratorStreamer(tokenizer, - timeout=10., - skip_prompt=True, - skip_special_tokens=True) - generate_kwargs = dict( - inputs, - streamer=streamer, - max_new_tokens=max_new_tokens, - do_sample=do_sample, - top_p=top_p, - top_k=top_k, - temperature=temperature, - num_beams=1, - repetition_penalty=repetition_penalty, - pad_token_id=tokenizer.eos_token_id, - eos_token_id=tokenizer.eos_token_id, - ) - t = Thread(target=model.generate, kwargs=generate_kwargs) - t.start() - - outputs = [] - for text in streamer: - outputs.append(text) - yield ''.join(outputs) diff --git a/spaces/enzostvs/hub-api-playground/components/editor/main/snippet/index.tsx b/spaces/enzostvs/hub-api-playground/components/editor/main/snippet/index.tsx deleted file mode 100644 index 7d12a141eaff8a6fbfabd0942b6a096e93bb611e..0000000000000000000000000000000000000000 --- a/spaces/enzostvs/hub-api-playground/components/editor/main/snippet/index.tsx +++ /dev/null @@ -1,68 +0,0 @@ -import { useCopyToClipboard } from "react-use"; -import { Options } from "redaxios"; - -import { ApiRoute } from "@/utils/type"; - -import { PythonSnippet } from "./python"; -import { JavascriptSnippet } from "./javascript"; -import { CurlSnippet } from "./curl"; - -export const Snippet = ({ - endpoint, - headers, - parameters, - body, -}: { - endpoint: ApiRoute; - parameters?: Record; - headers?: Record; - body?: Options | undefined; -}) => { - const [_, copyToClipboard] = useCopyToClipboard(); - - const handleCopyToClipboard = (snippet: string) => copyToClipboard(snippet); - - return ( -
    - - - -
    - ); -}; diff --git a/spaces/enzostvs/hub-api-playground/utils/index.ts b/spaces/enzostvs/hub-api-playground/utils/index.ts deleted file mode 100644 index db5b8826f86a14cf35ee06a7bcef9571af349738..0000000000000000000000000000000000000000 --- a/spaces/enzostvs/hub-api-playground/utils/index.ts +++ /dev/null @@ -1,16 +0,0 @@ -export const splitStringBracket = (str: string): any[] => { - // Split string by bracket but keep the bracket - const result = str.split(/(\{.*?\})/g) - return result.map((item) => { - if (item.startsWith('{') && item.endsWith('}')) { - return { - editable: true, - content: item.slice(1, -1), - key: item, - } - } return { - editable: false, - content: item - } - }) -} \ No newline at end of file diff --git a/spaces/eubinecto/idiomify/explore/explore_fetch_idioms.py b/spaces/eubinecto/idiomify/explore/explore_fetch_idioms.py deleted file mode 100644 index b403861cd8027dfbdf9708c4a13f9757f81d073f..0000000000000000000000000000000000000000 --- a/spaces/eubinecto/idiomify/explore/explore_fetch_idioms.py +++ /dev/null @@ -1,9 +0,0 @@ -from idiomify.fetchers import fetch_idioms - - -def main(): - print(fetch_idioms("d-1-3")) - - -if __name__ == '__main__': - main() diff --git a/spaces/facebook/StyleNeRF/torch_utils/ops/conv2d_resample.py b/spaces/facebook/StyleNeRF/torch_utils/ops/conv2d_resample.py deleted file mode 100644 index d646cb01ec45be01097e69fe56591e7c5a9a3e66..0000000000000000000000000000000000000000 --- a/spaces/facebook/StyleNeRF/torch_utils/ops/conv2d_resample.py +++ /dev/null @@ -1,145 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""2D convolution with optional up/downsampling.""" - -import torch - -from .. import misc -from . import conv2d_gradfix -from . import upfirdn2d -from .upfirdn2d import _parse_padding -from .upfirdn2d import _get_filter_size - -#---------------------------------------------------------------------------- - -def _get_weight_shape(w): - with misc.suppress_tracer_warnings(): # this value will be treated as a constant - shape = [int(sz) for sz in w.shape] - misc.assert_shape(w, shape) - return shape - -#---------------------------------------------------------------------------- - -def _conv2d_wrapper(x, w, stride=1, padding=0, groups=1, transpose=False, flip_weight=True): - """Wrapper for the underlying `conv2d()` and `conv_transpose2d()` implementations. - """ - _out_channels, _in_channels_per_group, kh, kw = _get_weight_shape(w) - - # Flip weight if requested. - # Note: conv2d() actually performs correlation (flip_weight=True) not convolution (flip_weight=False). - if not flip_weight and (kw > 1 or kh > 1): - w = w.flip([2, 3]) - - # Execute using conv2d_gradfix. - op = conv2d_gradfix.conv_transpose2d if transpose else conv2d_gradfix.conv2d - return op(x, w, stride=stride, padding=padding, groups=groups) - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def conv2d_resample(x, w, f=None, up=1, down=1, padding=0, groups=1, flip_weight=True, flip_filter=False): - r"""2D convolution with optional up/downsampling. - - Padding is performed only once at the beginning, not between the operations. - - Args: - x: Input tensor of shape - `[batch_size, in_channels, in_height, in_width]`. - w: Weight tensor of shape - `[out_channels, in_channels//groups, kernel_height, kernel_width]`. - f: Low-pass filter for up/downsampling. Must be prepared beforehand by - calling upfirdn2d.setup_filter(). None = identity (default). - up: Integer upsampling factor (default: 1). - down: Integer downsampling factor (default: 1). - padding: Padding with respect to the upsampled image. Can be a single number - or a list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - groups: Split input channels into N groups (default: 1). - flip_weight: False = convolution, True = correlation (default: True). - flip_filter: False = convolution, True = correlation (default: False). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. - """ - # Validate arguments. - assert isinstance(x, torch.Tensor) and (x.ndim == 4) - assert isinstance(w, torch.Tensor) and (w.ndim == 4) and (w.dtype == x.dtype) - assert f is None or (isinstance(f, torch.Tensor) and f.ndim in [1, 2] and f.dtype == torch.float32) - assert isinstance(up, int) and (up >= 1) - assert isinstance(down, int) and (down >= 1) - assert isinstance(groups, int) and (groups >= 1) - out_channels, in_channels_per_group, kh, kw = _get_weight_shape(w) - fw, fh = _get_filter_size(f) - px0, px1, py0, py1 = _parse_padding(padding) - - # Adjust padding to account for up/downsampling. - if up > 1: - px0 += (fw + up - 1) // 2 - px1 += (fw - up) // 2 - py0 += (fh + up - 1) // 2 - py1 += (fh - up) // 2 - if down > 1: - px0 += (fw - down + 1) // 2 - px1 += (fw - down) // 2 - py0 += (fh - down + 1) // 2 - py1 += (fh - down) // 2 - - # Fast path: 1x1 convolution with downsampling only => downsample first, then convolve. - if kw == 1 and kh == 1 and (down > 1 and up == 1): - x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, padding=[px0,px1,py0,py1], flip_filter=flip_filter) - x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight) - return x - - # Fast path: 1x1 convolution with upsampling only => convolve first, then upsample. - if kw == 1 and kh == 1 and (up > 1 and down == 1): - x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight) - x = upfirdn2d.upfirdn2d(x=x, f=f, up=up, padding=[px0,px1,py0,py1], gain=up**2, flip_filter=flip_filter) - return x - - # Fast path: downsampling only => use strided convolution. - if down > 1 and up == 1: - x = upfirdn2d.upfirdn2d(x=x, f=f, padding=[px0,px1,py0,py1], flip_filter=flip_filter) - x = _conv2d_wrapper(x=x, w=w, stride=down, groups=groups, flip_weight=flip_weight) - return x - - # Fast path: upsampling with optional downsampling => use transpose strided convolution. - if up > 1: - if groups == 1: - w = w.transpose(0, 1) - else: - w = w.reshape(groups, out_channels // groups, in_channels_per_group, kh, kw) - w = w.transpose(1, 2) - w = w.reshape(groups * in_channels_per_group, out_channels // groups, kh, kw) - px0 -= kw - 1 - px1 -= kw - up - py0 -= kh - 1 - py1 -= kh - up - pxt = max(min(-px0, -px1), 0) - pyt = max(min(-py0, -py1), 0) - x = _conv2d_wrapper(x=x, w=w, stride=up, padding=[pyt,pxt], groups=groups, transpose=True, flip_weight=(not flip_weight)) - x = upfirdn2d.upfirdn2d(x=x, f=f, padding=[px0+pxt,px1+pxt,py0+pyt,py1+pyt], gain=up**2, flip_filter=flip_filter) - if down > 1: - x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, flip_filter=flip_filter) - return x - - # Fast path: no up/downsampling, padding supported by the underlying implementation => use plain conv2d. - if up == 1 and down == 1: - if px0 == px1 and py0 == py1 and px0 >= 0 and py0 >= 0: - return _conv2d_wrapper(x=x, w=w, padding=[py0,px0], groups=groups, flip_weight=flip_weight) - - # Fallback: Generic reference implementation. - x = upfirdn2d.upfirdn2d(x=x, f=(f if up > 1 else None), up=up, padding=[px0,px1,py0,py1], gain=up**2, flip_filter=flip_filter) - x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight) - if down > 1: - x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, flip_filter=flip_filter) - return x - -#---------------------------------------------------------------------------- diff --git a/spaces/falterWliame/Face_Mask_Detection/Palme Ygs Lys Fizik Soru Bankas Cozumleri Pdf.md b/spaces/falterWliame/Face_Mask_Detection/Palme Ygs Lys Fizik Soru Bankas Cozumleri Pdf.md deleted file mode 100644 index 90197b0a6fe1637201b1a8eb2e13adbdd24cf6cb..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Palme Ygs Lys Fizik Soru Bankas Cozumleri Pdf.md +++ /dev/null @@ -1,8 +0,0 @@ -

    palme ygs lys fizik soru bankas cozumleri pdf


    Download Zip 🌟 https://urlca.com/2uDcmb



    -
    -7 Fem 2022 – 5 gün önce – Teog matematik soru bankası pdf - LGS PDF ARŞİVİ,Palme Ygs Lys Fizik, ygs matematik Fem Yayınları (pdf) Matematik 1 KA Fem . Fem 2020 - 5 gün önce - Teog matematik soru bankası pdf - LGS PDF ARŞİVİ,Palme Ygs Lys Fizik, ygs matematik Fem Yayınları (pdf) Matematik 1 KA Fem -Fem 2020 - 5 gün önce - Teog matematik soru bankası pdf - LGS PDF ARŞİVİ,Palme Ygs Lys Fizik, ygs matematik Fem Yayınları (pdf) Matematik 1 KA Fem -5 gün önce — Teog matem 8a78ff9644
    -
    -
    -

    diff --git a/spaces/fatiXbelha/sd/Download Zoom for Mac OS X 10.8 5 Tips and Tricks to Optimize Your Experience.md b/spaces/fatiXbelha/sd/Download Zoom for Mac OS X 10.8 5 Tips and Tricks to Optimize Your Experience.md deleted file mode 100644 index 82ea63b1f7265aebef0eff7acbe5e0f7266dc759..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Zoom for Mac OS X 10.8 5 Tips and Tricks to Optimize Your Experience.md +++ /dev/null @@ -1,171 +0,0 @@ -
    -

    How to Download Zoom for Mac OS X 10.8.5

    -

    Zoom is one of the most popular and reliable video conferencing and online meeting platforms in the world. It allows you to connect with anyone, anywhere, and anytime, with high-quality audio and video, chat, screen sharing, recording, and more.

    -

    If you have a Mac computer running OS X 10.8.5 (also known as Mountain Lion), you might be wondering how to download and use Zoom on your device. In this article, we will show you how to do that in a few easy steps.

    -

    download zoom for mac os x 10.8 5


    Download Zip ———>>> https://urllie.com/2uNwhE



    -

    Requirements for Zoom on Mac OS X 10.8.5

    -

    Before you download and install Zoom on your Mac, you need to make sure that your device meets the minimum system requirements for Zoom.

    -

    According to the official Zoom website, these are the requirements for Zoom on Mac:

    -
      -
    • Processor: An Intel processor (Core 2 Duo or higher)
    • -
    • Memory: At least 4 GB of RAM
    • -
    • Hard disk space: At least 700 MB of free space
    • -
    • Internet connection: Broadband wired or wireless (3G or 4G/LTE)
    • -
    • Webcam: Built-in or USB plug-in
    • -
    • Microphone and speakers: Built-in or USB plug-in or Bluetooth
    • -
    -

    In addition, you need to have a supported web browser to use the Zoom web client or to download the Zoom app from the website.

    -

    How to download zoom for mac os x 10.8 5
    -Download zoom client for mac os x 10.8 5
    -Zoom download for mac os x mountain lion
    -Zoom app download for mac os x 10.8 5
    -Download zoom meeting for mac os x 10.8 5
    -Zoom download mac os x 10.8 5 free
    -Zoom video conferencing download for mac os x 10.8 5
    -Download zoom cloud meetings for mac os x 10.8 5
    -Zoom download for mac os x 10.8 5 latest version
    -Download zoom for mac os x 10.8 5 from official website
    -Zoom download for mac os x 10.8 5 compatible
    -Download zoom for mac os x 10.8 5 without app store
    -Zoom download for mac os x 10.8 5 offline installer
    -Download zoom for mac os x 10.8 5 dmg file
    -Zoom download for mac os x 10.8 5 requirements
    -Download zoom for mac os x 10.8 5 with screen sharing
    -Zoom download for mac os x 10.8 5 tutorial
    -Download zoom for mac os x 10.8 5 step by step guide
    -Zoom download for mac os x 10.8 5 troubleshooting
    -Download zoom for mac os x 10.8 5 error message
    -Zoom download for mac os x 10.8 5 not working
    -Download zoom for mac os x 10.8 5 fix
    -Zoom download for mac os x 10.8 5 update
    -Download zoom for mac os x 10.8 5 new features
    -Zoom download for mac os x 10.8 5 review
    -Download zoom for mac os x 10.8 5 pros and cons
    -Zoom download for mac os x 10.8 5 alternatives
    -Download zoom for mac os x 10.8 5 comparison
    -Zoom download for mac os x 10.8 5 best practices
    -Download zoom for mac os x 10.8 5 tips and tricks
    -Zoom download for mac os x 10.8 5 security and privacy settings
    -Download zoom for mac os x 10.8 5 permissions and access
    -Zoom download for mac os x 10.8 5 microphone and camera settings
    -Download zoom for mac os x 10.8 5 audio and video quality
    -Zoom download for mac os x 10.8 5 bandwidth and speed test
    -Download zoom for mac os x 10.8 5 keyboard shortcuts and hotkeys
    -Zoom download for mac os x 10.8 5 customization and preferences
    -Download zoom for mac os x 10.8 5 integrations and plugins
    -Zoom download for mac os x 10.8 5 chat and messaging features
    -Download zoom for mac os x 10.8 5 breakout rooms and polls features
    -Zoom download for mac os x

    -

    The supported web browsers for Zoom on Mac are:

    -
      -
    • Safari 7 or later
    • -
    • Firefox 27 or later
    • -
    • Chrome 30 or later
    • -
    • Opera 12 or later
    • -
    -

    How to Download and Install Zoom on Mac OS X 10.8.5

    -

    There are two ways to download and install Zoom on your Mac OS X 10.8.5 device:

    -
      -
    1. From the official Zoom website
    2. -
    3. From the App Store
    4. -
    -

    We will explain both methods below.

    -

    From the official Zoom website

    -

    To download and install Zoom from the official website, follow these steps:

    -
      -
    1. Open your web browser and go to https://zoom.us/download?os=mac.
    2. -
    3. Click on the Download button under Zoom Client for MeetingsSave the file to your computer and open it.
    4. -
    5. Follow the instructions on the screen to install Zoom on your Mac.
    6. -
    7. Launch the Zoom app and sign in with your Zoom account or join a meeting as a guest.
    8. -
    -

    From the App Store

    -

    To download and install Zoom from the App Store, follow these steps:

    -
      -
    1. Open the App Store on your Mac and search for Zoom Cloud Meetings.
    2. -
    3. Click on the Get button and then on the Install App button.
    4. -
    5. Enter your Apple ID and password if prompted.
    6. -
    7. Wait for the app to download and install on your Mac.
    8. -
    9. Launch the Zoom app and sign in with your Zoom account or join a meeting as a guest.
    10. -
    -

    How to Join or Host a Zoom Meeting on Mac OS X 10.8.5

    -

    Once you have downloaded and installed Zoom on your Mac, you can join or host a Zoom meeting using either the Zoom app or the web client.

    -

    We will explain both options below.

    -

    Using the Zoom app

    -

    To join or host a Zoom meeting using the Zoom app, follow these steps:

    -
      -
    1. Launch the Zoom app on your Mac.
    2. -
    3. If you want to join a meeting, click on the Join a Meeting button and enter the meeting ID and password (if required). You can also enter your name and choose your audio and video settings before joining.
    4. -
    5. If you want to host a meeting, click on the New Meeting button and choose whether you want to start with video on or off. You can also invite others to join your meeting by clicking on the Invite button and choosing your preferred method of sending invitations (email, contacts, copy URL, etc.).
    6. -
    7. To end the meeting, click on the End Meeting button and choose whether you want to leave or end the meeting for all participants.
    8. -
    -

    Using the web client

    -

    To join or host a Zoom meeting using the web client, follow these steps:

    -
      -
    1. Open your web browser and go to https://zoom.us/join.
    2. -
    3. If you want to join a meeting, enter the meeting ID and password (if required) and click on the Join button. You may need to download a small plugin to use the web client. You can also enter your name and choose your audio and video settings before joining.
    4. -
    5. If you want to host a meeting, sign in with your Zoom account and click on the Host a Meeting button. Choose whether you want to start with video on or off. You can also invite others to join your meeting by clicking on the Invite button and choosing your preferred method of sending invitations (email, contacts, copy URL, etc.).
    6. -
    7. To end the meeting, click on the End Meeting button and choose whether you want to leave or end the meeting for all participants.
    8. -
    -

    How to Update Zoom on Mac OS X 10.8.5

    -

    To ensure that you have the latest features and security updates for Zoom on your Mac, you should check for and install updates regularly.

    -

    You can update Zoom using either the Zoom app or the website.

    -

    We will explain both methods below.

    -

    Using the Zoom app

    -

    To update Zoom using the Zoom app, follow these steps:

    -
      -
    1. Launch the Zoom app on your Mac.
    2. -
    3. Click on the zoom.us menu at the top left corner of your screen and select Check for Updates....
    4. -
    5. If there is an update available, click on the Update Now button and wait for the update to download and install.
    6. -
    7. If there is no update available, you will see a message saying that you are up to date.
    8. -
    -

    Using the website

    -

    To update Zoom using the website, follow these steps:

    -
      -
    1. Open your web browser and go to https://zoom.us/download?os=mac.
    2. -
    3. If there is an update available, you will see a message saying that there is a newer version of Zoom available. Click on the New Update Available! -
    4. If there is no update available, you will see a message saying that you have the latest version of Zoom.
    5. -
    -

    Tips and Tricks for Using Zoom on Mac OS X 10.8.5

    -

    Now that you know how to download, install, and update Zoom on your Mac OS X 10.8.5 device, you might want to learn some tips and tricks for using Zoom more effectively and efficiently.

    -

    Here are some of the best tips and tricks for using Zoom on Mac OS X 10.8.5:

    -
      -
    • Use keyboard shortcuts to quickly perform common actions, such as muting/unmuting your audio, starting/stopping your video, switching views, raising your hand, and more. You can find a list of keyboard shortcuts for Zoom on Mac here.
    • -
    • Use screen sharing to share your entire screen, a specific window, or a portion of your screen with other participants. You can also annotate your screen with tools like drawing, highlighting, and text. To start screen sharing, click on the Share Screen button in the meeting toolbar and choose what you want to share.
    • -
    • Use recording to save a copy of your meeting or webinar for future reference or sharing. You can record locally on your computer or in the cloud (if you have a paid Zoom account). To start recording, click on the Record button in the meeting toolbar and choose where you want to save the recording.
    • -
    • Use security settings to protect your meeting from unwanted guests or disruptions. You can lock your meeting, enable a waiting room, require a password, disable chat, mute participants, and more. To access the security settings, click on the Security button in the meeting toolbar and choose what you want to enable or disable.
    • -
    -

    Conclusion

    -

    Zoom is a great tool for video conferencing and online meetings on your Mac OS X 10.8.5 device. It is easy to download, install, update, and use Zoom on your Mac with just a few clicks.

    -

    In this article, we have shown you how to do all that and also shared some useful tips and tricks for using Zoom more effectively and efficiently.

    -

    We hope you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below.

    -

    Now that you know how to use Zoom on your Mac OS X 10.8.5 device, why not give it a try and see for yourself how it can enhance your communication and collaboration?

    -

    FAQs

    -

    Here are some of the most frequently asked questions about Zoom on Mac OS X 10.8.5:

    -

    Q: Is Zoom compatible with Mac OS X 10.8.5?

    -

    A: Yes, Zoom is compatible with Mac OS X 10.8.5 as long as your device meets the minimum system requirements and has a supported web browser.

    -

    Q: How do I uninstall Zoom from my Mac OS X 10.8.5 device?

    -

    A: To uninstall Zoom from your Mac OS X 10.8.5 device, follow these steps:

    -
      -
    1. Quit the Zoom app if it is running.
    2. -
    3. Open the Finder and go to the Applications folder.
    4. -
    5. Drag the Zoom app icon to the Trash.
    6. -
    7. Empty the Trash.
    8. -
    -

    Q: How do I troubleshoot Zoom issues on my Mac OS X 10.8.5 device?

    -

    A: If you encounter any issues with Zoom on your Mac OS X 10.8.5 device, such as audio or video problems, connection issues, or error messages, you can try the following solutions:

    -
      -
    • Check your internet connection and make sure it is stable and fast enough for Zoom.
    • -
    • Check your audio and video settings and make sure they are configured correctly for Zoom.
    • -
    • Check for updates and make sure you have the latest version of Zoom installed on your device.
    • -
    • Restart your device and try again.
    • -
    • Contact Zoom support or visit their help center for more assistance.
    • -
    -

    Q: What are some alternatives to Zoom for video conferencing and online meetings on Mac OS X 10.8.5?

    -

    A: If you are looking for some alternatives to Zoom for video conferencing and online meetings on Mac OS X 10.8.5, you can try these options: -

  • Google Meet: This is a free and easy-to-use video conferencing service from Google that allows you to host or join meetings with up to 250 participants, share your screen, chat, and record meetings. You can use it from your web browser or download the app for your Mac. You need a Google account to use Google Meet.
  • -
  • Microsoft Teams: This is a powerful and versatile video conferencing platform that is part of the Microsoft 365 suite of products. It allows you to host or join meetings with up to 300 participants, chat, collaborate, share files, and integrate with other Microsoft apps. You can use it from your web browser or download the app for your Mac. You need a Microsoft account and a subscription to use Microsoft Teams.
  • -
  • Skype: This is one of the oldest and most popular video calling services in the world. It allows you to make free video calls with up to 50 participants, chat, send files, and use fun features like emojis, stickers, and filters. You can use it from your web browser or download the app for your Mac. You need a Skype account to use Skype.
  • - -

    These are just some of the alternatives to Zoom for video conferencing and online meetings on Mac OS X 10.8.5. You can also try other options like Cisco Webex, GoToMeeting, FaceTime, and more.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/losses/color_transfer_loss.py b/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/losses/color_transfer_loss.py deleted file mode 100644 index febfb5db954078c0839c93a3dd11a86451839c8c..0000000000000000000000000000000000000000 --- a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/losses/color_transfer_loss.py +++ /dev/null @@ -1,60 +0,0 @@ -from typing import List, Optional - -import torch -from torch import nn -from torch.nn.functional import ( - smooth_l1_loss, -) - - -def flatten_CHW(im: torch.Tensor) -> torch.Tensor: - """ - (B, C, H, W) -> (B, -1) - """ - B = im.shape[0] - return im.reshape(B, -1) - - -def stddev(x: torch.Tensor) -> torch.Tensor: - """ - x: (B, -1), assume with mean normalized - Retuens: - stddev: (B) - """ - return torch.sqrt(torch.mean(x * x, dim=-1)) - - -def gram_matrix(input_): - B, C = input_.shape[:2] - features = input_.view(B, C, -1) - N = features.shape[-1] - G = torch.bmm(features, features.transpose(1, 2)) # C x C - return G.div(C * N) - - -class ColorTransferLoss(nn.Module): - """Penalize the gram matrix difference between StyleGAN2's ToRGB outputs""" - def __init__( - self, - init_rgbs, - scale_rgb: bool = False - ): - super().__init__() - - with torch.no_grad(): - init_feats = [x.detach() for x in init_rgbs] - self.stds = [stddev(flatten_CHW(rgb)) if scale_rgb else 1 for rgb in init_feats] # (B, 1, 1, 1) or scalar - self.grams = [gram_matrix(rgb / std) for rgb, std in zip(init_feats, self.stds)] - - def forward(self, rgbs: List[torch.Tensor], level: int = None): - if level is None: - level = len(self.grams) - - feats = rgbs - loss = 0 - for i, (rgb, std) in enumerate(zip(feats[:level], self.stds[:level])): - G = gram_matrix(rgb / std) - loss = loss + smooth_l1_loss(G, self.grams[i]) - - return loss - diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Crowd Evolution! - How to Play and Download APK for Free.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Crowd Evolution! - How to Play and Download APK for Free.md deleted file mode 100644 index eb5bf398696f02136f1565497177b26cefe88657..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Crowd Evolution! - How to Play and Download APK for Free.md +++ /dev/null @@ -1,128 +0,0 @@ - -

    Crowd Evolution Apkpure: A Fun and Addictive Game for Android Users

    -

    If you are looking for a fun and addictive game to play on your Android device, you might want to check out Crowd Evolution Apkpure. This is a game where you can grow and evolve your crowd and beat the enemies in various levels. In this article, we will tell you what Crowd Evolution Apkpure is, how to download and install it on your Android device, and how to play it and enjoy its gameplay.

    -

    crowd evolution apkpure


    Downloadhttps://gohhs.com/2uPnE5



    -

    What is Crowd Evolution Apkpure?

    -

    Crowd Evolution Apkpure is a game developed by Rollic Games, a popular game studio that has created many hit games such as Go Knots 3D, Tangle Master 3D, High Heels!, and more. Crowd Evolution Apkpure is one of their latest games that has been downloaded by millions of users from all over the world.

    -

    The concept of Crowd Evolution

    -

    The concept of Crowd Evolution is simple but exciting. You start with a small crowd of people, and you have to collect more people along the way to grow your crowd. You also have to avoid or fight the enemies that try to stop you or reduce your crowd size. The bigger your crowd is, the more powerful you become. You can also evolve your crowd into different shapes and forms, such as animals, vehicles, robots, etc. The goal is to reach the end of each level with the biggest and strongest crowd possible.

    -

    The features of Crowd Evolution Apkpure

    -

    Crowd Evolution Apkpure has many features that make it a fun and addictive game to play. Some of these features are:

    -
      -
    • It has colorful and vivid graphics that create a lively and immersive atmosphere.
    • -
    • It has simple and intuitive controls that make it easy to play for anyone.
    • -
    • It has hundreds of levels with different themes, obstacles, enemies, and evolutions.
    • -
    • It has a variety of crowds that you can unlock and customize with different skins and accessories.
    • -
    • It has a leaderboard that shows your rank among other players around the world.
    • -
    • It has a smooth and fast performance that ensures a satisfying gaming experience.
    • -
    -

    How to download and install Crowd Evolution Apkpure on your Android device?

    -

    If you want to play Crowd Evolution Apkpure on your Android device, you need to download and install it from a reliable source. One of the best sources to download Crowd Evolution Apkpure is APKPure.com, a website that provides safe and free APK files for Android users.

    -

    crowd evolution apk download
    -crowd evolution mod apk
    -crowd evolution game online
    -crowd evolution app store
    -crowd evolution android
    -crowd evolution ios
    -crowd evolution latest version
    -crowd evolution hack apk
    -crowd evolution gameplay
    -crowd evolution rollic games
    -crowd evolution free download
    -crowd evolution unlimited money
    -crowd evolution tips and tricks
    -crowd evolution review
    -crowd evolution cheats
    -crowd evolution pc
    -crowd evolution update
    -crowd evolution google play
    -crowd evolution best strategy
    -crowd evolution levels
    -crowd evolution apk mirror
    -crowd evolution no ads
    -crowd evolution walkthrough
    -crowd evolution reddit
    -crowd evolution wiki
    -crowd evolution apk mod menu
    -crowd evolution old version
    -crowd evolution xapk
    -crowd evolution install
    -crowd evolution guide
    -crowd evolution offline
    -crowd evolution similar games
    -crowd evolution how to play
    -crowd evolution fun mode
    -crowd evolution rating
    -crowd evolution size
    -crowd evolution requirements
    -crowd evolution obb file
    -crowd evolution screenshots
    -crowd evolution video

    -

    The steps to download and install Crowd Evolution Apkpure from APKPure.com

    -

    The steps to download and install Crowd Evolution Apkpure from APKPure.com are very easy and straightforward. Here are the steps:

    -
      -
    1. Go to APKPure.com on your browser.
    2. -
    3. Search for "Crowd Evolution" in the search bar.
    4. -
    5. Select the game from the search results.
    6. -
    7. Click on the "Download APK" button.
    8. -
    9. Wait for the APK file to be downloaded on your device.
    10. -
    11. Open the APK file and follow the instructions to install the game.
    12. -
    13. Launch the game and enjoy!
    14. -
    -

    The benefits of using APKPure.com to download Crowd Evolution Apkpure

    -

    There are many benefits of using APKPure.com to download Crowd Evolution Apkpure from APKPure.com. Some of these benefits are:

    -
      -
    • You can download the latest version of the game without any delay or hassle.
    • -
    • You can download the game without any registration or login required.
    • -
    • You can download the game without any ads or malware that might harm your device.
    • -
    • You can download the game without any region restrictions or compatibility issues.
    • -
    • You can download the game with a smaller file size than the Google Play Store version, which saves your storage space and data usage.
    • -
    -

    How to play Crowd Evolution Apkpure and enjoy its gameplay?

    -

    Now that you have downloaded and installed Crowd Evolution Apkpure on your Android device, you are ready to play it and enjoy its gameplay. Here are some tips on how to play Crowd Evolution Apkpure and have fun with it.

    -

    The basic rules and controls of Crowd Evolution Apkpure

    -

    The basic rules and controls of Crowd Evolution Apkpure are very simple and easy to learn. Here are the basic rules and controls:

    -
      -
    • You need to swipe left or right on the screen to move your crowd horizontally.
    • -
    • You need to collect more people along the way to grow your crowd size and power.
    • -
    • You need to avoid or fight the enemies that try to stop you or reduce your crowd size.
    • -
    • You need to reach the end of each level with the biggest and strongest crowd possible.
    • -
    • You need to tap on the screen to evolve your crowd into different shapes and forms, such as animals, vehicles, robots, etc.
    • -
    -

    The tips and tricks to grow and evolve your crowd and beat the enemies

    -

    Here are some tips and tricks that can help you grow and evolve your crowd and beat the enemies in Crowd Evolution Apkpure:

    -
      -
    • Try to collect as many people as possible along the way, especially those with special abilities or bonuses.
    • -
    • Try to avoid the enemies that have bigger or stronger crowds than yours, as they can easily defeat you or reduce your crowd size.
    • -
    • Try to fight the enemies that have smaller or weaker crowds than yours, as they can give you more people or power-ups.
    • -
    • Try to evolve your crowd into different shapes and forms that suit the situation, such as animals that can run faster, vehicles that can crash through obstacles, robots that can shoot lasers, etc.
    • -
    • Try to use the environment to your advantage, such as ramps, bridges, tunnels, traps, etc.
    • -
    -

    The challenges and rewards of Crowd Evolution Apkpure

    -

    Crowd Evolution Apkpure is not only a fun and addictive game, but also a challenging and rewarding one. Here are some of the challenges and rewards of Crowd Evolution Apkpure:

    -
      -
    • The game has hundreds of levels with different themes, obstacles, enemies, and evolutions that test your skills and strategy.
    • -
    • The game has a leaderboard that shows your rank among other players around the world, which motivates you to improve your performance and compete with others.
    • -
    • The game has a variety of crowds that you can unlock and customize with different skins and accessories, which adds more fun and variety to the game.
    • -
    -

    Conclusion

    -

    Crowd Evolution Apkpure is a fun and addictive game for Android users that lets you grow and evolve your crowd and beat the enemies in various levels. You can download and install Crowd Evolution Apkpure from APKPure.com, a reliable source that provides safe and free APK files for Android users. You can also play Crowd Evolution Apkpure with simple and intuitive controls, colorful and vivid graphics, smooth and fast performance, hundreds of levels, a variety of crowds, a leaderboard, and more. If you are looking for a game that is easy to play but hard to master, Crowd Evolution Apkpure is the game for you. Download it now and enjoy!

    -

    FAQs

    -

    Here are some frequently asked questions about Crowd Evolution Apkpure:

    -

    Q: Is Crowd Evolution Apkpure free to play?

    -

    A: Yes, Crowd Evolution Apkpure is free to play. However, it may contain some in-app purchases or ads that you can choose to buy or watch if you want to support the developers or get some extra benefits.

    -

    Q: Is Crowd Evolution Apkpure safe to download and install?

    -

    A: Yes, Crowd Evolution Apkpure is safe to download and install if you use APKPure.com as your source. APKPure.com is a trusted website that provides safe and free APK files for Android users. It does not contain any viruses or malware that might harm your device or compromise your privacy. You can also scan the APK file with your antivirus software before installing it to ensure its safety.

    -

    Q: How can I update Crowd Evolution Apkpure to the latest version?

    -

    A: You can update Crowd Evolution Apkpure to the latest version by visiting APKPure.com and downloading the new APK file. You can also enable the auto-update feature on APKPure.com, which will notify you when a new version is available and let you download and install it with one click.

    -

    Q: What are the minimum requirements to play Crowd Evolution Apkpure on my Android device?

    -

    A: The minimum requirements to play Crowd Evolution Apkpure on your Android device are:

    -
      -
    • Android 5.0 or higher
    • -
    • At least 100 MB of free storage space
    • -
    • A stable internet connection
    • -
    -

    Q: How can I contact the developers of Crowd Evolution Apkpure if I have any questions, feedback, or issues?

    -

    A: You can contact the developers of Crowd Evolution Apkpure by sending an email to support@rollicgames.com. You can also visit their website at https://www.rollicgames.com/ or follow them on their social media accounts at https://www.facebook.com/rollicgames and https://www.instagram.com/rollicgames.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/fffiloni/MS-Image2Video/share_btn.py b/spaces/fffiloni/MS-Image2Video/share_btn.py deleted file mode 100644 index 52a200db1e71b0b5655bb7b61e4046baf945a224..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/MS-Image2Video/share_btn.py +++ /dev/null @@ -1,85 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - - async function getInputImgFile(imgEl){ - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const isPng = imgEl.src.startsWith(`data:image/png`); - if(isPng){ - const fileName = `sd-perception-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - }else{ - const fileName = `sd-perception-${{imgId}}.jpg`; - return new File([blob], fileName, { type: 'image/jpeg' }); - } - } - - async function getVideoBlobFile(videoEL){ - const res = await fetch(videoEL.src); - const blob = await res.blob(); - const videoId = Date.now() % 200; - const fileName = `ms-image2video-${{videoId}}.mp4`; - const videoBlob = new File([blob], fileName, { type: 'video/mp4' }); - console.log(videoBlob); - return videoBlob; - } - - const gradioEl = document.querySelector("gradio-app").shadowRoot || document.querySelector('body > gradio-app'); - const inputImgEl = gradioEl.querySelector('#image-in img'); - const outputVideo = gradioEl.querySelector('#video-out video'); - - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - if(!outputVideo){ - return; - }; - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - - const inputFile = await getInputImgFile(inputImgEl); - const urlInputImg = await uploadFile(inputFile); - const videoOutFile = await getVideoBlobFile(outputVideo); - const dataOutputVid = await uploadFile(videoOutFile); - - const descriptionMd = ` -#### Image init: - - -#### MS Image2Video result: -${dataOutputVid} -`; - const params = new URLSearchParams({ - title: "Please provide a title :)", - description: descriptionMd, - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/fffiloni/MS-Image2Video/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/flowers-team/Interactive_DeepRL_Demo/js/utils/custom_user_data.js b/spaces/flowers-team/Interactive_DeepRL_Demo/js/utils/custom_user_data.js deleted file mode 100644 index 57cf75edb3581dcd138e2c16874c5e9b2a528edf..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/Interactive_DeepRL_Demo/js/utils/custom_user_data.js +++ /dev/null @@ -1,82 +0,0 @@ -let CustomUserDataObjectTypes = { - BODY_OBJECT: 0, - WATER: 1, - TERRAIN: 2, - GRIP_TERRAIN: 3, - MOTOR: 4, - BODY_SENSOR: 5, - SENSOR_GRIP_TERRAIN:6, -}; - -/** - * @classdesc Class used to store data about different objects. - */ -class CustomUserData{ - /** - * @constructor - * @param name {string} - * @param object_type {number} - */ - constructor(name, object_type){ - this.name = name; - this.object_type = object_type; - } -} - -/** - * @classdesc Motor user data class. - */ -class CustomMotorUserData extends CustomUserData{ - /** - * @constructor - * @param name {string} - * @param speed_control {number} - * @param check_contact {boolean} - * @param angle_correction {number} - * @param contact_body {Object} - */ - constructor(name, speed_control, check_contact, angle_correction=0.0, contact_body=null){ - super(name, CustomUserDataObjectTypes.MOTOR); - this.speed_control = speed_control; - this.check_contact = check_contact; - this.angle_correction = angle_correction; - this.contact_body = contact_body; - } -} - -/** - * @classdesc Body user data class. - */ -class CustomBodyUserData extends CustomUserData{ - /** - * @constructor - * @param check_contact {boolean} - * @param is_contact_critical {boolean} - * @param name {string} - * @param object_type {number} - */ - constructor(check_contact, is_contact_critical=false, - name="body_part", object_type=CustomUserDataObjectTypes.BODY_OBJECT){ - super(name, object_type); - this.check_contact = check_contact; - this.is_contact_critical = is_contact_critical; - this.has_contact = false; - } -} - -/** - * @classdesc Sensor user data class. - */ -class CustomBodySensorUserData extends CustomBodyUserData{ - /** - * @constructor - * @param check_contact {boolean} - * @param is_contact_critical {boolean} - * @param name {string} - */ - constructor(check_contact, is_contact_critical=false, name="body_part"){ - super(check_contact, is_contact_critical, name, CustomUserDataObjectTypes.BODY_SENSOR); - this.has_joint = false; - this.ready_to_attach = false; - } -} diff --git a/spaces/freddyaboulton/echo-chatbot-gradio-discord-bot/app.py b/spaces/freddyaboulton/echo-chatbot-gradio-discord-bot/app.py deleted file mode 100644 index 9fc42d7f4bfb9bfebee64b8beb46cbb04ba3b673..0000000000000000000000000000000000000000 --- a/spaces/freddyaboulton/echo-chatbot-gradio-discord-bot/app.py +++ /dev/null @@ -1,210 +0,0 @@ -import asyncio -import os -import threading -from threading import Event -from typing import Optional - -import discord -import gradio as gr -from discord import Permissions -from discord.ext import commands -from discord.utils import oauth_url - -import gradio_client as grc -from gradio_client.utils import QueueError - -event = Event() - -DISCORD_TOKEN = os.getenv("DISCORD_TOKEN") - - -async def wait(job): - while not job.done(): - await asyncio.sleep(0.2) - - -def get_client(session: Optional[str] = None) -> grc.Client: - client = grc.Client("https://freddyaboulton-echo-chatbot.hf.space", hf_token=os.getenv("HF_TOKEN")) - if session: - client.session_hash = session - return client - - -def truncate_response(response: str) -> str: - ending = "...\nTruncating response to 2000 characters due to discord api limits." - if len(response) > 2000: - return response[: 2000 - len(ending)] + ending - else: - return response - - -intents = discord.Intents.default() -intents.message_content = True -bot = commands.Bot(command_prefix="/", intents=intents) - - -@bot.event -async def on_ready(): - print(f"Logged in as {bot.user} (ID: {bot.user.id})") - synced = await bot.tree.sync() - print(f"Synced commands: {', '.join([s.name for s in synced])}.") - event.set() - print("------") - - -thread_to_client = {} -thread_to_user = {} - - -# @bot.command() -# @commands.is_owner() -# async def sync(ctx) -> None: -# synced = await bot.tree.sync() -# await ctx.send(f"Synced commands: {', '.join([s.name for s in synced])}.") - - -@bot.hybrid_command( - name="echo", - description="Enter some text to chat with the bot! Like this: /echo Hello, how are you?", -) -async def chat(ctx, prompt: str): - if ctx.author.id == bot.user.id: - return - try: - message = await ctx.send("Creating thread...") - - # User triggered bot via !echo - if ctx.message.content: - prompt = ctx.message.content.replace( - f"{bot.command_prefix}echo", "" - ).strip() - - thread = await message.create_thread(name=prompt) - loop = asyncio.get_running_loop() - client = await loop.run_in_executor(None, get_client, None) - job = client.submit(prompt, api_name="/chat") - await wait(job) - - try: - job.result() - response = job.outputs()[-1] - await thread.send(truncate_response(response)) - thread_to_client[thread.id] = client - thread_to_user[thread.id] = ctx.author.id - except QueueError: - await thread.send( - "The gradio space powering this bot is really busy! Please try again later!" - ) - - except Exception as e: - print(f"{e}") - - -async def continue_chat(message): - """Continues a given conversation based on chathistory""" - try: - client = thread_to_client[message.channel.id] - prompt = message.content - job = client.submit(prompt, api_name="/chat") - await wait(job) - try: - job.result() - response = job.outputs()[-1] - await message.reply(truncate_response(response)) - except QueueError: - await message.reply( - "The gradio space powering this bot is really busy! Please try again later!" - ) - - except Exception as e: - print(f"Error: {e}") - - -@bot.event -async def on_message(message): - """Continue the chat""" - try: - if not message.author.bot: - if message.channel.id in thread_to_user: - if thread_to_user[message.channel.id] == message.author.id: - await continue_chat(message) - else: - await bot.process_commands(message) - - except Exception as e: - print(f"Error: {e}") - - -# running in thread -def run_bot(): - if not DISCORD_TOKEN: - print("DISCORD_TOKEN NOT SET") - event.set() - else: - bot.run(DISCORD_TOKEN) - - -threading.Thread(target=run_bot).start() - -event.wait() - -if not DISCORD_TOKEN: - welcome_message = """ - - ## You have not specified a DISCORD_TOKEN, which means you have not created a bot account. Please follow these steps: - - ### 1. Go to https://discord.com/developers/applications and click 'New Application' - - ### 2. Give your bot a name 🤖 - - ![](https://gradio-builds.s3.amazonaws.com/demo-files/discordbots/BotName.png) - - ## 3. In Settings > Bot, click the 'Reset Token' button to get a new token. Write it down and keep it safe 🔐 - - ![](https://gradio-builds.s3.amazonaws.com/demo-files/discordbots/ResetToken.png) - - ## 4. Optionally make the bot public if you want anyone to be able to add it to their servers - - ## 5. Scroll down and enable 'Message Content Intent' under 'Priviledged Gateway Intents' - - ![](https://gradio-builds.s3.amazonaws.com/demo-files/discordbots/MessageContentIntent.png) - - ## 6. Save your changes! - - ## 7. The token from step 3 is the DISCORD_TOKEN. Rerun the deploy_discord command, e.g client.deploy_discord(discord_bot_token=DISCORD_TOKEN, ...), or add the token as a space secret manually. -""" -else: - permissions = Permissions(326417525824) - url = oauth_url(bot.user.id, permissions=permissions) - welcome_message = f""" - ## Add this bot to your server by clicking this link: - - {url} - - ## How to use it? - - The bot can be triggered via `!echo` followed by your text prompt. - - ## Enabling slash commands - - If you are the owner of this bot, call the `!sync` command from your discord server. - This will allow anyone in your server to call the bot via `/echo`. - This is known as a slash command and is a nicer experience than calling the bot via `!echo`. - - After calling `!sync`, it may take a few minutes for `/echo` to be recognized as a valid command - in your server. - - ⚠️ Note ⚠️: It is best to create a separate bot per command if you intend to use slash commands. Also make sure - none of your bots have matching command names. - """ - - -with gr.Blocks() as demo: - gr.Markdown( - f""" - # Discord bot of https://freddyaboulton-echo-chatbot.hf.space - {welcome_message} - """ - ) - -demo.launch() diff --git a/spaces/georeactor/code-probability-of-injection/app.py b/spaces/georeactor/code-probability-of-injection/app.py deleted file mode 100644 index 0ecced6520f6dba3745a9b5d35569a12deb69bc8..0000000000000000000000000000000000000000 --- a/spaces/georeactor/code-probability-of-injection/app.py +++ /dev/null @@ -1,108 +0,0 @@ -import gradio as gr -import torch -import ecco -import requests -from transformers import AutoTokenizer -from torch.nn import functional as F - -header = """ -import psycopg2 - -conn = psycopg2.connect("CONN") -cur = conn.cursor() - -MIDDLE -def rename_customer(id, newName):\n\t# PROMPT\n\tcur.execute("UPDATE customer SET name = -""" - -modelPath = { - # "GPT2-Medium": "gpt2-medium", - "CodeParrot-mini": "codeparrot/codeparrot-small", - # "CodeGen-350-Mono": "Salesforce/codegen-350M-mono", - # "GPT-Neo-1.3B": "EleutherAI/gpt-neo-1.3B", - "CodeParrot": "codeparrot/codeparrot", - # "CodeGen-2B-Mono": "Salesforce/codegen-2B-mono", -} - -preloadModels = {} -for m in list(modelPath.keys()): - preloadModels[m] = ecco.from_pretrained(modelPath[m]) - -def generation(tokenizer, model, content): - decoder = 'Standard' - num_beams = 2 if decoder == 'Beam' else None - typical_p = 0.8 if decoder == 'Typical' else None - do_sample = (decoder in ['Beam', 'Typical', 'Sample']) - - seek_token_ids = [ - tokenizer.encode('= \'" +')[1:], - tokenizer.encode('= " +')[1:], - ] - - full_output = model.generate(content, generate=6, do_sample=False) - - def next_words(code, position, seek_token_ids): - op_model = model.generate(code, generate=1, do_sample=False) - hidden_states = op_model.hidden_states - layer_no = len(hidden_states) - 1 - h = hidden_states[-1] - hidden_state = h[position - 1] - logits = op_model.lm_head(op_model.to(hidden_state)) - softmax = F.softmax(logits, dim=-1) - my_token_prob = softmax[seek_token_ids[0]] - - if len(seek_token_ids) > 1: - newprompt = code + tokenizer.decode(seek_token_ids[0]) - return my_token_prob * next_words(newprompt, position + 1, seek_token_ids[1:]) - return my_token_prob - - prob = 0 - for opt in seek_token_ids: - prob += next_words(content, len(tokenizer(content)['input_ids']), opt) - return ["".join(full_output.tokens), str(prob.item() * 100) + '% chance of risky concatenation'] - -def code_from_prompts(prompt, model, type_hints, pre_content): - tokenizer = AutoTokenizer.from_pretrained(modelPath[model]) - # model = ecco.from_pretrained(modelPath[model]) - model = preloadModels[model] - - code = header.strip().replace('CONN', "dbname='store'").replace('PROMPT', prompt) - - if type_hints: - code = code.replace('id,', 'id: int,') - code = code.replace('id)', 'id: int)') - code = code.replace('newName)', 'newName: str) -> None') - - if pre_content == 'None': - code = code.replace('MIDDLE\n', '') - elif 'Concatenation' in pre_content: - code = code.replace('MIDDLE', """ -def get_customer(id):\n\tcur.execute('SELECT * FROM customers WHERE id = ' + str(id))\n\treturn cur.fetchall() -""".strip() + "\n") - elif 'composition' in pre_content: - code = code.replace('MIDDLE', """ -def get_customer(id):\n\tcur.execute('SELECT * FROM customers WHERE id = %s', str(id))\n\treturn cur.fetchall() -""".strip() + "\n") - - results = generation(tokenizer, model, code) - return results - -iface = gr.Interface( - fn=code_from_prompts, - inputs=[ - gr.components.Textbox(label="Insert comment"), - gr.components.Radio(list(modelPath.keys()), label="Code Model"), - gr.components.Checkbox(label="Include type hints"), - gr.components.Radio([ - "None", - "Proper composition: Include function 'WHERE id = %s'", - "Concatenation: Include a function with 'WHERE id = ' + id", - ], label="Has user already written a function?") - ], - outputs=[ - gr.components.Textbox(label="Most probable code"), - gr.components.Textbox(label="Probability of concat"), - ], - description="Prompt the code model to write a SQL query with string concatenation.", -) -iface.launch() diff --git a/spaces/gligen/demo/gligen/ldm/modules/distributions/distributions.py b/spaces/gligen/demo/gligen/ldm/modules/distributions/distributions.py deleted file mode 100644 index f2b8ef901130efc171aa69742ca0244d94d3f2e9..0000000000000000000000000000000000000000 --- a/spaces/gligen/demo/gligen/ldm/modules/distributions/distributions.py +++ /dev/null @@ -1,92 +0,0 @@ -import torch -import numpy as np - - -class AbstractDistribution: - def sample(self): - raise NotImplementedError() - - def mode(self): - raise NotImplementedError() - - -class DiracDistribution(AbstractDistribution): - def __init__(self, value): - self.value = value - - def sample(self): - return self.value - - def mode(self): - return self.value - - -class DiagonalGaussianDistribution(object): - def __init__(self, parameters, deterministic=False): - self.parameters = parameters - self.mean, self.logvar = torch.chunk(parameters, 2, dim=1) - self.logvar = torch.clamp(self.logvar, -30.0, 20.0) - self.deterministic = deterministic - self.std = torch.exp(0.5 * self.logvar) - self.var = torch.exp(self.logvar) - if self.deterministic: - self.var = self.std = torch.zeros_like(self.mean).to(device=self.parameters.device) - - def sample(self): - x = self.mean + self.std * torch.randn(self.mean.shape).to(device=self.parameters.device) - return x - - def kl(self, other=None): - if self.deterministic: - return torch.Tensor([0.]) - else: - if other is None: - return 0.5 * torch.sum(torch.pow(self.mean, 2) - + self.var - 1.0 - self.logvar, - dim=[1, 2, 3]) - else: - return 0.5 * torch.sum( - torch.pow(self.mean - other.mean, 2) / other.var - + self.var / other.var - 1.0 - self.logvar + other.logvar, - dim=[1, 2, 3]) - - def nll(self, sample, dims=[1,2,3]): - if self.deterministic: - return torch.Tensor([0.]) - logtwopi = np.log(2.0 * np.pi) - return 0.5 * torch.sum( - logtwopi + self.logvar + torch.pow(sample - self.mean, 2) / self.var, - dim=dims) - - def mode(self): - return self.mean - - -def normal_kl(mean1, logvar1, mean2, logvar2): - """ - source: https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/losses.py#L12 - Compute the KL divergence between two gaussians. - Shapes are automatically broadcasted, so batches can be compared to - scalars, among other use cases. - """ - tensor = None - for obj in (mean1, logvar1, mean2, logvar2): - if isinstance(obj, torch.Tensor): - tensor = obj - break - assert tensor is not None, "at least one argument must be a Tensor" - - # Force variances to be Tensors. Broadcasting helps convert scalars to - # Tensors, but it does not work for torch.exp(). - logvar1, logvar2 = [ - x if isinstance(x, torch.Tensor) else torch.tensor(x).to(tensor) - for x in (logvar1, logvar2) - ] - - return 0.5 * ( - -1.0 - + logvar2 - - logvar1 - + torch.exp(logvar1 - logvar2) - + ((mean1 - mean2) ** 2) * torch.exp(-logvar2) - ) diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Gunehgaar 7 days [1995-MP3-VBR-320Kbps] Enjoy the Hits of Kumar Sanu and Alka Yagnik.md b/spaces/gotiQspiryo/whisper-ui/examples/Gunehgaar 7 days [1995-MP3-VBR-320Kbps] Enjoy the Hits of Kumar Sanu and Alka Yagnik.md deleted file mode 100644 index a6a18f01e25f464f7274813b4bfa0dbc2fd3d332..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Gunehgaar 7 days [1995-MP3-VBR-320Kbps] Enjoy the Hits of Kumar Sanu and Alka Yagnik.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Gunehgaar 7 days [1995-MP3-VBR-320Kbps]


    DOWNLOADhttps://urlgoal.com/2uyNf6



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Majmua Wazaif Urdu Pdf Free Download [HOT].md b/spaces/gotiQspiryo/whisper-ui/examples/Majmua Wazaif Urdu Pdf Free Download [HOT].md deleted file mode 100644 index c8235b05f98154a7d0938e382617333c9b19490a..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Majmua Wazaif Urdu Pdf Free Download [HOT].md +++ /dev/null @@ -1,11 +0,0 @@ - -

    the book is written by hafiz muhammad fazal gul and hafiz muhammad umar makhoo. it is about the collection of wazaifs from the holy quran and the hadith of the prophet. it is a collection of some 1000 wazaifs. in the book, the authors have discussed each wazaif and told the benefit of each one of them.

    -

    Majmua Wazaif Urdu Pdf Free Download


    Download Zip ☆☆☆☆☆ https://urlgoal.com/2uyM4v



    -

    fiqh e khaira fiqh e khaira written by syed jafar zaidi. fiqh e khaira fiqh e khaira is a book contains all kind of wazaif, amliyat, and amal with original translation of arabic text of quran along with benefit.

    -

    the book of hazrat maulana ashraf ali thanvi is very famous. this book is very very important for every muslim. the book is also very famous in the world of muslims. it has been divided into three parts. the first part of the book is giving the english translation of the entire book. the second part of the book is the urdu translation of the same book. the third part of the book is about the islamic rituals.

    -

    the urdu book of aamil e quran contains the translation of quran along with the arabic and english translation. this book is written by a great scholar maulana ashraf ali thanvi. it is recommended for those who want to learn the quranic language and also know the holy quran. it is very famous in the world of islam.

    -

    this book of maulana ashraf ali thanvi is written in urdu language. it is very useful for every muslim. the book contains all the subjects and topics that a muslim should know. it is very famous in the world of islam. it contains the english and urdu translation of the entire book.

    -

    -

    maulana ashraf ali thanvi has written a very interesting book. it contains the english and urdu translation of the entire book. this is a great book for all the muslims. it covers all the subjects and topics that a muslim should know. i hope you will like to read the book maulana ashraf ali thanvi and share it with your friends.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/gradio/HuBERT/examples/language_model/README.md b/spaces/gradio/HuBERT/examples/language_model/README.md deleted file mode 100644 index e78ea48e08dc99b69751923762107a8f8a9a5e3e..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/language_model/README.md +++ /dev/null @@ -1,123 +0,0 @@ -# Neural Language Modeling - -## Pre-trained models - -Model | Description | Dataset | Download ----|---|---|--- -`transformer_lm.gbw.adaptive_huge` | Adaptive Inputs
    ([Baevski and Auli, 2018](https://arxiv.org/abs/1809.10853))
    1026M params | [Google Billion Words](https://github.com/ciprian-chelba/1-billion-word-language-modeling-benchmark) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/lm/adaptive_lm_gbw_huge.tar.bz2) -`transformer_lm.wiki103.adaptive` | Adaptive Inputs
    ([Baevski and Auli, 2018](https://arxiv.org/abs/1809.10853))
    247M params | [WikiText-103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/lm/adaptive_lm_wiki103.v2.tar.bz2) -`transformer_lm.wmt19.en` | English LM
    ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) | [WMT News Crawl](http://data.statmt.org/news-crawl/) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.en.tar.gz) -`transformer_lm.wmt19.de` | German LM
    ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) | [WMT News Crawl](http://data.statmt.org/news-crawl/) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.de.tar.gz) -`transformer_lm.wmt19.ru` | Russian LM
    ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) | [WMT News Crawl](http://data.statmt.org/news-crawl/) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.ru.tar.gz) - -## Example usage - -We require a few additional Python dependencies for preprocessing: -```bash -pip install fastBPE sacremoses -``` - -To sample from a language model using PyTorch Hub: -```python -import torch - -# List available models -torch.hub.list('pytorch/fairseq') # [..., 'transformer_lm.wmt19.en', ...] - -# Load an English LM trained on WMT'19 News Crawl data -en_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt19.en', tokenizer='moses', bpe='fastbpe') -en_lm.eval() # disable dropout - -# Move model to GPU -en_lm.cuda() - -# Sample from the language model -en_lm.sample('Barack Obama', beam=1, sampling=True, sampling_topk=10, temperature=0.8) -# "Barack Obama is coming to Sydney and New Zealand (...)" - -# Compute perplexity for a sequence -en_lm.score('Barack Obama is coming to Sydney and New Zealand')['positional_scores'].mean().neg().exp() -# tensor(15.1474) - -# The same interface can be used with custom models as well -from fairseq.models.transformer_lm import TransformerLanguageModel -custom_lm = TransformerLanguageModel.from_pretrained('/path/to/model/dir', 'checkpoint100.pt', tokenizer='moses', bpe='fastbpe') -custom_lm.sample('Barack Obama', beam=5) -# "Barack Obama (...)" -``` - -## Training a transformer language model with the CLI tools - -### 1) Preprocess the data - -First download and prepare the [WikiText-103 dataset](https://www.salesforce.com/products/einstein/ai-research/the-wikitext-dependency-language-modeling-dataset/): -```bash -cd examples/language_model/ -bash prepare-wikitext-103.sh -cd ../.. -``` - -Next preprocess/binarize the data: -```bash -TEXT=examples/language_model/wikitext-103 -fairseq-preprocess \ - --only-source \ - --trainpref $TEXT/wiki.train.tokens \ - --validpref $TEXT/wiki.valid.tokens \ - --testpref $TEXT/wiki.test.tokens \ - --destdir data-bin/wikitext-103 \ - --workers 20 -``` - -### 2) Train a language model - -Next we'll train a basic transformer language model on wikitext-103. For more -advanced usage, see the [adaptive inputs README](README.adaptive_inputs.md). - -To train a basic LM (assumes 2 GPUs): -``` -$ fairseq-train --task language_modeling \ - data-bin/wikitext-103 \ - --save-dir checkpoints/transformer_wikitext-103 \ - --arch transformer_lm --share-decoder-input-output-embed \ - --dropout 0.1 \ - --optimizer adam --adam-betas '(0.9, 0.98)' --weight-decay 0.01 --clip-norm 0.0 \ - --lr 0.0005 --lr-scheduler inverse_sqrt --warmup-updates 4000 --warmup-init-lr 1e-07 \ - --tokens-per-sample 512 --sample-break-mode none \ - --max-tokens 2048 --update-freq 16 \ - --fp16 \ - --max-update 50000 -``` - -If you run out of memory, try reducing `--max-tokens` (max number of tokens per -batch) or `--tokens-per-sample` (max sequence length). You can also adjust -`--update-freq` to accumulate gradients and simulate training on a different -number of GPUs. - -### 3) Evaluate - -```bash -fairseq-eval-lm data-bin/wikitext-103 \ - --path checkpoints/transformer_wiki103/checkpoint_best.pt \ - --batch-size 2 \ - --tokens-per-sample 512 \ - --context-window 400 -# | Evaluated 245569 tokens in 56.1s (4379.02 tokens/s) -# | Loss: 3.4164, Perplexity: 30.46 -``` - -*Note:* The `--context-window` option controls how much context is provided to -each token when computing perplexity. When the window size is 0, the dataset is -chunked into segments of length 512 and perplexity is computed over each segment -normally. However, this results in worse (higher) perplexity since tokens that -appear earlier in each segment have less conditioning. When the maximum window -size is used (511 in this case), then we compute perplexity for each token -fully conditioned on 511 tokens of context. This slows down evaluation -significantly, since we must run a separate forward pass for every token in the -dataset, but results in better (lower) perplexity. - - -## Convolutional language models - -Please see the [convolutional LM README](README.conv.md) for instructions on -training convolutional language models. diff --git a/spaces/gradio/HuBERT/fairseq/checkpoint_utils.py b/spaces/gradio/HuBERT/fairseq/checkpoint_utils.py deleted file mode 100644 index 627f14160d2e4040f4dfe4e793f0986f53d8d39b..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/checkpoint_utils.py +++ /dev/null @@ -1,798 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import ast -import collections -import contextlib -import logging -import os -import re -import time -import traceback -from collections import OrderedDict -from typing import Any, Dict, Optional, Union -from random import randint - -import torch -from fairseq.dataclass.configs import CheckpointConfig -from fairseq.dataclass.utils import ( - convert_namespace_to_omegaconf, - overwrite_args_by_name, -) -from fairseq.distributed.fully_sharded_data_parallel import FSDP, has_FSDP -from fairseq.file_io import PathManager -from fairseq.models import FairseqDecoder, FairseqEncoder -from omegaconf import DictConfig, open_dict, OmegaConf - - -logger = logging.getLogger(__name__) - - -def save_checkpoint(cfg: CheckpointConfig, trainer, epoch_itr, val_loss): - from fairseq import meters - - # only one worker should attempt to create the required dir - if trainer.data_parallel_rank == 0: - os.makedirs(cfg.save_dir, exist_ok=True) - - prev_best = getattr(save_checkpoint, "best", val_loss) - if val_loss is not None: - best_function = max if cfg.maximize_best_checkpoint_metric else min - save_checkpoint.best = best_function(val_loss, prev_best) - - if cfg.no_save: - return - - trainer.consolidate_optimizer() # TODO(SS): do we need this if no_save_optimizer_state - - if not trainer.should_save_checkpoint_on_current_rank: - if trainer.always_call_state_dict_during_save_checkpoint: - trainer.state_dict() - return - - write_timer = meters.StopwatchMeter() - write_timer.start() - - epoch = epoch_itr.epoch - end_of_epoch = epoch_itr.end_of_epoch() - updates = trainer.get_num_updates() - - logger.info(f"Preparing to save checkpoint for epoch {epoch} @ {updates} updates") - - def is_better(a, b): - return a >= b if cfg.maximize_best_checkpoint_metric else a <= b - - suffix = trainer.checkpoint_suffix - checkpoint_conds = collections.OrderedDict() - checkpoint_conds["checkpoint{}{}.pt".format(epoch, suffix)] = ( - end_of_epoch and not cfg.no_epoch_checkpoints and epoch % cfg.save_interval == 0 - ) - checkpoint_conds["checkpoint_{}_{}{}.pt".format(epoch, updates, suffix)] = ( - not end_of_epoch - and cfg.save_interval_updates > 0 - and updates % cfg.save_interval_updates == 0 - ) - checkpoint_conds["checkpoint_best{}.pt".format(suffix)] = val_loss is not None and ( - not hasattr(save_checkpoint, "best") - or is_better(val_loss, save_checkpoint.best) - ) - if val_loss is not None and cfg.keep_best_checkpoints > 0: - worst_best = getattr(save_checkpoint, "best", None) - chkpts = checkpoint_paths( - cfg.save_dir, - pattern=r"checkpoint\.best_{}_(\d+\.?\d*)\.pt".format( - cfg.best_checkpoint_metric - ), - ) - if len(chkpts) > 0: - p = chkpts[-1] if cfg.maximize_best_checkpoint_metric else chkpts[0] - worst_best = float(p.rsplit("_")[-1].replace(".pt", "")) - # add random digits to resolve ties - rand_sfx = randint(0, cfg.keep_best_checkpoints) - checkpoint_conds[ - "checkpoint.best_{}_{:.3f}{}.pt".format(cfg.best_checkpoint_metric, - val_loss, rand_sfx) - ] = worst_best is None or is_better(val_loss, worst_best) - checkpoint_conds[ - "checkpoint_last{}.pt".format(suffix) - ] = not cfg.no_last_checkpoints - - extra_state = {"train_iterator": epoch_itr.state_dict(), "val_loss": val_loss} - if hasattr(save_checkpoint, "best"): - extra_state.update({"best": save_checkpoint.best}) - - checkpoints = [ - os.path.join(cfg.save_dir, fn) for fn, cond in checkpoint_conds.items() if cond - ] - if len(checkpoints) > 0: - trainer.save_checkpoint(checkpoints[0], extra_state) - for cp in checkpoints[1:]: - if cfg.write_checkpoints_asynchronously: - # TODO[ioPath]: Need to implement a delayed asynchronous - # file copying/moving feature. - logger.warning( - f"ioPath is not copying {checkpoints[0]} to {cp} " - "since async write mode is on." - ) - else: - assert PathManager.copy( - checkpoints[0], cp, overwrite=True - ), f"Failed to copy {checkpoints[0]} to {cp}" - - write_timer.stop() - logger.info( - "Saved checkpoint {} (epoch {} @ {} updates, score {}) (writing took {} seconds)".format( - checkpoints[0], epoch, updates, val_loss, write_timer.sum - ) - ) - - if not end_of_epoch and cfg.keep_interval_updates > 0: - # remove old checkpoints; checkpoints are sorted in descending order - if cfg.keep_interval_updates_pattern == -1: - checkpoints = checkpoint_paths( - cfg.save_dir, pattern=r"checkpoint_\d+_(\d+){}\.pt".format(suffix) - ) - else: - checkpoints = checkpoint_paths( - cfg.save_dir, - pattern=r"checkpoint_\d+_(\d+){}\.pt".format(suffix), - keep_match=True, - ) - checkpoints = [ - x[0] - for x in checkpoints - if x[1] % cfg.keep_interval_updates_pattern != 0 - ] - - for old_chk in checkpoints[cfg.keep_interval_updates :]: - if os.path.lexists(old_chk): - os.remove(old_chk) - elif PathManager.exists(old_chk): - PathManager.rm(old_chk) - - if cfg.keep_last_epochs > 0: - # remove old epoch checkpoints; checkpoints are sorted in descending order - checkpoints = checkpoint_paths( - cfg.save_dir, pattern=r"checkpoint(\d+){}\.pt".format(suffix) - ) - for old_chk in checkpoints[cfg.keep_last_epochs :]: - if os.path.lexists(old_chk): - os.remove(old_chk) - - if cfg.keep_best_checkpoints > 0: - # only keep the best N checkpoints according to validation metric - checkpoints = checkpoint_paths( - cfg.save_dir, - pattern=r"checkpoint\.best_{}_(\d+\.?\d*){}\.pt".format( - cfg.best_checkpoint_metric, suffix - ), - ) - if not cfg.maximize_best_checkpoint_metric: - checkpoints = checkpoints[::-1] - for old_chk in checkpoints[cfg.keep_best_checkpoints :]: - if os.path.lexists(old_chk): - os.remove(old_chk) - - -def load_checkpoint(cfg: CheckpointConfig, trainer, **passthrough_args): - """ - Load a checkpoint and restore the training iterator. - - *passthrough_args* will be passed through to - ``trainer.get_train_iterator``. - """ - - reset_optimizer = cfg.reset_optimizer - reset_lr_scheduler = cfg.reset_lr_scheduler - optimizer_overrides = ast.literal_eval(cfg.optimizer_overrides) - reset_meters = cfg.reset_meters - reset_dataloader = cfg.reset_dataloader - - if cfg.finetune_from_model is not None and ( - reset_optimizer or reset_lr_scheduler or reset_meters or reset_dataloader - ): - raise ValueError( - "--finetune-from-model can not be set together with either --reset-optimizer" - " or reset_lr_scheduler or reset_meters or reset_dataloader" - ) - - suffix = trainer.checkpoint_suffix - if ( - cfg.restore_file == "checkpoint_last.pt" - ): # default value of restore_file is 'checkpoint_last.pt' - checkpoint_path = os.path.join( - cfg.save_dir, "checkpoint_last{}.pt".format(suffix) - ) - first_launch = not PathManager.exists(checkpoint_path) - if cfg.finetune_from_model is not None and first_launch: - # if there is no last checkpoint to restore, start the finetune from pretrained model - # else just use usual logic to load checkpoint, e.g. restart from last checkpoint and etc. - if PathManager.exists(cfg.finetune_from_model): - checkpoint_path = cfg.finetune_from_model - reset_optimizer = True - reset_lr_scheduler = True - reset_meters = True - reset_dataloader = True - logger.info( - f"loading pretrained model from {checkpoint_path}: " - "optimizer, lr scheduler, meters, dataloader will be reset" - ) - else: - raise ValueError( - f"--funetune-from-model {cfg.finetune_from_model} does not exist" - ) - elif suffix is not None: - checkpoint_path = cfg.restore_file.replace(".pt", suffix + ".pt") - else: - checkpoint_path = cfg.restore_file - - if cfg.restore_file != "checkpoint_last.pt" and cfg.finetune_from_model: - raise ValueError( - "--finetune-from-model and --restore-file (non-default value) " - "can not be specified together: " + str(cfg) - ) - - extra_state = trainer.load_checkpoint( - checkpoint_path, - reset_optimizer, - reset_lr_scheduler, - optimizer_overrides, - reset_meters=reset_meters, - ) - - if ( - extra_state is not None - and "best" in extra_state - and not reset_optimizer - and not reset_meters - ): - save_checkpoint.best = extra_state["best"] - - if extra_state is not None and not reset_dataloader: - # restore iterator from checkpoint - itr_state = extra_state["train_iterator"] - epoch_itr = trainer.get_train_iterator( - epoch=itr_state["epoch"], load_dataset=True, **passthrough_args - ) - epoch_itr.load_state_dict(itr_state) - else: - epoch_itr = trainer.get_train_iterator( - epoch=1, load_dataset=True, **passthrough_args - ) - - trainer.lr_step(epoch_itr.epoch) - - return extra_state, epoch_itr - - -def load_checkpoint_to_cpu(path, arg_overrides=None, load_on_all_ranks=False): - """Loads a checkpoint to CPU (with upgrading for backward compatibility). - - If doing single-GPU training or if the checkpoint is only being loaded by at - most one process on each node (current default behavior is for only rank 0 - to read the checkpoint from disk), load_on_all_ranks should be False to - avoid errors from torch.distributed not having been initialized or - torch.distributed.barrier() hanging. - - If all processes on each node may be loading the checkpoint - simultaneously, load_on_all_ranks should be set to True to avoid I/O - conflicts. - - There's currently no support for > 1 but < all processes loading the - checkpoint on each node. - """ - local_path = PathManager.get_local_path(path) - # The locally cached file returned by get_local_path() may be stale for - # remote files that are periodically updated/overwritten (ex: - # checkpoint_last.pt) - so we remove the local copy, sync across processes - # (if needed), and then download a fresh copy. - if local_path != path and PathManager.path_requires_pathmanager(path): - try: - os.remove(local_path) - except FileNotFoundError: - # With potentially multiple processes removing the same file, the - # file being missing is benign (missing_ok isn't available until - # Python 3.8). - pass - if load_on_all_ranks: - torch.distributed.barrier() - local_path = PathManager.get_local_path(path) - - with open(local_path, "rb") as f: - state = torch.load(f, map_location=torch.device("cpu")) - - if "args" in state and state["args"] is not None and arg_overrides is not None: - args = state["args"] - for arg_name, arg_val in arg_overrides.items(): - setattr(args, arg_name, arg_val) - - if "cfg" in state and state["cfg"] is not None: - - # hack to be able to set Namespace in dict config. this should be removed when we update to newer - # omegaconf version that supports object flags, or when we migrate all existing models - from omegaconf import _utils - - old_primitive = _utils.is_primitive_type - _utils.is_primitive_type = lambda _: True - - state["cfg"] = OmegaConf.create(state["cfg"]) - - _utils.is_primitive_type = old_primitive - OmegaConf.set_struct(state["cfg"], True) - - if arg_overrides is not None: - overwrite_args_by_name(state["cfg"], arg_overrides) - - state = _upgrade_state_dict(state) - return state - - -def load_model_ensemble( - filenames, - arg_overrides: Optional[Dict[str, Any]] = None, - task=None, - strict=True, - suffix="", - num_shards=1, - state=None, -): - """Loads an ensemble of models. - - Args: - filenames (List[str]): checkpoint files to load - arg_overrides (Dict[str,Any], optional): override model args that - were used during model training - task (fairseq.tasks.FairseqTask, optional): task to use for loading - """ - assert not ( - strict and num_shards > 1 - ), "Cannot load state dict with strict=True and checkpoint shards > 1" - ensemble, args, _task = load_model_ensemble_and_task( - filenames, - arg_overrides, - task, - strict, - suffix, - num_shards, - state, - ) - return ensemble, args - - -def get_maybe_sharded_checkpoint_filename( - filename: str, suffix: str, shard_idx: int, num_shards: int -) -> str: - orig_filename = filename - filename = filename.replace(".pt", suffix + ".pt") - fsdp_filename = filename[:-3] + f"-shard{shard_idx}.pt" - model_parallel_filename = orig_filename[:-3] + f"_part{shard_idx}.pt" - if PathManager.exists(fsdp_filename): - return fsdp_filename - elif num_shards > 1: - return model_parallel_filename - else: - return filename - - -def load_model_ensemble_and_task( - filenames, - arg_overrides: Optional[Dict[str, Any]] = None, - task=None, - strict=True, - suffix="", - num_shards=1, - state=None, -): - assert state is None or len(filenames) == 1 - - from fairseq import tasks - - assert not ( - strict and num_shards > 1 - ), "Cannot load state dict with strict=True and checkpoint shards > 1" - ensemble = [] - cfg = None - for filename in filenames: - orig_filename = filename - model_shard_state = {"shard_weights": [], "shard_metadata": []} - assert num_shards > 0 - st = time.time() - for shard_idx in range(num_shards): - filename = get_maybe_sharded_checkpoint_filename( - orig_filename, suffix, shard_idx, num_shards - ) - - if not PathManager.exists(filename): - raise IOError("Model file not found: {}".format(filename)) - if state is None: - state = load_checkpoint_to_cpu(filename, arg_overrides) - if "args" in state and state["args"] is not None: - cfg = convert_namespace_to_omegaconf(state["args"]) - elif "cfg" in state and state["cfg"] is not None: - cfg = state["cfg"] - else: - raise RuntimeError( - f"Neither args nor cfg exist in state keys = {state.keys()}" - ) - - if task is None: - task = tasks.setup_task(cfg.task) - - if "task_state" in state: - task.load_state_dict(state["task_state"]) - - if "fsdp_metadata" in state and num_shards > 1: - model_shard_state["shard_weights"].append(state["model"]) - model_shard_state["shard_metadata"].append(state["fsdp_metadata"]) - # check FSDP import before the code goes too far - if not has_FSDP: - raise ImportError( - "Cannot find FullyShardedDataParallel. " - "Please install fairscale with: pip install fairscale" - ) - if shard_idx == num_shards - 1: - consolidated_model_state = FSDP.consolidate_shard_weights( - shard_weights=model_shard_state["shard_weights"], - shard_metadata=model_shard_state["shard_metadata"], - ) - model = task.build_model(cfg.model) - model.load_state_dict( - consolidated_model_state, strict=strict, model_cfg=cfg.model - ) - else: - # model parallel checkpoint or unsharded checkpoint - model = task.build_model(cfg.model) - model.load_state_dict( - state["model"], strict=strict, model_cfg=cfg.model - ) - - # reset state so it gets loaded for the next model in ensemble - state = None - if shard_idx % 10 == 0 and shard_idx > 0: - elapsed = time.time() - st - logger.info(f"Loaded {shard_idx} shards in {elapsed:.2f}s, {elapsed / (shard_idx+1):.2f}s/shard") - - # build model for ensemble - ensemble.append(model) - return ensemble, cfg, task - - -def checkpoint_paths(path, pattern=r"checkpoint(\d+)\.pt", keep_match=False): - """Retrieves all checkpoints found in `path` directory. - - Checkpoints are identified by matching filename to the specified pattern. If - the pattern contains groups, the result will be sorted by the first group in - descending order. - """ - pt_regexp = re.compile(pattern) - files = PathManager.ls(path) - - entries = [] - for i, f in enumerate(files): - m = pt_regexp.fullmatch(f) - if m is not None: - idx = float(m.group(1)) if len(m.groups()) > 0 else i - entries.append((idx, m.group(0))) - if keep_match: - return [(os.path.join(path, x[1]), x[0]) for x in sorted(entries, reverse=True)] - else: - return [os.path.join(path, x[1]) for x in sorted(entries, reverse=True)] - - -def torch_persistent_save(obj, filename, async_write: bool = False): - if async_write: - with PathManager.opena(filename, "wb") as f: - _torch_persistent_save(obj, f) - else: - if PathManager.supports_rename(filename): - # do atomic save - with PathManager.open(filename + ".tmp", "wb") as f: - _torch_persistent_save(obj, f) - PathManager.rename(filename + ".tmp", filename) - else: - # fallback to non-atomic save - with PathManager.open(filename, "wb") as f: - _torch_persistent_save(obj, f) - - -def _torch_persistent_save(obj, f): - if isinstance(f, str): - with PathManager.open(f, "wb") as h: - torch_persistent_save(obj, h) - return - for i in range(3): - try: - return torch.save(obj, f) - except Exception: - if i == 2: - logger.error(traceback.format_exc()) - - -def _upgrade_state_dict(state): - """Helper for upgrading old model checkpoints.""" - - # add optimizer_history - if "optimizer_history" not in state: - state["optimizer_history"] = [ - {"criterion_name": "CrossEntropyCriterion", "best_loss": state["best_loss"]} - ] - state["last_optimizer_state"] = state["optimizer"] - del state["optimizer"] - del state["best_loss"] - # move extra_state into sub-dictionary - if "epoch" in state and "extra_state" not in state: - state["extra_state"] = { - "epoch": state["epoch"], - "batch_offset": state["batch_offset"], - "val_loss": state["val_loss"], - } - del state["epoch"] - del state["batch_offset"] - del state["val_loss"] - # reduce optimizer history's memory usage (only keep the last state) - if "optimizer" in state["optimizer_history"][-1]: - state["last_optimizer_state"] = state["optimizer_history"][-1]["optimizer"] - for optim_hist in state["optimizer_history"]: - del optim_hist["optimizer"] - # record the optimizer class name - if "optimizer_name" not in state["optimizer_history"][-1]: - state["optimizer_history"][-1]["optimizer_name"] = "FairseqNAG" - # move best_loss into lr_scheduler_state - if "lr_scheduler_state" not in state["optimizer_history"][-1]: - state["optimizer_history"][-1]["lr_scheduler_state"] = { - "best": state["optimizer_history"][-1]["best_loss"] - } - del state["optimizer_history"][-1]["best_loss"] - # keep track of number of updates - if "num_updates" not in state["optimizer_history"][-1]: - state["optimizer_history"][-1]["num_updates"] = 0 - # old model checkpoints may not have separate source/target positions - if ( - "args" in state - and hasattr(state["args"], "max_positions") - and not hasattr(state["args"], "max_source_positions") - ): - state["args"].max_source_positions = state["args"].max_positions - state["args"].max_target_positions = state["args"].max_positions - # use stateful training data iterator - if "train_iterator" not in state["extra_state"]: - state["extra_state"]["train_iterator"] = { - "epoch": state["extra_state"]["epoch"], - "iterations_in_epoch": state["extra_state"].get("batch_offset", 0), - } - - # backward compatibility, cfg updates - if "args" in state and state["args"] is not None: - # default to translation task - if not hasattr(state["args"], "task"): - state["args"].task = "translation" - # --raw-text and --lazy-load are deprecated - if getattr(state["args"], "raw_text", False): - state["args"].dataset_impl = "raw" - elif getattr(state["args"], "lazy_load", False): - state["args"].dataset_impl = "lazy" - # epochs start at 1 - if state["extra_state"]["train_iterator"] is not None: - state["extra_state"]["train_iterator"]["epoch"] = max( - state["extra_state"]["train_iterator"].get("epoch", 1), 1 - ) - # --remove-bpe ==> --postprocess - if hasattr(state["args"], "remove_bpe"): - state["args"].post_process = state["args"].remove_bpe - # --min-lr ==> --stop-min-lr - if hasattr(state["args"], "min_lr"): - state["args"].stop_min_lr = state["args"].min_lr - del state["args"].min_lr - # binary_cross_entropy / kd_binary_cross_entropy => wav2vec criterion - if ( - hasattr(state["args"], "criterion") - and state["args"].criterion in [ - "binary_cross_entropy", - "kd_binary_cross_entropy", - ] - ): - state["args"].criterion = "wav2vec" - # remove log_keys if it's None (criteria will supply a default value of []) - if hasattr(state["args"], "log_keys") and state["args"].log_keys is None: - delattr(state["args"], "log_keys") - # speech_pretraining => audio pretraining - if ( - hasattr(state["args"], "task") - and state["args"].task == "speech_pretraining" - ): - state["args"].task = "audio_pretraining" - # audio_cpc => wav2vec - if hasattr(state["args"], "arch") and state["args"].arch == "audio_cpc": - state["args"].arch = "wav2vec" - # convert legacy float learning rate to List[float] - if hasattr(state["args"], "lr") and isinstance(state["args"].lr, float): - state["args"].lr = [state["args"].lr] - # convert task data arg to a string instead of List[string] - if ( - hasattr(state["args"], "data") - and isinstance(state["args"].data, list) - and len(state["args"].data) > 0 - ): - state["args"].data = state["args"].data[0] - # remove keys in state["args"] related to teacher-student learning - for key in [ - "static_teachers", - "static_teacher_weights", - "dynamic_teachers", - "dynamic_teacher_weights", - ]: - if key in state["args"]: - delattr(state["args"], key) - - state["cfg"] = convert_namespace_to_omegaconf(state["args"]) - - if "cfg" in state and state["cfg"] is not None: - cfg = state["cfg"] - with open_dict(cfg): - # any upgrades for Hydra-based configs - if ( - "task" in cfg - and "eval_wer_config" in cfg.task - and isinstance(cfg.task.eval_wer_config.print_alignment, bool) - ): - cfg.task.eval_wer_config.print_alignment = "hard" - if "generation" in cfg and isinstance(cfg.generation.print_alignment, bool): - cfg.generation.print_alignment = "hard" - if ( - "model" in cfg - and "w2v_args" in cfg.model - and cfg.model.w2v_args is not None - and ( - hasattr(cfg.model.w2v_args, "task") or "task" in cfg.model.w2v_args - ) - and hasattr(cfg.model.w2v_args.task, "eval_wer_config") - and cfg.model.w2v_args.task.eval_wer_config is not None - and isinstance( - cfg.model.w2v_args.task.eval_wer_config.print_alignment, bool - ) - ): - cfg.model.w2v_args.task.eval_wer_config.print_alignment = "hard" - - return state - - -def prune_state_dict(state_dict, model_cfg: Optional[DictConfig]): - """Prune the given state_dict if desired for LayerDrop - (https://arxiv.org/abs/1909.11556). - - Training with LayerDrop allows models to be robust to pruning at inference - time. This function prunes state_dict to allow smaller models to be loaded - from a larger model and re-maps the existing state_dict for this to occur. - - It's called by functions that load models from checkpoints and does not - need to be called directly. - """ - arch = None - if model_cfg is not None: - arch = ( - model_cfg._name - if isinstance(model_cfg, DictConfig) - else getattr(model_cfg, "arch", None) - ) - - if not model_cfg or arch is None or arch == "ptt_transformer": - # args should not be none, but don't crash if it is. - return state_dict - - encoder_layers_to_keep = getattr(model_cfg, "encoder_layers_to_keep", None) - decoder_layers_to_keep = getattr(model_cfg, "decoder_layers_to_keep", None) - - if not encoder_layers_to_keep and not decoder_layers_to_keep: - return state_dict - - # apply pruning - logger.info( - "Pruning model to specified layer configuration - this works best if the model was trained with LayerDrop" - ) - - def create_pruning_pass(layers_to_keep, layer_name): - keep_layers = sorted( - int(layer_string) for layer_string in layers_to_keep.split(",") - ) - mapping_dict = {} - for i in range(len(keep_layers)): - mapping_dict[str(keep_layers[i])] = str(i) - - regex = re.compile(r"^{layer}.*\.layers\.(\d+)".format(layer=layer_name)) - return {"substitution_regex": regex, "mapping_dict": mapping_dict} - - pruning_passes = [] - if encoder_layers_to_keep: - pruning_passes.append(create_pruning_pass(encoder_layers_to_keep, "encoder")) - if decoder_layers_to_keep: - pruning_passes.append(create_pruning_pass(decoder_layers_to_keep, "decoder")) - - new_state_dict = {} - for layer_name in state_dict.keys(): - match = re.search(r"\.layers\.(\d+)\.", layer_name) - # if layer has no number in it, it is a supporting layer, such as an - # embedding - if not match: - new_state_dict[layer_name] = state_dict[layer_name] - continue - - # otherwise, layer should be pruned. - original_layer_number = match.group(1) - # figure out which mapping dict to replace from - for pruning_pass in pruning_passes: - if original_layer_number in pruning_pass["mapping_dict"] and pruning_pass[ - "substitution_regex" - ].search(layer_name): - new_layer_number = pruning_pass["mapping_dict"][original_layer_number] - substitution_match = pruning_pass["substitution_regex"].search( - layer_name - ) - new_state_key = ( - layer_name[: substitution_match.start(1)] - + new_layer_number - + layer_name[substitution_match.end(1) :] - ) - new_state_dict[new_state_key] = state_dict[layer_name] - - # Since layers are now pruned, *_layers_to_keep are no longer needed. - # This is more of "It would make it work fix" rather than a proper fix. - if isinstance(model_cfg, DictConfig): - context = open_dict(model_cfg) - else: - context = contextlib.ExitStack() - with context: - if hasattr(model_cfg, "encoder_layers_to_keep"): - model_cfg.encoder_layers_to_keep = None - if hasattr(model_cfg, "decoder_layers_to_keep"): - model_cfg.decoder_layers_to_keep = None - - return new_state_dict - - -def load_pretrained_component_from_model( - component: Union[FairseqEncoder, FairseqDecoder], checkpoint: str -): - """ - Load a pretrained FairseqEncoder or FairseqDecoder from checkpoint into the - provided `component` object. If state_dict fails to load, there may be a - mismatch in the architecture of the corresponding `component` found in the - `checkpoint` file. - """ - if not PathManager.exists(checkpoint): - raise IOError("Model file not found: {}".format(checkpoint)) - state = load_checkpoint_to_cpu(checkpoint) - if isinstance(component, FairseqEncoder): - component_type = "encoder" - elif isinstance(component, FairseqDecoder): - component_type = "decoder" - else: - raise ValueError( - "component to load must be either a FairseqEncoder or " - "FairseqDecoder. Loading other component types are not supported." - ) - component_state_dict = OrderedDict() - for key in state["model"].keys(): - if key.startswith(component_type): - # encoder.input_layers.0.0.weight --> input_layers.0.0.weight - component_subkey = key[len(component_type) + 1 :] - component_state_dict[component_subkey] = state["model"][key] - component.load_state_dict(component_state_dict, strict=True) - return component - - -def verify_checkpoint_directory(save_dir: str) -> None: - if not os.path.exists(save_dir): - os.makedirs(save_dir, exist_ok=True) - temp_file_path = os.path.join(save_dir, "dummy") - try: - with open(temp_file_path, "w"): - pass - except OSError as e: - logger.warning( - "Unable to access checkpoint save directory: {}".format(save_dir) - ) - raise e - else: - os.remove(temp_file_path) diff --git a/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Chatbar/components/Conversations.tsx b/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Chatbar/components/Conversations.tsx deleted file mode 100644 index 4371963e128ff90172eb01621f6468e4b90adfd4..0000000000000000000000000000000000000000 --- a/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Chatbar/components/Conversations.tsx +++ /dev/null @@ -1,21 +0,0 @@ -import { Conversation } from '@/types/chat'; - -import { ConversationComponent } from './Conversation'; - -interface Props { - conversations: Conversation[]; -} - -export const Conversations = ({ conversations }: Props) => { - return ( -
    - {conversations - .filter((conversation) => !conversation.folderId) - .slice() - .reverse() - .map((conversation, index) => ( - - ))} -
    - ); -}; diff --git a/spaces/gstaff/KiteWind/templates/stlite/stlite-snippet-template.html b/spaces/gstaff/KiteWind/templates/stlite/stlite-snippet-template.html deleted file mode 100644 index 32d017c2983a3ec53906079a0da556c748ed8abe..0000000000000000000000000000000000000000 --- a/spaces/gstaff/KiteWind/templates/stlite/stlite-snippet-template.html +++ /dev/null @@ -1,27 +0,0 @@ -
    - - - -
    \ No newline at end of file diff --git a/spaces/gstdl/screener-saham-demo/app/helper_script.py b/spaces/gstdl/screener-saham-demo/app/helper_script.py deleted file mode 100644 index 9e262c55b77787872896cf669faa4ccf89752868..0000000000000000000000000000000000000000 --- a/spaces/gstdl/screener-saham-demo/app/helper_script.py +++ /dev/null @@ -1,176 +0,0 @@ -from bokeh.plotting import figure -from bokeh.models import ColumnDataSource, HoverTool, Arrow, NormalHead -from bokeh.palettes import Spectral4 -from bokeh.embed import components -import sqlite3 -import pandas as pd - - -def get_tickers(pattern, last_dates=1): - # connect to database - with sqlite3.connect("dataset/ihsg.db") as con: - - # retrieve data from database - tickers = pd.read_sql(f""" - SELECT Kode - FROM patterns - WHERE Date IN ( - SELECT Date - FROM ( - SELECT Date, ROW_NUMBER() OVER(ORDER BY Date DESC) AS rnk - FROM historical - WHERE Kode = 'IHSG' - ) a - WHERE rnk <= {last_dates + 1} - ) - AND Pattern = '{pattern}' - ORDER BY Pattern_Score DESC, Open_Close_Change DESC, High_Low_Change DESC - """, - con=con, - ).iloc[:, 0].to_list() - - return tickers - -def get_data(kode, pattern): - - # connect to database - with sqlite3.connect("dataset/ihsg.db") as con: - - # retrieve data from database - df = pd.read_sql(f""" - SELECT * - FROM historical - WHERE Kode = '{kode}' - ORDER BY Date - """, - con=con, - parse_dates=['Date'], - ) - - # df = pd.read_sql(f""" - # SELECT - # historical.Date, - # historical.Open, - # historical.High, - # historical.Low, - # historical.Close, - # patterns.Pattern_Score - # FROM historical - # LEFT JOIN ( - # SELECT Date, Kode, Pattern_Score - # FROM patterns - # WHERE Pattern = '{pattern}' - # ) AS patterns - # USING(Kode, Date) - # WHERE Kode = '{kode}' - # ORDER BY Date - # """, - # con=con, - # parse_dates=['Date'], - # ) - - nama = pd.read_sql( - f"SELECT Nama FROM list_perusahaan WHERE Kode = '{kode}'", - con=con, - ).values[0][0] - - return df, nama - -def plot_candlestick(df, nama, kode): - - # calculate simple moving average - for period in [5,20,200]: - df[f'sma{period}'] = df['Close'].rolling(period, period).mean() - - # Prepare data for plotting - cds = ColumnDataSource(df) - cds_inc = ColumnDataSource(df[df["Close"] >= df["Open"]]) - cds_dec = ColumnDataSource(df[df["Open"] > df["Close"]]) - - # assign figure canvas to variable p - x_range = (max(len(df) - 60.5, 0), len(df)) - p = figure( - tools="pan,zoom_in,zoom_out,box_zoom,undo,redo,reset,save", - plot_width=600, - plot_height=400, - title = f"{kode}\t({nama})", - x_range= x_range, - y_range= ( - df.loc[x_range[0]//1-5:x_range[1], ["Open", "High", "Low", "Close", "sma5", "sma20", "sma200"]].min().min() * 0.875, - df.loc[x_range[0]//1-5:x_range[1], ["Open", "High", "Low", "Close", "sma5", "sma20", "sma200"]].max().max() * 1.125 - ) - ) - - # xaxis setup - p.xaxis.major_label_overrides = { - i: date.strftime('%d %b %Y') for i, date in enumerate(df["Date"]) - } - p.xaxis.bounds = (0, df.index[-1]) - p.xaxis.major_label_orientation = (22/7)/4 - p.grid.grid_line_alpha=0.3 - - # # plot pattern arrow - # for idx in df[df["Pattern_Score"].notna()].tail().index: - # row = df.loc[idx, ["Open", "High", "Low", "Close"]] - # x_start = row.min() - # if x_start < 200: - # x_start -= 2 - # x_end = x_start - 4 - # elif x_start < 500: - # x_start -= 4 - # x_end = x_start - 4 - # else: - # x_start -= 8 - # x_end = x_start - 6 - # p.add_layout(Arrow( - # end=NormalHead(fill_color="black"), - # line_color="black", - # x_start = x_start, - # x_end = x_end, - # y_start = idx, - # y_end=idx - # )) - - - # plot candlestick wicks with HoverTool - p.add_tools(HoverTool( - renderers=[p.segment("index", "High", "index", "Low", source=cds, color="black", line_width=1)], - tooltips=[ - ("Date","@Date{%F}"), - ("Open","@Open{0.2f}"), - ("High", "@High{0.2f}"), - ("Low", "@Low{0.2f}"), - ("Close", "@Close{0.2f}"), - ], - formatters={"@Date":"datetime"} - )) - - # plot candlestick bars - for data, color in [(cds_inc, "#26a69a"), (cds_dec, "#ef5350")]: - p.vbar("index", 0.5, "Open", "Close", source=data, fill_color=color, line_color="black", line_width=1) - - # plot moving average with HoverTool - for period, color in zip([5,20,200], Spectral4): - p.add_tools(HoverTool( - renderers=[p.line( - "index", - f"sma{period}", - source=cds, - line_width=2, - alpha=0.8, - color=color, - legend_label=f'SMA {period}\t')], - tooltips=[ - (f"SMA {period}", "@sma%s{0.2f}" %(period)), - ], - )) - - # legend setup - p.legend.location = "top_left" - p.legend.click_policy="hide" - p.legend.orientation="horizontal" - - # generate script and div - script, div = components(p) - - return script, div \ No newline at end of file diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/openpose/src/model.py b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/openpose/src/model.py deleted file mode 100644 index e5f67d39e3f8b1068ec1c3d27cee07670acbce91..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/openpose/src/model.py +++ /dev/null @@ -1,218 +0,0 @@ -import torch -from collections import OrderedDict - -import torch -import torch.nn as nn - - -def make_layers(block, no_relu_layers): - layers = [] - for layer_name, v in block.items(): - if 'pool' in layer_name: - layer = nn.MaxPool2d(kernel_size=v[0], stride=v[1], - padding=v[2]) - layers.append((layer_name, layer)) - else: - conv2d = nn.Conv2d(in_channels=v[0], out_channels=v[1], - kernel_size=v[2], stride=v[3], - padding=v[4]) - layers.append((layer_name, conv2d)) - if layer_name not in no_relu_layers: - layers.append(('relu_'+layer_name, nn.ReLU(inplace=True))) - - return nn.Sequential(OrderedDict(layers)) - - -class bodypose_model(nn.Module): - def __init__(self): - super(bodypose_model, self).__init__() - - # these layers have no relu layer - no_relu_layers = ['conv5_5_CPM_L1', 'conv5_5_CPM_L2', 'Mconv7_stage2_L1', - 'Mconv7_stage2_L2', 'Mconv7_stage3_L1', 'Mconv7_stage3_L2', - 'Mconv7_stage4_L1', 'Mconv7_stage4_L2', 'Mconv7_stage5_L1', - 'Mconv7_stage5_L2', 'Mconv7_stage6_L1', 'Mconv7_stage6_L1'] - blocks = {} - block0 = OrderedDict([ - ('conv1_1', [3, 64, 3, 1, 1]), - ('conv1_2', [64, 64, 3, 1, 1]), - ('pool1_stage1', [2, 2, 0]), - ('conv2_1', [64, 128, 3, 1, 1]), - ('conv2_2', [128, 128, 3, 1, 1]), - ('pool2_stage1', [2, 2, 0]), - ('conv3_1', [128, 256, 3, 1, 1]), - ('conv3_2', [256, 256, 3, 1, 1]), - ('conv3_3', [256, 256, 3, 1, 1]), - ('conv3_4', [256, 256, 3, 1, 1]), - ('pool3_stage1', [2, 2, 0]), - ('conv4_1', [256, 512, 3, 1, 1]), - ('conv4_2', [512, 512, 3, 1, 1]), - ('conv4_3_CPM', [512, 256, 3, 1, 1]), - ('conv4_4_CPM', [256, 128, 3, 1, 1]) - ]) - - # Stage 1 - block1_1 = OrderedDict([ - ('conv5_1_CPM_L1', [128, 128, 3, 1, 1]), - ('conv5_2_CPM_L1', [128, 128, 3, 1, 1]), - ('conv5_3_CPM_L1', [128, 128, 3, 1, 1]), - ('conv5_4_CPM_L1', [128, 512, 1, 1, 0]), - ('conv5_5_CPM_L1', [512, 38, 1, 1, 0]) - ]) - - block1_2 = OrderedDict([ - ('conv5_1_CPM_L2', [128, 128, 3, 1, 1]), - ('conv5_2_CPM_L2', [128, 128, 3, 1, 1]), - ('conv5_3_CPM_L2', [128, 128, 3, 1, 1]), - ('conv5_4_CPM_L2', [128, 512, 1, 1, 0]), - ('conv5_5_CPM_L2', [512, 19, 1, 1, 0]) - ]) - blocks['block1_1'] = block1_1 - blocks['block1_2'] = block1_2 - - self.model0 = make_layers(block0, no_relu_layers) - - # Stages 2 - 6 - for i in range(2, 7): - blocks['block%d_1' % i] = OrderedDict([ - ('Mconv1_stage%d_L1' % i, [185, 128, 7, 1, 3]), - ('Mconv2_stage%d_L1' % i, [128, 128, 7, 1, 3]), - ('Mconv3_stage%d_L1' % i, [128, 128, 7, 1, 3]), - ('Mconv4_stage%d_L1' % i, [128, 128, 7, 1, 3]), - ('Mconv5_stage%d_L1' % i, [128, 128, 7, 1, 3]), - ('Mconv6_stage%d_L1' % i, [128, 128, 1, 1, 0]), - ('Mconv7_stage%d_L1' % i, [128, 38, 1, 1, 0]) - ]) - - blocks['block%d_2' % i] = OrderedDict([ - ('Mconv1_stage%d_L2' % i, [185, 128, 7, 1, 3]), - ('Mconv2_stage%d_L2' % i, [128, 128, 7, 1, 3]), - ('Mconv3_stage%d_L2' % i, [128, 128, 7, 1, 3]), - ('Mconv4_stage%d_L2' % i, [128, 128, 7, 1, 3]), - ('Mconv5_stage%d_L2' % i, [128, 128, 7, 1, 3]), - ('Mconv6_stage%d_L2' % i, [128, 128, 1, 1, 0]), - ('Mconv7_stage%d_L2' % i, [128, 19, 1, 1, 0]) - ]) - - for k in blocks.keys(): - blocks[k] = make_layers(blocks[k], no_relu_layers) - - self.model1_1 = blocks['block1_1'] - self.model2_1 = blocks['block2_1'] - self.model3_1 = blocks['block3_1'] - self.model4_1 = blocks['block4_1'] - self.model5_1 = blocks['block5_1'] - self.model6_1 = blocks['block6_1'] - - self.model1_2 = blocks['block1_2'] - self.model2_2 = blocks['block2_2'] - self.model3_2 = blocks['block3_2'] - self.model4_2 = blocks['block4_2'] - self.model5_2 = blocks['block5_2'] - self.model6_2 = blocks['block6_2'] - - def forward(self, x): - - out1 = self.model0(x) - - out1_1 = self.model1_1(out1) - out1_2 = self.model1_2(out1) - out2 = torch.cat([out1_1, out1_2, out1], 1) - - out2_1 = self.model2_1(out2) - out2_2 = self.model2_2(out2) - out3 = torch.cat([out2_1, out2_2, out1], 1) - - out3_1 = self.model3_1(out3) - out3_2 = self.model3_2(out3) - out4 = torch.cat([out3_1, out3_2, out1], 1) - - out4_1 = self.model4_1(out4) - out4_2 = self.model4_2(out4) - out5 = torch.cat([out4_1, out4_2, out1], 1) - - out5_1 = self.model5_1(out5) - out5_2 = self.model5_2(out5) - out6 = torch.cat([out5_1, out5_2, out1], 1) - - out6_1 = self.model6_1(out6) - out6_2 = self.model6_2(out6) - - return out6_1, out6_2 - - -class handpose_model(nn.Module): - def __init__(self): - super(handpose_model, self).__init__() - - # these layers have no relu layer - no_relu_layers = ['conv6_2_CPM', 'Mconv7_stage2', 'Mconv7_stage3', - 'Mconv7_stage4', 'Mconv7_stage5', 'Mconv7_stage6'] - # stage 1 - block1_0 = OrderedDict([ - ('conv1_1', [3, 64, 3, 1, 1]), - ('conv1_2', [64, 64, 3, 1, 1]), - ('pool1_stage1', [2, 2, 0]), - ('conv2_1', [64, 128, 3, 1, 1]), - ('conv2_2', [128, 128, 3, 1, 1]), - ('pool2_stage1', [2, 2, 0]), - ('conv3_1', [128, 256, 3, 1, 1]), - ('conv3_2', [256, 256, 3, 1, 1]), - ('conv3_3', [256, 256, 3, 1, 1]), - ('conv3_4', [256, 256, 3, 1, 1]), - ('pool3_stage1', [2, 2, 0]), - ('conv4_1', [256, 512, 3, 1, 1]), - ('conv4_2', [512, 512, 3, 1, 1]), - ('conv4_3', [512, 512, 3, 1, 1]), - ('conv4_4', [512, 512, 3, 1, 1]), - ('conv5_1', [512, 512, 3, 1, 1]), - ('conv5_2', [512, 512, 3, 1, 1]), - ('conv5_3_CPM', [512, 128, 3, 1, 1]) - ]) - - block1_1 = OrderedDict([ - ('conv6_1_CPM', [128, 512, 1, 1, 0]), - ('conv6_2_CPM', [512, 22, 1, 1, 0]) - ]) - - blocks = {} - blocks['block1_0'] = block1_0 - blocks['block1_1'] = block1_1 - - # stage 2-6 - for i in range(2, 7): - blocks['block%d' % i] = OrderedDict([ - ('Mconv1_stage%d' % i, [150, 128, 7, 1, 3]), - ('Mconv2_stage%d' % i, [128, 128, 7, 1, 3]), - ('Mconv3_stage%d' % i, [128, 128, 7, 1, 3]), - ('Mconv4_stage%d' % i, [128, 128, 7, 1, 3]), - ('Mconv5_stage%d' % i, [128, 128, 7, 1, 3]), - ('Mconv6_stage%d' % i, [128, 128, 1, 1, 0]), - ('Mconv7_stage%d' % i, [128, 22, 1, 1, 0]) - ]) - - for k in blocks.keys(): - blocks[k] = make_layers(blocks[k], no_relu_layers) - - self.model1_0 = blocks['block1_0'] - self.model1_1 = blocks['block1_1'] - self.model2 = blocks['block2'] - self.model3 = blocks['block3'] - self.model4 = blocks['block4'] - self.model5 = blocks['block5'] - self.model6 = blocks['block6'] - - def forward(self, x): - out1_0 = self.model1_0(x) - out1_1 = self.model1_1(out1_0) - concat_stage2 = torch.cat([out1_1, out1_0], 1) - out_stage2 = self.model2(concat_stage2) - concat_stage3 = torch.cat([out_stage2, out1_0], 1) - out_stage3 = self.model3(concat_stage3) - concat_stage4 = torch.cat([out_stage3, out1_0], 1) - out_stage4 = self.model4(concat_stage4) - concat_stage5 = torch.cat([out_stage4, out1_0], 1) - out_stage5 = self.model5(concat_stage5) - concat_stage6 = torch.cat([out_stage5, out1_0], 1) - out_stage6 = self.model6(concat_stage6) - return out_stage6 diff --git a/spaces/h2oai/wave-tour/examples/tags.py b/spaces/h2oai/wave-tour/examples/tags.py deleted file mode 100644 index 02478dc05c4ece39d21592f3b077716434d78012..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/tags.py +++ /dev/null @@ -1,20 +0,0 @@ -# Tags -# Display a set of tags in a row. Each tag consists of a box with text inside. -# Can be used in different scenarios including highlighting a specific keyword or holding a numeric value with -# different colors to indicate error, warning, or success. -# --- -from h2o_wave import site, ui - -page = site['/demo'] - -page['example'] = ui.form_card( - box='1 1 2 2', - items=[ - ui.tags([ - ui.tag(color='#610404', label='Error'), - ui.tag(color='#7F6001', label='Warning'), - ui.tag(color='#054007', label='Success'), - ]) - ]) - -page.save() diff --git a/spaces/hamacojr/SAM-CAT-Seg/open_clip/tests/test_download_pretrained.py b/spaces/hamacojr/SAM-CAT-Seg/open_clip/tests/test_download_pretrained.py deleted file mode 100644 index 6340918ed5b7c56fdbbfb84e2bcb26ccf662c8b5..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/SAM-CAT-Seg/open_clip/tests/test_download_pretrained.py +++ /dev/null @@ -1,111 +0,0 @@ -import requests -import torch -from PIL import Image -import hashlib -import tempfile -import unittest -from io import BytesIO -from pathlib import Path -from unittest.mock import patch - -from urllib3 import HTTPResponse -from urllib3._collections import HTTPHeaderDict - -import open_clip -from open_clip.pretrained import download_pretrained_from_url - - -class DownloadPretrainedTests(unittest.TestCase): - - def create_response(self, data, status_code=200, content_type='application/octet-stream'): - fp = BytesIO(data) - headers = HTTPHeaderDict({ - 'Content-Type': content_type, - 'Content-Length': str(len(data)) - }) - raw = HTTPResponse(fp, preload_content=False, headers=headers, status=status_code) - return raw - - @patch('open_clip.pretrained.urllib') - def test_download_pretrained_from_url_from_openaipublic(self, urllib): - file_contents = b'pretrained model weights' - expected_hash = hashlib.sha256(file_contents).hexdigest() - urllib.request.urlopen.return_value = self.create_response(file_contents) - with tempfile.TemporaryDirectory() as root: - url = f'https://openaipublic.azureedge.net/clip/models/{expected_hash}/RN50.pt' - download_pretrained_from_url(url, root) - urllib.request.urlopen.assert_called_once() - - @patch('open_clip.pretrained.urllib') - def test_download_pretrained_from_url_from_openaipublic_corrupted(self, urllib): - file_contents = b'pretrained model weights' - expected_hash = hashlib.sha256(file_contents).hexdigest() - urllib.request.urlopen.return_value = self.create_response(b'corrupted pretrained model') - with tempfile.TemporaryDirectory() as root: - url = f'https://openaipublic.azureedge.net/clip/models/{expected_hash}/RN50.pt' - with self.assertRaisesRegex(RuntimeError, r'checksum does not not match'): - download_pretrained_from_url(url, root) - urllib.request.urlopen.assert_called_once() - - @patch('open_clip.pretrained.urllib') - def test_download_pretrained_from_url_from_openaipublic_valid_cache(self, urllib): - file_contents = b'pretrained model weights' - expected_hash = hashlib.sha256(file_contents).hexdigest() - urllib.request.urlopen.return_value = self.create_response(file_contents) - with tempfile.TemporaryDirectory() as root: - local_file = Path(root) / 'RN50.pt' - local_file.write_bytes(file_contents) - url = f'https://openaipublic.azureedge.net/clip/models/{expected_hash}/RN50.pt' - download_pretrained_from_url(url, root) - urllib.request.urlopen.assert_not_called() - - @patch('open_clip.pretrained.urllib') - def test_download_pretrained_from_url_from_openaipublic_corrupted_cache(self, urllib): - file_contents = b'pretrained model weights' - expected_hash = hashlib.sha256(file_contents).hexdigest() - urllib.request.urlopen.return_value = self.create_response(file_contents) - with tempfile.TemporaryDirectory() as root: - local_file = Path(root) / 'RN50.pt' - local_file.write_bytes(b'corrupted pretrained model') - url = f'https://openaipublic.azureedge.net/clip/models/{expected_hash}/RN50.pt' - download_pretrained_from_url(url, root) - urllib.request.urlopen.assert_called_once() - - @patch('open_clip.pretrained.urllib') - def test_download_pretrained_from_url_from_mlfoundations(self, urllib): - file_contents = b'pretrained model weights' - expected_hash = hashlib.sha256(file_contents).hexdigest()[:8] - urllib.request.urlopen.return_value = self.create_response(file_contents) - with tempfile.TemporaryDirectory() as root: - url = f'https://github.com/mlfoundations/download/v0.2-weights/rn50-quickgelu-{expected_hash}.pt' - download_pretrained_from_url(url, root) - urllib.request.urlopen.assert_called_once() - - @patch('open_clip.pretrained.urllib') - def test_download_pretrained_from_url_from_mlfoundations_corrupted(self, urllib): - file_contents = b'pretrained model weights' - expected_hash = hashlib.sha256(file_contents).hexdigest()[:8] - urllib.request.urlopen.return_value = self.create_response(b'corrupted pretrained model') - with tempfile.TemporaryDirectory() as root: - url = f'https://github.com/mlfoundations/download/v0.2-weights/rn50-quickgelu-{expected_hash}.pt' - with self.assertRaisesRegex(RuntimeError, r'checksum does not not match'): - download_pretrained_from_url(url, root) - urllib.request.urlopen.assert_called_once() - - @patch('open_clip.pretrained.urllib') - def test_download_pretrained_from_hfh(self, urllib): - model, _, preprocess = open_clip.create_model_and_transforms('hf-hub:hf-internal-testing/tiny-open-clip-model') - tokenizer = open_clip.get_tokenizer('hf-hub:hf-internal-testing/tiny-open-clip-model') - img_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" - image = preprocess(Image.open(requests.get(img_url, stream=True).raw)).unsqueeze(0) - text = tokenizer(["a diagram", "a dog", "a cat"]) - - with torch.no_grad(): - image_features = model.encode_image(image) - text_features = model.encode_text(text) - image_features /= image_features.norm(dim=-1, keepdim=True) - text_features /= text_features.norm(dim=-1, keepdim=True) - - text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1) - - self.assertTrue(torch.allclose(text_probs, torch.tensor([[0.0597, 0.6349, 0.3053]]), 1e-3)) diff --git a/spaces/hamelcubsfan/AutoGPT/autogpt/promptgenerator.py b/spaces/hamelcubsfan/AutoGPT/autogpt/promptgenerator.py deleted file mode 100644 index 0ad7046a0c41dab356abcd0151b65890e5544cd2..0000000000000000000000000000000000000000 --- a/spaces/hamelcubsfan/AutoGPT/autogpt/promptgenerator.py +++ /dev/null @@ -1,138 +0,0 @@ -""" A module for generating custom prompt strings.""" -from __future__ import annotations - -import json -from typing import Any - - -class PromptGenerator: - """ - A class for generating custom prompt strings based on constraints, commands, - resources, and performance evaluations. - """ - - def __init__(self) -> None: - """ - Initialize the PromptGenerator object with empty lists of constraints, - commands, resources, and performance evaluations. - """ - self.constraints = [] - self.commands = [] - self.resources = [] - self.performance_evaluation = [] - self.response_format = { - "thoughts": { - "text": "thought", - "reasoning": "reasoning", - "plan": "- short bulleted\n- list that conveys\n- long-term plan", - "criticism": "constructive self-criticism", - "speak": "thoughts summary to say to user", - }, - "command": {"name": "command name", "args": {"arg name": "value"}}, - } - - def add_constraint(self, constraint: str) -> None: - """ - Add a constraint to the constraints list. - - Args: - constraint (str): The constraint to be added. - """ - self.constraints.append(constraint) - - def add_command(self, command_label: str, command_name: str, args=None) -> None: - """ - Add a command to the commands list with a label, name, and optional arguments. - - Args: - command_label (str): The label of the command. - command_name (str): The name of the command. - args (dict, optional): A dictionary containing argument names and their - values. Defaults to None. - """ - if args is None: - args = {} - - command_args = {arg_key: arg_value for arg_key, arg_value in args.items()} - - command = { - "label": command_label, - "name": command_name, - "args": command_args, - } - - self.commands.append(command) - - def _generate_command_string(self, command: dict[str, Any]) -> str: - """ - Generate a formatted string representation of a command. - - Args: - command (dict): A dictionary containing command information. - - Returns: - str: The formatted command string. - """ - args_string = ", ".join( - f'"{key}": "{value}"' for key, value in command["args"].items() - ) - return f'{command["label"]}: "{command["name"]}", args: {args_string}' - - def add_resource(self, resource: str) -> None: - """ - Add a resource to the resources list. - - Args: - resource (str): The resource to be added. - """ - self.resources.append(resource) - - def add_performance_evaluation(self, evaluation: str) -> None: - """ - Add a performance evaluation item to the performance_evaluation list. - - Args: - evaluation (str): The evaluation item to be added. - """ - self.performance_evaluation.append(evaluation) - - def _generate_numbered_list(self, items: list[Any], item_type="list") -> str: - """ - Generate a numbered list from given items based on the item_type. - - Args: - items (list): A list of items to be numbered. - item_type (str, optional): The type of items in the list. - Defaults to 'list'. - - Returns: - str: The formatted numbered list. - """ - if item_type == "command": - return "\n".join( - f"{i+1}. {self._generate_command_string(item)}" - for i, item in enumerate(items) - ) - else: - return "\n".join(f"{i+1}. {item}" for i, item in enumerate(items)) - - def generate_prompt_string(self) -> str: - """ - Generate a prompt string based on the constraints, commands, resources, - and performance evaluations. - - Returns: - str: The generated prompt string. - """ - formatted_response_format = json.dumps(self.response_format, indent=4) - return ( - f"Constraints:\n{self._generate_numbered_list(self.constraints)}\n\n" - "Commands:\n" - f"{self._generate_numbered_list(self.commands, item_type='command')}\n\n" - f"Resources:\n{self._generate_numbered_list(self.resources)}\n\n" - "Performance Evaluation:\n" - f"{self._generate_numbered_list(self.performance_evaluation)}\n\n" - "You should only respond in JSON format as described below \nResponse" - f" Format: \n{formatted_response_format} \nEnsure the response can be" - " parsed by Python json.loads" - ) diff --git a/spaces/haoheliu/audioldm2-text2audio-text2music/app.py b/spaces/haoheliu/audioldm2-text2audio-text2music/app.py deleted file mode 100644 index ab13a479523c5d51d186dd49ee71f1274c9b9c0c..0000000000000000000000000000000000000000 --- a/spaces/haoheliu/audioldm2-text2audio-text2music/app.py +++ /dev/null @@ -1,173 +0,0 @@ -import gradio as gr -import torch -from diffusers import AudioLDM2Pipeline - - -# make Space compatible with CPU duplicates -if torch.cuda.is_available(): - device = "cuda" - torch_dtype = torch.float16 -else: - device = "cpu" - torch_dtype = torch.float32 - -# load the diffusers pipeline -repo_id = "cvssp/audioldm2" -pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch_dtype).to(device) -# pipe.unet = torch.compile(pipe.unet) - -# set the generator for reproducibility -generator = torch.Generator(device) - - -def text2audio(text, negative_prompt, duration, guidance_scale, random_seed, n_candidates): - if text is None: - raise gr.Error("Please provide a text input.") - - waveforms = pipe( - text, - audio_length_in_s=duration, - guidance_scale=guidance_scale, - num_inference_steps=200, - negative_prompt=negative_prompt, - num_waveforms_per_prompt=n_candidates if n_candidates else 1, - generator=generator.manual_seed(int(random_seed)), - )["audios"] - - return gr.make_waveform((16000, waveforms[0]), bg_image="bg.png") - - -iface = gr.Blocks() - -with iface: - gr.HTML( - """ -
    -
    -

    - AudioLDM 2: A General Framework for Audio, Music, and Speech Generation -

    -

    - [Paper] [Project - page] [🧨 - Diffusers] -

    -
    - """ - ) - gr.HTML("""This is the demo for AudioLDM 2, powered by 🧨 Diffusers. Demo uses the checkpoint AudioLDM 2 base. For faster inference without waiting in - queue, you may duplicate the space and upgrade to a GPU in the settings.""") - gr.DuplicateButton() - - with gr.Group(): - textbox = gr.Textbox( - value="The vibrant beat of Brazilian samba drums.", - max_lines=1, - label="Input text", - info="Your text is important for the audio quality. Please ensure it is descriptive by using more adjectives.", - elem_id="prompt-in", - ) - negative_textbox = gr.Textbox( - value="Low quality.", - max_lines=1, - label="Negative prompt", - info="Enter a negative prompt not to guide the audio generation. Selecting appropriate negative prompts can improve the audio quality significantly.", - elem_id="prompt-in", - ) - - with gr.Accordion("Click to modify detailed configurations", open=False): - seed = gr.Number( - value=45, - label="Seed", - info="Change this value (any integer number) will lead to a different generation result.", - ) - duration = gr.Slider(5, 15, value=10, step=2.5, label="Duration (seconds)") - guidance_scale = gr.Slider( - 0, - 7, - value=3.5, - step=0.5, - label="Guidance scale", - info="Larger => better quality and relevancy to text; Smaller => better diversity", - ) - n_candidates = gr.Slider( - 1, - 5, - value=3, - step=1, - label="Number waveforms to generate", - info="Automatic quality control. This number control the number of candidates (e.g., generate three audios and choose the best to show you). A larger value usually lead to better quality with heavier computation", - ) - - outputs = gr.Video(label="Output", elem_id="output-video") - btn = gr.Button("Submit") - - btn.click( - text2audio, - inputs=[textbox, negative_textbox, duration, guidance_scale, seed, n_candidates], - # inputs=[textbox, negative_textbox, 10, guidance_scale, seed, n_candidates], - outputs=[outputs], - ) - - gr.HTML( - """ - - """ - ) - gr.Examples( - [ - ["A hammer is hitting a wooden surface.", "Low quality.", 10, 3.5, 45, 3], - ["A cat is meowing for attention.", "Low quality.", 10, 3.5, 45, 3], - ["An excited crowd cheering at a sports game.", "Low quality.", 10, 3.5, 45, 3], - ["Birds singing sweetly in a blooming garden.", "Low quality.", 10, 3.5, 45, 3], - ["A modern synthesizer creating futuristic soundscapes.", "Low quality.", 10, 3.5, 45, 3], - ["The vibrant beat of Brazilian samba drums.", "Low quality.", 10, 3.5, 45, 3], - ], - fn=text2audio, - inputs=[textbox, negative_textbox, duration, guidance_scale, seed, n_candidates], - outputs=[outputs], - cache_examples=True, - ) - gr.HTML( - """ -

    Essential Tricks for Enhancing the Quality of Your Generated - Audio

    -

    1. Try using more adjectives to describe your sound. For example: "A man is speaking - clearly and slowly in a large room" is better than "A man is speaking".

    -

    2. Try using different random seeds, which can significantly affect the quality of the generated - output.

    -

    3. It's better to use general terms like 'man' or 'woman' instead of specific names for individuals or - abstract objects that humans may not be familiar with.

    -

    4. Using a negative prompt to not guide the diffusion process can improve the - audio quality significantly. Try using negative prompts like 'low quality'.

    -
    - """ - ) - with gr.Accordion("Additional information", open=False): - gr.HTML( - """ -
    -

    We build the model with data from AudioSet, - Freesound and BBC Sound Effect library. We share this demo - based on the UK - copyright exception of data for academic research. -

    -
    - """ - ) - -iface.queue(max_size=20).launch() diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/matcher.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/matcher.py deleted file mode 100644 index 92331b97a7eb8e5e8bba6219a576b926520ea351..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/matcher.py +++ /dev/null @@ -1,115 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -import torch - - -class Matcher(object): - """ - This class assigns to each predicted "element" (e.g., a box) a ground-truth - element. Each predicted element will have exactly zero or one matches; each - ground-truth element may be assigned to zero or more predicted elements. - - Matching is based on the MxN match_quality_matrix, that characterizes how well - each (ground-truth, predicted)-pair match. For example, if the elements are - boxes, the matrix may contain box IoU overlap values. - - The matcher returns a tensor of size N containing the index of the ground-truth - element m that matches to prediction n. If there is no match, a negative value - is returned. - """ - - BELOW_LOW_THRESHOLD = -1 - BETWEEN_THRESHOLDS = -2 - - def __init__(self, high_threshold, low_threshold, allow_low_quality_matches=False): - """ - Args: - high_threshold (float): quality values greater than or equal to - this value are candidate matches. - low_threshold (float): a lower quality threshold used to stratify - matches into three levels: - 1) matches >= high_threshold - 2) BETWEEN_THRESHOLDS matches in [low_threshold, high_threshold) - 3) BELOW_LOW_THRESHOLD matches in [0, low_threshold) - allow_low_quality_matches (bool): if True, produce additional matches - for predictions that have only low-quality match candidates. See - set_low_quality_matches_ for more details. - """ - assert low_threshold <= high_threshold - self.high_threshold = high_threshold - self.low_threshold = low_threshold - self.allow_low_quality_matches = allow_low_quality_matches - - def __call__(self, match_quality_matrix): - """ - Args: - match_quality_matrix (Tensor[float]): an MxN tensor, containing the - pairwise quality between M ground-truth elements and N predicted elements. - - Returns: - matches (Tensor[int64]): an N tensor where N[i] is a matched gt in - [0, M - 1] or a negative value indicating that prediction i could not - be matched. - """ - if match_quality_matrix.numel() == 0: - # empty targets or proposals not supported during training - if match_quality_matrix.shape[0] == 0: - # raise ValueError( - # "No ground-truth boxes available for one of the images " - # "during training") - length = match_quality_matrix.size(1) - device = match_quality_matrix.device - return torch.ones(length, dtype=torch.int64, device=device) * -1 - else: - raise ValueError( - "No proposal boxes available for one of the images " - "during training") - - # match_quality_matrix is M (gt) x N (predicted) - # Max over gt elements (dim 0) to find best gt candidate for each prediction - matched_vals, matches = match_quality_matrix.max(dim=0) - if self.allow_low_quality_matches: - all_matches = matches.clone() - - # Assign candidate matches with low quality to negative (unassigned) values - below_low_threshold = matched_vals < self.low_threshold - between_thresholds = (matched_vals >= self.low_threshold) & ( - matched_vals < self.high_threshold - ) - matches[below_low_threshold] = Matcher.BELOW_LOW_THRESHOLD - matches[between_thresholds] = Matcher.BETWEEN_THRESHOLDS - - if self.allow_low_quality_matches: - self.set_low_quality_matches_(matches, all_matches, match_quality_matrix) - - return matches - - def set_low_quality_matches_(self, matches, all_matches, match_quality_matrix): - """ - Produce additional matches for predictions that have only low-quality matches. - Specifically, for each ground-truth find the set of predictions that have - maximum overlap with it (including ties); for each prediction in that set, if - it is unmatched, then match it to the ground-truth with which it has the highest - quality value. - """ - # For each gt, find the prediction with which it has highest quality - highest_quality_foreach_gt, _ = match_quality_matrix.max(dim=1) - # Find highest quality match available, even if it is low, including ties - gt_pred_pairs_of_highest_quality = torch.nonzero( - match_quality_matrix == highest_quality_foreach_gt[:, None] - ) - # Example gt_pred_pairs_of_highest_quality: - # tensor([[ 0, 39796], - # [ 1, 32055], - # [ 1, 32070], - # [ 2, 39190], - # [ 2, 40255], - # [ 3, 40390], - # [ 3, 41455], - # [ 4, 45470], - # [ 5, 45325], - # [ 5, 46390]]) - # Each row is a (gt index, prediction index) - # Note how gt items 1, 2, 3, and 5 each have two ties - - pred_inds_to_update = gt_pred_pairs_of_highest_quality[:, 1] - matches[pred_inds_to_update] = all_matches[pred_inds_to_update] diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/modeling/sampling.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/modeling/sampling.py deleted file mode 100644 index ecf251a2fa301d9e31eee7d3ba5dc6eaab1732f8..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/modeling/sampling.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import torch - -__all__ = ["subsample_labels"] - - -def subsample_labels(labels, num_samples, positive_fraction, bg_label): - """ - Return `num_samples` (or fewer, if not enough found) - random samples from `labels` which is a mixture of positives & negatives. - It will try to return as many positives as possible without - exceeding `positive_fraction * num_samples`, and then try to - fill the remaining slots with negatives. - - Args: - labels (Tensor): (N, ) label vector with values: - * -1: ignore - * bg_label: background ("negative") class - * otherwise: one or more foreground ("positive") classes - num_samples (int): The total number of labels with value >= 0 to return. - Values that are not sampled will be filled with -1 (ignore). - positive_fraction (float): The number of subsampled labels with values > 0 - is `min(num_positives, int(positive_fraction * num_samples))`. The number - of negatives sampled is `min(num_negatives, num_samples - num_positives_sampled)`. - In order words, if there are not enough positives, the sample is filled with - negatives. If there are also not enough negatives, then as many elements are - sampled as is possible. - bg_label (int): label index of background ("negative") class. - - Returns: - pos_idx, neg_idx (Tensor): - 1D vector of indices. The total length of both is `num_samples` or fewer. - """ - positive = torch.nonzero((labels != -1) & (labels != bg_label), as_tuple=True)[0] - negative = torch.nonzero(labels == bg_label, as_tuple=True)[0] - - num_pos = int(num_samples * positive_fraction) - # protect against not enough positive examples - num_pos = min(positive.numel(), num_pos) - num_neg = num_samples - num_pos - # protect against not enough negative examples - num_neg = min(negative.numel(), num_neg) - - # randomly select positive and negative examples - perm1 = torch.randperm(positive.numel(), device=positive.device)[:num_pos] - perm2 = torch.randperm(negative.numel(), device=negative.device)[:num_neg] - - pos_idx = positive[perm1] - neg_idx = negative[perm2] - return pos_idx, neg_idx diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/PointRend/point_rend/roi_heads.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/PointRend/point_rend/roi_heads.py deleted file mode 100644 index 4f7225bf10544461bbe1e3c777863557f2ad5808..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/PointRend/point_rend/roi_heads.py +++ /dev/null @@ -1,227 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import numpy as np -import torch - -from detectron2.layers import ShapeSpec, cat, interpolate -from detectron2.modeling import ROI_HEADS_REGISTRY, StandardROIHeads -from detectron2.modeling.roi_heads.mask_head import ( - build_mask_head, - mask_rcnn_inference, - mask_rcnn_loss, -) -from detectron2.modeling.roi_heads.roi_heads import select_foreground_proposals - -from .point_features import ( - generate_regular_grid_point_coords, - get_uncertain_point_coords_on_grid, - get_uncertain_point_coords_with_randomness, - point_sample, - point_sample_fine_grained_features, -) -from .point_head import build_point_head, roi_mask_point_loss - - -def calculate_uncertainty(logits, classes): - """ - We estimate uncerainty as L1 distance between 0.0 and the logit prediction in 'logits' for the - foreground class in `classes`. - - Args: - logits (Tensor): A tensor of shape (R, C, ...) or (R, 1, ...) for class-specific or - class-agnostic, where R is the total number of predicted masks in all images and C is - the number of foreground classes. The values are logits. - classes (list): A list of length R that contains either predicted of ground truth class - for eash predicted mask. - - Returns: - scores (Tensor): A tensor of shape (R, 1, ...) that contains uncertainty scores with - the most uncertain locations having the highest uncertainty score. - """ - if logits.shape[1] == 1: - gt_class_logits = logits.clone() - else: - gt_class_logits = logits[ - torch.arange(logits.shape[0], device=logits.device), classes - ].unsqueeze(1) - return -(torch.abs(gt_class_logits)) - - -@ROI_HEADS_REGISTRY.register() -class PointRendROIHeads(StandardROIHeads): - """ - The RoI heads class for PointRend instance segmentation models. - - In this class we redefine the mask head of `StandardROIHeads` leaving all other heads intact. - To avoid namespace conflict with other heads we use names starting from `mask_` for all - variables that correspond to the mask head in the class's namespace. - """ - - def __init__(self, cfg, input_shape): - # TODO use explicit args style - super().__init__(cfg, input_shape) - self._init_mask_head(cfg, input_shape) - - def _init_mask_head(self, cfg, input_shape): - # fmt: off - self.mask_on = cfg.MODEL.MASK_ON - if not self.mask_on: - return - self.mask_coarse_in_features = cfg.MODEL.ROI_MASK_HEAD.IN_FEATURES - self.mask_coarse_side_size = cfg.MODEL.ROI_MASK_HEAD.POOLER_RESOLUTION - self._feature_scales = {k: 1.0 / v.stride for k, v in input_shape.items()} - # fmt: on - - in_channels = np.sum([input_shape[f].channels for f in self.mask_coarse_in_features]) - self.mask_coarse_head = build_mask_head( - cfg, - ShapeSpec( - channels=in_channels, - width=self.mask_coarse_side_size, - height=self.mask_coarse_side_size, - ), - ) - self._init_point_head(cfg, input_shape) - - def _init_point_head(self, cfg, input_shape): - # fmt: off - self.mask_point_on = cfg.MODEL.ROI_MASK_HEAD.POINT_HEAD_ON - if not self.mask_point_on: - return - assert cfg.MODEL.ROI_HEADS.NUM_CLASSES == cfg.MODEL.POINT_HEAD.NUM_CLASSES - self.mask_point_in_features = cfg.MODEL.POINT_HEAD.IN_FEATURES - self.mask_point_train_num_points = cfg.MODEL.POINT_HEAD.TRAIN_NUM_POINTS - self.mask_point_oversample_ratio = cfg.MODEL.POINT_HEAD.OVERSAMPLE_RATIO - self.mask_point_importance_sample_ratio = cfg.MODEL.POINT_HEAD.IMPORTANCE_SAMPLE_RATIO - # next two parameters are use in the adaptive subdivions inference procedure - self.mask_point_subdivision_steps = cfg.MODEL.POINT_HEAD.SUBDIVISION_STEPS - self.mask_point_subdivision_num_points = cfg.MODEL.POINT_HEAD.SUBDIVISION_NUM_POINTS - # fmt: on - - in_channels = np.sum([input_shape[f].channels for f in self.mask_point_in_features]) - self.mask_point_head = build_point_head( - cfg, ShapeSpec(channels=in_channels, width=1, height=1) - ) - - def _forward_mask(self, features, instances): - """ - Forward logic of the mask prediction branch. - - Args: - features (dict[str, Tensor]): #level input features for mask prediction - instances (list[Instances]): the per-image instances to train/predict masks. - In training, they can be the proposals. - In inference, they can be the predicted boxes. - - Returns: - In training, a dict of losses. - In inference, update `instances` with new fields "pred_masks" and return it. - """ - if not self.mask_on: - return {} if self.training else instances - - if self.training: - proposals, _ = select_foreground_proposals(instances, self.num_classes) - proposal_boxes = [x.proposal_boxes for x in proposals] - mask_coarse_logits = self._forward_mask_coarse(features, proposal_boxes) - - losses = {"loss_mask": mask_rcnn_loss(mask_coarse_logits, proposals)} - losses.update(self._forward_mask_point(features, mask_coarse_logits, proposals)) - return losses - else: - pred_boxes = [x.pred_boxes for x in instances] - mask_coarse_logits = self._forward_mask_coarse(features, pred_boxes) - - mask_logits = self._forward_mask_point(features, mask_coarse_logits, instances) - mask_rcnn_inference(mask_logits, instances) - return instances - - def _forward_mask_coarse(self, features, boxes): - """ - Forward logic of the coarse mask head. - """ - point_coords = generate_regular_grid_point_coords( - np.sum(len(x) for x in boxes), self.mask_coarse_side_size, boxes[0].device - ) - mask_coarse_features_list = [features[k] for k in self.mask_coarse_in_features] - features_scales = [self._feature_scales[k] for k in self.mask_coarse_in_features] - # For regular grids of points, this function is equivalent to `len(features_list)' calls - # of `ROIAlign` (with `SAMPLING_RATIO=2`), and concat the results. - mask_features, _ = point_sample_fine_grained_features( - mask_coarse_features_list, features_scales, boxes, point_coords - ) - return self.mask_coarse_head(mask_features) - - def _forward_mask_point(self, features, mask_coarse_logits, instances): - """ - Forward logic of the mask point head. - """ - if not self.mask_point_on: - return {} if self.training else mask_coarse_logits - - mask_features_list = [features[k] for k in self.mask_point_in_features] - features_scales = [self._feature_scales[k] for k in self.mask_point_in_features] - - if self.training: - proposal_boxes = [x.proposal_boxes for x in instances] - gt_classes = cat([x.gt_classes for x in instances]) - with torch.no_grad(): - point_coords = get_uncertain_point_coords_with_randomness( - mask_coarse_logits, - lambda logits: calculate_uncertainty(logits, gt_classes), - self.mask_point_train_num_points, - self.mask_point_oversample_ratio, - self.mask_point_importance_sample_ratio, - ) - - fine_grained_features, point_coords_wrt_image = point_sample_fine_grained_features( - mask_features_list, features_scales, proposal_boxes, point_coords - ) - coarse_features = point_sample(mask_coarse_logits, point_coords, align_corners=False) - point_logits = self.mask_point_head(fine_grained_features, coarse_features) - return { - "loss_mask_point": roi_mask_point_loss( - point_logits, instances, point_coords_wrt_image - ) - } - else: - pred_boxes = [x.pred_boxes for x in instances] - pred_classes = cat([x.pred_classes for x in instances]) - # The subdivision code will fail with the empty list of boxes - if len(pred_classes) == 0: - return mask_coarse_logits - - mask_logits = mask_coarse_logits.clone() - for subdivions_step in range(self.mask_point_subdivision_steps): - mask_logits = interpolate( - mask_logits, scale_factor=2, mode="bilinear", align_corners=False - ) - # If `mask_point_subdivision_num_points` is larger or equal to the - # resolution of the next step, then we can skip this step - H, W = mask_logits.shape[-2:] - if ( - self.mask_point_subdivision_num_points >= 4 * H * W - and subdivions_step < self.mask_point_subdivision_steps - 1 - ): - continue - uncertainty_map = calculate_uncertainty(mask_logits, pred_classes) - point_indices, point_coords = get_uncertain_point_coords_on_grid( - uncertainty_map, self.mask_point_subdivision_num_points - ) - fine_grained_features, _ = point_sample_fine_grained_features( - mask_features_list, features_scales, pred_boxes, point_coords - ) - coarse_features = point_sample( - mask_coarse_logits, point_coords, align_corners=False - ) - point_logits = self.mask_point_head(fine_grained_features, coarse_features) - - # put mask point predictions to the right places on the upsampled grid. - R, C, H, W = mask_logits.shape - point_indices = point_indices.unsqueeze(1).expand(-1, C, -1) - mask_logits = ( - mask_logits.reshape(R, C, H * W) - .scatter_(2, point_indices, point_logits) - .view(R, C, H, W) - ) - return mask_logits diff --git a/spaces/haung/clear/README.md b/spaces/haung/clear/README.md deleted file mode 100644 index a1d8f85510b85899dc7d22770ab7859484075847..0000000000000000000000000000000000000000 --- a/spaces/haung/clear/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Real CUGAN -emoji: 🔥 -colorFrom: pink -colorTo: yellow -sdk: gradio -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/heiyubili/bingo/src/components/theme-toggle.tsx b/spaces/heiyubili/bingo/src/components/theme-toggle.tsx deleted file mode 100644 index 67d3f1a2c163ccbeb52c40a7e42f107190237154..0000000000000000000000000000000000000000 --- a/spaces/heiyubili/bingo/src/components/theme-toggle.tsx +++ /dev/null @@ -1,31 +0,0 @@ -'use client' - -import * as React from 'react' -import { useTheme } from 'next-themes' - -import { Button } from '@/components/ui/button' -import { IconMoon, IconSun } from '@/components/ui/icons' - -export function ThemeToggle() { - const { setTheme, theme } = useTheme() - const [_, startTransition] = React.useTransition() - - return ( - - ) -} diff --git a/spaces/helliun/gpt4-associative-memory/README.md b/spaces/helliun/gpt4-associative-memory/README.md deleted file mode 100644 index 831e2ded4f38c5aa4493e9756ddb202afcaf68c6..0000000000000000000000000000000000000000 --- a/spaces/helliun/gpt4-associative-memory/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: GPT-4 with Associative Memory -emoji: 😻 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/hoang1007/wav2vec2/src/model/__init__.py b/spaces/hoang1007/wav2vec2/src/model/__init__.py deleted file mode 100644 index fd8d3ee4f5e40f284e5b9db83dd87abd5373c0f0..0000000000000000000000000000000000000000 --- a/spaces/hoang1007/wav2vec2/src/model/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .wav2vec2 import Wav2Vec2PretrainingModule diff --git a/spaces/hugforziio/chat-gpt-batch/app_js.py b/spaces/hugforziio/chat-gpt-batch/app_js.py deleted file mode 100644 index a65f4bdc768d741fd8a2007052b90ee8a775c73d..0000000000000000000000000000000000000000 --- a/spaces/hugforziio/chat-gpt-batch/app_js.py +++ /dev/null @@ -1,196 +0,0 @@ - -saved_prompts_refresh_btn__click_js = """(global_state_json, saved_prompts)=>{ - try { - if(global_state_json=="") {global_state_json=null;}; - console.log('global_state_json:\\n', global_state_json); - const global_state = JSON.parse(global_state_json??"{ }")??{ }; - - const saved = (JSON.parse(localStorage?.getItem?.('[gradio][chat-gpt-ui][prompts]') ?? '[]')); - console.log('saved:\\n', saved); - global_state['saved_prompts'] = saved; - global_state['selected_saved_prompt_title'] = saved.map(it=>it?.title??"[untitled]")[0]; - - const results = [JSON.stringify(global_state), global_state['selected_saved_prompt_title']]; - console.log(results); - return results; - } catch(error) { - console.log(error); - return ["{ }", ""]; - }; -}""" - - -selected_saved_prompt_title__change_js = """(global_state_json, selected_saved_prompt_title)=>{ - if(global_state_json=="") {global_state_json=null;}; - const global_state = JSON.parse(global_state_json??"{ }")??{ }; - const found = (global_state?.['saved_prompts']??[]).find(it=>it?.title==selected_saved_prompt_title); - return [JSON.stringify(global_state), found?.title??'', found?.content??{data:[], headers:["role", "content"]}]; -}""" - - -saved_prompts_delete_btn__click_js = """(global_state_json, saved_prompts, prompt_title, prompt_table)=>{ - if(prompt_title==""||!prompt_title){ - return [global_state_json, selected_saved_prompt_title, prompt_title, prompt_table]; - }; - console.log('global_state_json:\\n', global_state_json); - - if(global_state_json=="") {global_state_json=null;}; - const global_state = JSON.parse(global_state_json??"{ }")??{ }; - console.log(global_state); - - const saved = (JSON.parse(localStorage?.getItem?.('[gradio][chat-gpt-ui][prompts]') ?? '[]')); - console.log('saved:\\n', saved); - - - global_state['saved_prompts'] = saved?.filter?.(it=>it.title!=prompt_title)??[]; - - global_state['selected_saved_prompt_title'] = ""; - - console.log(global_state); - - localStorage?.setItem?.('[gradio][chat-gpt-ui][prompts]', JSON.stringify(global_state['saved_prompts'])); - - return [JSON.stringify(global_state), "", "", {data: [], headers: ['role', 'content']}]; -}""" - - -saved_prompts_save_btn__click_js = """(global_state_json, saved_prompts, prompt_title, prompt_table)=>{ - if(prompt_title==""||!prompt_title){ - return [global_state_json, selected_saved_prompt_title, prompt_title, prompt_table]; - }; - console.log('global_state_json:\\n', global_state_json); - - if(global_state_json=="") {global_state_json=null;}; - const global_state = JSON.parse(global_state_json??"{ }")??{ }; - console.log(global_state); - - const saved = (JSON.parse(localStorage?.getItem?.('[gradio][chat-gpt-ui][prompts]') ?? '[]')); - console.log('saved:\\n', saved); - - - const new_prompt_obj = { - title: prompt_title, content: prompt_table, - }; - - global_state['saved_prompts'] = saved?.filter?.(it=>it.title!=prompt_title)??[]; - - global_state['saved_prompts'].unshift(new_prompt_obj); - - global_state['selected_saved_prompt_title'] = prompt_title; - - console.log(global_state); - - localStorage?.setItem?.('[gradio][chat-gpt-ui][prompts]', JSON.stringify(global_state['saved_prompts'])); - - return [JSON.stringify(global_state), prompt_title, prompt_title, prompt_table]; -}""" - - -copy_prompt__click_js = """(prompt_title, prompt_table)=>{ - try { - const txt = JSON.stringify({ - title: prompt_title, - content: prompt_table, - }, null, 2); - console.log(txt); - const promise = navigator?.clipboard?.writeText?.(txt); - } catch(error) {console?.log?.(error);}; - return [prompt_title, prompt_table]; -}""" - - -paste_prompt__click_js = """async (prompt_title, prompt_table)=>{ - console.log("flag1"); - try { - const promise = navigator?.clipboard?.readText?.(); - console.log(promise); - console.log("flag1 p"); - const result = await promise?.then?.((txt)=>{ - console.log("flag1 t"); - const json = JSON.parse(txt); - const title = json?.title ?? ""; - console.log("flag1 0"); - console.log(title); - const content = json?.content ?? {data: [], headers: ['role', 'content']}; - console.log(content); - const result = [title, content]; - console.log("flag1 1"); - console.log(result); - console.log("flag1 2"); - return result; - }); - console.log("flag1 3"); - if (result!=null) { - return result; - }; - } catch(error) {console?.log?.(error);}; - console.log("flag2"); - try { - const promise = navigator?.clipboard?.read?.(); - console.log(promise); - promise?.then?.((data)=>{ - console.log(data); - }); - } catch(error) {console?.log?.(error);}; - console.log("flag3"); - return [prompt_title, prompt_table]; -}""" - - -chat_copy_history_btn__click_js = """(txt)=>{ - console.log(txt); - try {let promise = navigator?.clipboard?.writeText?.(txt);} - catch(error) {console?.log?.(error);}; -}""" - - -chat_copy_history_md_btn__click_js = """(txt)=>{ - console.log(txt); - try {let promise = navigator?.clipboard?.writeText?.(txt);} - catch(error) {console?.log?.(error);}; -}""" - - -# api_key_refresh_btn__click_js = """()=>{ -# const the_api_key = localStorage?.getItem?.('[gradio][chat-gpt-ui][api_key_text]') ?? ''; -# return the_api_key; -# }""" - - -# api_key_save_btn__click_js = """(api_key_text)=>{ -# localStorage.setItem('[gradio][chat-gpt-ui][api_key_text]', api_key_text); -# return api_key_text; -# }""" - - -api_key_refresh_btn__click_js = """()=>{ - const the_api_key = localStorage?.getItem?.('[gradio][chat-gpt-ui][api_key_text]') ?? ''; - return the_api_key; -}""" - - -api_key_save_btn__click_js = """(api_key_text)=>{ - localStorage.setItem('[gradio][chat-gpt-ui][api_key_text]', api_key_text); - return api_key_text; -}""" - - -api_key__get_from_browser = """()=>{ - const api_key = localStorage?.getItem?.('[gradio][chat-gpt-ui][api_key]') ?? ''; - const token = localStorage?.getItem?.('[gradio][chat-gpt-ui][token]') ?? ''; - return [api_key, token]; -}""" - -api_key__save_to_browser = """(api_key, token)=>{ - localStorage?.setItem?.('[gradio][chat-gpt-ui][api_key]', api_key); - token = localStorage?.getItem?.('[gradio][chat-gpt-ui][token]') ?? token ?? ''; - if (!token?.length) { - const temp_url = URL.createObjectURL(new Blob()); - const uuid = temp_url.toString(); - URL.revokeObjectURL(temp_url); - token = uuid.substr(uuid.lastIndexOf("/") + 1); - }; - localStorage.setItem('[gradio][chat-gpt-ui][token]', token); - return [api_key, token]; -}""" - diff --git a/spaces/hugginglearners/Hearts_Leaderboard/README.md b/spaces/hugginglearners/Hearts_Leaderboard/README.md deleted file mode 100644 index 2476dcc6a0832647ca0c481a8d6a063bbf7e1454..0000000000000000000000000000000000000000 --- a/spaces/hugginglearners/Hearts_Leaderboard/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Hearts Leaderboard -emoji: 📈 -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.0.22 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/hyoo/translate/app.py b/spaces/hyoo/translate/app.py deleted file mode 100644 index 27ce69af4dc44f7dffe2377529152f8b886264b7..0000000000000000000000000000000000000000 --- a/spaces/hyoo/translate/app.py +++ /dev/null @@ -1,33 +0,0 @@ -import gradio as gr -from transformers import M2M100ForConditionalGeneration -from tokenization_small100 import SMALL100Tokenizer - -langs = """af,am,ar,ast,az,ba,be,bg,bn,br,bs,ca,ceb,cs,cy,da,de,el,en,es,et,fa,ff,fi,fr,fy,ga,gd,gl,gu,ha,he,hi,hr,ht,hu,hy,id,ig,ilo,is,it,ja,jv,ka,kk,km,kn,ko,lb,lg,ln,lo,lt,lv,mg,mk,ml,mn,mr,ms,my,ne,nl,no,ns,oc,or,pa,pl,ps,pt,ro,ru,sd,si,sk,sl,so,sq,sr,ss,su,sv,sw,ta,th,tl,tn,tr,uk,ur,uz,vi,wo,xh,yi,yo,zh,zu""" -lang_list = langs.split(',') - -model = M2M100ForConditionalGeneration.from_pretrained("alirezamsh/small100") -tokenizer = SMALL100Tokenizer.from_pretrained("alirezamsh/small100") - -def translate(lang, text): - tokenizer.tgt_lang = lang - encoded_text = tokenizer(text, return_tensors="pt") - generated_tokens = model.generate(**encoded_text) - return tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0] - -with gr.Blocks(analytics_enabled=False) as app: - - Source = gr.Textbox( label="Source" ) - Language = gr.Dropdown( lang_list, label="Language" ) - Translate = gr.Button( "Translate" ) - Result = gr.Textbox( label="Result" ) - Info = gr.Markdown( "# [$hyoo_lingua](https://lingua.hyoo.ru/)" ) - - Translate.click( - translate, - inputs=[ Language, Source ], - outputs=[Result], - api_name="translate", - ) - - app.launch( inline=True ) - block.queue( concurrency_count=2 ) diff --git a/spaces/iankur/img2tex/app.py b/spaces/iankur/img2tex/app.py deleted file mode 100644 index 852bd2fa2a271bd6e738e96a8eedcb20e3fe5de3..0000000000000000000000000000000000000000 --- a/spaces/iankur/img2tex/app.py +++ /dev/null @@ -1,50 +0,0 @@ -import os -import subprocess - -import gradio as gr -import torch -import torchvision.transforms as T -from PIL import Image -from pdf2image import convert_from_path - -from util import pil_loader, show, Tokenizer -from resnet_transformer import ResNetTransformer - -tokenizer = Tokenizer.load('vocab_tsr.json') -model = ResNetTransformer( - d_model=128, - dim_feedforward=256, - nhead=4, - dropout=0., - num_decoder_layers=3, - max_output_len=250, - sos_index=tokenizer.sos_index, - eos_index=tokenizer.eos_index, - pad_index=tokenizer.pad_index, - num_classes=len(tokenizer) -) -ckpt = torch.load("ep20ser50.9cer7.8.ckpt", map_location="cpu") -d = {} -for k, v in ckpt["state_dict"].items(): - d[k.split(".", 1)[1]] = v -model.load_state_dict(d) -model.eval() - -transforms = T.Compose([ - T.ToTensor(), - T.Resize((400, 400)) -]) - -def display(img): - img = transforms(pil_loader(img, mode='L')) - img = img.unsqueeze(0) - output = model.predict(img) - output = ' '.join(tokenizer.decode(output[0].tolist())) - output_img, output_text = show(output), output - return output_text, output_img - -gr.Interface(fn=display, - inputs=gr.inputs.Image(type="pil", label="Table image to convert"), - outputs=["text", gr.outputs.Image(type="pil")], - examples=["test.jpeg", "transformer.png", "clip_vertical.png", "dalle2_big.png"], -).launch() diff --git a/spaces/ilumine-AI/AI-3D-Explorable-Video/index.html b/spaces/ilumine-AI/AI-3D-Explorable-Video/index.html deleted file mode 100644 index cf80e9c8a0ab939b206bdde38cce125b747accdb..0000000000000000000000000000000000000000 --- a/spaces/ilumine-AI/AI-3D-Explorable-Video/index.html +++ /dev/null @@ -1,92 +0,0 @@ - - - - - - - AI 3D Video - - - - -
    - -
    - - - - - diff --git a/spaces/inamXcontru/PoeticTTS/Adobe After Effects 2020 17.0.4.59 (x64) Multilingual Create Stunning Animations and Videos with Ease.md b/spaces/inamXcontru/PoeticTTS/Adobe After Effects 2020 17.0.4.59 (x64) Multilingual Create Stunning Animations and Videos with Ease.md deleted file mode 100644 index 0a99053d560a1236a540be73782de848f6a6f43c..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Adobe After Effects 2020 17.0.4.59 (x64) Multilingual Create Stunning Animations and Videos with Ease.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Adobe After Effects 2020 17.0.4.59 (x64) Multilingual


    DOWNLOAD >>>>> https://gohhs.com/2uz4NE



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/inamXcontru/PoeticTTS/Bajos Instintos 1080p Latino 18.md b/spaces/inamXcontru/PoeticTTS/Bajos Instintos 1080p Latino 18.md deleted file mode 100644 index ab1fc66a358bef525cde9ebae528a0ded5b20bc4..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Bajos Instintos 1080p Latino 18.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Bajos Instintos 1080p Latino 18


    Download ->->->-> https://gohhs.com/2uz4aq



    -
    - 3cee63e6c2
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Bs En 13670 Pdf Download !LINK!.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Bs En 13670 Pdf Download !LINK!.md deleted file mode 100644 index 79f80493b1b3df07ee321130b32cca2517ecf396..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Bs En 13670 Pdf Download !LINK!.md +++ /dev/null @@ -1,38 +0,0 @@ -
    -

    How to Download BS EN 13670:2009 Execution of Concrete Structures PDF

    -

    BS EN 13670:2009 is a British standard that provides common requirements for execution of concrete structures and applies to in-situ work and prefabricated concrete elements, covering both permanent and temporary concrete structures. It covers topics such as falsework and formwork, reinforcement requirements, prestressing, concreting, precast concrete elements and geometric tolerances.

    -

    bs en 13670 pdf download


    Download File ===> https://urlin.us/2uExO6



    -

    If you are looking for a way to download BS EN 13670:2009 Execution of Concrete Structures PDF, you have come to the right place. In this article, we will show you how to get access to this valuable document in a few easy steps.

    -

    Step 1: Visit the BSI Website

    -

    The first step is to visit the official website of the British Standards Institution (BSI), which is the UK's national standards body and the publisher of BS EN 13670:2009. You can find their website at www.bsigroup.com.

    -

    Step 2: Search for BS EN 13670:2009

    -

    The next step is to search for BS EN 13670:2009 on the BSI website. You can use the search bar at the top right corner of the homepage, or you can browse by category or industry. Once you find the standard, click on it to see more details.

    -

    Step 3: Choose Your Format and Add to Basket

    -

    The third step is to choose your preferred format and add it to your basket. You can choose between a hard copy or a PDF download. The PDF download is cheaper and more convenient, as you can access it instantly after purchase. To add it to your basket, click on the "Add to basket" button next to the PDF option.

    -

    -

    Step 4: Checkout and Download

    -

    The final step is to checkout and download your BS EN 13670:2009 Execution of Concrete Structures PDF. You will need to create an account or log in if you already have one, and provide your payment details. After you complete your purchase, you will receive an email with a link to download your PDF file. You can also access it from your account dashboard.

    -

    Conclusion

    -

    BS EN 13670:2009 Execution of Concrete Structures is a useful standard for anyone involved in concrete construction projects. It provides clear and consistent guidelines for executing concrete structures in accordance with best practices and quality standards. By following the steps above, you can easily download a PDF copy of this standard from the BSI website.

    - -

    Benefits of BS EN 13670:2009 Execution of Concrete Structures

    -

    By following BS EN 13670:2009 Execution of Concrete Structures, you can enjoy several benefits for your concrete construction projects. Some of these benefits are:

    -
      -
    • Improved quality and durability of concrete structures
    • -
    • Reduced risk of defects and failures
    • -
    • Enhanced safety and performance of concrete structures
    • -
    • Increased compliance with regulations and specifications
    • -
    • Optimized use of resources and materials
    • -
    • Better communication and coordination among project stakeholders
    • -
    -

    BS EN 13670:2009 Execution of Concrete Structures is a comprehensive and practical standard that covers all aspects of concrete execution, from design to inspection. It is compatible with other relevant standards, such as BS EN 1992 Eurocode 2: Design of concrete structures, BS EN 206 Concrete - Specification, performance, production and conformity, and BS 8500 Concrete - Complementary British Standard to BS EN 206.

    -

    Frequently Asked Questions about BS EN 13670:2009 Execution of Concrete Structures

    -

    Here are some common questions and answers about BS EN 13670:2009 Execution of Concrete Structures:

    -

    What is the difference between BS EN 13670:2009 and DD ENV 13670-1:2000?

    -

    BS EN 13670:2009 supersedes DD ENV 13670-1:2000, which was a draft for development and not a full standard. BS EN 13670:2009 is more comprehensive and updated than DD ENV 13670-1:2000, and incorporates feedback from users and experts.

    -

    How can I get a hard copy of BS EN 13670:2009?

    -

    If you prefer a hard copy of BS EN 13670:2009, you can order it from the BSI website or from other authorized distributors. The hard copy will be delivered to your address within a few days after purchase.

    -

    How can I get updates on BS EN 13670:2009?

    -

    If you want to stay informed about any changes or amendments to BS EN 13670:2009, you can subscribe to the BSI's free notification service. You will receive an email whenever there is a new edition or corrigendum of the standard.

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Circuit Wizard Rar [PORTABLE].md b/spaces/inplisQlawa/anything-midjourney-v4-1/Circuit Wizard Rar [PORTABLE].md deleted file mode 100644 index ca4cf3cb6f1dbd435c1d1e2b91870711db396ff9..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Circuit Wizard Rar [PORTABLE].md +++ /dev/null @@ -1,124 +0,0 @@ -

    Circuit Wizard rar


    Download File ---> https://urlin.us/2uExhU



    - -download - -minesweeper rar version 7. - -jeremy murphy kong racing. - -itunes file sharing client for mac. - -minecraft fork mc 1. - -new minecraft server. - -racing dice game for java. - -minecraft file server. - -minecraft fileserver. - -minecraft fork mc 0. - -minecraft1 server. - -minecraft1 server fork mc 0. - -minecraft2 server. - -minecraft2 server fork mc 0. - -minecraft forkserver. - -minecraft1rsc. - -minecraft1rsc fork mc 0. - -minecraft2rsc. - -minecraft2rsc fork mc 0. - -minesweeper 1. - -minesweeper. - -minesweeper rar. - -xbox. - -sparkle game. - -downloader for ftp. - -minecraft rar file download. - -minecraft data. - -minecraft rar download. - -minecraft server 1. - -minecraft 1. - -minecraft1 server fork mc 1. - -minecraft2 server fork mc 1. - -minecraft1rsc fork mc 1. - -minecraft2rsc fork mc 1. - -merchant central new: - -all kinds of clip art. - -bloodstone. - -branding. - -captain america. - -cannabis dabs. - -cannabis hash oil. - -change the name of my world. - -clicar. - -creative commons. - -craft beer. - -cubist. - -dab apk. - -dead end. - -digital image. - -digital images. - -email marketing. - -everlasting. - -felix. - -featured home designs. - -fear and loathing in Las Vegas. - -fear and loathing on the sun. - -figurines. - -flowers. - -flowers rar. - -fractal art. 4fefd39f24
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Crack Fsdreamteam Gsx Fsx 15.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Crack Fsdreamteam Gsx Fsx 15.md deleted file mode 100644 index a12c76eee5f10524a2724b3514f71e1064e64a25..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Crack Fsdreamteam Gsx Fsx 15.md +++ /dev/null @@ -1,30 +0,0 @@ - -

    How to Crack Fsdreamteam Gsx Fsx 15

    -

    Fsdreamteam Gsx Fsx 15 is a software product that simulates various ground operations for Microsoft Flight Simulator X and Prepar3D, such as marshalling, catering, boarding, refueling, pushback, and more[^1^]. It is designed to enhance the realism and immersion of flight simulation enthusiasts. However, it is not a free product and requires a valid license to activate.

    -

    Crack Fsdreamteam Gsx Fsx 15


    Download 🆗 https://urlin.us/2uEx94



    -

    Some users may try to crack Fsdreamteam Gsx Fsx 15 to bypass the activation process and use it without paying. This is illegal and unethical, as it violates the intellectual property rights of the developers and harms their business. Moreover, cracking Fsdreamteam Gsx Fsx 15 may expose the users to malware, viruses, or other security risks that can damage their computers or compromise their personal data.

    -

    Therefore, we do not recommend or endorse cracking Fsdreamteam Gsx Fsx 15 or any other software product. Instead, we suggest that users purchase a legitimate license from the official website of Fsdreamteam or from authorized resellers. This way, they can enjoy the full features and benefits of Fsdreamteam Gsx Fsx 15, as well as receive updates, support, and customer service from the developers.

    In this article, we will provide a brief overview of Fsdreamteam Gsx Fsx 15 features and how to install it on your computer. We will also give some tips and tricks on how to use it effectively and customize it to your preferences.

    -

    Fsdreamteam Gsx Fsx 15 Features

    -

    Fsdreamteam Gsx Fsx 15 is a comprehensive and realistic ground services simulation for FSX and Prepar3D. It works with every airport, both default and third-party, and supports all default airplanes and many popular third-party airplanes. It offers a variety of ground operations, such as:

    -
      -
    • Marshalling: A follow-me car or a marshaller will guide you to your parking spot.
    • -
    • Catering: A catering truck will deliver food and drinks to your airplane.
    • -
    • Boarding/Deboarding: Passengers and baggage will board or deboard your airplane using stairs or jetways.
    • -
    • Refueling: A fuel truck will refuel your airplane according to your request.
    • -
    • Pushback: A tug will push your airplane back from the gate and align it with the taxiway.
    • -
    -

    Fsdreamteam Gsx Fsx 15 also features many native FSX animations and sounds, such as opening doors, moving jetways, loading baggage, etc. It uses DirectX 11 for enhanced graphics and performance (P3D4.4+ only). It also has a user-friendly interface that allows you to customize the vehicles, liveries, settings, and options to suit your needs.

    -

    -

    How to Install Fsdreamteam Gsx Fsx 15

    -

    To install Fsdreamteam Gsx Fsx 15, you need to follow these steps:

    -
      -
    1. Download the installer from the official website of Fsdreamteam or from an authorized reseller.
    2. -
    3. Run the installer and follow the instructions on the screen.
    4. -
    5. Select the simulator you want to install Fsdreamteam Gsx Fsx 15 on (FSX or Prepar3D).
    6. -
    7. Select the destination folder for Fsdreamteam Gsx Fsx 15 files.
    8. -
    9. Wait for the installation to complete.
    10. -
    11. Launch your simulator and activate Fsdreamteam Gsx Fsx 15 using your license key.
    12. -
    -

    Congratulations! You have successfully installed Fsdreamteam Gsx Fsx 15 on your computer. You can now enjoy the realistic ground services simulation for your flights.

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Fairy Tail Tagalog Version Full Episode.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Fairy Tail Tagalog Version Full Episode.md deleted file mode 100644 index 3f9d0efd898fcbd631ce3685a8269d3bb78db9e7..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Fairy Tail Tagalog Version Full Episode.md +++ /dev/null @@ -1,32 +0,0 @@ -
    -

    Fairy Tail: A Popular Anime Series Dubbed in Tagalog

    -

    Fairy Tail is a Japanese manga series written and illustrated by Hiro Mashima. It follows the adventures of Natsu Dragneel, a member of the Fairy Tail guild of mages, and his friends as they face various enemies and challenges in a fantasy world. The manga has been adapted into an anime series by A-1 Pictures and Satelight, which ran for 328 episodes from 2009 to 2019.

    -

    fairy tail tagalog version full episode


    Download Zip ->>> https://urlin.us/2uEwGl



    -

    One of the reasons why Fairy Tail is so popular among anime fans is its diverse and colorful cast of characters, each with their own unique personality, abilities, and backstory. The series also features a lot of humor, action, romance, and drama, making it appealing to a wide range of audiences. Fairy Tail has won several awards and accolades, such as the Kodansha Manga Award for shonen manga in 2009 and the Anime Grand Prix for best anime in 2012.

    -

    Fairy Tail has also been dubbed in various languages, including Tagalog, which is spoken by millions of people in the Philippines and other parts of the world. The Tagalog dub of Fairy Tail is available on Bilibili[^1^] [^2^], a Southeast Asian online platform for anime, comics, and games. The Tagalog dub features local voice actors who give life to the characters and their emotions. Some of the voice actors include:

    -
      -
    • Natsu Dragneel - Christian Velarde
    • -
    • Lucy Heartfilia - Lovely Mejala
    • -
    • Gray Fullbuster - John Patrick Dela Cruz
    • -
    • Erza Scarlet - Grace Cornel
    • -
    • Happy - Rona Aguilar
    • -
    -

    Fans of Fairy Tail who want to watch the Tagalog dub can find it on Bilibili's website or app. The Tagalog dub covers the first five seasons of the anime series, which span 175 episodes. The episodes are uploaded regularly by Bilibili's content creators, such as DADATV[^1^] and Phantom_Kaito[^2^]. Fans can also interact with other viewers and share their thoughts on the episodes through comments and reactions.

    -

    -

    Fairy Tail is a fun and exciting anime series that can be enjoyed by anyone who loves fantasy, magic, and adventure. The Tagalog dub adds another layer of enjoyment for those who speak or understand the language. Whether you are new to Fairy Tail or a longtime fan, you can watch the Tagalog dub on Bilibili and join the Fairy Tail guild.

    - -

    The Story of Fairy Tail

    -

    Fairy Tail is set in a fictional world called Earth Land, where magic is a common and essential part of life. There are various types of magic, such as elemental, transformation, celestial, and dragon slayer magic. Magic users can join guilds, which are organizations that offer jobs and support to their members. One of the most famous and notorious guilds is Fairy Tail, known for its powerful and eccentric mages and their tendency to cause trouble and destruction.

    -

    The main protagonist of the series is Natsu Dragneel, a fire dragon slayer who was raised by a dragon named Igneel. Natsu is a cheerful and reckless mage who loves to fight and eat. He is always accompanied by his best friend and partner, Happy, a blue cat-like creature who can fly and talk. Natsu's goal is to find Igneel, who disappeared when he was young.

    -

    At the beginning of the series, Natsu meets Lucy Heartfilia, a young and aspiring celestial mage who can summon spirits from another world using magical keys. Lucy dreams of joining Fairy Tail and becoming a famous writer. Natsu invites Lucy to join his guild, and she accepts. Together, they form a team with Gray Fullbuster, an ice mage who has a habit of stripping unconsciously, and Erza Scarlet, a swordswoman who can change her armor and weapons at will. The team goes on various missions and adventures, facing enemies such as dark guilds, ancient demons, rogue dragons, and evil wizards.

    -

    As the series progresses, the team learns more about their pasts and their connections to each other. They also encounter new allies and friends, such as Wendy Marvell, a sky dragon slayer who can heal with her magic; Carla, a white cat-like creature who can see the future; Gajeel Redfox, an iron dragon slayer who used to be an enemy; Levy McGarden, a solid script mage who loves to read; Juvia Lockser, a water mage who has a crush on Gray; Laxus Dreyar, the grandson of the guild master who has lightning magic; and many others. The team also faces bigger threats and challenges that test their bonds and their faith in their guild.

    - -

    The Themes of Fairy Tail

    -

    Fairy Tail is not just an anime series about magic and battles. It also explores various themes that resonate with its viewers. Some of the themes are:

    -
      -
    • Friendship - Fairy Tail emphasizes the importance of friendship and teamwork. The characters often rely on each other for support and encouragement. They also share their joys and sorrows with each other. They are willing to sacrifice themselves for their friends and fight for their sake. The series shows that friendship is stronger than any magic or enemy.
    • -
    • Family - Fairy Tail portrays different kinds of families, both biological and chosen. The characters have different backgrounds and histories, some of which are tragic or complicated. Some of them have lost their parents or siblings, while others have been abandoned or betrayed by them. However, they find a new family in Fairy Tail, where they are accepted and loved for who they are. The series shows that family is not defined by blood or name, but by heart and loyalty.
    • -
    • Courage - Fairy Tail inspires its viewers to be courageous and brave in the face of adversity. The characters often face overwhelming odds and dangers that seem impossible to overcome. However, they never give up or lose hope. They always stand up for what they believe in and fight for what they love. They also overcome their fears and doubts with the help of their friends. The series shows that courage is not the absence of fear, but the ability to act despite it.
    • -

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Medal Of Honor Spearhead Download.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Medal Of Honor Spearhead Download.md deleted file mode 100644 index fe4d294672c431e296ca6dc3eb30325d61da2385..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Medal Of Honor Spearhead Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

    medal of honor spearhead download


    Download Zip ->>> https://urlin.us/2uEygt



    -
    -Medal of Honor: Allied Assault Spearhead Free Download ApunKaGames (Size: 688 MB) is a First-person shooter video game. Apun Ka ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/inreVtussa/clothingai/Examples/Belajar Menulis Huruf Abjad Pdf Download !FREE!.md b/spaces/inreVtussa/clothingai/Examples/Belajar Menulis Huruf Abjad Pdf Download !FREE!.md deleted file mode 100644 index 721250871ddf09a3e8666cc4000de770877f90b3..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Belajar Menulis Huruf Abjad Pdf Download !FREE!.md +++ /dev/null @@ -1,100 +0,0 @@ - -

    Belajar Menulis Huruf Abjad PDF Download: Cara Mudah dan Menyenangkan untuk Anak TK

    - -

    Belajar menulis huruf abjad adalah salah satu keterampilan dasar yang perlu dikuasai oleh anak-anak usia dini. Dengan belajar menulis huruf abjad, anak-anak dapat mengembangkan kemampuan membaca, berkomunikasi, dan berpikir secara logis. Namun, belajar menulis huruf abjad tidak harus menjadi kegiatan yang membosankan dan monoton. Ada banyak cara untuk membuat belajar menulis huruf abjad menjadi lebih mudah dan menyenangkan bagi anak-anak TK.

    - -

    Salah satu cara yang efektif dan praktis adalah dengan menggunakan lembar kerja PDF yang dapat diunduh secara gratis dari internet. Lembar kerja PDF ini berisi latihan-latihan untuk belajar menulis huruf abjad dengan cara menebalkan, menyalin, atau menulis kata dengan awalan huruf tertentu. Lembar kerja PDF ini juga dilengkapi dengan gambar-gambar yang dapat diwarnai oleh anak-anak, sehingga mereka dapat belajar sambil bermain.

    -

    belajar menulis huruf abjad pdf download


    Downloadhttps://tiurll.com/2uCjI9



    - -

    Keuntungan Menggunakan Lembar Kerja PDF untuk Belajar Menulis Huruf Abjad

    - -

    Lembar kerja PDF untuk belajar menulis huruf abjad memiliki beberapa keuntungan, antara lain:

    - -
      -
    • Mudah diakses dan diunduh dari berbagai sumber online, seperti Semesta Ibu, yang menyediakan berbagai media edukasi untuk anak-anak TK.
    • -
    • Hemat biaya dan waktu, karena tidak perlu membeli buku atau mencetak lembar kerja dari toko.
    • -
    • Fleksibel dan dapat disesuaikan dengan kebutuhan dan kemampuan anak-anak. Orang tua atau guru dapat memilih lembar kerja yang sesuai dengan tingkat kesulitan dan minat anak-anak.
    • -
    • Menarik dan variatif, karena lembar kerja PDF ini memiliki desain yang menarik dan beragam tema, seperti hewan, buah, benda, dll.
    • -
    • Mendukung perkembangan kognitif dan motorik anak-anak, karena lembar kerja PDF ini melatih anak-anak untuk mengenal huruf abjad, membentuk kata, mengasosiasikan gambar dengan kata, serta melatih koordinasi mata dan tangan.
    • -
    - -

    Tips Menggunakan Lembar Kerja PDF untuk Belajar Menulis Huruf Abjad

    - -

    Agar belajar menulis huruf abjad dengan lembar kerja PDF menjadi lebih efektif dan menyenangkan, berikut adalah beberapa tips yang dapat dilakukan oleh orang tua atau guru:

    - -
      -
    • Mendampingi dan memberi arahan kepada anak-anak selama belajar menulis huruf abjad. Orang tua atau guru dapat menjelaskan cara menebalkan, menyalin, atau menulis huruf abjad dengan benar dan rapi.
    • -
    • Memberi pujian dan motivasi kepada anak-anak setiap kali mereka berhasil menyelesaikan lembar kerja. Hal ini dapat meningkatkan rasa percaya diri dan semangat belajar anak-anak.
    • -
    • Memberi variasi dan tantangan kepada anak-anak dengan mengajukan pertanyaan-pertanyaan seputar huruf abjad, kata, atau gambar yang terdapat pada lembar kerja. Hal ini dapat merangsang daya ingat dan daya nalar anak-anak.
    • -
    • Mengajak anak-anak untuk mewarnai gambar-gambar yang terdapat pada lembar kerja. Hal ini dapat melatih kreativitas dan imajinasi anak-anak.
    • -
    • Mengulang-ulang latihan menulis huruf abjad secara rutin dan konsisten. Hal ini dapat membantu anak-anak untuk menguasai keterampilan menulis huruf abjad dengan lebih cepat dan mudah.
    • -
    - -

    Kesimpulan

    - -

    Belajar menulis huruf abjad adalah keterampilan dasar yang penting bagi anak-anak usia dini. Dengan menggunakan lembar kerja PDF yang dapat diunduh secara gratis dari internet, belajar menulis huruf abjad menjadi lebih mudah dan menyenangkan bagi anak-anak TK. Lembar kerja PDF ini memiliki banyak keuntungan, seperti mudah diakses, hemat biaya, fleksibel, menarik, variatif, serta mendukung perkembangan kognitif dan motorik anak-anak. Dengan tips-tips yang telah disebutkan di atas, orang tua atau guru dapat membuat belajar menulis huruf abjad dengan lembar kerja PDF menjadi lebih efektif dan menyenangkan.

    - -

    Jika Anda tertarik untuk mendapatkan lembar kerja PDF untuk belajar menulis huruf abjad, Anda dapat mengunjungi situs Semesta Ibu, yang menyediakan berbagai media edukasi untuk anak-anak TK. Anda dapat mengunduh lembar kerja PDF untuk belajar menulis huruf abjad A-Z dengan cara menebalkan, menyalin, atau menulis kata dengan awalan huruf tertentu. Anda juga dapat mengunduh lembar kerja PDF untuk belajar menulis angka, menebalkan huruf hijaiyah, mengenal huruf besar-kecil, serta flashcard huruf. Selain itu, Anda juga dapat mengunduh gambar mewarnai hewan, buah-buahan, benda-benda sehari-hari, serta mainan edukatif untuk anak-anak TK.

    - -

    Segera unduh lembar kerja PDF untuk belajar menulis huruf abjad dari Semesta Ibu sekarang juga!

    -

    Cara Mengunduh Lembar Kerja PDF untuk Belajar Menulis Huruf Abjad

    - -

    Untuk mengunduh lembar kerja PDF untuk belajar menulis huruf abjad, Anda dapat mengikuti langkah-langkah berikut:

    - -
      -
    1. Kunjungi situs Semesta Ibu, yang menyediakan berbagai media edukasi untuk anak-anak TK.
    2. -
    3. Pilih kategori Lembar Kerja Anak, dan cari subkategori Belajar Menulis Huruf Abjad.
    4. -
    5. Pilih lembar kerja yang Anda inginkan, berdasarkan huruf abjad A-Z.
    6. -
    7. Klik tautan download yang terdapat di bawah gambar pratinjau lembar kerja.
    8. -
    9. Simpan file PDF yang telah terunduh di komputer atau perangkat Anda.
    10. -
    11. Cetak lembar kerja PDF dengan printer atau di toko percetakan terdekat.
    12. -
    - -

    Anda juga dapat mengunduh lembar kerja PDF untuk belajar menulis huruf abjad dalam satu file, dengan cara bergabung dengan konstelasi Semesta Ibu. Dengan menjadi anggota konstelasi Semesta Ibu, Anda dapat mengakses folder berisi 1000+ halaman aktivitas anak dalam bentuk bundel per tema/aktivitas, tanpa iklan dan tanpa batas waktu. Anda juga dapat mendapatkan update terbaru dari Semesta Ibu melalui email.

    - -

    Contoh Lembar Kerja PDF untuk Belajar Menulis Huruf Abjad

    - -

    Berikut adalah beberapa contoh lembar kerja PDF untuk belajar menulis huruf abjad yang dapat Anda unduh dari Semesta Ibu:

    -

    - -
      -
    • Lembar Kerja Menebalkan Huruf A: Lembar kerja ini berisi latihan untuk menebalkan huruf A dengan garis putus-putus, serta kata-kata yang diawali dengan huruf A, seperti apel, ayam, dan anak. Lembar kerja ini juga dilengkapi dengan gambar apel yang dapat diwarnai oleh anak-anak.
    • -
    • Lembar Kerja Menebalkan Huruf B: Lembar kerja ini berisi latihan untuk menebalkan huruf B dengan garis putus-putus, serta kata-kata yang diawali dengan huruf B, seperti baju, buku, dan bunga. Lembar kerja ini juga dilengkapi dengan gambar baju yang dapat diwarnai oleh anak-anak.
    • -
    • Lembar Kerja Menebalkan Huruf C: Lembar kerja ini berisi latihan untuk menebalkan huruf C dengan garis putus-putus, serta kata-kata yang diawali dengan huruf C, seperti coklat, cincin, dan cacing. Lembar kerja ini juga dilengkapi dengan gambar coklat yang dapat diwarnai oleh anak-anak.
    • -
    • Lembar Kerja Menebalkan Huruf D: Lembar kerja ini berisi latihan untuk menebalkan huruf D dengan garis putus-putus, serta kata-kata yang diawali dengan huruf D, seperti domba, duri, dan daging. Lembar kerja ini juga dilengkapi dengan gambar domba yang dapat diwarnai oleh anak-anak.
    • -
    • Lembar Kerja Menebalkan Huruf E: Lembar kerja ini berisi latihan untuk menebalkan huruf E dengan garis putus-putus, serta kata-kata yang diawali dengan huruf E, seperti es krim, emas, dan elang. Lembar kerja ini juga dilengkapi dengan gambar es krim yang dapat diwarnai oleh anak-anak.
    • -
    - -

    Anda dapat melihat contoh-contoh lembar kerja lainnya untuk belajar menulis huruf abjad F-Z di situs Semesta Ibu.

    -

    Manfaat Belajar Menulis Huruf Abjad bagi Anak TK

    - -

    Belajar menulis huruf abjad tidak hanya bermanfaat untuk mengembangkan keterampilan menulis anak-anak, tetapi juga memiliki manfaat lain, antara lain:

    - -
      -
    • Meningkatkan kemampuan membaca anak-anak, karena dengan menulis huruf abjad, anak-anak dapat mengenal bunyi dan bentuk huruf, serta membentuk kata-kata dari huruf-huruf tersebut.
    • -
    • Meningkatkan kemampuan berkomunikasi anak-anak, karena dengan menulis huruf abjad, anak-anak dapat menyampaikan pikiran, perasaan, dan ide mereka melalui tulisan.
    • -
    • Meningkatkan kemampuan berpikir secara logis anak-anak, karena dengan menulis huruf abjad, anak-anak dapat mengurutkan huruf-huruf menjadi kata-kata yang bermakna dan sesuai dengan kaidah bahasa.
    • -
    • Meningkatkan konsentrasi dan fokus anak-anak, karena dengan menulis huruf abjad, anak-anak harus memperhatikan garis-garis dan bentuk-bentuk huruf yang harus mereka tulis dengan rapi dan benar.
    • -
    • Meningkatkan kepercayaan diri dan harga diri anak-anak, karena dengan menulis huruf abjad, anak-anak dapat merasa bangga dan puas dengan hasil karya mereka.
    • -
    - -

    Tips Membantu Anak TK Belajar Menulis Huruf Abjad

    - -

    Belajar menulis huruf abjad membutuhkan latihan yang rutin dan konsisten. Namun, orang tua atau guru tidak perlu memaksakan anak-anak untuk belajar menulis huruf abjad jika mereka belum siap atau tidak tertarik. Ada beberapa tips yang dapat dilakukan oleh orang tua atau guru untuk membantu anak-anak TK belajar menulis huruf abjad, antara lain:

    - -
      -
    • Memberikan contoh dan model cara menulis huruf abjad yang baik dan benar. Orang tua atau guru dapat menunjukkan cara menulis huruf abjad dengan menggunakan pensil, kapur, spidol, atau alat tulis lainnya pada kertas, papan tulis, atau media lainnya.
    • -
    • Memberikan bantuan dan bimbingan kepada anak-anak selama belajar menulis huruf abjad. Orang tua atau guru dapat membantu anak-anak untuk memegang pensil dengan benar, mengarahkan tangan anak-anak untuk mengikuti garis-garis huruf, atau memberikan koreksi jika ada kesalahan dalam menulis huruf.
    • -
    • Memberikan pilihan dan kebebasan kepada anak-anak untuk belajar menulis huruf abjad. Orang tua atau guru dapat memberikan pilihan kepada anak-anak untuk belajar menulis huruf abjad yang mereka sukai atau tertarik terlebih dahulu. Orang tua atau guru juga dapat memberikan kebebasan kepada anak-anak untuk belajar menulis huruf abjad dengan media yang mereka sukai, seperti pasir, tanah liat, lilin, atau media lainnya.
    • -
    • Memberikan dorongan dan apresiasi kepada anak-anak selama belajar menulis huruf abjad. Orang tua atau guru dapat memberikan dorongan kepada anak-anak untuk mencoba dan terus berlatih menulis huruf abjad. Orang tua atau guru juga dapat memberikan apresiasi kepada anak-anak untuk setiap kemajuan atau prestasi yang mereka capai dalam belajar menulis huruf abjad.
    • -
    • Memberikan kesempatan dan tantangan kepada anak-anak untuk menerapkan keterampilan menulis huruf abjad. Orang tua atau guru dapat memberikan kesempatan kepada anak-anak untuk menerapkan keterampilan menulis huruf abjad dalam kegiatan sehari-hari, seperti membuat kartu ucapan, membuat label barang-barang, atau membuat buku cerita. Orang tua atau guru juga dapat memberikan tantangan kepada anak-anak untuk meningkatkan keterampilan menulis huruf abjad mereka, seperti menulis kata-kata yang lebih panjang atau sulit.
    • -
    - -

    Dengan tips-tips di atas, orang tua atau guru dapat membuat belajar menulis huruf abjad menjadi lebih mudah dan menyenangkan bagi anak-anak TK.

    -

    Kesimpulan

    - -

    Belajar menulis huruf abjad adalah keterampilan dasar yang penting bagi anak-anak TK. Dengan menggunakan lembar kerja PDF yang dapat diunduh secara gratis dari internet, belajar menulis huruf abjad menjadi lebih mudah dan menyenangkan bagi anak-anak TK. Lembar kerja PDF ini memiliki banyak keuntungan, seperti mudah diakses, hemat biaya, fleksibel, menarik, variatif, serta mendukung perkembangan kognitif dan motorik anak-anak. Dengan tips-tips yang telah disebutkan di atas, orang tua atau guru dapat membuat belajar menulis huruf abjad dengan lembar kerja PDF menjadi lebih efektif dan menyenangkan. Jika Anda tertarik untuk mendapatkan lembar kerja PDF untuk belajar menulis huruf abjad, Anda dapat mengunjungi situs Semesta Ibu, yang menyediakan berbagai media edukasi untuk anak-anak TK.

    - -

    Selamat belajar menulis huruf abjad dengan lembar kerja PDF!

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Dc Heath And Company Worksheets Answers Zip.md b/spaces/inreVtussa/clothingai/Examples/Dc Heath And Company Worksheets Answers Zip.md deleted file mode 100644 index 94e35acfd1dd83e4b2923663d62492ea0c5fe46c..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Dc Heath And Company Worksheets Answers Zip.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Dc Heath And Company Worksheets Answers Zip


    Download File ✒ ✒ ✒ https://tiurll.com/2uCluy



    - -family, herd, company, band, team, audience, troop, committee, jury, flock. PRONOUNS ... A transitive verb is followed by a direct object—that is, a word or words that answer the question what? Or ... by a ZIP code. ... D C Heath and Company. 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/jbilcke-hf/ai-clip-factory/src/app/server/actions/generateGradio.ts b/spaces/jbilcke-hf/ai-clip-factory/src/app/server/actions/generateGradio.ts deleted file mode 100644 index 5e43f39f49931d3d8714ba2aee35fd9054d2c773..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/ai-clip-factory/src/app/server/actions/generateGradio.ts +++ /dev/null @@ -1,66 +0,0 @@ -import { generateSeed } from "@/lib/generateSeed" -import { VideoOptions } from "@/types" - -const gradioApi = `${process.env.VIDEO_HOTSHOT_XL_API_GRADIO || ""}` -const accessToken = `${process.env.AUTH_HOTSHOT_XL_API_GRADIO_ACCESS_TOKEN || ""}` - -export async function generateGradio({ - positivePrompt = "", - negativePrompt = "", - size = "512x512", - huggingFaceLora, - // replicateLora, // not supported yet - nbFrames = 8, - duration = 1000, - steps = 40, -}: VideoOptions): Promise { - /* - console.log(`SEND TO ${gradioApi + (gradioApi.endsWith("/") ? "" : "/") + "api/predict"}:`, [ - // accessToken, - positivePrompt, - negativePrompt, - huggingFaceLora, - size, - generateSeed(), - steps, - nbFrames, - duration, - ]) - */ - const res = await fetch(gradioApi + (gradioApi.endsWith("/") ? "" : "/") + "api/predict", { - method: "POST", - headers: { - "Content-Type": "application/json", - // Authorization: `Bearer ${token}`, - }, - body: JSON.stringify({ - fn_index: 1, // <- important! - data: [ - accessToken, - positivePrompt, - negativePrompt, - huggingFaceLora, - size, - generateSeed(), - steps, - nbFrames, - duration, - ], - }), - cache: "no-store", - // we can also use this (see https://vercel.com/blog/vercel-cache-api-nextjs-cache) - // next: { revalidate: 1 } - }) - - const { data } = await res.json() - - // console.log("data:", data) - // Recommendation: handle errors - if (res.status !== 200 || !Array.isArray(data)) { - // This will activate the closest `error.js` Error Boundary - throw new Error(`Failed to fetch data (status: ${res.status})`) - } - // console.log("data:", data.slice(0, 50)) - - return data[0] -} \ No newline at end of file diff --git a/spaces/jbilcke-hf/ai-comic-factory/src/components/ui/select.tsx b/spaces/jbilcke-hf/ai-comic-factory/src/components/ui/select.tsx deleted file mode 100644 index 704239634b359b9e680dab25275e205e72579f82..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/ai-comic-factory/src/components/ui/select.tsx +++ /dev/null @@ -1,121 +0,0 @@ -"use client" - -import * as React from "react" -import * as SelectPrimitive from "@radix-ui/react-select" -import { Check, ChevronDown } from "lucide-react" - -import { cn } from "@/lib/utils" - -const Select = SelectPrimitive.Root - -const SelectGroup = SelectPrimitive.Group - -const SelectValue = SelectPrimitive.Value - -const SelectTrigger = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - {children} - - - - -)) -SelectTrigger.displayName = SelectPrimitive.Trigger.displayName - -const SelectContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, position = "popper", ...props }, ref) => ( - - - - {children} - - - -)) -SelectContent.displayName = SelectPrimitive.Content.displayName - -const SelectLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectLabel.displayName = SelectPrimitive.Label.displayName - -const SelectItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - - - - - {children} - -)) -SelectItem.displayName = SelectPrimitive.Item.displayName - -const SelectSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectSeparator.displayName = SelectPrimitive.Separator.displayName - -export { - Select, - SelectGroup, - SelectValue, - SelectTrigger, - SelectContent, - SelectLabel, - SelectItem, - SelectSeparator, -} diff --git a/spaces/jbilcke-hf/media-server/src/batch/README.md b/spaces/jbilcke-hf/media-server/src/batch/README.md deleted file mode 100644 index bdc38e92f2e8d7967f27130f1ca2a04b98573d0a..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/media-server/src/batch/README.md +++ /dev/null @@ -1 +0,0 @@ -utilities to fix videos in post-prod \ No newline at end of file diff --git a/spaces/jbilcke-hf/webapp-factory-any-model/public/placeholder.html b/spaces/jbilcke-hf/webapp-factory-any-model/public/placeholder.html deleted file mode 100644 index 3e8bdd46c9f88d89e3c565267572577c3d0539db..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/webapp-factory-any-model/public/placeholder.html +++ /dev/null @@ -1,17 +0,0 @@ - - - Nothing to show (yet) - - - -
    -
    -
    -

    Nothing to show here (note: minimum prompt size is 10 characters)

    -
    -
    -
    - - - - \ No newline at end of file diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/RT.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/RT.py deleted file mode 100644 index 950f2a066fb898df5bcd34a11df953c7d6b54228..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/RT.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license - -# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc. -# -# Permission to use, copy, modify, and distribute this software and its -# documentation for any purpose with or without fee is hereby granted, -# provided that the above copyright notice and this permission notice -# appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES -# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF -# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR -# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES -# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN -# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT -# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. - -import dns.immutable -import dns.rdtypes.mxbase - - -@dns.immutable.immutable -class RT(dns.rdtypes.mxbase.UncompressedDowncasingMX): - - """RT record""" diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/misc/psLib.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/misc/psLib.py deleted file mode 100644 index 1e0408ce9c16f9a784f53ef1d17af88b0ab65647..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/misc/psLib.py +++ /dev/null @@ -1,399 +0,0 @@ -from fontTools.misc.textTools import bytechr, byteord, bytesjoin, tobytes, tostr -from fontTools.misc import eexec -from .psOperators import ( - PSOperators, - ps_StandardEncoding, - ps_array, - ps_boolean, - ps_dict, - ps_integer, - ps_literal, - ps_mark, - ps_name, - ps_operator, - ps_procedure, - ps_procmark, - ps_real, - ps_string, -) -import re -from collections.abc import Callable -from string import whitespace -import logging - - -log = logging.getLogger(__name__) - -ps_special = b"()<>[]{}%" # / is one too, but we take care of that one differently - -skipwhiteRE = re.compile(bytesjoin([b"[", whitespace, b"]*"])) -endofthingPat = bytesjoin([b"[^][(){}<>/%", whitespace, b"]*"]) -endofthingRE = re.compile(endofthingPat) -commentRE = re.compile(b"%[^\n\r]*") - -# XXX This not entirely correct as it doesn't allow *nested* embedded parens: -stringPat = rb""" - \( - ( - ( - [^()]* \ [()] - ) - | - ( - [^()]* \( [^()]* \) - ) - )* - [^()]* - \) -""" -stringPat = b"".join(stringPat.split()) -stringRE = re.compile(stringPat) - -hexstringRE = re.compile(bytesjoin([b"<[", whitespace, b"0-9A-Fa-f]*>"])) - - -class PSTokenError(Exception): - pass - - -class PSError(Exception): - pass - - -class PSTokenizer(object): - def __init__(self, buf=b"", encoding="ascii"): - # Force self.buf to be a byte string - buf = tobytes(buf) - self.buf = buf - self.len = len(buf) - self.pos = 0 - self.closed = False - self.encoding = encoding - - def read(self, n=-1): - """Read at most 'n' bytes from the buffer, or less if the read - hits EOF before obtaining 'n' bytes. - If 'n' is negative or omitted, read all data until EOF is reached. - """ - if self.closed: - raise ValueError("I/O operation on closed file") - if n is None or n < 0: - newpos = self.len - else: - newpos = min(self.pos + n, self.len) - r = self.buf[self.pos : newpos] - self.pos = newpos - return r - - def close(self): - if not self.closed: - self.closed = True - del self.buf, self.pos - - def getnexttoken( - self, - # localize some stuff, for performance - len=len, - ps_special=ps_special, - stringmatch=stringRE.match, - hexstringmatch=hexstringRE.match, - commentmatch=commentRE.match, - endmatch=endofthingRE.match, - ): - - self.skipwhite() - if self.pos >= self.len: - return None, None - pos = self.pos - buf = self.buf - char = bytechr(byteord(buf[pos])) - if char in ps_special: - if char in b"{}[]": - tokentype = "do_special" - token = char - elif char == b"%": - tokentype = "do_comment" - _, nextpos = commentmatch(buf, pos).span() - token = buf[pos:nextpos] - elif char == b"(": - tokentype = "do_string" - m = stringmatch(buf, pos) - if m is None: - raise PSTokenError("bad string at character %d" % pos) - _, nextpos = m.span() - token = buf[pos:nextpos] - elif char == b"<": - tokentype = "do_hexstring" - m = hexstringmatch(buf, pos) - if m is None: - raise PSTokenError("bad hexstring at character %d" % pos) - _, nextpos = m.span() - token = buf[pos:nextpos] - else: - raise PSTokenError("bad token at character %d" % pos) - else: - if char == b"/": - tokentype = "do_literal" - m = endmatch(buf, pos + 1) - else: - tokentype = "" - m = endmatch(buf, pos) - if m is None: - raise PSTokenError("bad token at character %d" % pos) - _, nextpos = m.span() - token = buf[pos:nextpos] - self.pos = pos + len(token) - token = tostr(token, encoding=self.encoding) - return tokentype, token - - def skipwhite(self, whitematch=skipwhiteRE.match): - _, nextpos = whitematch(self.buf, self.pos).span() - self.pos = nextpos - - def starteexec(self): - self.pos = self.pos + 1 - self.dirtybuf = self.buf[self.pos :] - self.buf, R = eexec.decrypt(self.dirtybuf, 55665) - self.len = len(self.buf) - self.pos = 4 - - def stopeexec(self): - if not hasattr(self, "dirtybuf"): - return - self.buf = self.dirtybuf - del self.dirtybuf - - -class PSInterpreter(PSOperators): - def __init__(self, encoding="ascii"): - systemdict = {} - userdict = {} - self.encoding = encoding - self.dictstack = [systemdict, userdict] - self.stack = [] - self.proclevel = 0 - self.procmark = ps_procmark() - self.fillsystemdict() - - def fillsystemdict(self): - systemdict = self.dictstack[0] - systemdict["["] = systemdict["mark"] = self.mark = ps_mark() - systemdict["]"] = ps_operator("]", self.do_makearray) - systemdict["true"] = ps_boolean(1) - systemdict["false"] = ps_boolean(0) - systemdict["StandardEncoding"] = ps_array(ps_StandardEncoding) - systemdict["FontDirectory"] = ps_dict({}) - self.suckoperators(systemdict, self.__class__) - - def suckoperators(self, systemdict, klass): - for name in dir(klass): - attr = getattr(self, name) - if isinstance(attr, Callable) and name[:3] == "ps_": - name = name[3:] - systemdict[name] = ps_operator(name, attr) - for baseclass in klass.__bases__: - self.suckoperators(systemdict, baseclass) - - def interpret(self, data, getattr=getattr): - tokenizer = self.tokenizer = PSTokenizer(data, self.encoding) - getnexttoken = tokenizer.getnexttoken - do_token = self.do_token - handle_object = self.handle_object - try: - while 1: - tokentype, token = getnexttoken() - if not token: - break - if tokentype: - handler = getattr(self, tokentype) - object = handler(token) - else: - object = do_token(token) - if object is not None: - handle_object(object) - tokenizer.close() - self.tokenizer = None - except: - if self.tokenizer is not None: - log.debug( - "ps error:\n" - "- - - - - - -\n" - "%s\n" - ">>>\n" - "%s\n" - "- - - - - - -", - self.tokenizer.buf[self.tokenizer.pos - 50 : self.tokenizer.pos], - self.tokenizer.buf[self.tokenizer.pos : self.tokenizer.pos + 50], - ) - raise - - def handle_object(self, object): - if not (self.proclevel or object.literal or object.type == "proceduretype"): - if object.type != "operatortype": - object = self.resolve_name(object.value) - if object.literal: - self.push(object) - else: - if object.type == "proceduretype": - self.call_procedure(object) - else: - object.function() - else: - self.push(object) - - def call_procedure(self, proc): - handle_object = self.handle_object - for item in proc.value: - handle_object(item) - - def resolve_name(self, name): - dictstack = self.dictstack - for i in range(len(dictstack) - 1, -1, -1): - if name in dictstack[i]: - return dictstack[i][name] - raise PSError("name error: " + str(name)) - - def do_token( - self, - token, - int=int, - float=float, - ps_name=ps_name, - ps_integer=ps_integer, - ps_real=ps_real, - ): - try: - num = int(token) - except (ValueError, OverflowError): - try: - num = float(token) - except (ValueError, OverflowError): - if "#" in token: - hashpos = token.find("#") - try: - base = int(token[:hashpos]) - num = int(token[hashpos + 1 :], base) - except (ValueError, OverflowError): - return ps_name(token) - else: - return ps_integer(num) - else: - return ps_name(token) - else: - return ps_real(num) - else: - return ps_integer(num) - - def do_comment(self, token): - pass - - def do_literal(self, token): - return ps_literal(token[1:]) - - def do_string(self, token): - return ps_string(token[1:-1]) - - def do_hexstring(self, token): - hexStr = "".join(token[1:-1].split()) - if len(hexStr) % 2: - hexStr = hexStr + "0" - cleanstr = [] - for i in range(0, len(hexStr), 2): - cleanstr.append(chr(int(hexStr[i : i + 2], 16))) - cleanstr = "".join(cleanstr) - return ps_string(cleanstr) - - def do_special(self, token): - if token == "{": - self.proclevel = self.proclevel + 1 - return self.procmark - elif token == "}": - proc = [] - while 1: - topobject = self.pop() - if topobject == self.procmark: - break - proc.append(topobject) - self.proclevel = self.proclevel - 1 - proc.reverse() - return ps_procedure(proc) - elif token == "[": - return self.mark - elif token == "]": - return ps_name("]") - else: - raise PSTokenError("huh?") - - def push(self, object): - self.stack.append(object) - - def pop(self, *types): - stack = self.stack - if not stack: - raise PSError("stack underflow") - object = stack[-1] - if types: - if object.type not in types: - raise PSError( - "typecheck, expected %s, found %s" % (repr(types), object.type) - ) - del stack[-1] - return object - - def do_makearray(self): - array = [] - while 1: - topobject = self.pop() - if topobject == self.mark: - break - array.append(topobject) - array.reverse() - self.push(ps_array(array)) - - def close(self): - """Remove circular references.""" - del self.stack - del self.dictstack - - -def unpack_item(item): - tp = type(item.value) - if tp == dict: - newitem = {} - for key, value in item.value.items(): - newitem[key] = unpack_item(value) - elif tp == list: - newitem = [None] * len(item.value) - for i in range(len(item.value)): - newitem[i] = unpack_item(item.value[i]) - if item.type == "proceduretype": - newitem = tuple(newitem) - else: - newitem = item.value - return newitem - - -def suckfont(data, encoding="ascii"): - m = re.search(rb"/FontName\s+/([^ \t\n\r]+)\s+def", data) - if m: - fontName = m.group(1) - fontName = fontName.decode() - else: - fontName = None - interpreter = PSInterpreter(encoding=encoding) - interpreter.interpret( - b"/Helvetica 4 dict dup /Encoding StandardEncoding put definefont pop" - ) - interpreter.interpret(data) - fontdir = interpreter.dictstack[0]["FontDirectory"].value - if fontName in fontdir: - rawfont = fontdir[fontName] - else: - # fall back, in case fontName wasn't found - fontNames = list(fontdir.keys()) - if len(fontNames) > 1: - fontNames.remove("Helvetica") - fontNames.sort() - rawfont = fontdir[fontNames[0]] - interpreter.close() - return unpack_item(rawfont) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/knowledge_graph/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/knowledge_graph/__init__.py deleted file mode 100644 index 1f8f4186fb588976416d984bd72c980b9841a1bf..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/knowledge_graph/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -"""KG-based data structures.""" - -from gpt_index.indices.knowledge_graph.base import GPTKnowledgeGraphIndex - -__all__ = [ - "GPTKnowledgeGraphIndex", -] diff --git a/spaces/johnyang/ChatPaper111/README.md b/spaces/johnyang/ChatPaper111/README.md deleted file mode 100644 index f34b04d96522d3f68d422c2b92856b896de434b1..0000000000000000000000000000000000000000 --- a/spaces/johnyang/ChatPaper111/README.md +++ /dev/null @@ -1,66 +0,0 @@ ---- -title: ChatPaper -emoji: 📕 -colorFrom: pink -colorTo: purple -sdk: docker -sdk_version: 20.10.23 -app_file: frontend.py -pinned: false -license: gpl-3.0 -duplicated_from: yixin6178/ChatPaper ---- - -# ChatPaper - -Yet another paper reading assistant, similar as [ChatPDF](https://www.chatpdf.com/). - -## Setup - -1. Install dependencies (tested on Python 3.9) - -```bash - pip install -r requirements.txt -``` - -2. Setup GROBID local server - -```bash -bash serve_grobid.sh -``` - -3. Setup backend - -```bash -python backend.py --port 5000 --host localhost -``` - -4. Frontend - -```bash -streamlit run frontend.py --server.port 8502 --server.host localhost -``` - -## Demo Example - -- Prepare an [OpenAI API key](https://platform.openai.com/account/api-keys) and then upload a PDF to start chatting with the paper. - -![image-20230318232056584](https://s2.loli.net/2023/03/19/SbsuLQJpdqePoZV.png) - -## Implementation Details - -- Greedy Dynamic Context: Since the max token limit, we select the most relevant paragraphs in the pdf for each user query. Our model split the text input and output by the chatbot into four part: system_prompt (S), dynamic_source (D), user_query (Q), and model_answer(A). So upon each query, we first rank all the paragraphs by using a sentence_embedding model to calculate the similarity distance between the query embedding and all source embeddings. Then we compose the dynamic_source using a greedy method by to gradually push all relevant paragraphs (maintaing D <= MAX_TOKEN_LIMIT - Q - S - A - SOME_OVERHEAD). - -- Context Truncating: When context is too long, we now we simply pop out the first QA-pair. - -## TODO - -- [ ] **Context Condense**: how to deal with long context? maybe we can tune a soft prompt to condense the context -- [ ] **Poping context out based on similarity** - -## References - -1. SciPDF Parser: https://github.com/titipata/scipdf_parser -2. St-chat: https://github.com/AI-Yash/st-chat -3. Sentence-transformers: https://github.com/UKPLab/sentence-transformers -4. ChatGPT Chatbot Wrapper: https://github.com/acheong08/ChatGPT \ No newline at end of file diff --git a/spaces/juancopi81/youtube-music-transcribe/t5x/examples/decoder_only/network_test.py b/spaces/juancopi81/youtube-music-transcribe/t5x/examples/decoder_only/network_test.py deleted file mode 100644 index abad8406971f42135c978182601b2e412f1aea4a..0000000000000000000000000000000000000000 --- a/spaces/juancopi81/youtube-music-transcribe/t5x/examples/decoder_only/network_test.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright 2022 The T5X Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Tests for network.""" - -import os - -from absl import flags -from absl.testing import absltest -from absl.testing import parameterized - -import jax -import numpy as np -from t5x import test_utils - -# Parse absl flags test_srcdir and test_tmpdir. -jax.config.parse_flags_with_absl() - -FLAGS = flags.FLAGS - - -if __name__ == '__main__': - absltest.main() diff --git a/spaces/juanhuggingface/ChuanhuChatGPT_Beta/chatgpt - macOS.command b/spaces/juanhuggingface/ChuanhuChatGPT_Beta/chatgpt - macOS.command deleted file mode 100644 index fa015edca9e6916f24394813ce8ba77d2072e296..0000000000000000000000000000000000000000 --- a/spaces/juanhuggingface/ChuanhuChatGPT_Beta/chatgpt - macOS.command +++ /dev/null @@ -1,7 +0,0 @@ -#!/bin/bash -echo Opening ChuanhuChatGPT... -cd "$(dirname "${BASH_SOURCE[0]}")" -nohup python3 ChuanhuChatbot.py >/dev/null 2>&1 & -sleep 5 -open http://127.0.0.1:7860 -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). If you kill ChuanhuChatbot, Use "pkill -f 'ChuanhuChatbot'" command in terminal. \ No newline at end of file diff --git a/spaces/junkmind/SOTER/kernel_utils.py b/spaces/junkmind/SOTER/kernel_utils.py deleted file mode 100644 index 619f11bf61a655c45643d21e60bcef9445aac124..0000000000000000000000000000000000000000 --- a/spaces/junkmind/SOTER/kernel_utils.py +++ /dev/null @@ -1,366 +0,0 @@ -import os - -import cv2 -import numpy as np -import torch -from PIL import Image -from albumentations.augmentations.functional import image_compression -from facenet_pytorch.models.mtcnn import MTCNN -from concurrent.futures import ThreadPoolExecutor - -from torchvision.transforms import Normalize - -mean = [0.485, 0.456, 0.406] -std = [0.229, 0.224, 0.225] -normalize_transform = Normalize(mean, std) - - -class VideoReader: - """Helper class for reading one or more frames from a video file.""" - - def __init__(self, verbose=True, insets=(0, 0)): - """Creates a new VideoReader. - - Arguments: - verbose: whether to print warnings and error messages - insets: amount to inset the image by, as a percentage of - (width, height). This lets you "zoom in" to an image - to remove unimportant content around the borders. - Useful for face detection, which may not work if the - faces are too small. - """ - self.verbose = verbose - self.insets = insets - - def read_frames(self, path, num_frames, jitter=0, seed=None): - """Reads frames that are always evenly spaced throughout the video. - - Arguments: - path: the video file - num_frames: how many frames to read, -1 means the entire video - (warning: this will take up a lot of memory!) - jitter: if not 0, adds small random offsets to the frame indices; - this is useful so we don't always land on even or odd frames - seed: random seed for jittering; if you set this to a fixed value, - you probably want to set it only on the first video - """ - assert num_frames > 0 - - capture = cv2.VideoCapture(path) - frame_count = int(capture.get(cv2.CAP_PROP_FRAME_COUNT)) - if frame_count <= 0: return None - - frame_idxs = np.linspace(0, frame_count - 1, num_frames, endpoint=True, dtype=np.int32) - if jitter > 0: - np.random.seed(seed) - jitter_offsets = np.random.randint(-jitter, jitter, len(frame_idxs)) - frame_idxs = np.clip(frame_idxs + jitter_offsets, 0, frame_count - 1) - - result = self._read_frames_at_indices(path, capture, frame_idxs) - capture.release() - return result - - def read_random_frames(self, path, num_frames, seed=None): - """Picks the frame indices at random. - - Arguments: - path: the video file - num_frames: how many frames to read, -1 means the entire video - (warning: this will take up a lot of memory!) - """ - assert num_frames > 0 - np.random.seed(seed) - - capture = cv2.VideoCapture(path) - frame_count = int(capture.get(cv2.CAP_PROP_FRAME_COUNT)) - if frame_count <= 0: return None - - frame_idxs = sorted(np.random.choice(np.arange(0, frame_count), num_frames)) - result = self._read_frames_at_indices(path, capture, frame_idxs) - - capture.release() - return result - - def read_frames_at_indices(self, path, frame_idxs): - """Reads frames from a video and puts them into a NumPy array. - - Arguments: - path: the video file - frame_idxs: a list of frame indices. Important: should be - sorted from low-to-high! If an index appears multiple - times, the frame is still read only once. - - Returns: - - a NumPy array of shape (num_frames, height, width, 3) - - a list of the frame indices that were read - - Reading stops if loading a frame fails, in which case the first - dimension returned may actually be less than num_frames. - - Returns None if an exception is thrown for any reason, or if no - frames were read. - """ - assert len(frame_idxs) > 0 - capture = cv2.VideoCapture(path) - result = self._read_frames_at_indices(path, capture, frame_idxs) - capture.release() - return result - - def _read_frames_at_indices(self, path, capture, frame_idxs): - try: - frames = [] - idxs_read = [] - for frame_idx in range(frame_idxs[0], frame_idxs[-1] + 1): - # Get the next frame, but don't decode if we're not using it. - ret = capture.grab() - if not ret: - if self.verbose: - print("Error grabbing frame %d from movie %s" % (frame_idx, path)) - break - - # Need to look at this frame? - current = len(idxs_read) - if frame_idx == frame_idxs[current]: - ret, frame = capture.retrieve() - if not ret or frame is None: - if self.verbose: - print("Error retrieving frame %d from movie %s" % (frame_idx, path)) - break - - frame = self._postprocess_frame(frame) - frames.append(frame) - idxs_read.append(frame_idx) - - if len(frames) > 0: - return np.stack(frames), idxs_read - if self.verbose: - print("No frames read from movie %s" % path) - return None - except: - if self.verbose: - print("Exception while reading movie %s" % path) - return None - - def read_middle_frame(self, path): - """Reads the frame from the middle of the video.""" - capture = cv2.VideoCapture(path) - frame_count = int(capture.get(cv2.CAP_PROP_FRAME_COUNT)) - result = self._read_frame_at_index(path, capture, frame_count // 2) - capture.release() - return result - - def read_frame_at_index(self, path, frame_idx): - """Reads a single frame from a video. - - If you just want to read a single frame from the video, this is more - efficient than scanning through the video to find the frame. However, - for reading multiple frames it's not efficient. - - My guess is that a "streaming" approach is more efficient than a - "random access" approach because, unless you happen to grab a keyframe, - the decoder still needs to read all the previous frames in order to - reconstruct the one you're asking for. - - Returns a NumPy array of shape (1, H, W, 3) and the index of the frame, - or None if reading failed. - """ - capture = cv2.VideoCapture(path) - result = self._read_frame_at_index(path, capture, frame_idx) - capture.release() - return result - - def _read_frame_at_index(self, path, capture, frame_idx): - capture.set(cv2.CAP_PROP_POS_FRAMES, frame_idx) - ret, frame = capture.read() - if not ret or frame is None: - if self.verbose: - print("Error retrieving frame %d from movie %s" % (frame_idx, path)) - return None - else: - frame = self._postprocess_frame(frame) - return np.expand_dims(frame, axis=0), [frame_idx] - - def _postprocess_frame(self, frame): - frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) - - if self.insets[0] > 0: - W = frame.shape[1] - p = int(W * self.insets[0]) - frame = frame[:, p:-p, :] - - if self.insets[1] > 0: - H = frame.shape[1] - q = int(H * self.insets[1]) - frame = frame[q:-q, :, :] - - return frame - - -class FaceExtractor: - def __init__(self, video_read_fn): - self.video_read_fn = video_read_fn - self.detector = MTCNN(margin=0, thresholds=[0.7, 0.8, 0.8], device="cpu") - - def process_videos(self, input_dir, filenames, video_idxs): - videos_read = [] - frames_read = [] - frames = [] - results = [] - for video_idx in video_idxs: - # Read the full-size frames from this video. - filename = filenames[video_idx] - video_path = os.path.join(input_dir, filename) - result = self.video_read_fn(video_path) - # Error? Then skip this video. - if result is None: continue - - videos_read.append(video_idx) - - # Keep track of the original frames (need them later). - my_frames, my_idxs = result - - frames.append(my_frames) - frames_read.append(my_idxs) - for i, frame in enumerate(my_frames): - h, w = frame.shape[:2] - img = Image.fromarray(frame.astype(np.uint8)) - img = img.resize(size=[s // 2 for s in img.size]) - - batch_boxes, probs = self.detector.detect(img, landmarks=False) - - faces = [] - scores = [] - if batch_boxes is None: - continue - for bbox, score in zip(batch_boxes, probs): - if bbox is not None: - xmin, ymin, xmax, ymax = [int(b * 2) for b in bbox] - w = xmax - xmin - h = ymax - ymin - p_h = h // 3 - p_w = w // 3 - crop = frame[max(ymin - p_h, 0):ymax + p_h, max(xmin - p_w, 0):xmax + p_w] - faces.append(crop) - scores.append(score) - - frame_dict = {"video_idx": video_idx, - "frame_idx": my_idxs[i], - "frame_w": w, - "frame_h": h, - "faces": faces, - "scores": scores} - results.append(frame_dict) - - return results - - def process_video(self, video_path): - """Convenience method for doing face extraction on a single video.""" - input_dir = os.path.dirname(video_path) - filenames = [os.path.basename(video_path)] - return self.process_videos(input_dir, filenames, [0]) - - - -def confident_strategy(pred, t=0.8): - pred = np.array(pred) - sz = len(pred) - fakes = np.count_nonzero(pred > t) - # 11 frames are detected as fakes with high probability - if fakes > sz // 2.5 and fakes > 11: - return np.mean(pred[pred > t]) - elif np.count_nonzero(pred < 0.2) > 0.9 * sz: - return np.mean(pred[pred < 0.2]) - else: - return np.mean(pred) - -strategy = confident_strategy - - -def put_to_center(img, input_size): - img = img[:input_size, :input_size] - image = np.zeros((input_size, input_size, 3), dtype=np.uint8) - start_w = (input_size - img.shape[1]) // 2 - start_h = (input_size - img.shape[0]) // 2 - image[start_h:start_h + img.shape[0], start_w: start_w + img.shape[1], :] = img - return image - - -def isotropically_resize_image(img, size, interpolation_down=cv2.INTER_AREA, interpolation_up=cv2.INTER_CUBIC): - h, w = img.shape[:2] - if max(w, h) == size: - return img - if w > h: - scale = size / w - h = h * scale - w = size - else: - scale = size / h - w = w * scale - h = size - interpolation = interpolation_up if scale > 1 else interpolation_down - resized = cv2.resize(img, (int(w), int(h)), interpolation=interpolation) - return resized - - -def predict_on_video(face_extractor, video_path, batch_size, input_size, models, strategy=np.mean, - apply_compression=False, device='cpu'): - batch_size *= 4 - try: - faces = face_extractor.process_video(video_path) - if len(faces) > 0: - x = np.zeros((batch_size, input_size, input_size, 3), dtype=np.uint8) - n = 0 - for frame_data in faces: - for face in frame_data["faces"]: - resized_face = isotropically_resize_image(face, input_size) - resized_face = put_to_center(resized_face, input_size) - if apply_compression: - resized_face = image_compression(resized_face, quality=90, image_type=".jpg") - if n + 1 < batch_size: - x[n] = resized_face - n += 1 - else: - pass - if n > 0: - if device == 'cpu': - x = torch.tensor(x, device='cpu').float() - else: - x = torch.tensor(x, device="cuda").float() - # Preprocess the images. - x = x.permute((0, 3, 1, 2)) - for i in range(len(x)): - x[i] = normalize_transform(x[i] / 255.) - # Make a prediction, then take the average. - with torch.no_grad(): - preds = [] - models_ = [models] - for model in models_: - if device == 'cpu': - y_pred = model(x[:n]) - else: - y_pred = model(x[:n].half()) - y_pred = torch.sigmoid(y_pred.squeeze()) - bpred = y_pred[:n].cpu().numpy() - preds.append(strategy(bpred)) - return np.mean(preds) - except Exception as e: - print("Prediction error on video %s: %s" % (video_path, str(e))) - - return 0.5 - - -def predict_on_video_set(face_extractor, videos, input_size, num_workers, test_dir, frames_per_video, models, - strategy=np.mean, - apply_compression=False): - def process_file(i): - filename = videos[i] - y_pred = predict_on_video(face_extractor=face_extractor, video_path=os.path.join(test_dir, filename), - input_size=input_size, - batch_size=frames_per_video, - models=models, strategy=strategy, apply_compression=apply_compression) - return y_pred - - with ThreadPoolExecutor(max_workers=num_workers) as ex: - predictions = ex.map(process_file, range(len(videos))) - return list(predictions) - diff --git a/spaces/kadirnar/yolox/configs/yolov3.py b/spaces/kadirnar/yolox/configs/yolov3.py deleted file mode 100644 index c747f8ae9f42549a1dbd7f03d8ee80e235d6467a..0000000000000000000000000000000000000000 --- a/spaces/kadirnar/yolox/configs/yolov3.py +++ /dev/null @@ -1,33 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding:utf-8 -*- -# Copyright (c) Megvii, Inc. and its affiliates. - -import os - -import torch.nn as nn - -from yolox.exp import Exp as MyExp - - -class Exp(MyExp): - def __init__(self): - super(Exp, self).__init__() - self.depth = 1.0 - self.width = 1.0 - self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0] - - def get_model(self, sublinear=False): - def init_yolo(M): - for m in M.modules(): - if isinstance(m, nn.BatchNorm2d): - m.eps = 1e-3 - m.momentum = 0.03 - if "model" not in self.__dict__: - from yolox.models import YOLOX, YOLOFPN, YOLOXHead - backbone = YOLOFPN() - head = YOLOXHead(self.num_classes, self.width, in_channels=[128, 256, 512], act="lrelu") - self.model = YOLOX(backbone, head) - self.model.apply(init_yolo) - self.model.head.initialize_biases(1e-2) - - return self.model diff --git a/spaces/kazuk/youtube-whisper-08/README.md b/spaces/kazuk/youtube-whisper-08/README.md deleted file mode 100644 index c3180680339155aaf1d27f629129b68d12cac021..0000000000000000000000000000000000000000 --- a/spaces/kazuk/youtube-whisper-08/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Youtube Whisper -emoji: ⚡ -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: unknown -duplicated_from: kazuk/youtube-whisper ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kcagle/AutoGPT/autogpt/memory/redismem.py b/spaces/kcagle/AutoGPT/autogpt/memory/redismem.py deleted file mode 100644 index 082a812c5362cc9f19e35bf1bb10269b558f7724..0000000000000000000000000000000000000000 --- a/spaces/kcagle/AutoGPT/autogpt/memory/redismem.py +++ /dev/null @@ -1,156 +0,0 @@ -"""Redis memory provider.""" -from __future__ import annotations - -from typing import Any - -import numpy as np -import redis -from colorama import Fore, Style -from redis.commands.search.field import TextField, VectorField -from redis.commands.search.indexDefinition import IndexDefinition, IndexType -from redis.commands.search.query import Query - -from autogpt.llm_utils import create_embedding_with_ada -from autogpt.logs import logger -from autogpt.memory.base import MemoryProviderSingleton - -SCHEMA = [ - TextField("data"), - VectorField( - "embedding", - "HNSW", - {"TYPE": "FLOAT32", "DIM": 1536, "DISTANCE_METRIC": "COSINE"}, - ), -] - - -class RedisMemory(MemoryProviderSingleton): - def __init__(self, cfg): - """ - Initializes the Redis memory provider. - - Args: - cfg: The config object. - - Returns: None - """ - redis_host = cfg.redis_host - redis_port = cfg.redis_port - redis_password = cfg.redis_password - self.dimension = 1536 - self.redis = redis.Redis( - host=redis_host, - port=redis_port, - password=redis_password, - db=0, # Cannot be changed - ) - self.cfg = cfg - - # Check redis connection - try: - self.redis.ping() - except redis.ConnectionError as e: - logger.typewriter_log( - "FAILED TO CONNECT TO REDIS", - Fore.RED, - Style.BRIGHT + str(e) + Style.RESET_ALL, - ) - logger.double_check( - "Please ensure you have setup and configured Redis properly for use. " - + f"You can check out {Fore.CYAN + Style.BRIGHT}" - f"https://github.com/Torantulino/Auto-GPT#redis-setup{Style.RESET_ALL}" - " to ensure you've set up everything correctly." - ) - exit(1) - - if cfg.wipe_redis_on_start: - self.redis.flushall() - try: - self.redis.ft(f"{cfg.memory_index}").create_index( - fields=SCHEMA, - definition=IndexDefinition( - prefix=[f"{cfg.memory_index}:"], index_type=IndexType.HASH - ), - ) - except Exception as e: - print("Error creating Redis search index: ", e) - existing_vec_num = self.redis.get(f"{cfg.memory_index}-vec_num") - self.vec_num = int(existing_vec_num.decode("utf-8")) if existing_vec_num else 0 - - def add(self, data: str) -> str: - """ - Adds a data point to the memory. - - Args: - data: The data to add. - - Returns: Message indicating that the data has been added. - """ - if "Command Error:" in data: - return "" - vector = create_embedding_with_ada(data) - vector = np.array(vector).astype(np.float32).tobytes() - data_dict = {b"data": data, "embedding": vector} - pipe = self.redis.pipeline() - pipe.hset(f"{self.cfg.memory_index}:{self.vec_num}", mapping=data_dict) - _text = ( - f"Inserting data into memory at index: {self.vec_num}:\n" f"data: {data}" - ) - self.vec_num += 1 - pipe.set(f"{self.cfg.memory_index}-vec_num", self.vec_num) - pipe.execute() - return _text - - def get(self, data: str) -> list[Any] | None: - """ - Gets the data from the memory that is most relevant to the given data. - - Args: - data: The data to compare to. - - Returns: The most relevant data. - """ - return self.get_relevant(data, 1) - - def clear(self) -> str: - """ - Clears the redis server. - - Returns: A message indicating that the memory has been cleared. - """ - self.redis.flushall() - return "Obliviated" - - def get_relevant(self, data: str, num_relevant: int = 5) -> list[Any] | None: - """ - Returns all the data in the memory that is relevant to the given data. - Args: - data: The data to compare to. - num_relevant: The number of relevant data to return. - - Returns: A list of the most relevant data. - """ - query_embedding = create_embedding_with_ada(data) - base_query = f"*=>[KNN {num_relevant} @embedding $vector AS vector_score]" - query = ( - Query(base_query) - .return_fields("data", "vector_score") - .sort_by("vector_score") - .dialect(2) - ) - query_vector = np.array(query_embedding).astype(np.float32).tobytes() - - try: - results = self.redis.ft(f"{self.cfg.memory_index}").search( - query, query_params={"vector": query_vector} - ) - except Exception as e: - print("Error calling Redis search: ", e) - return None - return [result.data for result in results.docs] - - def get_stats(self): - """ - Returns: The stats of the memory index. - """ - return self.redis.ft(f"{self.cfg.memory_index}").info() diff --git a/spaces/keras-dreambooth/dreambooth_monkey_island/README.md b/spaces/keras-dreambooth/dreambooth_monkey_island/README.md deleted file mode 100644 index 3a79adfa64660e25184e9aa653f34c851235604c..0000000000000000000000000000000000000000 --- a/spaces/keras-dreambooth/dreambooth_monkey_island/README.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: Dreambooth Monkey Island -emoji: 🐒 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: apache-2.0 -tags: - - keras-dreambooth - - scifi ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/keras-io/deit/examples/DontREADME.md b/spaces/keras-io/deit/examples/DontREADME.md deleted file mode 100644 index 76a47617e7b5ae3b878dc99b79e37613c146f4d2..0000000000000000000000000000000000000000 --- a/spaces/keras-io/deit/examples/DontREADME.md +++ /dev/null @@ -1 +0,0 @@ -# HAHA I Said Don't Read Me. \ No newline at end of file diff --git a/spaces/kevinwang676/VoiceChangers/src/face3d/models/facerecon_model.py b/spaces/kevinwang676/VoiceChangers/src/face3d/models/facerecon_model.py deleted file mode 100644 index 7de8ca6eebc50ff1ed52c5ba37d31b43f977b5e1..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VoiceChangers/src/face3d/models/facerecon_model.py +++ /dev/null @@ -1,220 +0,0 @@ -"""This script defines the face reconstruction model for Deep3DFaceRecon_pytorch -""" - -import numpy as np -import torch -from src.face3d.models.base_model import BaseModel -from src.face3d.models import networks -from src.face3d.models.bfm import ParametricFaceModel -from src.face3d.models.losses import perceptual_loss, photo_loss, reg_loss, reflectance_loss, landmark_loss -from src.face3d.util import util -from src.face3d.util.nvdiffrast import MeshRenderer -# from src.face3d.util.preprocess import estimate_norm_torch - -import trimesh -from scipy.io import savemat - -class FaceReconModel(BaseModel): - - @staticmethod - def modify_commandline_options(parser, is_train=False): - """ Configures options specific for CUT model - """ - # net structure and parameters - parser.add_argument('--net_recon', type=str, default='resnet50', choices=['resnet18', 'resnet34', 'resnet50'], help='network structure') - parser.add_argument('--init_path', type=str, default='./checkpoints/init_model/resnet50-0676ba61.pth') - parser.add_argument('--use_last_fc', type=util.str2bool, nargs='?', const=True, default=False, help='zero initialize the last fc') - parser.add_argument('--bfm_folder', type=str, default='./checkpoints/BFM_Fitting/') - parser.add_argument('--bfm_model', type=str, default='BFM_model_front.mat', help='bfm model') - - # renderer parameters - parser.add_argument('--focal', type=float, default=1015.) - parser.add_argument('--center', type=float, default=112.) - parser.add_argument('--camera_d', type=float, default=10.) - parser.add_argument('--z_near', type=float, default=5.) - parser.add_argument('--z_far', type=float, default=15.) - - if is_train: - # training parameters - parser.add_argument('--net_recog', type=str, default='r50', choices=['r18', 'r43', 'r50'], help='face recog network structure') - parser.add_argument('--net_recog_path', type=str, default='checkpoints/recog_model/ms1mv3_arcface_r50_fp16/backbone.pth') - parser.add_argument('--use_crop_face', type=util.str2bool, nargs='?', const=True, default=False, help='use crop mask for photo loss') - parser.add_argument('--use_predef_M', type=util.str2bool, nargs='?', const=True, default=False, help='use predefined M for predicted face') - - - # augmentation parameters - parser.add_argument('--shift_pixs', type=float, default=10., help='shift pixels') - parser.add_argument('--scale_delta', type=float, default=0.1, help='delta scale factor') - parser.add_argument('--rot_angle', type=float, default=10., help='rot angles, degree') - - # loss weights - parser.add_argument('--w_feat', type=float, default=0.2, help='weight for feat loss') - parser.add_argument('--w_color', type=float, default=1.92, help='weight for loss loss') - parser.add_argument('--w_reg', type=float, default=3.0e-4, help='weight for reg loss') - parser.add_argument('--w_id', type=float, default=1.0, help='weight for id_reg loss') - parser.add_argument('--w_exp', type=float, default=0.8, help='weight for exp_reg loss') - parser.add_argument('--w_tex', type=float, default=1.7e-2, help='weight for tex_reg loss') - parser.add_argument('--w_gamma', type=float, default=10.0, help='weight for gamma loss') - parser.add_argument('--w_lm', type=float, default=1.6e-3, help='weight for lm loss') - parser.add_argument('--w_reflc', type=float, default=5.0, help='weight for reflc loss') - - opt, _ = parser.parse_known_args() - parser.set_defaults( - focal=1015., center=112., camera_d=10., use_last_fc=False, z_near=5., z_far=15. - ) - if is_train: - parser.set_defaults( - use_crop_face=True, use_predef_M=False - ) - return parser - - def __init__(self, opt): - """Initialize this model class. - - Parameters: - opt -- training/test options - - A few things can be done here. - - (required) call the initialization function of BaseModel - - define loss function, visualization images, model names, and optimizers - """ - BaseModel.__init__(self, opt) # call the initialization method of BaseModel - - self.visual_names = ['output_vis'] - self.model_names = ['net_recon'] - self.parallel_names = self.model_names + ['renderer'] - - self.facemodel = ParametricFaceModel( - bfm_folder=opt.bfm_folder, camera_distance=opt.camera_d, focal=opt.focal, center=opt.center, - is_train=self.isTrain, default_name=opt.bfm_model - ) - - fov = 2 * np.arctan(opt.center / opt.focal) * 180 / np.pi - self.renderer = MeshRenderer( - rasterize_fov=fov, znear=opt.z_near, zfar=opt.z_far, rasterize_size=int(2 * opt.center) - ) - - if self.isTrain: - self.loss_names = ['all', 'feat', 'color', 'lm', 'reg', 'gamma', 'reflc'] - - self.net_recog = networks.define_net_recog( - net_recog=opt.net_recog, pretrained_path=opt.net_recog_path - ) - # loss func name: (compute_%s_loss) % loss_name - self.compute_feat_loss = perceptual_loss - self.comupte_color_loss = photo_loss - self.compute_lm_loss = landmark_loss - self.compute_reg_loss = reg_loss - self.compute_reflc_loss = reflectance_loss - - self.optimizer = torch.optim.Adam(self.net_recon.parameters(), lr=opt.lr) - self.optimizers = [self.optimizer] - self.parallel_names += ['net_recog'] - # Our program will automatically call to define schedulers, load networks, and print networks - - def set_input(self, input): - """Unpack input data from the dataloader and perform necessary pre-processing steps. - - Parameters: - input: a dictionary that contains the data itself and its metadata information. - """ - self.input_img = input['imgs'].to(self.device) - self.atten_mask = input['msks'].to(self.device) if 'msks' in input else None - self.gt_lm = input['lms'].to(self.device) if 'lms' in input else None - self.trans_m = input['M'].to(self.device) if 'M' in input else None - self.image_paths = input['im_paths'] if 'im_paths' in input else None - - def forward(self, output_coeff, device): - self.facemodel.to(device) - self.pred_vertex, self.pred_tex, self.pred_color, self.pred_lm = \ - self.facemodel.compute_for_render(output_coeff) - self.pred_mask, _, self.pred_face = self.renderer( - self.pred_vertex, self.facemodel.face_buf, feat=self.pred_color) - - self.pred_coeffs_dict = self.facemodel.split_coeff(output_coeff) - - - def compute_losses(self): - """Calculate losses, gradients, and update network weights; called in every training iteration""" - - assert self.net_recog.training == False - trans_m = self.trans_m - if not self.opt.use_predef_M: - trans_m = estimate_norm_torch(self.pred_lm, self.input_img.shape[-2]) - - pred_feat = self.net_recog(self.pred_face, trans_m) - gt_feat = self.net_recog(self.input_img, self.trans_m) - self.loss_feat = self.opt.w_feat * self.compute_feat_loss(pred_feat, gt_feat) - - face_mask = self.pred_mask - if self.opt.use_crop_face: - face_mask, _, _ = self.renderer(self.pred_vertex, self.facemodel.front_face_buf) - - face_mask = face_mask.detach() - self.loss_color = self.opt.w_color * self.comupte_color_loss( - self.pred_face, self.input_img, self.atten_mask * face_mask) - - loss_reg, loss_gamma = self.compute_reg_loss(self.pred_coeffs_dict, self.opt) - self.loss_reg = self.opt.w_reg * loss_reg - self.loss_gamma = self.opt.w_gamma * loss_gamma - - self.loss_lm = self.opt.w_lm * self.compute_lm_loss(self.pred_lm, self.gt_lm) - - self.loss_reflc = self.opt.w_reflc * self.compute_reflc_loss(self.pred_tex, self.facemodel.skin_mask) - - self.loss_all = self.loss_feat + self.loss_color + self.loss_reg + self.loss_gamma \ - + self.loss_lm + self.loss_reflc - - - def optimize_parameters(self, isTrain=True): - self.forward() - self.compute_losses() - """Update network weights; it will be called in every training iteration.""" - if isTrain: - self.optimizer.zero_grad() - self.loss_all.backward() - self.optimizer.step() - - def compute_visuals(self): - with torch.no_grad(): - input_img_numpy = 255. * self.input_img.detach().cpu().permute(0, 2, 3, 1).numpy() - output_vis = self.pred_face * self.pred_mask + (1 - self.pred_mask) * self.input_img - output_vis_numpy_raw = 255. * output_vis.detach().cpu().permute(0, 2, 3, 1).numpy() - - if self.gt_lm is not None: - gt_lm_numpy = self.gt_lm.cpu().numpy() - pred_lm_numpy = self.pred_lm.detach().cpu().numpy() - output_vis_numpy = util.draw_landmarks(output_vis_numpy_raw, gt_lm_numpy, 'b') - output_vis_numpy = util.draw_landmarks(output_vis_numpy, pred_lm_numpy, 'r') - - output_vis_numpy = np.concatenate((input_img_numpy, - output_vis_numpy_raw, output_vis_numpy), axis=-2) - else: - output_vis_numpy = np.concatenate((input_img_numpy, - output_vis_numpy_raw), axis=-2) - - self.output_vis = torch.tensor( - output_vis_numpy / 255., dtype=torch.float32 - ).permute(0, 3, 1, 2).to(self.device) - - def save_mesh(self, name): - - recon_shape = self.pred_vertex # get reconstructed shape - recon_shape[..., -1] = 10 - recon_shape[..., -1] # from camera space to world space - recon_shape = recon_shape.cpu().numpy()[0] - recon_color = self.pred_color - recon_color = recon_color.cpu().numpy()[0] - tri = self.facemodel.face_buf.cpu().numpy() - mesh = trimesh.Trimesh(vertices=recon_shape, faces=tri, vertex_colors=np.clip(255. * recon_color, 0, 255).astype(np.uint8)) - mesh.export(name) - - def save_coeff(self,name): - - pred_coeffs = {key:self.pred_coeffs_dict[key].cpu().numpy() for key in self.pred_coeffs_dict} - pred_lm = self.pred_lm.cpu().numpy() - pred_lm = np.stack([pred_lm[:,:,0],self.input_img.shape[2]-1-pred_lm[:,:,1]],axis=2) # transfer to image coordinate - pred_coeffs['lm68'] = pred_lm - savemat(name,pred_coeffs) - - - diff --git a/spaces/kquote03/lama-video-watermark-remover/bin/paper_runfiles/blur_tests.sh b/spaces/kquote03/lama-video-watermark-remover/bin/paper_runfiles/blur_tests.sh deleted file mode 100644 index 8f204a4c643d08935e5561ed27a286536643958d..0000000000000000000000000000000000000000 --- a/spaces/kquote03/lama-video-watermark-remover/bin/paper_runfiles/blur_tests.sh +++ /dev/null @@ -1,37 +0,0 @@ -##!/usr/bin/env bash -# -## !!! file set to make test_large_30k from the vanilla test_large: configs/test_large_30k.lst -# -## paths to data are valid for mml7 -#PLACES_ROOT="/data/inpainting/Places365" -#OUT_DIR="/data/inpainting/paper_data/Places365_val_test" -# -#source "$(dirname $0)/env.sh" -# -#for datadir in test_large_30k # val_large -#do -# for conf in random_thin_256 random_medium_256 random_thick_256 random_thin_512 random_medium_512 random_thick_512 -# do -# "$BINDIR/gen_mask_dataset.py" "$CONFIGDIR/data_gen/${conf}.yaml" \ -# "$PLACES_ROOT/$datadir" "$OUT_DIR/$datadir/$conf" --n-jobs 8 -# -# "$BINDIR/calc_dataset_stats.py" --samples-n 20 "$OUT_DIR/$datadir/$conf" "$OUT_DIR/$datadir/${conf}_stats" -# done -# -# for conf in segm_256 segm_512 -# do -# "$BINDIR/gen_mask_dataset.py" "$CONFIGDIR/data_gen/${conf}.yaml" \ -# "$PLACES_ROOT/$datadir" "$OUT_DIR/$datadir/$conf" --n-jobs 2 -# -# "$BINDIR/calc_dataset_stats.py" --samples-n 20 "$OUT_DIR/$datadir/$conf" "$OUT_DIR/$datadir/${conf}_stats" -# done -#done -# -#IN_DIR="/data/inpainting/paper_data/Places365_val_test/test_large_30k/random_medium_512" -#PRED_DIR="/data/inpainting/predictions/final/images/r.suvorov_2021-03-05_17-08-35_train_ablv2_work_resume_epoch37/random_medium_512" -#BLUR_OUT_DIR="/data/inpainting/predictions/final/blur/images" -# -#for b in 0.1 -# -#"$BINDIR/blur_predicts.py" "$BASEDIR/../../configs/eval2.yaml" "$CUR_IN_DIR" "$CUR_OUT_DIR" "$CUR_EVAL_DIR" -# diff --git a/spaces/krrishD/Langchain_Code_QA_Bot/README.md b/spaces/krrishD/Langchain_Code_QA_Bot/README.md deleted file mode 100644 index 40352792699983e27240d22922b172b94d8a3f2e..0000000000000000000000000000000000000000 --- a/spaces/krrishD/Langchain_Code_QA_Bot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Langchain Code QA Bot -emoji: 👀 -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/figure.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/figure.py deleted file mode 100644 index c6df929e04eebfb26c3d8cca0bd6c45057668c4c..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/figure.py +++ /dev/null @@ -1,3594 +0,0 @@ -""" -`matplotlib.figure` implements the following classes: - -`Figure` - Top level `~matplotlib.artist.Artist`, which holds all plot elements. - Many methods are implemented in `FigureBase`. - -`SubFigure` - A logical figure inside a figure, usually added to a figure (or parent - `SubFigure`) with `Figure.add_subfigure` or `Figure.subfigures` methods - (provisional API v3.4). - -`SubplotParams` - Control the default spacing between subplots. - -Figures are typically created using pyplot methods `~.pyplot.figure`, -`~.pyplot.subplots`, and `~.pyplot.subplot_mosaic`. - -.. plot:: - :include-source: - - fig, ax = plt.subplots(figsize=(2, 2), facecolor='lightskyblue', - layout='constrained') - fig.suptitle('Figure') - ax.set_title('Axes', loc='left', fontstyle='oblique', fontsize='medium') - -Some situations call for directly instantiating a `~.figure.Figure` class, -usually inside an application of some sort (see :ref:`user_interfaces` for a -list of examples) . More information about Figures can be found at -:ref:`figure_explanation`. - -""" - -from contextlib import ExitStack -import inspect -import itertools -import logging -from numbers import Integral - -import numpy as np - -import matplotlib as mpl -from matplotlib import _blocking_input, backend_bases, _docstring, projections -from matplotlib.artist import ( - Artist, allow_rasterization, _finalize_rasterization) -from matplotlib.backend_bases import ( - DrawEvent, FigureCanvasBase, NonGuiException, MouseButton, _get_renderer) -import matplotlib._api as _api -import matplotlib.cbook as cbook -import matplotlib.colorbar as cbar -import matplotlib.image as mimage - -from matplotlib.axes import Axes -from matplotlib.gridspec import GridSpec -from matplotlib.layout_engine import ( - ConstrainedLayoutEngine, TightLayoutEngine, LayoutEngine, - PlaceHolderLayoutEngine -) -import matplotlib.legend as mlegend -from matplotlib.patches import Rectangle -from matplotlib.text import Text -from matplotlib.transforms import (Affine2D, Bbox, BboxTransformTo, - TransformedBbox) - -_log = logging.getLogger(__name__) - - -def _stale_figure_callback(self, val): - if self.figure: - self.figure.stale = val - - -class _AxesStack: - """ - Helper class to track axes in a figure. - - Axes are tracked both in the order in which they have been added - (``self._axes`` insertion/iteration order) and in the separate "gca" stack - (which is the index to which they map in the ``self._axes`` dict). - """ - - def __init__(self): - self._axes = {} # Mapping of axes to "gca" order. - self._counter = itertools.count() - - def as_list(self): - """List the axes that have been added to the figure.""" - return [*self._axes] # This relies on dict preserving order. - - def remove(self, a): - """Remove the axes from the stack.""" - self._axes.pop(a) - - def bubble(self, a): - """Move an axes, which must already exist in the stack, to the top.""" - if a not in self._axes: - raise ValueError("Axes has not been added yet") - self._axes[a] = next(self._counter) - - def add(self, a): - """Add an axes to the stack, ignoring it if already present.""" - if a not in self._axes: - self._axes[a] = next(self._counter) - - def current(self): - """Return the active axes, or None if the stack is empty.""" - return max(self._axes, key=self._axes.__getitem__, default=None) - - -class SubplotParams: - """ - A class to hold the parameters for a subplot. - """ - - def __init__(self, left=None, bottom=None, right=None, top=None, - wspace=None, hspace=None): - """ - Defaults are given by :rc:`figure.subplot.[name]`. - - Parameters - ---------- - left : float - The position of the left edge of the subplots, - as a fraction of the figure width. - right : float - The position of the right edge of the subplots, - as a fraction of the figure width. - bottom : float - The position of the bottom edge of the subplots, - as a fraction of the figure height. - top : float - The position of the top edge of the subplots, - as a fraction of the figure height. - wspace : float - The width of the padding between subplots, - as a fraction of the average Axes width. - hspace : float - The height of the padding between subplots, - as a fraction of the average Axes height. - """ - for key in ["left", "bottom", "right", "top", "wspace", "hspace"]: - setattr(self, key, mpl.rcParams[f"figure.subplot.{key}"]) - self.update(left, bottom, right, top, wspace, hspace) - - def update(self, left=None, bottom=None, right=None, top=None, - wspace=None, hspace=None): - """ - Update the dimensions of the passed parameters. *None* means unchanged. - """ - if ((left if left is not None else self.left) - >= (right if right is not None else self.right)): - raise ValueError('left cannot be >= right') - if ((bottom if bottom is not None else self.bottom) - >= (top if top is not None else self.top)): - raise ValueError('bottom cannot be >= top') - if left is not None: - self.left = left - if right is not None: - self.right = right - if bottom is not None: - self.bottom = bottom - if top is not None: - self.top = top - if wspace is not None: - self.wspace = wspace - if hspace is not None: - self.hspace = hspace - - -class FigureBase(Artist): - """ - Base class for `.Figure` and `.SubFigure` containing the methods that add - artists to the figure or subfigure, create Axes, etc. - """ - def __init__(self, **kwargs): - super().__init__() - # remove the non-figure artist _axes property - # as it makes no sense for a figure to be _in_ an Axes - # this is used by the property methods in the artist base class - # which are over-ridden in this class - del self._axes - - self._suptitle = None - self._supxlabel = None - self._supylabel = None - - # groupers to keep track of x and y labels we want to align. - # see self.align_xlabels and self.align_ylabels and - # axis._get_tick_boxes_siblings - self._align_label_groups = {"x": cbook.Grouper(), "y": cbook.Grouper()} - - self.figure = self - self._localaxes = [] # track all axes - self.artists = [] - self.lines = [] - self.patches = [] - self.texts = [] - self.images = [] - self.legends = [] - self.subfigs = [] - self.stale = True - self.suppressComposite = None - self.set(**kwargs) - - def _get_draw_artists(self, renderer): - """Also runs apply_aspect""" - artists = self.get_children() - for sfig in self.subfigs: - artists.remove(sfig) - childa = sfig.get_children() - for child in childa: - if child in artists: - artists.remove(child) - - artists.remove(self.patch) - artists = sorted( - (artist for artist in artists if not artist.get_animated()), - key=lambda artist: artist.get_zorder()) - for ax in self._localaxes: - locator = ax.get_axes_locator() - ax.apply_aspect(locator(ax, renderer) if locator else None) - - for child in ax.get_children(): - if hasattr(child, 'apply_aspect'): - locator = child.get_axes_locator() - child.apply_aspect( - locator(child, renderer) if locator else None) - return artists - - def autofmt_xdate( - self, bottom=0.2, rotation=30, ha='right', which='major'): - """ - Date ticklabels often overlap, so it is useful to rotate them - and right align them. Also, a common use case is a number of - subplots with shared x-axis where the x-axis is date data. The - ticklabels are often long, and it helps to rotate them on the - bottom subplot and turn them off on other subplots, as well as - turn off xlabels. - - Parameters - ---------- - bottom : float, default: 0.2 - The bottom of the subplots for `subplots_adjust`. - rotation : float, default: 30 degrees - The rotation angle of the xtick labels in degrees. - ha : {'left', 'center', 'right'}, default: 'right' - The horizontal alignment of the xticklabels. - which : {'major', 'minor', 'both'}, default: 'major' - Selects which ticklabels to rotate. - """ - _api.check_in_list(['major', 'minor', 'both'], which=which) - allsubplots = all(ax.get_subplotspec() for ax in self.axes) - if len(self.axes) == 1: - for label in self.axes[0].get_xticklabels(which=which): - label.set_ha(ha) - label.set_rotation(rotation) - else: - if allsubplots: - for ax in self.get_axes(): - if ax.get_subplotspec().is_last_row(): - for label in ax.get_xticklabels(which=which): - label.set_ha(ha) - label.set_rotation(rotation) - else: - for label in ax.get_xticklabels(which=which): - label.set_visible(False) - ax.set_xlabel('') - - if allsubplots: - self.subplots_adjust(bottom=bottom) - self.stale = True - - def get_children(self): - """Get a list of artists contained in the figure.""" - return [self.patch, - *self.artists, - *self._localaxes, - *self.lines, - *self.patches, - *self.texts, - *self.images, - *self.legends, - *self.subfigs] - - def contains(self, mouseevent): - """ - Test whether the mouse event occurred on the figure. - - Returns - ------- - bool, {} - """ - inside, info = self._default_contains(mouseevent, figure=self) - if inside is not None: - return inside, info - inside = self.bbox.contains(mouseevent.x, mouseevent.y) - return inside, {} - - @_api.delete_parameter("3.6", "args") - @_api.delete_parameter("3.6", "kwargs") - def get_window_extent(self, renderer=None, *args, **kwargs): - # docstring inherited - return self.bbox - - def _suplabels(self, t, info, **kwargs): - """ - Add a centered %(name)s to the figure. - - Parameters - ---------- - t : str - The %(name)s text. - x : float, default: %(x0)s - The x location of the text in figure coordinates. - y : float, default: %(y0)s - The y location of the text in figure coordinates. - horizontalalignment, ha : {'center', 'left', 'right'}, default: %(ha)s - The horizontal alignment of the text relative to (*x*, *y*). - verticalalignment, va : {'top', 'center', 'bottom', 'baseline'}, \ -default: %(va)s - The vertical alignment of the text relative to (*x*, *y*). - fontsize, size : default: :rc:`figure.%(rc)ssize` - The font size of the text. See `.Text.set_size` for possible - values. - fontweight, weight : default: :rc:`figure.%(rc)sweight` - The font weight of the text. See `.Text.set_weight` for possible - values. - - Returns - ------- - text - The `.Text` instance of the %(name)s. - - Other Parameters - ---------------- - fontproperties : None or dict, optional - A dict of font properties. If *fontproperties* is given the - default values for font size and weight are taken from the - `.FontProperties` defaults. :rc:`figure.%(rc)ssize` and - :rc:`figure.%(rc)sweight` are ignored in this case. - - **kwargs - Additional kwargs are `matplotlib.text.Text` properties. - """ - - suplab = getattr(self, info['name']) - - x = kwargs.pop('x', None) - y = kwargs.pop('y', None) - if info['name'] in ['_supxlabel', '_suptitle']: - autopos = y is None - elif info['name'] == '_supylabel': - autopos = x is None - if x is None: - x = info['x0'] - if y is None: - y = info['y0'] - - if 'horizontalalignment' not in kwargs and 'ha' not in kwargs: - kwargs['horizontalalignment'] = info['ha'] - if 'verticalalignment' not in kwargs and 'va' not in kwargs: - kwargs['verticalalignment'] = info['va'] - if 'rotation' not in kwargs: - kwargs['rotation'] = info['rotation'] - - if 'fontproperties' not in kwargs: - if 'fontsize' not in kwargs and 'size' not in kwargs: - kwargs['size'] = mpl.rcParams[info['size']] - if 'fontweight' not in kwargs and 'weight' not in kwargs: - kwargs['weight'] = mpl.rcParams[info['weight']] - - sup = self.text(x, y, t, **kwargs) - if suplab is not None: - suplab.set_text(t) - suplab.set_position((x, y)) - suplab.update_from(sup) - sup.remove() - else: - suplab = sup - suplab._autopos = autopos - setattr(self, info['name'], suplab) - self.stale = True - return suplab - - @_docstring.Substitution(x0=0.5, y0=0.98, name='suptitle', ha='center', - va='top', rc='title') - @_docstring.copy(_suplabels) - def suptitle(self, t, **kwargs): - # docstring from _suplabels... - info = {'name': '_suptitle', 'x0': 0.5, 'y0': 0.98, - 'ha': 'center', 'va': 'top', 'rotation': 0, - 'size': 'figure.titlesize', 'weight': 'figure.titleweight'} - return self._suplabels(t, info, **kwargs) - - @_docstring.Substitution(x0=0.5, y0=0.01, name='supxlabel', ha='center', - va='bottom', rc='label') - @_docstring.copy(_suplabels) - def supxlabel(self, t, **kwargs): - # docstring from _suplabels... - info = {'name': '_supxlabel', 'x0': 0.5, 'y0': 0.01, - 'ha': 'center', 'va': 'bottom', 'rotation': 0, - 'size': 'figure.labelsize', 'weight': 'figure.labelweight'} - return self._suplabels(t, info, **kwargs) - - @_docstring.Substitution(x0=0.02, y0=0.5, name='supylabel', ha='left', - va='center', rc='label') - @_docstring.copy(_suplabels) - def supylabel(self, t, **kwargs): - # docstring from _suplabels... - info = {'name': '_supylabel', 'x0': 0.02, 'y0': 0.5, - 'ha': 'left', 'va': 'center', 'rotation': 'vertical', - 'rotation_mode': 'anchor', 'size': 'figure.labelsize', - 'weight': 'figure.labelweight'} - return self._suplabels(t, info, **kwargs) - - def get_edgecolor(self): - """Get the edge color of the Figure rectangle.""" - return self.patch.get_edgecolor() - - def get_facecolor(self): - """Get the face color of the Figure rectangle.""" - return self.patch.get_facecolor() - - def get_frameon(self): - """ - Return the figure's background patch visibility, i.e. - whether the figure background will be drawn. Equivalent to - ``Figure.patch.get_visible()``. - """ - return self.patch.get_visible() - - def set_linewidth(self, linewidth): - """ - Set the line width of the Figure rectangle. - - Parameters - ---------- - linewidth : number - """ - self.patch.set_linewidth(linewidth) - - def get_linewidth(self): - """ - Get the line width of the Figure rectangle. - """ - return self.patch.get_linewidth() - - def set_edgecolor(self, color): - """ - Set the edge color of the Figure rectangle. - - Parameters - ---------- - color : color - """ - self.patch.set_edgecolor(color) - - def set_facecolor(self, color): - """ - Set the face color of the Figure rectangle. - - Parameters - ---------- - color : color - """ - self.patch.set_facecolor(color) - - def set_frameon(self, b): - """ - Set the figure's background patch visibility, i.e. - whether the figure background will be drawn. Equivalent to - ``Figure.patch.set_visible()``. - - Parameters - ---------- - b : bool - """ - self.patch.set_visible(b) - self.stale = True - - frameon = property(get_frameon, set_frameon) - - def add_artist(self, artist, clip=False): - """ - Add an `.Artist` to the figure. - - Usually artists are added to `~.axes.Axes` objects using - `.Axes.add_artist`; this method can be used in the rare cases where - one needs to add artists directly to the figure instead. - - Parameters - ---------- - artist : `~matplotlib.artist.Artist` - The artist to add to the figure. If the added artist has no - transform previously set, its transform will be set to - ``figure.transSubfigure``. - clip : bool, default: False - Whether the added artist should be clipped by the figure patch. - - Returns - ------- - `~matplotlib.artist.Artist` - The added artist. - """ - artist.set_figure(self) - self.artists.append(artist) - artist._remove_method = self.artists.remove - - if not artist.is_transform_set(): - artist.set_transform(self.transSubfigure) - - if clip: - artist.set_clip_path(self.patch) - - self.stale = True - return artist - - @_docstring.dedent_interpd - def add_axes(self, *args, **kwargs): - """ - Add an `~.axes.Axes` to the figure. - - Call signatures:: - - add_axes(rect, projection=None, polar=False, **kwargs) - add_axes(ax) - - Parameters - ---------- - rect : tuple (left, bottom, width, height) - The dimensions (left, bottom, width, height) of the new - `~.axes.Axes`. All quantities are in fractions of figure width and - height. - - projection : {None, 'aitoff', 'hammer', 'lambert', 'mollweide', \ -'polar', 'rectilinear', str}, optional - The projection type of the `~.axes.Axes`. *str* is the name of - a custom projection, see `~matplotlib.projections`. The default - None results in a 'rectilinear' projection. - - polar : bool, default: False - If True, equivalent to projection='polar'. - - axes_class : subclass type of `~.axes.Axes`, optional - The `.axes.Axes` subclass that is instantiated. This parameter - is incompatible with *projection* and *polar*. See - :ref:`axisartist_users-guide-index` for examples. - - sharex, sharey : `~.axes.Axes`, optional - Share the x or y `~matplotlib.axis` with sharex and/or sharey. - The axis will have the same limits, ticks, and scale as the axis - of the shared axes. - - label : str - A label for the returned Axes. - - Returns - ------- - `~.axes.Axes`, or a subclass of `~.axes.Axes` - The returned axes class depends on the projection used. It is - `~.axes.Axes` if rectilinear projection is used and - `.projections.polar.PolarAxes` if polar projection is used. - - Other Parameters - ---------------- - **kwargs - This method also takes the keyword arguments for - the returned Axes class. The keyword arguments for the - rectilinear Axes class `~.axes.Axes` can be found in - the following table but there might also be other keyword - arguments if another projection is used, see the actual Axes - class. - - %(Axes:kwdoc)s - - Notes - ----- - In rare circumstances, `.add_axes` may be called with a single - argument, an Axes instance already created in the present figure but - not in the figure's list of Axes. - - See Also - -------- - .Figure.add_subplot - .pyplot.subplot - .pyplot.axes - .Figure.subplots - .pyplot.subplots - - Examples - -------- - Some simple examples:: - - rect = l, b, w, h - fig = plt.figure() - fig.add_axes(rect) - fig.add_axes(rect, frameon=False, facecolor='g') - fig.add_axes(rect, polar=True) - ax = fig.add_axes(rect, projection='polar') - fig.delaxes(ax) - fig.add_axes(ax) - """ - - if not len(args) and 'rect' not in kwargs: - raise TypeError( - "add_axes() missing 1 required positional argument: 'rect'") - elif 'rect' in kwargs: - if len(args): - raise TypeError( - "add_axes() got multiple values for argument 'rect'") - args = (kwargs.pop('rect'), ) - - if isinstance(args[0], Axes): - a = args[0] - key = a._projection_init - if a.get_figure() is not self: - raise ValueError( - "The Axes must have been created in the present figure") - else: - rect = args[0] - if not np.isfinite(rect).all(): - raise ValueError('all entries in rect must be finite ' - 'not {}'.format(rect)) - projection_class, pkw = self._process_projection_requirements( - *args, **kwargs) - - # create the new axes using the axes class given - a = projection_class(self, rect, **pkw) - key = (projection_class, pkw) - return self._add_axes_internal(a, key) - - @_docstring.dedent_interpd - def add_subplot(self, *args, **kwargs): - """ - Add an `~.axes.Axes` to the figure as part of a subplot arrangement. - - Call signatures:: - - add_subplot(nrows, ncols, index, **kwargs) - add_subplot(pos, **kwargs) - add_subplot(ax) - add_subplot() - - Parameters - ---------- - *args : int, (int, int, *index*), or `.SubplotSpec`, default: (1, 1, 1) - The position of the subplot described by one of - - - Three integers (*nrows*, *ncols*, *index*). The subplot will - take the *index* position on a grid with *nrows* rows and - *ncols* columns. *index* starts at 1 in the upper left corner - and increases to the right. *index* can also be a two-tuple - specifying the (*first*, *last*) indices (1-based, and including - *last*) of the subplot, e.g., ``fig.add_subplot(3, 1, (1, 2))`` - makes a subplot that spans the upper 2/3 of the figure. - - A 3-digit integer. The digits are interpreted as if given - separately as three single-digit integers, i.e. - ``fig.add_subplot(235)`` is the same as - ``fig.add_subplot(2, 3, 5)``. Note that this can only be used - if there are no more than 9 subplots. - - A `.SubplotSpec`. - - In rare circumstances, `.add_subplot` may be called with a single - argument, a subplot Axes instance already created in the - present figure but not in the figure's list of Axes. - - projection : {None, 'aitoff', 'hammer', 'lambert', 'mollweide', \ -'polar', 'rectilinear', str}, optional - The projection type of the subplot (`~.axes.Axes`). *str* is the - name of a custom projection, see `~matplotlib.projections`. The - default None results in a 'rectilinear' projection. - - polar : bool, default: False - If True, equivalent to projection='polar'. - - axes_class : subclass type of `~.axes.Axes`, optional - The `.axes.Axes` subclass that is instantiated. This parameter - is incompatible with *projection* and *polar*. See - :ref:`axisartist_users-guide-index` for examples. - - sharex, sharey : `~.axes.Axes`, optional - Share the x or y `~matplotlib.axis` with sharex and/or sharey. - The axis will have the same limits, ticks, and scale as the axis - of the shared axes. - - label : str - A label for the returned Axes. - - Returns - ------- - `~.axes.Axes` - - The Axes of the subplot. The returned Axes can actually be an - instance of a subclass, such as `.projections.polar.PolarAxes` for - polar projections. - - Other Parameters - ---------------- - **kwargs - This method also takes the keyword arguments for the returned Axes - base class; except for the *figure* argument. The keyword arguments - for the rectilinear base class `~.axes.Axes` can be found in - the following table but there might also be other keyword - arguments if another projection is used. - - %(Axes:kwdoc)s - - See Also - -------- - .Figure.add_axes - .pyplot.subplot - .pyplot.axes - .Figure.subplots - .pyplot.subplots - - Examples - -------- - :: - - fig = plt.figure() - - fig.add_subplot(231) - ax1 = fig.add_subplot(2, 3, 1) # equivalent but more general - - fig.add_subplot(232, frameon=False) # subplot with no frame - fig.add_subplot(233, projection='polar') # polar subplot - fig.add_subplot(234, sharex=ax1) # subplot sharing x-axis with ax1 - fig.add_subplot(235, facecolor="red") # red subplot - - ax1.remove() # delete ax1 from the figure - fig.add_subplot(ax1) # add ax1 back to the figure - """ - if 'figure' in kwargs: - # Axes itself allows for a 'figure' kwarg, but since we want to - # bind the created Axes to self, it is not allowed here. - raise _api.kwarg_error("add_subplot", "figure") - - if (len(args) == 1 - and isinstance(args[0], mpl.axes._base._AxesBase) - and args[0].get_subplotspec()): - ax = args[0] - key = ax._projection_init - if ax.get_figure() is not self: - raise ValueError("The Axes must have been created in " - "the present figure") - else: - if not args: - args = (1, 1, 1) - # Normalize correct ijk values to (i, j, k) here so that - # add_subplot(211) == add_subplot(2, 1, 1). Invalid values will - # trigger errors later (via SubplotSpec._from_subplot_args). - if (len(args) == 1 and isinstance(args[0], Integral) - and 100 <= args[0] <= 999): - args = tuple(map(int, str(args[0]))) - projection_class, pkw = self._process_projection_requirements( - *args, **kwargs) - ax = projection_class(self, *args, **pkw) - key = (projection_class, pkw) - return self._add_axes_internal(ax, key) - - def _add_axes_internal(self, ax, key): - """Private helper for `add_axes` and `add_subplot`.""" - self._axstack.add(ax) - if ax not in self._localaxes: - self._localaxes.append(ax) - self.sca(ax) - ax._remove_method = self.delaxes - # this is to support plt.subplot's re-selection logic - ax._projection_init = key - self.stale = True - ax.stale_callback = _stale_figure_callback - return ax - - def subplots(self, nrows=1, ncols=1, *, sharex=False, sharey=False, - squeeze=True, width_ratios=None, height_ratios=None, - subplot_kw=None, gridspec_kw=None): - """ - Add a set of subplots to this figure. - - This utility wrapper makes it convenient to create common layouts of - subplots in a single call. - - Parameters - ---------- - nrows, ncols : int, default: 1 - Number of rows/columns of the subplot grid. - - sharex, sharey : bool or {'none', 'all', 'row', 'col'}, default: False - Controls sharing of x-axis (*sharex*) or y-axis (*sharey*): - - - True or 'all': x- or y-axis will be shared among all subplots. - - False or 'none': each subplot x- or y-axis will be independent. - - 'row': each subplot row will share an x- or y-axis. - - 'col': each subplot column will share an x- or y-axis. - - When subplots have a shared x-axis along a column, only the x tick - labels of the bottom subplot are created. Similarly, when subplots - have a shared y-axis along a row, only the y tick labels of the - first column subplot are created. To later turn other subplots' - ticklabels on, use `~matplotlib.axes.Axes.tick_params`. - - When subplots have a shared axis that has units, calling - `.Axis.set_units` will update each axis with the new units. - - squeeze : bool, default: True - - If True, extra dimensions are squeezed out from the returned - array of Axes: - - - if only one subplot is constructed (nrows=ncols=1), the - resulting single Axes object is returned as a scalar. - - for Nx1 or 1xM subplots, the returned object is a 1D numpy - object array of Axes objects. - - for NxM, subplots with N>1 and M>1 are returned as a 2D array. - - - If False, no squeezing at all is done: the returned Axes object - is always a 2D array containing Axes instances, even if it ends - up being 1x1. - - width_ratios : array-like of length *ncols*, optional - Defines the relative widths of the columns. Each column gets a - relative width of ``width_ratios[i] / sum(width_ratios)``. - If not given, all columns will have the same width. Equivalent - to ``gridspec_kw={'width_ratios': [...]}``. - - height_ratios : array-like of length *nrows*, optional - Defines the relative heights of the rows. Each row gets a - relative height of ``height_ratios[i] / sum(height_ratios)``. - If not given, all rows will have the same height. Equivalent - to ``gridspec_kw={'height_ratios': [...]}``. - - subplot_kw : dict, optional - Dict with keywords passed to the `.Figure.add_subplot` call used to - create each subplot. - - gridspec_kw : dict, optional - Dict with keywords passed to the - `~matplotlib.gridspec.GridSpec` constructor used to create - the grid the subplots are placed on. - - Returns - ------- - `~.axes.Axes` or array of Axes - Either a single `~matplotlib.axes.Axes` object or an array of Axes - objects if more than one subplot was created. The dimensions of the - resulting array can be controlled with the *squeeze* keyword, see - above. - - See Also - -------- - .pyplot.subplots - .Figure.add_subplot - .pyplot.subplot - - Examples - -------- - :: - - # First create some toy data: - x = np.linspace(0, 2*np.pi, 400) - y = np.sin(x**2) - - # Create a figure - plt.figure() - - # Create a subplot - ax = fig.subplots() - ax.plot(x, y) - ax.set_title('Simple plot') - - # Create two subplots and unpack the output array immediately - ax1, ax2 = fig.subplots(1, 2, sharey=True) - ax1.plot(x, y) - ax1.set_title('Sharing Y axis') - ax2.scatter(x, y) - - # Create four polar Axes and access them through the returned array - axes = fig.subplots(2, 2, subplot_kw=dict(projection='polar')) - axes[0, 0].plot(x, y) - axes[1, 1].scatter(x, y) - - # Share an X-axis with each column of subplots - fig.subplots(2, 2, sharex='col') - - # Share a Y-axis with each row of subplots - fig.subplots(2, 2, sharey='row') - - # Share both X- and Y-axes with all subplots - fig.subplots(2, 2, sharex='all', sharey='all') - - # Note that this is the same as - fig.subplots(2, 2, sharex=True, sharey=True) - """ - gridspec_kw = dict(gridspec_kw or {}) - if height_ratios is not None: - if 'height_ratios' in gridspec_kw: - raise ValueError("'height_ratios' must not be defined both as " - "parameter and as key in 'gridspec_kw'") - gridspec_kw['height_ratios'] = height_ratios - if width_ratios is not None: - if 'width_ratios' in gridspec_kw: - raise ValueError("'width_ratios' must not be defined both as " - "parameter and as key in 'gridspec_kw'") - gridspec_kw['width_ratios'] = width_ratios - - gs = self.add_gridspec(nrows, ncols, figure=self, **gridspec_kw) - axs = gs.subplots(sharex=sharex, sharey=sharey, squeeze=squeeze, - subplot_kw=subplot_kw) - return axs - - def delaxes(self, ax): - """ - Remove the `~.axes.Axes` *ax* from the figure; update the current Axes. - """ - - def _reset_locators_and_formatters(axis): - # Set the formatters and locators to be associated with axis - # (where previously they may have been associated with another - # Axis instance) - axis.get_major_formatter().set_axis(axis) - axis.get_major_locator().set_axis(axis) - axis.get_minor_formatter().set_axis(axis) - axis.get_minor_locator().set_axis(axis) - - def _break_share_link(ax, grouper): - siblings = grouper.get_siblings(ax) - if len(siblings) > 1: - grouper.remove(ax) - for last_ax in siblings: - if ax is not last_ax: - return last_ax - return None - - self._axstack.remove(ax) - self._axobservers.process("_axes_change_event", self) - self.stale = True - self._localaxes.remove(ax) - - # Break link between any shared axes - for name in ax._axis_names: - last_ax = _break_share_link(ax, ax._shared_axes[name]) - if last_ax is not None: - _reset_locators_and_formatters(getattr(last_ax, f"{name}axis")) - - # Break link between any twinned axes - _break_share_link(ax, ax._twinned_axes) - - def clear(self, keep_observers=False): - """ - Clear the figure. - - Parameters - ---------- - keep_observers : bool, default: False - Set *keep_observers* to True if, for example, - a gui widget is tracking the Axes in the figure. - """ - self.suppressComposite = None - - # first clear the axes in any subfigures - for subfig in self.subfigs: - subfig.clear(keep_observers=keep_observers) - self.subfigs = [] - - for ax in tuple(self.axes): # Iterate over the copy. - ax.clear() - self.delaxes(ax) # Remove ax from self._axstack. - - self.artists = [] - self.lines = [] - self.patches = [] - self.texts = [] - self.images = [] - self.legends = [] - if not keep_observers: - self._axobservers = cbook.CallbackRegistry() - self._suptitle = None - self._supxlabel = None - self._supylabel = None - - self.stale = True - - # synonym for `clear`. - def clf(self, keep_observers=False): - """ - [*Discouraged*] Alias for the `clear()` method. - - .. admonition:: Discouraged - - The use of ``clf()`` is discouraged. Use ``clear()`` instead. - - Parameters - ---------- - keep_observers : bool, default: False - Set *keep_observers* to True if, for example, - a gui widget is tracking the Axes in the figure. - """ - return self.clear(keep_observers=keep_observers) - - # Note: the docstring below is modified with replace for the pyplot - # version of this function because the method name differs (plt.figlegend) - # the replacements are: - # " legend(" -> " figlegend(" for the signatures - # "fig.legend(" -> "plt.figlegend" for the code examples - # "ax.plot" -> "plt.plot" for consistency in using pyplot when able - @_docstring.dedent_interpd - def legend(self, *args, **kwargs): - """ - Place a legend on the figure. - - Call signatures:: - - legend() - legend(handles, labels) - legend(handles=handles) - legend(labels) - - The call signatures correspond to the following different ways to use - this method: - - **1. Automatic detection of elements to be shown in the legend** - - The elements to be added to the legend are automatically determined, - when you do not pass in any extra arguments. - - In this case, the labels are taken from the artist. You can specify - them either at artist creation or by calling the - :meth:`~.Artist.set_label` method on the artist:: - - ax.plot([1, 2, 3], label='Inline label') - fig.legend() - - or:: - - line, = ax.plot([1, 2, 3]) - line.set_label('Label via method') - fig.legend() - - Specific lines can be excluded from the automatic legend element - selection by defining a label starting with an underscore. - This is default for all artists, so calling `.Figure.legend` without - any arguments and without setting the labels manually will result in - no legend being drawn. - - - **2. Explicitly listing the artists and labels in the legend** - - For full control of which artists have a legend entry, it is possible - to pass an iterable of legend artists followed by an iterable of - legend labels respectively:: - - fig.legend([line1, line2, line3], ['label1', 'label2', 'label3']) - - - **3. Explicitly listing the artists in the legend** - - This is similar to 2, but the labels are taken from the artists' - label properties. Example:: - - line1, = ax1.plot([1, 2, 3], label='label1') - line2, = ax2.plot([1, 2, 3], label='label2') - fig.legend(handles=[line1, line2]) - - - **4. Labeling existing plot elements** - - .. admonition:: Discouraged - - This call signature is discouraged, because the relation between - plot elements and labels is only implicit by their order and can - easily be mixed up. - - To make a legend for all artists on all Axes, call this function with - an iterable of strings, one for each legend item. For example:: - - fig, (ax1, ax2) = plt.subplots(1, 2) - ax1.plot([1, 3, 5], color='blue') - ax2.plot([2, 4, 6], color='red') - fig.legend(['the blues', 'the reds']) - - - Parameters - ---------- - handles : list of `.Artist`, optional - A list of Artists (lines, patches) to be added to the legend. - Use this together with *labels*, if you need full control on what - is shown in the legend and the automatic mechanism described above - is not sufficient. - - The length of handles and labels should be the same in this - case. If they are not, they are truncated to the smaller length. - - labels : list of str, optional - A list of labels to show next to the artists. - Use this together with *handles*, if you need full control on what - is shown in the legend and the automatic mechanism described above - is not sufficient. - - Returns - ------- - `~matplotlib.legend.Legend` - - Other Parameters - ---------------- - %(_legend_kw_figure)s - - See Also - -------- - .Axes.legend - - Notes - ----- - Some artists are not supported by this function. See - :doc:`/tutorials/intermediate/legend_guide` for details. - """ - - handles, labels, extra_args, kwargs = mlegend._parse_legend_args( - self.axes, - *args, - **kwargs) - # check for third arg - if len(extra_args): - # _api.warn_deprecated( - # "2.1", - # message="Figure.legend will accept no more than two " - # "positional arguments in the future. Use " - # "'fig.legend(handles, labels, loc=location)' " - # "instead.") - # kwargs['loc'] = extra_args[0] - # extra_args = extra_args[1:] - pass - transform = kwargs.pop('bbox_transform', self.transSubfigure) - # explicitly set the bbox transform if the user hasn't. - l = mlegend.Legend(self, handles, labels, *extra_args, - bbox_transform=transform, **kwargs) - self.legends.append(l) - l._remove_method = self.legends.remove - self.stale = True - return l - - @_docstring.dedent_interpd - def text(self, x, y, s, fontdict=None, **kwargs): - """ - Add text to figure. - - Parameters - ---------- - x, y : float - The position to place the text. By default, this is in figure - coordinates, floats in [0, 1]. The coordinate system can be changed - using the *transform* keyword. - - s : str - The text string. - - fontdict : dict, optional - A dictionary to override the default text properties. If not given, - the defaults are determined by :rc:`font.*`. Properties passed as - *kwargs* override the corresponding ones given in *fontdict*. - - Returns - ------- - `~.text.Text` - - Other Parameters - ---------------- - **kwargs : `~matplotlib.text.Text` properties - Other miscellaneous text parameters. - - %(Text:kwdoc)s - - See Also - -------- - .Axes.text - .pyplot.text - """ - effective_kwargs = { - 'transform': self.transSubfigure, - **(fontdict if fontdict is not None else {}), - **kwargs, - } - text = Text(x=x, y=y, text=s, **effective_kwargs) - text.set_figure(self) - text.stale_callback = _stale_figure_callback - - self.texts.append(text) - text._remove_method = self.texts.remove - self.stale = True - return text - - @_docstring.dedent_interpd - def colorbar( - self, mappable, cax=None, ax=None, use_gridspec=True, **kwargs): - """ - Add a colorbar to a plot. - - Parameters - ---------- - mappable - The `matplotlib.cm.ScalarMappable` (i.e., `.AxesImage`, - `.ContourSet`, etc.) described by this colorbar. This argument is - mandatory for the `.Figure.colorbar` method but optional for the - `.pyplot.colorbar` function, which sets the default to the current - image. - - Note that one can create a `.ScalarMappable` "on-the-fly" to - generate colorbars not attached to a previously drawn artist, e.g. - :: - - fig.colorbar(cm.ScalarMappable(norm=norm, cmap=cmap), ax=ax) - - cax : `~matplotlib.axes.Axes`, optional - Axes into which the colorbar will be drawn. - - ax : `~.axes.Axes` or iterable or `numpy.ndarray` of Axes, optional - One or more parent axes from which space for a new colorbar axes - will be stolen, if *cax* is None. This has no effect if *cax* is - set. - - use_gridspec : bool, optional - If *cax* is ``None``, a new *cax* is created as an instance of - Axes. If *ax* is positioned with a subplotspec and *use_gridspec* - is ``True``, then *cax* is also positioned with a subplotspec. - - Returns - ------- - colorbar : `~matplotlib.colorbar.Colorbar` - - Other Parameters - ---------------- - %(_make_axes_kw_doc)s - %(_colormap_kw_doc)s - - Notes - ----- - If *mappable* is a `~.contour.ContourSet`, its *extend* kwarg is - included automatically. - - The *shrink* kwarg provides a simple way to scale the colorbar with - respect to the axes. Note that if *cax* is specified, it determines the - size of the colorbar and *shrink* and *aspect* kwargs are ignored. - - For more precise control, you can manually specify the positions of the - axes objects in which the mappable and the colorbar are drawn. In this - case, do not use any of the axes properties kwargs. - - It is known that some vector graphics viewers (svg and pdf) renders - white gaps between segments of the colorbar. This is due to bugs in - the viewers, not Matplotlib. As a workaround, the colorbar can be - rendered with overlapping segments:: - - cbar = colorbar() - cbar.solids.set_edgecolor("face") - draw() - - However, this has negative consequences in other circumstances, e.g. - with semi-transparent images (alpha < 1) and colorbar extensions; - therefore, this workaround is not used by default (see issue #1188). - """ - - if ax is None: - ax = getattr(mappable, "axes", None) - - if (self.get_layout_engine() is not None and - not self.get_layout_engine().colorbar_gridspec): - use_gridspec = False - # Store the value of gca so that we can set it back later on. - if cax is None: - if ax is None: - _api.warn_deprecated("3.6", message=( - 'Unable to determine Axes to steal space for Colorbar. ' - 'Using gca(), but will raise in the future. ' - 'Either provide the *cax* argument to use as the Axes for ' - 'the Colorbar, provide the *ax* argument to steal space ' - 'from it, or add *mappable* to an Axes.')) - ax = self.gca() - current_ax = self.gca() - userax = False - if (use_gridspec - and isinstance(ax, mpl.axes._base._AxesBase) - and ax.get_subplotspec()): - cax, kwargs = cbar.make_axes_gridspec(ax, **kwargs) - else: - cax, kwargs = cbar.make_axes(ax, **kwargs) - cax.grid(visible=False, which='both', axis='both') - else: - userax = True - - # need to remove kws that cannot be passed to Colorbar - NON_COLORBAR_KEYS = ['fraction', 'pad', 'shrink', 'aspect', 'anchor', - 'panchor'] - cb_kw = {k: v for k, v in kwargs.items() if k not in NON_COLORBAR_KEYS} - - cb = cbar.Colorbar(cax, mappable, **cb_kw) - - if not userax: - self.sca(current_ax) - self.stale = True - return cb - - def subplots_adjust(self, left=None, bottom=None, right=None, top=None, - wspace=None, hspace=None): - """ - Adjust the subplot layout parameters. - - Unset parameters are left unmodified; initial values are given by - :rc:`figure.subplot.[name]`. - - Parameters - ---------- - left : float, optional - The position of the left edge of the subplots, - as a fraction of the figure width. - right : float, optional - The position of the right edge of the subplots, - as a fraction of the figure width. - bottom : float, optional - The position of the bottom edge of the subplots, - as a fraction of the figure height. - top : float, optional - The position of the top edge of the subplots, - as a fraction of the figure height. - wspace : float, optional - The width of the padding between subplots, - as a fraction of the average Axes width. - hspace : float, optional - The height of the padding between subplots, - as a fraction of the average Axes height. - """ - if (self.get_layout_engine() is not None and - not self.get_layout_engine().adjust_compatible): - _api.warn_external( - "This figure was using a layout engine that is " - "incompatible with subplots_adjust and/or tight_layout; " - "not calling subplots_adjust.") - return - self.subplotpars.update(left, bottom, right, top, wspace, hspace) - for ax in self.axes: - if ax.get_subplotspec() is not None: - ax._set_position(ax.get_subplotspec().get_position(self)) - self.stale = True - - def align_xlabels(self, axs=None): - """ - Align the xlabels of subplots in the same subplot column if label - alignment is being done automatically (i.e. the label position is - not manually set). - - Alignment persists for draw events after this is called. - - If a label is on the bottom, it is aligned with labels on Axes that - also have their label on the bottom and that have the same - bottom-most subplot row. If the label is on the top, - it is aligned with labels on Axes with the same top-most row. - - Parameters - ---------- - axs : list of `~matplotlib.axes.Axes` - Optional list of (or `~numpy.ndarray`) `~matplotlib.axes.Axes` - to align the xlabels. - Default is to align all Axes on the figure. - - See Also - -------- - matplotlib.figure.Figure.align_ylabels - matplotlib.figure.Figure.align_labels - - Notes - ----- - This assumes that ``axs`` are from the same `.GridSpec`, so that - their `.SubplotSpec` positions correspond to figure positions. - - Examples - -------- - Example with rotated xtick labels:: - - fig, axs = plt.subplots(1, 2) - for tick in axs[0].get_xticklabels(): - tick.set_rotation(55) - axs[0].set_xlabel('XLabel 0') - axs[1].set_xlabel('XLabel 1') - fig.align_xlabels() - """ - if axs is None: - axs = self.axes - axs = [ax for ax in np.ravel(axs) if ax.get_subplotspec() is not None] - for ax in axs: - _log.debug(' Working on: %s', ax.get_xlabel()) - rowspan = ax.get_subplotspec().rowspan - pos = ax.xaxis.get_label_position() # top or bottom - # Search through other axes for label positions that are same as - # this one and that share the appropriate row number. - # Add to a grouper associated with each axes of siblings. - # This list is inspected in `axis.draw` by - # `axis._update_label_position`. - for axc in axs: - if axc.xaxis.get_label_position() == pos: - rowspanc = axc.get_subplotspec().rowspan - if (pos == 'top' and rowspan.start == rowspanc.start or - pos == 'bottom' and rowspan.stop == rowspanc.stop): - # grouper for groups of xlabels to align - self._align_label_groups['x'].join(ax, axc) - - def align_ylabels(self, axs=None): - """ - Align the ylabels of subplots in the same subplot column if label - alignment is being done automatically (i.e. the label position is - not manually set). - - Alignment persists for draw events after this is called. - - If a label is on the left, it is aligned with labels on Axes that - also have their label on the left and that have the same - left-most subplot column. If the label is on the right, - it is aligned with labels on Axes with the same right-most column. - - Parameters - ---------- - axs : list of `~matplotlib.axes.Axes` - Optional list (or `~numpy.ndarray`) of `~matplotlib.axes.Axes` - to align the ylabels. - Default is to align all Axes on the figure. - - See Also - -------- - matplotlib.figure.Figure.align_xlabels - matplotlib.figure.Figure.align_labels - - Notes - ----- - This assumes that ``axs`` are from the same `.GridSpec`, so that - their `.SubplotSpec` positions correspond to figure positions. - - Examples - -------- - Example with large yticks labels:: - - fig, axs = plt.subplots(2, 1) - axs[0].plot(np.arange(0, 1000, 50)) - axs[0].set_ylabel('YLabel 0') - axs[1].set_ylabel('YLabel 1') - fig.align_ylabels() - """ - if axs is None: - axs = self.axes - axs = [ax for ax in np.ravel(axs) if ax.get_subplotspec() is not None] - for ax in axs: - _log.debug(' Working on: %s', ax.get_ylabel()) - colspan = ax.get_subplotspec().colspan - pos = ax.yaxis.get_label_position() # left or right - # Search through other axes for label positions that are same as - # this one and that share the appropriate column number. - # Add to a list associated with each axes of siblings. - # This list is inspected in `axis.draw` by - # `axis._update_label_position`. - for axc in axs: - if axc.yaxis.get_label_position() == pos: - colspanc = axc.get_subplotspec().colspan - if (pos == 'left' and colspan.start == colspanc.start or - pos == 'right' and colspan.stop == colspanc.stop): - # grouper for groups of ylabels to align - self._align_label_groups['y'].join(ax, axc) - - def align_labels(self, axs=None): - """ - Align the xlabels and ylabels of subplots with the same subplots - row or column (respectively) if label alignment is being - done automatically (i.e. the label position is not manually set). - - Alignment persists for draw events after this is called. - - Parameters - ---------- - axs : list of `~matplotlib.axes.Axes` - Optional list (or `~numpy.ndarray`) of `~matplotlib.axes.Axes` - to align the labels. - Default is to align all Axes on the figure. - - See Also - -------- - matplotlib.figure.Figure.align_xlabels - - matplotlib.figure.Figure.align_ylabels - """ - self.align_xlabels(axs=axs) - self.align_ylabels(axs=axs) - - def add_gridspec(self, nrows=1, ncols=1, **kwargs): - """ - Return a `.GridSpec` that has this figure as a parent. This allows - complex layout of Axes in the figure. - - Parameters - ---------- - nrows : int, default: 1 - Number of rows in grid. - - ncols : int, default: 1 - Number of columns in grid. - - Returns - ------- - `.GridSpec` - - Other Parameters - ---------------- - **kwargs - Keyword arguments are passed to `.GridSpec`. - - See Also - -------- - matplotlib.pyplot.subplots - - Examples - -------- - Adding a subplot that spans two rows:: - - fig = plt.figure() - gs = fig.add_gridspec(2, 2) - ax1 = fig.add_subplot(gs[0, 0]) - ax2 = fig.add_subplot(gs[1, 0]) - # spans two rows: - ax3 = fig.add_subplot(gs[:, 1]) - - """ - - _ = kwargs.pop('figure', None) # pop in case user has added this... - gs = GridSpec(nrows=nrows, ncols=ncols, figure=self, **kwargs) - return gs - - def subfigures(self, nrows=1, ncols=1, squeeze=True, - wspace=None, hspace=None, - width_ratios=None, height_ratios=None, - **kwargs): - """ - Add a set of subfigures to this figure or subfigure. - - A subfigure has the same artist methods as a figure, and is logically - the same as a figure, but cannot print itself. - See :doc:`/gallery/subplots_axes_and_figures/subfigures`. - - Parameters - ---------- - nrows, ncols : int, default: 1 - Number of rows/columns of the subfigure grid. - - squeeze : bool, default: True - If True, extra dimensions are squeezed out from the returned - array of subfigures. - - wspace, hspace : float, default: None - The amount of width/height reserved for space between subfigures, - expressed as a fraction of the average subfigure width/height. - If not given, the values will be inferred from a figure or - rcParams when necessary. - - width_ratios : array-like of length *ncols*, optional - Defines the relative widths of the columns. Each column gets a - relative width of ``width_ratios[i] / sum(width_ratios)``. - If not given, all columns will have the same width. - - height_ratios : array-like of length *nrows*, optional - Defines the relative heights of the rows. Each row gets a - relative height of ``height_ratios[i] / sum(height_ratios)``. - If not given, all rows will have the same height. - """ - gs = GridSpec(nrows=nrows, ncols=ncols, figure=self, - wspace=wspace, hspace=hspace, - width_ratios=width_ratios, - height_ratios=height_ratios) - - sfarr = np.empty((nrows, ncols), dtype=object) - for i in range(ncols): - for j in range(nrows): - sfarr[j, i] = self.add_subfigure(gs[j, i], **kwargs) - - if squeeze: - # Discarding unneeded dimensions that equal 1. If we only have one - # subfigure, just return it instead of a 1-element array. - return sfarr.item() if sfarr.size == 1 else sfarr.squeeze() - else: - # Returned axis array will be always 2-d, even if nrows=ncols=1. - return sfarr - - def add_subfigure(self, subplotspec, **kwargs): - """ - Add a `.SubFigure` to the figure as part of a subplot arrangement. - - Parameters - ---------- - subplotspec : `.gridspec.SubplotSpec` - Defines the region in a parent gridspec where the subfigure will - be placed. - - Returns - ------- - `.SubFigure` - - Other Parameters - ---------------- - **kwargs - Are passed to the `.SubFigure` object. - - See Also - -------- - .Figure.subfigures - """ - sf = SubFigure(self, subplotspec, **kwargs) - self.subfigs += [sf] - return sf - - def sca(self, a): - """Set the current Axes to be *a* and return *a*.""" - self._axstack.bubble(a) - self._axobservers.process("_axes_change_event", self) - return a - - def gca(self): - """ - Get the current Axes. - - If there is currently no Axes on this Figure, a new one is created - using `.Figure.add_subplot`. (To test whether there is currently an - Axes on a Figure, check whether ``figure.axes`` is empty. To test - whether there is currently a Figure on the pyplot figure stack, check - whether `.pyplot.get_fignums()` is empty.) - """ - ax = self._axstack.current() - return ax if ax is not None else self.add_subplot() - - def _gci(self): - # Helper for `~matplotlib.pyplot.gci`. Do not use elsewhere. - """ - Get the current colorable artist. - - Specifically, returns the current `.ScalarMappable` instance (`.Image` - created by `imshow` or `figimage`, `.Collection` created by `pcolor` or - `scatter`, etc.), or *None* if no such instance has been defined. - - The current image is an attribute of the current Axes, or the nearest - earlier Axes in the current figure that contains an image. - - Notes - ----- - Historically, the only colorable artists were images; hence the name - ``gci`` (get current image). - """ - # Look first for an image in the current Axes. - ax = self._axstack.current() - if ax is None: - return None - im = ax._gci() - if im is not None: - return im - # If there is no image in the current Axes, search for - # one in a previously created Axes. Whether this makes - # sense is debatable, but it is the documented behavior. - for ax in reversed(self.axes): - im = ax._gci() - if im is not None: - return im - return None - - def _process_projection_requirements( - self, *args, axes_class=None, polar=False, projection=None, - **kwargs): - """ - Handle the args/kwargs to add_axes/add_subplot/gca, returning:: - - (axes_proj_class, proj_class_kwargs) - - which can be used for new Axes initialization/identification. - """ - if axes_class is not None: - if polar or projection is not None: - raise ValueError( - "Cannot combine 'axes_class' and 'projection' or 'polar'") - projection_class = axes_class - else: - - if polar: - if projection is not None and projection != 'polar': - raise ValueError( - f"polar={polar}, yet projection={projection!r}. " - "Only one of these arguments should be supplied." - ) - projection = 'polar' - - if isinstance(projection, str) or projection is None: - projection_class = projections.get_projection_class(projection) - elif hasattr(projection, '_as_mpl_axes'): - projection_class, extra_kwargs = projection._as_mpl_axes() - kwargs.update(**extra_kwargs) - else: - raise TypeError( - f"projection must be a string, None or implement a " - f"_as_mpl_axes method, not {projection!r}") - return projection_class, kwargs - - def get_default_bbox_extra_artists(self): - bbox_artists = [artist for artist in self.get_children() - if (artist.get_visible() and artist.get_in_layout())] - for ax in self.axes: - if ax.get_visible(): - bbox_artists.extend(ax.get_default_bbox_extra_artists()) - return bbox_artists - - def get_tightbbox(self, renderer=None, bbox_extra_artists=None): - """ - Return a (tight) bounding box of the figure *in inches*. - - Note that `.FigureBase` differs from all other artists, which return - their `.Bbox` in pixels. - - Artists that have ``artist.set_in_layout(False)`` are not included - in the bbox. - - Parameters - ---------- - renderer : `.RendererBase` subclass - Renderer that will be used to draw the figures (i.e. - ``fig.canvas.get_renderer()``) - - bbox_extra_artists : list of `.Artist` or ``None`` - List of artists to include in the tight bounding box. If - ``None`` (default), then all artist children of each Axes are - included in the tight bounding box. - - Returns - ------- - `.BboxBase` - containing the bounding box (in figure inches). - """ - - if renderer is None: - renderer = self.figure._get_renderer() - - bb = [] - if bbox_extra_artists is None: - artists = self.get_default_bbox_extra_artists() - else: - artists = bbox_extra_artists - - for a in artists: - bbox = a.get_tightbbox(renderer) - if bbox is not None: - bb.append(bbox) - - for ax in self.axes: - if ax.get_visible(): - # some axes don't take the bbox_extra_artists kwarg so we - # need this conditional.... - try: - bbox = ax.get_tightbbox( - renderer, bbox_extra_artists=bbox_extra_artists) - except TypeError: - bbox = ax.get_tightbbox(renderer) - bb.append(bbox) - bb = [b for b in bb - if (np.isfinite(b.width) and np.isfinite(b.height) - and (b.width != 0 or b.height != 0))] - - isfigure = hasattr(self, 'bbox_inches') - if len(bb) == 0: - if isfigure: - return self.bbox_inches - else: - # subfigures do not have bbox_inches, but do have a bbox - bb = [self.bbox] - - _bbox = Bbox.union(bb) - - if isfigure: - # transform from pixels to inches... - _bbox = TransformedBbox(_bbox, self.dpi_scale_trans.inverted()) - - return _bbox - - @staticmethod - def _norm_per_subplot_kw(per_subplot_kw): - expanded = {} - for k, v in per_subplot_kw.items(): - if isinstance(k, tuple): - for sub_key in k: - if sub_key in expanded: - raise ValueError( - f'The key {sub_key!r} appears multiple times.' - ) - expanded[sub_key] = v - else: - if k in expanded: - raise ValueError( - f'The key {k!r} appears multiple times.' - ) - expanded[k] = v - return expanded - - @staticmethod - def _normalize_grid_string(layout): - if '\n' not in layout: - # single-line string - return [list(ln) for ln in layout.split(';')] - else: - # multi-line string - layout = inspect.cleandoc(layout) - return [list(ln) for ln in layout.strip('\n').split('\n')] - - def subplot_mosaic(self, mosaic, *, sharex=False, sharey=False, - width_ratios=None, height_ratios=None, - empty_sentinel='.', - subplot_kw=None, per_subplot_kw=None, gridspec_kw=None): - """ - Build a layout of Axes based on ASCII art or nested lists. - - This is a helper function to build complex GridSpec layouts visually. - - See :doc:`/gallery/subplots_axes_and_figures/mosaic` - for an example and full API documentation - - Parameters - ---------- - mosaic : list of list of {hashable or nested} or str - - A visual layout of how you want your Axes to be arranged - labeled as strings. For example :: - - x = [['A panel', 'A panel', 'edge'], - ['C panel', '.', 'edge']] - - produces 4 Axes: - - - 'A panel' which is 1 row high and spans the first two columns - - 'edge' which is 2 rows high and is on the right edge - - 'C panel' which in 1 row and 1 column wide in the bottom left - - a blank space 1 row and 1 column wide in the bottom center - - Any of the entries in the layout can be a list of lists - of the same form to create nested layouts. - - If input is a str, then it can either be a multi-line string of - the form :: - - ''' - AAE - C.E - ''' - - where each character is a column and each line is a row. Or it - can be a single-line string where rows are separated by ``;``:: - - 'AB;CC' - - The string notation allows only single character Axes labels and - does not support nesting but is very terse. - - The Axes identifiers may be `str` or a non-iterable hashable - object (e.g. `tuple` s may not be used). - - sharex, sharey : bool, default: False - If True, the x-axis (*sharex*) or y-axis (*sharey*) will be shared - among all subplots. In that case, tick label visibility and axis - units behave as for `subplots`. If False, each subplot's x- or - y-axis will be independent. - - width_ratios : array-like of length *ncols*, optional - Defines the relative widths of the columns. Each column gets a - relative width of ``width_ratios[i] / sum(width_ratios)``. - If not given, all columns will have the same width. Equivalent - to ``gridspec_kw={'width_ratios': [...]}``. In the case of nested - layouts, this argument applies only to the outer layout. - - height_ratios : array-like of length *nrows*, optional - Defines the relative heights of the rows. Each row gets a - relative height of ``height_ratios[i] / sum(height_ratios)``. - If not given, all rows will have the same height. Equivalent - to ``gridspec_kw={'height_ratios': [...]}``. In the case of nested - layouts, this argument applies only to the outer layout. - - subplot_kw : dict, optional - Dictionary with keywords passed to the `.Figure.add_subplot` call - used to create each subplot. These values may be overridden by - values in *per_subplot_kw*. - - per_subplot_kw : dict, optional - A dictionary mapping the Axes identifiers or tuples of identifiers - to a dictionary of keyword arguments to be passed to the - `.Figure.add_subplot` call used to create each subplot. The values - in these dictionaries have precedence over the values in - *subplot_kw*. - - If *mosaic* is a string, and thus all keys are single characters, - it is possible to use a single string instead of a tuple as keys; - i.e. ``"AB"`` is equivalent to ``("A", "B")``. - - .. versionadded:: 3.7 - - gridspec_kw : dict, optional - Dictionary with keywords passed to the `.GridSpec` constructor used - to create the grid the subplots are placed on. In the case of - nested layouts, this argument applies only to the outer layout. - For more complex layouts, users should use `.Figure.subfigures` - to create the nesting. - - empty_sentinel : object, optional - Entry in the layout to mean "leave this space empty". Defaults - to ``'.'``. Note, if *layout* is a string, it is processed via - `inspect.cleandoc` to remove leading white space, which may - interfere with using white-space as the empty sentinel. - - Returns - ------- - dict[label, Axes] - A dictionary mapping the labels to the Axes objects. The order of - the axes is left-to-right and top-to-bottom of their position in the - total layout. - - """ - subplot_kw = subplot_kw or {} - gridspec_kw = dict(gridspec_kw or {}) - per_subplot_kw = per_subplot_kw or {} - - if height_ratios is not None: - if 'height_ratios' in gridspec_kw: - raise ValueError("'height_ratios' must not be defined both as " - "parameter and as key in 'gridspec_kw'") - gridspec_kw['height_ratios'] = height_ratios - if width_ratios is not None: - if 'width_ratios' in gridspec_kw: - raise ValueError("'width_ratios' must not be defined both as " - "parameter and as key in 'gridspec_kw'") - gridspec_kw['width_ratios'] = width_ratios - - # special-case string input - if isinstance(mosaic, str): - mosaic = self._normalize_grid_string(mosaic) - per_subplot_kw = { - tuple(k): v for k, v in per_subplot_kw.items() - } - - per_subplot_kw = self._norm_per_subplot_kw(per_subplot_kw) - - # Only accept strict bools to allow a possible future API expansion. - _api.check_isinstance(bool, sharex=sharex, sharey=sharey) - - def _make_array(inp): - """ - Convert input into 2D array - - We need to have this internal function rather than - ``np.asarray(..., dtype=object)`` so that a list of lists - of lists does not get converted to an array of dimension > - 2 - - Returns - ------- - 2D object array - - """ - r0, *rest = inp - if isinstance(r0, str): - raise ValueError('List mosaic specification must be 2D') - for j, r in enumerate(rest, start=1): - if isinstance(r, str): - raise ValueError('List mosaic specification must be 2D') - if len(r0) != len(r): - raise ValueError( - "All of the rows must be the same length, however " - f"the first row ({r0!r}) has length {len(r0)} " - f"and row {j} ({r!r}) has length {len(r)}." - ) - out = np.zeros((len(inp), len(r0)), dtype=object) - for j, r in enumerate(inp): - for k, v in enumerate(r): - out[j, k] = v - return out - - def _identify_keys_and_nested(mosaic): - """ - Given a 2D object array, identify unique IDs and nested mosaics - - Parameters - ---------- - mosaic : 2D numpy object array - - Returns - ------- - unique_ids : tuple - The unique non-sub mosaic entries in this mosaic - nested : dict[tuple[int, int]], 2D object array - """ - # make sure we preserve the user supplied order - unique_ids = cbook._OrderedSet() - nested = {} - for j, row in enumerate(mosaic): - for k, v in enumerate(row): - if v == empty_sentinel: - continue - elif not cbook.is_scalar_or_string(v): - nested[(j, k)] = _make_array(v) - else: - unique_ids.add(v) - - return tuple(unique_ids), nested - - def _do_layout(gs, mosaic, unique_ids, nested): - """ - Recursively do the mosaic. - - Parameters - ---------- - gs : GridSpec - mosaic : 2D object array - The input converted to a 2D numpy array for this level. - unique_ids : tuple - The identified scalar labels at this level of nesting. - nested : dict[tuple[int, int]], 2D object array - The identified nested mosaics, if any. - - Returns - ------- - dict[label, Axes] - A flat dict of all of the Axes created. - """ - output = dict() - - # we need to merge together the Axes at this level and the axes - # in the (recursively) nested sub-mosaics so that we can add - # them to the figure in the "natural" order if you were to - # ravel in c-order all of the Axes that will be created - # - # This will stash the upper left index of each object (axes or - # nested mosaic) at this level - this_level = dict() - - # go through the unique keys, - for name in unique_ids: - # sort out where each axes starts/ends - indx = np.argwhere(mosaic == name) - start_row, start_col = np.min(indx, axis=0) - end_row, end_col = np.max(indx, axis=0) + 1 - # and construct the slice object - slc = (slice(start_row, end_row), slice(start_col, end_col)) - # some light error checking - if (mosaic[slc] != name).any(): - raise ValueError( - f"While trying to layout\n{mosaic!r}\n" - f"we found that the label {name!r} specifies a " - "non-rectangular or non-contiguous area.") - # and stash this slice for later - this_level[(start_row, start_col)] = (name, slc, 'axes') - - # do the same thing for the nested mosaics (simpler because these - # can not be spans yet!) - for (j, k), nested_mosaic in nested.items(): - this_level[(j, k)] = (None, nested_mosaic, 'nested') - - # now go through the things in this level and add them - # in order left-to-right top-to-bottom - for key in sorted(this_level): - name, arg, method = this_level[key] - # we are doing some hokey function dispatch here based - # on the 'method' string stashed above to sort out if this - # element is an Axes or a nested mosaic. - if method == 'axes': - slc = arg - # add a single axes - if name in output: - raise ValueError(f"There are duplicate keys {name} " - f"in the layout\n{mosaic!r}") - ax = self.add_subplot( - gs[slc], **{ - 'label': str(name), - **subplot_kw, - **per_subplot_kw.get(name, {}) - } - ) - output[name] = ax - elif method == 'nested': - nested_mosaic = arg - j, k = key - # recursively add the nested mosaic - rows, cols = nested_mosaic.shape - nested_output = _do_layout( - gs[j, k].subgridspec(rows, cols), - nested_mosaic, - *_identify_keys_and_nested(nested_mosaic) - ) - overlap = set(output) & set(nested_output) - if overlap: - raise ValueError( - f"There are duplicate keys {overlap} " - f"between the outer layout\n{mosaic!r}\n" - f"and the nested layout\n{nested_mosaic}" - ) - output.update(nested_output) - else: - raise RuntimeError("This should never happen") - return output - - mosaic = _make_array(mosaic) - rows, cols = mosaic.shape - gs = self.add_gridspec(rows, cols, **gridspec_kw) - ret = _do_layout(gs, mosaic, *_identify_keys_and_nested(mosaic)) - ax0 = next(iter(ret.values())) - for ax in ret.values(): - if sharex: - ax.sharex(ax0) - ax._label_outer_xaxis(check_patch=True) - if sharey: - ax.sharey(ax0) - ax._label_outer_yaxis(check_patch=True) - if extra := set(per_subplot_kw) - set(ret): - raise ValueError( - f"The keys {extra} are in *per_subplot_kw* " - "but not in the mosaic." - ) - return ret - - def _set_artist_props(self, a): - if a != self: - a.set_figure(self) - a.stale_callback = _stale_figure_callback - a.set_transform(self.transSubfigure) - - -@_docstring.interpd -class SubFigure(FigureBase): - """ - Logical figure that can be placed inside a figure. - - Typically instantiated using `.Figure.add_subfigure` or - `.SubFigure.add_subfigure`, or `.SubFigure.subfigures`. A subfigure has - the same methods as a figure except for those particularly tied to the size - or dpi of the figure, and is confined to a prescribed region of the figure. - For example the following puts two subfigures side-by-side:: - - fig = plt.figure() - sfigs = fig.subfigures(1, 2) - axsL = sfigs[0].subplots(1, 2) - axsR = sfigs[1].subplots(2, 1) - - See :doc:`/gallery/subplots_axes_and_figures/subfigures` - """ - callbacks = _api.deprecated( - "3.6", alternative=("the 'resize_event' signal in " - "Figure.canvas.callbacks") - )(property(lambda self: self._fig_callbacks)) - - def __init__(self, parent, subplotspec, *, - facecolor=None, - edgecolor=None, - linewidth=0.0, - frameon=None, - **kwargs): - """ - Parameters - ---------- - parent : `.Figure` or `.SubFigure` - Figure or subfigure that contains the SubFigure. SubFigures - can be nested. - - subplotspec : `.gridspec.SubplotSpec` - Defines the region in a parent gridspec where the subfigure will - be placed. - - facecolor : default: :rc:`figure.facecolor` - The figure patch face color. - - edgecolor : default: :rc:`figure.edgecolor` - The figure patch edge color. - - linewidth : float - The linewidth of the frame (i.e. the edge linewidth of the figure - patch). - - frameon : bool, default: :rc:`figure.frameon` - If ``False``, suppress drawing the figure background patch. - - Other Parameters - ---------------- - **kwargs : `.SubFigure` properties, optional - - %(SubFigure:kwdoc)s - """ - super().__init__(**kwargs) - if facecolor is None: - facecolor = mpl.rcParams['figure.facecolor'] - if edgecolor is None: - edgecolor = mpl.rcParams['figure.edgecolor'] - if frameon is None: - frameon = mpl.rcParams['figure.frameon'] - - self._subplotspec = subplotspec - self._parent = parent - self.figure = parent.figure - self._fig_callbacks = parent._fig_callbacks - - # subfigures use the parent axstack - self._axstack = parent._axstack - self.subplotpars = parent.subplotpars - self.dpi_scale_trans = parent.dpi_scale_trans - self._axobservers = parent._axobservers - self.canvas = parent.canvas - self.transFigure = parent.transFigure - self.bbox_relative = None - self._redo_transform_rel_fig() - self.figbbox = self._parent.figbbox - self.bbox = TransformedBbox(self.bbox_relative, - self._parent.transSubfigure) - self.transSubfigure = BboxTransformTo(self.bbox) - - self.patch = Rectangle( - xy=(0, 0), width=1, height=1, visible=frameon, - facecolor=facecolor, edgecolor=edgecolor, linewidth=linewidth, - # Don't let the figure patch influence bbox calculation. - in_layout=False, transform=self.transSubfigure) - self._set_artist_props(self.patch) - self.patch.set_antialiased(False) - - @property - def dpi(self): - return self._parent.dpi - - @dpi.setter - def dpi(self, value): - self._parent.dpi = value - - def get_dpi(self): - """ - Return the resolution of the parent figure in dots-per-inch as a float. - """ - return self._parent.dpi - - def set_dpi(self, val): - """ - Set the resolution of parent figure in dots-per-inch. - - Parameters - ---------- - val : float - """ - self._parent.dpi = val - self.stale = True - - def _get_renderer(self): - return self._parent._get_renderer() - - def _redo_transform_rel_fig(self, bbox=None): - """ - Make the transSubfigure bbox relative to Figure transform. - - Parameters - ---------- - bbox : bbox or None - If not None, then the bbox is used for relative bounding box. - Otherwise, it is calculated from the subplotspec. - """ - if bbox is not None: - self.bbox_relative.p0 = bbox.p0 - self.bbox_relative.p1 = bbox.p1 - return - # need to figure out *where* this subplotspec is. - gs = self._subplotspec.get_gridspec() - wr = np.asarray(gs.get_width_ratios()) - hr = np.asarray(gs.get_height_ratios()) - dx = wr[self._subplotspec.colspan].sum() / wr.sum() - dy = hr[self._subplotspec.rowspan].sum() / hr.sum() - x0 = wr[:self._subplotspec.colspan.start].sum() / wr.sum() - y0 = 1 - hr[:self._subplotspec.rowspan.stop].sum() / hr.sum() - if self.bbox_relative is None: - self.bbox_relative = Bbox.from_bounds(x0, y0, dx, dy) - else: - self.bbox_relative.p0 = (x0, y0) - self.bbox_relative.p1 = (x0 + dx, y0 + dy) - - def get_constrained_layout(self): - """ - Return whether constrained layout is being used. - - See :doc:`/tutorials/intermediate/constrainedlayout_guide`. - """ - return self._parent.get_constrained_layout() - - def get_constrained_layout_pads(self, relative=False): - """ - Get padding for ``constrained_layout``. - - Returns a list of ``w_pad, h_pad`` in inches and - ``wspace`` and ``hspace`` as fractions of the subplot. - - See :doc:`/tutorials/intermediate/constrainedlayout_guide`. - - Parameters - ---------- - relative : bool - If `True`, then convert from inches to figure relative. - """ - return self._parent.get_constrained_layout_pads(relative=relative) - - def get_layout_engine(self): - return self._parent.get_layout_engine() - - @property - def axes(self): - """ - List of Axes in the SubFigure. You can access and modify the Axes - in the SubFigure through this list. - - Modifying this list has no effect. Instead, use `~.SubFigure.add_axes`, - `~.SubFigure.add_subplot` or `~.SubFigure.delaxes` to add or remove an - Axes. - - Note: The `.SubFigure.axes` property and `~.SubFigure.get_axes` method - are equivalent. - """ - return self._localaxes[:] - - get_axes = axes.fget - - def draw(self, renderer): - # docstring inherited - - # draw the figure bounding box, perhaps none for white figure - if not self.get_visible(): - return - - artists = self._get_draw_artists(renderer) - - try: - renderer.open_group('subfigure', gid=self.get_gid()) - self.patch.draw(renderer) - mimage._draw_list_compositing_images( - renderer, self, artists, self.figure.suppressComposite) - for sfig in self.subfigs: - sfig.draw(renderer) - renderer.close_group('subfigure') - - finally: - self.stale = False - - -@_docstring.interpd -class Figure(FigureBase): - """ - The top level container for all the plot elements. - - Attributes - ---------- - patch - The `.Rectangle` instance representing the figure background patch. - - suppressComposite - For multiple images, the figure will make composite images - depending on the renderer option_image_nocomposite function. If - *suppressComposite* is a boolean, this will override the renderer. - """ - # Remove the self._fig_callbacks properties on figure and subfigure - # after the deprecation expires. - callbacks = _api.deprecated( - "3.6", alternative=("the 'resize_event' signal in " - "Figure.canvas.callbacks") - )(property(lambda self: self._fig_callbacks)) - - def __str__(self): - return "Figure(%gx%g)" % tuple(self.bbox.size) - - def __repr__(self): - return "<{clsname} size {h:g}x{w:g} with {naxes} Axes>".format( - clsname=self.__class__.__name__, - h=self.bbox.size[0], w=self.bbox.size[1], - naxes=len(self.axes), - ) - - @_api.make_keyword_only("3.6", "facecolor") - def __init__(self, - figsize=None, - dpi=None, - facecolor=None, - edgecolor=None, - linewidth=0.0, - frameon=None, - subplotpars=None, # rc figure.subplot.* - tight_layout=None, # rc figure.autolayout - constrained_layout=None, # rc figure.constrained_layout.use - *, - layout=None, - **kwargs - ): - """ - Parameters - ---------- - figsize : 2-tuple of floats, default: :rc:`figure.figsize` - Figure dimension ``(width, height)`` in inches. - - dpi : float, default: :rc:`figure.dpi` - Dots per inch. - - facecolor : default: :rc:`figure.facecolor` - The figure patch facecolor. - - edgecolor : default: :rc:`figure.edgecolor` - The figure patch edge color. - - linewidth : float - The linewidth of the frame (i.e. the edge linewidth of the figure - patch). - - frameon : bool, default: :rc:`figure.frameon` - If ``False``, suppress drawing the figure background patch. - - subplotpars : `SubplotParams` - Subplot parameters. If not given, the default subplot - parameters :rc:`figure.subplot.*` are used. - - tight_layout : bool or dict, default: :rc:`figure.autolayout` - Whether to use the tight layout mechanism. See `.set_tight_layout`. - - .. admonition:: Discouraged - - The use of this parameter is discouraged. Please use - ``layout='tight'`` instead for the common case of - ``tight_layout=True`` and use `.set_tight_layout` otherwise. - - constrained_layout : bool, default: :rc:`figure.constrained_layout.use` - This is equal to ``layout='constrained'``. - - .. admonition:: Discouraged - - The use of this parameter is discouraged. Please use - ``layout='constrained'`` instead. - - layout : {'constrained', 'compressed', 'tight', 'none', `.LayoutEngine`, \ -None}, default: None - The layout mechanism for positioning of plot elements to avoid - overlapping Axes decorations (labels, ticks, etc). Note that - layout managers can have significant performance penalties. - - - 'constrained': The constrained layout solver adjusts axes sizes - to avoid overlapping axes decorations. Can handle complex plot - layouts and colorbars, and is thus recommended. - - See :doc:`/tutorials/intermediate/constrainedlayout_guide` - for examples. - - - 'compressed': uses the same algorithm as 'constrained', but - removes extra space between fixed-aspect-ratio Axes. Best for - simple grids of axes. - - - 'tight': Use the tight layout mechanism. This is a relatively - simple algorithm that adjusts the subplot parameters so that - decorations do not overlap. See `.Figure.set_tight_layout` for - further details. - - - 'none': Do not use a layout engine. - - - A `.LayoutEngine` instance. Builtin layout classes are - `.ConstrainedLayoutEngine` and `.TightLayoutEngine`, more easily - accessible by 'constrained' and 'tight'. Passing an instance - allows third parties to provide their own layout engine. - - If not given, fall back to using the parameters *tight_layout* and - *constrained_layout*, including their config defaults - :rc:`figure.autolayout` and :rc:`figure.constrained_layout.use`. - - Other Parameters - ---------------- - **kwargs : `.Figure` properties, optional - - %(Figure:kwdoc)s - """ - super().__init__(**kwargs) - self._layout_engine = None - - if layout is not None: - if (tight_layout is not None): - _api.warn_external( - "The Figure parameters 'layout' and 'tight_layout' cannot " - "be used together. Please use 'layout' only.") - if (constrained_layout is not None): - _api.warn_external( - "The Figure parameters 'layout' and 'constrained_layout' " - "cannot be used together. Please use 'layout' only.") - self.set_layout_engine(layout=layout) - elif tight_layout is not None: - if constrained_layout is not None: - _api.warn_external( - "The Figure parameters 'tight_layout' and " - "'constrained_layout' cannot be used together. Please use " - "'layout' parameter") - self.set_layout_engine(layout='tight') - if isinstance(tight_layout, dict): - self.get_layout_engine().set(**tight_layout) - elif constrained_layout is not None: - if isinstance(constrained_layout, dict): - self.set_layout_engine(layout='constrained') - self.get_layout_engine().set(**constrained_layout) - elif constrained_layout: - self.set_layout_engine(layout='constrained') - - else: - # everything is None, so use default: - self.set_layout_engine(layout=layout) - - self._fig_callbacks = cbook.CallbackRegistry(signals=["dpi_changed"]) - # Callbacks traditionally associated with the canvas (and exposed with - # a proxy property), but that actually need to be on the figure for - # pickling. - self._canvas_callbacks = cbook.CallbackRegistry( - signals=FigureCanvasBase.events) - connect = self._canvas_callbacks._connect_picklable - self._mouse_key_ids = [ - connect('key_press_event', backend_bases._key_handler), - connect('key_release_event', backend_bases._key_handler), - connect('key_release_event', backend_bases._key_handler), - connect('button_press_event', backend_bases._mouse_handler), - connect('button_release_event', backend_bases._mouse_handler), - connect('scroll_event', backend_bases._mouse_handler), - connect('motion_notify_event', backend_bases._mouse_handler), - ] - self._button_pick_id = connect('button_press_event', self.pick) - self._scroll_pick_id = connect('scroll_event', self.pick) - - if figsize is None: - figsize = mpl.rcParams['figure.figsize'] - if dpi is None: - dpi = mpl.rcParams['figure.dpi'] - if facecolor is None: - facecolor = mpl.rcParams['figure.facecolor'] - if edgecolor is None: - edgecolor = mpl.rcParams['figure.edgecolor'] - if frameon is None: - frameon = mpl.rcParams['figure.frameon'] - - if not np.isfinite(figsize).all() or (np.array(figsize) < 0).any(): - raise ValueError('figure size must be positive finite not ' - f'{figsize}') - self.bbox_inches = Bbox.from_bounds(0, 0, *figsize) - - self.dpi_scale_trans = Affine2D().scale(dpi) - # do not use property as it will trigger - self._dpi = dpi - self.bbox = TransformedBbox(self.bbox_inches, self.dpi_scale_trans) - self.figbbox = self.bbox - self.transFigure = BboxTransformTo(self.bbox) - self.transSubfigure = self.transFigure - - self.patch = Rectangle( - xy=(0, 0), width=1, height=1, visible=frameon, - facecolor=facecolor, edgecolor=edgecolor, linewidth=linewidth, - # Don't let the figure patch influence bbox calculation. - in_layout=False) - self._set_artist_props(self.patch) - self.patch.set_antialiased(False) - - FigureCanvasBase(self) # Set self.canvas. - - if subplotpars is None: - subplotpars = SubplotParams() - - self.subplotpars = subplotpars - - self._axstack = _AxesStack() # track all figure axes and current axes - self.clear() - - def pick(self, mouseevent): - if not self.canvas.widgetlock.locked(): - super().pick(mouseevent) - - def _check_layout_engines_compat(self, old, new): - """ - Helper for set_layout engine - - If the figure has used the old engine and added a colorbar then the - value of colorbar_gridspec must be the same on the new engine. - """ - if old is None or new is None: - return True - if old.colorbar_gridspec == new.colorbar_gridspec: - return True - # colorbar layout different, so check if any colorbars are on the - # figure... - for ax in self.axes: - if hasattr(ax, '_colorbar'): - # colorbars list themselves as a colorbar. - return False - return True - - def set_layout_engine(self, layout=None, **kwargs): - """ - Set the layout engine for this figure. - - Parameters - ---------- - layout: {'constrained', 'compressed', 'tight', 'none'} or \ -`LayoutEngine` or None - - - 'constrained' will use `~.ConstrainedLayoutEngine` - - 'compressed' will also use `~.ConstrainedLayoutEngine`, but with - a correction that attempts to make a good layout for fixed-aspect - ratio Axes. - - 'tight' uses `~.TightLayoutEngine` - - 'none' removes layout engine. - - If `None`, the behavior is controlled by :rc:`figure.autolayout` - (which if `True` behaves as if 'tight' was passed) and - :rc:`figure.constrained_layout.use` (which if `True` behaves as if - 'constrained' was passed). If both are `True`, - :rc:`figure.autolayout` takes priority. - - Users and libraries can define their own layout engines and pass - the instance directly as well. - - kwargs: dict - The keyword arguments are passed to the layout engine to set things - like padding and margin sizes. Only used if *layout* is a string. - - """ - if layout is None: - if mpl.rcParams['figure.autolayout']: - layout = 'tight' - elif mpl.rcParams['figure.constrained_layout.use']: - layout = 'constrained' - else: - self._layout_engine = None - return - if layout == 'tight': - new_layout_engine = TightLayoutEngine(**kwargs) - elif layout == 'constrained': - new_layout_engine = ConstrainedLayoutEngine(**kwargs) - elif layout == 'compressed': - new_layout_engine = ConstrainedLayoutEngine(compress=True, - **kwargs) - elif layout == 'none': - if self._layout_engine is not None: - new_layout_engine = PlaceHolderLayoutEngine( - self._layout_engine.adjust_compatible, - self._layout_engine.colorbar_gridspec - ) - else: - new_layout_engine = None - elif isinstance(layout, LayoutEngine): - new_layout_engine = layout - else: - raise ValueError(f"Invalid value for 'layout': {layout!r}") - - if self._check_layout_engines_compat(self._layout_engine, - new_layout_engine): - self._layout_engine = new_layout_engine - else: - raise RuntimeError('Colorbar layout of new layout engine not ' - 'compatible with old engine, and a colorbar ' - 'has been created. Engine not changed.') - - def get_layout_engine(self): - return self._layout_engine - - # TODO: I'd like to dynamically add the _repr_html_ method - # to the figure in the right context, but then IPython doesn't - # use it, for some reason. - - def _repr_html_(self): - # We can't use "isinstance" here, because then we'd end up importing - # webagg unconditionally. - if 'WebAgg' in type(self.canvas).__name__: - from matplotlib.backends import backend_webagg - return backend_webagg.ipython_inline_display(self) - - def show(self, warn=True): - """ - If using a GUI backend with pyplot, display the figure window. - - If the figure was not created using `~.pyplot.figure`, it will lack - a `~.backend_bases.FigureManagerBase`, and this method will raise an - AttributeError. - - .. warning:: - - This does not manage an GUI event loop. Consequently, the figure - may only be shown briefly or not shown at all if you or your - environment are not managing an event loop. - - Use cases for `.Figure.show` include running this from a GUI - application (where there is persistently an event loop running) or - from a shell, like IPython, that install an input hook to allow the - interactive shell to accept input while the figure is also being - shown and interactive. Some, but not all, GUI toolkits will - register an input hook on import. See :ref:`cp_integration` for - more details. - - If you're in a shell without input hook integration or executing a - python script, you should use `matplotlib.pyplot.show` with - ``block=True`` instead, which takes care of starting and running - the event loop for you. - - Parameters - ---------- - warn : bool, default: True - If ``True`` and we are not running headless (i.e. on Linux with an - unset DISPLAY), issue warning when called on a non-GUI backend. - - """ - if self.canvas.manager is None: - raise AttributeError( - "Figure.show works only for figures managed by pyplot, " - "normally created by pyplot.figure()") - try: - self.canvas.manager.show() - except NonGuiException as exc: - if warn: - _api.warn_external(str(exc)) - - @property - def axes(self): - """ - List of Axes in the Figure. You can access and modify the Axes in the - Figure through this list. - - Do not modify the list itself. Instead, use `~Figure.add_axes`, - `~.Figure.add_subplot` or `~.Figure.delaxes` to add or remove an Axes. - - Note: The `.Figure.axes` property and `~.Figure.get_axes` method are - equivalent. - """ - return self._axstack.as_list() - - get_axes = axes.fget - - def _get_renderer(self): - if hasattr(self.canvas, 'get_renderer'): - return self.canvas.get_renderer() - else: - return _get_renderer(self) - - def _get_dpi(self): - return self._dpi - - def _set_dpi(self, dpi, forward=True): - """ - Parameters - ---------- - dpi : float - - forward : bool - Passed on to `~.Figure.set_size_inches` - """ - if dpi == self._dpi: - # We don't want to cause undue events in backends. - return - self._dpi = dpi - self.dpi_scale_trans.clear().scale(dpi) - w, h = self.get_size_inches() - self.set_size_inches(w, h, forward=forward) - self._fig_callbacks.process('dpi_changed', self) - - dpi = property(_get_dpi, _set_dpi, doc="The resolution in dots per inch.") - - def get_tight_layout(self): - """Return whether `.tight_layout` is called when drawing.""" - return isinstance(self.get_layout_engine(), TightLayoutEngine) - - @_api.deprecated("3.6", alternative="set_layout_engine", - pending=True) - def set_tight_layout(self, tight): - """ - [*Discouraged*] Set whether and how `.tight_layout` is called when - drawing. - - .. admonition:: Discouraged - - This method is discouraged in favor of `~.set_layout_engine`. - - Parameters - ---------- - tight : bool or dict with keys "pad", "w_pad", "h_pad", "rect" or None - If a bool, sets whether to call `.tight_layout` upon drawing. - If ``None``, use :rc:`figure.autolayout` instead. - If a dict, pass it as kwargs to `.tight_layout`, overriding the - default paddings. - """ - if tight is None: - tight = mpl.rcParams['figure.autolayout'] - _tight = 'tight' if bool(tight) else 'none' - _tight_parameters = tight if isinstance(tight, dict) else {} - self.set_layout_engine(_tight, **_tight_parameters) - self.stale = True - - def get_constrained_layout(self): - """ - Return whether constrained layout is being used. - - See :doc:`/tutorials/intermediate/constrainedlayout_guide`. - """ - return isinstance(self.get_layout_engine(), ConstrainedLayoutEngine) - - @_api.deprecated("3.6", alternative="set_layout_engine('constrained')", - pending=True) - def set_constrained_layout(self, constrained): - """ - [*Discouraged*] Set whether ``constrained_layout`` is used upon - drawing. - - If None, :rc:`figure.constrained_layout.use` value will be used. - - When providing a dict containing the keys ``w_pad``, ``h_pad`` - the default ``constrained_layout`` paddings will be - overridden. These pads are in inches and default to 3.0/72.0. - ``w_pad`` is the width padding and ``h_pad`` is the height padding. - - .. admonition:: Discouraged - - This method is discouraged in favor of `~.set_layout_engine`. - - Parameters - ---------- - constrained : bool or dict or None - """ - if constrained is None: - constrained = mpl.rcParams['figure.constrained_layout.use'] - _constrained = 'constrained' if bool(constrained) else 'none' - _parameters = constrained if isinstance(constrained, dict) else {} - self.set_layout_engine(_constrained, **_parameters) - self.stale = True - - @_api.deprecated( - "3.6", alternative="figure.get_layout_engine().set()", - pending=True) - def set_constrained_layout_pads(self, **kwargs): - """ - Set padding for ``constrained_layout``. - - Tip: The parameters can be passed from a dictionary by using - ``fig.set_constrained_layout(**pad_dict)``. - - See :doc:`/tutorials/intermediate/constrainedlayout_guide`. - - Parameters - ---------- - w_pad : float, default: :rc:`figure.constrained_layout.w_pad` - Width padding in inches. This is the pad around Axes - and is meant to make sure there is enough room for fonts to - look good. Defaults to 3 pts = 0.04167 inches - - h_pad : float, default: :rc:`figure.constrained_layout.h_pad` - Height padding in inches. Defaults to 3 pts. - - wspace : float, default: :rc:`figure.constrained_layout.wspace` - Width padding between subplots, expressed as a fraction of the - subplot width. The total padding ends up being w_pad + wspace. - - hspace : float, default: :rc:`figure.constrained_layout.hspace` - Height padding between subplots, expressed as a fraction of the - subplot width. The total padding ends up being h_pad + hspace. - - """ - if isinstance(self.get_layout_engine(), ConstrainedLayoutEngine): - self.get_layout_engine().set(**kwargs) - - @_api.deprecated("3.6", alternative="fig.get_layout_engine().get()", - pending=True) - def get_constrained_layout_pads(self, relative=False): - """ - Get padding for ``constrained_layout``. - - Returns a list of ``w_pad, h_pad`` in inches and - ``wspace`` and ``hspace`` as fractions of the subplot. - All values are None if ``constrained_layout`` is not used. - - See :doc:`/tutorials/intermediate/constrainedlayout_guide`. - - Parameters - ---------- - relative : bool - If `True`, then convert from inches to figure relative. - """ - if not isinstance(self.get_layout_engine(), ConstrainedLayoutEngine): - return None, None, None, None - info = self.get_layout_engine().get_info() - w_pad = info['w_pad'] - h_pad = info['h_pad'] - wspace = info['wspace'] - hspace = info['hspace'] - - if relative and (w_pad is not None or h_pad is not None): - renderer = self._get_renderer() - dpi = renderer.dpi - w_pad = w_pad * dpi / renderer.width - h_pad = h_pad * dpi / renderer.height - - return w_pad, h_pad, wspace, hspace - - def set_canvas(self, canvas): - """ - Set the canvas that contains the figure - - Parameters - ---------- - canvas : FigureCanvas - """ - self.canvas = canvas - - @_docstring.interpd - def figimage(self, X, xo=0, yo=0, alpha=None, norm=None, cmap=None, - vmin=None, vmax=None, origin=None, resize=False, **kwargs): - """ - Add a non-resampled image to the figure. - - The image is attached to the lower or upper left corner depending on - *origin*. - - Parameters - ---------- - X - The image data. This is an array of one of the following shapes: - - - (M, N): an image with scalar data. Color-mapping is controlled - by *cmap*, *norm*, *vmin*, and *vmax*. - - (M, N, 3): an image with RGB values (0-1 float or 0-255 int). - - (M, N, 4): an image with RGBA values (0-1 float or 0-255 int), - i.e. including transparency. - - xo, yo : int - The *x*/*y* image offset in pixels. - - alpha : None or float - The alpha blending value. - - %(cmap_doc)s - - This parameter is ignored if *X* is RGB(A). - - %(norm_doc)s - - This parameter is ignored if *X* is RGB(A). - - %(vmin_vmax_doc)s - - This parameter is ignored if *X* is RGB(A). - - origin : {'upper', 'lower'}, default: :rc:`image.origin` - Indicates where the [0, 0] index of the array is in the upper left - or lower left corner of the axes. - - resize : bool - If *True*, resize the figure to match the given image size. - - Returns - ------- - `matplotlib.image.FigureImage` - - Other Parameters - ---------------- - **kwargs - Additional kwargs are `.Artist` kwargs passed on to `.FigureImage`. - - Notes - ----- - figimage complements the Axes image (`~matplotlib.axes.Axes.imshow`) - which will be resampled to fit the current Axes. If you want - a resampled image to fill the entire figure, you can define an - `~matplotlib.axes.Axes` with extent [0, 0, 1, 1]. - - Examples - -------- - :: - - f = plt.figure() - nx = int(f.get_figwidth() * f.dpi) - ny = int(f.get_figheight() * f.dpi) - data = np.random.random((ny, nx)) - f.figimage(data) - plt.show() - """ - if resize: - dpi = self.get_dpi() - figsize = [x / dpi for x in (X.shape[1], X.shape[0])] - self.set_size_inches(figsize, forward=True) - - im = mimage.FigureImage(self, cmap=cmap, norm=norm, - offsetx=xo, offsety=yo, - origin=origin, **kwargs) - im.stale_callback = _stale_figure_callback - - im.set_array(X) - im.set_alpha(alpha) - if norm is None: - im.set_clim(vmin, vmax) - self.images.append(im) - im._remove_method = self.images.remove - self.stale = True - return im - - def set_size_inches(self, w, h=None, forward=True): - """ - Set the figure size in inches. - - Call signatures:: - - fig.set_size_inches(w, h) # OR - fig.set_size_inches((w, h)) - - Parameters - ---------- - w : (float, float) or float - Width and height in inches (if height not specified as a separate - argument) or width. - h : float - Height in inches. - forward : bool, default: True - If ``True``, the canvas size is automatically updated, e.g., - you can resize the figure window from the shell. - - See Also - -------- - matplotlib.figure.Figure.get_size_inches - matplotlib.figure.Figure.set_figwidth - matplotlib.figure.Figure.set_figheight - - Notes - ----- - To transform from pixels to inches divide by `Figure.dpi`. - """ - if h is None: # Got called with a single pair as argument. - w, h = w - size = np.array([w, h]) - if not np.isfinite(size).all() or (size < 0).any(): - raise ValueError(f'figure size must be positive finite not {size}') - self.bbox_inches.p1 = size - if forward: - manager = self.canvas.manager - if manager is not None: - manager.resize(*(size * self.dpi).astype(int)) - self.stale = True - - def get_size_inches(self): - """ - Return the current size of the figure in inches. - - Returns - ------- - ndarray - The size (width, height) of the figure in inches. - - See Also - -------- - matplotlib.figure.Figure.set_size_inches - matplotlib.figure.Figure.get_figwidth - matplotlib.figure.Figure.get_figheight - - Notes - ----- - The size in pixels can be obtained by multiplying with `Figure.dpi`. - """ - return np.array(self.bbox_inches.p1) - - def get_figwidth(self): - """Return the figure width in inches.""" - return self.bbox_inches.width - - def get_figheight(self): - """Return the figure height in inches.""" - return self.bbox_inches.height - - def get_dpi(self): - """Return the resolution in dots per inch as a float.""" - return self.dpi - - def set_dpi(self, val): - """ - Set the resolution of the figure in dots-per-inch. - - Parameters - ---------- - val : float - """ - self.dpi = val - self.stale = True - - def set_figwidth(self, val, forward=True): - """ - Set the width of the figure in inches. - - Parameters - ---------- - val : float - forward : bool - See `set_size_inches`. - - See Also - -------- - matplotlib.figure.Figure.set_figheight - matplotlib.figure.Figure.set_size_inches - """ - self.set_size_inches(val, self.get_figheight(), forward=forward) - - def set_figheight(self, val, forward=True): - """ - Set the height of the figure in inches. - - Parameters - ---------- - val : float - forward : bool - See `set_size_inches`. - - See Also - -------- - matplotlib.figure.Figure.set_figwidth - matplotlib.figure.Figure.set_size_inches - """ - self.set_size_inches(self.get_figwidth(), val, forward=forward) - - def clear(self, keep_observers=False): - # docstring inherited - super().clear(keep_observers=keep_observers) - # FigureBase.clear does not clear toolbars, as - # only Figure can have toolbars - toolbar = self.canvas.toolbar - if toolbar is not None: - toolbar.update() - - @_finalize_rasterization - @allow_rasterization - def draw(self, renderer): - # docstring inherited - - # draw the figure bounding box, perhaps none for white figure - if not self.get_visible(): - return - - artists = self._get_draw_artists(renderer) - try: - renderer.open_group('figure', gid=self.get_gid()) - if self.axes and self.get_layout_engine() is not None: - try: - self.get_layout_engine().execute(self) - except ValueError: - pass - # ValueError can occur when resizing a window. - - self.patch.draw(renderer) - mimage._draw_list_compositing_images( - renderer, self, artists, self.suppressComposite) - - for sfig in self.subfigs: - sfig.draw(renderer) - - renderer.close_group('figure') - finally: - self.stale = False - - DrawEvent("draw_event", self.canvas, renderer)._process() - - def draw_without_rendering(self): - """ - Draw the figure with no output. Useful to get the final size of - artists that require a draw before their size is known (e.g. text). - """ - renderer = _get_renderer(self) - with renderer._draw_disabled(): - self.draw(renderer) - - def draw_artist(self, a): - """ - Draw `.Artist` *a* only. - """ - a.draw(self.canvas.get_renderer()) - - def __getstate__(self): - state = super().__getstate__() - - # The canvas cannot currently be pickled, but this has the benefit - # of meaning that a figure can be detached from one canvas, and - # re-attached to another. - state.pop("canvas") - - # discard any changes to the dpi due to pixel ratio changes - state["_dpi"] = state.get('_original_dpi', state['_dpi']) - - # add version information to the state - state['__mpl_version__'] = mpl.__version__ - - # check whether the figure manager (if any) is registered with pyplot - from matplotlib import _pylab_helpers - if self.canvas.manager in _pylab_helpers.Gcf.figs.values(): - state['_restore_to_pylab'] = True - return state - - def __setstate__(self, state): - version = state.pop('__mpl_version__') - restore_to_pylab = state.pop('_restore_to_pylab', False) - - if version != mpl.__version__: - _api.warn_external( - f"This figure was saved with matplotlib version {version} and " - f"is unlikely to function correctly.") - - self.__dict__ = state - - # re-initialise some of the unstored state information - FigureCanvasBase(self) # Set self.canvas. - - if restore_to_pylab: - # lazy import to avoid circularity - import matplotlib.pyplot as plt - import matplotlib._pylab_helpers as pylab_helpers - allnums = plt.get_fignums() - num = max(allnums) + 1 if allnums else 1 - backend = plt._get_backend_mod() - mgr = backend.new_figure_manager_given_figure(num, self) - pylab_helpers.Gcf._set_new_active_manager(mgr) - plt.draw_if_interactive() - - self.stale = True - - def add_axobserver(self, func): - """Whenever the Axes state change, ``func(self)`` will be called.""" - # Connect a wrapper lambda and not func itself, to avoid it being - # weakref-collected. - self._axobservers.connect("_axes_change_event", lambda arg: func(arg)) - - def savefig(self, fname, *, transparent=None, **kwargs): - """ - Save the current figure. - - Call signature:: - - savefig(fname, *, dpi='figure', format=None, metadata=None, - bbox_inches=None, pad_inches=0.1, - facecolor='auto', edgecolor='auto', - backend=None, **kwargs - ) - - The available output formats depend on the backend being used. - - Parameters - ---------- - fname : str or path-like or binary file-like - A path, or a Python file-like object, or - possibly some backend-dependent object such as - `matplotlib.backends.backend_pdf.PdfPages`. - - If *format* is set, it determines the output format, and the file - is saved as *fname*. Note that *fname* is used verbatim, and there - is no attempt to make the extension, if any, of *fname* match - *format*, and no extension is appended. - - If *format* is not set, then the format is inferred from the - extension of *fname*, if there is one. If *format* is not - set and *fname* has no extension, then the file is saved with - :rc:`savefig.format` and the appropriate extension is appended to - *fname*. - - Other Parameters - ---------------- - dpi : float or 'figure', default: :rc:`savefig.dpi` - The resolution in dots per inch. If 'figure', use the figure's - dpi value. - - format : str - The file format, e.g. 'png', 'pdf', 'svg', ... The behavior when - this is unset is documented under *fname*. - - metadata : dict, optional - Key/value pairs to store in the image metadata. The supported keys - and defaults depend on the image format and backend: - - - 'png' with Agg backend: See the parameter ``metadata`` of - `~.FigureCanvasAgg.print_png`. - - 'pdf' with pdf backend: See the parameter ``metadata`` of - `~.backend_pdf.PdfPages`. - - 'svg' with svg backend: See the parameter ``metadata`` of - `~.FigureCanvasSVG.print_svg`. - - 'eps' and 'ps' with PS backend: Only 'Creator' is supported. - - bbox_inches : str or `.Bbox`, default: :rc:`savefig.bbox` - Bounding box in inches: only the given portion of the figure is - saved. If 'tight', try to figure out the tight bbox of the figure. - - pad_inches : float, default: :rc:`savefig.pad_inches` - Amount of padding around the figure when bbox_inches is 'tight'. - - facecolor : color or 'auto', default: :rc:`savefig.facecolor` - The facecolor of the figure. If 'auto', use the current figure - facecolor. - - edgecolor : color or 'auto', default: :rc:`savefig.edgecolor` - The edgecolor of the figure. If 'auto', use the current figure - edgecolor. - - backend : str, optional - Use a non-default backend to render the file, e.g. to render a - png file with the "cairo" backend rather than the default "agg", - or a pdf file with the "pgf" backend rather than the default - "pdf". Note that the default backend is normally sufficient. See - :ref:`the-builtin-backends` for a list of valid backends for each - file format. Custom backends can be referenced as "module://...". - - orientation : {'landscape', 'portrait'} - Currently only supported by the postscript backend. - - papertype : str - One of 'letter', 'legal', 'executive', 'ledger', 'a0' through - 'a10', 'b0' through 'b10'. Only supported for postscript - output. - - transparent : bool - If *True*, the Axes patches will all be transparent; the - Figure patch will also be transparent unless *facecolor* - and/or *edgecolor* are specified via kwargs. - - If *False* has no effect and the color of the Axes and - Figure patches are unchanged (unless the Figure patch - is specified via the *facecolor* and/or *edgecolor* keyword - arguments in which case those colors are used). - - The transparency of these patches will be restored to their - original values upon exit of this function. - - This is useful, for example, for displaying - a plot on top of a colored background on a web page. - - bbox_extra_artists : list of `~matplotlib.artist.Artist`, optional - A list of extra artists that will be considered when the - tight bbox is calculated. - - pil_kwargs : dict, optional - Additional keyword arguments that are passed to - `PIL.Image.Image.save` when saving the figure. - - """ - - kwargs.setdefault('dpi', mpl.rcParams['savefig.dpi']) - if transparent is None: - transparent = mpl.rcParams['savefig.transparent'] - - with ExitStack() as stack: - if transparent: - kwargs.setdefault('facecolor', 'none') - kwargs.setdefault('edgecolor', 'none') - for ax in self.axes: - stack.enter_context( - ax.patch._cm_set(facecolor='none', edgecolor='none')) - - self.canvas.print_figure(fname, **kwargs) - - def ginput(self, n=1, timeout=30, show_clicks=True, - mouse_add=MouseButton.LEFT, - mouse_pop=MouseButton.RIGHT, - mouse_stop=MouseButton.MIDDLE): - """ - Blocking call to interact with a figure. - - Wait until the user clicks *n* times on the figure, and return the - coordinates of each click in a list. - - There are three possible interactions: - - - Add a point. - - Remove the most recently added point. - - Stop the interaction and return the points added so far. - - The actions are assigned to mouse buttons via the arguments - *mouse_add*, *mouse_pop* and *mouse_stop*. - - Parameters - ---------- - n : int, default: 1 - Number of mouse clicks to accumulate. If negative, accumulate - clicks until the input is terminated manually. - timeout : float, default: 30 seconds - Number of seconds to wait before timing out. If zero or negative - will never time out. - show_clicks : bool, default: True - If True, show a red cross at the location of each click. - mouse_add : `.MouseButton` or None, default: `.MouseButton.LEFT` - Mouse button used to add points. - mouse_pop : `.MouseButton` or None, default: `.MouseButton.RIGHT` - Mouse button used to remove the most recently added point. - mouse_stop : `.MouseButton` or None, default: `.MouseButton.MIDDLE` - Mouse button used to stop input. - - Returns - ------- - list of tuples - A list of the clicked (x, y) coordinates. - - Notes - ----- - The keyboard can also be used to select points in case your mouse - does not have one or more of the buttons. The delete and backspace - keys act like right-clicking (i.e., remove last point), the enter key - terminates input and any other key (not already used by the window - manager) selects a point. - """ - clicks = [] - marks = [] - - def handler(event): - is_button = event.name == "button_press_event" - is_key = event.name == "key_press_event" - # Quit (even if not in infinite mode; this is consistent with - # MATLAB and sometimes quite useful, but will require the user to - # test how many points were actually returned before using data). - if (is_button and event.button == mouse_stop - or is_key and event.key in ["escape", "enter"]): - self.canvas.stop_event_loop() - # Pop last click. - elif (is_button and event.button == mouse_pop - or is_key and event.key in ["backspace", "delete"]): - if clicks: - clicks.pop() - if show_clicks: - marks.pop().remove() - self.canvas.draw() - # Add new click. - elif (is_button and event.button == mouse_add - # On macOS/gtk, some keys return None. - or is_key and event.key is not None): - if event.inaxes: - clicks.append((event.xdata, event.ydata)) - _log.info("input %i: %f, %f", - len(clicks), event.xdata, event.ydata) - if show_clicks: - line = mpl.lines.Line2D([event.xdata], [event.ydata], - marker="+", color="r") - event.inaxes.add_line(line) - marks.append(line) - self.canvas.draw() - if len(clicks) == n and n > 0: - self.canvas.stop_event_loop() - - _blocking_input.blocking_input_loop( - self, ["button_press_event", "key_press_event"], timeout, handler) - - # Cleanup. - for mark in marks: - mark.remove() - self.canvas.draw() - - return clicks - - def waitforbuttonpress(self, timeout=-1): - """ - Blocking call to interact with the figure. - - Wait for user input and return True if a key was pressed, False if a - mouse button was pressed and None if no input was given within - *timeout* seconds. Negative values deactivate *timeout*. - """ - event = None - - def handler(ev): - nonlocal event - event = ev - self.canvas.stop_event_loop() - - _blocking_input.blocking_input_loop( - self, ["button_press_event", "key_press_event"], timeout, handler) - - return None if event is None else event.name == "key_press_event" - - @_api.deprecated("3.6", alternative="figure.get_layout_engine().execute()") - def execute_constrained_layout(self, renderer=None): - """ - Use ``layoutgrid`` to determine pos positions within Axes. - - See also `.set_constrained_layout_pads`. - - Returns - ------- - layoutgrid : private debugging object - """ - if not isinstance(self.get_layout_engine(), ConstrainedLayoutEngine): - return None - return self.get_layout_engine().execute(self) - - def tight_layout(self, *, pad=1.08, h_pad=None, w_pad=None, rect=None): - """ - Adjust the padding between and around subplots. - - To exclude an artist on the Axes from the bounding box calculation - that determines the subplot parameters (i.e. legend, or annotation), - set ``a.set_in_layout(False)`` for that artist. - - Parameters - ---------- - pad : float, default: 1.08 - Padding between the figure edge and the edges of subplots, - as a fraction of the font size. - h_pad, w_pad : float, default: *pad* - Padding (height/width) between edges of adjacent subplots, - as a fraction of the font size. - rect : tuple (left, bottom, right, top), default: (0, 0, 1, 1) - A rectangle in normalized figure coordinates into which the whole - subplots area (including labels) will fit. - - See Also - -------- - .Figure.set_layout_engine - .pyplot.tight_layout - """ - # note that here we do not permanently set the figures engine to - # tight_layout but rather just perform the layout in place and remove - # any previous engines. - engine = TightLayoutEngine(pad=pad, h_pad=h_pad, w_pad=w_pad, - rect=rect) - try: - previous_engine = self.get_layout_engine() - self.set_layout_engine(engine) - engine.execute(self) - if not isinstance(previous_engine, TightLayoutEngine) \ - and previous_engine is not None: - _api.warn_external('The figure layout has changed to tight') - finally: - self.set_layout_engine(None) - - -def figaspect(arg): - """ - Calculate the width and height for a figure with a specified aspect ratio. - - While the height is taken from :rc:`figure.figsize`, the width is - adjusted to match the desired aspect ratio. Additionally, it is ensured - that the width is in the range [4., 16.] and the height is in the range - [2., 16.]. If necessary, the default height is adjusted to ensure this. - - Parameters - ---------- - arg : float or 2D array - If a float, this defines the aspect ratio (i.e. the ratio height / - width). - In case of an array the aspect ratio is number of rows / number of - columns, so that the array could be fitted in the figure undistorted. - - Returns - ------- - width, height : float - The figure size in inches. - - Notes - ----- - If you want to create an Axes within the figure, that still preserves the - aspect ratio, be sure to create it with equal width and height. See - examples below. - - Thanks to Fernando Perez for this function. - - Examples - -------- - Make a figure twice as tall as it is wide:: - - w, h = figaspect(2.) - fig = Figure(figsize=(w, h)) - ax = fig.add_axes([0.1, 0.1, 0.8, 0.8]) - ax.imshow(A, **kwargs) - - Make a figure with the proper aspect for an array:: - - A = rand(5, 3) - w, h = figaspect(A) - fig = Figure(figsize=(w, h)) - ax = fig.add_axes([0.1, 0.1, 0.8, 0.8]) - ax.imshow(A, **kwargs) - """ - - isarray = hasattr(arg, 'shape') and not np.isscalar(arg) - - # min/max sizes to respect when autoscaling. If John likes the idea, they - # could become rc parameters, for now they're hardwired. - figsize_min = np.array((4.0, 2.0)) # min length for width/height - figsize_max = np.array((16.0, 16.0)) # max length for width/height - - # Extract the aspect ratio of the array - if isarray: - nr, nc = arg.shape[:2] - arr_ratio = nr / nc - else: - arr_ratio = arg - - # Height of user figure defaults - fig_height = mpl.rcParams['figure.figsize'][1] - - # New size for the figure, keeping the aspect ratio of the caller - newsize = np.array((fig_height / arr_ratio, fig_height)) - - # Sanity checks, don't drop either dimension below figsize_min - newsize /= min(1.0, *(newsize / figsize_min)) - - # Avoid humongous windows as well - newsize /= max(1.0, *(newsize / figsize_max)) - - # Finally, if we have a really funky aspect ratio, break it but respect - # the min/max dimensions (we don't want figures 10 feet tall!) - newsize = np.clip(newsize, figsize_min, figsize_max) - return newsize diff --git a/spaces/lc202301/ChuanhuChatGPT/chatgpt - windows.bat b/spaces/lc202301/ChuanhuChatGPT/chatgpt - windows.bat deleted file mode 100644 index 0b78fdc3a559abd692e3a9e9af5e482124d13a99..0000000000000000000000000000000000000000 --- a/spaces/lc202301/ChuanhuChatGPT/chatgpt - windows.bat +++ /dev/null @@ -1,14 +0,0 @@ -@echo off -echo Opening ChuanhuChatGPT... - -REM Open powershell via bat -start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py" - -REM The web page can be accessed with delayed start http://127.0.0.1:7860/ -ping -n 5 127.0.0.1>nul - -REM access chargpt via your default browser -start "" "http://127.0.0.1:7860/" - - -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). \ No newline at end of file diff --git a/spaces/leurez/moss/src/plugins/index.ts b/spaces/leurez/moss/src/plugins/index.ts deleted file mode 100644 index 18e9c1a59b4e1e4446141aa625cd65ecf44499fd..0000000000000000000000000000000000000000 --- a/spaces/leurez/moss/src/plugins/index.ts +++ /dev/null @@ -1,4 +0,0 @@ -import setupAssets from './assets' -import setupScrollbarStyle from './scrollbarStyle' - -export { setupAssets, setupScrollbarStyle } diff --git a/spaces/lightli/bingo-newbing/next.config.js b/spaces/lightli/bingo-newbing/next.config.js deleted file mode 100644 index 0e6ccd7fbc91d0459eaaff3e968ce0556789c605..0000000000000000000000000000000000000000 --- a/spaces/lightli/bingo-newbing/next.config.js +++ /dev/null @@ -1,38 +0,0 @@ -/** @type {import('next').NextConfig} */ -const nextConfig = { - // output: 'export', - // assetPrefix: '.', - webpack: (config, { isServer }) => { - if (!isServer) { - config.resolve = { - ...config.resolve, - fallback: { - 'bufferutil': false, - 'utf-8-validate': false, - http: false, - https: false, - stream: false, - // fixes proxy-agent dependencies - net: false, - dns: false, - tls: false, - assert: false, - // fixes next-i18next dependencies - path: false, - fs: false, - // fixes mapbox dependencies - events: false, - // fixes sentry dependencies - process: false - } - }; - } - config.module.exprContextCritical = false; - - return config; - }, -} - -module.exports = (...args) => { - return nextConfig -} diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/FraternityX - The Stuffing [Gay].zip.md b/spaces/lincquiQcaudo/Top-20-Diffusion/FraternityX - The Stuffing [Gay].zip.md deleted file mode 100644 index 837221f764f7328e81cc26f4bb28146d26f92399..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/FraternityX - The Stuffing [Gay].zip.md +++ /dev/null @@ -1,6 +0,0 @@ -

    FraternityX - The Stuffing [Gay].zip


    Download ››››› https://bytlly.com/2uGxEV



    -
    -[Gay].zip hit porno.. 27 Jan.... FraternityX writes: "Usually we have no idea what day it is. But Thanksgiving is a time for stuffing and breeding hot college boy ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Greenpois0n 10 Rc6 Download Windows VERIFIED.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Greenpois0n 10 Rc6 Download Windows VERIFIED.md deleted file mode 100644 index 98fa3308aab865796cab6d4c96b53efb40e1928d..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Greenpois0n 10 Rc6 Download Windows VERIFIED.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Greenpois0n 10 Rc6 Download Windows


    DOWNLOAD ––– https://bytlly.com/2uGwng



    -
    -Feb 26th, 2011 10:31 pm ... But that might because I didnt install all the useless stuff I had before. ... 4.3 GM has been jailbroken but is just a semi-tethered jailbreak and is only available for the iPhone 4 and Mac users only. ... I want to jb my phone but I can't seem to get on Greenpois0n to download RC6.1. 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/lnyan/stablediffusion-infinity/PyPatchMatch/csrc/inpaint.h b/spaces/lnyan/stablediffusion-infinity/PyPatchMatch/csrc/inpaint.h deleted file mode 100644 index a59b1d347ea5fe92976a4fda10a820d6508f51da..0000000000000000000000000000000000000000 --- a/spaces/lnyan/stablediffusion-infinity/PyPatchMatch/csrc/inpaint.h +++ /dev/null @@ -1,27 +0,0 @@ -#pragma once - -#include - -#include "masked_image.h" -#include "nnf.h" - -class Inpainting { -public: - Inpainting(cv::Mat image, cv::Mat mask, const PatchDistanceMetric *metric); - Inpainting(cv::Mat image, cv::Mat mask, cv::Mat global_mask, const PatchDistanceMetric *metric); - cv::Mat run(bool verbose = false, bool verbose_visualize = false, unsigned int random_seed = 1212); - -private: - void _initialize_pyramid(void); - MaskedImage _expectation_maximization(MaskedImage source, MaskedImage target, int level, bool verbose); - void _expectation_step(const NearestNeighborField &nnf, bool source2target, cv::Mat &vote, const MaskedImage &source, bool upscaled); - void _maximization_step(MaskedImage &target, const cv::Mat &vote); - - MaskedImage m_initial; - std::vector m_pyramid; - - NearestNeighborField m_source2target; - NearestNeighborField m_target2source; - const PatchDistanceMetric *m_distance_metric; -}; - diff --git a/spaces/logasja/LowKey/align/__init__.py b/spaces/logasja/LowKey/align/__init__.py deleted file mode 100644 index 8b137891791fe96927ad78e64b0aad7bded08bdc..0000000000000000000000000000000000000000 --- a/spaces/logasja/LowKey/align/__init__.py +++ /dev/null @@ -1 +0,0 @@ - diff --git a/spaces/ltgoslo/ssa-perin/mtool/data/validate/Makefile b/spaces/ltgoslo/ssa-perin/mtool/data/validate/Makefile deleted file mode 100644 index cc51fc2a40085a8749681bf024459d7bf2852485..0000000000000000000000000000000000000000 --- a/spaces/ltgoslo/ssa-perin/mtool/data/validate/Makefile +++ /dev/null @@ -1,5 +0,0 @@ -.PHONY: all - -all: - time python3 -u ../../main.py --trace --trace --validate all \ - --read mrp eds/wsj.mrp $@ 2>&1 | tee eds.wsj.log diff --git a/spaces/luost26/DiffAb/diffab/utils/transforms/_base.py b/spaces/luost26/DiffAb/diffab/utils/transforms/_base.py deleted file mode 100644 index 0694aae80271b0a8e990e209daa1822393988f1b..0000000000000000000000000000000000000000 --- a/spaces/luost26/DiffAb/diffab/utils/transforms/_base.py +++ /dev/null @@ -1,56 +0,0 @@ -import copy -import torch -from torchvision.transforms import Compose - - -_TRANSFORM_DICT = {} - - -def register_transform(name): - def decorator(cls): - _TRANSFORM_DICT[name] = cls - return cls - return decorator - - -def get_transform(cfg): - if cfg is None or len(cfg) == 0: - return None - tfms = [] - for t_dict in cfg: - t_dict = copy.deepcopy(t_dict) - cls = _TRANSFORM_DICT[t_dict.pop('type')] - tfms.append(cls(**t_dict)) - return Compose(tfms) - - -def _index_select(v, index, n): - if isinstance(v, torch.Tensor) and v.size(0) == n: - return v[index] - elif isinstance(v, list) and len(v) == n: - return [v[i] for i in index] - else: - return v - - -def _index_select_data(data, index): - return { - k: _index_select(v, index, data['aa'].size(0)) - for k, v in data.items() - } - - -def _mask_select(v, mask): - if isinstance(v, torch.Tensor) and v.size(0) == mask.size(0): - return v[mask] - elif isinstance(v, list) and len(v) == mask.size(0): - return [v[i] for i, b in enumerate(mask) if b] - else: - return v - - -def _mask_select_data(data, mask): - return { - k: _mask_select(v, mask) - for k, v in data.items() - } diff --git a/spaces/luwujie/QQsign/unidbg-fetch-qsign/bin/unidbg-fetch-qsign.bat b/spaces/luwujie/QQsign/unidbg-fetch-qsign/bin/unidbg-fetch-qsign.bat deleted file mode 100644 index 4e44bab8aa65d16e35e935f1273de2e98ce80cf9..0000000000000000000000000000000000000000 --- a/spaces/luwujie/QQsign/unidbg-fetch-qsign/bin/unidbg-fetch-qsign.bat +++ /dev/null @@ -1,89 +0,0 @@ -@rem -@rem Copyright 2015 the original author or authors. -@rem -@rem Licensed under the Apache License, Version 2.0 (the "License"); -@rem you may not use this file except in compliance with the License. -@rem You may obtain a copy of the License at -@rem -@rem https://www.apache.org/licenses/LICENSE-2.0 -@rem -@rem Unless required by applicable law or agreed to in writing, software -@rem distributed under the License is distributed on an "AS IS" BASIS, -@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -@rem See the License for the specific language governing permissions and -@rem limitations under the License. -@rem - -@if "%DEBUG%" == "" @echo off -@rem ########################################################################## -@rem -@rem unidbg-fetch-qsign startup script for Windows -@rem -@rem ########################################################################## - -@rem Set local scope for the variables with windows NT shell -if "%OS%"=="Windows_NT" setlocal - -set DIRNAME=%~dp0 -if "%DIRNAME%" == "" set DIRNAME=. -set APP_BASE_NAME=%~n0 -set APP_HOME=%DIRNAME%.. - -@rem Resolve any "." and ".." in APP_HOME to make it shorter. -for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi - -@rem Add default JVM options here. You can also use JAVA_OPTS and UNIDBG_FETCH_QSIGN_OPTS to pass JVM options to this script. -set DEFAULT_JVM_OPTS= - -@rem Find java.exe -if defined JAVA_HOME goto findJavaFromJavaHome - -set JAVA_EXE=java.exe -%JAVA_EXE% -version >NUL 2>&1 -if "%ERRORLEVEL%" == "0" goto execute - -echo. -echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. - -goto fail - -:findJavaFromJavaHome -set JAVA_HOME=%JAVA_HOME:"=% -set JAVA_EXE=%JAVA_HOME%/bin/java.exe - -if exist "%JAVA_EXE%" goto execute - -echo. -echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME% -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. - -goto fail - -:execute -@rem Setup the command line - -set CLASSPATH=%APP_HOME%\lib\unidbg-fetch-qsign-1.1.0.jar;%APP_HOME%\lib\unidbg-fix.jar;%APP_HOME%\lib\ktor-server-content-negotiation-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-json-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-netty-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-host-common-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-core-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-events-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-websockets-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-cio-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-network-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-utils-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-io-jvm-2.3.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk8-1.8.22.jar;%APP_HOME%\lib\kotlinx-serialization-json-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-protobuf-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-core-jvm-1.5.1.jar;%APP_HOME%\lib\logback-classic-1.2.11.jar;%APP_HOME%\lib\kotlinx-coroutines-jdk8-1.7.1.jar;%APP_HOME%\lib\kotlinx-coroutines-core-jvm-1.7.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk7-1.8.22.jar;%APP_HOME%\lib\kotlin-reflect-1.8.10.jar;%APP_HOME%\lib\kotlin-stdlib-1.8.22.jar;%APP_HOME%\lib\slf4j-api-1.7.36.jar;%APP_HOME%\lib\kotlin-stdlib-common-1.8.22.jar;%APP_HOME%\lib\config-1.4.2.jar;%APP_HOME%\lib\jansi-2.4.0.jar;%APP_HOME%\lib\netty-codec-http2-4.1.92.Final.jar;%APP_HOME%\lib\alpn-api-1.1.3.v20160715.jar;%APP_HOME%\lib\netty-transport-native-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-epoll-4.1.92.Final.jar;%APP_HOME%\lib\logback-core-1.2.11.jar;%APP_HOME%\lib\annotations-23.0.0.jar;%APP_HOME%\lib\netty-codec-http-4.1.92.Final.jar;%APP_HOME%\lib\netty-handler-4.1.92.Final.jar;%APP_HOME%\lib\netty-codec-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-epoll-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-unix-common-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-4.1.92.Final.jar;%APP_HOME%\lib\netty-buffer-4.1.92.Final.jar;%APP_HOME%\lib\netty-resolver-4.1.92.Final.jar;%APP_HOME%\lib\netty-common-4.1.92.Final.jar - - -@rem Execute unidbg-fetch-qsign -"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %UNIDBG_FETCH_QSIGN_OPTS% -classpath "%CLASSPATH%" MainKt %* - -:end -@rem End local scope for the variables with windows NT shell -if "%ERRORLEVEL%"=="0" goto mainEnd - -:fail -rem Set variable UNIDBG_FETCH_QSIGN_EXIT_CONSOLE if you need the _script_ return code instead of -rem the _cmd.exe /c_ return code! -if not "" == "%UNIDBG_FETCH_QSIGN_EXIT_CONSOLE%" exit 1 -exit /b 1 - -:mainEnd -if "%OS%"=="Windows_NT" endlocal - -:omega diff --git a/spaces/luxuedong/lxd/src/components/ui/alert-dialog.tsx b/spaces/luxuedong/lxd/src/components/ui/alert-dialog.tsx deleted file mode 100644 index 17fec4d16510328deacc1416569173c97761ef72..0000000000000000000000000000000000000000 --- a/spaces/luxuedong/lxd/src/components/ui/alert-dialog.tsx +++ /dev/null @@ -1,150 +0,0 @@ -'use client' - -import * as React from 'react' -import * as AlertDialogPrimitive from '@radix-ui/react-alert-dialog' - -import { cn } from '@/lib/utils' -import { buttonVariants } from '@/components/ui/button' - -const AlertDialog = AlertDialogPrimitive.Root - -const AlertDialogTrigger = AlertDialogPrimitive.Trigger - -const AlertDialogPortal = ({ - className, - children, - ...props -}: AlertDialogPrimitive.AlertDialogPortalProps) => ( - -
    - {children} -
    -
    -) -AlertDialogPortal.displayName = AlertDialogPrimitive.Portal.displayName - -const AlertDialogOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - -)) -AlertDialogOverlay.displayName = AlertDialogPrimitive.Overlay.displayName - -const AlertDialogContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - - - - -)) -AlertDialogContent.displayName = AlertDialogPrimitive.Content.displayName - -const AlertDialogHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
    -) -AlertDialogHeader.displayName = 'AlertDialogHeader' - -const AlertDialogFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
    -) -AlertDialogFooter.displayName = 'AlertDialogFooter' - -const AlertDialogTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogTitle.displayName = AlertDialogPrimitive.Title.displayName - -const AlertDialogDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogDescription.displayName = - AlertDialogPrimitive.Description.displayName - -const AlertDialogAction = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogAction.displayName = AlertDialogPrimitive.Action.displayName - -const AlertDialogCancel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogCancel.displayName = AlertDialogPrimitive.Cancel.displayName - -export { - AlertDialog, - AlertDialogTrigger, - AlertDialogContent, - AlertDialogHeader, - AlertDialogFooter, - AlertDialogTitle, - AlertDialogDescription, - AlertDialogAction, - AlertDialogCancel -} diff --git a/spaces/m3hrdadfi/gpt2-persian-qa/app.py b/spaces/m3hrdadfi/gpt2-persian-qa/app.py deleted file mode 100644 index 83d4e95abfe6ce9979d53f52afbae150e379d359..0000000000000000000000000000000000000000 --- a/spaces/m3hrdadfi/gpt2-persian-qa/app.py +++ /dev/null @@ -1,192 +0,0 @@ -import streamlit as st -import torch -from transformers import pipeline, set_seed -from transformers import AutoTokenizer -from transformers import GPT2LMHeadModel -from mtranslate import translate -import random - -import meta -from normalizer import normalize -from utils import ( - remote_css, - local_css, - load_json -) - -EXAMPLES = load_json("examples.json") -CK = "متن" -QK = "پرسش" -AK = "پاسخ" - - -class TextGeneration: - def __init__(self): - self.debug = False - self.dummy_output = "مخلوطی از ایتالیایی و انگلیسی" - self.tokenizer = None - self.model = None - self.model_name_or_path = "m3hrdadfi/gpt2-persian-qa" - self.length_margin = 100 - set_seed(42) - - def load(self): - if not self.debug: - self.tokenizer = AutoTokenizer.from_pretrained(self.model_name_or_path) - self.model = GPT2LMHeadModel.from_pretrained(self.model_name_or_path) - - def generate(self, prompt, generation_kwargs): - - if not self.debug: - input_ids = self.tokenizer([prompt], return_tensors="pt")["input_ids"] - max_length = len(input_ids[0]) + self.length_margin - generation_kwargs["max_length"] = max_length - - generated = self.model.generate( - input_ids, - **generation_kwargs, - )[0] - - answer = self.tokenizer.decode(generated, skip_special_tokens=True) - found = answer.find(f"{AK}: ") - if not found: - return "" - - answer = [a.strip() for a in answer[found:].split(f"{AK}: ") if a.strip()] - answer = answer[0] if len(answer) > 0 else "" - return answer - - return self.dummy_output - - -@st.cache(allow_output_mutation=True) -def load_text_generator(): - generator = TextGeneration() - generator.load() - return generator - - -def main(): - st.set_page_config( - page_title="GPT2 QA - Persian", - page_icon="⁉️", - layout="wide", - initial_sidebar_state="expanded" - ) - remote_css("https://cdn.jsdelivr.net/gh/rastikerdar/vazir-font/dist/font-face.css") - local_css("assets/rtl.css") - generator = load_text_generator() - - st.sidebar.markdown(meta.SIDEBAR_INFO) - num_beams = st.sidebar.slider( - label='Number of Beam', - help="Number of beams for beam search", - min_value=4, - max_value=15, - value=5, - step=1 - ) - repetition_penalty = st.sidebar.slider( - label='Repetition Penalty', - help="The parameter for repetition penalty", - min_value=1.0, - max_value=10.0, - value=1.0, - step=0.1 - ) - length_penalty = st.sidebar.slider( - label='Length Penalty', - help="Exponential penalty to the length", - min_value=1.0, - max_value=10.0, - value=1.0, - step=0.1 - ) - early_stopping = st.sidebar.selectbox( - label='Early Stopping ?', - options=(True, False), - help="Whether to stop the beam search when at least num_beams sentences are finished per batch or not", - ) - translated = st.sidebar.selectbox( - label='Translation ?', - options=(True, False), - help="Will translate the result in English", - ) - generation_kwargs = { - "num_beams": num_beams, - "early_stopping": early_stopping, - "repetition_penalty": repetition_penalty, - "length_penalty": length_penalty, - } - - st.markdown(meta.HEADER_INFO) - prompts = [e["title"] for e in EXAMPLES] + ["Custom"] - prompt = st.selectbox('Examples', prompts, index=len(prompts) - 1) - - if prompt == "Custom": - prompt_box = { - "context": meta.C_PROMPT_BOX, - "question": meta.Q_PROMPT_BOX, - "answer": meta.A_PROMPT_BOX, - } - else: - prompt_box = next(e for e in EXAMPLES if e["title"] == prompt) - - context = st.text_area("Enter context", prompt_box["context"], height=250) - question = st.text_area("Enter question", prompt_box["question"], height=100) - answer = "پاسخ درست: " + prompt_box["answer"] - st.markdown( - f'

    ' - f'{answer}' - f'

    ', - unsafe_allow_html=True - ) - if translated: - translated_answer = translate(answer, "en", "fa") - st.markdown( - f'

    ' - f'{translated_answer}' - f'

    ', - unsafe_allow_html=True - ) - generation_kwargs_ph = st.empty() - - if st.button("Find the answer 🔎 "): - with st.spinner(text="Searching ..."): - generation_kwargs_ph.markdown(", ".join([f"`{k}`: {v}" for k, v in generation_kwargs.items()])) - context = normalize(context) - question = normalize(question) - - if context and question: - text = f"{context} {QK}: {question} {AK}:" - generated_answer = generator.generate(text, generation_kwargs) - generated_answer = f"{AK}: {generated_answer}".strip() - context = f"{CK}: {context}".strip() - question = f"{QK}: {question}".strip() - - st.markdown( - f'

    ' - f'{context}

    ' - f'{question}

    ' - f'{generated_answer} ' - f'

    ', - unsafe_allow_html=True - ) - - if translated: - translated_context = translate(context, "en", "fa") - translated_question = translate(question, "en", "fa") - translated_generated_answer = translate(generated_answer, "en", "fa") - - st.markdown( - f'

    ' - f'{translated_context}

    ' - f'{translated_question}

    ' - f'{translated_generated_answer}' - f'

    ', - unsafe_allow_html=True - ) - - -if __name__ == '__main__': - main() diff --git a/spaces/ma-xu/LIVE/thrust/thrust/mismatch.h b/spaces/ma-xu/LIVE/thrust/thrust/mismatch.h deleted file mode 100644 index 413db84f56361af4d028b756e267f655a591b34c..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/mismatch.h +++ /dev/null @@ -1,260 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file mismatch.h - * \brief Search for differences between ranges - */ - -#pragma once - -#include -#include -#include - -namespace thrust -{ - - -/*! \addtogroup algorithms - */ - -/*! \addtogroup searching - * \ingroup algorithms - * \{ - */ - - -/*! \p mismatch finds the first position where the two ranges [first1, last1) - * and [first2, first2 + (last1 - first1)) differ. The two versions of - * \p mismatch use different tests for whether elements differ. - * - * This version of \p mismatch finds the first iterator \c i in [first1, last1) - * such that *i == *(first2 + (i - first1)) is \c false. The return value is a - * \c pair whose first element is \c i and whose second element is *(first2 + (i - first1)). - * If no such iterator \c i exists, the return value is a \c pair whose first element - * is \c last1 and whose second element is *(first2 + (last1 - first1)). - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first1 The beginning of the first sequence. - * \param last1 The end of the first sequence. - * \param first2 The beginning of the second sequence. - * \return The first position where the sequences differ. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 is a model of Input Iterator - * and \p InputIterator1's \c value_type is equality comparable to \p InputIterator2's \c value_type. - * \tparam InputIterator2 is a model of Input Iterator. - * - * \code - * #include - * #include - * #include - * ... - * thrust::device_vector vec1(4); - * thrust::device_vector vec2(4); - * - * vec1[0] = 0; vec2[0] = 0; - * vec1[1] = 5; vec2[1] = 5; - * vec1[2] = 3; vec2[2] = 8; - * vec1[3] = 7; vec2[3] = 7; - * - * typedef thrust::device_vector::iterator Iterator; - * thrust::pair result; - * - * result = thrust::mismatch(thrust::device, vec1.begin(), vec1.end(), vec2.begin()); - * - * // result.first is vec1.begin() + 2 - * // result.second is vec2.begin() + 2 - * \endcode - * - * \see find - * \see find_if - */ -template -__host__ __device__ -thrust::pair mismatch(const thrust::detail::execution_policy_base &exec, - InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2); - - -/*! \p mismatch finds the first position where the two ranges [first1, last1) - * and [first2, first2 + (last1 - first1)) differ. The two versions of - * \p mismatch use different tests for whether elements differ. - * - * This version of \p mismatch finds the first iterator \c i in [first1, last1) - * such that *i == *(first2 + (i - first1)) is \c false. The return value is a - * \c pair whose first element is \c i and whose second element is *(first2 + (i - first1)). - * If no such iterator \c i exists, the return value is a \c pair whose first element - * is \c last1 and whose second element is *(first2 + (last1 - first1)). - * - * \param first1 The beginning of the first sequence. - * \param last1 The end of the first sequence. - * \param first2 The beginning of the second sequence. - * \return The first position where the sequences differ. - * - * \tparam InputIterator1 is a model of Input Iterator - * and \p InputIterator1's \c value_type is equality comparable to \p InputIterator2's \c value_type. - * \tparam InputIterator2 is a model of Input Iterator. - * - * \code - * #include - * #include - * ... - * thrust::device_vector vec1(4); - * thrust::device_vector vec2(4); - * - * vec1[0] = 0; vec2[0] = 0; - * vec1[1] = 5; vec2[1] = 5; - * vec1[2] = 3; vec2[2] = 8; - * vec1[3] = 7; vec2[3] = 7; - * - * typedef thrust::device_vector::iterator Iterator; - * thrust::pair result; - * - * result = thrust::mismatch(vec1.begin(), vec1.end(), vec2.begin()); - * - * // result.first is vec1.begin() + 2 - * // result.second is vec2.begin() + 2 - * \endcode - * - * \see find - * \see find_if - */ -template -thrust::pair mismatch(InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2); - - -/*! \p mismatch finds the first position where the two ranges [first1, last1) - * and [first2, first2 + (last1 - first1)) differ. The two versions of - * \p mismatch use different tests for whether elements differ. - * - * This version of \p mismatch finds the first iterator \c i in [first1, last1) - * such that pred(\*i, \*(first2 + (i - first1)) is \c false. The return value is a - * \c pair whose first element is \c i and whose second element is *(first2 + (i - first1)). - * If no such iterator \c i exists, the return value is a \c pair whose first element is - * \c last1 and whose second element is *(first2 + (last1 - first1)). - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first1 The beginning of the first sequence. - * \param last1 The end of the first sequence. - * \param first2 The beginning of the second sequence. - * \param pred The binary predicate to compare elements. - * \return The first position where the sequences differ. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator1 is a model of Input Iterator. - * \tparam InputIterator2 is a model of Input Iterator. - * \tparam Predicate is a model of Input Iterator. - * - * \code - * #include - * #include - * #include - * ... - * thrust::device_vector vec1(4); - * thrust::device_vector vec2(4); - * - * vec1[0] = 0; vec2[0] = 0; - * vec1[1] = 5; vec2[1] = 5; - * vec1[2] = 3; vec2[2] = 8; - * vec1[3] = 7; vec2[3] = 7; - * - * typedef thrust::device_vector::iterator Iterator; - * thrust::pair result; - * - * result = thrust::mismatch(thrust::device, vec1.begin(), vec1.end(), vec2.begin(), thrust::equal_to()); - * - * // result.first is vec1.begin() + 2 - * // result.second is vec2.begin() + 2 - * \endcode - * - * \see find - * \see find_if - */ -template -__host__ __device__ -thrust::pair mismatch(const thrust::detail::execution_policy_base &exec, - InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - BinaryPredicate pred); - - -/*! \p mismatch finds the first position where the two ranges [first1, last1) - * and [first2, first2 + (last1 - first1)) differ. The two versions of - * \p mismatch use different tests for whether elements differ. - * - * This version of \p mismatch finds the first iterator \c i in [first1, last1) - * such that pred(\*i, \*(first2 + (i - first1)) is \c false. The return value is a - * \c pair whose first element is \c i and whose second element is *(first2 + (i - first1)). - * If no such iterator \c i exists, the return value is a \c pair whose first element is - * \c last1 and whose second element is *(first2 + (last1 - first1)). - * - * \param first1 The beginning of the first sequence. - * \param last1 The end of the first sequence. - * \param first2 The beginning of the second sequence. - * \param pred The binary predicate to compare elements. - * \return The first position where the sequences differ. - * - * \tparam InputIterator1 is a model of Input Iterator. - * \tparam InputIterator2 is a model of Input Iterator. - * \tparam Predicate is a model of Input Iterator. - * - * \code - * #include - * #include - * ... - * thrust::device_vector vec1(4); - * thrust::device_vector vec2(4); - * - * vec1[0] = 0; vec2[0] = 0; - * vec1[1] = 5; vec2[1] = 5; - * vec1[2] = 3; vec2[2] = 8; - * vec1[3] = 7; vec2[3] = 7; - * - * typedef thrust::device_vector::iterator Iterator; - * thrust::pair result; - * - * result = thrust::mismatch(vec1.begin(), vec1.end(), vec2.begin(), thrust::equal_to()); - * - * // result.first is vec1.begin() + 2 - * // result.second is vec2.begin() + 2 - * \endcode - * - * \see find - * \see find_if - */ -template -thrust::pair mismatch(InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - BinaryPredicate pred); - -/*! \} // end searching - */ - -} // end namespace thrust - -#include - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/omp/detail/sequence.h b/spaces/ma-xu/LIVE/thrust/thrust/system/omp/detail/sequence.h deleted file mode 100644 index c33b2d4333ce2ded0ffe73c23c20a80c5a35b928..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/omp/detail/sequence.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system inherits sequence -#include - diff --git a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/models/base_model.py b/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/models/base_model.py deleted file mode 100644 index 05c8d2e138f45367ff66dd99c5c7454ea217cd79..0000000000000000000000000000000000000000 --- a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/models/base_model.py +++ /dev/null @@ -1,380 +0,0 @@ -import os -import time -import torch -from collections import OrderedDict -from copy import deepcopy -from torch.nn.parallel import DataParallel, DistributedDataParallel - -from basicsr.models import lr_scheduler as lr_scheduler -from basicsr.utils import get_root_logger -from basicsr.utils.dist_util import master_only - - -class BaseModel(): - """Base model.""" - - def __init__(self, opt): - self.opt = opt - self.device = torch.device('cuda' if opt['num_gpu'] != 0 else 'cpu') - self.is_train = opt['is_train'] - self.schedulers = [] - self.optimizers = [] - - def feed_data(self, data): - pass - - def optimize_parameters(self): - pass - - def get_current_visuals(self): - pass - - def save(self, epoch, current_iter): - """Save networks and training state.""" - pass - - def validation(self, dataloader, current_iter, tb_logger, save_img=False): - """Validation function. - - Args: - dataloader (torch.utils.data.DataLoader): Validation dataloader. - current_iter (int): Current iteration. - tb_logger (tensorboard logger): Tensorboard logger. - save_img (bool): Whether to save images. Default: False. - """ - if self.opt['dist']: - self.dist_validation(dataloader, current_iter, tb_logger, save_img) - else: - self.nondist_validation(dataloader, current_iter, tb_logger, save_img) - - def _initialize_best_metric_results(self, dataset_name): - """Initialize the best metric results dict for recording the best metric value and iteration.""" - if hasattr(self, 'best_metric_results') and dataset_name in self.best_metric_results: - return - elif not hasattr(self, 'best_metric_results'): - self.best_metric_results = dict() - - # add a dataset record - record = dict() - for metric, content in self.opt['val']['metrics'].items(): - better = content.get('better', 'higher') - init_val = float('-inf') if better == 'higher' else float('inf') - record[metric] = dict(better=better, val=init_val, iter=-1) - self.best_metric_results[dataset_name] = record - - def _update_best_metric_result(self, dataset_name, metric, val, current_iter): - if self.best_metric_results[dataset_name][metric]['better'] == 'higher': - if val >= self.best_metric_results[dataset_name][metric]['val']: - self.best_metric_results[dataset_name][metric]['val'] = val - self.best_metric_results[dataset_name][metric]['iter'] = current_iter - else: - if val <= self.best_metric_results[dataset_name][metric]['val']: - self.best_metric_results[dataset_name][metric]['val'] = val - self.best_metric_results[dataset_name][metric]['iter'] = current_iter - - def model_ema(self, decay=0.999): - net_g = self.get_bare_model(self.net_g) - - net_g_params = dict(net_g.named_parameters()) - net_g_ema_params = dict(self.net_g_ema.named_parameters()) - - for k in net_g_ema_params.keys(): - net_g_ema_params[k].data.mul_(decay).add_(net_g_params[k].data, alpha=1 - decay) - - def get_current_log(self): - return self.log_dict - - def model_to_device(self, net): - """Model to device. It also warps models with DistributedDataParallel - or DataParallel. - - Args: - net (nn.Module) - """ - net = net.to(self.device) - if self.opt['dist']: - find_unused_parameters = self.opt.get('find_unused_parameters', False) - net = DistributedDataParallel( - net, device_ids=[torch.cuda.current_device()], find_unused_parameters=find_unused_parameters) - elif self.opt['num_gpu'] > 1: - net = DataParallel(net) - return net - - def get_optimizer(self, optim_type, params, lr, **kwargs): - if optim_type == 'Adam': - optimizer = torch.optim.Adam(params, lr, **kwargs) - else: - raise NotImplementedError(f'optimizer {optim_type} is not supperted yet.') - return optimizer - - def setup_schedulers(self): - """Set up schedulers.""" - train_opt = self.opt['train'] - scheduler_type = train_opt['scheduler'].pop('type') - if scheduler_type in ['MultiStepLR', 'MultiStepRestartLR']: - for optimizer in self.optimizers: - self.schedulers.append(lr_scheduler.MultiStepRestartLR(optimizer, **train_opt['scheduler'])) - elif scheduler_type == 'CosineAnnealingRestartLR': - for optimizer in self.optimizers: - self.schedulers.append(lr_scheduler.CosineAnnealingRestartLR(optimizer, **train_opt['scheduler'])) - else: - raise NotImplementedError(f'Scheduler {scheduler_type} is not implemented yet.') - - def get_bare_model(self, net): - """Get bare model, especially under wrapping with - DistributedDataParallel or DataParallel. - """ - if isinstance(net, (DataParallel, DistributedDataParallel)): - net = net.module - return net - - @master_only - def print_network(self, net): - """Print the str and parameter number of a network. - - Args: - net (nn.Module) - """ - if isinstance(net, (DataParallel, DistributedDataParallel)): - net_cls_str = f'{net.__class__.__name__} - {net.module.__class__.__name__}' - else: - net_cls_str = f'{net.__class__.__name__}' - - net = self.get_bare_model(net) - net_str = str(net) - net_params = sum(map(lambda x: x.numel(), net.parameters())) - - logger = get_root_logger() - logger.info(f'Network: {net_cls_str}, with parameters: {net_params:,d}') - logger.info(net_str) - - def _set_lr(self, lr_groups_l): - """Set learning rate for warm-up. - - Args: - lr_groups_l (list): List for lr_groups, each for an optimizer. - """ - for optimizer, lr_groups in zip(self.optimizers, lr_groups_l): - for param_group, lr in zip(optimizer.param_groups, lr_groups): - param_group['lr'] = lr - - def _get_init_lr(self): - """Get the initial lr, which is set by the scheduler. - """ - init_lr_groups_l = [] - for optimizer in self.optimizers: - init_lr_groups_l.append([v['initial_lr'] for v in optimizer.param_groups]) - return init_lr_groups_l - - def update_learning_rate(self, current_iter, warmup_iter=-1): - """Update learning rate. - - Args: - current_iter (int): Current iteration. - warmup_iter (int): Warm-up iter numbers. -1 for no warm-up. - Default: -1. - """ - if current_iter > 1: - for scheduler in self.schedulers: - scheduler.step() - # set up warm-up learning rate - if current_iter < warmup_iter: - # get initial lr for each group - init_lr_g_l = self._get_init_lr() - # modify warming-up learning rates - # currently only support linearly warm up - warm_up_lr_l = [] - for init_lr_g in init_lr_g_l: - warm_up_lr_l.append([v / warmup_iter * current_iter for v in init_lr_g]) - # set learning rate - self._set_lr(warm_up_lr_l) - - def get_current_learning_rate(self): - return [param_group['lr'] for param_group in self.optimizers[0].param_groups] - - @master_only - def save_network(self, net, net_label, current_iter, param_key='params'): - """Save networks. - - Args: - net (nn.Module | list[nn.Module]): Network(s) to be saved. - net_label (str): Network label. - current_iter (int): Current iter number. - param_key (str | list[str]): The parameter key(s) to save network. - Default: 'params'. - """ - if current_iter == -1: - current_iter = 'latest' - save_filename = f'{net_label}_{current_iter}.pth' - save_path = os.path.join(self.opt['path']['models'], save_filename) - - net = net if isinstance(net, list) else [net] - param_key = param_key if isinstance(param_key, list) else [param_key] - assert len(net) == len(param_key), 'The lengths of net and param_key should be the same.' - - save_dict = {} - for net_, param_key_ in zip(net, param_key): - net_ = self.get_bare_model(net_) - state_dict = net_.state_dict() - for key, param in state_dict.items(): - if key.startswith('module.'): # remove unnecessary 'module.' - key = key[7:] - state_dict[key] = param.cpu() - save_dict[param_key_] = state_dict - - # avoid occasional writing errors - retry = 3 - while retry > 0: - try: - torch.save(save_dict, save_path) - except Exception as e: - logger = get_root_logger() - logger.warning(f'Save model error: {e}, remaining retry times: {retry - 1}') - time.sleep(1) - else: - break - finally: - retry -= 1 - if retry == 0: - logger.warning(f'Still cannot save {save_path}. Just ignore it.') - # raise IOError(f'Cannot save {save_path}.') - - def _print_different_keys_loading(self, crt_net, load_net, strict=True): - """Print keys with different name or different size when loading models. - - 1. Print keys with different names. - 2. If strict=False, print the same key but with different tensor size. - It also ignore these keys with different sizes (not load). - - Args: - crt_net (torch model): Current network. - load_net (dict): Loaded network. - strict (bool): Whether strictly loaded. Default: True. - """ - crt_net = self.get_bare_model(crt_net) - crt_net = crt_net.state_dict() - crt_net_keys = set(crt_net.keys()) - load_net_keys = set(load_net.keys()) - - logger = get_root_logger() - if crt_net_keys != load_net_keys: - logger.warning('Current net - loaded net:') - for v in sorted(list(crt_net_keys - load_net_keys)): - logger.warning(f' {v}') - logger.warning('Loaded net - current net:') - for v in sorted(list(load_net_keys - crt_net_keys)): - logger.warning(f' {v}') - - # check the size for the same keys - if not strict: - common_keys = crt_net_keys & load_net_keys - for k in common_keys: - if crt_net[k].size() != load_net[k].size(): - logger.warning(f'Size different, ignore [{k}]: crt_net: ' - f'{crt_net[k].shape}; load_net: {load_net[k].shape}') - load_net[k + '.ignore'] = load_net.pop(k) - - def load_network(self, net, load_path, strict=True, param_key='params'): - """Load network. - - Args: - load_path (str): The path of networks to be loaded. - net (nn.Module): Network. - strict (bool): Whether strictly loaded. - param_key (str): The parameter key of loaded network. If set to - None, use the root 'path'. - Default: 'params'. - """ - logger = get_root_logger() - net = self.get_bare_model(net) - load_net = torch.load(load_path, map_location=lambda storage, loc: storage) - if param_key is not None: - if param_key not in load_net and 'params' in load_net: - param_key = 'params' - logger.info('Loading: params_ema does not exist, use params.') - load_net = load_net[param_key] - logger.info(f'Loading {net.__class__.__name__} model from {load_path}, with param key: [{param_key}].') - # remove unnecessary 'module.' - for k, v in deepcopy(load_net).items(): - if k.startswith('module.'): - load_net[k[7:]] = v - load_net.pop(k) - self._print_different_keys_loading(net, load_net, strict) - net.load_state_dict(load_net, strict=strict) - - @master_only - def save_training_state(self, epoch, current_iter): - """Save training states during training, which will be used for - resuming. - - Args: - epoch (int): Current epoch. - current_iter (int): Current iteration. - """ - if current_iter != -1: - state = {'epoch': epoch, 'iter': current_iter, 'optimizers': [], 'schedulers': []} - for o in self.optimizers: - state['optimizers'].append(o.state_dict()) - for s in self.schedulers: - state['schedulers'].append(s.state_dict()) - save_filename = f'{current_iter}.state' - save_path = os.path.join(self.opt['path']['training_states'], save_filename) - - # avoid occasional writing errors - retry = 3 - while retry > 0: - try: - torch.save(state, save_path) - except Exception as e: - logger = get_root_logger() - logger.warning(f'Save training state error: {e}, remaining retry times: {retry - 1}') - time.sleep(1) - else: - break - finally: - retry -= 1 - if retry == 0: - logger.warning(f'Still cannot save {save_path}. Just ignore it.') - # raise IOError(f'Cannot save {save_path}.') - - def resume_training(self, resume_state): - """Reload the optimizers and schedulers for resumed training. - - Args: - resume_state (dict): Resume state. - """ - resume_optimizers = resume_state['optimizers'] - resume_schedulers = resume_state['schedulers'] - assert len(resume_optimizers) == len(self.optimizers), 'Wrong lengths of optimizers' - assert len(resume_schedulers) == len(self.schedulers), 'Wrong lengths of schedulers' - for i, o in enumerate(resume_optimizers): - self.optimizers[i].load_state_dict(o) - for i, s in enumerate(resume_schedulers): - self.schedulers[i].load_state_dict(s) - - def reduce_loss_dict(self, loss_dict): - """reduce loss dict. - - In distributed training, it averages the losses among different GPUs . - - Args: - loss_dict (OrderedDict): Loss dict. - """ - with torch.no_grad(): - if self.opt['dist']: - keys = [] - losses = [] - for name, value in loss_dict.items(): - keys.append(name) - losses.append(value) - losses = torch.stack(losses, 0) - torch.distributed.reduce(losses, dst=0) - if self.opt['rank'] == 0: - losses /= self.opt['world_size'] - loss_dict = {key: loss for key, loss in zip(keys, losses)} - - log_dict = OrderedDict() - for name, value in loss_dict.items(): - log_dict[name] = value.mean().item() - - return log_dict diff --git a/spaces/manavisrani07/gradio-lipsync-wav2lip/wav2lip_models/__init__.py b/spaces/manavisrani07/gradio-lipsync-wav2lip/wav2lip_models/__init__.py deleted file mode 100644 index 4374370494b65f10b76c70a2d4f731c238cfa54c..0000000000000000000000000000000000000000 --- a/spaces/manavisrani07/gradio-lipsync-wav2lip/wav2lip_models/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .wav2lip import Wav2Lip, Wav2Lip_disc_qual -from .syncnet import SyncNet_color \ No newline at end of file diff --git a/spaces/manhkhanhUIT/BOPBTL/Dockerfile b/spaces/manhkhanhUIT/BOPBTL/Dockerfile deleted file mode 100644 index 8764e0011f8e0b937674005354ca957317c23fd4..0000000000000000000000000000000000000000 --- a/spaces/manhkhanhUIT/BOPBTL/Dockerfile +++ /dev/null @@ -1,43 +0,0 @@ -FROM nvidia/cuda:11.1-base-ubuntu20.04 - -RUN apt update && DEBIAN_FRONTEND=noninteractive apt install git bzip2 wget unzip python3-pip python3-dev cmake libgl1-mesa-dev python-is-python3 libgtk2.0-dev -yq -ADD . /app -WORKDIR /app -RUN cd Face_Enhancement/models/networks/ &&\ - git clone https://github.com/vacancy/Synchronized-BatchNorm-PyTorch &&\ - cp -rf Synchronized-BatchNorm-PyTorch/sync_batchnorm . &&\ - cd ../../../ - -RUN cd Global/detection_models &&\ - git clone https://github.com/vacancy/Synchronized-BatchNorm-PyTorch &&\ - cp -rf Synchronized-BatchNorm-PyTorch/sync_batchnorm . &&\ - cd ../../ - -RUN cd Face_Detection/ &&\ - wget http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2 &&\ - bzip2 -d shape_predictor_68_face_landmarks.dat.bz2 &&\ - cd ../ - -RUN cd Face_Enhancement/ &&\ - wget https://facevc.blob.core.windows.net/zhanbo/old_photo/pretrain/Face_Enhancement/checkpoints.zip &&\ - unzip checkpoints.zip &&\ - cd ../ &&\ - cd Global/ &&\ - wget https://facevc.blob.core.windows.net/zhanbo/old_photo/pretrain/Global/checkpoints.zip &&\ - unzip checkpoints.zip &&\ - rm -f checkpoints.zip &&\ - cd ../ - -RUN pip3 install numpy - -RUN pip3 install dlib - -RUN pip3 install -r requirements.txt - -RUN git clone https://github.com/NVlabs/SPADE.git - -RUN cd SPADE/ && pip3 install -r requirements.txt - -RUN cd .. - -CMD ["python3", "run.py"] diff --git a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/models/networks/Synchronized-BatchNorm-PyTorch/sync_batchnorm/comm.py b/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/models/networks/Synchronized-BatchNorm-PyTorch/sync_batchnorm/comm.py deleted file mode 100644 index 922f8c4a3adaa9b32fdcaef09583be03b0d7eb2b..0000000000000000000000000000000000000000 --- a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/models/networks/Synchronized-BatchNorm-PyTorch/sync_batchnorm/comm.py +++ /dev/null @@ -1,137 +0,0 @@ -# -*- coding: utf-8 -*- -# File : comm.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -import queue -import collections -import threading - -__all__ = ['FutureResult', 'SlavePipe', 'SyncMaster'] - - -class FutureResult(object): - """A thread-safe future implementation. Used only as one-to-one pipe.""" - - def __init__(self): - self._result = None - self._lock = threading.Lock() - self._cond = threading.Condition(self._lock) - - def put(self, result): - with self._lock: - assert self._result is None, 'Previous result has\'t been fetched.' - self._result = result - self._cond.notify() - - def get(self): - with self._lock: - if self._result is None: - self._cond.wait() - - res = self._result - self._result = None - return res - - -_MasterRegistry = collections.namedtuple('MasterRegistry', ['result']) -_SlavePipeBase = collections.namedtuple('_SlavePipeBase', ['identifier', 'queue', 'result']) - - -class SlavePipe(_SlavePipeBase): - """Pipe for master-slave communication.""" - - def run_slave(self, msg): - self.queue.put((self.identifier, msg)) - ret = self.result.get() - self.queue.put(True) - return ret - - -class SyncMaster(object): - """An abstract `SyncMaster` object. - - - During the replication, as the data parallel will trigger an callback of each module, all slave devices should - call `register(id)` and obtain an `SlavePipe` to communicate with the master. - - During the forward pass, master device invokes `run_master`, all messages from slave devices will be collected, - and passed to a registered callback. - - After receiving the messages, the master device should gather the information and determine to message passed - back to each slave devices. - """ - - def __init__(self, master_callback): - """ - - Args: - master_callback: a callback to be invoked after having collected messages from slave devices. - """ - self._master_callback = master_callback - self._queue = queue.Queue() - self._registry = collections.OrderedDict() - self._activated = False - - def __getstate__(self): - return {'master_callback': self._master_callback} - - def __setstate__(self, state): - self.__init__(state['master_callback']) - - def register_slave(self, identifier): - """ - Register an slave device. - - Args: - identifier: an identifier, usually is the device id. - - Returns: a `SlavePipe` object which can be used to communicate with the master device. - - """ - if self._activated: - assert self._queue.empty(), 'Queue is not clean before next initialization.' - self._activated = False - self._registry.clear() - future = FutureResult() - self._registry[identifier] = _MasterRegistry(future) - return SlavePipe(identifier, self._queue, future) - - def run_master(self, master_msg): - """ - Main entry for the master device in each forward pass. - The messages were first collected from each devices (including the master device), and then - an callback will be invoked to compute the message to be sent back to each devices - (including the master device). - - Args: - master_msg: the message that the master want to send to itself. This will be placed as the first - message when calling `master_callback`. For detailed usage, see `_SynchronizedBatchNorm` for an example. - - Returns: the message to be sent back to the master device. - - """ - self._activated = True - - intermediates = [(0, master_msg)] - for i in range(self.nr_slaves): - intermediates.append(self._queue.get()) - - results = self._master_callback(intermediates) - assert results[0][0] == 0, 'The first result should belongs to the master.' - - for i, res in results: - if i == 0: - continue - self._registry[i].result.put(res) - - for i in range(self.nr_slaves): - assert self._queue.get() is True - - return results[0][1] - - @property - def nr_slaves(self): - return len(self._registry) diff --git a/spaces/marlenezw/audio-driven-animations/MakeItTalk/facewarp/gen_puppet_utils.py b/spaces/marlenezw/audio-driven-animations/MakeItTalk/facewarp/gen_puppet_utils.py deleted file mode 100644 index ce6374d66fb4a1d90d3e2852fb9aa55382ec7e3d..0000000000000000000000000000000000000000 --- a/spaces/marlenezw/audio-driven-animations/MakeItTalk/facewarp/gen_puppet_utils.py +++ /dev/null @@ -1,287 +0,0 @@ -import numpy as np -import cv2 -import matplotlib.pyplot as plt -import os -import random - -def closest_node(xy, pts): - #search the list of nodes for the one closest to node, return the name - dist_2 = np.sqrt(np.sum((pts - np.array(xy).reshape((-1, 2)))**2, axis=1)) - if (dist_2[np.argmin(dist_2)] > 20): - return -1 - return np.argmin(dist_2) - - -def draw_landmarks(img, pts, pc=(0,0,255), radius=2, lc=(0,255,0), thickness=2): - - for i in range(0, 16): - cv2.line(img, (int(pts[i, 0]), int(pts[i, 1])), - (int(pts[i+1, 0]), int(pts[i+1, 1])), (0, 255, 0), thickness) - for i in range(17, 21): - cv2.line(img, (int(pts[i, 0]), int(pts[i, 1])), - (int(pts[i+1, 0]), int(pts[i+1, 1])), (255, 0, 0), thickness) - for i in range(22, 26): - cv2.line(img, (int(pts[i, 0]), int(pts[i, 1])), - (int(pts[i+1, 0]), int(pts[i+1, 1])), (255, 0, 0), thickness) - for i in range(27, 35): - cv2.line(img, (int(pts[i, 0]), int(pts[i, 1])), - (int(pts[i+1, 0]), int(pts[i+1, 1])), (255, 255, 0), thickness) - for i in range(36, 41): - cv2.line(img, (int(pts[i, 0]), int(pts[i, 1])), - (int(pts[i+1, 0]), int(pts[i+1, 1])), (255, 0, 255), thickness) - for i in range(42, 47): - cv2.line(img, (int(pts[i, 0]), int(pts[i, 1])), - (int(pts[i+1, 0]), int(pts[i+1, 1])), (255, 0, 255), thickness) - for i in range(48, 59): - cv2.line(img, (int(pts[i, 0]), int(pts[i, 1])), - (int(pts[i+1, 0]), int(pts[i+1, 1])), (255, 128, 0), thickness) - for i in range(60, 67): - cv2.line(img, (int(pts[i, 0]), int(pts[i, 1])), - (int(pts[i+1, 0]), int(pts[i+1, 1])), (255, 128, 128), thickness) - cv2.line(img, (int(pts[48, 0]), int(pts[48, 1])), - (int(pts[59, 0]), int(pts[59, 1])), (255, 128, 0), thickness) - cv2.line(img, (int(pts[60, 0]), int(pts[60, 1])), - (int(pts[67, 0]), int(pts[67, 1])), (255, 128, 128), thickness) - - for i in range(68): - cv2.circle(img, (int(pts[i, 0]), int(pts[i, 1])), radius, pc, -1) - - -def norm_anno(ROOT_DIR, CH, param=[0.75, 0.35, 0.6, 0.6], show=True): - - face_tmp = np.loadtxt(os.path.join(ROOT_DIR, CH + '_face_open_mouth.txt')) # .reshape(1, 204) - try: - face_tmp = face_tmp.reshape(68, 3) - except: - print('annotated face is not in correct size = [68 x 3]') - exit(0) - - scale = 1.6 / (face_tmp[0, 0] - face_tmp[16, 0]) - shift = - 0.5 * (face_tmp[0, 0:2] + face_tmp[16, 0:2]) - face_tmp[:, 0:2] = (face_tmp[:, 0:2] + shift) * scale - face_std = np.loadtxt(os.path.join(ROOT_DIR, 'STD_FACE_LANDMARKS.txt')) - face_std = face_std.reshape(68, 3) - - face_tmp[:, -1] = face_std[:, -1] - face_tmp[:, 0:2] = -face_tmp[:, 0:2] - np.savetxt(os.path.join(ROOT_DIR, CH + '_face_open_mouth_norm.txt'), face_tmp, fmt='%.4f') - np.savetxt(os.path.join(ROOT_DIR, CH + '_scale_shift.txt'), np.array([scale, shift[0], shift[1]]), fmt='%.10f') - - # Force the frame to close mouth - face_tmp[49:54, 1] = param[0] * face_tmp[49:54, 1] + (1-param[0]) * face_tmp[59:54:-1, 1] - face_tmp[59:54:-1, 1] = param[1] * face_tmp[49:54, 1] + (1-param[1]) * face_tmp[59:54:-1, 1] - face_tmp[61:64, 1] = param[2] * face_tmp[61:64, 1] + (1-param[2]) * face_tmp[67:64:-1, 1] - face_tmp[67:64:-1, 1] = param[3] * face_tmp[61:64, 1] + (1-param[3]) * face_tmp[67:64:-1, 1] - face_tmp[61:64, 0] = 0.6 * face_tmp[61:64, 0] + 0.4 * face_tmp[67:64:-1, 0] - face_tmp[67:64:-1, 0] = 0.6 * face_tmp[61:64, 0] + 0.4 * face_tmp[67:64:-1, 0] - - np.savetxt(os.path.join(ROOT_DIR, CH + '_face_close_mouth.txt'), face_tmp, fmt='%.4f') - - std_face_id = np.loadtxt(os.path.join(ROOT_DIR, CH + '_face_close_mouth.txt')) # .reshape(1, 204) - std_face_id = std_face_id.reshape(68, 3) - - def vis_landmark_on_plt(fl, x_offset=0.0, show_now=True): - def draw_curve(shape, idx_list, loop=False, x_offset=0.0, c=None): - for i in idx_list: - plt.plot((shape[i, 0] + x_offset, shape[i + 1, 0] + x_offset), (-shape[i, 1], -shape[i + 1, 1]), c=c) - if (loop): - plt.plot((shape[idx_list[0], 0] + x_offset, shape[idx_list[-1] + 1, 0] + x_offset), - (-shape[idx_list[0], 1], -shape[idx_list[-1] + 1, 1]), c=c) - - draw_curve(fl, list(range(0, 16)), x_offset=x_offset) # jaw - draw_curve(fl, list(range(17, 21)), x_offset=x_offset) # eye brow - draw_curve(fl, list(range(22, 26)), x_offset=x_offset) - draw_curve(fl, list(range(27, 35)), x_offset=x_offset) # nose - draw_curve(fl, list(range(36, 41)), loop=True, x_offset=x_offset) # eyes - draw_curve(fl, list(range(42, 47)), loop=True, x_offset=x_offset) - draw_curve(fl, list(range(48, 59)), loop=True, x_offset=x_offset, c='b') # mouth - draw_curve(fl, list(range(60, 67)), loop=True, x_offset=x_offset, c='r') - draw_curve(fl, list(range(60, 64)), loop=False, x_offset=x_offset, c='g') - - if (show_now): - plt.show() - - vis_landmark_on_plt(std_face_id, show_now=show) - - - -# Check if a point is inside a rectangle -def rect_contains(rect, point): - if point[0] < rect[0]: - return False - elif point[1] < rect[1]: - return False - elif point[0] > rect[2]: - return False - elif point[1] > rect[3]: - return False - return True - - -# Draw a point -def draw_point(img, p, color): - cv2.circle(img, p, 2, color, -1, cv2.LINE_AA, 0) - - -# Draw delaunay triangles -def draw_delaunay(img, subdiv, delaunay_color): - triangleList = subdiv.getTriangleList(); - size = img.shape - r = (0, 0, size[1], size[0]) - - for t in triangleList: - - pt1 = (t[0], t[1]) - pt2 = (t[2], t[3]) - pt3 = (t[4], t[5]) - - if rect_contains(r, pt1) and rect_contains(r, pt2) and rect_contains(r, pt3): - cv2.line(img, pt1, pt2, delaunay_color, 1, cv2.LINE_AA, 0) - cv2.line(img, pt2, pt3, delaunay_color, 1, cv2.LINE_AA, 0) - cv2.line(img, pt3, pt1, delaunay_color, 1, cv2.LINE_AA, 0) - - -# Draw voronoi diagram -def draw_voronoi(img, subdiv): - (facets, centers) = subdiv.getVoronoiFacetList([]) - - for i in range(0, len(facets)): - ifacet_arr = [] - for f in facets[i]: - ifacet_arr.append(f) - ifacet = np.array(ifacet_arr, np.int) - color = (random.randint(0, 255), random.randint(0, 255), random.randint(0, 255)) - - cv2.fillConvexPoly(img, ifacet, color, cv2.LINE_AA, 0) - ifacets = np.array([ifacet]) - cv2.polylines(img, ifacets, True, (0, 0, 0), 1, cv2.LINE_AA, 0) - cv2.circle(img, (centers[i][0], centers[i][1]), 3, (0, 0, 0), -1, cv2.LINE_AA, 0) - print("end of draw_voronoi") - - -def delauney_tri(ROOT_DIR, test_data, INNER_ONLY=False): - # Define window names - win_delaunay = "Delaunay Triangulation" - cv2.namedWindow(win_delaunay, cv2.WINDOW_NORMAL) - win_voronoi = "Voronoi Diagram" - - # Turn on animation while drawing triangles - animate = True - - # Define colors for drawing. - delaunay_color = (255, 255, 255) - points_color = (0, 0, 255) - - # Read in the image. - if (os.path.exists(os.path.join(ROOT_DIR, test_data))): - img = cv2.imread(os.path.join(ROOT_DIR, test_data)) - else: - print('not file founded.') - exit(0) - - CH = test_data[:-4] - # Keep a copy around - img_orig = img.copy() - - # Rectangle to be used with Subdiv2D - size = img.shape - rect = (0, 0, size[1], size[0]) - - # Create an array of points. - points = [] - - # Create an instance of Subdiv2D - subdiv = cv2.Subdiv2D(rect) - h = size[1] - 1 - w = size[0] - 1 - - # Read in the points from a text file - file = np.loadtxt(os.path.join(ROOT_DIR, CH + '_face_open_mouth.txt')) - file = file.reshape(68, 3) - - for i in range(file.shape[0]): - if(INNER_ONLY): - if(i >= 48 and i <= 59): ############## for inner lip only - continue - line = file[i] - x, y, z = line - points.append((int(float(x)), int(float(y)))) - - - points.append((0, 0)) - points.append((0, w // 4)) - points.append((0, w // 2)) - points.append((0, w // 4 * 3)) - points.append((0, w)) - points.append((h // 2, w)) - points.append((h, w)) - points.append((h, w // 2)) - points.append((h, 0)) - points.append((h // 4, 0)) - points.append((h // 2, 0)) - points.append((h // 4*3, 0)) - - # Insert points into subdiv - for p in points: - print(p) - subdiv.insert(p) - - # Show animation - if animate: - img_copy = img_orig.copy() - # Draw delaunay triangles - draw_delaunay(img_copy, subdiv, (255, 255, 0)) - cv2.imshow(win_delaunay, img_copy) - cv2.waitKey(100) - - # Draw delaunay triangles - draw_delaunay(img, subdiv, (255, 255, 0)) - triangleList = subdiv.getTriangleList() - - p_dict = {} # Initialize empty dictionary. - index = 0 - # Draw points - for p in points: - # draw_point(img, p, (0, 0, 255)) - p_dict[p] = index - index = index + 1 - - # Allocate space for voronoi Diagram - img_voronoi = np.zeros(img.shape, dtype=img.dtype) - - # Draw voronoi diagram - draw_voronoi(img_voronoi, subdiv) - - # Show results - cv2.imshow(win_delaunay, img) - print("Press any key to quit...") - cv2.waitKey(0) - - new_tri = []; - - for line in triangleList: - p1 = (line[0], line[1]) - p2 = (line[2], line[3]) - p3 = (line[4], line[5]) - - try: - p1_index = p_dict[p1] - p2_index = p_dict[p2] - p3_index = p_dict[p3] - except: - continue - - new_tri.append((p1_index, p2_index, p3_index)) - - print(new_tri) - - a = np.array(new_tri).astype(int) - np.savetxt(os.path.join(ROOT_DIR, CH + '_delauney_tri.txt'), a, fmt='%d') - - - - - - - - diff --git a/spaces/marlenezw/audio-driven-animations/MakeItTalk/thirdparty/face_of_art/README.md b/spaces/marlenezw/audio-driven-animations/MakeItTalk/thirdparty/face_of_art/README.md deleted file mode 100644 index de7e0cfc4c7a0bdcb60781bf6c59fa6a06eb8fa9..0000000000000000000000000000000000000000 --- a/spaces/marlenezw/audio-driven-animations/MakeItTalk/thirdparty/face_of_art/README.md +++ /dev/null @@ -1,98 +0,0 @@ -# The Face of Art: Landmark Detection and Geometric Style in Portraits - -Code for the landmark detection framework described in [The Face of Art: Landmark Detection and Geometric Style in Portraits](http://www.faculty.idc.ac.il/arik/site/foa/face-of-art.asp) (SIGGRAPH 2019) - -![](old/teaser.png) -Top: landmark detection results on artistic portraits with different styles allows to define the geometric style of an artist. Bottom: results of the style transfer of portraits using various artists' geometric style, including Amedeo Modigliani, Pablo Picasso, Margaret Keane, Fernand Léger, and Tsuguharu Foujita. Top right portrait is from 'Woman with Peanuts,' ©1962, Estate of Roy Lichtenstein. - -## Getting Started - -### Requirements - -* python -* anaconda - -### Download - -#### Model -download model weights from [here](https://www.dropbox.com/sh/hrxcyug1bmbj6cs/AAAxq_zI5eawcLjM8zvUwaXha?dl=0). - -#### Datasets -* The datasets used for training and evaluating our model can be found [here](https://ibug.doc.ic.ac.uk/resources/facial-point-annotations/). - -* The Artistic-Faces dataset can be found [here](http://www.faculty.idc.ac.il/arik/site/foa/artistic-faces-dataset.asp). - -* Training images with texture augmentation can be found [here](https://www.dropbox.com/sh/av2k1i1082z0nie/AAC5qV1E2UkqpDLVsv7TazMta?dl=0). - before applying texture style transfer, the training images were cropped to the ground-truth face bounding-box with 25% margin. To crop training images, run the script `crop_training_set.py`. - -* our model expects the following directory structure of landmark detection datasets: -``` -landmark_detection_datasets - ├── training - ├── test - ├── challenging - ├── common - ├── full - ├── crop_gt_margin_0.25 (cropped images of training set) - └── crop_gt_margin_0.25_ns (cropped images of training set + texture style transfer) -``` -### Install - -Create a virtual environment and install the following: -* opencv -* menpo -* menpofit -* tensorflow-gpu - -for python 2: -``` -conda create -n foa_env python=2.7 anaconda -source activate foa_env -conda install -c menpo opencv -conda install -c menpo menpo -conda install -c menpo menpofit -pip install tensorflow-gpu - -``` - -for python 3: -``` -conda create -n foa_env python=3.5 anaconda -source activate foa_env -conda install -c menpo opencv -conda install -c menpo menpo -conda install -c menpo menpofit -pip3 install tensorflow-gpu - -``` - -Clone repository: - -``` -git clone https://github.com/papulke/deep_face_heatmaps -``` - -## Instructions - -### Training - -To train the network you need to run `train_heatmaps_network.py` - -example for training a model with texture augmentation (100% of images) and geometric augmentation (~70% of images): -``` -python train_heatmaps_network.py --output_dir='test_artistic_aug' --augment_geom=True \ ---augment_texture=True --p_texture=1. --p_geom=0.7 -``` - -### Testing - -For using the detection framework to predict landmarks, run the script `predict_landmarks.py` - -## Acknowledgments - -* [ect](https://github.com/HongwenZhang/ECT-FaceAlignment) -* [menpo](https://github.com/menpo/menpo) -* [menpofit](https://github.com/menpo/menpofit) -* [mdm](https://github.com/trigeorgis/mdm) -* [style transfer implementation](https://github.com/woodrush/neural-art-tf) -* [painter-by-numbers dataset](https://www.kaggle.com/c/painter-by-numbers/data) diff --git a/spaces/marlenezw/audio-driven-animations/MakeItTalk/util/vis.py b/spaces/marlenezw/audio-driven-animations/MakeItTalk/util/vis.py deleted file mode 100644 index 5a3a477133fab4dbc4143d1165292473b93f51ea..0000000000000000000000000000000000000000 --- a/spaces/marlenezw/audio-driven-animations/MakeItTalk/util/vis.py +++ /dev/null @@ -1,268 +0,0 @@ -""" - # Copyright 2020 Adobe - # All Rights Reserved. - - # NOTICE: Adobe permits you to use, modify, and distribute this file in - # accordance with the terms of the Adobe license agreement accompanying - # it. - -""" - -import numpy as np -import os -import matplotlib.pyplot as plt -import cv2 -import ffmpeg - -OTHER_SPECIFIC_VOICE = None - -class Vis(): - - def __init__(self, fls, filename, audio_filenam=None, fps=100, frames=1000): - - # from scipy.signal import savgol_filter - # fls = savgol_filter(fls, 21, 3, axis=0) - - # adj nose - # fls[:, 27 * 3:28 * 3] = fls[:, 28 * 3:29 * 3] * 2 - fls[:, 29 * 3:30 * 3] - # fls[:, 28 * 3:29 * 3] = fls[:, 27 * 3:28 * 3]*0.75 + fls[:, 31 * 3:32 * 3]*0.25 - # fls[:, 29 * 3:30 * 3] = fls[:, 27 * 3:28 * 3]*0.5 + fls[:, 31 * 3:32 * 3]*0.5 - # fls[:, 30 * 3:31 * 3] = fls[:, 27 * 3:28 * 3] * 0.25 + fls[:, 31 * 3:32 * 3] * 0.75 - - fls = fls * 120 - fls[:, 0::3] += 200 - fls[:, 1::3] += 100 - - fls = fls.reshape((-1, 68, 3)) - fls = fls.astype(int) - - writer = cv2.VideoWriter(os.path.join('MakeItTalk/examples', 'tmp.mp4'), - cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'), fps, (400, 400)) - - frames = np.min((fls.shape[0], frames)) - for i in range(frames): #fls.shape[0]): - # print(i, fls.shape[0]) - frame = np.ones((400, 400, 3), np.uint8) * 0 - frame = self.__vis_landmark_on_img__(frame, fls[i]) - writer.write(frame) - writer.release() - - if(audio_filenam is not None): - print(audio_filenam) - os.system('ffmpeg -y -i {} -i {} -strict -2 -shortest {}'.format( - os.path.join('MakeItTalk/examples', 'tmp.mp4'), - audio_filenam, - os.path.join('MakeItTalk/examples', '{}_av.mp4'.format(filename)) - )) - else: - os.system('ffmpeg -y -i {} {}'.format( - os.path.join('MakeItTalk/examples', 'tmp.mp4'), - os.path.join('MakeItTalk/examples', '{}_av.mp4'.format(filename)) - )) - - os.remove(os.path.join('MakeItTalk/examples', 'tmp.mp4')) - - - - - def __vis_landmark_on_img__(self, img, shape, linewidth=2): - ''' - Visualize landmark on images. - ''' - def draw_curve(idx_list, color=(0, 255, 0), loop=False, lineWidth=linewidth): - for i in idx_list: - cv2.line(img, (shape[i, 0], shape[i, 1]), (shape[i + 1, 0], shape[i + 1, 1]), color, lineWidth) - if (loop): - cv2.line(img, (shape[idx_list[0], 0], shape[idx_list[0], 1]), - (shape[idx_list[-1] + 1, 0], shape[idx_list[-1] + 1, 1]), color, lineWidth) - - # draw_curve(list(range(0, 16)), color=(0, 255, 0)) # jaw - # draw_curve(list(range(17, 21)), color=(0, 127, 255)) # eye brow - # draw_curve(list(range(22, 26)), color=(0, 127, 255)) - # draw_curve(list(range(27, 35)), color=(255, 0, 0)) # nose - # draw_curve(list(range(36, 41)), loop=True, color=(204, 0, 204)) # eyes - # draw_curve(list(range(42, 47)), loop=True, color=(204, 0, 204)) - # draw_curve(list(range(48, 59)), loop=True, color=(0, 0, 255)) # mouth - # draw_curve(list(range(60, 67)), loop=True, color=(0, 0, 255)) - # draw_curve(list(range(60, 64)), loop=False, color=(0, 0, 255)) - - draw_curve(list(range(0, 16)), color=(0, 255, 0)) # jaw - draw_curve(list(range(17, 21)), color=(0, 255, 0)) # eye brow - draw_curve(list(range(22, 26)), color=(0, 255, 0)) - draw_curve(list(range(27, 35)), color=(0, 255, 0)) # nose - draw_curve(list(range(36, 41)), loop=True, color=(0, 255, 0)) # eyes - draw_curve(list(range(42, 47)), loop=True, color=(0, 255, 0)) - draw_curve(list(range(48, 59)), loop=True, color=(0, 255, 255)) # mouth - draw_curve(list(range(60, 67)), loop=True, color=(255, 255, 0)) - draw_curve(list(range(60, 64)), loop=False, color=(0, 0, 255)) - - return img - - - -class Vis_old(): - - def __init__(self, run_name, pred_fl_filename, audio_filename, av_name='NAME', fps=100, frames=625, - postfix='', root_dir=r'E:\Dataset\TalkingToon\Obama', ifsmooth=True, rand_start=0): - - print(root_dir) - self.src_dir = os.path.join(root_dir, r'nn_result/{}'.format(run_name)) - self.std_face = np.loadtxt(r'src/dataset/utils/STD_FACE_LANDMARKS.txt') - self.std_face = self.std_face.reshape((-1, 204)) - - fls = np.loadtxt(os.path.join(self.src_dir, pred_fl_filename)) - - fls = fls * 120 - fls[:, 0::3] += 200 - fls[:, 1::3] += 100 - - fls = fls.reshape((-1, 68, 3)) - fls = fls.astype(int) - - writer = cv2.VideoWriter(os.path.join(self.src_dir, 'tmp.mp4'), - cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'), fps, (400, 400)) - - frames = np.min((fls.shape[0], frames)) - for i in range(frames): #fls.shape[0]): - # print(i, fls.shape[0]) - frame = np.ones((400, 400, 3), np.uint8) * 0 - frame = self.__vis_landmark_on_img__(frame, fls[i]) - writer.write(frame) - writer.release() - - if(os.path.exists(os.path.join(root_dir, 'demo_wav', '{}'.format(audio_filename)))): - ain = os.path.join(root_dir, 'demo_wav', '{}'.format(audio_filename)) - else: - ain = os.path.join(root_dir, 'raw_wav', '{}'.format(audio_filename)) - # print(ain) - # vin = ffmpeg.input(os.path.join(self.src_dir, 'tmp.mp4')).video - # ain = ffmpeg.input(ain).audio - # out = ffmpeg.output(vin, ain, os.path.join(self.src_dir, '{}_av.mp4'.format(pred_fl_filename[:-4])), shortest=None) - # out = out.overwrite_output().global_args('-loglevel', 'quiet') - # out.run() - - os.system('ffmpeg -y -loglevel error -i {} -ss {} {}'.format( - ain, rand_start/62.5, - os.path.join(self.src_dir, '{}_a_tmp.wav'.format(av_name)) - )) - - os.system('ffmpeg -y -loglevel error -i {} -i {} -pix_fmt yuv420p -strict -2 -shortest {}'.format( - os.path.join(self.src_dir, 'tmp.mp4'), - os.path.join(self.src_dir, '{}_a_tmp.wav'.format(av_name)), - os.path.join(self.src_dir, '{}_av.mp4'.format(av_name)) - )) - - os.remove(os.path.join(self.src_dir, 'tmp.mp4')) - os.remove(os.path.join(self.src_dir, '{}_a_tmp.wav'.format(av_name))) - - # os.remove(os.path.join(self.src_dir, filename)) - # exit(0) - - - - - - def __vis_landmark_on_img__(self, img, shape, linewidth=2): - ''' - Visualize landmark on images. - ''' - def draw_curve(idx_list, color=(0, 255, 0), loop=False, lineWidth=linewidth): - for i in idx_list: - cv2.line(img, (shape[i, 0], shape[i, 1]), (shape[i + 1, 0], shape[i + 1, 1]), color, lineWidth) - if (loop): - cv2.line(img, (shape[idx_list[0], 0], shape[idx_list[0], 1]), - (shape[idx_list[-1] + 1, 0], shape[idx_list[-1] + 1, 1]), color, lineWidth) - - # draw_curve(list(range(0, 16)), color=(0, 255, 0)) # jaw - # draw_curve(list(range(17, 21)), color=(0, 127, 255)) # eye brow - # draw_curve(list(range(22, 26)), color=(0, 127, 255)) - # draw_curve(list(range(27, 35)), color=(255, 0, 0)) # nose - # draw_curve(list(range(36, 41)), loop=True, color=(204, 0, 204)) # eyes - # draw_curve(list(range(42, 47)), loop=True, color=(204, 0, 204)) - # draw_curve(list(range(48, 59)), loop=True, color=(0, 0, 255)) # mouth - # draw_curve(list(range(60, 67)), loop=True, color=(0, 0, 255)) - # draw_curve(list(range(60, 64)), loop=False, color=(0, 0, 255)) - - draw_curve(list(range(0, 16)), color=(0, 255, 0)) # jaw - draw_curve(list(range(17, 21)), color=(0, 255, 0)) # eye brow - draw_curve(list(range(22, 26)), color=(0, 255, 0)) - draw_curve(list(range(27, 35)), color=(0, 255, 0)) # nose - draw_curve(list(range(36, 41)), loop=True, color=(0, 255, 0)) # eyes - draw_curve(list(range(42, 47)), loop=True, color=(0, 255, 0)) - draw_curve(list(range(48, 59)), loop=True, color=(0, 255, 255)) # mouth - draw_curve(list(range(60, 67)), loop=True, color=(255, 255, 0)) - draw_curve(list(range(60, 64)), loop=False, color=(0, 0, 255)) - - return img - - -class Vis_comp(): - - def __init__(self, run_name, pred1, pred2, audio_filename, av_name='NAME', fps=100, frames=625, postfix='', root_dir=r'E:\Dataset\TalkingToon\Obama', ifsmooth=True): - - print(root_dir) - self.src_dir = os.path.join(root_dir, r'nn_result/{}'.format(run_name)) - self.std_face = np.loadtxt(r'src/dataset/utils/STD_FACE_LANDMARKS.txt') - self.std_face = self.std_face.reshape((-1, 204)) - - def fls_adj(fls): - fls = fls * 120 - fls[:, 0::3] += 200 - fls[:, 1::3] += 100 - fls = fls.reshape((-1, 68, 3)) - fls = fls.astype(int) - return fls - - fls = np.loadtxt(os.path.join(self.src_dir, pred1)) - fls2 = np.loadtxt(os.path.join(self.src_dir, pred2)) - fls = fls_adj(fls) - fls2 = fls_adj(fls2) - - writer = cv2.VideoWriter(os.path.join(self.src_dir, 'tmp.mp4'), - cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'), fps, (400, 400)) - - frames = np.min((fls.shape[0], frames)) - for i in range(frames): #fls.shape[0]): - # print(i, fls.shape[0]) - frame = np.ones((400, 400, 3), np.uint8) * 0 - frame = self.__vis_landmark_on_img__(frame, fls[i]) - frame = self.__vis_landmark_on_img__(frame, fls2[i]) - writer.write(frame) - writer.release() - - if(os.path.exists(os.path.join(root_dir, 'demo_wav', '{}'.format(audio_filename)))): - ain = os.path.join(root_dir, 'demo_wav', '{}'.format(audio_filename)) - else: - ain = os.path.join(root_dir, 'raw_wav', '{}'.format(audio_filename)) - - os.system('ffmpeg -y -loglevel error -i {} -i {} -pix_fmt yuv420p -strict -2 -shortest {}'.format( - os.path.join(self.src_dir, 'tmp.mp4'), - ain, - os.path.join(self.src_dir, '{}_av.mp4'.format(av_name)) - )) - - os.remove(os.path.join(self.src_dir, 'tmp.mp4')) - - - def __vis_landmark_on_img__(self, img, shape, linewidth=2): - ''' - Visualize landmark on images. - ''' - def draw_curve(idx_list, color=(0, 255, 0), loop=False, lineWidth=linewidth): - for i in idx_list: - cv2.line(img, (shape[i, 0], shape[i, 1]), (shape[i + 1, 0], shape[i + 1, 1]), color, lineWidth) - if (loop): - cv2.line(img, (shape[idx_list[0], 0], shape[idx_list[0], 1]), - (shape[idx_list[-1] + 1, 0], shape[idx_list[-1] + 1, 1]), color, lineWidth) - - draw_curve(list(range(0, 16)), color=(0, 255, 0)) # jaw - draw_curve(list(range(17, 21)), color=(0, 255, 0)) # eye brow - draw_curve(list(range(22, 26)), color=(0, 255, 0)) - draw_curve(list(range(27, 35)), color=(0, 255, 0)) # nose - draw_curve(list(range(36, 41)), loop=True, color=(0, 255, 0)) # eyes - draw_curve(list(range(42, 47)), loop=True, color=(0, 255, 0)) - draw_curve(list(range(48, 59)), loop=True, color=(0, 255, 255)) # mouth - draw_curve(list(range(60, 67)), loop=True, color=(255, 255, 0)) - draw_curve(list(range(60, 64)), loop=False, color=(0, 0, 255)) - - return img \ No newline at end of file diff --git a/spaces/maxmax20160403/sovits5.0/whisper/normalizers/english.py b/spaces/maxmax20160403/sovits5.0/whisper/normalizers/english.py deleted file mode 100644 index d5c2bb4ebef1a04b9ab14970c10c3fb0d4e948d4..0000000000000000000000000000000000000000 --- a/spaces/maxmax20160403/sovits5.0/whisper/normalizers/english.py +++ /dev/null @@ -1,543 +0,0 @@ -import json -import os -import re -from fractions import Fraction -from typing import Iterator, List, Match, Optional, Union - -from more_itertools import windowed - -from .basic import remove_symbols_and_diacritics - - -class EnglishNumberNormalizer: - """ - Convert any spelled-out numbers into arabic numbers, while handling: - - - remove any commas - - keep the suffixes such as: `1960s`, `274th`, `32nd`, etc. - - spell out currency symbols after the number. e.g. `$20 million` -> `20000000 dollars` - - spell out `one` and `ones` - - interpret successive single-digit numbers as nominal: `one oh one` -> `101` - """ - - def __init__(self): - super().__init__() - - self.zeros = {"o", "oh", "zero"} - self.ones = { - name: i - for i, name in enumerate( - [ - "one", - "two", - "three", - "four", - "five", - "six", - "seven", - "eight", - "nine", - "ten", - "eleven", - "twelve", - "thirteen", - "fourteen", - "fifteen", - "sixteen", - "seventeen", - "eighteen", - "nineteen", - ], - start=1, - ) - } - self.ones_plural = { - "sixes" if name == "six" else name + "s": (value, "s") - for name, value in self.ones.items() - } - self.ones_ordinal = { - "zeroth": (0, "th"), - "first": (1, "st"), - "second": (2, "nd"), - "third": (3, "rd"), - "fifth": (5, "th"), - "twelfth": (12, "th"), - **{ - name + ("h" if name.endswith("t") else "th"): (value, "th") - for name, value in self.ones.items() - if value > 3 and value != 5 and value != 12 - }, - } - self.ones_suffixed = {**self.ones_plural, **self.ones_ordinal} - - self.tens = { - "twenty": 20, - "thirty": 30, - "forty": 40, - "fifty": 50, - "sixty": 60, - "seventy": 70, - "eighty": 80, - "ninety": 90, - } - self.tens_plural = { - name.replace("y", "ies"): (value, "s") for name, value in self.tens.items() - } - self.tens_ordinal = { - name.replace("y", "ieth"): (value, "th") for name, value in self.tens.items() - } - self.tens_suffixed = {**self.tens_plural, **self.tens_ordinal} - - self.multipliers = { - "hundred": 100, - "thousand": 1_000, - "million": 1_000_000, - "billion": 1_000_000_000, - "trillion": 1_000_000_000_000, - "quadrillion": 1_000_000_000_000_000, - "quintillion": 1_000_000_000_000_000_000, - "sextillion": 1_000_000_000_000_000_000_000, - "septillion": 1_000_000_000_000_000_000_000_000, - "octillion": 1_000_000_000_000_000_000_000_000_000, - "nonillion": 1_000_000_000_000_000_000_000_000_000_000, - "decillion": 1_000_000_000_000_000_000_000_000_000_000_000, - } - self.multipliers_plural = { - name + "s": (value, "s") for name, value in self.multipliers.items() - } - self.multipliers_ordinal = { - name + "th": (value, "th") for name, value in self.multipliers.items() - } - self.multipliers_suffixed = {**self.multipliers_plural, **self.multipliers_ordinal} - self.decimals = {*self.ones, *self.tens, *self.zeros} - - self.preceding_prefixers = { - "minus": "-", - "negative": "-", - "plus": "+", - "positive": "+", - } - self.following_prefixers = { - "pound": "£", - "pounds": "£", - "euro": "€", - "euros": "€", - "dollar": "$", - "dollars": "$", - "cent": "¢", - "cents": "¢", - } - self.prefixes = set( - list(self.preceding_prefixers.values()) + list(self.following_prefixers.values()) - ) - self.suffixers = { - "per": {"cent": "%"}, - "percent": "%", - } - self.specials = {"and", "double", "triple", "point"} - - self.words = set( - [ - key - for mapping in [ - self.zeros, - self.ones, - self.ones_suffixed, - self.tens, - self.tens_suffixed, - self.multipliers, - self.multipliers_suffixed, - self.preceding_prefixers, - self.following_prefixers, - self.suffixers, - self.specials, - ] - for key in mapping - ] - ) - self.literal_words = {"one", "ones"} - - def process_words(self, words: List[str]) -> Iterator[str]: - prefix: Optional[str] = None - value: Optional[Union[str, int]] = None - skip = False - - def to_fraction(s: str): - try: - return Fraction(s) - except ValueError: - return None - - def output(result: Union[str, int]): - nonlocal prefix, value - result = str(result) - if prefix is not None: - result = prefix + result - value = None - prefix = None - return result - - if len(words) == 0: - return - - for prev, current, next in windowed([None] + words + [None], 3): - if skip: - skip = False - continue - - next_is_numeric = next is not None and re.match(r"^\d+(\.\d+)?$", next) - has_prefix = current[0] in self.prefixes - current_without_prefix = current[1:] if has_prefix else current - if re.match(r"^\d+(\.\d+)?$", current_without_prefix): - # arabic numbers (potentially with signs and fractions) - f = to_fraction(current_without_prefix) - assert f is not None - if value is not None: - if isinstance(value, str) and value.endswith("."): - # concatenate decimals / ip address components - value = str(value) + str(current) - continue - else: - yield output(value) - - prefix = current[0] if has_prefix else prefix - if f.denominator == 1: - value = f.numerator # store integers as int - else: - value = current_without_prefix - elif current not in self.words: - # non-numeric words - if value is not None: - yield output(value) - yield output(current) - elif current in self.zeros: - value = str(value or "") + "0" - elif current in self.ones: - ones = self.ones[current] - - if value is None: - value = ones - elif isinstance(value, str) or prev in self.ones: - if prev in self.tens and ones < 10: # replace the last zero with the digit - assert value[-1] == "0" - value = value[:-1] + str(ones) - else: - value = str(value) + str(ones) - elif ones < 10: - if value % 10 == 0: - value += ones - else: - value = str(value) + str(ones) - else: # eleven to nineteen - if value % 100 == 0: - value += ones - else: - value = str(value) + str(ones) - elif current in self.ones_suffixed: - # ordinal or cardinal; yield the number right away - ones, suffix = self.ones_suffixed[current] - if value is None: - yield output(str(ones) + suffix) - elif isinstance(value, str) or prev in self.ones: - if prev in self.tens and ones < 10: - assert value[-1] == "0" - yield output(value[:-1] + str(ones) + suffix) - else: - yield output(str(value) + str(ones) + suffix) - elif ones < 10: - if value % 10 == 0: - yield output(str(value + ones) + suffix) - else: - yield output(str(value) + str(ones) + suffix) - else: # eleven to nineteen - if value % 100 == 0: - yield output(str(value + ones) + suffix) - else: - yield output(str(value) + str(ones) + suffix) - value = None - elif current in self.tens: - tens = self.tens[current] - if value is None: - value = tens - elif isinstance(value, str): - value = str(value) + str(tens) - else: - if value % 100 == 0: - value += tens - else: - value = str(value) + str(tens) - elif current in self.tens_suffixed: - # ordinal or cardinal; yield the number right away - tens, suffix = self.tens_suffixed[current] - if value is None: - yield output(str(tens) + suffix) - elif isinstance(value, str): - yield output(str(value) + str(tens) + suffix) - else: - if value % 100 == 0: - yield output(str(value + tens) + suffix) - else: - yield output(str(value) + str(tens) + suffix) - elif current in self.multipliers: - multiplier = self.multipliers[current] - if value is None: - value = multiplier - elif isinstance(value, str) or value == 0: - f = to_fraction(value) - p = f * multiplier if f is not None else None - if f is not None and p.denominator == 1: - value = p.numerator - else: - yield output(value) - value = multiplier - else: - before = value // 1000 * 1000 - residual = value % 1000 - value = before + residual * multiplier - elif current in self.multipliers_suffixed: - multiplier, suffix = self.multipliers_suffixed[current] - if value is None: - yield output(str(multiplier) + suffix) - elif isinstance(value, str): - f = to_fraction(value) - p = f * multiplier if f is not None else None - if f is not None and p.denominator == 1: - yield output(str(p.numerator) + suffix) - else: - yield output(value) - yield output(str(multiplier) + suffix) - else: # int - before = value // 1000 * 1000 - residual = value % 1000 - value = before + residual * multiplier - yield output(str(value) + suffix) - value = None - elif current in self.preceding_prefixers: - # apply prefix (positive, minus, etc.) if it precedes a number - if value is not None: - yield output(value) - - if next in self.words or next_is_numeric: - prefix = self.preceding_prefixers[current] - else: - yield output(current) - elif current in self.following_prefixers: - # apply prefix (dollars, cents, etc.) only after a number - if value is not None: - prefix = self.following_prefixers[current] - yield output(value) - else: - yield output(current) - elif current in self.suffixers: - # apply suffix symbols (percent -> '%') - if value is not None: - suffix = self.suffixers[current] - if isinstance(suffix, dict): - if next in suffix: - yield output(str(value) + suffix[next]) - skip = True - else: - yield output(value) - yield output(current) - else: - yield output(str(value) + suffix) - else: - yield output(current) - elif current in self.specials: - if next not in self.words and not next_is_numeric: - # apply special handling only if the next word can be numeric - if value is not None: - yield output(value) - yield output(current) - elif current == "and": - # ignore "and" after hundreds, thousands, etc. - if prev not in self.multipliers: - if value is not None: - yield output(value) - yield output(current) - elif current == "double" or current == "triple": - if next in self.ones or next in self.zeros: - repeats = 2 if current == "double" else 3 - ones = self.ones.get(next, 0) - value = str(value or "") + str(ones) * repeats - skip = True - else: - if value is not None: - yield output(value) - yield output(current) - elif current == "point": - if next in self.decimals or next_is_numeric: - value = str(value or "") + "." - else: - # should all have been covered at this point - raise ValueError(f"Unexpected token: {current}") - else: - # all should have been covered at this point - raise ValueError(f"Unexpected token: {current}") - - if value is not None: - yield output(value) - - def preprocess(self, s: str): - # replace " and a half" with " point five" - results = [] - - segments = re.split(r"\band\s+a\s+half\b", s) - for i, segment in enumerate(segments): - if len(segment.strip()) == 0: - continue - if i == len(segments) - 1: - results.append(segment) - else: - results.append(segment) - last_word = segment.rsplit(maxsplit=2)[-1] - if last_word in self.decimals or last_word in self.multipliers: - results.append("point five") - else: - results.append("and a half") - - s = " ".join(results) - - # put a space at number/letter boundary - s = re.sub(r"([a-z])([0-9])", r"\1 \2", s) - s = re.sub(r"([0-9])([a-z])", r"\1 \2", s) - - # but remove spaces which could be a suffix - s = re.sub(r"([0-9])\s+(st|nd|rd|th|s)\b", r"\1\2", s) - - return s - - def postprocess(self, s: str): - def combine_cents(m: Match): - try: - currency = m.group(1) - integer = m.group(2) - cents = int(m.group(3)) - return f"{currency}{integer}.{cents:02d}" - except ValueError: - return m.string - - def extract_cents(m: Match): - try: - return f"¢{int(m.group(1))}" - except ValueError: - return m.string - - # apply currency postprocessing; "$2 and ¢7" -> "$2.07" - s = re.sub(r"([€£$])([0-9]+) (?:and )?¢([0-9]{1,2})\b", combine_cents, s) - s = re.sub(r"[€£$]0.([0-9]{1,2})\b", extract_cents, s) - - # write "one(s)" instead of "1(s)", just for the readability - s = re.sub(r"\b1(s?)\b", r"one\1", s) - - return s - - def __call__(self, s: str): - s = self.preprocess(s) - s = " ".join(word for word in self.process_words(s.split()) if word is not None) - s = self.postprocess(s) - - return s - - -class EnglishSpellingNormalizer: - """ - Applies British-American spelling mappings as listed in [1]. - - [1] https://www.tysto.com/uk-us-spelling-list.html - """ - - def __init__(self): - mapping_path = os.path.join(os.path.dirname(__file__), "english.json") - self.mapping = json.load(open(mapping_path)) - - def __call__(self, s: str): - return " ".join(self.mapping.get(word, word) for word in s.split()) - - -class EnglishTextNormalizer: - def __init__(self): - self.ignore_patterns = r"\b(hmm|mm|mhm|mmm|uh|um)\b" - self.replacers = { - # common contractions - r"\bwon't\b": "will not", - r"\bcan't\b": "can not", - r"\blet's\b": "let us", - r"\bain't\b": "aint", - r"\by'all\b": "you all", - r"\bwanna\b": "want to", - r"\bgotta\b": "got to", - r"\bgonna\b": "going to", - r"\bi'ma\b": "i am going to", - r"\bimma\b": "i am going to", - r"\bwoulda\b": "would have", - r"\bcoulda\b": "could have", - r"\bshoulda\b": "should have", - r"\bma'am\b": "madam", - # contractions in titles/prefixes - r"\bmr\b": "mister ", - r"\bmrs\b": "missus ", - r"\bst\b": "saint ", - r"\bdr\b": "doctor ", - r"\bprof\b": "professor ", - r"\bcapt\b": "captain ", - r"\bgov\b": "governor ", - r"\bald\b": "alderman ", - r"\bgen\b": "general ", - r"\bsen\b": "senator ", - r"\brep\b": "representative ", - r"\bpres\b": "president ", - r"\brev\b": "reverend ", - r"\bhon\b": "honorable ", - r"\basst\b": "assistant ", - r"\bassoc\b": "associate ", - r"\blt\b": "lieutenant ", - r"\bcol\b": "colonel ", - r"\bjr\b": "junior ", - r"\bsr\b": "senior ", - r"\besq\b": "esquire ", - # prefect tenses, ideally it should be any past participles, but it's harder.. - r"'d been\b": " had been", - r"'s been\b": " has been", - r"'d gone\b": " had gone", - r"'s gone\b": " has gone", - r"'d done\b": " had done", # "'s done" is ambiguous - r"'s got\b": " has got", - # general contractions - r"n't\b": " not", - r"'re\b": " are", - r"'s\b": " is", - r"'d\b": " would", - r"'ll\b": " will", - r"'t\b": " not", - r"'ve\b": " have", - r"'m\b": " am", - } - self.standardize_numbers = EnglishNumberNormalizer() - self.standardize_spellings = EnglishSpellingNormalizer() - - def __call__(self, s: str): - s = s.lower() - - s = re.sub(r"[<\[][^>\]]*[>\]]", "", s) # remove words between brackets - s = re.sub(r"\(([^)]+?)\)", "", s) # remove words between parenthesis - s = re.sub(self.ignore_patterns, "", s) - s = re.sub(r"\s+'", "'", s) # standardize when there's a space before an apostrophe - - for pattern, replacement in self.replacers.items(): - s = re.sub(pattern, replacement, s) - - s = re.sub(r"(\d),(\d)", r"\1\2", s) # remove commas between digits - s = re.sub(r"\.([^0-9]|$)", r" \1", s) # remove periods not followed by numbers - s = remove_symbols_and_diacritics(s, keep=".%$¢€£") # keep some symbols for numerics - - s = self.standardize_numbers(s) - s = self.standardize_spellings(s) - - # now remove prefix/suffix symbols that are not preceded/followed by numbers - s = re.sub(r"[.$¢€£]([^0-9])", r" \1", s) - s = re.sub(r"([^0-9])%", r"\1 ", s) - - s = re.sub(r"\s+", " ", s) # replace any successive whitespace characters with a space - - return s diff --git a/spaces/merve/hidden-bias/public/uncertainty-calibration/footnote.css b/spaces/merve/hidden-bias/public/uncertainty-calibration/footnote.css deleted file mode 100644 index 83472e6bc26c962b1c2fcc630d641ed62f181e77..0000000000000000000000000000000000000000 --- a/spaces/merve/hidden-bias/public/uncertainty-calibration/footnote.css +++ /dev/null @@ -1,57 +0,0 @@ -.tooltip-footnote { - top: -1000px; - position: absolute; - padding: 10px; - background: rgba(255, 255, 255, .8); - border: 0px solid lightgray; - - width: 300px !important; - font-size: 14px; - line-height: 1.4em; - background: rgba(0, 0, 0, .8); - color: #fff; - pointer-events: all !important; -} -.tooltip-footnote a{ - color: #fff !important; -} -.tooltip-footnote:hover{ -/* opacity: 1; - pointer-events: all !important; -*/} - -.tooltip-footnote-hidden{ - opacity: 0; - transition: opacity .3s; - transition-delay: .2s; - pointer-events: none !important; -} - -@media (max-width: 590px){ - .footend{ - margin-left: 0px; - width: 10px; - } - - div.tooltip-footnote{ - transition: all 0s !important; - transition-delay: 0s !important; - - display: none; - position: fixed; - bottom: -1px; - width: calc(100%); - left: -1px !important; - right: -1px !important; - top: auto !important; - width: auto !important; - } -} - -.footstart{ - padding-left: 2px; - height: 8px !important; - /*background: red;*/ - /*display: inline-block;*/ - line-height: 0em; -} diff --git a/spaces/merve/measuring-fairness/public/fill-in-the-blank/init-pair.js b/spaces/merve/measuring-fairness/public/fill-in-the-blank/init-pair.js deleted file mode 100644 index dbd16d4499ddbcc59234fcdefbf7a5cad6f91a7a..0000000000000000000000000000000000000000 --- a/spaces/merve/measuring-fairness/public/fill-in-the-blank/init-pair.js +++ /dev/null @@ -1,360 +0,0 @@ -/* Copyright 2021 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - - -window.initPair = function(pair){ - var isMobile = window.innerWidth <= 820 - - var sel = d3.select('.' + pair.class).html('') - .at({role: 'graphics-document', 'aria-label': pair.ariaLabel}) - .on('keydown', function(){ - sel.classed('changed', 1) - if (d3.event.keyCode != 13) return - d3.event.preventDefault() - // return - - pair.str0 = '' - pair.str1 = '' - - updateChart() - }) - - if (!sel.node()) return - - var optionSel = sel.append('div.options') - - var inputRow = optionSel.append('div.flex-row.flex-row-textarea') - var input1Sel = inputRow.append('textarea.input-1') - .st({color: util.colors[1]}).at({cols: 30}) - input1Sel.node().value = pair.s1.replace('[MASK]', '_') - - var input0Sel = inputRow.append('textarea.input-0') - .st({color: util.colors[0]}).at({cols: 30}) - input0Sel.node().value = pair.s0.replace('[MASK]', '_') - - if (isMobile){ - sel.selectAll('textarea').on('change', updateChart) - } - - var countSel = optionSel.append('div') - .append('b').text('Number of Tokens') - .append('info').text('ⓘ').call(addLockedTooltip) - .datum('The scales are set using the top N tokens for each sentence.

    "Likelihoods" will show more than N tokens if a top completion for one sentence is unlikely for the other sentence.') - .parent().parent() - .append('div.flex-row') - .appendMany('div.button', [30, 200, 1000, 5000, 99999]) - .text(d => d > 5000 ? 'All' : d) - .st({textAlign: 'center'}) - .on('click', d => { - pair.count = d - updateChart() - }) - - var typeSel = optionSel.append('div') - .append('b').text('Chart Type') - .append('info').text('ⓘ').call(addLockedTooltip) - .datum('"Likelihoods" shows the logits from both models plotted directly with a shared linear scale.

    To better contrast the outputs, "Differences" shows logitA - logitB on the y-axis and mean(logitA, logitB) on the x-axis with separate linear scales.') - .parent().parent() - .append('div.flex-row') - .appendMany('div.button', ['Likelihoods', 'Differences']) - .text(d => d) - .st({textAlign: 'center'}) - .on('click', d => { - pair.type = d - updateChart() - }) - - var modelSel = optionSel.append('div') - .st({display: pair.model == 'BERT' ? 'none' : ''}) - .append('b').text('Model') - .parent() - .append('div.flex-row') - .appendMany('div.button', ['BERT', 'Zari']) - .text(d => d) - .st({textAlign: 'center'}) - .on('click', d => { - pair.model = d - updateChart() - }) - - // TODO add loading spinner - var updateSel = optionSel - .append('div.flex-row') - .append('div.button.update').on('click', updateChart) - .text('Update') - .st({display: isMobile ? 'none' : ''}) - - var warningSel = optionSel.append('div.warning') - .text('⚠️Some of the text this model was trained on includes harmful stereotypes. This is a tool to uncover these associations—not an endorsement of them.') - - var resetSel = optionSel.append('div.reset') - .html(' Reset') - .on('click', () => { - pair = JSON.parse(pair.pairStr) - pair.pairStr = JSON.stringify(pair) - - input0Sel.node().value = pair.s0 - input1Sel.node().value = pair.s1 - - updateChart(true) - }) - - if (pair.alts){ - d3.select('.' + pair.class + '-alts').html('') - .classed('alt-block', 1).st({display: 'block'}) - .appendMany('span.p-button-link', pair.alts) - .html(d => d.str) - .on('click', d => { - input0Sel.node().value = d.s0 - input1Sel.node().value = d.s1 - - updateChart() - }) - } - - - var margin = {bottom: 50, left: 25, top: 5, right: 20} - var graphSel = sel.append('div.graph') - var totalWidth = graphSel.node().offsetWidth - var width = totalWidth - margin.left - margin.right - - var c = d3.conventions({ - sel: graphSel.append('div').st({marginTop: isMobile ? 20 : -5}), - width, - height: width, - margin, - layers: 'sdds', - }) - - - var nTicks = 4 - var tickScale = d3.scaleLinear().range([0, c.width]) - c.svg.appendMany('path.bg-tick', d3.range(nTicks + 1)) - .at({d: d => `M ${.5 + Math.round(tickScale(d/nTicks))} 0 V ${c.height}`}) - c.svg.appendMany('path.bg-tick', d3.range(nTicks + 1)) - .at({d: d => `M 0 ${.5 + Math.round(tickScale(d/nTicks))} H ${c.width}`}) - - - var annotationSel = c.layers[1].appendMany('div.annotations', pair.annotations) - .translate(d => d.pos) - .html(d => d.str) - .st({color: d => d.color, width: 250, postion: 'absolute'}) - - var scatter = window.initScatter(c) - - updateChart(true) - - - async function updateChart(isFirst){ - sel.classed('changed', 0) - warningSel.st({opacity: isFirst ? 0 : 1}) - resetSel.st({opacity: isFirst ? 0 : 1}) - annotationSel.st({opacity: isFirst ? 1 : 0}) - - countSel.classed('active', d => d == pair.count) - typeSel.classed('active', d => d == pair.type) - modelSel.classed('active', d => d == pair.model) - - function getStr(sel){ - return sel.node().value.replace('_', '[MASK]') - } - - var modelPath = pair.model == 'Zari' ? 'embed_zari_cda' : 'embed' - - pair.s0 = input0Sel.node().value.replace('_', '[MASK]') - pair.s1 = input1Sel.node().value.replace('_', '[MASK]') - - updateSel.classed('loading', 1) - var vals0 = await post(modelPath, {sentence: pair.s0}) - var vals1 = await post(modelPath, {sentence: pair.s1}) - updateSel.classed('loading', 0) - - - var allTokens = vals0.map((v0, i) => { - return {word: tokenizer.vocab[i], v0, i, v1: vals1[i]} - }) - allTokens.forEach(d => { - d.dif = d.v0 - d.v1 - d.meanV = (d.v0 + d.v1) / 2 - d.isVisible = false - }) - - _.sortBy(allTokens, d => -d.v1).forEach((d, i) => d.v1i = i) - _.sortBy(allTokens, d => -d.v0).forEach((d, i) => d.v0i = i) - - var topTokens = allTokens.filter(d => d.v0i <= pair.count || d.v1i <= pair.count) - - - var logitExtent = d3.extent(topTokens.map(d => d.v0).concat(topTokens.map(d => d.v1))) - - var tokens = allTokens - .filter(d => logitExtent[0] <= d.v0 && logitExtent[0] <= d.v1) - - var mag = logitExtent[1] - logitExtent[0] - logitExtent = [logitExtent[0] - mag*.002, logitExtent[1] + mag*.002] - - if (pair.type == 'Differences') tokens = _.sortBy(allTokens, d => -d.meanV).slice(0, pair.count) - - tokens.forEach(d => { - d.isVisible = true - }) - - var maxDif = d3.max(d3.extent(tokens, d => d.dif).map(Math.abs)) - var color = palette(-maxDif*.8, maxDif*.8) - - updateSentenceLabels() - - if (pair.type == 'Likelihoods'){ - drawXY() - } else{ - drawRotated() - } - - sel.classed('is-xy', pair.type == 'Likelihoods') - sel.classed('is-rotate', pair.type != 'Likelihoods') - - - function drawXY(){ - c.x.domain(logitExtent) - c.y.domain(logitExtent) - - d3.drawAxis(c) - - var s = {30: 4, 200: 3, 1000: 3}[pair.count] || 2 - var scatterData = allTokens.map(d => { - var x = c.x(d.v0) - var y = c.y(d.v1) - var fill = color(d.dif) - var dif = d.dif - var word = d.word - var show = '' - var isVisible = d.isVisible - - return {x, y, s, dif, fill, word, show, isVisible} - }) - - var textCandidates = _.sortBy(scatterData.filter(d => d.isVisible), d => d.dif) - d3.nestBy(textCandidates.slice(0, 1000), d => Math.round(d.y/10)) - .forEach(d => d[0].show = 'uf') - d3.nestBy(textCandidates.reverse().slice(0, 1000), d => Math.round(d.y/10)) - .forEach(d => d[0].show = 'lr') - - logitExtent.pair = pair - scatter.draw(c, scatterData, true) - - c.svg.selectAppend('text.x-axis-label.xy-only') - .translate([c.width/2, c.height + 24]) - .text(pair.label0 ? ' __ likelihood, ' + pair.label0 + ' sentence →' : '__ likelihood, sentence two →') - .st({fill: util.colors[0]}) - .at({textAnchor: 'middle'}) - - - c.svg.selectAppend('g.y-axis-label.xy-only') - .translate([c.width + 20, c.height/2]) - .selectAppend('text') - .text(pair.label1 ? ' __ likelihood, ' + pair.label1 + ' sentence →' : '__ likelihood, sentence one →') - .st({fill: util.colors[1]}) - .at({textAnchor: 'middle', transform: 'rotate(-90)'}) - } - - function drawRotated(){ - c.x.domain(d3.extent(tokens, d => d.meanV)) - c.y.domain([maxDif, -maxDif]) - - d3.drawAxis(c) - - var scatterData = allTokens.map(d => { - var x = c.x(d.meanV) - var y = c.y(d.dif) - var fill = color(d.dif) - var word = d.word - var show = '' - var isVisible = d.isVisible - - return {x, y, s: 2, fill, word, show, isVisible} - }) - - scatterData.forEach(d => { - d.dx = d.x - c.width/2 - d.dy = d.y - c.height/2 - }) - - var textCandidates = _.sortBy(scatterData, d => -d.dx*d.dx - d.dy*d.dy) - .filter(d => d.isVisible) - .slice(0, 5000) - d3.nestBy(textCandidates, d => Math.round(12*Math.atan2(d.dx, d.dy))) - .map(d => d[0]) - .forEach(d => d.show = (d.dy < 0 ? 'u' : 'l') + (d.dx < 0 ? 'l' : 'r')) - - scatter.draw(c, scatterData, false) - - c.svg.selectAppend('text.rotate-only.x-axis-label') - .translate([c.width/2, c.height + 24]) - .text('__ likelihood, both sentences →') - .at({textAnchor: 'middle'}) - .st({fill: '#000'}) - - c.svg.selectAll('g.rotate-only.sent-1,g.rotate-only.sent-1').remove() - c.svg.selectAppend('g.rotate-only.sent-1') - .translate([c.width + 20, c.height/2]) - .append('text') - .text(`Higher likelihood, ${pair.label1 ? pair.label1 + ' sentence ' : 'sentence one'} →`) - .at({textAnchor: 'start', transform: 'rotate(-90)', x: 20}) - .st({fill: util.colors[1]}) - - c.svg.selectAppend('g.rotate-only.sent-1') - .translate([c.width + 20, c.height/2 + 0]) - .append('text') - .text(`← Higher likelihood, ${pair.label0 ? pair.label0 + ' sentence ' : 'sentence two'}`) - .at({textAnchor: 'end', transform: 'rotate(-90)', x: -20}) - .st({fill: util.colors[0]}) - } - } - - function updateSentenceLabels(){ - var t0 = tokenizer.tokenize(pair.s0) - var t1 = tokenizer.tokenize(pair.s1) - - var i = 0 - while (t0[i] == t1[i] && i < t0.length) i++ - - var j = 1 - while (t0[t0.length - j] == t1[t1.length - j] && j < t0.length) j++ - - pair.label0 = tokens2origStr(t0, pair.s0) - pair.label1 = tokens2origStr(t1, pair.s1) - - function tokens2origStr(t, s){ - var tokenStr = tokenizer.decode(t.slice(i, -j + 1)).trim() - var lowerStr = s.toLowerCase() - - var startI = lowerStr.indexOf(tokenStr) - return s.slice(startI, startI + tokenStr.length) - } - - if ( - !pair.label0.length || - !pair.label1.length || - pair.label0.length > 15 || - pair.label1.length > 15){ - pair.label0 = '' - pair.label1 = '' - } - - // console.log(i, j, pair.label0, pair.label1) - } -} - -if (window.init) init() diff --git a/spaces/merve/uncertainty-calibration/public/uncertainty-calibration/init.js b/spaces/merve/uncertainty-calibration/public/uncertainty-calibration/init.js deleted file mode 100644 index d23a4fecea1bfa4fae6557043d8053dc3acc29ce..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/public/uncertainty-calibration/init.js +++ /dev/null @@ -1,36 +0,0 @@ -window.thresholds = [0, 0.2, 0.4, 0.6, 0.8, 1]; -window.emojis = ['☀️','🌧️']; -window.constant_score = 0.5; - -window.ttSel = d3.select('body').selectAppend('div.tooltip.tooltip-hidden') - - -window.init = function(){ - - var graphSel = d3.select('#graph') - var width = height = graphSel.node().offsetWidth - if (innerWidth <= 925){ - width = innerWidth - height = innerHeight*.65 - window.isMobile = true - } - fig_height = height/2 - fig_width = width - - - window.util = window.initUtil() - window.weatherGraph = window.drawWeatherGraph(graphSel, fig_height, fig_width); - window.calibrationCurve = window.drawCalibrationCurve(graphSel, fig_height, fig_width); - // window.calibrationSlider = window.drawCalibrationSlider(weatherGraph, calibrationCurve, fig_width/2) - // window.modelRemapper = window.drawModelRemapping(fig_width/2); - - - window.slides = window.drawSlides() - weatherGraph.renderThresholds() - -} - -window.init() - - - diff --git a/spaces/mikeee/ultimatumbee/ubee/ubee.py b/spaces/mikeee/ultimatumbee/ubee/ubee.py deleted file mode 100644 index 39c4022a8a24def8cacba4e7872142c3656de3e9..0000000000000000000000000000000000000000 --- a/spaces/mikeee/ultimatumbee/ubee/ubee.py +++ /dev/null @@ -1,45 +0,0 @@ -"""Align via ubee,""" -# pylint: disable= -from itertools import zip_longest -from typing import Iterable, List, Tuple - -from icecream import ic -from logzero import logger - -from ubee.uclas import uclas - - -def ubee( - sents_zh: Iterable, - sents_en: Iterable, - thresh: float = 0.5, -) -> Tuple[List[Tuple[str, str, float]], List[Tuple[str, str]]]: - """Align blocks. - - Args: - sents_zh: list of text, can be any langauge supported by clas-l-user - sents_en: ditto - Returns: - three tuples of aligned blocked - leftovers (unaligned) - """ - res = [] - labels = [*sents_en] - - lo1 = [] - lo2 = labels[:] - - for seq in sents_zh: - ic(seq) - label, likelihood = uclas(seq, labels, thresh=thresh) - if label: - likelihood = round(float(likelihood), 2) - res.append((seq, label, likelihood)) - try: - lo2.remove(label) - except Exception as exc: - logger.error(exc) - logger.info("seq: %s, lable: %s", seq, label) - else: - lo1.append(seq) - return res, [*zip_longest(lo1, lo2)] diff --git a/spaces/miyaaa666/bingo/src/components/ui/select.tsx b/spaces/miyaaa666/bingo/src/components/ui/select.tsx deleted file mode 100644 index 77f12c2996f541b97663de4c9e20ab34d4ec2fac..0000000000000000000000000000000000000000 --- a/spaces/miyaaa666/bingo/src/components/ui/select.tsx +++ /dev/null @@ -1,123 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SelectPrimitive from '@radix-ui/react-select' - -import { cn } from '@/lib/utils' -import { - IconArrowDown, - IconCheck, - IconChevronUpDown -} from '@/components/ui/icons' - -const Select = SelectPrimitive.Root - -const SelectGroup = SelectPrimitive.Group - -const SelectValue = SelectPrimitive.Value - -const SelectTrigger = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - {children} - - - - -)) -SelectTrigger.displayName = SelectPrimitive.Trigger.displayName - -const SelectContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, position = 'popper', ...props }, ref) => ( - - - - {children} - - - -)) -SelectContent.displayName = SelectPrimitive.Content.displayName - -const SelectLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectLabel.displayName = SelectPrimitive.Label.displayName - -const SelectItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - - - - {children} - -)) -SelectItem.displayName = SelectPrimitive.Item.displayName - -const SelectSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectSeparator.displayName = SelectPrimitive.Separator.displayName - -export { - Select, - SelectGroup, - SelectValue, - SelectTrigger, - SelectContent, - SelectLabel, - SelectItem, - SelectSeparator -} diff --git a/spaces/mrmocciai/rvc-models/infer_pack/transforms.py b/spaces/mrmocciai/rvc-models/infer_pack/transforms.py deleted file mode 100644 index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000 --- a/spaces/mrmocciai/rvc-models/infer_pack/transforms.py +++ /dev/null @@ -1,209 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/mshkdm/VToonify/vtoonify/model/stylegan/model.py b/spaces/mshkdm/VToonify/vtoonify/model/stylegan/model.py deleted file mode 100644 index 7a4b00e52902d850b78dea3736324198eb32e075..0000000000000000000000000000000000000000 --- a/spaces/mshkdm/VToonify/vtoonify/model/stylegan/model.py +++ /dev/null @@ -1,719 +0,0 @@ -import math -import random -import functools -import operator - -import torch -from torch import nn -from torch.nn import functional as F -from torch.autograd import Function - -from model.stylegan.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d, conv2d_gradfix - -class PixelNorm(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input): - return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8) - - -def make_kernel(k): - k = torch.tensor(k, dtype=torch.float32) - - if k.ndim == 1: - k = k[None, :] * k[:, None] - - k /= k.sum() - - return k - - -class Upsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) * (factor ** 2) - self.register_buffer("kernel", kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad) - - return out - - -class Downsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) - self.register_buffer("kernel", kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad) - - return out - - -class Blur(nn.Module): - def __init__(self, kernel, pad, upsample_factor=1): - super().__init__() - - kernel = make_kernel(kernel) - - if upsample_factor > 1: - kernel = kernel * (upsample_factor ** 2) - - self.register_buffer("kernel", kernel) - - self.pad = pad - - def forward(self, input): - out = upfirdn2d(input, self.kernel, pad=self.pad) - - return out - - -class EqualConv2d(nn.Module): - def __init__( - self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True, dilation=1 ## modified - ): - super().__init__() - - self.weight = nn.Parameter( - torch.randn(out_channel, in_channel, kernel_size, kernel_size) - ) - self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2) - - self.stride = stride - self.padding = padding - self.dilation = dilation ## modified - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - - else: - self.bias = None - - def forward(self, input): - out = conv2d_gradfix.conv2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - dilation=self.dilation, ## modified - ) - - return out - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]}," - f" {self.weight.shape[2]}, stride={self.stride}, padding={self.padding}, dilation={self.dilation})" ## modified - ) - - -class EqualLinear(nn.Module): - def __init__( - self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None - ): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init)) - - else: - self.bias = None - - self.activation = activation - - self.scale = (1 / math.sqrt(in_dim)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - if self.activation: - out = F.linear(input, self.weight * self.scale) - out = fused_leaky_relu(out, self.bias * self.lr_mul) - - else: - out = F.linear( - input, self.weight * self.scale, bias=self.bias * self.lr_mul - ) - - return out - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})" - ) - - -class ModulatedConv2d(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - demodulate=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - fused=True, - ): - super().__init__() - - self.eps = 1e-8 - self.kernel_size = kernel_size - self.in_channel = in_channel - self.out_channel = out_channel - self.upsample = upsample - self.downsample = downsample - - if upsample: - factor = 2 - p = (len(blur_kernel) - factor) - (kernel_size - 1) - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 + 1 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor) - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1)) - - fan_in = in_channel * kernel_size ** 2 - self.scale = 1 / math.sqrt(fan_in) - self.padding = kernel_size // 2 - - self.weight = nn.Parameter( - torch.randn(1, out_channel, in_channel, kernel_size, kernel_size) - ) - - self.modulation = EqualLinear(style_dim, in_channel, bias_init=1) - - self.demodulate = demodulate - self.fused = fused - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, " - f"upsample={self.upsample}, downsample={self.downsample})" - ) - - def forward(self, input, style, externalweight=None): - batch, in_channel, height, width = input.shape - - if not self.fused: - weight = self.scale * self.weight.squeeze(0) - style = self.modulation(style) - - if self.demodulate: - w = weight.unsqueeze(0) * style.view(batch, 1, in_channel, 1, 1) - dcoefs = (w.square().sum((2, 3, 4)) + 1e-8).rsqrt() - - input = input * style.reshape(batch, in_channel, 1, 1) - - if self.upsample: - weight = weight.transpose(0, 1) - out = conv2d_gradfix.conv_transpose2d( - input, weight, padding=0, stride=2 - ) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - out = conv2d_gradfix.conv2d(input, weight, padding=0, stride=2) - - else: - out = conv2d_gradfix.conv2d(input, weight, padding=self.padding) - - if self.demodulate: - out = out * dcoefs.view(batch, -1, 1, 1) - - return out - - style = self.modulation(style).view(batch, 1, in_channel, 1, 1) - if externalweight is None: - weight = self.scale * self.weight * style - else: - weight = self.scale * (self.weight + externalweight) * style - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8) - weight = weight * demod.view(batch, self.out_channel, 1, 1, 1) - - weight = weight.view( - batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - - if self.upsample: - input = input.view(1, batch * in_channel, height, width) - weight = weight.view( - batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - weight = weight.transpose(1, 2).reshape( - batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size - ) - out = conv2d_gradfix.conv_transpose2d( - input, weight, padding=0, stride=2, groups=batch - ) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - _, _, height, width = input.shape - input = input.view(1, batch * in_channel, height, width) - out = conv2d_gradfix.conv2d( - input, weight, padding=0, stride=2, groups=batch - ) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - else: - input = input.view(1, batch * in_channel, height, width) - out = conv2d_gradfix.conv2d( - input, weight, padding=self.padding, groups=batch - ) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - return out - - -class NoiseInjection(nn.Module): - def __init__(self): - super().__init__() - - self.weight = nn.Parameter(torch.zeros(1)) - - def forward(self, image, noise=None): - if noise is None: - batch, _, height, width = image.shape - noise = image.new_empty(batch, 1, height, width).normal_() - - return image + self.weight * noise - - -class ConstantInput(nn.Module): - def __init__(self, channel, size=4): - super().__init__() - - self.input = nn.Parameter(torch.randn(1, channel, size, size)) - - def forward(self, input): - batch = input.shape[0] - out = self.input.repeat(batch, 1, 1, 1) - - return out - - -class StyledConv(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=False, - blur_kernel=[1, 3, 3, 1], - demodulate=True, - ): - super().__init__() - - self.conv = ModulatedConv2d( - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=upsample, - blur_kernel=blur_kernel, - demodulate=demodulate, - ) - - self.noise = NoiseInjection() - # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1)) - # self.activate = ScaledLeakyReLU(0.2) - self.activate = FusedLeakyReLU(out_channel) - - def forward(self, input, style, noise=None, externalweight=None): - out = self.conv(input, style, externalweight) - out = self.noise(out, noise=noise) - # out = out + self.bias - out = self.activate(out) - - return out - - -class ToRGB(nn.Module): - def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - if upsample: - self.upsample = Upsample(blur_kernel) - - self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, input, style, skip=None, externalweight=None): - out = self.conv(input, style, externalweight) - out = out + self.bias - - if skip is not None: - skip = self.upsample(skip) - - out = out + skip - - return out - - -class Generator(nn.Module): - def __init__( - self, - size, - style_dim, - n_mlp, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - ): - super().__init__() - - self.size = size - - self.style_dim = style_dim - - layers = [PixelNorm()] - - for i in range(n_mlp): - layers.append( - EqualLinear( - style_dim, style_dim, lr_mul=lr_mlp, activation="fused_lrelu" - ) - ) - - self.style = nn.Sequential(*layers) - - self.channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - self.input = ConstantInput(self.channels[4]) - self.conv1 = StyledConv( - self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel - ) - self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False) - - self.log_size = int(math.log(size, 2)) - self.num_layers = (self.log_size - 2) * 2 + 1 - - self.convs = nn.ModuleList() - self.upsamples = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - self.noises = nn.Module() - - in_channel = self.channels[4] - - for layer_idx in range(self.num_layers): - res = (layer_idx + 5) // 2 - shape = [1, 1, 2 ** res, 2 ** res] - self.noises.register_buffer(f"noise_{layer_idx}", torch.randn(*shape)) - - for i in range(3, self.log_size + 1): - out_channel = self.channels[2 ** i] - - self.convs.append( - StyledConv( - in_channel, - out_channel, - 3, - style_dim, - upsample=True, - blur_kernel=blur_kernel, - ) - ) - - self.convs.append( - StyledConv( - out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel - ) - ) - - self.to_rgbs.append(ToRGB(out_channel, style_dim)) - - in_channel = out_channel - - self.n_latent = self.log_size * 2 - 2 - - def make_noise(self): - device = self.input.input.device - - noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device)) - - return noises - - def mean_latent(self, n_latent): - latent_in = torch.randn( - n_latent, self.style_dim, device=self.input.input.device - ) - latent = self.style(latent_in).mean(0, keepdim=True) - - return latent - - def get_latent(self, input): - return self.style(input) - - def forward( - self, - styles, - return_latents=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_latent=False, - noise=None, - randomize_noise=True, - z_plus_latent=False, - return_feature_ind=999, - ): - if not input_is_latent: - if not z_plus_latent: - styles = [self.style(s) for s in styles] - else: - styles_ = [] - for s in styles: - style_ = [] - for i in range(s.shape[1]): - style_.append(self.style(s[:,i]).unsqueeze(1)) - styles_.append(torch.cat(style_,dim=1)) - styles = styles_ - - if noise is None: - if randomize_noise: - noise = [None] * self.num_layers - else: - noise = [ - getattr(self.noises, f"noise_{i}") for i in range(self.num_layers) - ] - - if truncation < 1: - style_t = [] - - for style in styles: - style_t.append( - truncation_latent + truncation * (style - truncation_latent) - ) - - styles = style_t - - if len(styles) < 2: - inject_index = self.n_latent - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - - else: - latent = styles[0] - - else: - if inject_index is None: - inject_index = random.randint(1, self.n_latent - 1) - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1) - - latent = torch.cat([latent, latent2], 1) - else: - latent = torch.cat([styles[0][:,0:inject_index], styles[1][:,inject_index:]], 1) - - out = self.input(latent) - out = self.conv1(out, latent[:, 0], noise=noise[0]) - - skip = self.to_rgb1(out, latent[:, 1]) - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip( - self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs - ): - out = conv1(out, latent[:, i], noise=noise1) - out = conv2(out, latent[:, i + 1], noise=noise2) - skip = to_rgb(out, latent[:, i + 2], skip) - - i += 2 - if i > return_feature_ind: - return out, skip - - image = skip - - if return_latents: - return image, latent - - else: - return image, None - - -class ConvLayer(nn.Sequential): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - downsample=False, - blur_kernel=[1, 3, 3, 1], - bias=True, - activate=True, - dilation=1, ## modified - ): - layers = [] - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - layers.append(Blur(blur_kernel, pad=(pad0, pad1))) - - stride = 2 - self.padding = 0 - - else: - stride = 1 - self.padding = kernel_size // 2 + dilation-1 ## modified - - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=self.padding, - stride=stride, - bias=bias and not activate, - dilation=dilation, ## modified - ) - ) - - if activate: - layers.append(FusedLeakyReLU(out_channel, bias=bias)) - - super().__init__(*layers) - - -class ResBlock(nn.Module): - def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - self.conv1 = ConvLayer(in_channel, in_channel, 3) - self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True) - - self.skip = ConvLayer( - in_channel, out_channel, 1, downsample=True, activate=False, bias=False - ) - - def forward(self, input): - out = self.conv1(input) - out = self.conv2(out) - - skip = self.skip(input) - out = (out + skip) / math.sqrt(2) - - return out - - -class Discriminator(nn.Module): - def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - convs = [ConvLayer(3, channels[size], 1)] - - log_size = int(math.log(size, 2)) - - in_channel = channels[size] - - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - self.stddev_group = 4 - self.stddev_feat = 1 - - self.final_conv = ConvLayer(in_channel + 1, channels[4], 3) - self.final_linear = nn.Sequential( - EqualLinear(channels[4] * 4 * 4, channels[4], activation="fused_lrelu"), - EqualLinear(channels[4], 1), - ) - - def forward(self, input): - out = self.convs(input) - - batch, channel, height, width = out.shape - group = min(batch, self.stddev_group) - stddev = out.view( - group, -1, self.stddev_feat, channel // self.stddev_feat, height, width - ) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - - out = out.view(batch, -1) - out = self.final_linear(out) - - return out \ No newline at end of file diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/unit2speech/README.md b/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/unit2speech/README.md deleted file mode 100644 index 57104230655c7c517d25904e634c53b6159ee60f..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/unit2speech/README.md +++ /dev/null @@ -1,42 +0,0 @@ -# Unit to Speech Model (unit2speech) - -Unit to speech model is modified Tacotron2 model that learns to synthesize speech from discrete speech units. All models are trained on quantized [LJSpeech](https://keithito.com/LJ-Speech-Dataset/). - -Upstream Units | Download Link -|-|- -Log Mel Filterbank + KM50 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/tts_km50/tts_checkpoint_best.pt) -Log Mel Filterbank + KM100 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/tts_km100/tts_checkpoint_best.pt) -Log Mel Filterbank + KM200 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/tts_km200/tts_checkpoint_best.pt) -Log Mel Filterbank + KM500 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/tts_km500/tts_checkpoint_best.pt) -Modified CPC + KM50 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/tts_km50/tts_checkpoint_best.pt) -Modified CPC + KM100 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/tts_km100/tts_checkpoint_best.pt) -Modified CPC + KM200 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/tts_km200/tts_checkpoint_best.pt) -Modified CPC + KM500 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/tts_km500/tts_checkpoint_best.pt) -HuBERT Base + KM50 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/tts_km50/tts_checkpoint_best.pt) -HuBERT Base + KM100 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/tts_km100/tts_checkpoint_best.pt) -HuBERT Base + KM200 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/tts_km200/tts_checkpoint_best.pt) -HuBERT Base + KM500 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/tts_km500/tts_checkpoint_best.pt) -wav2vec 2.0 Large + KM50 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/tts_km50/tts_checkpoint_best.pt) -wav2vec 2.0 Large + KM100 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/tts_km100/tts_checkpoint_best.pt) -wav2vec 2.0 Large + KM200 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/tts_km200/tts_checkpoint_best.pt) -wav2vec 2.0 Large + KM500 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/tts_km500/tts_checkpoint_best.pt) - -## Run inference using a unit2speech model -* Install librosa, unidecode and inflect using `pip install librosa, unidecode, inflect` -* Download [Waveglow checkpoint](https://dl.fbaipublicfiles.com/textless_nlp/gslm/waveglow_256channels_new.pt). This is the vocoder. - -Sample commnd to run inference using trained unit2speech models. Please note that the quantized audio to synthesized should be using the same units as the unit2speech model was trained with. -``` -FAIRSEQ_ROOT= -TTS_MODEL_PATH= -QUANTIZED_UNIT_PATH= -OUT_DIR= -WAVEGLOW_PATH= - -PYTHONPATH=${FAIRSEQ_ROOT}:${FAIRSEQ_ROOT}/examples/textless_nlp/gslm/unit2speech python ${FAIRSEQ_ROOT}/examples/textless_nlp/gslm/unit2speech/synthesize_audio_from_units.py \ - --tts_model_path $TTS_MODEL_PATH \ - --quantized_unit_path $QUANTIZED_UNIT_PATH \ - --out_audio_dir $OUT_DIR \ - --waveglow_path $WAVEGLOW_PATH \ - --max_decoder_steps 2000 -``` \ No newline at end of file diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/dataclass/utils.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/dataclass/utils.py deleted file mode 100644 index 1320ec473756c78ec949f72f9260420c19caff0f..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/dataclass/utils.py +++ /dev/null @@ -1,493 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import ast -import inspect -import logging -import os -import re -from argparse import ArgumentError, ArgumentParser, Namespace -from dataclasses import _MISSING_TYPE, MISSING, is_dataclass -from enum import Enum -from typing import Any, Dict, List, Optional, Tuple, Type - -from fairseq.dataclass import FairseqDataclass -from fairseq.dataclass.configs import FairseqConfig -from hydra.core.global_hydra import GlobalHydra -from hydra.experimental import compose, initialize -from omegaconf import DictConfig, OmegaConf, open_dict, _utils - -logger = logging.getLogger(__name__) - - -def eval_str_list(x, x_type=float): - if x is None: - return None - if isinstance(x, str): - if len(x) == 0: - return [] - x = ast.literal_eval(x) - try: - return list(map(x_type, x)) - except TypeError: - return [x_type(x)] - - -def interpret_dc_type(field_type): - if isinstance(field_type, str): - raise RuntimeError("field should be a type") - - if field_type == Any: - return str - - typestring = str(field_type) - if re.match( - r"(typing.|^)Union\[(.*), NoneType\]$", typestring - ) or typestring.startswith("typing.Optional"): - return field_type.__args__[0] - return field_type - - -def gen_parser_from_dataclass( - parser: ArgumentParser, - dataclass_instance: FairseqDataclass, - delete_default: bool = False, - with_prefix: Optional[str] = None, -) -> None: - """ - convert a dataclass instance to tailing parser arguments. - - If `with_prefix` is provided, prefix all the keys in the resulting parser with it. It means that we are - building a flat namespace from a structured dataclass (see transformer_config.py for example). - """ - - def argparse_name(name: str): - if name == "data" and (with_prefix is None or with_prefix == ''): - # normally data is positional args, so we don't add the -- nor the prefix - return name - if name == "_name": - # private member, skip - return None - full_name = "--" + name.replace("_", "-") - if with_prefix is not None and with_prefix != '': - # if a prefix is specified, construct the prefixed arg name - full_name = with_prefix + "-" + full_name[2:] # strip -- when composing - return full_name - - def get_kwargs_from_dc( - dataclass_instance: FairseqDataclass, k: str - ) -> Dict[str, Any]: - """k: dataclass attributes""" - - kwargs = {} - - field_type = dataclass_instance._get_type(k) - inter_type = interpret_dc_type(field_type) - - field_default = dataclass_instance._get_default(k) - - if isinstance(inter_type, type) and issubclass(inter_type, Enum): - field_choices = [t.value for t in list(inter_type)] - else: - field_choices = None - - field_help = dataclass_instance._get_help(k) - field_const = dataclass_instance._get_argparse_const(k) - - if isinstance(field_default, str) and field_default.startswith("${"): - kwargs["default"] = field_default - else: - if field_default is MISSING: - kwargs["required"] = True - if field_choices is not None: - kwargs["choices"] = field_choices - if ( - isinstance(inter_type, type) - and (issubclass(inter_type, List) or issubclass(inter_type, Tuple)) - ) or ("List" in str(inter_type) or "Tuple" in str(inter_type)): - if "int" in str(inter_type): - kwargs["type"] = lambda x: eval_str_list(x, int) - elif "float" in str(inter_type): - kwargs["type"] = lambda x: eval_str_list(x, float) - elif "str" in str(inter_type): - kwargs["type"] = lambda x: eval_str_list(x, str) - else: - raise NotImplementedError( - "parsing of type " + str(inter_type) + " is not implemented" - ) - if field_default is not MISSING: - kwargs["default"] = ( - ",".join(map(str, field_default)) - if field_default is not None - else None - ) - elif ( - isinstance(inter_type, type) and issubclass(inter_type, Enum) - ) or "Enum" in str(inter_type): - kwargs["type"] = str - if field_default is not MISSING: - if isinstance(field_default, Enum): - kwargs["default"] = field_default.value - else: - kwargs["default"] = field_default - elif inter_type is bool: - kwargs["action"] = ( - "store_false" if field_default is True else "store_true" - ) - kwargs["default"] = field_default - else: - kwargs["type"] = inter_type - if field_default is not MISSING: - kwargs["default"] = field_default - - # build the help with the hierarchical prefix - if with_prefix is not None and with_prefix != '' and field_help is not None: - field_help = with_prefix[2:] + ': ' + field_help - - kwargs["help"] = field_help - if field_const is not None: - kwargs["const"] = field_const - kwargs["nargs"] = "?" - - return kwargs - - for k in dataclass_instance._get_all_attributes(): - field_name = argparse_name(dataclass_instance._get_name(k)) - field_type = dataclass_instance._get_type(k) - if field_name is None: - continue - elif inspect.isclass(field_type) and issubclass(field_type, FairseqDataclass): - # for fields that are of type FairseqDataclass, we can recursively - # add their fields to the namespace (so we add the args from model, task, etc. to the root namespace) - prefix = None - if with_prefix is not None: - # if a prefix is specified, then we don't want to copy the subfields directly to the root namespace - # but we prefix them with the name of the current field. - prefix = field_name - gen_parser_from_dataclass(parser, field_type(), delete_default, prefix) - continue - - kwargs = get_kwargs_from_dc(dataclass_instance, k) - - field_args = [field_name] - alias = dataclass_instance._get_argparse_alias(k) - if alias is not None: - field_args.append(alias) - - if "default" in kwargs: - if isinstance(kwargs["default"], str) and kwargs["default"].startswith( - "${" - ): - if kwargs["help"] is None: - # this is a field with a name that will be added elsewhere - continue - else: - del kwargs["default"] - if delete_default and "default" in kwargs: - del kwargs["default"] - try: - parser.add_argument(*field_args, **kwargs) - except ArgumentError: - pass - - -def _set_legacy_defaults(args, cls): - """Helper to set default arguments based on *add_args*.""" - if not hasattr(cls, "add_args"): - return - - import argparse - - parser = argparse.ArgumentParser( - argument_default=argparse.SUPPRESS, allow_abbrev=False - ) - cls.add_args(parser) - # copied from argparse.py: - defaults = argparse.Namespace() - for action in parser._actions: - if action.dest is not argparse.SUPPRESS: - if not hasattr(defaults, action.dest): - if action.default is not argparse.SUPPRESS: - setattr(defaults, action.dest, action.default) - for key, default_value in vars(defaults).items(): - if not hasattr(args, key): - setattr(args, key, default_value) - - -def _override_attr( - sub_node: str, data_class: Type[FairseqDataclass], args: Namespace -) -> List[str]: - overrides = [] - - if not inspect.isclass(data_class) or not issubclass(data_class, FairseqDataclass): - return overrides - - def get_default(f): - if not isinstance(f.default_factory, _MISSING_TYPE): - return f.default_factory() - return f.default - - for k, v in data_class.__dataclass_fields__.items(): - if k.startswith("_"): - # private member, skip - continue - - val = get_default(v) if not hasattr(args, k) else getattr(args, k) - - field_type = interpret_dc_type(v.type) - if ( - isinstance(val, str) - and not val.startswith("${") # not interpolation - and field_type != str - and ( - not inspect.isclass(field_type) or not issubclass(field_type, Enum) - ) # not choices enum - ): - # upgrade old models that stored complex parameters as string - val = ast.literal_eval(val) - - if isinstance(val, tuple): - val = list(val) - - v_type = getattr(v.type, "__origin__", None) - if ( - (v_type is List or v_type is list or v_type is Optional) - # skip interpolation - and not (isinstance(val, str) and val.startswith("${")) - ): - # if type is int but val is float, then we will crash later - try to convert here - if hasattr(v.type, "__args__"): - t_args = v.type.__args__ - if len(t_args) == 1 and (t_args[0] is float or t_args[0] is int): - val = list(map(t_args[0], val)) - elif val is not None and ( - field_type is int or field_type is bool or field_type is float - ): - try: - val = field_type(val) - except: - pass # ignore errors here, they are often from interpolation args - - if val is None: - overrides.append("{}.{}=null".format(sub_node, k)) - elif val == "": - overrides.append("{}.{}=''".format(sub_node, k)) - elif isinstance(val, str): - val = val.replace("'", r"\'") - overrides.append("{}.{}='{}'".format(sub_node, k, val)) - elif isinstance(val, FairseqDataclass): - overrides += _override_attr(f"{sub_node}.{k}", type(val), args) - elif isinstance(val, Namespace): - sub_overrides, _ = override_module_args(val) - for so in sub_overrides: - overrides.append(f"{sub_node}.{k}.{so}") - else: - overrides.append("{}.{}={}".format(sub_node, k, val)) - - return overrides - - -def migrate_registry( - name, value, registry, args, overrides, deletes, use_name_as_val=False -): - if value in registry: - overrides.append("{}={}".format(name, value)) - overrides.append("{}._name={}".format(name, value)) - overrides.extend(_override_attr(name, registry[value], args)) - elif use_name_as_val and value is not None: - overrides.append("{}={}".format(name, value)) - else: - deletes.append(name) - - -def override_module_args(args: Namespace) -> Tuple[List[str], List[str]]: - """use the field in args to overrides those in cfg""" - overrides = [] - deletes = [] - - for k in FairseqConfig.__dataclass_fields__.keys(): - overrides.extend( - _override_attr(k, FairseqConfig.__dataclass_fields__[k].type, args) - ) - - if args is not None: - if hasattr(args, "task"): - from fairseq.tasks import TASK_DATACLASS_REGISTRY - - migrate_registry( - "task", args.task, TASK_DATACLASS_REGISTRY, args, overrides, deletes - ) - else: - deletes.append("task") - - # these options will be set to "None" if they have not yet been migrated - # so we can populate them with the entire flat args - CORE_REGISTRIES = {"criterion", "optimizer", "lr_scheduler"} - - from fairseq.registry import REGISTRIES - - for k, v in REGISTRIES.items(): - if hasattr(args, k): - migrate_registry( - k, - getattr(args, k), - v["dataclass_registry"], - args, - overrides, - deletes, - use_name_as_val=k not in CORE_REGISTRIES, - ) - else: - deletes.append(k) - - no_dc = True - if hasattr(args, "arch"): - from fairseq.models import ARCH_MODEL_REGISTRY, ARCH_MODEL_NAME_REGISTRY - - if args.arch in ARCH_MODEL_REGISTRY: - m_cls = ARCH_MODEL_REGISTRY[args.arch] - dc = getattr(m_cls, "__dataclass", None) - if dc is not None: - m_name = ARCH_MODEL_NAME_REGISTRY[args.arch] - overrides.append("model={}".format(m_name)) - overrides.append("model._name={}".format(args.arch)) - # override model params with those exist in args - overrides.extend(_override_attr("model", dc, args)) - no_dc = False - if no_dc: - deletes.append("model") - - return overrides, deletes - - -class omegaconf_no_object_check: - def __init__(self): - self.old_is_primitive = _utils.is_primitive_type - - def __enter__(self): - _utils.is_primitive_type = lambda _: True - - def __exit__(self, type, value, traceback): - _utils.is_primitive_type = self.old_is_primitive - - -def convert_namespace_to_omegaconf(args: Namespace) -> DictConfig: - """Convert a flat argparse.Namespace to a structured DictConfig.""" - - # Here we are using field values provided in args to override counterparts inside config object - overrides, deletes = override_module_args(args) - - # configs will be in fairseq/config after installation - config_path = os.path.join("..", "config") - - GlobalHydra.instance().clear() - - with initialize(config_path=config_path): - try: - composed_cfg = compose("config", overrides=overrides, strict=False) - except: - logger.error("Error when composing. Overrides: " + str(overrides)) - raise - - for k in deletes: - composed_cfg[k] = None - - cfg = OmegaConf.create( - OmegaConf.to_container(composed_cfg, resolve=True, enum_to_str=True) - ) - - # hack to be able to set Namespace in dict config. this should be removed when we update to newer - # omegaconf version that supports object flags, or when we migrate all existing models - from omegaconf import _utils - - with omegaconf_no_object_check(): - if cfg.task is None and getattr(args, "task", None): - cfg.task = Namespace(**vars(args)) - from fairseq.tasks import TASK_REGISTRY - - _set_legacy_defaults(cfg.task, TASK_REGISTRY[args.task]) - cfg.task._name = args.task - if cfg.model is None and getattr(args, "arch", None): - cfg.model = Namespace(**vars(args)) - from fairseq.models import ARCH_MODEL_REGISTRY - - _set_legacy_defaults(cfg.model, ARCH_MODEL_REGISTRY[args.arch]) - cfg.model._name = args.arch - if cfg.optimizer is None and getattr(args, "optimizer", None): - cfg.optimizer = Namespace(**vars(args)) - from fairseq.optim import OPTIMIZER_REGISTRY - - _set_legacy_defaults(cfg.optimizer, OPTIMIZER_REGISTRY[args.optimizer]) - cfg.optimizer._name = args.optimizer - if cfg.lr_scheduler is None and getattr(args, "lr_scheduler", None): - cfg.lr_scheduler = Namespace(**vars(args)) - from fairseq.optim.lr_scheduler import LR_SCHEDULER_REGISTRY - - _set_legacy_defaults( - cfg.lr_scheduler, LR_SCHEDULER_REGISTRY[args.lr_scheduler] - ) - cfg.lr_scheduler._name = args.lr_scheduler - if cfg.criterion is None and getattr(args, "criterion", None): - cfg.criterion = Namespace(**vars(args)) - from fairseq.criterions import CRITERION_REGISTRY - - _set_legacy_defaults(cfg.criterion, CRITERION_REGISTRY[args.criterion]) - cfg.criterion._name = args.criterion - - OmegaConf.set_struct(cfg, True) - return cfg - - -def overwrite_args_by_name(cfg: DictConfig, overrides: Dict[str, any]): - # this will be deprecated when we get rid of argparse and model_overrides logic - - from fairseq.registry import REGISTRIES - - with open_dict(cfg): - for k in cfg.keys(): - # "k in cfg" will return false if its a "mandatory value (e.g. ???)" - if k in cfg and isinstance(cfg[k], DictConfig): - if k in overrides and isinstance(overrides[k], dict): - for ok, ov in overrides[k].items(): - if isinstance(ov, dict) and cfg[k][ok] is not None: - overwrite_args_by_name(cfg[k][ok], ov) - else: - cfg[k][ok] = ov - else: - overwrite_args_by_name(cfg[k], overrides) - elif k in cfg and isinstance(cfg[k], Namespace): - for override_key, val in overrides.items(): - setattr(cfg[k], override_key, val) - elif k in overrides: - if ( - k in REGISTRIES - and overrides[k] in REGISTRIES[k]["dataclass_registry"] - ): - cfg[k] = DictConfig( - REGISTRIES[k]["dataclass_registry"][overrides[k]] - ) - overwrite_args_by_name(cfg[k], overrides) - cfg[k]._name = overrides[k] - else: - cfg[k] = overrides[k] - - -def merge_with_parent(dc: FairseqDataclass, cfg: DictConfig, remove_missing=True): - if remove_missing: - - if is_dataclass(dc): - target_keys = set(dc.__dataclass_fields__.keys()) - else: - target_keys = set(dc.keys()) - - with open_dict(cfg): - for k in list(cfg.keys()): - if k not in target_keys: - del cfg[k] - - merged_cfg = OmegaConf.merge(dc, cfg) - merged_cfg.__dict__["_parent"] = cfg.__dict__["_parent"] - OmegaConf.set_struct(merged_cfg, True) - return merged_cfg diff --git a/spaces/mthsk/sovits-models/inference/__init__.py b/spaces/mthsk/sovits-models/inference/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/mygyasir/Real-Time-Voice-Cloning/encoder/visualizations.py b/spaces/mygyasir/Real-Time-Voice-Cloning/encoder/visualizations.py deleted file mode 100644 index 980c74f95f1f7df41ebccc983600b2713c0b0502..0000000000000000000000000000000000000000 --- a/spaces/mygyasir/Real-Time-Voice-Cloning/encoder/visualizations.py +++ /dev/null @@ -1,178 +0,0 @@ -from encoder.data_objects.speaker_verification_dataset import SpeakerVerificationDataset -from datetime import datetime -from time import perf_counter as timer -import matplotlib.pyplot as plt -import numpy as np -# import webbrowser -import visdom -import umap - -colormap = np.array([ - [76, 255, 0], - [0, 127, 70], - [255, 0, 0], - [255, 217, 38], - [0, 135, 255], - [165, 0, 165], - [255, 167, 255], - [0, 255, 255], - [255, 96, 38], - [142, 76, 0], - [33, 0, 127], - [0, 0, 0], - [183, 183, 183], -], dtype=np.float) / 255 - - -class Visualizations: - def __init__(self, env_name=None, update_every=10, server="http://localhost", disabled=False): - # Tracking data - self.last_update_timestamp = timer() - self.update_every = update_every - self.step_times = [] - self.losses = [] - self.eers = [] - print("Updating the visualizations every %d steps." % update_every) - - # If visdom is disabled TODO: use a better paradigm for that - self.disabled = disabled - if self.disabled: - return - - # Set the environment name - now = str(datetime.now().strftime("%d-%m %Hh%M")) - if env_name is None: - self.env_name = now - else: - self.env_name = "%s (%s)" % (env_name, now) - - # Connect to visdom and open the corresponding window in the browser - try: - self.vis = visdom.Visdom(server, env=self.env_name, raise_exceptions=True) - except ConnectionError: - raise Exception("No visdom server detected. Run the command \"visdom\" in your CLI to " - "start it.") - # webbrowser.open("http://localhost:8097/env/" + self.env_name) - - # Create the windows - self.loss_win = None - self.eer_win = None - # self.lr_win = None - self.implementation_win = None - self.projection_win = None - self.implementation_string = "" - - def log_params(self): - if self.disabled: - return - from encoder import params_data - from encoder import params_model - param_string = "Model parameters:
    " - for param_name in (p for p in dir(params_model) if not p.startswith("__")): - value = getattr(params_model, param_name) - param_string += "\t%s: %s
    " % (param_name, value) - param_string += "Data parameters:
    " - for param_name in (p for p in dir(params_data) if not p.startswith("__")): - value = getattr(params_data, param_name) - param_string += "\t%s: %s
    " % (param_name, value) - self.vis.text(param_string, opts={"title": "Parameters"}) - - def log_dataset(self, dataset: SpeakerVerificationDataset): - if self.disabled: - return - dataset_string = "" - dataset_string += "Speakers: %s\n" % len(dataset.speakers) - dataset_string += "\n" + dataset.get_logs() - dataset_string = dataset_string.replace("\n", "
    ") - self.vis.text(dataset_string, opts={"title": "Dataset"}) - - def log_implementation(self, params): - if self.disabled: - return - implementation_string = "" - for param, value in params.items(): - implementation_string += "%s: %s\n" % (param, value) - implementation_string = implementation_string.replace("\n", "
    ") - self.implementation_string = implementation_string - self.implementation_win = self.vis.text( - implementation_string, - opts={"title": "Training implementation"} - ) - - def update(self, loss, eer, step): - # Update the tracking data - now = timer() - self.step_times.append(1000 * (now - self.last_update_timestamp)) - self.last_update_timestamp = now - self.losses.append(loss) - self.eers.append(eer) - print(".", end="") - - # Update the plots every steps - if step % self.update_every != 0: - return - time_string = "Step time: mean: %5dms std: %5dms" % \ - (int(np.mean(self.step_times)), int(np.std(self.step_times))) - print("\nStep %6d Loss: %.4f EER: %.4f %s" % - (step, np.mean(self.losses), np.mean(self.eers), time_string)) - if not self.disabled: - self.loss_win = self.vis.line( - [np.mean(self.losses)], - [step], - win=self.loss_win, - update="append" if self.loss_win else None, - opts=dict( - legend=["Avg. loss"], - xlabel="Step", - ylabel="Loss", - title="Loss", - ) - ) - self.eer_win = self.vis.line( - [np.mean(self.eers)], - [step], - win=self.eer_win, - update="append" if self.eer_win else None, - opts=dict( - legend=["Avg. EER"], - xlabel="Step", - ylabel="EER", - title="Equal error rate" - ) - ) - if self.implementation_win is not None: - self.vis.text( - self.implementation_string + ("%s" % time_string), - win=self.implementation_win, - opts={"title": "Training implementation"}, - ) - - # Reset the tracking - self.losses.clear() - self.eers.clear() - self.step_times.clear() - - def draw_projections(self, embeds, utterances_per_speaker, step, out_fpath=None, - max_speakers=10): - max_speakers = min(max_speakers, len(colormap)) - embeds = embeds[:max_speakers * utterances_per_speaker] - - n_speakers = len(embeds) // utterances_per_speaker - ground_truth = np.repeat(np.arange(n_speakers), utterances_per_speaker) - colors = [colormap[i] for i in ground_truth] - - reducer = umap.UMAP() - projected = reducer.fit_transform(embeds) - plt.scatter(projected[:, 0], projected[:, 1], c=colors) - plt.gca().set_aspect("equal", "datalim") - plt.title("UMAP projection (step %d)" % step) - if not self.disabled: - self.projection_win = self.vis.matplot(plt, win=self.projection_win) - if out_fpath is not None: - plt.savefig(out_fpath) - plt.clf() - - def save(self): - if not self.disabled: - self.vis.save([self.env_name]) - \ No newline at end of file diff --git a/spaces/nakas/MusicGenDemucs/tests/quantization/test_vq.py b/spaces/nakas/MusicGenDemucs/tests/quantization/test_vq.py deleted file mode 100644 index c215099fedacae35c6798fdd9b8420a447aa16bb..0000000000000000000000000000000000000000 --- a/spaces/nakas/MusicGenDemucs/tests/quantization/test_vq.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from audiocraft.quantization.vq import ResidualVectorQuantizer - - -class TestResidualVectorQuantizer: - - def test_rvq(self): - x = torch.randn(1, 16, 2048) - vq = ResidualVectorQuantizer(n_q=8, dimension=16, bins=8) - res = vq(x, 1.) - assert res.x.shape == torch.Size([1, 16, 2048]) diff --git a/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/lib/sdf.py b/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/lib/sdf.py deleted file mode 100644 index e87e639eb94993c3e4068d6bd4d21f902aee7694..0000000000000000000000000000000000000000 --- a/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/lib/sdf.py +++ /dev/null @@ -1,100 +0,0 @@ -import numpy as np - - -def create_grid(resX, resY, resZ, b_min=np.array([0, 0, 0]), b_max=np.array([1, 1, 1]), transform=None): - ''' - Create a dense grid of given resolution and bounding box - :param resX: resolution along X axis - :param resY: resolution along Y axis - :param resZ: resolution along Z axis - :param b_min: vec3 (x_min, y_min, z_min) bounding box corner - :param b_max: vec3 (x_max, y_max, z_max) bounding box corner - :return: [3, resX, resY, resZ] coordinates of the grid, and transform matrix from mesh index - ''' - coords = np.mgrid[:resX, :resY, :resZ] - coords = coords.reshape(3, -1) - coords_matrix = np.eye(4) - length = b_max - b_min - coords_matrix[0, 0] = length[0] / resX - coords_matrix[1, 1] = length[1] / resY - coords_matrix[2, 2] = length[2] / resZ - coords_matrix[0:3, 3] = b_min - coords = np.matmul(coords_matrix[:3, :3], coords) + coords_matrix[:3, 3:4] - if transform is not None: - coords = np.matmul(transform[:3, :3], coords) + transform[:3, 3:4] - coords_matrix = np.matmul(transform, coords_matrix) - coords = coords.reshape(3, resX, resY, resZ) - return coords, coords_matrix - - -def batch_eval(points, eval_func, num_samples=512 * 512 * 512): - num_pts = points.shape[1] - sdf = np.zeros(num_pts) - - num_batches = num_pts // num_samples - for i in range(num_batches): - sdf[i * num_samples:i * num_samples + num_samples] = eval_func( - points[:, i * num_samples:i * num_samples + num_samples]) - if num_pts % num_samples: - sdf[num_batches * num_samples:] = eval_func(points[:, num_batches * num_samples:]) - - return sdf - - -def eval_grid(coords, eval_func, num_samples=512 * 512 * 512): - resolution = coords.shape[1:4] - coords = coords.reshape([3, -1]) - sdf = batch_eval(coords, eval_func, num_samples=num_samples) - return sdf.reshape(resolution) - - -def eval_grid_octree(coords, eval_func, - init_resolution=64, threshold=0.01, - num_samples=512 * 512 * 512): - resolution = coords.shape[1:4] - - sdf = np.zeros(resolution) - - dirty = np.ones(resolution, dtype=np.bool) - grid_mask = np.zeros(resolution, dtype=np.bool) - - reso = resolution[0] // init_resolution - - while reso > 0: - # subdivide the grid - grid_mask[0:resolution[0]:reso, 0:resolution[1]:reso, 0:resolution[2]:reso] = True - # test samples in this iteration - test_mask = np.logical_and(grid_mask, dirty) - #print('step size:', reso, 'test sample size:', test_mask.sum()) - points = coords[:, test_mask] - - sdf[test_mask] = batch_eval(points, eval_func, num_samples=num_samples) - dirty[test_mask] = False - - # do interpolation - if reso <= 1: - break - for x in range(0, resolution[0] - reso, reso): - for y in range(0, resolution[1] - reso, reso): - for z in range(0, resolution[2] - reso, reso): - # if center marked, return - if not dirty[x + reso // 2, y + reso // 2, z + reso // 2]: - continue - v0 = sdf[x, y, z] - v1 = sdf[x, y, z + reso] - v2 = sdf[x, y + reso, z] - v3 = sdf[x, y + reso, z + reso] - v4 = sdf[x + reso, y, z] - v5 = sdf[x + reso, y, z + reso] - v6 = sdf[x + reso, y + reso, z] - v7 = sdf[x + reso, y + reso, z + reso] - v = np.array([v0, v1, v2, v3, v4, v5, v6, v7]) - v_min = v.min() - v_max = v.max() - # this cell is all the same - if (v_max - v_min) < threshold: - sdf[x:x + reso, y:y + reso, z:z + reso] = (v_max + v_min) / 2 - dirty[x:x + reso, y:y + reso, z:z + reso] = False - reso //= 2 - - return sdf.reshape(resolution) diff --git a/spaces/neko321/Voice-Changer1/infer_pack/models.py b/spaces/neko321/Voice-Changer1/infer_pack/models.py deleted file mode 100644 index 1b4b06e5c7c8e84f0ef8b4f0174a5e0ec6800344..0000000000000000000000000000000000000000 --- a/spaces/neko321/Voice-Changer1/infer_pack/models.py +++ /dev/null @@ -1,1116 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) -class SynthesizerTrnMs768NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) -class SynthesizerTrnMs768NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2,3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Salsa Celtica - El Camino 2006 .rar.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Salsa Celtica - El Camino 2006 .rar.md deleted file mode 100644 index f5970ddf363b11084d4187b752bab67004ab92f5..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Salsa Celtica - El Camino 2006 .rar.md +++ /dev/null @@ -1,25 +0,0 @@ - -

    Download Salsa Celtica - El Camino 2006 .rar for Free

    -

    If you are looking for a unique fusion of Latin and Celtic music, you should check out Salsa Celtica - El Camino 2006 .rar. This is a compressed file that contains the fourth album by the Edinburgh-based band Salsa Celtica, who have been blending salsa rhythms with Scottish and Irish instruments and vocals since 1995.

    -

    Salsa Celtica - El Camino 2006 .rar


    Downloadhttps://urlcod.com/2uIaV5



    -

    El Camino (The Road) is a critically acclaimed album that showcases the band's new direction into songwriting and soulful pieces, while still maintaining their high-energy and exuberant sound. The album features 12 tracks that range from upbeat rumbas and salsas to haunting ballads and instrumentals. Some of the highlights include:

    -
      -
    • Pa'l Rumberos (For The Rumberos), a tribute to the Cuban percussionists who inspired the band.
    • -
    • Esperanza (Hope), a catchy salsa with a positive message.
    • -
    • An Cailleach (The Hag), a traditional Scottish tune played on bagpipes and congas.
    • -
    • Cuando Me Vaya (When I Go), a beautiful duet between Scottish singer Lino Rocha and Venezuelan vocalist Ricardo Fernandez Pompa.
    • -
    • Café Colando (Brewing Coffee), a two-part track that starts as a slow bolero and then transforms into a fast-paced son montuno.
    • -
    • Grey Gallito (The Grey Cockerel), a Latin version of an English folk song featuring guest vocalist Eliza Carthy.
    • -
    • Luna Llena (Full Moon), a romantic salsa with a Celtic twist.
    • -
    • Fuego, Alma y Paz (Fire Soul And Peace), a powerful song that expresses the band's vision of music as a force for good in the world.
    • -
    -

    You can download Salsa Celtica - El Camino 2006 .rar for free from our website. All you need to do is click on the link below and follow the instructions. You will need a program like WinRAR or 7-Zip to extract the files from the compressed folder. Once you do that, you can enjoy this amazing album on your computer, phone, or any other device that supports MP3 files.

    -

    Don't miss this opportunity to discover one of the most original and exciting bands in the world music scene. Download Salsa Celtica - El Camino 2006 .rar today and get ready to dance and sing along with their irresistible fusion of Latin and Celtic sounds.

    -Download Salsa Celtica - El Camino 2006 .rar here - -

    If you want to learn more about Salsa Celtica and their music, you can visit their official website, where you can find their biography, discography, tour dates, videos, and more. You can also follow them on social media platforms like Facebook, Twitter, and Instagram, where they share news, updates, and behind-the-scenes photos and videos.

    -

    -

    Salsa Celtica are not only a band, but also a musical community that brings together artists and fans from different cultures and backgrounds. They have collaborated with many other musicians from the UK and abroad, such as Shooglenifty, Capercaillie, Martyn Bennett, Carlos Pena, and The Afro-Cuban All Stars. They have also performed at prestigious festivals and venues around the world, such as WOMAD, Cambridge Folk Festival, Celtic Connections, Glastonbury, Edinburgh Castle, and The Royal Albert Hall.

    -

    By downloading Salsa Celtica - El Camino 2006 .rar, you will be supporting an independent and innovative band that has been making music for over 25 years. You will also be joining a global family of salsa celtica lovers who appreciate their unique blend of Latin and Celtic music. Don't wait any longer and download Salsa Celtica - El Camino 2006 .rar now!

    7b8c122e87
    -
    -
    \ No newline at end of file diff --git a/spaces/nickprock/banking_intent_classification/README.md b/spaces/nickprock/banking_intent_classification/README.md deleted file mode 100644 index 39c37a0f8668f3bd0e09c3acb99e7f2773617a27..0000000000000000000000000000000000000000 --- a/spaces/nickprock/banking_intent_classification/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Banking Intent Classification -emoji: 💳 -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.0.26 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/data/datasets/coco_panoptic.py b/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/data/datasets/coco_panoptic.py deleted file mode 100644 index b8dae44317b556610d7fed39017e082d7e855956..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/data/datasets/coco_panoptic.py +++ /dev/null @@ -1,228 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import copy -import json -import os - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.utils.file_io import PathManager - -from .coco import load_coco_json, load_sem_seg - -__all__ = ["register_coco_panoptic", "register_coco_panoptic_separated"] - - -def load_coco_panoptic_json(json_file, image_dir, gt_dir, meta): - """ - Args: - image_dir (str): path to the raw dataset. e.g., "~/coco/train2017". - gt_dir (str): path to the raw annotations. e.g., "~/coco/panoptic_train2017". - json_file (str): path to the json file. e.g., "~/coco/annotations/panoptic_train2017.json". - - Returns: - list[dict]: a list of dicts in Detectron2 standard format. (See - `Using Custom Datasets `_ ) - """ - - def _convert_category_id(segment_info, meta): - if segment_info["category_id"] in meta["thing_dataset_id_to_contiguous_id"]: - segment_info["category_id"] = meta["thing_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - segment_info["isthing"] = True - else: - segment_info["category_id"] = meta["stuff_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - segment_info["isthing"] = False - return segment_info - - with PathManager.open(json_file) as f: - json_info = json.load(f) - - ret = [] - for ann in json_info["annotations"]: - image_id = int(ann["image_id"]) - # TODO: currently we assume image and label has the same filename but - # different extension, and images have extension ".jpg" for COCO. Need - # to make image extension a user-provided argument if we extend this - # function to support other COCO-like datasets. - image_file = os.path.join(image_dir, os.path.splitext(ann["file_name"])[0] + ".jpg") - label_file = os.path.join(gt_dir, ann["file_name"]) - segments_info = [_convert_category_id(x, meta) for x in ann["segments_info"]] - ret.append( - { - "file_name": image_file, - "image_id": image_id, - "pan_seg_file_name": label_file, - "segments_info": segments_info, - } - ) - assert len(ret), f"No images found in {image_dir}!" - assert PathManager.isfile(ret[0]["file_name"]), ret[0]["file_name"] - assert PathManager.isfile(ret[0]["pan_seg_file_name"]), ret[0]["pan_seg_file_name"] - return ret - - -def register_coco_panoptic( - name, metadata, image_root, panoptic_root, panoptic_json, instances_json=None -): - """ - Register a "standard" version of COCO panoptic segmentation dataset named `name`. - The dictionaries in this registered dataset follows detectron2's standard format. - Hence it's called "standard". - - Args: - name (str): the name that identifies a dataset, - e.g. "coco_2017_train_panoptic" - metadata (dict): extra metadata associated with this dataset. - image_root (str): directory which contains all the images - panoptic_root (str): directory which contains panoptic annotation images in COCO format - panoptic_json (str): path to the json panoptic annotation file in COCO format - sem_seg_root (none): not used, to be consistent with - `register_coco_panoptic_separated`. - instances_json (str): path to the json instance annotation file - """ - panoptic_name = name - DatasetCatalog.register( - panoptic_name, - lambda: load_coco_panoptic_json(panoptic_json, image_root, panoptic_root, metadata), - ) - MetadataCatalog.get(panoptic_name).set( - panoptic_root=panoptic_root, - image_root=image_root, - panoptic_json=panoptic_json, - json_file=instances_json, - evaluator_type="coco_panoptic_seg", - ignore_label=255, - label_divisor=1000, - **metadata, - ) - - -def register_coco_panoptic_separated( - name, metadata, image_root, panoptic_root, panoptic_json, sem_seg_root, instances_json -): - """ - Register a "separated" version of COCO panoptic segmentation dataset named `name`. - The annotations in this registered dataset will contain both instance annotations and - semantic annotations, each with its own contiguous ids. Hence it's called "separated". - - It follows the setting used by the PanopticFPN paper: - - 1. The instance annotations directly come from polygons in the COCO - instances annotation task, rather than from the masks in the COCO panoptic annotations. - - The two format have small differences: - Polygons in the instance annotations may have overlaps. - The mask annotations are produced by labeling the overlapped polygons - with depth ordering. - - 2. The semantic annotations are converted from panoptic annotations, where - all "things" are assigned a semantic id of 0. - All semantic categories will therefore have ids in contiguous - range [1, #stuff_categories]. - - This function will also register a pure semantic segmentation dataset - named ``name + '_stuffonly'``. - - Args: - name (str): the name that identifies a dataset, - e.g. "coco_2017_train_panoptic" - metadata (dict): extra metadata associated with this dataset. - image_root (str): directory which contains all the images - panoptic_root (str): directory which contains panoptic annotation images - panoptic_json (str): path to the json panoptic annotation file - sem_seg_root (str): directory which contains all the ground truth segmentation annotations. - instances_json (str): path to the json instance annotation file - """ - panoptic_name = name + "_separated" - DatasetCatalog.register( - panoptic_name, - lambda: merge_to_panoptic( - load_coco_json(instances_json, image_root, panoptic_name), - load_sem_seg(sem_seg_root, image_root), - ), - ) - MetadataCatalog.get(panoptic_name).set( - panoptic_root=panoptic_root, - image_root=image_root, - panoptic_json=panoptic_json, - sem_seg_root=sem_seg_root, - json_file=instances_json, # TODO rename - evaluator_type="coco_panoptic_seg", - ignore_label=255, - **metadata, - ) - - semantic_name = name + "_stuffonly" - DatasetCatalog.register(semantic_name, lambda: load_sem_seg(sem_seg_root, image_root)) - MetadataCatalog.get(semantic_name).set( - sem_seg_root=sem_seg_root, - image_root=image_root, - evaluator_type="sem_seg", - ignore_label=255, - **metadata, - ) - - -def merge_to_panoptic(detection_dicts, sem_seg_dicts): - """ - Create dataset dicts for panoptic segmentation, by - merging two dicts using "file_name" field to match their entries. - - Args: - detection_dicts (list[dict]): lists of dicts for object detection or instance segmentation. - sem_seg_dicts (list[dict]): lists of dicts for semantic segmentation. - - Returns: - list[dict] (one per input image): Each dict contains all (key, value) pairs from dicts in - both detection_dicts and sem_seg_dicts that correspond to the same image. - The function assumes that the same key in different dicts has the same value. - """ - results = [] - sem_seg_file_to_entry = {x["file_name"]: x for x in sem_seg_dicts} - assert len(sem_seg_file_to_entry) > 0 - - for det_dict in detection_dicts: - dic = copy.copy(det_dict) - dic.update(sem_seg_file_to_entry[dic["file_name"]]) - results.append(dic) - return results - - -if __name__ == "__main__": - """ - Test the COCO panoptic dataset loader. - - Usage: - python -m detectron2.data.datasets.coco_panoptic \ - path/to/image_root path/to/panoptic_root path/to/panoptic_json dataset_name 10 - - "dataset_name" can be "coco_2017_train_panoptic", or other - pre-registered ones - """ - from detectron2.utils.logger import setup_logger - from detectron2.utils.visualizer import Visualizer - import detectron2.data.datasets # noqa # add pre-defined metadata - import sys - from PIL import Image - import numpy as np - - logger = setup_logger(name=__name__) - assert sys.argv[4] in DatasetCatalog.list() - meta = MetadataCatalog.get(sys.argv[4]) - - dicts = load_coco_panoptic_json(sys.argv[3], sys.argv[1], sys.argv[2], meta.as_dict()) - logger.info("Done loading {} samples.".format(len(dicts))) - - dirname = "coco-data-vis" - os.makedirs(dirname, exist_ok=True) - num_imgs_to_vis = int(sys.argv[5]) - for i, d in enumerate(dicts): - img = np.array(Image.open(d["file_name"])) - visualizer = Visualizer(img, metadata=meta) - vis = visualizer.draw_dataset_dict(d) - fpath = os.path.join(dirname, os.path.basename(d["file_name"])) - vis.save(fpath) - if i + 1 >= num_imgs_to_vis: - break diff --git a/spaces/nomic-ai/ag_news/style.css b/spaces/nomic-ai/ag_news/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/ag_news/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/nosdigitalmedia/dutch-youth-comment-classifier/src/rule_based_system/BadWordRule.py b/spaces/nosdigitalmedia/dutch-youth-comment-classifier/src/rule_based_system/BadWordRule.py deleted file mode 100644 index b7761304584297e2583540b523bf7e9ee2d30ccb..0000000000000000000000000000000000000000 --- a/spaces/nosdigitalmedia/dutch-youth-comment-classifier/src/rule_based_system/BadWordRule.py +++ /dev/null @@ -1,47 +0,0 @@ -from src.rule_based_system.Rule import Rule - -from src.rule_based_system.TextLengthRule import TEXT_SIZE_LIMIT -from src.rule_based_system.Verdict import Verdict - - -class BadWordRule(Rule): - """ - Bad words obtained from corners of the internet you do not want to visit: - - https://www.ensie.nl/scheldwoordenboek# - - https://scheldwoorden.goedbegin.nl/ - - https://nl.wiktionary.org/wiki/Categorie:Scheldwoord_in_het_Nederlands - - https://www.lannoo.be/sites/default/files/books/issuu/9789401453417.pdf - - https://www.dutchmultimedia.nl/meest-verschrikkelijke-engelse-scheldwoorden/ - - https://www.dutchmultimedia.nl/scheldwoordenboek-1-000-den-nederlandse-scheldwoorden/ - - https://www.henkyspapiamento.com/10-papiaments-scheldwoorden-die-we-liever-niet-horen/ - - https://volkabulaire.nl/tag/scheldwoorden/ - - https://data.world/wordlists/dirty-naughty-obscene-and-otherwise-bad-words-in-dutch - """ - - bad_words = None - - def __init__(self, bad_words: list, strict: bool): - self.bad_words = bad_words - self.strict = strict - - def get_verdict(self, comment_text: str) -> Verdict: - comment_text = comment_text[0:TEXT_SIZE_LIMIT] - - bad_words = self.find_bad_words(comment_text.split()) - - return Verdict(len(bad_words) == 0, bad_words) - - def find_bad_words(self, text: list) -> list: - detected_bad_words = [] - for word in text: - if word in self.bad_words: - detected_bad_words.append(word) - - return detected_bad_words - - def is_strict(self) -> bool: - return self.strict - - def get_rule_description(self) -> str: - return "Comment text contained %s inappropriate words" % \ - ('strictly' if self.is_strict() else 'ambiguous') diff --git a/spaces/oguzakif/video-object-remover/SiamMask/data/ytb_vos/parse_ytb_vos.py b/spaces/oguzakif/video-object-remover/SiamMask/data/ytb_vos/parse_ytb_vos.py deleted file mode 100644 index 53234842e30e13e12e26fa1893e865f8bc1e1a7a..0000000000000000000000000000000000000000 --- a/spaces/oguzakif/video-object-remover/SiamMask/data/ytb_vos/parse_ytb_vos.py +++ /dev/null @@ -1,174 +0,0 @@ -# -------------------------------------------------------- -# SiamMask -# Licensed under The MIT License -# Written by Qiang Wang (wangqiang2015 at ia.ac.cn) -# -------------------------------------------------------- -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function -from __future__ import unicode_literals - -import argparse -import h5py -import json -import os -import scipy.misc -import sys -import numpy as np -import cv2 -from os.path import join - - -def parse_args(): - parser = argparse.ArgumentParser(description='Convert dataset') - parser.add_argument('--outdir', default='./', type=str, - help="output dir for json files") - parser.add_argument('--datadir', default='./', type=str, - help="data dir for annotations to be converted") - return parser.parse_args() - - -def xyxy_to_xywh(xyxy): - """Convert [x1 y1 x2 y2] box format to [x1 y1 w h] format.""" - if isinstance(xyxy, (list, tuple)): - # Single box given as a list of coordinates - assert len(xyxy) == 4 - x1, y1 = xyxy[0], xyxy[1] - w = xyxy[2] - x1 + 1 - h = xyxy[3] - y1 + 1 - return (x1, y1, w, h) - elif isinstance(xyxy, np.ndarray): - # Multiple boxes given as a 2D ndarray - return np.hstack((xyxy[:, 0:2], xyxy[:, 2:4] - xyxy[:, 0:2] + 1)) - else: - raise TypeError('Argument xyxy must be a list, tuple, or numpy array.') - - -def polys_to_boxes(polys): - """Convert a list of polygons into an array of tight bounding boxes.""" - boxes_from_polys = np.zeros((len(polys), 4), dtype=np.float32) - for i in range(len(polys)): - poly = polys[i] - x0 = min(min(p[::2]) for p in poly) - x1 = max(max(p[::2]) for p in poly) - y0 = min(min(p[1::2]) for p in poly) - y1 = max(max(p[1::2]) for p in poly) - boxes_from_polys[i, :] = [x0, y0, x1, y1] - return boxes_from_polys - - -class Instance(object): - instID = 0 - pixelCount = 0 - - def __init__(self, imgNp, instID): - if (instID ==0 ): - return - self.instID = int(instID) - self.pixelCount = int(self.getInstancePixels(imgNp, instID)) - - def getInstancePixels(self, imgNp, instLabel): - return (imgNp == instLabel).sum() - - def toDict(self): - buildDict = {} - buildDict["instID"] = self.instID - buildDict["pixelCount"] = self.pixelCount - return buildDict - - def __str__(self): - return "("+str(self.instID)+")" - - -def convert_ytb_vos(data_dir, out_dir): - sets = ['train'] - ann_dirs = ['train/Annotations/'] - json_name = 'instances_%s.json' - num_obj = 0 - num_ann = 0 - for data_set, ann_dir in zip(sets, ann_dirs): - print('Starting %s' % data_set) - ann_dict = {} - ann_dir = os.path.join(data_dir, ann_dir) - json_ann = json.load(open(os.path.join(ann_dir, '../meta.json'))) - for vid, video in enumerate(json_ann['videos']): - v = json_ann['videos'][video] - frames = [] - for obj in v['objects']: - o = v['objects'][obj] - frames.extend(o['frames']) - frames = sorted(set(frames)) - - annotations = [] - instanceIds = [] - for frame in frames: - file_name = join(video, frame) - fullname = os.path.join(ann_dir, file_name+'.png') - img = cv2.imread(fullname, 0) - h, w = img.shape[:2] - - objects = dict() - for instanceId in np.unique(img): - if instanceId == 0: - continue - instanceObj = Instance(img, instanceId) - instanceObj_dict = instanceObj.toDict() - mask = (img == instanceId).astype(np.uint8) - _, contour, _ = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) - polygons = [c.reshape(-1).tolist() for c in contour] - instanceObj_dict['contours'] = [p for p in polygons if len(p) > 4] - if len(instanceObj_dict['contours']) and instanceObj_dict['pixelCount'] > 1000: - objects[instanceId] = instanceObj_dict - # else: - # cv2.imshow("disappear?", mask) - # cv2.waitKey(0) - - for objId in objects: - if len(objects[objId]) == 0: - continue - obj = objects[objId] - len_p = [len(p) for p in obj['contours']] - if min(len_p) <= 4: - print('Warning: invalid contours.') - continue # skip non-instance categories - - ann = dict() - ann['h'] = h - ann['w'] = w - ann['file_name'] = file_name - ann['id'] = int(objId) - # ann['segmentation'] = obj['contours'] - # ann['iscrowd'] = 0 - ann['area'] = obj['pixelCount'] - ann['bbox'] = xyxy_to_xywh(polys_to_boxes([obj['contours']])).tolist()[0] - - annotations.append(ann) - instanceIds.append(objId) - num_ann += 1 - instanceIds = sorted(set(instanceIds)) - num_obj += len(instanceIds) - video_ann = {str(iId): [] for iId in instanceIds} - for ann in annotations: - video_ann[str(ann['id'])].append(ann) - - ann_dict[video] = video_ann - if vid % 50 == 0 and vid != 0: - print("process: %d video" % (vid+1)) - - print("Num Videos: %d" % len(ann_dict)) - print("Num Objects: %d" % num_obj) - print("Num Annotations: %d" % num_ann) - - items = list(ann_dict.items()) - train_dict = dict(items[:3000]) - val_dict = dict(items[3000:]) - with open(os.path.join(out_dir, json_name % 'train'), 'w') as outfile: - json.dump(train_dict, outfile) - - with open(os.path.join(out_dir, json_name % 'val'), 'w') as outfile: - json.dump(val_dict, outfile) - - -if __name__ == '__main__': - args = parse_args() - convert_ytb_vos(args.datadir, args.outdir) diff --git a/spaces/oldplayer1871/anime-remove-background/README.md b/spaces/oldplayer1871/anime-remove-background/README.md deleted file mode 100644 index 1ba3cb5ea0e994e246d57b7d62b8aa5a6331901c..0000000000000000000000000000000000000000 --- a/spaces/oldplayer1871/anime-remove-background/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Anime Remove Background -emoji: 🪄🖼️ -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: skytnt/anime-remove-background ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/onemriganka/palm2-pdf/app.py b/spaces/onemriganka/palm2-pdf/app.py deleted file mode 100644 index 499ad7781c42502bcde5eec3febcdf435e478d59..0000000000000000000000000000000000000000 --- a/spaces/onemriganka/palm2-pdf/app.py +++ /dev/null @@ -1,74 +0,0 @@ -import streamlit as st -from PyPDF2 import PdfReader -from langchain.text_splitter import RecursiveCharacterTextSplitter -import google.generativeai as palm -from langchain.embeddings import GooglePalmEmbeddings -from langchain.llms import GooglePalm -from langchain.vectorstores import FAISS -from langchain.chains import ConversationalRetrievalChain -from langchain.memory import ConversationBufferMemory -import os - -os.environ['GOOGLE_API_KEY'] = 'AIzaSyAO1uqCO_1CTZV1zgIlUhk5Mv4Ey08cjzI' - -def get_pdf_text(pdf_docs): - text="" - for pdf in pdf_docs: - pdf_reader= PdfReader(pdf) - for page in pdf_reader.pages: - text+= page.extract_text() - return text - -def get_text_chunks(text): - text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=20) - chunks = text_splitter.split_text(text) - return chunks - -def get_vector_store(text_chunks): - embeddings = GooglePalmEmbeddings() - vector_store = FAISS.from_texts(text_chunks, embedding=embeddings) - return vector_store - -def get_conversational_chain(vector_store): - llm=GooglePalm() - memory = ConversationBufferMemory(memory_key = "chat_history", return_messages=True) - conversation_chain = ConversationalRetrievalChain.from_llm(llm=llm, retriever=vector_store.as_retriever(), memory=memory) - return conversation_chain - -def user_input(user_question): - response = st.session_state.conversation({'question': user_question}) - st.session_state.chatHistory = response['chat_history'] - for i, message in enumerate(st.session_state.chatHistory): - if i%2 == 0: - st.write("Me: ", message.content) - else: - st.write("mGPT: ", message.content) -def main(): - st.set_page_config("palm2 pdf ") - st.header("Hi , ask me anything from your pdf 😎 ") - user_question = st.text_input("Ask a Question from the PDF Files") - if "conversation" not in st.session_state: - st.session_state.conversation = None - if "chatHistory" not in st.session_state: - st.session_state.chatHistory = None - if user_question: - user_input(user_question) - with st.sidebar: - st.title("Settings") - st.subheader("Upload your Documents") - pdf_docs = st.file_uploader("Upload your PDF Files and Click on the Process Button", accept_multiple_files=True) - if st.button("Process"): - with st.spinner("Processing"): - raw_text = get_pdf_text(pdf_docs) - text_chunks = get_text_chunks(raw_text) - vector_store = get_vector_store(text_chunks) - st.session_state.conversation = get_conversational_chain(vector_store) - st.success("Done") - - - -if __name__ == "__main__": - main() - - -#M \ No newline at end of file diff --git a/spaces/onereal/rvc-models-convertvoice/infer_pack/attentions.py b/spaces/onereal/rvc-models-convertvoice/infer_pack/attentions.py deleted file mode 100644 index 77cb63ffccf3e33badf22d50862a64ba517b487f..0000000000000000000000000000000000000000 --- a/spaces/onereal/rvc-models-convertvoice/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from infer_pack import commons -from infer_pack import modules -from infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." - block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/onnx/faster-rcnn/README.md b/spaces/onnx/faster-rcnn/README.md deleted file mode 100644 index b7c4bb162ab7b3e70853ac46aedfd6ee30550f76..0000000000000000000000000000000000000000 --- a/spaces/onnx/faster-rcnn/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Faster Rcnn -emoji: 💩 -colorFrom: indigo -colorTo: gray -sdk: gradio -sdk_version: 2.8.13 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/openai/openai-detector/detector/README.md b/spaces/openai/openai-detector/detector/README.md deleted file mode 100644 index 4b5c22507c049891acb09bd8a3852b443ca477d2..0000000000000000000000000000000000000000 --- a/spaces/openai/openai-detector/detector/README.md +++ /dev/null @@ -1,51 +0,0 @@ -GPT-2 Output Detector -===================== - -This directory contains the code for working with the GPT-2 output detector model, obtained by fine-tuning a -[RoBERTa model](https://ai.facebook.com/blog/roberta-an-optimized-method-for-pretraining-self-supervised-nlp-systems/) -with [the outputs of the 1.5B-parameter GPT-2 model](https://github.com/openai/gpt-2-output-dataset). -For motivations and discussions regarding the release of this detector model, please check out -[our blog post](https://openai.com/blog/gpt-2-1-5b-release/) and [report](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf). - -## Downloading a pre-trained detector model - -Download the weights for the fine-tuned `roberta-base` model (478 MB): - -```bash -wget https://storage.googleapis.com/gpt-2/detector-models/v1/detector-base.pt -``` - -or `roberta-large` model (1.5 GB): - -```bash -wget https://storage.googleapis.com/gpt-2/detector-models/v1/detector-large.pt -``` - -These RoBERTa-based models are fine-tuned with a mixture of temperature-1 and nucleus sampling outputs, -which should generalize well to outputs generated using different sampling methods. - -## Running a detector model - -You can launch a web UI in which you can enter a text and see the detector model's prediction -on whether or not it was generated by a GPT-2 model. - -```bash -# (on the top-level directory of this repository) -pip install -r requirements.txt -python -m detector.server detector-base.pt -``` - -After the script says "Ready to serve", nagivate to http://localhost:8080 to view the UI. - -## Training a new detector model - -You can use the provided training script to train a detector model on a new set of datasets. -We recommend using a GPU machine for this task. - -```bash -# (on the top-level directory of this repository) -pip install -r requirements.txt -python -m detector.train -``` - -The training script supports a number of different options; append `--help` to the command above for usage. diff --git a/spaces/openbio/calculator/utils/duckdb_queries.py b/spaces/openbio/calculator/utils/duckdb_queries.py deleted file mode 100644 index fff0ec020479462c769c0551dc5b48d2ae4ab980..0000000000000000000000000000000000000000 --- a/spaces/openbio/calculator/utils/duckdb_queries.py +++ /dev/null @@ -1,86 +0,0 @@ -import json -import os - -import duckdb - -# Configure DuckDB connection -if not os.getenv("motherduck_token"): - raise Exception( - "No motherduck token found. Please set the `motherduck_token` environment variable." - ) -else: - con = duckdb.connect("md:climatebase") - con.sql("USE climatebase;") - # load extensions - con.sql("""INSTALL spatial; LOAD spatial;""") - - -# to-do: pass con through decorator -def list_projects_by_author(author_id): - return con.execute( - "SELECT DISTINCT name FROM project WHERE (authorId = ? OR authorId = 'default') AND (geometry IS NOT NULL)", - [author_id], - ).df() - - -def get_project_geometry(project_name): - return con.execute( - "SELECT geometry FROM project WHERE name = ? LIMIT 1", [project_name] - ).fetchall() - - -def get_project_centroid(project_name): - # Workaround to get centroid of project - # To-do: refactor to only use DuckDB spatial extension - _geom = get_project_geometry(project_name) - _polygon = json.dumps(json.loads(_geom[0][0])["features"][0]["geometry"]) - return con.sql( - f"SELECT ST_X(ST_Centroid(ST_GeomFromGeoJSON('{_polygon}'))) AS longitude, ST_Y(ST_Centroid(ST_GeomFromGeoJSON('{_polygon}'))) AS latitude;" - ).fetchall()[0] - - -def get_project_scores(project_name, start_year, end_year): - return con.execute( - "SELECT * FROM bioindicator WHERE (year >= ? AND year <= ? AND project_name = ?)", - [start_year, end_year, project_name], - ).df() - - -def check_if_table_exists(table_name): - tables = con.execute("SHOW TABLES;").fetchall() - for i in range(len(tables)): - tables[i] = tables[i][0] - return table_name in tables - -def check_if_project_exists_for_year(project_name, year): - return con.execute( - "SELECT COUNT(1) FROM bioindicator WHERE (year = ? AND project_name = ?)", - [year, project_name], - ).fetchall()[0][0] - - -def write_score_to_temptable(df): - con.sql( - "CREATE OR REPLACE TABLE _temptable AS SELECT *, (value * area) AS score FROM (SELECT year, project_name, metric, AVG(value * coefficient) AS value, area FROM df GROUP BY year, project_name, metric, area ORDER BY project_name, metric)" - ) - return True - - -def get_or_create_bioindicator_table(): - con.sql( - """ - USE climatebase; - CREATE TABLE IF NOT EXISTS bioindicator (year BIGINT, project_name VARCHAR(255), metric VARCHAR(255), value DOUBLE, area DOUBLE, score DOUBLE, CONSTRAINT unique_year_project_name_metric UNIQUE (year, project_name, metric)); - """ - ) - return True - - -def upsert_project_record(): - con.sql( - """ - INSERT INTO bioindicator FROM _temptable - ON CONFLICT (year, project_name, metric) DO UPDATE SET value = excluded.value; - """ - ) - return True diff --git a/spaces/osanseviero/Neural_Image_Colorizer/README.md b/spaces/osanseviero/Neural_Image_Colorizer/README.md deleted file mode 100644 index c4fc041dab37ac019f9016a96ad686f71303e24a..0000000000000000000000000000000000000000 --- a/spaces/osanseviero/Neural_Image_Colorizer/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Neural_Image_Colorizer -emoji: 😻 -colorFrom: blue -colorTo: yellow -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/osbm/prostate158-monai-inference/README.md b/spaces/osbm/prostate158-monai-inference/README.md deleted file mode 100644 index 42841690bce92bceaa0862c0a88583cc78c72981..0000000000000000000000000000000000000000 --- a/spaces/osbm/prostate158-monai-inference/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Prostate158 Monai Inference -emoji: 👀 -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.39.0 -app_file: gradio_app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/research_projects/mulit_token_textual_inversion/textual_inversion.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/research_projects/mulit_token_textual_inversion/textual_inversion.py deleted file mode 100644 index 63b6c3860a2967db967561581fa060f5dae64082..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/research_projects/mulit_token_textual_inversion/textual_inversion.py +++ /dev/null @@ -1,927 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and - -import argparse -import logging -import math -import os -import random -from pathlib import Path - -import numpy as np -import PIL -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -import transformers -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import ProjectConfiguration, set_seed -from huggingface_hub import create_repo, upload_folder -from multi_token_clip import MultiTokenCLIPTokenizer - -# TODO: remove and import from diffusers.utils when the new version of diffusers is released -from packaging import version -from PIL import Image -from torch.utils.data import Dataset -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import CLIPTextModel - -import diffusers -from diffusers import ( - AutoencoderKL, - DDPMScheduler, - DiffusionPipeline, - DPMSolverMultistepScheduler, - StableDiffusionPipeline, - UNet2DConditionModel, -) -from diffusers.optimization import get_scheduler -from diffusers.utils import check_min_version, is_wandb_available -from diffusers.utils.import_utils import is_xformers_available - - -if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"): - PIL_INTERPOLATION = { - "linear": PIL.Image.Resampling.BILINEAR, - "bilinear": PIL.Image.Resampling.BILINEAR, - "bicubic": PIL.Image.Resampling.BICUBIC, - "lanczos": PIL.Image.Resampling.LANCZOS, - "nearest": PIL.Image.Resampling.NEAREST, - } -else: - PIL_INTERPOLATION = { - "linear": PIL.Image.LINEAR, - "bilinear": PIL.Image.BILINEAR, - "bicubic": PIL.Image.BICUBIC, - "lanczos": PIL.Image.LANCZOS, - "nearest": PIL.Image.NEAREST, - } -# ------------------------------------------------------------------------------ - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. -check_min_version("0.14.0.dev0") - -logger = get_logger(__name__) - - -def add_tokens(tokenizer, text_encoder, placeholder_token, num_vec_per_token=1, initializer_token=None): - """ - Add tokens to the tokenizer and set the initial value of token embeddings - """ - tokenizer.add_placeholder_tokens(placeholder_token, num_vec_per_token=num_vec_per_token) - text_encoder.resize_token_embeddings(len(tokenizer)) - token_embeds = text_encoder.get_input_embeddings().weight.data - placeholder_token_ids = tokenizer.encode(placeholder_token, add_special_tokens=False) - if initializer_token: - token_ids = tokenizer.encode(initializer_token, add_special_tokens=False) - for i, placeholder_token_id in enumerate(placeholder_token_ids): - token_embeds[placeholder_token_id] = token_embeds[token_ids[i * len(token_ids) // num_vec_per_token]] - else: - for i, placeholder_token_id in enumerate(placeholder_token_ids): - token_embeds[placeholder_token_id] = torch.randn_like(token_embeds[placeholder_token_id]) - return placeholder_token - - -def save_progress(tokenizer, text_encoder, accelerator, save_path): - for placeholder_token in tokenizer.token_map: - placeholder_token_ids = tokenizer.encode(placeholder_token, add_special_tokens=False) - learned_embeds = accelerator.unwrap_model(text_encoder).get_input_embeddings().weight[placeholder_token_ids] - if len(placeholder_token_ids) == 1: - learned_embeds = learned_embeds[None] - learned_embeds_dict = {placeholder_token: learned_embeds.detach().cpu()} - torch.save(learned_embeds_dict, save_path) - - -def load_multitoken_tokenizer(tokenizer, text_encoder, learned_embeds_dict): - for placeholder_token in learned_embeds_dict: - placeholder_embeds = learned_embeds_dict[placeholder_token] - num_vec_per_token = placeholder_embeds.shape[0] - placeholder_embeds = placeholder_embeds.to(dtype=text_encoder.dtype) - add_tokens(tokenizer, text_encoder, placeholder_token, num_vec_per_token=num_vec_per_token) - placeholder_token_ids = tokenizer.encode(placeholder_token, add_special_tokens=False) - token_embeds = text_encoder.get_input_embeddings().weight.data - for i, placeholder_token_id in enumerate(placeholder_token_ids): - token_embeds[placeholder_token_id] = placeholder_embeds[i] - - -def load_multitoken_tokenizer_from_automatic(tokenizer, text_encoder, automatic_dict, placeholder_token): - """ - Automatic1111's tokens have format - {'string_to_token': {'*': 265}, 'string_to_param': {'*': tensor([[ 0.0833, 0.0030, 0.0057, ..., -0.0264, -0.0616, -0.0529], - [ 0.0058, -0.0190, -0.0584, ..., -0.0025, -0.0945, -0.0490], - [ 0.0916, 0.0025, 0.0365, ..., -0.0685, -0.0124, 0.0728], - [ 0.0812, -0.0199, -0.0100, ..., -0.0581, -0.0780, 0.0254]], - requires_grad=True)}, 'name': 'FloralMarble-400', 'step': 399, 'sd_checkpoint': '4bdfc29c', 'sd_checkpoint_name': 'SD2.1-768'} - """ - learned_embeds_dict = {} - learned_embeds_dict[placeholder_token] = automatic_dict["string_to_param"]["*"] - load_multitoken_tokenizer(tokenizer, text_encoder, learned_embeds_dict) - - -def get_mask(tokenizer, accelerator): - # Get the mask of the weights that won't change - mask = torch.ones(len(tokenizer)).to(accelerator.device, dtype=torch.bool) - for placeholder_token in tokenizer.token_map: - placeholder_token_ids = tokenizer.encode(placeholder_token, add_special_tokens=False) - for i in range(len(placeholder_token_ids)): - mask = mask & (torch.arange(len(tokenizer)) != placeholder_token_ids[i]).to(accelerator.device) - return mask - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--progressive_tokens_max_steps", - type=int, - default=2000, - help="The number of steps until all tokens will be used.", - ) - parser.add_argument( - "--progressive_tokens", - action="store_true", - help="Progressively train the tokens. For example, first train for 1 token, then 2 tokens and so on.", - ) - parser.add_argument("--vector_shuffle", action="store_true", help="Shuffling tokens durint training") - parser.add_argument( - "--num_vec_per_token", - type=int, - default=1, - help=( - "The number of vectors used to represent the placeholder token. The higher the number, the better the" - " result at the cost of editability. This can be fixed by prompt editing." - ), - ) - parser.add_argument( - "--save_steps", - type=int, - default=500, - help="Save learned_embeds.bin every X updates steps.", - ) - parser.add_argument( - "--only_save_embeds", - action="store_true", - default=False, - help="Save only the embeddings for the new concept.", - ) - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--revision", - type=str, - default=None, - required=False, - help="Revision of pretrained model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--train_data_dir", type=str, default=None, required=True, help="A folder containing the training data." - ) - parser.add_argument( - "--placeholder_token", - type=str, - default=None, - required=True, - help="A token to use as a placeholder for the concept.", - ) - parser.add_argument( - "--initializer_token", type=str, default=None, required=True, help="A token to use as initializer word." - ) - parser.add_argument("--learnable_property", type=str, default="object", help="Choose between 'object' and 'style'") - parser.add_argument("--repeats", type=int, default=100, help="How many times to repeat the training data.") - parser.add_argument( - "--output_dir", - type=str, - default="text-inversion-model", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution." - ) - parser.add_argument( - "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument("--num_train_epochs", type=int, default=100) - parser.add_argument( - "--max_train_steps", - type=int, - default=5000, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=1e-4, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--dataloader_num_workers", - type=int, - default=0, - help=( - "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process." - ), - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default="no", - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose" - "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10." - "and an Nvidia Ampere GPU." - ), - ) - parser.add_argument( - "--allow_tf32", - action="store_true", - help=( - "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see" - " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" - ), - ) - parser.add_argument( - "--report_to", - type=str, - default="tensorboard", - help=( - 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' - ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' - ), - ) - parser.add_argument( - "--validation_prompt", - type=str, - default=None, - help="A prompt that is used during validation to verify that the model is learning.", - ) - parser.add_argument( - "--num_validation_images", - type=int, - default=4, - help="Number of images that should be generated during validation with `validation_prompt`.", - ) - parser.add_argument( - "--validation_epochs", - type=int, - default=50, - help=( - "Run validation every X epochs. Validation consists of running the prompt" - " `args.validation_prompt` multiple times: `args.num_validation_images`" - " and logging the images." - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - parser.add_argument( - "--checkpointing_steps", - type=int, - default=500, - help=( - "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming" - " training using `--resume_from_checkpoint`." - ), - ) - parser.add_argument( - "--checkpoints_total_limit", - type=int, - default=None, - help=( - "Max number of checkpoints to store. Passed as `total_limit` to the `Accelerator` `ProjectConfiguration`." - " See Accelerator::save_state https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.save_state" - " for more docs" - ), - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default=None, - help=( - "Whether training should be resumed from a previous checkpoint. Use a path saved by" - ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' - ), - ) - parser.add_argument( - "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers." - ) - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - if args.train_data_dir is None: - raise ValueError("You must specify a train data directory.") - - return args - - -imagenet_templates_small = [ - "a photo of a {}", - "a rendering of a {}", - "a cropped photo of the {}", - "the photo of a {}", - "a photo of a clean {}", - "a photo of a dirty {}", - "a dark photo of the {}", - "a photo of my {}", - "a photo of the cool {}", - "a close-up photo of a {}", - "a bright photo of the {}", - "a cropped photo of a {}", - "a photo of the {}", - "a good photo of the {}", - "a photo of one {}", - "a close-up photo of the {}", - "a rendition of the {}", - "a photo of the clean {}", - "a rendition of a {}", - "a photo of a nice {}", - "a good photo of a {}", - "a photo of the nice {}", - "a photo of the small {}", - "a photo of the weird {}", - "a photo of the large {}", - "a photo of a cool {}", - "a photo of a small {}", -] - -imagenet_style_templates_small = [ - "a painting in the style of {}", - "a rendering in the style of {}", - "a cropped painting in the style of {}", - "the painting in the style of {}", - "a clean painting in the style of {}", - "a dirty painting in the style of {}", - "a dark painting in the style of {}", - "a picture in the style of {}", - "a cool painting in the style of {}", - "a close-up painting in the style of {}", - "a bright painting in the style of {}", - "a cropped painting in the style of {}", - "a good painting in the style of {}", - "a close-up painting in the style of {}", - "a rendition in the style of {}", - "a nice painting in the style of {}", - "a small painting in the style of {}", - "a weird painting in the style of {}", - "a large painting in the style of {}", -] - - -class TextualInversionDataset(Dataset): - def __init__( - self, - data_root, - tokenizer, - learnable_property="object", # [object, style] - size=512, - repeats=100, - interpolation="bicubic", - flip_p=0.5, - set="train", - placeholder_token="*", - center_crop=False, - vector_shuffle=False, - progressive_tokens=False, - ): - self.data_root = data_root - self.tokenizer = tokenizer - self.learnable_property = learnable_property - self.size = size - self.placeholder_token = placeholder_token - self.center_crop = center_crop - self.flip_p = flip_p - self.vector_shuffle = vector_shuffle - self.progressive_tokens = progressive_tokens - self.prop_tokens_to_load = 0 - - self.image_paths = [os.path.join(self.data_root, file_path) for file_path in os.listdir(self.data_root)] - - self.num_images = len(self.image_paths) - self._length = self.num_images - - if set == "train": - self._length = self.num_images * repeats - - self.interpolation = { - "linear": PIL_INTERPOLATION["linear"], - "bilinear": PIL_INTERPOLATION["bilinear"], - "bicubic": PIL_INTERPOLATION["bicubic"], - "lanczos": PIL_INTERPOLATION["lanczos"], - }[interpolation] - - self.templates = imagenet_style_templates_small if learnable_property == "style" else imagenet_templates_small - self.flip_transform = transforms.RandomHorizontalFlip(p=self.flip_p) - - def __len__(self): - return self._length - - def __getitem__(self, i): - example = {} - image = Image.open(self.image_paths[i % self.num_images]) - - if not image.mode == "RGB": - image = image.convert("RGB") - - placeholder_string = self.placeholder_token - text = random.choice(self.templates).format(placeholder_string) - - example["input_ids"] = self.tokenizer.encode( - text, - padding="max_length", - truncation=True, - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - vector_shuffle=self.vector_shuffle, - prop_tokens_to_load=self.prop_tokens_to_load if self.progressive_tokens else 1.0, - )[0] - - # default to score-sde preprocessing - img = np.array(image).astype(np.uint8) - - if self.center_crop: - crop = min(img.shape[0], img.shape[1]) - ( - h, - w, - ) = ( - img.shape[0], - img.shape[1], - ) - img = img[(h - crop) // 2 : (h + crop) // 2, (w - crop) // 2 : (w + crop) // 2] - - image = Image.fromarray(img) - image = image.resize((self.size, self.size), resample=self.interpolation) - - image = self.flip_transform(image) - image = np.array(image).astype(np.uint8) - image = (image / 127.5 - 1.0).astype(np.float32) - - example["pixel_values"] = torch.from_numpy(image).permute(2, 0, 1) - return example - - -def main(): - args = parse_args() - logging_dir = os.path.join(args.output_dir, args.logging_dir) - accelerator_project_config = ProjectConfiguration( - total_limit=args.checkpoints_total_limit, project_dir=args.output_dir, logging_dir=logging_dir - ) - - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with=args.report_to, - project_config=accelerator_project_config, - ) - - if args.report_to == "wandb": - if not is_wandb_available(): - raise ImportError("Make sure to install wandb if you want to use it for logging during training.") - import wandb - - # Make one log on every process with the configuration for debugging. - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info(accelerator.state, main_process_only=False) - if accelerator.is_local_main_process: - transformers.utils.logging.set_verbosity_warning() - diffusers.utils.logging.set_verbosity_info() - else: - transformers.utils.logging.set_verbosity_error() - diffusers.utils.logging.set_verbosity_error() - - # If passed along, set the training seed now. - if args.seed is not None: - set_seed(args.seed) - - # Handle the repository creation - if accelerator.is_main_process: - if args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - if args.push_to_hub: - repo_id = create_repo( - repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token - ).repo_id - - # Load tokenizer - if args.tokenizer_name: - tokenizer = MultiTokenCLIPTokenizer.from_pretrained(args.tokenizer_name) - elif args.pretrained_model_name_or_path: - tokenizer = MultiTokenCLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") - - # Load scheduler and models - noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") - text_encoder = CLIPTextModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision - ) - vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision) - unet = UNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision - ) - if is_xformers_available(): - try: - unet.enable_xformers_memory_efficient_attention() - except Exception as e: - logger.warning( - "Could not enable memory efficient attention. Make sure xformers is installed" - f" correctly and a GPU is available: {e}" - ) - add_tokens(tokenizer, text_encoder, args.placeholder_token, args.num_vec_per_token, args.initializer_token) - - # Freeze vae and unet - vae.requires_grad_(False) - unet.requires_grad_(False) - # Freeze all parameters except for the token embeddings in text encoder - text_encoder.text_model.encoder.requires_grad_(False) - text_encoder.text_model.final_layer_norm.requires_grad_(False) - text_encoder.text_model.embeddings.position_embedding.requires_grad_(False) - - if args.gradient_checkpointing: - # Keep unet in train mode if we are using gradient checkpointing to save memory. - # The dropout cannot be != 0 so it doesn't matter if we are in eval or train mode. - unet.train() - text_encoder.gradient_checkpointing_enable() - unet.enable_gradient_checkpointing() - - if args.enable_xformers_memory_efficient_attention: - if is_xformers_available(): - import xformers - - xformers_version = version.parse(xformers.__version__) - if xformers_version == version.parse("0.0.16"): - logger.warn( - "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details." - ) - unet.enable_xformers_memory_efficient_attention() - else: - raise ValueError("xformers is not available. Make sure it is installed correctly") - - # Enable TF32 for faster training on Ampere GPUs, - # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices - if args.allow_tf32: - torch.backends.cuda.matmul.allow_tf32 = True - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Initialize the optimizer - optimizer = torch.optim.AdamW( - text_encoder.get_input_embeddings().parameters(), # only optimize the embeddings - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - # Dataset and DataLoaders creation: - train_dataset = TextualInversionDataset( - data_root=args.train_data_dir, - tokenizer=tokenizer, - size=args.resolution, - placeholder_token=args.placeholder_token, - repeats=args.repeats, - learnable_property=args.learnable_property, - center_crop=args.center_crop, - set="train", - ) - train_dataloader = torch.utils.data.DataLoader( - train_dataset, batch_size=args.train_batch_size, shuffle=True, num_workers=args.dataloader_num_workers - ) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes, - num_training_steps=args.max_train_steps * accelerator.num_processes, - ) - - # Prepare everything with our `accelerator`. - text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - text_encoder, optimizer, train_dataloader, lr_scheduler - ) - - # For mixed precision training we cast the unet and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. - weight_dtype = torch.float32 - if accelerator.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif accelerator.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move vae and unet to device and cast to weight_dtype - unet.to(accelerator.device, dtype=weight_dtype) - vae.to(accelerator.device, dtype=weight_dtype) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("textual_inversion", config=vars(args)) - - # Train! - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - global_step = 0 - first_epoch = 0 - - # Potentially load in the weights and states from a previous save - if args.resume_from_checkpoint: - if args.resume_from_checkpoint != "latest": - path = os.path.basename(args.resume_from_checkpoint) - else: - # Get the most recent checkpoint - dirs = os.listdir(args.output_dir) - dirs = [d for d in dirs if d.startswith("checkpoint")] - dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) - path = dirs[-1] if len(dirs) > 0 else None - - if path is None: - accelerator.print( - f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." - ) - args.resume_from_checkpoint = None - else: - accelerator.print(f"Resuming from checkpoint {path}") - accelerator.load_state(os.path.join(args.output_dir, path)) - global_step = int(path.split("-")[1]) - - resume_global_step = global_step * args.gradient_accumulation_steps - first_epoch = global_step // num_update_steps_per_epoch - resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps) - - # Only show the progress bar once on each machine. - progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - - # keep original embeddings as reference - orig_embeds_params = accelerator.unwrap_model(text_encoder).get_input_embeddings().weight.data.clone() - - for epoch in range(first_epoch, args.num_train_epochs): - text_encoder.train() - for step, batch in enumerate(train_dataloader): - # Skip steps until we reach the resumed step - if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step: - if step % args.gradient_accumulation_steps == 0: - progress_bar.update(1) - continue - if args.progressive_tokens: - train_dataset.prop_tokens_to_load = float(global_step) / args.progressive_tokens_max_steps - - with accelerator.accumulate(text_encoder): - # Convert images to latent space - latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample().detach() - latents = latents * vae.config.scaling_factor - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - encoder_hidden_states = text_encoder(batch["input_ids"])[0].to(dtype=weight_dtype) - - # Predict the noise residual - model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - accelerator.backward(loss) - - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Let's make sure we don't update any embedding weights besides the newly added token - index_no_updates = get_mask(tokenizer, accelerator) - with torch.no_grad(): - accelerator.unwrap_model(text_encoder).get_input_embeddings().weight[ - index_no_updates - ] = orig_embeds_params[index_no_updates] - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - if global_step % args.save_steps == 0: - save_path = os.path.join(args.output_dir, f"learned_embeds-steps-{global_step}.bin") - save_progress(tokenizer, text_encoder, accelerator, save_path) - - if global_step % args.checkpointing_steps == 0: - if accelerator.is_main_process: - save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") - accelerator.save_state(save_path) - logger.info(f"Saved state to {save_path}") - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - accelerator.log(logs, step=global_step) - - if global_step >= args.max_train_steps: - break - - if accelerator.is_main_process and args.validation_prompt is not None and epoch % args.validation_epochs == 0: - logger.info( - f"Running validation... \n Generating {args.num_validation_images} images with prompt:" - f" {args.validation_prompt}." - ) - # create pipeline (note: unet and vae are loaded again in float32) - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - text_encoder=accelerator.unwrap_model(text_encoder), - tokenizer=tokenizer, - unet=unet, - vae=vae, - revision=args.revision, - torch_dtype=weight_dtype, - ) - pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) - pipeline = pipeline.to(accelerator.device) - pipeline.set_progress_bar_config(disable=True) - - # run inference - generator = ( - None if args.seed is None else torch.Generator(device=accelerator.device).manual_seed(args.seed) - ) - images = [] - for _ in range(args.num_validation_images): - with torch.autocast("cuda"): - image = pipeline(args.validation_prompt, num_inference_steps=25, generator=generator).images[0] - images.append(image) - - for tracker in accelerator.trackers: - if tracker.name == "tensorboard": - np_images = np.stack([np.asarray(img) for img in images]) - tracker.writer.add_images("validation", np_images, epoch, dataformats="NHWC") - if tracker.name == "wandb": - tracker.log( - { - "validation": [ - wandb.Image(image, caption=f"{i}: {args.validation_prompt}") - for i, image in enumerate(images) - ] - } - ) - - del pipeline - torch.cuda.empty_cache() - - # Create the pipeline using using the trained modules and save it. - accelerator.wait_for_everyone() - if accelerator.is_main_process: - if args.push_to_hub and args.only_save_embeds: - logger.warn("Enabling full model saving because --push_to_hub=True was specified.") - save_full_model = True - else: - save_full_model = not args.only_save_embeds - if save_full_model: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - text_encoder=accelerator.unwrap_model(text_encoder), - vae=vae, - unet=unet, - tokenizer=tokenizer, - ) - pipeline.save_pretrained(args.output_dir) - # Save the newly trained embeddings - save_path = os.path.join(args.output_dir, "learned_embeds.bin") - save_progress(tokenizer, text_encoder, accelerator, save_path) - - if args.push_to_hub: - upload_folder( - repo_id=repo_id, - folder_path=args.output_dir, - commit_message="End of training", - ignore_patterns=["step_*", "epoch_*"], - ) - - accelerator.end_training() - - -if __name__ == "__main__": - main() diff --git a/spaces/patgpt4/MusicGen/tests/modules/__init__.py b/spaces/patgpt4/MusicGen/tests/modules/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/patgpt4/MusicGen/tests/modules/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/pietrolesci/wordify/src/components.py b/spaces/pietrolesci/wordify/src/components.py deleted file mode 100644 index 454f65a20ffff9c0c9cb018a8ec5473bf46c4372..0000000000000000000000000000000000000000 --- a/spaces/pietrolesci/wordify/src/components.py +++ /dev/null @@ -1,529 +0,0 @@ -import time - -import pandas as pd -import streamlit as st - -from src.configs import ColumnNames, Languages, PreprocessingConfigs, SupportedFiles -from src.preprocessing import PreprocessingPipeline -from src.utils import get_col_indices -from src.wordifier import input_transform, output_transform, wordifier - - -def docs(): - steps_options = list(PreprocessingPipeline.pipeline_components().keys()) - - with st.expander("Documentation for the Advanced Options"): - component_name = st.selectbox( - "Select a processing step to see docs", - options=[""] + steps_options, - index=1, - format_func=lambda x: x.replace("_", " ").title(), - help="Select a processing step to see the relative documentation", - ) - - pipe_component = PreprocessingPipeline.pipeline_components().get(component_name) - if pipe_component is not None: - st.help(pipe_component) - - -def form(df): - st.subheader("Parameters") - with st.form("Wordify form"): - col1, col2, col3 = st.columns(3) - cols = [""] + df.columns.tolist() - text_index, label_index = get_col_indices(cols) - with col1: - label_column = st.selectbox( - "Select label column", - cols, - index=label_index, - help="Select the column containing the labels", - ) - with col2: - text_column = st.selectbox( - "Select text column", - cols, - index=text_index, - help="Select the column containing the text", - ) - with col3: - language = st.selectbox( - "Select language", - [i.name for i in Languages], - help=""" - Select the language of your texts amongst the supported one. If we currently do - not support it, feel free to open an issue - """, - ) - - with st.expander("Advanced Options"): - disable_preprocessing = st.checkbox("Disable Preprocessing", False) - - if not disable_preprocessing: - steps_options = list(PreprocessingPipeline.pipeline_components().keys()) - - pre_steps = st.multiselect( - "Select pre-lemmatization processing steps (ordered)", - options=steps_options, - default=[ - steps_options[i] for i in PreprocessingConfigs.DEFAULT_PRE.value - ], - format_func=lambda x: x.replace("_", " ").title(), - help="Select the processing steps to apply before the text is lemmatized", - ) - - lammatization_options = list( - PreprocessingPipeline.lemmatization_component().keys() - ) - lemmatization_step = st.selectbox( - "Select lemmatization", - options=lammatization_options, - index=PreprocessingConfigs.DEFAULT_LEMMA.value, - help="Select lemmatization procedure. This is automatically disabled when the selected language is Chinese or MultiLanguage.", - ) - - post_steps = st.multiselect( - "Select post-lemmatization processing steps (ordered)", - options=steps_options, - default=[ - steps_options[i] - for i in PreprocessingConfigs.DEFAULT_POST.value - ], - format_func=lambda x: x.replace("_", " ").title(), - help="Select the processing steps to apply after the text is lemmatized", - ) - - # Every form must have a submit button. - submitted = st.form_submit_button("Submit") - if submitted: - - start_time = time.time() - - # warnings about inputs - language_specific_warnings( - pre_steps, post_steps, lemmatization_step, language - ) - - # preprocess - if not disable_preprocessing: - with st.spinner("Step 1/4: Preprocessing text"): - pipe = PreprocessingPipeline( - language, pre_steps, lemmatization_step, post_steps - ) - df = pipe.vaex_process(df, text_column) - else: - with st.spinner( - "Step 1/4: Preprocessing has been disabled - doing nothing" - ): - df = df.rename( - columns={text_column: ColumnNames.PROCESSED_TEXT.value} - ) - time.sleep(1.2) - - # prepare input - with st.spinner("Step 2/4: Preparing inputs"): - input_dict = input_transform( - df[ColumnNames.PROCESSED_TEXT.value], df[label_column] - ) - - # wordify - with st.spinner("Step 3/4: Wordifying"): - pos, neg = wordifier(**input_dict) - - # prepare output - with st.spinner("Step 4/4: Preparing outputs"): - new_df = output_transform(pos, neg) - - end_time = time.time() - meta_data = { - "vocab_size": input_dict["X"].shape[1], - "n_instances": input_dict["X"].shape[0], - "vocabulary": pd.DataFrame({"Vocabulary": input_dict["X_names"]}), - "labels": pd.DataFrame({"Labels": input_dict["y_names"]}), - "time": round(end_time - start_time), - } - - return new_df, meta_data - - -def presentation(): - st.markdown( - """ - Wordify makes it easy to identify words that discriminate categories in textual data. - It was proposed by Dirk Hovy, Shiri Melumad, and Jeffrey J Inman in - [Wordify: A Tool for Discovering and Differentiating Consumer Vocabularies](https://academic.oup.com/jcr/article/48/3/394/6199426). - - :point_left: Start by uploading a file. *Once you upload the file, __Wordify__ will - show an interactive UI*. - """ - ) - - st.subheader("Quickstart") - st.markdown( - """ - - There is no need to preprocess your text, we will take care of it. However, if you wish to - do so, turn off preprocessing in the `Advanced Settings` in the interactive UI. - - - We expect a file with two columns: `label` with the labels and `text` with the texts (the names are case insensitive). If - you provide a file following this naming convention, Wordify will automatically select the - correct columns. However, if you wish to use a different nomenclature, you will be asked to - provide the column names in the interactive UI. - - - Maintain a stable connection with the Wordify page until you download the data. If you refresh the page, - a new Wordify session is created and your progress is lost. - - - Wordify performances depend on the length of the individual texts in your file. The longer the texts, the higher - the chance that Wordify considers many n-grams. More n-grams means more data to analyse in each run. - We tailored Wordify performance for files of approximately 5'000 lines or 50k n-grams. In such cases we expect a runtime - between 90 seconds and 10 minutes. If your file is big, try to apply a stricter preprocessing of the text in the `Advanced Options` section. - If this is not enough, please do feel free to reach out to us directly so we can help. - """ - ) - - how_to_use() - how_it_works() - - -def how_to_use(): - with st.expander("How to use Wordify"): - - st.subheader("Input format") - st.markdown( - """ - Please note that your file must have a column with the texts and a column with the labels, - for example - """ - ) - st.table( - { - "text": ["A review", "Another review", "Yet another one", "etc"], - "label": ["Good", "Bad", "Good", "etc"], - } - ) - - st.subheader("Output format") - st.markdown( - """ - As a result of the process, you will get a file containing 4 columns: - - `Word`: the n-gram (i.e., a word or a concatenation of words) considered - - `Score`: the wordify score, between 0 and 1, of how important is `Word` to discrimitate `Label` - - `Label`: the label that `Word` is discriminating - - `Correlation`: how `Word` is correlated with `Label` (e.g., "negative" means that if `Word` is present in the text then the label is less likely to be `Label`) - - for example - """ - ) - - st.table( - { - "Word": ["good", "awful", "bad service", "etc"], - "Score": ["0.52", "0.49", "0.35", "etc"], - "Label": ["Good", "Bad", "Good", "etc"], - "Correlation": ["positive", "positive", "negative", "etc"], - } - ) - - -def how_it_works(): - table2 = pd.DataFrame( - { - "Text": [ - "Spice light wine", - "Wine oak heavy", - "Chardonnay buttery light", - "Wine light cherry", - "Chardonnay wine oak buttery", - ], - "Label": [ - "Italy", - "United States", - "United States", - "Italy", - "United States", - ], - } - ) - - table3 = pd.DataFrame( - { - "Model": [1, 2, 3, 4], - "Buttery": [0.32, 0, 0, 0], - "Chardonnay": [3.78, 0, 0, 0], - "Cherry": [-2.49, 0, 0, -6.2], - "Heavy": [0, 3.62, 0, 0], - "Light": [-1.72, -4.38, 0, 0], - "Oak": [0, 0, 0, 0], - "Spice": [-2.49, 0, -6.2, 0], - "Wine": [0, 0, 0, 0], - }, - dtype=str, - ) - - table4 = pd.DataFrame( - { - "Coefficient valence": ["positive", "negative"], - "Buttery": [0.25, 0], - "Chardonnay": [0.25, 0], - "Cherry": [0, 0.5], - "Heavy": [0.25, 0], - "Light": [0, 0.5], - "Oak": [0, 0], - "Spice": [0, 0.5], - "Wine": [0, 0], - }, - dtype=str, - ) - - with st.expander("How Wordify works: an illustrative example"): - st.markdown( - f""" - To provide an intuitive example of how Wordify works, imagine we have the following five documents with hypothetical - descriptions of wines from the United States and Italy listed in table 2 (preprocessed to remove noise words). - """ - ) - st.caption("Table 2: Descriptions of wines from the USA and Italy.") - st.table(table2) - - st.markdown( - """ - Wordify now draws, say, four independent samples from this data, for example: `(1,3,4,5)`, `(1,2,2,4)`, `(1,1,2,3)`, and `(2,3,4,4)`. - We fit an L1-regularized Logistic Regression on each, with the United States as target class. This result in the following sparse - vectors of coefficients reported in table 3 (indicators that are not present in a run are listed as 0 here): - """ - ) - st.caption( - "Table 3: Coefficients for frequency of indicators in each of the four runs for US wines." - ) - st.table(table3) - - st.markdown( - """ - We can now count for each indicator how many times out of the four runs it received a non-zero coefficient (the magnitude does not matter). - We distinguish by positive and negative coefficients, and divide the result by the number of runs (here, four), which yields the final indicators - that are positively and negatively correlated with the US wines. - """ - ) - st.caption( - "Table 4: Final set of indicators that are positively versus negatively correlated with US wines." - ) - st.table(table4) - st.markdown( - """ - The results of table 4 suggest that a wine is likely to be from the United States if its description contains any of the following words: "buttery", - "chardonnay", or "heavy", and these words are similarly discriminative. In contrast, a wine is likely to not be from the United States if it contains - the words "spice", "light", or "cherry". It is also worth noting that "oak" and "wine", which were present for both Italian and US wines, were ultimately - not selected as discriminative indicators of US wines. Finally, we would conduct an analogous analysis with Italy as the target class to determine which - indicators are most and least discriminative of Italian wines. - """ - ) - - -def faq(): - st.subheader("Frequently Asked Questions") - with st.expander("What is Wordify?"): - st.markdown( - """ - __Wordify__ is a way to find out which n-grams (i.e., words and concatenations of words) are most indicative for each of your dependent - variable values. - """ - ) - - with st.expander("What happens to my data?"): - st.markdown( - """ - Nothing. We never store the data you upload on disk: it is only kept in memory for the - duration of the modeling, and then deleted. We do not retain any copies or traces of - your data. - """ - ) - - with st.expander("What input formats do you support?"): - st.markdown( - f""" - We currently support {", ".join([i.name for i in SupportedFiles])}. - """ - ) - - with st.expander("Do I need to preprocess my data?"): - st.markdown( - """ - No, there is no need to preprocess your text, we will take of it. - However, if you wish to do so, turn off preprocessing in the `Advanced - Settings` in the interactive UI. - """ - ) - - with st.expander("What languages are supported?"): - st.markdown( - f""" - Currently we support: {", ".join([i.name for i in Languages])}. - """ - ) - - with st.expander("How does it work?"): - st.markdown( - """ - It uses a variant of the Stability Selection algorithm - [(Meinshausen and Bühlmann, 2010)](https://rss.onlinelibrary.wiley.com/doi/full/10.1111/j.1467-9868.2010.00740.x) - to fit hundreds of logistic regression models on random subsets of the data, using - different L1 penalties to drive as many of the term coefficients to 0. Any terms that - receive a non-zero coefficient in at least 30% of all model runs can be seen as stable - indicators. - """ - ) - - with st.expander("What libraries do you use?"): - st.markdown( - """ - We leverage the power of many great libraries in the Python ecosystem: - - `Streamlit` - - `Pandas` - - `Numpy` - - `Spacy` - - `Scikit-learn` - - `Vaex` - """ - ) - - with st.expander("How much data do I need?"): - st.markdown( - """ - We recommend at least 2000 instances, the more, the better. With fewer instances, the - results are less replicable and reliable. - """ - ) - - with st.expander("Is there a paper I can cite?"): - st.markdown( - """ - Yes, please! Cite [Wordify: A Tool for Discovering and Differentiating Consumer Vocabularies](https://academic.oup.com/jcr/article/48/3/394/6199426) - ``` - @article{10.1093/jcr/ucab018, - author = {Hovy, Dirk and Melumad, Shiri and Inman, J Jeffrey}, - title = "{Wordify: A Tool for Discovering and Differentiating Consumer Vocabularies}", - journal = {Journal of Consumer Research}, - volume = {48}, - number = {3}, - pages = {394-414}, - year = {2021}, - month = {03}, - abstract = "{This work describes and illustrates a free and easy-to-use online text-analysis tool for understanding how consumer word use varies across contexts. The tool, Wordify, uses randomized logistic regression (RLR) to identify the words that best discriminate texts drawn from different pre-classified corpora, such as posts written by men versus women, or texts containing mostly negative versus positive valence. We present illustrative examples to show how the tool can be used for such diverse purposes as (1) uncovering the distinctive vocabularies that consumers use when writing reviews on smartphones versus PCs, (2) discovering how the words used in Tweets differ between presumed supporters and opponents of a controversial ad, and (3) expanding the dictionaries of dictionary-based sentiment-measurement tools. We show empirically that Wordify’s RLR algorithm performs better at discriminating vocabularies than support vector machines and chi-square selectors, while offering significant advantages in computing time. A discussion is also provided on the use of Wordify in conjunction with other text-analysis tools, such as probabilistic topic modeling and sentiment analysis, to gain more profound knowledge of the role of language in consumer behavior.}", - issn = {0093-5301}, - doi = {10.1093/jcr/ucab018}, - url = {https://doi.org/10.1093/jcr/ucab018}, - eprint = {https://academic.oup.com/jcr/article-pdf/48/3/394/40853499/ucab018.pdf}, - } - ``` - """ - ) - - with st.expander("How can I reach out to the Wordify team?"): - st.markdown(contacts(), unsafe_allow_html=True) - - -def footer(): - st.sidebar.markdown( - """ - Built with ♥ by [`Pietro Lesci`](https://pietrolesci.github.io/) and [`MilaNLP`](https://twitter.com/MilaNLProc?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor). - """, - unsafe_allow_html=True, - ) - - -def contacts(): - return """ - You can reach out to us via email, phone, or via mail - - - :email: wordify@unibocconi.it - - - :telephone_receiver: +39 02 5836 2604 - - - :postbox: Via Röntgen n. 1, Milan 20136 (ITALY) - - - - """ - - -def analysis(outputs): - - df, meta_data = outputs - - st.subheader("Results") - st.markdown( - """ - Wordify successfully run and you can now look at the results before downloading the wordified file. - In particular, you can use the slider to filter only those words that have a `Score` above (>=) a certain threshold. - For meaningful results, we suggest keeping the threshold to 0.25. - """ - ) - - col1, col2 = st.columns([2, 1]) - - with col1: - threshold = st.slider( - "Select threshold", - min_value=0.0, - max_value=1.0, - step=0.01, - value=0.25, - help="To return everything, select 0.", - ) - subset_df = df.loc[df["Score"] >= threshold].reset_index(drop=True) - st.write(subset_df) - - with col2: - st.markdown("**Some info about your data**") - st.markdown( - f""" - Your input file contained {meta_data["n_instances"]:,} rows and - Wordify took {meta_data["time"]:,} seconds to run. - - The total number of n-grams Wordify considered is {meta_data["vocab_size"]:,}. - With the current selected threshold on the `Score` (>={threshold}) the output contains {subset_df["Word"].nunique():,} - unique n-grams. - """ - ) - - with st.expander("Vocabulary"): - st.markdown( - "The table below shows all candidate n-grams that Wordify considered" - ) - st.write(meta_data["vocabulary"]) - - with st.expander("Labels"): - st.markdown( - "The table below summarizes the labels that your file contained" - ) - st.write(meta_data["labels"]) - - return subset_df - - -# warning for Chinese and MultiLanguage -def language_specific_warnings(pre_steps, post_steps, lemmatization_step, language): - - if language in ("MultiLanguage", "Chinese") and ( - "remove_non_words" in pre_steps or "remove_non_words" in post_steps - ): - msg = """ - NOTE: for Chinese and MultiLanguage we automatically substitute `remove_non_words` with - `remove_numbers` and `remove_punctuation` to avoid wrong results. - """ - st.info(msg) - - msg = "NOTE: for Chinese and MultiLanguage we turn-off lemmatization automatically." - if lemmatization_step == "Spacy lemmatizer (keep stopwords)" and language in ( - "MultiLanguage", - "Chinese", - ): - st.info(msg) - - elif lemmatization_step == "Spacy lemmatizer (remove stopwords)" and language in ( - "MultiLanguage", - "Chinese", - ): - st.info( - msg - + " However we will still remove stopwords since you selected `Spacy lemmatizer (remove stopwords)`." - ) diff --git a/spaces/pplonski/NLP-SpaCy-Mercury/app.py b/spaces/pplonski/NLP-SpaCy-Mercury/app.py deleted file mode 100644 index 417c30fed08557a32b544709f1e6b5974349b717..0000000000000000000000000000000000000000 --- a/spaces/pplonski/NLP-SpaCy-Mercury/app.py +++ /dev/null @@ -1,8 +0,0 @@ -import os -from dotenv import load_dotenv -from subprocess import Popen -load_dotenv() - -command = ["mercury", "run", f"0.0.0.0:{os.environ.get('PORT', 7860)}"] -worker = Popen(command) -worker.wait() \ No newline at end of file diff --git a/spaces/ppsingh/cpu-demo/appStore/__init__.py b/spaces/ppsingh/cpu-demo/appStore/__init__.py deleted file mode 100644 index 07c25e5348471a19c24ca6468ee403b472c68906..0000000000000000000000000000000000000000 --- a/spaces/ppsingh/cpu-demo/appStore/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# creating appstore package \ No newline at end of file diff --git a/spaces/pragnakalp/Audio_Emotion_Recognition/README.md b/spaces/pragnakalp/Audio_Emotion_Recognition/README.md deleted file mode 100644 index 77073f45b5bfaf6d9445c7f04bd3228fffd3a090..0000000000000000000000000000000000000000 --- a/spaces/pragnakalp/Audio_Emotion_Recognition/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Audio Emotion Recognition -emoji: 🎼 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.13.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/prerna9811/Chord/portaudio/src/common/pa_hostapi.h b/spaces/prerna9811/Chord/portaudio/src/common/pa_hostapi.h deleted file mode 100644 index 4ac3ab60e9299f32e5ec78912fc199d6cfdfbdf3..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/src/common/pa_hostapi.h +++ /dev/null @@ -1,362 +0,0 @@ -#ifndef PA_HOSTAPI_H -#define PA_HOSTAPI_H -/* - * $Id$ - * Portable Audio I/O Library - * host api representation - * - * Based on the Open Source API proposed by Ross Bencina - * Copyright (c) 1999-2008 Ross Bencina, Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -/** @file - @ingroup common_src - - @brief Interfaces and representation structures used by pa_front.c - to manage and communicate with host API implementations. -*/ - -#include "portaudio.h" - -/** -The PA_NO_* host API macros are now deprecated in favor of PA_USE_* macros. -PA_USE_* indicates whether a particular host API will be initialized by PortAudio. -An undefined or 0 value indicates that the host API will not be used. A value of 1 -indicates that the host API will be used. PA_USE_* macros should be left undefined -or defined to either 0 or 1. - -The code below ensures that PA_USE_* macros are always defined and have value -0 or 1. Undefined symbols are defaulted to 0. Symbols that are neither 0 nor 1 -are defaulted to 1. -*/ - -#ifndef PA_USE_SKELETON -#define PA_USE_SKELETON 0 -#elif (PA_USE_SKELETON != 0) && (PA_USE_SKELETON != 1) -#undef PA_USE_SKELETON -#define PA_USE_SKELETON 1 -#endif - -#if defined(PA_NO_ASIO) || defined(PA_NO_DS) || defined(PA_NO_WMME) || defined(PA_NO_WASAPI) || defined(PA_NO_WDMKS) -#error "Portaudio: PA_NO_ is no longer supported, please remove definition and use PA_USE_ instead" -#endif - -#ifndef PA_USE_ASIO -#define PA_USE_ASIO 0 -#elif (PA_USE_ASIO != 0) && (PA_USE_ASIO != 1) -#undef PA_USE_ASIO -#define PA_USE_ASIO 1 -#endif - -#ifndef PA_USE_DS -#define PA_USE_DS 0 -#elif (PA_USE_DS != 0) && (PA_USE_DS != 1) -#undef PA_USE_DS -#define PA_USE_DS 1 -#endif - -#ifndef PA_USE_WMME -#define PA_USE_WMME 0 -#elif (PA_USE_WMME != 0) && (PA_USE_WMME != 1) -#undef PA_USE_WMME -#define PA_USE_WMME 1 -#endif - -#ifndef PA_USE_WASAPI -#define PA_USE_WASAPI 0 -#elif (PA_USE_WASAPI != 0) && (PA_USE_WASAPI != 1) -#undef PA_USE_WASAPI -#define PA_USE_WASAPI 1 -#endif - -#ifndef PA_USE_WDMKS -#define PA_USE_WDMKS 0 -#elif (PA_USE_WDMKS != 0) && (PA_USE_WDMKS != 1) -#undef PA_USE_WDMKS -#define PA_USE_WDMKS 1 -#endif - -/* Set default values for Unix based APIs. */ -#if defined(PA_NO_OSS) || defined(PA_NO_ALSA) || defined(PA_NO_JACK) || defined(PA_NO_COREAUDIO) || defined(PA_NO_SGI) || defined(PA_NO_ASIHPI) -#error "Portaudio: PA_NO_ is no longer supported, please remove definition and use PA_USE_ instead" -#endif - -#ifndef PA_USE_OSS -#define PA_USE_OSS 0 -#elif (PA_USE_OSS != 0) && (PA_USE_OSS != 1) -#undef PA_USE_OSS -#define PA_USE_OSS 1 -#endif - -#ifndef PA_USE_ALSA -#define PA_USE_ALSA 0 -#elif (PA_USE_ALSA != 0) && (PA_USE_ALSA != 1) -#undef PA_USE_ALSA -#define PA_USE_ALSA 1 -#endif - -#ifndef PA_USE_JACK -#define PA_USE_JACK 0 -#elif (PA_USE_JACK != 0) && (PA_USE_JACK != 1) -#undef PA_USE_JACK -#define PA_USE_JACK 1 -#endif - -#ifndef PA_USE_SGI -#define PA_USE_SGI 0 -#elif (PA_USE_SGI != 0) && (PA_USE_SGI != 1) -#undef PA_USE_SGI -#define PA_USE_SGI 1 -#endif - -#ifndef PA_USE_COREAUDIO -#define PA_USE_COREAUDIO 0 -#elif (PA_USE_COREAUDIO != 0) && (PA_USE_COREAUDIO != 1) -#undef PA_USE_COREAUDIO -#define PA_USE_COREAUDIO 1 -#endif - -#ifndef PA_USE_ASIHPI -#define PA_USE_ASIHPI 0 -#elif (PA_USE_ASIHPI != 0) && (PA_USE_ASIHPI != 1) -#undef PA_USE_ASIHPI -#define PA_USE_ASIHPI 1 -#endif - -#ifdef __cplusplus -extern "C" -{ -#endif /* __cplusplus */ - - -/** **FOR THE USE OF pa_front.c ONLY** - Do NOT use fields in this structure, they my change at any time. - Use functions defined in pa_util.h if you think you need functionality - which can be derived from here. -*/ -typedef struct PaUtilPrivatePaFrontHostApiInfo { - - - unsigned long baseDeviceIndex; -}PaUtilPrivatePaFrontHostApiInfo; - - -/** The common header for all data structures whose pointers are passed through - the hostApiSpecificStreamInfo field of the PaStreamParameters structure. - Note that in order to keep the public PortAudio interface clean, this structure - is not used explicitly when declaring hostApiSpecificStreamInfo data structures. - However, some code in pa_front depends on the first 3 members being equivalent - with this structure. - @see PaStreamParameters -*/ -typedef struct PaUtilHostApiSpecificStreamInfoHeader -{ - unsigned long size; /**< size of whole structure including this header */ - PaHostApiTypeId hostApiType; /**< host API for which this data is intended */ - unsigned long version; /**< structure version */ -} PaUtilHostApiSpecificStreamInfoHeader; - - - -/** A structure representing the interface to a host API. Contains both - concrete data and pointers to functions which implement the interface. -*/ -typedef struct PaUtilHostApiRepresentation { - PaUtilPrivatePaFrontHostApiInfo privatePaFrontInfo; - - /** The host api implementation should populate the info field. In the - case of info.defaultInputDevice and info.defaultOutputDevice the - values stored should be 0 based indices within the host api's own - device index range (0 to deviceCount). These values will be converted - to global device indices by pa_front after PaUtilHostApiInitializer() - returns. - */ - PaHostApiInfo info; - - PaDeviceInfo** deviceInfos; - - /** - (*Terminate)() is guaranteed to be called with a valid - parameter, which was previously returned from the same implementation's - initializer. - */ - void (*Terminate)( struct PaUtilHostApiRepresentation *hostApi ); - - /** - The inputParameters and outputParameters pointers should not be saved - as they will not remain valid after OpenStream is called. - - - The following guarantees are made about parameters to (*OpenStream)(): - - [NOTE: the following list up to *END PA FRONT VALIDATIONS* should be - kept in sync with the one for ValidateOpenStreamParameters and - Pa_OpenStream in pa_front.c] - - PaHostApiRepresentation *hostApi - - is valid for this implementation - - PaStream** stream - - is non-null - - - at least one of inputParameters & outputParmeters is valid (not NULL) - - - if inputParameters & outputParmeters are both valid, that - inputParameters->device & outputParmeters->device both use the same host api - - PaDeviceIndex inputParameters->device - - is within range (0 to Pa_CountDevices-1) Or: - - is paUseHostApiSpecificDeviceSpecification and - inputParameters->hostApiSpecificStreamInfo is non-NULL and refers - to a valid host api - - int inputParameters->numChannels - - if inputParameters->device is not paUseHostApiSpecificDeviceSpecification, numInputChannels is > 0 - - upper bound is NOT validated against device capabilities - - PaSampleFormat inputParameters->sampleFormat - - is one of the sample formats defined in portaudio.h - - void *inputParameters->hostApiSpecificStreamInfo - - if supplied its hostApi field matches the input device's host Api - - PaDeviceIndex outputParmeters->device - - is within range (0 to Pa_CountDevices-1) - - int outputParmeters->numChannels - - if inputDevice is valid, numInputChannels is > 0 - - upper bound is NOT validated against device capabilities - - PaSampleFormat outputParmeters->sampleFormat - - is one of the sample formats defined in portaudio.h - - void *outputParmeters->hostApiSpecificStreamInfo - - if supplied its hostApi field matches the output device's host Api - - double sampleRate - - is not an 'absurd' rate (less than 1000. or greater than 384000.) - - sampleRate is NOT validated against device capabilities - - PaStreamFlags streamFlags - - unused platform neutral flags are zero - - paNeverDropInput is only used for full-duplex callback streams - with variable buffer size (paFramesPerBufferUnspecified) - - [*END PA FRONT VALIDATIONS*] - - - The following validations MUST be performed by (*OpenStream)(): - - - check that input device can support numInputChannels - - - check that input device can support inputSampleFormat, or that - we have the capability to convert from outputSampleFormat to - a native format - - - if inputStreamInfo is supplied, validate its contents, - or return an error if no inputStreamInfo is expected - - - check that output device can support numOutputChannels - - - check that output device can support outputSampleFormat, or that - we have the capability to convert from outputSampleFormat to - a native format - - - if outputStreamInfo is supplied, validate its contents, - or return an error if no outputStreamInfo is expected - - - if a full duplex stream is requested, check that the combination - of input and output parameters is supported - - - check that the device supports sampleRate - - - alter sampleRate to a close allowable rate if necessary - - - validate inputLatency and outputLatency - - - validate any platform specific flags, if flags are supplied they - must be valid. - */ - PaError (*OpenStream)( struct PaUtilHostApiRepresentation *hostApi, - PaStream** stream, - const PaStreamParameters *inputParameters, - const PaStreamParameters *outputParameters, - double sampleRate, - unsigned long framesPerCallback, - PaStreamFlags streamFlags, - PaStreamCallback *streamCallback, - void *userData ); - - - PaError (*IsFormatSupported)( struct PaUtilHostApiRepresentation *hostApi, - const PaStreamParameters *inputParameters, - const PaStreamParameters *outputParameters, - double sampleRate ); -} PaUtilHostApiRepresentation; - - -/** Prototype for the initialization function which must be implemented by every - host API. - - This function should only return an error other than paNoError if it encounters - an unexpected and fatal error (memory allocation error for example). In general, - there may be conditions under which it returns a NULL interface pointer and also - returns paNoError. For example, if the ASIO implementation detects that ASIO is - not installed, it should return a NULL interface, and paNoError. - - @see paHostApiInitializers -*/ -typedef PaError PaUtilHostApiInitializer( PaUtilHostApiRepresentation**, PaHostApiIndex ); - - -/** paHostApiInitializers is a NULL-terminated array of host API initialization - functions. These functions are called by pa_front.c to initialize the host APIs - when the client calls Pa_Initialize(). - - The initialization functions are invoked in order. - - The first successfully initialized host API that has a default input *or* output - device is used as the default PortAudio host API. This is based on the logic that - there is only one default host API, and it must contain the default input and output - devices (if defined). - - There is a platform specific file that defines paHostApiInitializers for that - platform, pa_win/pa_win_hostapis.c contains the Win32 definitions for example. -*/ -extern PaUtilHostApiInitializer *paHostApiInitializers[]; - - -#ifdef __cplusplus -} -#endif /* __cplusplus */ -#endif /* PA_HOSTAPI_H */ diff --git a/spaces/prerna9811/Chord/portaudio/src/hostapi/wasapi/mingw-include/mmdeviceapi.h b/spaces/prerna9811/Chord/portaudio/src/hostapi/wasapi/mingw-include/mmdeviceapi.h deleted file mode 100644 index a75e4758ed3824ab91d371bf3aeffbefb821bb91..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/src/hostapi/wasapi/mingw-include/mmdeviceapi.h +++ /dev/null @@ -1,929 +0,0 @@ - - -/* this ALWAYS GENERATED file contains the definitions for the interfaces */ - - - /* File created by MIDL compiler version 7.00.0499 */ -/* Compiler settings for mmdeviceapi.idl: - Oicf, W1, Zp8, env=Win32 (32b run) - protocol : dce , ms_ext, c_ext, robust - error checks: allocation ref bounds_check enum stub_data - VC __declspec() decoration level: - __declspec(uuid()), __declspec(selectany), __declspec(novtable) - DECLSPEC_UUID(), MIDL_INTERFACE() -*/ -//@@MIDL_FILE_HEADING( ) - -#pragma warning( disable: 4049 ) /* more than 64k source lines */ - - -/* verify that the version is high enough to compile this file*/ -#ifndef __REQUIRED_RPCNDR_H_VERSION__ -#define __REQUIRED_RPCNDR_H_VERSION__ 500 -#endif - -/* verify that the version is high enough to compile this file*/ -#ifndef __REQUIRED_RPCSAL_H_VERSION__ -#define __REQUIRED_RPCSAL_H_VERSION__ 100 -#endif - -#include "rpc.h" -#include "rpcndr.h" - -#ifndef __RPCNDR_H_VERSION__ -#error this stub requires an updated version of -#endif // __RPCNDR_H_VERSION__ - -#ifndef COM_NO_WINDOWS_H -#include "windows.h" -#include "ole2.h" -#endif /*COM_NO_WINDOWS_H*/ - -#ifndef __mmdeviceapi_h__ -#define __mmdeviceapi_h__ - -#if __GNUC__ >=3 -#pragma GCC system_header -#endif - -#if defined(_MSC_VER) && (_MSC_VER >= 1020) -#pragma once -#endif - -/* Forward Declarations */ - -#ifndef __IMMNotificationClient_FWD_DEFINED__ -#define __IMMNotificationClient_FWD_DEFINED__ -typedef interface IMMNotificationClient IMMNotificationClient; -#endif /* __IMMNotificationClient_FWD_DEFINED__ */ - - -#ifndef __IMMDevice_FWD_DEFINED__ -#define __IMMDevice_FWD_DEFINED__ -typedef interface IMMDevice IMMDevice; -#endif /* __IMMDevice_FWD_DEFINED__ */ - - -#ifndef __IMMDeviceCollection_FWD_DEFINED__ -#define __IMMDeviceCollection_FWD_DEFINED__ -typedef interface IMMDeviceCollection IMMDeviceCollection; -#endif /* __IMMDeviceCollection_FWD_DEFINED__ */ - - -#ifndef __IMMEndpoint_FWD_DEFINED__ -#define __IMMEndpoint_FWD_DEFINED__ -typedef interface IMMEndpoint IMMEndpoint; -#endif /* __IMMEndpoint_FWD_DEFINED__ */ - - -#ifndef __IMMDeviceEnumerator_FWD_DEFINED__ -#define __IMMDeviceEnumerator_FWD_DEFINED__ -typedef interface IMMDeviceEnumerator IMMDeviceEnumerator; -#endif /* __IMMDeviceEnumerator_FWD_DEFINED__ */ - - -#ifndef __IMMDeviceActivator_FWD_DEFINED__ -#define __IMMDeviceActivator_FWD_DEFINED__ -typedef interface IMMDeviceActivator IMMDeviceActivator; -#endif /* __IMMDeviceActivator_FWD_DEFINED__ */ - - -#ifndef __MMDeviceEnumerator_FWD_DEFINED__ -#define __MMDeviceEnumerator_FWD_DEFINED__ - -#ifdef __cplusplus -typedef class MMDeviceEnumerator MMDeviceEnumerator; -#else -typedef struct MMDeviceEnumerator MMDeviceEnumerator; -#endif /* __cplusplus */ - -#endif /* __MMDeviceEnumerator_FWD_DEFINED__ */ - - -/* header files for imported files */ -#include "unknwn.h" -#include "propsys.h" - -#ifdef __cplusplus -extern "C"{ -#endif - - -/* interface __MIDL_itf_mmdeviceapi_0000_0000 */ -/* [local] */ - -#define E_NOTFOUND HRESULT_FROM_WIN32(ERROR_NOT_FOUND) -#define E_UNSUPPORTED_TYPE HRESULT_FROM_WIN32(ERROR_UNSUPPORTED_TYPE) -#define DEVICE_STATE_ACTIVE 0x00000001 -#define DEVICE_STATE_DISABLED 0x00000002 -#define DEVICE_STATE_NOTPRESENT 0x00000004 -#define DEVICE_STATE_UNPLUGGED 0x00000008 -#define DEVICE_STATEMASK_ALL 0x0000000f -#ifdef DEFINE_PROPERTYKEY -#undef DEFINE_PROPERTYKEY -#endif -#ifdef INITGUID -#define DEFINE_PROPERTYKEY(name, l, w1, w2, b1, b2, b3, b4, b5, b6, b7, b8, pid) EXTERN_C const PROPERTYKEY name = { { l, w1, w2, { b1, b2, b3, b4, b5, b6, b7, b8 } }, pid } -#else -#define DEFINE_PROPERTYKEY(name, l, w1, w2, b1, b2, b3, b4, b5, b6, b7, b8, pid) EXTERN_C const PROPERTYKEY name -#endif // INITGUID -DEFINE_PROPERTYKEY(PKEY_AudioEndpoint_FormFactor, 0x1da5d803, 0xd492, 0x4edd, 0x8c, 0x23, 0xe0, 0xc0, 0xff, 0xee, 0x7f, 0x0e, 0); -DEFINE_PROPERTYKEY(PKEY_AudioEndpoint_ControlPanelPageProvider, 0x1da5d803, 0xd492, 0x4edd, 0x8c, 0x23, 0xe0, 0xc0, 0xff, 0xee, 0x7f, 0x0e, 1); -DEFINE_PROPERTYKEY(PKEY_AudioEndpoint_Association, 0x1da5d803, 0xd492, 0x4edd, 0x8c, 0x23, 0xe0, 0xc0, 0xff, 0xee, 0x7f, 0x0e, 2); -DEFINE_PROPERTYKEY(PKEY_AudioEndpoint_PhysicalSpeakers, 0x1da5d803, 0xd492, 0x4edd, 0x8c, 0x23, 0xe0, 0xc0, 0xff, 0xee, 0x7f, 0x0e, 3); -DEFINE_PROPERTYKEY(PKEY_AudioEndpoint_GUID, 0x1da5d803, 0xd492, 0x4edd, 0x8c, 0x23, 0xe0, 0xc0, 0xff, 0xee, 0x7f, 0x0e, 4); -DEFINE_PROPERTYKEY(PKEY_AudioEndpoint_Disable_SysFx, 0x1da5d803, 0xd492, 0x4edd, 0x8c, 0x23, 0xe0, 0xc0, 0xff, 0xee, 0x7f, 0x0e, 5); -#define ENDPOINT_SYSFX_ENABLED 0x00000000 // System Effects are enabled. -#define ENDPOINT_SYSFX_DISABLED 0x00000001 // System Effects are disabled. -DEFINE_PROPERTYKEY(PKEY_AudioEndpoint_FullRangeSpeakers, 0x1da5d803, 0xd492, 0x4edd, 0x8c, 0x23, 0xe0, 0xc0, 0xff, 0xee, 0x7f, 0x0e, 6); -DEFINE_PROPERTYKEY(PKEY_AudioEngine_DeviceFormat, 0xf19f064d, 0x82c, 0x4e27, 0xbc, 0x73, 0x68, 0x82, 0xa1, 0xbb, 0x8e, 0x4c, 0); -typedef struct tagDIRECTX_AUDIO_ACTIVATION_PARAMS - { - DWORD cbDirectXAudioActivationParams; - GUID guidAudioSession; - DWORD dwAudioStreamFlags; - } DIRECTX_AUDIO_ACTIVATION_PARAMS; - -typedef struct tagDIRECTX_AUDIO_ACTIVATION_PARAMS *PDIRECTX_AUDIO_ACTIVATION_PARAMS; - -typedef /* [public][public][public][public][public] */ -enum __MIDL___MIDL_itf_mmdeviceapi_0000_0000_0001 - { eRender = 0, - eCapture = ( eRender + 1 ) , - eAll = ( eCapture + 1 ) , - EDataFlow_enum_count = ( eAll + 1 ) - } EDataFlow; - -typedef /* [public][public][public] */ -enum __MIDL___MIDL_itf_mmdeviceapi_0000_0000_0002 - { eConsole = 0, - eMultimedia = ( eConsole + 1 ) , - eCommunications = ( eMultimedia + 1 ) , - ERole_enum_count = ( eCommunications + 1 ) - } ERole; - -typedef /* [public] */ -enum __MIDL___MIDL_itf_mmdeviceapi_0000_0000_0003 - { RemoteNetworkDevice = 0, - Speakers = ( RemoteNetworkDevice + 1 ) , - LineLevel = ( Speakers + 1 ) , - Headphones = ( LineLevel + 1 ) , - Microphone = ( Headphones + 1 ) , - Headset = ( Microphone + 1 ) , - Handset = ( Headset + 1 ) , - UnknownDigitalPassthrough = ( Handset + 1 ) , - SPDIF = ( UnknownDigitalPassthrough + 1 ) , - HDMI = ( SPDIF + 1 ) , - UnknownFormFactor = ( HDMI + 1 ) - } EndpointFormFactor; - - - -extern RPC_IF_HANDLE __MIDL_itf_mmdeviceapi_0000_0000_v0_0_c_ifspec; -extern RPC_IF_HANDLE __MIDL_itf_mmdeviceapi_0000_0000_v0_0_s_ifspec; - -#ifndef __IMMNotificationClient_INTERFACE_DEFINED__ -#define __IMMNotificationClient_INTERFACE_DEFINED__ - -/* interface IMMNotificationClient */ -/* [unique][helpstring][nonextensible][uuid][local][object] */ - - -EXTERN_C const IID IID_IMMNotificationClient; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("7991EEC9-7E89-4D85-8390-6C703CEC60C0") - IMMNotificationClient : public IUnknown - { - public: - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE OnDeviceStateChanged( - /* [in] */ - __in LPCWSTR pwstrDeviceId, - /* [in] */ - __in DWORD dwNewState) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE OnDeviceAdded( - /* [in] */ - __in LPCWSTR pwstrDeviceId) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE OnDeviceRemoved( - /* [in] */ - __in LPCWSTR pwstrDeviceId) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE OnDefaultDeviceChanged( - /* [in] */ - __in EDataFlow flow, - /* [in] */ - __in ERole role, - /* [in] */ - __in LPCWSTR pwstrDefaultDeviceId) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE OnPropertyValueChanged( - /* [in] */ - __in LPCWSTR pwstrDeviceId, - /* [in] */ - __in const PROPERTYKEY key) = 0; - - }; - -#else /* C style interface */ - - typedef struct IMMNotificationClientVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IMMNotificationClient * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IMMNotificationClient * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IMMNotificationClient * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *OnDeviceStateChanged )( - IMMNotificationClient * This, - /* [in] */ - __in LPCWSTR pwstrDeviceId, - /* [in] */ - __in DWORD dwNewState); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *OnDeviceAdded )( - IMMNotificationClient * This, - /* [in] */ - __in LPCWSTR pwstrDeviceId); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *OnDeviceRemoved )( - IMMNotificationClient * This, - /* [in] */ - __in LPCWSTR pwstrDeviceId); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *OnDefaultDeviceChanged )( - IMMNotificationClient * This, - /* [in] */ - __in EDataFlow flow, - /* [in] */ - __in ERole role, - /* [in] */ - __in LPCWSTR pwstrDefaultDeviceId); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *OnPropertyValueChanged )( - IMMNotificationClient * This, - /* [in] */ - __in LPCWSTR pwstrDeviceId, - /* [in] */ - __in const PROPERTYKEY key); - - END_INTERFACE - } IMMNotificationClientVtbl; - - interface IMMNotificationClient - { - CONST_VTBL struct IMMNotificationClientVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IMMNotificationClient_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IMMNotificationClient_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IMMNotificationClient_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IMMNotificationClient_OnDeviceStateChanged(This,pwstrDeviceId,dwNewState) \ - ( (This)->lpVtbl -> OnDeviceStateChanged(This,pwstrDeviceId,dwNewState) ) - -#define IMMNotificationClient_OnDeviceAdded(This,pwstrDeviceId) \ - ( (This)->lpVtbl -> OnDeviceAdded(This,pwstrDeviceId) ) - -#define IMMNotificationClient_OnDeviceRemoved(This,pwstrDeviceId) \ - ( (This)->lpVtbl -> OnDeviceRemoved(This,pwstrDeviceId) ) - -#define IMMNotificationClient_OnDefaultDeviceChanged(This,flow,role,pwstrDefaultDeviceId) \ - ( (This)->lpVtbl -> OnDefaultDeviceChanged(This,flow,role,pwstrDefaultDeviceId) ) - -#define IMMNotificationClient_OnPropertyValueChanged(This,pwstrDeviceId,key) \ - ( (This)->lpVtbl -> OnPropertyValueChanged(This,pwstrDeviceId,key) ) - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IMMNotificationClient_INTERFACE_DEFINED__ */ - - -#ifndef __IMMDevice_INTERFACE_DEFINED__ -#define __IMMDevice_INTERFACE_DEFINED__ - -/* interface IMMDevice */ -/* [unique][helpstring][nonextensible][uuid][local][object] */ - - -EXTERN_C const IID IID_IMMDevice; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("D666063F-1587-4E43-81F1-B948E807363F") - IMMDevice : public IUnknown - { - public: - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE Activate( - /* [in] */ - __in REFIID iid, - /* [in] */ - __in DWORD dwClsCtx, - /* [unique][in] */ - __in_opt PROPVARIANT *pActivationParams, - /* [iid_is][out] */ - __out void **ppInterface) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE OpenPropertyStore( - /* [in] */ - __in DWORD stgmAccess, - /* [out] */ - __out IPropertyStore **ppProperties) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetId( - /* [out] */ - __deref_out LPWSTR *ppstrId) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetState( - /* [out] */ - __out DWORD *pdwState) = 0; - - }; - -#else /* C style interface */ - - typedef struct IMMDeviceVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IMMDevice * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IMMDevice * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IMMDevice * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *Activate )( - IMMDevice * This, - /* [in] */ - __in REFIID iid, - /* [in] */ - __in DWORD dwClsCtx, - /* [unique][in] */ - __in_opt PROPVARIANT *pActivationParams, - /* [iid_is][out] */ - __out void **ppInterface); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *OpenPropertyStore )( - IMMDevice * This, - /* [in] */ - __in DWORD stgmAccess, - /* [out] */ - __out IPropertyStore **ppProperties); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetId )( - IMMDevice * This, - /* [out] */ - __deref_out LPWSTR *ppstrId); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetState )( - IMMDevice * This, - /* [out] */ - __out DWORD *pdwState); - - END_INTERFACE - } IMMDeviceVtbl; - - interface IMMDevice - { - CONST_VTBL struct IMMDeviceVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IMMDevice_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IMMDevice_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IMMDevice_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IMMDevice_Activate(This,iid,dwClsCtx,pActivationParams,ppInterface) \ - ( (This)->lpVtbl -> Activate(This,iid,dwClsCtx,pActivationParams,ppInterface) ) - -#define IMMDevice_OpenPropertyStore(This,stgmAccess,ppProperties) \ - ( (This)->lpVtbl -> OpenPropertyStore(This,stgmAccess,ppProperties) ) - -#define IMMDevice_GetId(This,ppstrId) \ - ( (This)->lpVtbl -> GetId(This,ppstrId) ) - -#define IMMDevice_GetState(This,pdwState) \ - ( (This)->lpVtbl -> GetState(This,pdwState) ) - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IMMDevice_INTERFACE_DEFINED__ */ - - -#ifndef __IMMDeviceCollection_INTERFACE_DEFINED__ -#define __IMMDeviceCollection_INTERFACE_DEFINED__ - -/* interface IMMDeviceCollection */ -/* [unique][helpstring][nonextensible][uuid][local][object] */ - - -EXTERN_C const IID IID_IMMDeviceCollection; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("0BD7A1BE-7A1A-44DB-8397-CC5392387B5E") - IMMDeviceCollection : public IUnknown - { - public: - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetCount( - /* [out] */ - __out UINT *pcDevices) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE Item( - /* [in] */ - __in UINT nDevice, - /* [out] */ - __out IMMDevice **ppDevice) = 0; - - }; - -#else /* C style interface */ - - typedef struct IMMDeviceCollectionVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IMMDeviceCollection * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IMMDeviceCollection * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IMMDeviceCollection * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetCount )( - IMMDeviceCollection * This, - /* [out] */ - __out UINT *pcDevices); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *Item )( - IMMDeviceCollection * This, - /* [in] */ - __in UINT nDevice, - /* [out] */ - __out IMMDevice **ppDevice); - - END_INTERFACE - } IMMDeviceCollectionVtbl; - - interface IMMDeviceCollection - { - CONST_VTBL struct IMMDeviceCollectionVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IMMDeviceCollection_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IMMDeviceCollection_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IMMDeviceCollection_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IMMDeviceCollection_GetCount(This,pcDevices) \ - ( (This)->lpVtbl -> GetCount(This,pcDevices) ) - -#define IMMDeviceCollection_Item(This,nDevice,ppDevice) \ - ( (This)->lpVtbl -> Item(This,nDevice,ppDevice) ) - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IMMDeviceCollection_INTERFACE_DEFINED__ */ - - -#ifndef __IMMEndpoint_INTERFACE_DEFINED__ -#define __IMMEndpoint_INTERFACE_DEFINED__ - -/* interface IMMEndpoint */ -/* [unique][helpstring][nonextensible][uuid][local][object] */ - - -EXTERN_C const IID IID_IMMEndpoint; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("1BE09788-6894-4089-8586-9A2A6C265AC5") - IMMEndpoint : public IUnknown - { - public: - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetDataFlow( - /* [out] */ - __out EDataFlow *pDataFlow) = 0; - - }; - -#else /* C style interface */ - - typedef struct IMMEndpointVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IMMEndpoint * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IMMEndpoint * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IMMEndpoint * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetDataFlow )( - IMMEndpoint * This, - /* [out] */ - __out EDataFlow *pDataFlow); - - END_INTERFACE - } IMMEndpointVtbl; - - interface IMMEndpoint - { - CONST_VTBL struct IMMEndpointVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IMMEndpoint_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IMMEndpoint_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IMMEndpoint_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IMMEndpoint_GetDataFlow(This,pDataFlow) \ - ( (This)->lpVtbl -> GetDataFlow(This,pDataFlow) ) - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IMMEndpoint_INTERFACE_DEFINED__ */ - - -#ifndef __IMMDeviceEnumerator_INTERFACE_DEFINED__ -#define __IMMDeviceEnumerator_INTERFACE_DEFINED__ - -/* interface IMMDeviceEnumerator */ -/* [unique][helpstring][nonextensible][uuid][local][object] */ - - -EXTERN_C const IID IID_IMMDeviceEnumerator; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("A95664D2-9614-4F35-A746-DE8DB63617E6") - IMMDeviceEnumerator : public IUnknown - { - public: - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE EnumAudioEndpoints( - /* [in] */ - __in EDataFlow dataFlow, - /* [in] */ - __in DWORD dwStateMask, - /* [out] */ - __out IMMDeviceCollection **ppDevices) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetDefaultAudioEndpoint( - /* [in] */ - __in EDataFlow dataFlow, - /* [in] */ - __in ERole role, - /* [out] */ - __out IMMDevice **ppEndpoint) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE GetDevice( - /* */ - __in LPCWSTR pwstrId, - /* [out] */ - __out IMMDevice **ppDevice) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE RegisterEndpointNotificationCallback( - /* [in] */ - __in IMMNotificationClient *pClient) = 0; - - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE UnregisterEndpointNotificationCallback( - /* [in] */ - __in IMMNotificationClient *pClient) = 0; - - }; - -#else /* C style interface */ - - typedef struct IMMDeviceEnumeratorVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IMMDeviceEnumerator * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IMMDeviceEnumerator * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IMMDeviceEnumerator * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *EnumAudioEndpoints )( - IMMDeviceEnumerator * This, - /* [in] */ - __in EDataFlow dataFlow, - /* [in] */ - __in DWORD dwStateMask, - /* [out] */ - __out IMMDeviceCollection **ppDevices); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetDefaultAudioEndpoint )( - IMMDeviceEnumerator * This, - /* [in] */ - __in EDataFlow dataFlow, - /* [in] */ - __in ERole role, - /* [out] */ - __out IMMDevice **ppEndpoint); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *GetDevice )( - IMMDeviceEnumerator * This, - /* */ - __in LPCWSTR pwstrId, - /* [out] */ - __out IMMDevice **ppDevice); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *RegisterEndpointNotificationCallback )( - IMMDeviceEnumerator * This, - /* [in] */ - __in IMMNotificationClient *pClient); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *UnregisterEndpointNotificationCallback )( - IMMDeviceEnumerator * This, - /* [in] */ - __in IMMNotificationClient *pClient); - - END_INTERFACE - } IMMDeviceEnumeratorVtbl; - - interface IMMDeviceEnumerator - { - CONST_VTBL struct IMMDeviceEnumeratorVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IMMDeviceEnumerator_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IMMDeviceEnumerator_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IMMDeviceEnumerator_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IMMDeviceEnumerator_EnumAudioEndpoints(This,dataFlow,dwStateMask,ppDevices) \ - ( (This)->lpVtbl -> EnumAudioEndpoints(This,dataFlow,dwStateMask,ppDevices) ) - -#define IMMDeviceEnumerator_GetDefaultAudioEndpoint(This,dataFlow,role,ppEndpoint) \ - ( (This)->lpVtbl -> GetDefaultAudioEndpoint(This,dataFlow,role,ppEndpoint) ) - -#define IMMDeviceEnumerator_GetDevice(This,pwstrId,ppDevice) \ - ( (This)->lpVtbl -> GetDevice(This,pwstrId,ppDevice) ) - -#define IMMDeviceEnumerator_RegisterEndpointNotificationCallback(This,pClient) \ - ( (This)->lpVtbl -> RegisterEndpointNotificationCallback(This,pClient) ) - -#define IMMDeviceEnumerator_UnregisterEndpointNotificationCallback(This,pClient) \ - ( (This)->lpVtbl -> UnregisterEndpointNotificationCallback(This,pClient) ) - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IMMDeviceEnumerator_INTERFACE_DEFINED__ */ - - -#ifndef __IMMDeviceActivator_INTERFACE_DEFINED__ -#define __IMMDeviceActivator_INTERFACE_DEFINED__ - -/* interface IMMDeviceActivator */ -/* [unique][helpstring][nonextensible][uuid][local][object] */ - - -EXTERN_C const IID IID_IMMDeviceActivator; - -#if defined(__cplusplus) && !defined(CINTERFACE) - - MIDL_INTERFACE("3B0D0EA4-D0A9-4B0E-935B-09516746FAC0") - IMMDeviceActivator : public IUnknown - { - public: - virtual /* [helpstring][id] */ HRESULT STDMETHODCALLTYPE Activate( - /* [in] */ - __in REFIID iid, - /* [in] */ - __in IMMDevice *pDevice, - /* [in] */ - __in_opt PROPVARIANT *pActivationParams, - /* [iid_is][out] */ - __out void **ppInterface) = 0; - - }; - -#else /* C style interface */ - - typedef struct IMMDeviceActivatorVtbl - { - BEGIN_INTERFACE - - HRESULT ( STDMETHODCALLTYPE *QueryInterface )( - IMMDeviceActivator * This, - /* [in] */ REFIID riid, - /* [iid_is][out] */ - __RPC__deref_out void **ppvObject); - - ULONG ( STDMETHODCALLTYPE *AddRef )( - IMMDeviceActivator * This); - - ULONG ( STDMETHODCALLTYPE *Release )( - IMMDeviceActivator * This); - - /* [helpstring][id] */ HRESULT ( STDMETHODCALLTYPE *Activate )( - IMMDeviceActivator * This, - /* [in] */ - __in REFIID iid, - /* [in] */ - __in IMMDevice *pDevice, - /* [in] */ - __in_opt PROPVARIANT *pActivationParams, - /* [iid_is][out] */ - __out void **ppInterface); - - END_INTERFACE - } IMMDeviceActivatorVtbl; - - interface IMMDeviceActivator - { - CONST_VTBL struct IMMDeviceActivatorVtbl *lpVtbl; - }; - - - -#ifdef COBJMACROS - - -#define IMMDeviceActivator_QueryInterface(This,riid,ppvObject) \ - ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) - -#define IMMDeviceActivator_AddRef(This) \ - ( (This)->lpVtbl -> AddRef(This) ) - -#define IMMDeviceActivator_Release(This) \ - ( (This)->lpVtbl -> Release(This) ) - - -#define IMMDeviceActivator_Activate(This,iid,pDevice,pActivationParams,ppInterface) \ - ( (This)->lpVtbl -> Activate(This,iid,pDevice,pActivationParams,ppInterface) ) - -#endif /* COBJMACROS */ - - -#endif /* C style interface */ - - - - -#endif /* __IMMDeviceActivator_INTERFACE_DEFINED__ */ - - -/* interface __MIDL_itf_mmdeviceapi_0000_0006 */ -/* [local] */ - -typedef /* [public] */ struct __MIDL___MIDL_itf_mmdeviceapi_0000_0006_0001 - { - LPARAM AddPageParam; - IMMDevice *pEndpoint; - IMMDevice *pPnpInterface; - IMMDevice *pPnpDevnode; - } AudioExtensionParams; - - - -extern RPC_IF_HANDLE __MIDL_itf_mmdeviceapi_0000_0006_v0_0_c_ifspec; -extern RPC_IF_HANDLE __MIDL_itf_mmdeviceapi_0000_0006_v0_0_s_ifspec; - - -#ifndef __MMDeviceAPILib_LIBRARY_DEFINED__ -#define __MMDeviceAPILib_LIBRARY_DEFINED__ - -/* library MMDeviceAPILib */ -/* [helpstring][version][uuid] */ - - -EXTERN_C const IID LIBID_MMDeviceAPILib; - -EXTERN_C const CLSID CLSID_MMDeviceEnumerator; - -#ifdef __cplusplus - -class DECLSPEC_UUID("BCDE0395-E52F-467C-8E3D-C4579291692E") -MMDeviceEnumerator; -#endif -#endif /* __MMDeviceAPILib_LIBRARY_DEFINED__ */ - -/* Additional Prototypes for ALL interfaces */ - -/* end of Additional Prototypes */ - -#ifdef __cplusplus -} -#endif - -#endif - - - diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-61301ee7.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-61301ee7.js deleted file mode 100644 index 304d09201d7417f4265350da472c6d701d69a417..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-61301ee7.js +++ /dev/null @@ -1,6 +0,0 @@ -import{d as z}from"./dsv-576afacd.js";var G=z(","),H=G.parseRows,I=z(" "),J=I.parseRows;const{SvelteComponent:K,append:p,attr:b,destroy_each:F,detach:u,element:m,empty:L,ensure_array_like:k,init:M,insert:d,listen:w,noop:j,run_all:O,safe_not_equal:Q,set_data:N,space:S,text:Z,toggle_class:a}=window.__gradio__svelte__internal;function R(f,e,l){const n=f.slice();return n[11]=e[l],n[13]=l,n}function A(f,e,l){const n=f.slice();return n[14]=e[l],n[16]=l,n}function E(f){let e,l,n;function t(r,s){return typeof r[6]=="string"?U:T}let c=t(f),i=c(f);return{c(){e=m("div"),i.c(),b(e,"class","svelte-1cib1xd"),a(e,"table",f[1]==="table"),a(e,"gallery",f[1]==="gallery"),a(e,"selected",f[2])},m(r,s){d(r,e,s),i.m(e,null),l||(n=[w(e,"mouseenter",f[9]),w(e,"mouseleave",f[10])],l=!0)},p(r,s){c===(c=t(r))&&i?i.p(r,s):(i.d(1),i=c(r),i&&(i.c(),i.m(e,null))),s&2&&a(e,"table",r[1]==="table"),s&2&&a(e,"gallery",r[1]==="gallery"),s&4&&a(e,"selected",r[2])},d(r){r&&u(e),i.d(),l=!1,O(n)}}}function T(f){let e,l,n=k(f[6].slice(0,3)),t=[];for(let i=0;i3&&q(f);return{c(){e=m("table");for(let i=0;i3?c?c.p(i,r):(c=q(i),c.c(),c.m(e,null)):c&&(c.d(1),c=null)},d(i){i&&u(e),F(t,i),c&&c.d()}}}function U(f){let e;return{c(){e=Z(f[6])},m(l,n){d(l,e,n)},p(l,n){n&64&&N(e,l[6])},d(l){l&&u(e)}}}function C(f){let e,l=f[14]+"",n;return{c(){e=m("td"),n=Z(l),b(e,"class","svelte-1cib1xd")},m(t,c){d(t,e,c),p(e,n)},p(t,c){c&64&&l!==(l=t[14]+"")&&N(n,l)},d(t){t&&u(e)}}}function P(f){let e;return{c(){e=m("td"),e.textContent="…",b(e,"class","svelte-1cib1xd")},m(l,n){d(l,e,n)},d(l){l&&u(e)}}}function W(f){let e,l,n=k(f[11].slice(0,3)),t=[];for(let i=0;i3&&P();return{c(){e=m("tr");for(let i=0;i3?c||(c=P(),c.c(),c.m(e,null)):c&&(c.d(1),c=null)},d(i){i&&u(e),F(t,i),c&&c.d()}}}function q(f){let e;return{c(){e=m("div"),b(e,"class","overlay svelte-1cib1xd"),a(e,"odd",f[3]%2!=0),a(e,"even",f[3]%2==0),a(e,"button",f[1]==="gallery")},m(l,n){d(l,e,n)},p(l,n){n&8&&a(e,"odd",l[3]%2!=0),n&8&&a(e,"even",l[3]%2==0),n&2&&a(e,"button",l[1]==="gallery")},d(l){l&&u(e)}}}function V(f){let e,l=f[4]&&E(f);return{c(){l&&l.c(),e=L()},m(n,t){l&&l.m(n,t),d(n,e,t)},p(n,[t]){n[4]?l?l.p(n,t):(l=E(n),l.c(),l.m(e.parentNode,e)):l&&(l.d(1),l=null)},i:j,o:j,d(n){n&&u(e),l&&l.d(n)}}}function X(f,e,l){let{gradio:n}=e,{value:t}=e,{samples_dir:c}=e,{type:i}=e,{selected:r=!1}=e,{index:s}=e,_=!1,h=t,v=Array.isArray(h);const B=()=>l(5,_=!0),D=()=>l(5,_=!1);return f.$$set=o=>{"gradio"in o&&l(7,n=o.gradio),"value"in o&&l(0,t=o.value),"samples_dir"in o&&l(8,c=o.samples_dir),"type"in o&&l(1,i=o.type),"selected"in o&&l(2,r=o.selected),"index"in o&&l(3,s=o.index)},f.$$.update=()=>{f.$$.dirty&401&&!v&&typeof t=="string"&&/\.[a-zA-Z]+$/.test(t)&&fetch(c+t).then(o=>o.text()).then(o=>{try{if(t.endsWith("csv")){const g=o.split(` -`).slice(0,4).map(y=>y.split(",").slice(0,4).join(",")).join(` -`);l(6,h=H(g))}else if(t.endsWith("tsv")){const g=o.split(` -`).slice(0,4).map(y=>y.split(" ").slice(0,4).join(" ")).join(` -`);l(6,h=J(g))}else throw new Error(n.i18n("dataframe.incorrect_format"));l(4,v=!0)}catch(g){console.error(g)}}).catch(o=>{l(6,h=t),l(4,v=!0)})},[t,i,r,s,v,_,h,n,c,B,D]}class x extends K{constructor(e){super(),M(this,e,X,V,Q,{gradio:7,value:0,samples_dir:8,type:1,selected:2,index:3})}}export{x as default}; -//# sourceMappingURL=Example-61301ee7.js.map diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/httpx/_multipart.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/httpx/_multipart.py deleted file mode 100644 index 446f4ad2df3eb0b566e11c9aab9bbfc4875edfba..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/httpx/_multipart.py +++ /dev/null @@ -1,267 +0,0 @@ -import binascii -import io -import os -import typing -from pathlib import Path - -from ._types import ( - AsyncByteStream, - FileContent, - FileTypes, - RequestData, - RequestFiles, - SyncByteStream, -) -from ._utils import ( - format_form_param, - guess_content_type, - peek_filelike_length, - primitive_value_to_str, - to_bytes, -) - - -def get_multipart_boundary_from_content_type( - content_type: typing.Optional[bytes], -) -> typing.Optional[bytes]: - if not content_type or not content_type.startswith(b"multipart/form-data"): - return None - # parse boundary according to - # https://www.rfc-editor.org/rfc/rfc2046#section-5.1.1 - if b";" in content_type: - for section in content_type.split(b";"): - if section.strip().lower().startswith(b"boundary="): - return section.strip()[len(b"boundary=") :].strip(b'"') - return None - - -class DataField: - """ - A single form field item, within a multipart form field. - """ - - def __init__( - self, name: str, value: typing.Union[str, bytes, int, float, None] - ) -> None: - if not isinstance(name, str): - raise TypeError( - f"Invalid type for name. Expected str, got {type(name)}: {name!r}" - ) - if value is not None and not isinstance(value, (str, bytes, int, float)): - raise TypeError( - f"Invalid type for value. Expected primitive type, got {type(value)}: {value!r}" - ) - self.name = name - self.value: typing.Union[str, bytes] = ( - value if isinstance(value, bytes) else primitive_value_to_str(value) - ) - - def render_headers(self) -> bytes: - if not hasattr(self, "_headers"): - name = format_form_param("name", self.name) - self._headers = b"".join( - [b"Content-Disposition: form-data; ", name, b"\r\n\r\n"] - ) - - return self._headers - - def render_data(self) -> bytes: - if not hasattr(self, "_data"): - self._data = to_bytes(self.value) - - return self._data - - def get_length(self) -> int: - headers = self.render_headers() - data = self.render_data() - return len(headers) + len(data) - - def render(self) -> typing.Iterator[bytes]: - yield self.render_headers() - yield self.render_data() - - -class FileField: - """ - A single file field item, within a multipart form field. - """ - - CHUNK_SIZE = 64 * 1024 - - def __init__(self, name: str, value: FileTypes) -> None: - self.name = name - - fileobj: FileContent - - headers: typing.Dict[str, str] = {} - content_type: typing.Optional[str] = None - - # This large tuple based API largely mirror's requests' API - # It would be good to think of better APIs for this that we could include in httpx 2.0 - # since variable length tuples (especially of 4 elements) are quite unwieldly - if isinstance(value, tuple): - if len(value) == 2: - # neither the 3rd parameter (content_type) nor the 4th (headers) was included - filename, fileobj = value # type: ignore - elif len(value) == 3: - filename, fileobj, content_type = value # type: ignore - else: - # all 4 parameters included - filename, fileobj, content_type, headers = value # type: ignore - else: - filename = Path(str(getattr(value, "name", "upload"))).name - fileobj = value - - if content_type is None: - content_type = guess_content_type(filename) - - has_content_type_header = any("content-type" in key.lower() for key in headers) - if content_type is not None and not has_content_type_header: - # note that unlike requests, we ignore the content_type - # provided in the 3rd tuple element if it is also included in the headers - # requests does the opposite (it overwrites the header with the 3rd tuple element) - headers["Content-Type"] = content_type - - if isinstance(fileobj, io.StringIO): - raise TypeError( - "Multipart file uploads require 'io.BytesIO', not 'io.StringIO'." - ) - if isinstance(fileobj, io.TextIOBase): - raise TypeError( - "Multipart file uploads must be opened in binary mode, not text mode." - ) - - self.filename = filename - self.file = fileobj - self.headers = headers - - def get_length(self) -> typing.Optional[int]: - headers = self.render_headers() - - if isinstance(self.file, (str, bytes)): - return len(headers) + len(to_bytes(self.file)) - - file_length = peek_filelike_length(self.file) - - # If we can't determine the filesize without reading it into memory, - # then return `None` here, to indicate an unknown file length. - if file_length is None: - return None - - return len(headers) + file_length - - def render_headers(self) -> bytes: - if not hasattr(self, "_headers"): - parts = [ - b"Content-Disposition: form-data; ", - format_form_param("name", self.name), - ] - if self.filename: - filename = format_form_param("filename", self.filename) - parts.extend([b"; ", filename]) - for header_name, header_value in self.headers.items(): - key, val = f"\r\n{header_name}: ".encode(), header_value.encode() - parts.extend([key, val]) - parts.append(b"\r\n\r\n") - self._headers = b"".join(parts) - - return self._headers - - def render_data(self) -> typing.Iterator[bytes]: - if isinstance(self.file, (str, bytes)): - yield to_bytes(self.file) - return - - if hasattr(self.file, "seek"): - try: - self.file.seek(0) - except io.UnsupportedOperation: - pass - - chunk = self.file.read(self.CHUNK_SIZE) - while chunk: - yield to_bytes(chunk) - chunk = self.file.read(self.CHUNK_SIZE) - - def render(self) -> typing.Iterator[bytes]: - yield self.render_headers() - yield from self.render_data() - - -class MultipartStream(SyncByteStream, AsyncByteStream): - """ - Request content as streaming multipart encoded form data. - """ - - def __init__( - self, - data: RequestData, - files: RequestFiles, - boundary: typing.Optional[bytes] = None, - ) -> None: - if boundary is None: - boundary = binascii.hexlify(os.urandom(16)) - - self.boundary = boundary - self.content_type = "multipart/form-data; boundary=%s" % boundary.decode( - "ascii" - ) - self.fields = list(self._iter_fields(data, files)) - - def _iter_fields( - self, data: RequestData, files: RequestFiles - ) -> typing.Iterator[typing.Union[FileField, DataField]]: - for name, value in data.items(): - if isinstance(value, (tuple, list)): - for item in value: - yield DataField(name=name, value=item) - else: - yield DataField(name=name, value=value) - - file_items = files.items() if isinstance(files, typing.Mapping) else files - for name, value in file_items: - yield FileField(name=name, value=value) - - def iter_chunks(self) -> typing.Iterator[bytes]: - for field in self.fields: - yield b"--%s\r\n" % self.boundary - yield from field.render() - yield b"\r\n" - yield b"--%s--\r\n" % self.boundary - - def get_content_length(self) -> typing.Optional[int]: - """ - Return the length of the multipart encoded content, or `None` if - any of the files have a length that cannot be determined upfront. - """ - boundary_length = len(self.boundary) - length = 0 - - for field in self.fields: - field_length = field.get_length() - if field_length is None: - return None - - length += 2 + boundary_length + 2 # b"--{boundary}\r\n" - length += field_length - length += 2 # b"\r\n" - - length += 2 + boundary_length + 4 # b"--{boundary}--\r\n" - return length - - # Content stream interface. - - def get_headers(self) -> typing.Dict[str, str]: - content_length = self.get_content_length() - content_type = self.content_type - if content_length is None: - return {"Transfer-Encoding": "chunked", "Content-Type": content_type} - return {"Content-Length": str(content_length), "Content-Type": content_type} - - def __iter__(self) -> typing.Iterator[bytes]: - for chunk in self.iter_chunks(): - yield chunk - - async def __aiter__(self) -> typing.AsyncIterator[bytes]: - for chunk in self.iter_chunks(): - yield chunk diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/_mathtext_data.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/_mathtext_data.py deleted file mode 100644 index baaee1b876da0abc5f69b55191446627c2c47cac..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/_mathtext_data.py +++ /dev/null @@ -1,1311 +0,0 @@ -""" -font data tables for truetype and afm computer modern fonts -""" - -from __future__ import annotations - - -latex_to_bakoma = { - '\\__sqrt__' : ('cmex10', 0x70), - '\\bigcap' : ('cmex10', 0x5c), - '\\bigcup' : ('cmex10', 0x5b), - '\\bigodot' : ('cmex10', 0x4b), - '\\bigoplus' : ('cmex10', 0x4d), - '\\bigotimes' : ('cmex10', 0x4f), - '\\biguplus' : ('cmex10', 0x5d), - '\\bigvee' : ('cmex10', 0x5f), - '\\bigwedge' : ('cmex10', 0x5e), - '\\coprod' : ('cmex10', 0x61), - '\\int' : ('cmex10', 0x5a), - '\\langle' : ('cmex10', 0xad), - '\\leftangle' : ('cmex10', 0xad), - '\\leftbrace' : ('cmex10', 0xa9), - '\\oint' : ('cmex10', 0x49), - '\\prod' : ('cmex10', 0x59), - '\\rangle' : ('cmex10', 0xae), - '\\rightangle' : ('cmex10', 0xae), - '\\rightbrace' : ('cmex10', 0xaa), - '\\sum' : ('cmex10', 0x58), - '\\widehat' : ('cmex10', 0x62), - '\\widetilde' : ('cmex10', 0x65), - '\\{' : ('cmex10', 0xa9), - '\\}' : ('cmex10', 0xaa), - '{' : ('cmex10', 0xa9), - '}' : ('cmex10', 0xaa), - - ',' : ('cmmi10', 0x3b), - '.' : ('cmmi10', 0x3a), - '/' : ('cmmi10', 0x3d), - '<' : ('cmmi10', 0x3c), - '>' : ('cmmi10', 0x3e), - '\\alpha' : ('cmmi10', 0xae), - '\\beta' : ('cmmi10', 0xaf), - '\\chi' : ('cmmi10', 0xc2), - '\\combiningrightarrowabove' : ('cmmi10', 0x7e), - '\\delta' : ('cmmi10', 0xb1), - '\\ell' : ('cmmi10', 0x60), - '\\epsilon' : ('cmmi10', 0xb2), - '\\eta' : ('cmmi10', 0xb4), - '\\flat' : ('cmmi10', 0x5b), - '\\frown' : ('cmmi10', 0x5f), - '\\gamma' : ('cmmi10', 0xb0), - '\\imath' : ('cmmi10', 0x7b), - '\\iota' : ('cmmi10', 0xb6), - '\\jmath' : ('cmmi10', 0x7c), - '\\kappa' : ('cmmi10', 0x2219), - '\\lambda' : ('cmmi10', 0xb8), - '\\leftharpoondown' : ('cmmi10', 0x29), - '\\leftharpoonup' : ('cmmi10', 0x28), - '\\mu' : ('cmmi10', 0xb9), - '\\natural' : ('cmmi10', 0x5c), - '\\nu' : ('cmmi10', 0xba), - '\\omega' : ('cmmi10', 0x21), - '\\phi' : ('cmmi10', 0xc1), - '\\pi' : ('cmmi10', 0xbc), - '\\psi' : ('cmmi10', 0xc3), - '\\rho' : ('cmmi10', 0xbd), - '\\rightharpoondown' : ('cmmi10', 0x2b), - '\\rightharpoonup' : ('cmmi10', 0x2a), - '\\sharp' : ('cmmi10', 0x5d), - '\\sigma' : ('cmmi10', 0xbe), - '\\smile' : ('cmmi10', 0x5e), - '\\tau' : ('cmmi10', 0xbf), - '\\theta' : ('cmmi10', 0xb5), - '\\triangleleft' : ('cmmi10', 0x2f), - '\\triangleright' : ('cmmi10', 0x2e), - '\\upsilon' : ('cmmi10', 0xc0), - '\\varepsilon' : ('cmmi10', 0x22), - '\\varphi' : ('cmmi10', 0x27), - '\\varrho' : ('cmmi10', 0x25), - '\\varsigma' : ('cmmi10', 0x26), - '\\vartheta' : ('cmmi10', 0x23), - '\\wp' : ('cmmi10', 0x7d), - '\\xi' : ('cmmi10', 0xbb), - '\\zeta' : ('cmmi10', 0xb3), - - '!' : ('cmr10', 0x21), - '%' : ('cmr10', 0x25), - '&' : ('cmr10', 0x26), - '(' : ('cmr10', 0x28), - ')' : ('cmr10', 0x29), - '+' : ('cmr10', 0x2b), - '0' : ('cmr10', 0x30), - '1' : ('cmr10', 0x31), - '2' : ('cmr10', 0x32), - '3' : ('cmr10', 0x33), - '4' : ('cmr10', 0x34), - '5' : ('cmr10', 0x35), - '6' : ('cmr10', 0x36), - '7' : ('cmr10', 0x37), - '8' : ('cmr10', 0x38), - '9' : ('cmr10', 0x39), - ':' : ('cmr10', 0x3a), - ';' : ('cmr10', 0x3b), - '=' : ('cmr10', 0x3d), - '?' : ('cmr10', 0x3f), - '@' : ('cmr10', 0x40), - '[' : ('cmr10', 0x5b), - '\\#' : ('cmr10', 0x23), - '\\$' : ('cmr10', 0x24), - '\\%' : ('cmr10', 0x25), - '\\Delta' : ('cmr10', 0xa2), - '\\Gamma' : ('cmr10', 0xa1), - '\\Lambda' : ('cmr10', 0xa4), - '\\Omega' : ('cmr10', 0xad), - '\\Phi' : ('cmr10', 0xa9), - '\\Pi' : ('cmr10', 0xa6), - '\\Psi' : ('cmr10', 0xaa), - '\\Sigma' : ('cmr10', 0xa7), - '\\Theta' : ('cmr10', 0xa3), - '\\Upsilon' : ('cmr10', 0xa8), - '\\Xi' : ('cmr10', 0xa5), - '\\circumflexaccent' : ('cmr10', 0x5e), - '\\combiningacuteaccent' : ('cmr10', 0xb6), - '\\combiningbreve' : ('cmr10', 0xb8), - '\\combiningdiaeresis' : ('cmr10', 0xc4), - '\\combiningdotabove' : ('cmr10', 0x5f), - '\\combininggraveaccent' : ('cmr10', 0xb5), - '\\combiningoverline' : ('cmr10', 0xb9), - '\\combiningtilde' : ('cmr10', 0x7e), - '\\leftbracket' : ('cmr10', 0x5b), - '\\leftparen' : ('cmr10', 0x28), - '\\rightbracket' : ('cmr10', 0x5d), - '\\rightparen' : ('cmr10', 0x29), - '\\widebar' : ('cmr10', 0xb9), - ']' : ('cmr10', 0x5d), - - '*' : ('cmsy10', 0xa4), - '\N{MINUS SIGN}' : ('cmsy10', 0xa1), - '\\Downarrow' : ('cmsy10', 0x2b), - '\\Im' : ('cmsy10', 0x3d), - '\\Leftarrow' : ('cmsy10', 0x28), - '\\Leftrightarrow' : ('cmsy10', 0x2c), - '\\P' : ('cmsy10', 0x7b), - '\\Re' : ('cmsy10', 0x3c), - '\\Rightarrow' : ('cmsy10', 0x29), - '\\S' : ('cmsy10', 0x78), - '\\Uparrow' : ('cmsy10', 0x2a), - '\\Updownarrow' : ('cmsy10', 0x6d), - '\\Vert' : ('cmsy10', 0x6b), - '\\aleph' : ('cmsy10', 0x40), - '\\approx' : ('cmsy10', 0xbc), - '\\ast' : ('cmsy10', 0xa4), - '\\asymp' : ('cmsy10', 0xb3), - '\\backslash' : ('cmsy10', 0x6e), - '\\bigcirc' : ('cmsy10', 0xb0), - '\\bigtriangledown' : ('cmsy10', 0x35), - '\\bigtriangleup' : ('cmsy10', 0x34), - '\\bot' : ('cmsy10', 0x3f), - '\\bullet' : ('cmsy10', 0xb2), - '\\cap' : ('cmsy10', 0x5c), - '\\cdot' : ('cmsy10', 0xa2), - '\\circ' : ('cmsy10', 0xb1), - '\\clubsuit' : ('cmsy10', 0x7c), - '\\cup' : ('cmsy10', 0x5b), - '\\dag' : ('cmsy10', 0x79), - '\\dashv' : ('cmsy10', 0x61), - '\\ddag' : ('cmsy10', 0x7a), - '\\diamond' : ('cmsy10', 0xa6), - '\\diamondsuit' : ('cmsy10', 0x7d), - '\\div' : ('cmsy10', 0xa5), - '\\downarrow' : ('cmsy10', 0x23), - '\\emptyset' : ('cmsy10', 0x3b), - '\\equiv' : ('cmsy10', 0xb4), - '\\exists' : ('cmsy10', 0x39), - '\\forall' : ('cmsy10', 0x38), - '\\geq' : ('cmsy10', 0xb8), - '\\gg' : ('cmsy10', 0xc0), - '\\heartsuit' : ('cmsy10', 0x7e), - '\\in' : ('cmsy10', 0x32), - '\\infty' : ('cmsy10', 0x31), - '\\lbrace' : ('cmsy10', 0x66), - '\\lceil' : ('cmsy10', 0x64), - '\\leftarrow' : ('cmsy10', 0xc3), - '\\leftrightarrow' : ('cmsy10', 0x24), - '\\leq' : ('cmsy10', 0x2219), - '\\lfloor' : ('cmsy10', 0x62), - '\\ll' : ('cmsy10', 0xbf), - '\\mid' : ('cmsy10', 0x6a), - '\\mp' : ('cmsy10', 0xa8), - '\\nabla' : ('cmsy10', 0x72), - '\\nearrow' : ('cmsy10', 0x25), - '\\neg' : ('cmsy10', 0x3a), - '\\ni' : ('cmsy10', 0x33), - '\\nwarrow' : ('cmsy10', 0x2d), - '\\odot' : ('cmsy10', 0xaf), - '\\ominus' : ('cmsy10', 0xaa), - '\\oplus' : ('cmsy10', 0xa9), - '\\oslash' : ('cmsy10', 0xae), - '\\otimes' : ('cmsy10', 0xad), - '\\pm' : ('cmsy10', 0xa7), - '\\prec' : ('cmsy10', 0xc1), - '\\preceq' : ('cmsy10', 0xb9), - '\\prime' : ('cmsy10', 0x30), - '\\propto' : ('cmsy10', 0x2f), - '\\rbrace' : ('cmsy10', 0x67), - '\\rceil' : ('cmsy10', 0x65), - '\\rfloor' : ('cmsy10', 0x63), - '\\rightarrow' : ('cmsy10', 0x21), - '\\searrow' : ('cmsy10', 0x26), - '\\sim' : ('cmsy10', 0xbb), - '\\simeq' : ('cmsy10', 0x27), - '\\slash' : ('cmsy10', 0x36), - '\\spadesuit' : ('cmsy10', 0xc4), - '\\sqcap' : ('cmsy10', 0x75), - '\\sqcup' : ('cmsy10', 0x74), - '\\sqsubseteq' : ('cmsy10', 0x76), - '\\sqsupseteq' : ('cmsy10', 0x77), - '\\subset' : ('cmsy10', 0xbd), - '\\subseteq' : ('cmsy10', 0xb5), - '\\succ' : ('cmsy10', 0xc2), - '\\succeq' : ('cmsy10', 0xba), - '\\supset' : ('cmsy10', 0xbe), - '\\supseteq' : ('cmsy10', 0xb6), - '\\swarrow' : ('cmsy10', 0x2e), - '\\times' : ('cmsy10', 0xa3), - '\\to' : ('cmsy10', 0x21), - '\\top' : ('cmsy10', 0x3e), - '\\uparrow' : ('cmsy10', 0x22), - '\\updownarrow' : ('cmsy10', 0x6c), - '\\uplus' : ('cmsy10', 0x5d), - '\\vdash' : ('cmsy10', 0x60), - '\\vee' : ('cmsy10', 0x5f), - '\\vert' : ('cmsy10', 0x6a), - '\\wedge' : ('cmsy10', 0x5e), - '\\wr' : ('cmsy10', 0x6f), - '\\|' : ('cmsy10', 0x6b), - '|' : ('cmsy10', 0x6a), - - '\\_' : ('cmtt10', 0x5f) -} - -# Automatically generated. - -type12uni = { - 'aring' : 229, - 'quotedblright' : 8221, - 'V' : 86, - 'dollar' : 36, - 'four' : 52, - 'Yacute' : 221, - 'P' : 80, - 'underscore' : 95, - 'p' : 112, - 'Otilde' : 213, - 'perthousand' : 8240, - 'zero' : 48, - 'dotlessi' : 305, - 'Scaron' : 352, - 'zcaron' : 382, - 'egrave' : 232, - 'section' : 167, - 'Icircumflex' : 206, - 'ntilde' : 241, - 'ampersand' : 38, - 'dotaccent' : 729, - 'degree' : 176, - 'K' : 75, - 'acircumflex' : 226, - 'Aring' : 197, - 'k' : 107, - 'smalltilde' : 732, - 'Agrave' : 192, - 'divide' : 247, - 'ocircumflex' : 244, - 'asciitilde' : 126, - 'two' : 50, - 'E' : 69, - 'scaron' : 353, - 'F' : 70, - 'bracketleft' : 91, - 'asciicircum' : 94, - 'f' : 102, - 'ordmasculine' : 186, - 'mu' : 181, - 'paragraph' : 182, - 'nine' : 57, - 'v' : 118, - 'guilsinglleft' : 8249, - 'backslash' : 92, - 'six' : 54, - 'A' : 65, - 'icircumflex' : 238, - 'a' : 97, - 'ogonek' : 731, - 'q' : 113, - 'oacute' : 243, - 'ograve' : 242, - 'edieresis' : 235, - 'comma' : 44, - 'otilde' : 245, - 'guillemotright' : 187, - 'ecircumflex' : 234, - 'greater' : 62, - 'uacute' : 250, - 'L' : 76, - 'bullet' : 8226, - 'cedilla' : 184, - 'ydieresis' : 255, - 'l' : 108, - 'logicalnot' : 172, - 'exclamdown' : 161, - 'endash' : 8211, - 'agrave' : 224, - 'Adieresis' : 196, - 'germandbls' : 223, - 'Odieresis' : 214, - 'space' : 32, - 'quoteright' : 8217, - 'ucircumflex' : 251, - 'G' : 71, - 'quoteleft' : 8216, - 'W' : 87, - 'Q' : 81, - 'g' : 103, - 'w' : 119, - 'question' : 63, - 'one' : 49, - 'ring' : 730, - 'figuredash' : 8210, - 'B' : 66, - 'iacute' : 237, - 'Ydieresis' : 376, - 'R' : 82, - 'b' : 98, - 'r' : 114, - 'Ccedilla' : 199, - 'minus' : 8722, - 'Lslash' : 321, - 'Uacute' : 218, - 'yacute' : 253, - 'Ucircumflex' : 219, - 'quotedbl' : 34, - 'onehalf' : 189, - 'Thorn' : 222, - 'M' : 77, - 'eight' : 56, - 'multiply' : 215, - 'grave' : 96, - 'Ocircumflex' : 212, - 'm' : 109, - 'Ugrave' : 217, - 'guilsinglright' : 8250, - 'Ntilde' : 209, - 'questiondown' : 191, - 'Atilde' : 195, - 'ccedilla' : 231, - 'Z' : 90, - 'copyright' : 169, - 'yen' : 165, - 'Eacute' : 201, - 'H' : 72, - 'X' : 88, - 'Idieresis' : 207, - 'bar' : 124, - 'h' : 104, - 'x' : 120, - 'udieresis' : 252, - 'ordfeminine' : 170, - 'braceleft' : 123, - 'macron' : 175, - 'atilde' : 227, - 'Acircumflex' : 194, - 'Oslash' : 216, - 'C' : 67, - 'quotedblleft' : 8220, - 'S' : 83, - 'exclam' : 33, - 'Zcaron' : 381, - 'equal' : 61, - 's' : 115, - 'eth' : 240, - 'Egrave' : 200, - 'hyphen' : 45, - 'period' : 46, - 'igrave' : 236, - 'colon' : 58, - 'Ecircumflex' : 202, - 'trademark' : 8482, - 'Aacute' : 193, - 'cent' : 162, - 'lslash' : 322, - 'c' : 99, - 'N' : 78, - 'breve' : 728, - 'Oacute' : 211, - 'guillemotleft' : 171, - 'n' : 110, - 'idieresis' : 239, - 'braceright' : 125, - 'seven' : 55, - 'brokenbar' : 166, - 'ugrave' : 249, - 'periodcentered' : 183, - 'sterling' : 163, - 'I' : 73, - 'Y' : 89, - 'Eth' : 208, - 'emdash' : 8212, - 'i' : 105, - 'daggerdbl' : 8225, - 'y' : 121, - 'plusminus' : 177, - 'less' : 60, - 'Udieresis' : 220, - 'D' : 68, - 'five' : 53, - 'T' : 84, - 'oslash' : 248, - 'acute' : 180, - 'd' : 100, - 'OE' : 338, - 'Igrave' : 204, - 't' : 116, - 'parenright' : 41, - 'adieresis' : 228, - 'quotesingle' : 39, - 'twodotenleader' : 8229, - 'slash' : 47, - 'ellipsis' : 8230, - 'numbersign' : 35, - 'odieresis' : 246, - 'O' : 79, - 'oe' : 339, - 'o' : 111, - 'Edieresis' : 203, - 'plus' : 43, - 'dagger' : 8224, - 'three' : 51, - 'hungarumlaut' : 733, - 'parenleft' : 40, - 'fraction' : 8260, - 'registered' : 174, - 'J' : 74, - 'dieresis' : 168, - 'Ograve' : 210, - 'j' : 106, - 'z' : 122, - 'ae' : 230, - 'semicolon' : 59, - 'at' : 64, - 'Iacute' : 205, - 'percent' : 37, - 'bracketright' : 93, - 'AE' : 198, - 'asterisk' : 42, - 'aacute' : 225, - 'U' : 85, - 'eacute' : 233, - 'e' : 101, - 'thorn' : 254, - 'u' : 117, -} - -uni2type1 = {v: k for k, v in type12uni.items()} - -# The script below is to sort and format the tex2uni dict - -## For decimal values: int(hex(v), 16) -# newtex = {k: hex(v) for k, v in tex2uni.items()} -# sd = dict(sorted(newtex.items(), key=lambda item: item[0])) -# -## For formatting the sorted dictionary with proper spacing -## the value '24' comes from finding the longest string in -## the newtex keys with len(max(newtex, key=len)) -# for key in sd: -# print("{0:24} : {1: color alpha > rcParams['grid.alpha'] - # Note: only resolve to rcParams if the color does not have alpha - # otherwise `grid(color=(1, 1, 1, 0.5))` would work like - # grid(color=(1, 1, 1, 0.5), alpha=rcParams['grid.alpha']) - # so the that the rcParams default would override color alpha. - grid_alpha = mpl.rcParams["grid.alpha"] - grid_kw = {k[5:]: v for k, v in kwargs.items()} - - self.tick1line = mlines.Line2D( - [], [], - color=color, linestyle="none", zorder=zorder, visible=tick1On, - markeredgecolor=color, markersize=size, markeredgewidth=width, - ) - self.tick2line = mlines.Line2D( - [], [], - color=color, linestyle="none", zorder=zorder, visible=tick2On, - markeredgecolor=color, markersize=size, markeredgewidth=width, - ) - self.gridline = mlines.Line2D( - [], [], - color=grid_color, alpha=grid_alpha, visible=gridOn, - linestyle=grid_linestyle, linewidth=grid_linewidth, marker="", - **grid_kw, - ) - self.gridline.get_path()._interpolation_steps = \ - GRIDLINE_INTERPOLATION_STEPS - self.label1 = mtext.Text( - np.nan, np.nan, - fontsize=labelsize, color=labelcolor, visible=label1On, - fontfamily=labelfontfamily, rotation=self._labelrotation[1]) - self.label2 = mtext.Text( - np.nan, np.nan, - fontsize=labelsize, color=labelcolor, visible=label2On, - fontfamily=labelfontfamily, rotation=self._labelrotation[1]) - - self._apply_tickdir(tickdir) - - for artist in [self.tick1line, self.tick2line, self.gridline, - self.label1, self.label2]: - self._set_artist_props(artist) - - self.update_position(loc) - - def _set_labelrotation(self, labelrotation): - if isinstance(labelrotation, str): - mode = labelrotation - angle = 0 - elif isinstance(labelrotation, (tuple, list)): - mode, angle = labelrotation - else: - mode = 'default' - angle = labelrotation - _api.check_in_list(['auto', 'default'], labelrotation=mode) - self._labelrotation = (mode, angle) - - def _apply_tickdir(self, tickdir): - """Set tick direction. Valid values are 'out', 'in', 'inout'.""" - # This method is responsible for updating `_pad`, and, in subclasses, - # for setting the tick{1,2}line markers as well. From the user - # perspective this should always be called though _apply_params, which - # further updates ticklabel positions using the new pads. - if tickdir is None: - tickdir = mpl.rcParams[f'{self.__name__}.direction'] - else: - _api.check_in_list(['in', 'out', 'inout'], tickdir=tickdir) - self._tickdir = tickdir - self._pad = self._base_pad + self.get_tick_padding() - - def get_tickdir(self): - return self._tickdir - - def get_tick_padding(self): - """Get the length of the tick outside of the Axes.""" - padding = { - 'in': 0.0, - 'inout': 0.5, - 'out': 1.0 - } - return self._size * padding[self._tickdir] - - def get_children(self): - children = [self.tick1line, self.tick2line, - self.gridline, self.label1, self.label2] - return children - - @_api.rename_parameter("3.8", "clippath", "path") - def set_clip_path(self, path, transform=None): - # docstring inherited - super().set_clip_path(path, transform) - self.gridline.set_clip_path(path, transform) - self.stale = True - - def contains(self, mouseevent): - """ - Test whether the mouse event occurred in the Tick marks. - - This function always returns false. It is more useful to test if the - axis as a whole contains the mouse rather than the set of tick marks. - """ - return False, {} - - def set_pad(self, val): - """ - Set the tick label pad in points - - Parameters - ---------- - val : float - """ - self._apply_params(pad=val) - self.stale = True - - def get_pad(self): - """Get the value of the tick label pad in points.""" - return self._base_pad - - def _get_text1(self): - """Get the default Text 1 instance.""" - - def _get_text2(self): - """Get the default Text 2 instance.""" - - def _get_tick1line(self): - """Get the default `.Line2D` instance for tick1.""" - - def _get_tick2line(self): - """Get the default `.Line2D` instance for tick2.""" - - def _get_gridline(self): - """Get the default grid `.Line2D` instance for this tick.""" - - def get_loc(self): - """Return the tick location (data coords) as a scalar.""" - return self._loc - - @martist.allow_rasterization - def draw(self, renderer): - if not self.get_visible(): - self.stale = False - return - renderer.open_group(self.__name__, gid=self.get_gid()) - for artist in [self.gridline, self.tick1line, self.tick2line, - self.label1, self.label2]: - artist.draw(renderer) - renderer.close_group(self.__name__) - self.stale = False - - @_api.deprecated("3.8") - def set_label1(self, s): - """ - Set the label1 text. - - Parameters - ---------- - s : str - """ - self.label1.set_text(s) - self.stale = True - - set_label = set_label1 - - @_api.deprecated("3.8") - def set_label2(self, s): - """ - Set the label2 text. - - Parameters - ---------- - s : str - """ - self.label2.set_text(s) - self.stale = True - - def set_url(self, url): - """ - Set the url of label1 and label2. - - Parameters - ---------- - url : str - """ - super().set_url(url) - self.label1.set_url(url) - self.label2.set_url(url) - self.stale = True - - def _set_artist_props(self, a): - a.set_figure(self.figure) - - def get_view_interval(self): - """ - Return the view limits ``(min, max)`` of the axis the tick belongs to. - """ - raise NotImplementedError('Derived must override') - - def _apply_params(self, **kwargs): - for name, target in [("gridOn", self.gridline), - ("tick1On", self.tick1line), - ("tick2On", self.tick2line), - ("label1On", self.label1), - ("label2On", self.label2)]: - if name in kwargs: - target.set_visible(kwargs.pop(name)) - if any(k in kwargs for k in ['size', 'width', 'pad', 'tickdir']): - self._size = kwargs.pop('size', self._size) - # Width could be handled outside this block, but it is - # convenient to leave it here. - self._width = kwargs.pop('width', self._width) - self._base_pad = kwargs.pop('pad', self._base_pad) - # _apply_tickdir uses _size and _base_pad to make _pad, and also - # sets the ticklines markers. - self._apply_tickdir(kwargs.pop('tickdir', self._tickdir)) - for line in (self.tick1line, self.tick2line): - line.set_markersize(self._size) - line.set_markeredgewidth(self._width) - # _get_text1_transform uses _pad from _apply_tickdir. - trans = self._get_text1_transform()[0] - self.label1.set_transform(trans) - trans = self._get_text2_transform()[0] - self.label2.set_transform(trans) - tick_kw = {k: v for k, v in kwargs.items() if k in ['color', 'zorder']} - if 'color' in kwargs: - tick_kw['markeredgecolor'] = kwargs['color'] - self.tick1line.set(**tick_kw) - self.tick2line.set(**tick_kw) - for k, v in tick_kw.items(): - setattr(self, '_' + k, v) - - if 'labelrotation' in kwargs: - self._set_labelrotation(kwargs.pop('labelrotation')) - self.label1.set(rotation=self._labelrotation[1]) - self.label2.set(rotation=self._labelrotation[1]) - - label_kw = {k[5:]: v for k, v in kwargs.items() - if k in ['labelsize', 'labelcolor', 'labelfontfamily']} - self.label1.set(**label_kw) - self.label2.set(**label_kw) - - grid_kw = {k[5:]: v for k, v in kwargs.items() - if k in _gridline_param_names} - self.gridline.set(**grid_kw) - - def update_position(self, loc): - """Set the location of tick in data coords with scalar *loc*.""" - raise NotImplementedError('Derived must override') - - def _get_text1_transform(self): - raise NotImplementedError('Derived must override') - - def _get_text2_transform(self): - raise NotImplementedError('Derived must override') - - -class XTick(Tick): - """ - Contains all the Artists needed to make an x tick - the tick line, - the label text and the grid line - """ - __name__ = 'xtick' - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - # x in data coords, y in axes coords - ax = self.axes - self.tick1line.set( - data=([0], [0]), transform=ax.get_xaxis_transform("tick1")) - self.tick2line.set( - data=([0], [1]), transform=ax.get_xaxis_transform("tick2")) - self.gridline.set( - data=([0, 0], [0, 1]), transform=ax.get_xaxis_transform("grid")) - # the y loc is 3 points below the min of y axis - trans, va, ha = self._get_text1_transform() - self.label1.set( - x=0, y=0, - verticalalignment=va, horizontalalignment=ha, transform=trans, - ) - trans, va, ha = self._get_text2_transform() - self.label2.set( - x=0, y=1, - verticalalignment=va, horizontalalignment=ha, transform=trans, - ) - - def _get_text1_transform(self): - return self.axes.get_xaxis_text1_transform(self._pad) - - def _get_text2_transform(self): - return self.axes.get_xaxis_text2_transform(self._pad) - - def _apply_tickdir(self, tickdir): - # docstring inherited - super()._apply_tickdir(tickdir) - mark1, mark2 = _MARKER_DICT[self._tickdir] - self.tick1line.set_marker(mark1) - self.tick2line.set_marker(mark2) - - def update_position(self, loc): - """Set the location of tick in data coords with scalar *loc*.""" - self.tick1line.set_xdata((loc,)) - self.tick2line.set_xdata((loc,)) - self.gridline.set_xdata((loc,)) - self.label1.set_x(loc) - self.label2.set_x(loc) - self._loc = loc - self.stale = True - - def get_view_interval(self): - # docstring inherited - return self.axes.viewLim.intervalx - - -class YTick(Tick): - """ - Contains all the Artists needed to make a Y tick - the tick line, - the label text and the grid line - """ - __name__ = 'ytick' - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - # x in axes coords, y in data coords - ax = self.axes - self.tick1line.set( - data=([0], [0]), transform=ax.get_yaxis_transform("tick1")) - self.tick2line.set( - data=([1], [0]), transform=ax.get_yaxis_transform("tick2")) - self.gridline.set( - data=([0, 1], [0, 0]), transform=ax.get_yaxis_transform("grid")) - # the y loc is 3 points below the min of y axis - trans, va, ha = self._get_text1_transform() - self.label1.set( - x=0, y=0, - verticalalignment=va, horizontalalignment=ha, transform=trans, - ) - trans, va, ha = self._get_text2_transform() - self.label2.set( - x=1, y=0, - verticalalignment=va, horizontalalignment=ha, transform=trans, - ) - - def _get_text1_transform(self): - return self.axes.get_yaxis_text1_transform(self._pad) - - def _get_text2_transform(self): - return self.axes.get_yaxis_text2_transform(self._pad) - - def _apply_tickdir(self, tickdir): - # docstring inherited - super()._apply_tickdir(tickdir) - mark1, mark2 = { - 'out': (mlines.TICKLEFT, mlines.TICKRIGHT), - 'in': (mlines.TICKRIGHT, mlines.TICKLEFT), - 'inout': ('_', '_'), - }[self._tickdir] - self.tick1line.set_marker(mark1) - self.tick2line.set_marker(mark2) - - def update_position(self, loc): - """Set the location of tick in data coords with scalar *loc*.""" - self.tick1line.set_ydata((loc,)) - self.tick2line.set_ydata((loc,)) - self.gridline.set_ydata((loc,)) - self.label1.set_y(loc) - self.label2.set_y(loc) - self._loc = loc - self.stale = True - - def get_view_interval(self): - # docstring inherited - return self.axes.viewLim.intervaly - - -class Ticker: - """ - A container for the objects defining tick position and format. - - Attributes - ---------- - locator : `~matplotlib.ticker.Locator` subclass - Determines the positions of the ticks. - formatter : `~matplotlib.ticker.Formatter` subclass - Determines the format of the tick labels. - """ - - def __init__(self): - self._locator = None - self._formatter = None - self._locator_is_default = True - self._formatter_is_default = True - - @property - def locator(self): - return self._locator - - @locator.setter - def locator(self, locator): - if not isinstance(locator, mticker.Locator): - raise TypeError('locator must be a subclass of ' - 'matplotlib.ticker.Locator') - self._locator = locator - - @property - def formatter(self): - return self._formatter - - @formatter.setter - def formatter(self, formatter): - if not isinstance(formatter, mticker.Formatter): - raise TypeError('formatter must be a subclass of ' - 'matplotlib.ticker.Formatter') - self._formatter = formatter - - -class _LazyTickList: - """ - A descriptor for lazy instantiation of tick lists. - - See comment above definition of the ``majorTicks`` and ``minorTicks`` - attributes. - """ - - def __init__(self, major): - self._major = major - - def __get__(self, instance, owner): - if instance is None: - return self - else: - # instance._get_tick() can itself try to access the majorTicks - # attribute (e.g. in certain projection classes which override - # e.g. get_xaxis_text1_transform). In order to avoid infinite - # recursion, first set the majorTicks on the instance to an empty - # list, then create the tick and append it. - if self._major: - instance.majorTicks = [] - tick = instance._get_tick(major=True) - instance.majorTicks.append(tick) - return instance.majorTicks - else: - instance.minorTicks = [] - tick = instance._get_tick(major=False) - instance.minorTicks.append(tick) - return instance.minorTicks - - -class Axis(martist.Artist): - """ - Base class for `.XAxis` and `.YAxis`. - - Attributes - ---------- - isDefault_label : bool - - axes : `~matplotlib.axes.Axes` - The `~.axes.Axes` to which the Axis belongs. - major : `~matplotlib.axis.Ticker` - Determines the major tick positions and their label format. - minor : `~matplotlib.axis.Ticker` - Determines the minor tick positions and their label format. - callbacks : `~matplotlib.cbook.CallbackRegistry` - - label : `~matplotlib.text.Text` - The axis label. - labelpad : float - The distance between the axis label and the tick labels. - Defaults to :rc:`axes.labelpad` = 4. - offsetText : `~matplotlib.text.Text` - A `.Text` object containing the data offset of the ticks (if any). - pickradius : float - The acceptance radius for containment tests. See also `.Axis.contains`. - majorTicks : list of `.Tick` - The major ticks. - - .. warning:: - - Ticks are not guaranteed to be persistent. Various operations - can create, delete and modify the Tick instances. There is an - imminent risk that changes to individual ticks will not - survive if you work on the figure further (including also - panning/zooming on a displayed figure). - - Working on the individual ticks is a method of last resort. - Use `.set_tick_params` instead if possible. - - minorTicks : list of `.Tick` - The minor ticks. - """ - OFFSETTEXTPAD = 3 - # The class used in _get_tick() to create tick instances. Must either be - # overwritten in subclasses, or subclasses must reimplement _get_tick(). - _tick_class = None - - def __str__(self): - return "{}({},{})".format( - type(self).__name__, *self.axes.transAxes.transform((0, 0))) - - def __init__(self, axes, *, pickradius=15, clear=True): - """ - Parameters - ---------- - axes : `~matplotlib.axes.Axes` - The `~.axes.Axes` to which the created Axis belongs. - pickradius : float - The acceptance radius for containment tests. See also - `.Axis.contains`. - clear : bool, default: True - Whether to clear the Axis on creation. This is not required, e.g., when - creating an Axis as part of an Axes, as ``Axes.clear`` will call - ``Axis.clear``. - .. versionadded:: 3.8 - """ - super().__init__() - self._remove_overlapping_locs = True - - self.set_figure(axes.figure) - - self.isDefault_label = True - - self.axes = axes - self.major = Ticker() - self.minor = Ticker() - self.callbacks = cbook.CallbackRegistry(signals=["units"]) - - self._autolabelpos = True - - self.label = mtext.Text( - np.nan, np.nan, - fontsize=mpl.rcParams['axes.labelsize'], - fontweight=mpl.rcParams['axes.labelweight'], - color=mpl.rcParams['axes.labelcolor'], - ) - self._set_artist_props(self.label) - self.offsetText = mtext.Text(np.nan, np.nan) - self._set_artist_props(self.offsetText) - - self.labelpad = mpl.rcParams['axes.labelpad'] - - self.pickradius = pickradius - - # Initialize here for testing; later add API - self._major_tick_kw = dict() - self._minor_tick_kw = dict() - - if clear: - self.clear() - else: - self.converter = None - self.units = None - - self._autoscale_on = True - - @property - def isDefault_majloc(self): - return self.major._locator_is_default - - @isDefault_majloc.setter - def isDefault_majloc(self, value): - self.major._locator_is_default = value - - @property - def isDefault_majfmt(self): - return self.major._formatter_is_default - - @isDefault_majfmt.setter - def isDefault_majfmt(self, value): - self.major._formatter_is_default = value - - @property - def isDefault_minloc(self): - return self.minor._locator_is_default - - @isDefault_minloc.setter - def isDefault_minloc(self, value): - self.minor._locator_is_default = value - - @property - def isDefault_minfmt(self): - return self.minor._formatter_is_default - - @isDefault_minfmt.setter - def isDefault_minfmt(self, value): - self.minor._formatter_is_default = value - - def _get_shared_axes(self): - """Return Grouper of shared axes for current axis.""" - return self.axes._shared_axes[ - self._get_axis_name()].get_siblings(self.axes) - - def _get_shared_axis(self): - """Return list of shared axis for current axis.""" - name = self._get_axis_name() - return [ax._axis_map[name] for ax in self._get_shared_axes()] - - def _get_axis_name(self): - """Return the axis name.""" - return [name for name, axis in self.axes._axis_map.items() - if axis is self][0] - - # During initialization, Axis objects often create ticks that are later - # unused; this turns out to be a very slow step. Instead, use a custom - # descriptor to make the tick lists lazy and instantiate them as needed. - majorTicks = _LazyTickList(major=True) - minorTicks = _LazyTickList(major=False) - - def get_remove_overlapping_locs(self): - return self._remove_overlapping_locs - - def set_remove_overlapping_locs(self, val): - self._remove_overlapping_locs = bool(val) - - remove_overlapping_locs = property( - get_remove_overlapping_locs, set_remove_overlapping_locs, - doc=('If minor ticker locations that overlap with major ' - 'ticker locations should be trimmed.')) - - def set_label_coords(self, x, y, transform=None): - """ - Set the coordinates of the label. - - By default, the x coordinate of the y label and the y coordinate of the - x label are determined by the tick label bounding boxes, but this can - lead to poor alignment of multiple labels if there are multiple axes. - - You can also specify the coordinate system of the label with the - transform. If None, the default coordinate system will be the axes - coordinate system: (0, 0) is bottom left, (0.5, 0.5) is center, etc. - """ - self._autolabelpos = False - if transform is None: - transform = self.axes.transAxes - - self.label.set_transform(transform) - self.label.set_position((x, y)) - self.stale = True - - def get_transform(self): - """Return the transform used in the Axis' scale""" - return self._scale.get_transform() - - def get_scale(self): - """Return this Axis' scale (as a str).""" - return self._scale.name - - def _set_scale(self, value, **kwargs): - if not isinstance(value, mscale.ScaleBase): - self._scale = mscale.scale_factory(value, self, **kwargs) - else: - self._scale = value - self._scale.set_default_locators_and_formatters(self) - - self.isDefault_majloc = True - self.isDefault_minloc = True - self.isDefault_majfmt = True - self.isDefault_minfmt = True - - # This method is directly wrapped by Axes.set_{x,y}scale. - def _set_axes_scale(self, value, **kwargs): - """ - Set this Axis' scale. - - Parameters - ---------- - value : {"linear", "log", "symlog", "logit", ...} or `.ScaleBase` - The axis scale type to apply. - - **kwargs - Different keyword arguments are accepted, depending on the scale. - See the respective class keyword arguments: - - - `matplotlib.scale.LinearScale` - - `matplotlib.scale.LogScale` - - `matplotlib.scale.SymmetricalLogScale` - - `matplotlib.scale.LogitScale` - - `matplotlib.scale.FuncScale` - - Notes - ----- - By default, Matplotlib supports the above-mentioned scales. - Additionally, custom scales may be registered using - `matplotlib.scale.register_scale`. These scales can then also - be used here. - """ - name = self._get_axis_name() - old_default_lims = (self.get_major_locator() - .nonsingular(-np.inf, np.inf)) - for ax in self._get_shared_axes(): - ax._axis_map[name]._set_scale(value, **kwargs) - ax._update_transScale() - ax.stale = True - new_default_lims = (self.get_major_locator() - .nonsingular(-np.inf, np.inf)) - if old_default_lims != new_default_lims: - # Force autoscaling now, to take advantage of the scale locator's - # nonsingular() before it possibly gets swapped out by the user. - self.axes.autoscale_view( - **{f"scale{k}": k == name for k in self.axes._axis_names}) - - def limit_range_for_scale(self, vmin, vmax): - return self._scale.limit_range_for_scale(vmin, vmax, self.get_minpos()) - - def _get_autoscale_on(self): - """Return whether this Axis is autoscaled.""" - return self._autoscale_on - - def _set_autoscale_on(self, b): - """ - Set whether this Axis is autoscaled when drawing or by - `.Axes.autoscale_view`. - - Parameters - ---------- - b : bool - """ - self._autoscale_on = b - - def get_children(self): - return [self.label, self.offsetText, - *self.get_major_ticks(), *self.get_minor_ticks()] - - def _reset_major_tick_kw(self): - self._major_tick_kw.clear() - self._major_tick_kw['gridOn'] = ( - mpl.rcParams['axes.grid'] and - mpl.rcParams['axes.grid.which'] in ('both', 'major')) - - def _reset_minor_tick_kw(self): - self._minor_tick_kw.clear() - self._minor_tick_kw['gridOn'] = ( - mpl.rcParams['axes.grid'] and - mpl.rcParams['axes.grid.which'] in ('both', 'minor')) - - def clear(self): - """ - Clear the axis. - - This resets axis properties to their default values: - - - the label - - the scale - - locators, formatters and ticks - - major and minor grid - - units - - registered callbacks - """ - self.label._reset_visual_defaults() - # The above resets the label formatting using text rcParams, - # so we then update the formatting using axes rcParams - self.label.set_color(mpl.rcParams['axes.labelcolor']) - self.label.set_fontsize(mpl.rcParams['axes.labelsize']) - self.label.set_fontweight(mpl.rcParams['axes.labelweight']) - self.offsetText._reset_visual_defaults() - self.labelpad = mpl.rcParams['axes.labelpad'] - - self._init() - - self._set_scale('linear') - - # Clear the callback registry for this axis, or it may "leak" - self.callbacks = cbook.CallbackRegistry(signals=["units"]) - - # whether the grids are on - self._major_tick_kw['gridOn'] = ( - mpl.rcParams['axes.grid'] and - mpl.rcParams['axes.grid.which'] in ('both', 'major')) - self._minor_tick_kw['gridOn'] = ( - mpl.rcParams['axes.grid'] and - mpl.rcParams['axes.grid.which'] in ('both', 'minor')) - self.reset_ticks() - - self.converter = None - self.units = None - self.stale = True - - def reset_ticks(self): - """ - Re-initialize the major and minor Tick lists. - - Each list starts with a single fresh Tick. - """ - # Restore the lazy tick lists. - try: - del self.majorTicks - except AttributeError: - pass - try: - del self.minorTicks - except AttributeError: - pass - try: - self.set_clip_path(self.axes.patch) - except AttributeError: - pass - - def set_tick_params(self, which='major', reset=False, **kwargs): - """ - Set appearance parameters for ticks, ticklabels, and gridlines. - - For documentation of keyword arguments, see - :meth:`matplotlib.axes.Axes.tick_params`. - - See Also - -------- - .Axis.get_tick_params - View the current style settings for ticks, ticklabels, and - gridlines. - """ - _api.check_in_list(['major', 'minor', 'both'], which=which) - kwtrans = self._translate_tick_params(kwargs) - - # the kwargs are stored in self._major/minor_tick_kw so that any - # future new ticks will automatically get them - if reset: - if which in ['major', 'both']: - self._reset_major_tick_kw() - self._major_tick_kw.update(kwtrans) - if which in ['minor', 'both']: - self._reset_minor_tick_kw() - self._minor_tick_kw.update(kwtrans) - self.reset_ticks() - else: - if which in ['major', 'both']: - self._major_tick_kw.update(kwtrans) - for tick in self.majorTicks: - tick._apply_params(**kwtrans) - if which in ['minor', 'both']: - self._minor_tick_kw.update(kwtrans) - for tick in self.minorTicks: - tick._apply_params(**kwtrans) - # labelOn and labelcolor also apply to the offset text. - if 'label1On' in kwtrans or 'label2On' in kwtrans: - self.offsetText.set_visible( - self._major_tick_kw.get('label1On', False) - or self._major_tick_kw.get('label2On', False)) - if 'labelcolor' in kwtrans: - self.offsetText.set_color(kwtrans['labelcolor']) - - self.stale = True - - def get_tick_params(self, which='major'): - """ - Get appearance parameters for ticks, ticklabels, and gridlines. - - .. versionadded:: 3.7 - - Parameters - ---------- - which : {'major', 'minor'}, default: 'major' - The group of ticks for which the parameters are retrieved. - - Returns - ------- - dict - Properties for styling tick elements added to the axis. - - Notes - ----- - This method returns the appearance parameters for styling *new* - elements added to this axis and may be different from the values - on current elements if they were modified directly by the user - (e.g., via ``set_*`` methods on individual tick objects). - - Examples - -------- - :: - - >>> ax.yaxis.set_tick_params(labelsize=30, labelcolor='red', - direction='out', which='major') - >>> ax.yaxis.get_tick_params(which='major') - {'direction': 'out', - 'left': True, - 'right': False, - 'labelleft': True, - 'labelright': False, - 'gridOn': False, - 'labelsize': 30, - 'labelcolor': 'red'} - >>> ax.yaxis.get_tick_params(which='minor') - {'left': True, - 'right': False, - 'labelleft': True, - 'labelright': False, - 'gridOn': False} - - - """ - _api.check_in_list(['major', 'minor'], which=which) - if which == 'major': - return self._translate_tick_params( - self._major_tick_kw, reverse=True - ) - return self._translate_tick_params(self._minor_tick_kw, reverse=True) - - @staticmethod - def _translate_tick_params(kw, reverse=False): - """ - Translate the kwargs supported by `.Axis.set_tick_params` to kwargs - supported by `.Tick._apply_params`. - - In particular, this maps axis specific names like 'top', 'left' - to the generic tick1, tick2 logic of the axis. Additionally, there - are some other name translations. - - Returns a new dict of translated kwargs. - - Note: Use reverse=True to translate from those supported by - `.Tick._apply_params` back to those supported by - `.Axis.set_tick_params`. - """ - kw_ = {**kw} - - # The following lists may be moved to a more accessible location. - allowed_keys = [ - 'size', 'width', 'color', 'tickdir', 'pad', - 'labelsize', 'labelcolor', 'labelfontfamily', 'zorder', 'gridOn', - 'tick1On', 'tick2On', 'label1On', 'label2On', - 'length', 'direction', 'left', 'bottom', 'right', 'top', - 'labelleft', 'labelbottom', 'labelright', 'labeltop', - 'labelrotation', - *_gridline_param_names] - - keymap = { - # tick_params key -> axis key - 'length': 'size', - 'direction': 'tickdir', - 'rotation': 'labelrotation', - 'left': 'tick1On', - 'bottom': 'tick1On', - 'right': 'tick2On', - 'top': 'tick2On', - 'labelleft': 'label1On', - 'labelbottom': 'label1On', - 'labelright': 'label2On', - 'labeltop': 'label2On', - } - if reverse: - kwtrans = { - oldkey: kw_.pop(newkey) - for oldkey, newkey in keymap.items() if newkey in kw_ - } - else: - kwtrans = { - newkey: kw_.pop(oldkey) - for oldkey, newkey in keymap.items() if oldkey in kw_ - } - if 'colors' in kw_: - c = kw_.pop('colors') - kwtrans['color'] = c - kwtrans['labelcolor'] = c - # Maybe move the checking up to the caller of this method. - for key in kw_: - if key not in allowed_keys: - raise ValueError( - "keyword %s is not recognized; valid keywords are %s" - % (key, allowed_keys)) - kwtrans.update(kw_) - return kwtrans - - @_api.rename_parameter("3.8", "clippath", "path") - def set_clip_path(self, path, transform=None): - super().set_clip_path(path, transform) - for child in self.majorTicks + self.minorTicks: - child.set_clip_path(path, transform) - self.stale = True - - def get_view_interval(self): - """Return the ``(min, max)`` view limits of this axis.""" - raise NotImplementedError('Derived must override') - - def set_view_interval(self, vmin, vmax, ignore=False): - """ - Set the axis view limits. This method is for internal use; Matplotlib - users should typically use e.g. `~.Axes.set_xlim` or `~.Axes.set_ylim`. - - If *ignore* is False (the default), this method will never reduce the - preexisting view limits, only expand them if *vmin* or *vmax* are not - within them. Moreover, the order of *vmin* and *vmax* does not matter; - the orientation of the axis will not change. - - If *ignore* is True, the view limits will be set exactly to ``(vmin, - vmax)`` in that order. - """ - raise NotImplementedError('Derived must override') - - def get_data_interval(self): - """Return the ``(min, max)`` data limits of this axis.""" - raise NotImplementedError('Derived must override') - - def set_data_interval(self, vmin, vmax, ignore=False): - """ - Set the axis data limits. This method is for internal use. - - If *ignore* is False (the default), this method will never reduce the - preexisting data limits, only expand them if *vmin* or *vmax* are not - within them. Moreover, the order of *vmin* and *vmax* does not matter; - the orientation of the axis will not change. - - If *ignore* is True, the data limits will be set exactly to ``(vmin, - vmax)`` in that order. - """ - raise NotImplementedError('Derived must override') - - def get_inverted(self): - """ - Return whether this Axis is oriented in the "inverse" direction. - - The "normal" direction is increasing to the right for the x-axis and to - the top for the y-axis; the "inverse" direction is increasing to the - left for the x-axis and to the bottom for the y-axis. - """ - low, high = self.get_view_interval() - return high < low - - def set_inverted(self, inverted): - """ - Set whether this Axis is oriented in the "inverse" direction. - - The "normal" direction is increasing to the right for the x-axis and to - the top for the y-axis; the "inverse" direction is increasing to the - left for the x-axis and to the bottom for the y-axis. - """ - a, b = self.get_view_interval() - # cast to bool to avoid bad interaction between python 3.8 and np.bool_ - self._set_lim(*sorted((a, b), reverse=bool(inverted)), auto=None) - - def set_default_intervals(self): - """ - Set the default limits for the axis data and view interval if they - have not been not mutated yet. - """ - # this is mainly in support of custom object plotting. For - # example, if someone passes in a datetime object, we do not - # know automagically how to set the default min/max of the - # data and view limits. The unit conversion AxisInfo - # interface provides a hook for custom types to register - # default limits through the AxisInfo.default_limits - # attribute, and the derived code below will check for that - # and use it if it's available (else just use 0..1) - - def _set_lim(self, v0, v1, *, emit=True, auto): - """ - Set view limits. - - This method is a helper for the Axes ``set_xlim``, ``set_ylim``, and - ``set_zlim`` methods. - - Parameters - ---------- - v0, v1 : float - The view limits. (Passing *v0* as a (low, high) pair is not - supported; normalization must occur in the Axes setters.) - emit : bool, default: True - Whether to notify observers of limit change. - auto : bool or None, default: False - Whether to turn on autoscaling of the x-axis. True turns on, False - turns off, None leaves unchanged. - """ - name = self._get_axis_name() - - self.axes._process_unit_info([(name, (v0, v1))], convert=False) - v0 = self.axes._validate_converted_limits(v0, self.convert_units) - v1 = self.axes._validate_converted_limits(v1, self.convert_units) - - if v0 is None or v1 is None: - # Axes init calls set_xlim(0, 1) before get_xlim() can be called, - # so only grab the limits if we really need them. - old0, old1 = self.get_view_interval() - if v0 is None: - v0 = old0 - if v1 is None: - v1 = old1 - - if self.get_scale() == 'log' and (v0 <= 0 or v1 <= 0): - # Axes init calls set_xlim(0, 1) before get_xlim() can be called, - # so only grab the limits if we really need them. - old0, old1 = self.get_view_interval() - if v0 <= 0: - _api.warn_external(f"Attempt to set non-positive {name}lim on " - f"a log-scaled axis will be ignored.") - v0 = old0 - if v1 <= 0: - _api.warn_external(f"Attempt to set non-positive {name}lim on " - f"a log-scaled axis will be ignored.") - v1 = old1 - if v0 == v1: - _api.warn_external( - f"Attempting to set identical low and high {name}lims " - f"makes transformation singular; automatically expanding.") - reverse = bool(v0 > v1) # explicit cast needed for python3.8+np.bool_. - v0, v1 = self.get_major_locator().nonsingular(v0, v1) - v0, v1 = self.limit_range_for_scale(v0, v1) - v0, v1 = sorted([v0, v1], reverse=bool(reverse)) - - self.set_view_interval(v0, v1, ignore=True) - # Mark viewlims as no longer stale without triggering an autoscale. - for ax in self._get_shared_axes(): - ax._stale_viewlims[name] = False - if auto is not None: - self._set_autoscale_on(bool(auto)) - - if emit: - self.axes.callbacks.process(f"{name}lim_changed", self.axes) - # Call all of the other axes that are shared with this one - for other in self._get_shared_axes(): - if other is self.axes: - continue - other._axis_map[name]._set_lim(v0, v1, emit=False, auto=auto) - if emit: - other.callbacks.process(f"{name}lim_changed", other) - if other.figure != self.figure: - other.figure.canvas.draw_idle() - - self.stale = True - return v0, v1 - - def _set_artist_props(self, a): - if a is None: - return - a.set_figure(self.figure) - - def _update_ticks(self): - """ - Update ticks (position and labels) using the current data interval of - the axes. Return the list of ticks that will be drawn. - """ - major_locs = self.get_majorticklocs() - major_labels = self.major.formatter.format_ticks(major_locs) - major_ticks = self.get_major_ticks(len(major_locs)) - for tick, loc, label in zip(major_ticks, major_locs, major_labels): - tick.update_position(loc) - tick.label1.set_text(label) - tick.label2.set_text(label) - minor_locs = self.get_minorticklocs() - minor_labels = self.minor.formatter.format_ticks(minor_locs) - minor_ticks = self.get_minor_ticks(len(minor_locs)) - for tick, loc, label in zip(minor_ticks, minor_locs, minor_labels): - tick.update_position(loc) - tick.label1.set_text(label) - tick.label2.set_text(label) - ticks = [*major_ticks, *minor_ticks] - - view_low, view_high = self.get_view_interval() - if view_low > view_high: - view_low, view_high = view_high, view_low - - interval_t = self.get_transform().transform([view_low, view_high]) - - ticks_to_draw = [] - for tick in ticks: - try: - loc_t = self.get_transform().transform(tick.get_loc()) - except AssertionError: - # transforms.transform doesn't allow masked values but - # some scales might make them, so we need this try/except. - pass - else: - if mtransforms._interval_contains_close(interval_t, loc_t): - ticks_to_draw.append(tick) - - return ticks_to_draw - - def _get_ticklabel_bboxes(self, ticks, renderer=None): - """Return lists of bboxes for ticks' label1's and label2's.""" - if renderer is None: - renderer = self.figure._get_renderer() - return ([tick.label1.get_window_extent(renderer) - for tick in ticks if tick.label1.get_visible()], - [tick.label2.get_window_extent(renderer) - for tick in ticks if tick.label2.get_visible()]) - - def get_tightbbox(self, renderer=None, *, for_layout_only=False): - """ - Return a bounding box that encloses the axis. It only accounts - tick labels, axis label, and offsetText. - - If *for_layout_only* is True, then the width of the label (if this - is an x-axis) or the height of the label (if this is a y-axis) is - collapsed to near zero. This allows tight/constrained_layout to ignore - too-long labels when doing their layout. - """ - if not self.get_visible(): - return - if renderer is None: - renderer = self.figure._get_renderer() - ticks_to_draw = self._update_ticks() - - self._update_label_position(renderer) - - # go back to just this axis's tick labels - tlb1, tlb2 = self._get_ticklabel_bboxes(ticks_to_draw, renderer) - - self._update_offset_text_position(tlb1, tlb2) - self.offsetText.set_text(self.major.formatter.get_offset()) - - bboxes = [ - *(a.get_window_extent(renderer) - for a in [self.offsetText] - if a.get_visible()), - *tlb1, *tlb2, - ] - # take care of label - if self.label.get_visible(): - bb = self.label.get_window_extent(renderer) - # for constrained/tight_layout, we want to ignore the label's - # width/height because the adjustments they make can't be improved. - # this code collapses the relevant direction - if for_layout_only: - if self.axis_name == "x" and bb.width > 0: - bb.x0 = (bb.x0 + bb.x1) / 2 - 0.5 - bb.x1 = bb.x0 + 1.0 - if self.axis_name == "y" and bb.height > 0: - bb.y0 = (bb.y0 + bb.y1) / 2 - 0.5 - bb.y1 = bb.y0 + 1.0 - bboxes.append(bb) - bboxes = [b for b in bboxes - if 0 < b.width < np.inf and 0 < b.height < np.inf] - if bboxes: - return mtransforms.Bbox.union(bboxes) - else: - return None - - def get_tick_padding(self): - values = [] - if len(self.majorTicks): - values.append(self.majorTicks[0].get_tick_padding()) - if len(self.minorTicks): - values.append(self.minorTicks[0].get_tick_padding()) - return max(values, default=0) - - @martist.allow_rasterization - def draw(self, renderer, *args, **kwargs): - # docstring inherited - - if not self.get_visible(): - return - renderer.open_group(__name__, gid=self.get_gid()) - - ticks_to_draw = self._update_ticks() - tlb1, tlb2 = self._get_ticklabel_bboxes(ticks_to_draw, renderer) - - for tick in ticks_to_draw: - tick.draw(renderer) - - # Shift label away from axes to avoid overlapping ticklabels. - self._update_label_position(renderer) - self.label.draw(renderer) - - self._update_offset_text_position(tlb1, tlb2) - self.offsetText.set_text(self.major.formatter.get_offset()) - self.offsetText.draw(renderer) - - renderer.close_group(__name__) - self.stale = False - - def get_gridlines(self): - r"""Return this Axis' grid lines as a list of `.Line2D`\s.""" - ticks = self.get_major_ticks() - return cbook.silent_list('Line2D gridline', - [tick.gridline for tick in ticks]) - - def get_label(self): - """Return the axis label as a Text instance.""" - return self.label - - def get_offset_text(self): - """Return the axis offsetText as a Text instance.""" - return self.offsetText - - def get_pickradius(self): - """Return the depth of the axis used by the picker.""" - return self._pickradius - - def get_majorticklabels(self): - """Return this Axis' major tick labels, as a list of `~.text.Text`.""" - self._update_ticks() - ticks = self.get_major_ticks() - labels1 = [tick.label1 for tick in ticks if tick.label1.get_visible()] - labels2 = [tick.label2 for tick in ticks if tick.label2.get_visible()] - return labels1 + labels2 - - def get_minorticklabels(self): - """Return this Axis' minor tick labels, as a list of `~.text.Text`.""" - self._update_ticks() - ticks = self.get_minor_ticks() - labels1 = [tick.label1 for tick in ticks if tick.label1.get_visible()] - labels2 = [tick.label2 for tick in ticks if tick.label2.get_visible()] - return labels1 + labels2 - - def get_ticklabels(self, minor=False, which=None): - """ - Get this Axis' tick labels. - - Parameters - ---------- - minor : bool - Whether to return the minor or the major ticklabels. - - which : None, ('minor', 'major', 'both') - Overrides *minor*. - - Selects which ticklabels to return - - Returns - ------- - list of `~matplotlib.text.Text` - """ - if which is not None: - if which == 'minor': - return self.get_minorticklabels() - elif which == 'major': - return self.get_majorticklabels() - elif which == 'both': - return self.get_majorticklabels() + self.get_minorticklabels() - else: - _api.check_in_list(['major', 'minor', 'both'], which=which) - if minor: - return self.get_minorticklabels() - return self.get_majorticklabels() - - def get_majorticklines(self): - r"""Return this Axis' major tick lines as a list of `.Line2D`\s.""" - lines = [] - ticks = self.get_major_ticks() - for tick in ticks: - lines.append(tick.tick1line) - lines.append(tick.tick2line) - return cbook.silent_list('Line2D ticklines', lines) - - def get_minorticklines(self): - r"""Return this Axis' minor tick lines as a list of `.Line2D`\s.""" - lines = [] - ticks = self.get_minor_ticks() - for tick in ticks: - lines.append(tick.tick1line) - lines.append(tick.tick2line) - return cbook.silent_list('Line2D ticklines', lines) - - def get_ticklines(self, minor=False): - r"""Return this Axis' tick lines as a list of `.Line2D`\s.""" - if minor: - return self.get_minorticklines() - return self.get_majorticklines() - - def get_majorticklocs(self): - """Return this Axis' major tick locations in data coordinates.""" - return self.major.locator() - - def get_minorticklocs(self): - """Return this Axis' minor tick locations in data coordinates.""" - # Remove minor ticks duplicating major ticks. - minor_locs = np.asarray(self.minor.locator()) - if self.remove_overlapping_locs: - major_locs = self.major.locator() - transform = self._scale.get_transform() - tr_minor_locs = transform.transform(minor_locs) - tr_major_locs = transform.transform(major_locs) - lo, hi = sorted(transform.transform(self.get_view_interval())) - # Use the transformed view limits as scale. 1e-5 is the default - # rtol for np.isclose. - tol = (hi - lo) * 1e-5 - mask = np.isclose(tr_minor_locs[:, None], tr_major_locs[None, :], - atol=tol, rtol=0).any(axis=1) - minor_locs = minor_locs[~mask] - return minor_locs - - def get_ticklocs(self, *, minor=False): - """ - Return this Axis' tick locations in data coordinates. - - The locations are not clipped to the current axis limits and hence - may contain locations that are not visible in the output. - - Parameters - ---------- - minor : bool, default: False - True to return the minor tick directions, - False to return the major tick directions. - - Returns - ------- - array of tick locations - """ - return self.get_minorticklocs() if minor else self.get_majorticklocs() - - def get_ticks_direction(self, minor=False): - """ - Return an array of this Axis' tick directions. - - Parameters - ---------- - minor : bool, default: False - True to return the minor tick directions, - False to return the major tick directions. - - Returns - ------- - array of tick directions - """ - if minor: - return np.array( - [tick._tickdir for tick in self.get_minor_ticks()]) - else: - return np.array( - [tick._tickdir for tick in self.get_major_ticks()]) - - def _get_tick(self, major): - """Return the default tick instance.""" - if self._tick_class is None: - raise NotImplementedError( - f"The Axis subclass {self.__class__.__name__} must define " - "_tick_class or reimplement _get_tick()") - tick_kw = self._major_tick_kw if major else self._minor_tick_kw - return self._tick_class(self.axes, 0, major=major, **tick_kw) - - def _get_tick_label_size(self, axis_name): - """ - Return the text size of tick labels for this Axis. - - This is a convenience function to avoid having to create a `Tick` in - `.get_tick_space`, since it is expensive. - """ - tick_kw = self._major_tick_kw - size = tick_kw.get('labelsize', - mpl.rcParams[f'{axis_name}tick.labelsize']) - return mtext.FontProperties(size=size).get_size_in_points() - - def _copy_tick_props(self, src, dest): - """Copy the properties from *src* tick to *dest* tick.""" - if src is None or dest is None: - return - dest.label1.update_from(src.label1) - dest.label2.update_from(src.label2) - dest.tick1line.update_from(src.tick1line) - dest.tick2line.update_from(src.tick2line) - dest.gridline.update_from(src.gridline) - - def get_label_text(self): - """Get the text of the label.""" - return self.label.get_text() - - def get_major_locator(self): - """Get the locator of the major ticker.""" - return self.major.locator - - def get_minor_locator(self): - """Get the locator of the minor ticker.""" - return self.minor.locator - - def get_major_formatter(self): - """Get the formatter of the major ticker.""" - return self.major.formatter - - def get_minor_formatter(self): - """Get the formatter of the minor ticker.""" - return self.minor.formatter - - def get_major_ticks(self, numticks=None): - r""" - Return the list of major `.Tick`\s. - - .. warning:: - - Ticks are not guaranteed to be persistent. Various operations - can create, delete and modify the Tick instances. There is an - imminent risk that changes to individual ticks will not - survive if you work on the figure further (including also - panning/zooming on a displayed figure). - - Working on the individual ticks is a method of last resort. - Use `.set_tick_params` instead if possible. - """ - if numticks is None: - numticks = len(self.get_majorticklocs()) - - while len(self.majorTicks) < numticks: - # Update the new tick label properties from the old. - tick = self._get_tick(major=True) - self.majorTicks.append(tick) - self._copy_tick_props(self.majorTicks[0], tick) - - return self.majorTicks[:numticks] - - def get_minor_ticks(self, numticks=None): - r""" - Return the list of minor `.Tick`\s. - - .. warning:: - - Ticks are not guaranteed to be persistent. Various operations - can create, delete and modify the Tick instances. There is an - imminent risk that changes to individual ticks will not - survive if you work on the figure further (including also - panning/zooming on a displayed figure). - - Working on the individual ticks is a method of last resort. - Use `.set_tick_params` instead if possible. - """ - if numticks is None: - numticks = len(self.get_minorticklocs()) - - while len(self.minorTicks) < numticks: - # Update the new tick label properties from the old. - tick = self._get_tick(major=False) - self.minorTicks.append(tick) - self._copy_tick_props(self.minorTicks[0], tick) - - return self.minorTicks[:numticks] - - def grid(self, visible=None, which='major', **kwargs): - """ - Configure the grid lines. - - Parameters - ---------- - visible : bool or None - Whether to show the grid lines. If any *kwargs* are supplied, it - is assumed you want the grid on and *visible* will be set to True. - - If *visible* is *None* and there are no *kwargs*, this toggles the - visibility of the lines. - - which : {'major', 'minor', 'both'} - The grid lines to apply the changes on. - - **kwargs : `~matplotlib.lines.Line2D` properties - Define the line properties of the grid, e.g.:: - - grid(color='r', linestyle='-', linewidth=2) - """ - if kwargs: - if visible is None: - visible = True - elif not visible: # something false-like but not None - _api.warn_external('First parameter to grid() is false, ' - 'but line properties are supplied. The ' - 'grid will be enabled.') - visible = True - which = which.lower() - _api.check_in_list(['major', 'minor', 'both'], which=which) - gridkw = {f'grid_{name}': value for name, value in kwargs.items()} - if which in ['minor', 'both']: - gridkw['gridOn'] = (not self._minor_tick_kw['gridOn'] - if visible is None else visible) - self.set_tick_params(which='minor', **gridkw) - if which in ['major', 'both']: - gridkw['gridOn'] = (not self._major_tick_kw['gridOn'] - if visible is None else visible) - self.set_tick_params(which='major', **gridkw) - self.stale = True - - def update_units(self, data): - """ - Introspect *data* for units converter and update the - ``axis.converter`` instance if necessary. Return *True* - if *data* is registered for unit conversion. - """ - converter = munits.registry.get_converter(data) - if converter is None: - return False - - neednew = self.converter != converter - self.converter = converter - default = self.converter.default_units(data, self) - if default is not None and self.units is None: - self.set_units(default) - - elif neednew: - self._update_axisinfo() - self.stale = True - return True - - def _update_axisinfo(self): - """ - Check the axis converter for the stored units to see if the - axis info needs to be updated. - """ - if self.converter is None: - return - - info = self.converter.axisinfo(self.units, self) - - if info is None: - return - if info.majloc is not None and \ - self.major.locator != info.majloc and self.isDefault_majloc: - self.set_major_locator(info.majloc) - self.isDefault_majloc = True - if info.minloc is not None and \ - self.minor.locator != info.minloc and self.isDefault_minloc: - self.set_minor_locator(info.minloc) - self.isDefault_minloc = True - if info.majfmt is not None and \ - self.major.formatter != info.majfmt and self.isDefault_majfmt: - self.set_major_formatter(info.majfmt) - self.isDefault_majfmt = True - if info.minfmt is not None and \ - self.minor.formatter != info.minfmt and self.isDefault_minfmt: - self.set_minor_formatter(info.minfmt) - self.isDefault_minfmt = True - if info.label is not None and self.isDefault_label: - self.set_label_text(info.label) - self.isDefault_label = True - - self.set_default_intervals() - - def have_units(self): - return self.converter is not None or self.units is not None - - def convert_units(self, x): - # If x is natively supported by Matplotlib, doesn't need converting - if munits._is_natively_supported(x): - return x - - if self.converter is None: - self.converter = munits.registry.get_converter(x) - - if self.converter is None: - return x - try: - ret = self.converter.convert(x, self.units, self) - except Exception as e: - raise munits.ConversionError('Failed to convert value(s) to axis ' - f'units: {x!r}') from e - return ret - - def set_units(self, u): - """ - Set the units for axis. - - Parameters - ---------- - u : units tag - - Notes - ----- - The units of any shared axis will also be updated. - """ - if u == self.units: - return - for axis in self._get_shared_axis(): - axis.units = u - axis._update_axisinfo() - axis.callbacks.process('units') - axis.stale = True - - def get_units(self): - """Return the units for axis.""" - return self.units - - def set_label_text(self, label, fontdict=None, **kwargs): - """ - Set the text value of the axis label. - - Parameters - ---------- - label : str - Text string. - fontdict : dict - Text properties. - - .. admonition:: Discouraged - - The use of *fontdict* is discouraged. Parameters should be passed as - individual keyword arguments or using dictionary-unpacking - ``set_label_text(..., **fontdict)``. - - **kwargs - Merged into fontdict. - """ - self.isDefault_label = False - self.label.set_text(label) - if fontdict is not None: - self.label.update(fontdict) - self.label.update(kwargs) - self.stale = True - return self.label - - def set_major_formatter(self, formatter): - """ - Set the formatter of the major ticker. - - In addition to a `~matplotlib.ticker.Formatter` instance, - this also accepts a ``str`` or function. - - For a ``str`` a `~matplotlib.ticker.StrMethodFormatter` is used. - The field used for the value must be labeled ``'x'`` and the field used - for the position must be labeled ``'pos'``. - See the `~matplotlib.ticker.StrMethodFormatter` documentation for - more information. - - For a function, a `~matplotlib.ticker.FuncFormatter` is used. - The function must take two inputs (a tick value ``x`` and a - position ``pos``), and return a string containing the corresponding - tick label. - See the `~matplotlib.ticker.FuncFormatter` documentation for - more information. - - Parameters - ---------- - formatter : `~matplotlib.ticker.Formatter`, ``str``, or function - """ - self._set_formatter(formatter, self.major) - - def set_minor_formatter(self, formatter): - """ - Set the formatter of the minor ticker. - - In addition to a `~matplotlib.ticker.Formatter` instance, - this also accepts a ``str`` or function. - See `.Axis.set_major_formatter` for more information. - - Parameters - ---------- - formatter : `~matplotlib.ticker.Formatter`, ``str``, or function - """ - self._set_formatter(formatter, self.minor) - - def _set_formatter(self, formatter, level): - if isinstance(formatter, str): - formatter = mticker.StrMethodFormatter(formatter) - # Don't allow any other TickHelper to avoid easy-to-make errors, - # like using a Locator instead of a Formatter. - elif (callable(formatter) and - not isinstance(formatter, mticker.TickHelper)): - formatter = mticker.FuncFormatter(formatter) - else: - _api.check_isinstance(mticker.Formatter, formatter=formatter) - - if (isinstance(formatter, mticker.FixedFormatter) - and len(formatter.seq) > 0 - and not isinstance(level.locator, mticker.FixedLocator)): - _api.warn_external('FixedFormatter should only be used together ' - 'with FixedLocator') - - if level == self.major: - self.isDefault_majfmt = False - else: - self.isDefault_minfmt = False - - level.formatter = formatter - formatter.set_axis(self) - self.stale = True - - def set_major_locator(self, locator): - """ - Set the locator of the major ticker. - - Parameters - ---------- - locator : `~matplotlib.ticker.Locator` - """ - _api.check_isinstance(mticker.Locator, locator=locator) - self.isDefault_majloc = False - self.major.locator = locator - if self.major.formatter: - self.major.formatter._set_locator(locator) - locator.set_axis(self) - self.stale = True - - def set_minor_locator(self, locator): - """ - Set the locator of the minor ticker. - - Parameters - ---------- - locator : `~matplotlib.ticker.Locator` - """ - _api.check_isinstance(mticker.Locator, locator=locator) - self.isDefault_minloc = False - self.minor.locator = locator - if self.minor.formatter: - self.minor.formatter._set_locator(locator) - locator.set_axis(self) - self.stale = True - - def set_pickradius(self, pickradius): - """ - Set the depth of the axis used by the picker. - - Parameters - ---------- - pickradius : float - The acceptance radius for containment tests. - See also `.Axis.contains`. - """ - if not isinstance(pickradius, Real) or pickradius < 0: - raise ValueError("pick radius should be a distance") - self._pickradius = pickradius - - pickradius = property( - get_pickradius, set_pickradius, doc="The acceptance radius for " - "containment tests. See also `.Axis.contains`.") - - # Helper for set_ticklabels. Defining it here makes it picklable. - @staticmethod - def _format_with_dict(tickd, x, pos): - return tickd.get(x, "") - - @_api.rename_parameter("3.7", "ticklabels", "labels") - def set_ticklabels(self, labels, *, minor=False, fontdict=None, **kwargs): - r""" - [*Discouraged*] Set this Axis' tick labels with list of string labels. - - .. admonition:: Discouraged - - The use of this method is discouraged, because of the dependency on - tick positions. In most cases, you'll want to use - ``Axes.set_[x/y/z]ticks(positions, labels)`` or ``Axis.set_ticks`` - instead. - - If you are using this method, you should always fix the tick - positions before, e.g. by using `.Axis.set_ticks` or by explicitly - setting a `~.ticker.FixedLocator`. Otherwise, ticks are free to - move and the labels may end up in unexpected positions. - - Parameters - ---------- - labels : sequence of str or of `.Text`\s - Texts for labeling each tick location in the sequence set by - `.Axis.set_ticks`; the number of labels must match the number of - locations. - - minor : bool - If True, set minor ticks instead of major ticks. - - fontdict : dict, optional - - .. admonition:: Discouraged - - The use of *fontdict* is discouraged. Parameters should be passed as - individual keyword arguments or using dictionary-unpacking - ``set_ticklabels(..., **fontdict)``. - - A dictionary controlling the appearance of the ticklabels. - The default *fontdict* is:: - - {'fontsize': rcParams['axes.titlesize'], - 'fontweight': rcParams['axes.titleweight'], - 'verticalalignment': 'baseline', - 'horizontalalignment': loc} - - **kwargs - Text properties. - - .. warning:: - - This only sets the properties of the current ticks. - Ticks are not guaranteed to be persistent. Various operations - can create, delete and modify the Tick instances. There is an - imminent risk that these settings can get lost if you work on - the figure further (including also panning/zooming on a - displayed figure). - - Use `.set_tick_params` instead if possible. - - Returns - ------- - list of `.Text`\s - For each tick, includes ``tick.label1`` if it is visible, then - ``tick.label2`` if it is visible, in that order. - """ - try: - labels = [t.get_text() if hasattr(t, 'get_text') else t - for t in labels] - except TypeError: - raise TypeError(f"{labels:=} must be a sequence") from None - locator = (self.get_minor_locator() if minor - else self.get_major_locator()) - if not labels: - # eg labels=[]: - formatter = mticker.NullFormatter() - elif isinstance(locator, mticker.FixedLocator): - # Passing [] as a list of labels is often used as a way to - # remove all tick labels, so only error for > 0 labels - if len(locator.locs) != len(labels) and len(labels) != 0: - raise ValueError( - "The number of FixedLocator locations" - f" ({len(locator.locs)}), usually from a call to" - " set_ticks, does not match" - f" the number of labels ({len(labels)}).") - tickd = {loc: lab for loc, lab in zip(locator.locs, labels)} - func = functools.partial(self._format_with_dict, tickd) - formatter = mticker.FuncFormatter(func) - else: - _api.warn_external( - "set_ticklabels() should only be used with a fixed number of " - "ticks, i.e. after set_ticks() or using a FixedLocator.") - formatter = mticker.FixedFormatter(labels) - - with warnings.catch_warnings(): - warnings.filterwarnings( - "ignore", - message="FixedFormatter should only be used together with FixedLocator") - if minor: - self.set_minor_formatter(formatter) - locs = self.get_minorticklocs() - ticks = self.get_minor_ticks(len(locs)) - else: - self.set_major_formatter(formatter) - locs = self.get_majorticklocs() - ticks = self.get_major_ticks(len(locs)) - - ret = [] - if fontdict is not None: - kwargs.update(fontdict) - for pos, (loc, tick) in enumerate(zip(locs, ticks)): - tick.update_position(loc) - tick_label = formatter(loc, pos) - # deal with label1 - tick.label1.set_text(tick_label) - tick.label1._internal_update(kwargs) - # deal with label2 - tick.label2.set_text(tick_label) - tick.label2._internal_update(kwargs) - # only return visible tick labels - if tick.label1.get_visible(): - ret.append(tick.label1) - if tick.label2.get_visible(): - ret.append(tick.label2) - - self.stale = True - return ret - - def _set_tick_locations(self, ticks, *, minor=False): - # see docstring of set_ticks - - # XXX if the user changes units, the information will be lost here - ticks = self.convert_units(ticks) - locator = mticker.FixedLocator(ticks) # validate ticks early. - if len(ticks): - for axis in self._get_shared_axis(): - # set_view_interval maintains any preexisting inversion. - axis.set_view_interval(min(ticks), max(ticks)) - self.axes.stale = True - if minor: - self.set_minor_locator(locator) - return self.get_minor_ticks(len(ticks)) - else: - self.set_major_locator(locator) - return self.get_major_ticks(len(ticks)) - - def set_ticks(self, ticks, labels=None, *, minor=False, **kwargs): - """ - Set this Axis' tick locations and optionally tick labels. - - If necessary, the view limits of the Axis are expanded so that all - given ticks are visible. - - Parameters - ---------- - ticks : 1D array-like - Array of tick locations. The axis `.Locator` is replaced by a - `~.ticker.FixedLocator`. - - The values may be either floats or in axis units. - - Pass an empty list to remove all ticks:: - - set_ticks([]) - - Some tick formatters will not label arbitrary tick positions; - e.g. log formatters only label decade ticks by default. In - such a case you can set a formatter explicitly on the axis - using `.Axis.set_major_formatter` or provide formatted - *labels* yourself. - labels : list of str, optional - Tick labels for each location in *ticks*. *labels* must be of the same - length as *ticks*. If not set, the labels are generate using the axis - tick `.Formatter`. - minor : bool, default: False - If ``False``, set the major ticks; if ``True``, the minor ticks. - **kwargs - `.Text` properties for the labels. Using these is only allowed if - you pass *labels*. In other cases, please use `~.Axes.tick_params`. - - Notes - ----- - The mandatory expansion of the view limits is an intentional design - choice to prevent the surprise of a non-visible tick. If you need - other limits, you should set the limits explicitly after setting the - ticks. - """ - if labels is None and kwargs: - first_key = next(iter(kwargs)) - raise ValueError( - f"Incorrect use of keyword argument {first_key!r}. Keyword arguments " - "other than 'minor' modify the text labels and can only be used if " - "'labels' are passed as well.") - result = self._set_tick_locations(ticks, minor=minor) - if labels is not None: - self.set_ticklabels(labels, minor=minor, **kwargs) - return result - - def _get_tick_boxes_siblings(self, renderer): - """ - Get the bounding boxes for this `.axis` and its siblings - as set by `.Figure.align_xlabels` or `.Figure.align_ylabels`. - - By default, it just gets bboxes for *self*. - """ - # Get the Grouper keeping track of x or y label groups for this figure. - name = self._get_axis_name() - if name not in self.figure._align_label_groups: - return [], [] - grouper = self.figure._align_label_groups[name] - bboxes = [] - bboxes2 = [] - # If we want to align labels from other Axes: - for ax in grouper.get_siblings(self.axes): - axis = ax._axis_map[name] - ticks_to_draw = axis._update_ticks() - tlb, tlb2 = axis._get_ticklabel_bboxes(ticks_to_draw, renderer) - bboxes.extend(tlb) - bboxes2.extend(tlb2) - return bboxes, bboxes2 - - def _update_label_position(self, renderer): - """ - Update the label position based on the bounding box enclosing - all the ticklabels and axis spine. - """ - raise NotImplementedError('Derived must override') - - def _update_offset_text_position(self, bboxes, bboxes2): - """ - Update the offset text position based on the sequence of bounding - boxes of all the ticklabels. - """ - raise NotImplementedError('Derived must override') - - def axis_date(self, tz=None): - """ - Set up axis ticks and labels to treat data along this Axis as dates. - - Parameters - ---------- - tz : str or `datetime.tzinfo`, default: :rc:`timezone` - The timezone used to create date labels. - """ - # By providing a sample datetime instance with the desired timezone, - # the registered converter can be selected, and the "units" attribute, - # which is the timezone, can be set. - if isinstance(tz, str): - import dateutil.tz - tz = dateutil.tz.gettz(tz) - self.update_units(datetime.datetime(2009, 1, 1, 0, 0, 0, 0, tz)) - - def get_tick_space(self): - """Return the estimated number of ticks that can fit on the axis.""" - # Must be overridden in the subclass - raise NotImplementedError() - - def _get_ticks_position(self): - """ - Helper for `XAxis.get_ticks_position` and `YAxis.get_ticks_position`. - - Check the visibility of tick1line, label1, tick2line, and label2 on - the first major and the first minor ticks, and return - - - 1 if only tick1line and label1 are visible (which corresponds to - "bottom" for the x-axis and "left" for the y-axis); - - 2 if only tick2line and label2 are visible (which corresponds to - "top" for the x-axis and "right" for the y-axis); - - "default" if only tick1line, tick2line and label1 are visible; - - "unknown" otherwise. - """ - major = self.majorTicks[0] - minor = self.minorTicks[0] - if all(tick.tick1line.get_visible() - and not tick.tick2line.get_visible() - and tick.label1.get_visible() - and not tick.label2.get_visible() - for tick in [major, minor]): - return 1 - elif all(tick.tick2line.get_visible() - and not tick.tick1line.get_visible() - and tick.label2.get_visible() - and not tick.label1.get_visible() - for tick in [major, minor]): - return 2 - elif all(tick.tick1line.get_visible() - and tick.tick2line.get_visible() - and tick.label1.get_visible() - and not tick.label2.get_visible() - for tick in [major, minor]): - return "default" - else: - return "unknown" - - def get_label_position(self): - """ - Return the label position (top or bottom) - """ - return self.label_position - - def set_label_position(self, position): - """ - Set the label position (top or bottom) - - Parameters - ---------- - position : {'top', 'bottom'} - """ - raise NotImplementedError() - - def get_minpos(self): - raise NotImplementedError() - - -def _make_getset_interval(method_name, lim_name, attr_name): - """ - Helper to generate ``get_{data,view}_interval`` and - ``set_{data,view}_interval`` implementations. - """ - - def getter(self): - # docstring inherited. - return getattr(getattr(self.axes, lim_name), attr_name) - - def setter(self, vmin, vmax, ignore=False): - # docstring inherited. - if ignore: - setattr(getattr(self.axes, lim_name), attr_name, (vmin, vmax)) - else: - oldmin, oldmax = getter(self) - if oldmin < oldmax: - setter(self, min(vmin, vmax, oldmin), max(vmin, vmax, oldmax), - ignore=True) - else: - setter(self, max(vmin, vmax, oldmin), min(vmin, vmax, oldmax), - ignore=True) - self.stale = True - - getter.__name__ = f"get_{method_name}_interval" - setter.__name__ = f"set_{method_name}_interval" - - return getter, setter - - -class XAxis(Axis): - __name__ = 'xaxis' - axis_name = 'x' #: Read-only name identifying the axis. - _tick_class = XTick - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self._init() - - def _init(self): - """ - Initialize the label and offsetText instance values and - `label_position` / `offset_text_position`. - """ - # x in axes coords, y in display coords (to be updated at draw time by - # _update_label_positions and _update_offset_text_position). - self.label.set( - x=0.5, y=0, - verticalalignment='top', horizontalalignment='center', - transform=mtransforms.blended_transform_factory( - self.axes.transAxes, mtransforms.IdentityTransform()), - ) - self.label_position = 'bottom' - - if mpl.rcParams['xtick.labelcolor'] == 'inherit': - tick_color = mpl.rcParams['xtick.color'] - else: - tick_color = mpl.rcParams['xtick.labelcolor'] - - self.offsetText.set( - x=1, y=0, - verticalalignment='top', horizontalalignment='right', - transform=mtransforms.blended_transform_factory( - self.axes.transAxes, mtransforms.IdentityTransform()), - fontsize=mpl.rcParams['xtick.labelsize'], - color=tick_color - ) - self.offset_text_position = 'bottom' - - def contains(self, mouseevent): - """Test whether the mouse event occurred in the x-axis.""" - if self._different_canvas(mouseevent): - return False, {} - x, y = mouseevent.x, mouseevent.y - try: - trans = self.axes.transAxes.inverted() - xaxes, yaxes = trans.transform((x, y)) - except ValueError: - return False, {} - (l, b), (r, t) = self.axes.transAxes.transform([(0, 0), (1, 1)]) - inaxis = 0 <= xaxes <= 1 and ( - b - self._pickradius < y < b or - t < y < t + self._pickradius) - return inaxis, {} - - def set_label_position(self, position): - """ - Set the label position (top or bottom) - - Parameters - ---------- - position : {'top', 'bottom'} - """ - self.label.set_verticalalignment(_api.check_getitem({ - 'top': 'baseline', 'bottom': 'top', - }, position=position)) - self.label_position = position - self.stale = True - - def _update_label_position(self, renderer): - """ - Update the label position based on the bounding box enclosing - all the ticklabels and axis spine - """ - if not self._autolabelpos: - return - - # get bounding boxes for this axis and any siblings - # that have been set by `fig.align_xlabels()` - bboxes, bboxes2 = self._get_tick_boxes_siblings(renderer=renderer) - - x, y = self.label.get_position() - if self.label_position == 'bottom': - try: - spine = self.axes.spines['bottom'] - spinebbox = spine.get_window_extent() - except KeyError: - # use Axes if spine doesn't exist - spinebbox = self.axes.bbox - bbox = mtransforms.Bbox.union(bboxes + [spinebbox]) - bottom = bbox.y0 - - self.label.set_position( - (x, bottom - self.labelpad * self.figure.dpi / 72) - ) - else: - try: - spine = self.axes.spines['top'] - spinebbox = spine.get_window_extent() - except KeyError: - # use Axes if spine doesn't exist - spinebbox = self.axes.bbox - bbox = mtransforms.Bbox.union(bboxes2 + [spinebbox]) - top = bbox.y1 - - self.label.set_position( - (x, top + self.labelpad * self.figure.dpi / 72) - ) - - def _update_offset_text_position(self, bboxes, bboxes2): - """ - Update the offset_text position based on the sequence of bounding - boxes of all the ticklabels - """ - x, y = self.offsetText.get_position() - if not hasattr(self, '_tick_position'): - self._tick_position = 'bottom' - if self._tick_position == 'bottom': - if not len(bboxes): - bottom = self.axes.bbox.ymin - else: - bbox = mtransforms.Bbox.union(bboxes) - bottom = bbox.y0 - y = bottom - self.OFFSETTEXTPAD * self.figure.dpi / 72 - else: - if not len(bboxes2): - top = self.axes.bbox.ymax - else: - bbox = mtransforms.Bbox.union(bboxes2) - top = bbox.y1 - y = top + self.OFFSETTEXTPAD * self.figure.dpi / 72 - self.offsetText.set_position((x, y)) - - def set_ticks_position(self, position): - """ - Set the ticks position. - - Parameters - ---------- - position : {'top', 'bottom', 'both', 'default', 'none'} - 'both' sets the ticks to appear on both positions, but does not - change the tick labels. 'default' resets the tick positions to - the default: ticks on both positions, labels at bottom. 'none' - can be used if you don't want any ticks. 'none' and 'both' - affect only the ticks, not the labels. - """ - if position == 'top': - self.set_tick_params(which='both', top=True, labeltop=True, - bottom=False, labelbottom=False) - self._tick_position = 'top' - self.offsetText.set_verticalalignment('bottom') - elif position == 'bottom': - self.set_tick_params(which='both', top=False, labeltop=False, - bottom=True, labelbottom=True) - self._tick_position = 'bottom' - self.offsetText.set_verticalalignment('top') - elif position == 'both': - self.set_tick_params(which='both', top=True, - bottom=True) - elif position == 'none': - self.set_tick_params(which='both', top=False, - bottom=False) - elif position == 'default': - self.set_tick_params(which='both', top=True, labeltop=False, - bottom=True, labelbottom=True) - self._tick_position = 'bottom' - self.offsetText.set_verticalalignment('top') - else: - _api.check_in_list(['top', 'bottom', 'both', 'default', 'none'], - position=position) - self.stale = True - - def tick_top(self): - """ - Move ticks and ticklabels (if present) to the top of the Axes. - """ - label = True - if 'label1On' in self._major_tick_kw: - label = (self._major_tick_kw['label1On'] - or self._major_tick_kw['label2On']) - self.set_ticks_position('top') - # If labels were turned off before this was called, leave them off. - self.set_tick_params(which='both', labeltop=label) - - def tick_bottom(self): - """ - Move ticks and ticklabels (if present) to the bottom of the Axes. - """ - label = True - if 'label1On' in self._major_tick_kw: - label = (self._major_tick_kw['label1On'] - or self._major_tick_kw['label2On']) - self.set_ticks_position('bottom') - # If labels were turned off before this was called, leave them off. - self.set_tick_params(which='both', labelbottom=label) - - def get_ticks_position(self): - """ - Return the ticks position ("top", "bottom", "default", or "unknown"). - """ - return {1: "bottom", 2: "top", - "default": "default", "unknown": "unknown"}[ - self._get_ticks_position()] - - get_view_interval, set_view_interval = _make_getset_interval( - "view", "viewLim", "intervalx") - get_data_interval, set_data_interval = _make_getset_interval( - "data", "dataLim", "intervalx") - - def get_minpos(self): - return self.axes.dataLim.minposx - - def set_default_intervals(self): - # docstring inherited - # only change view if dataLim has not changed and user has - # not changed the view: - if (not self.axes.dataLim.mutatedx() and - not self.axes.viewLim.mutatedx()): - if self.converter is not None: - info = self.converter.axisinfo(self.units, self) - if info.default_limits is not None: - xmin, xmax = self.convert_units(info.default_limits) - self.axes.viewLim.intervalx = xmin, xmax - self.stale = True - - def get_tick_space(self): - ends = mtransforms.Bbox.unit().transformed( - self.axes.transAxes - self.figure.dpi_scale_trans) - length = ends.width * 72 - # There is a heuristic here that the aspect ratio of tick text - # is no more than 3:1 - size = self._get_tick_label_size('x') * 3 - if size > 0: - return int(np.floor(length / size)) - else: - return 2**31 - 1 - - -class YAxis(Axis): - __name__ = 'yaxis' - axis_name = 'y' #: Read-only name identifying the axis. - _tick_class = YTick - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self._init() - - def _init(self): - """ - Initialize the label and offsetText instance values and - `label_position` / `offset_text_position`. - """ - # x in display coords, y in axes coords (to be updated at draw time by - # _update_label_positions and _update_offset_text_position). - self.label.set( - x=0, y=0.5, - verticalalignment='bottom', horizontalalignment='center', - rotation='vertical', rotation_mode='anchor', - transform=mtransforms.blended_transform_factory( - mtransforms.IdentityTransform(), self.axes.transAxes), - ) - self.label_position = 'left' - - if mpl.rcParams['ytick.labelcolor'] == 'inherit': - tick_color = mpl.rcParams['ytick.color'] - else: - tick_color = mpl.rcParams['ytick.labelcolor'] - - # x in axes coords, y in display coords(!). - self.offsetText.set( - x=0, y=0.5, - verticalalignment='baseline', horizontalalignment='left', - transform=mtransforms.blended_transform_factory( - self.axes.transAxes, mtransforms.IdentityTransform()), - fontsize=mpl.rcParams['ytick.labelsize'], - color=tick_color - ) - self.offset_text_position = 'left' - - def contains(self, mouseevent): - # docstring inherited - if self._different_canvas(mouseevent): - return False, {} - x, y = mouseevent.x, mouseevent.y - try: - trans = self.axes.transAxes.inverted() - xaxes, yaxes = trans.transform((x, y)) - except ValueError: - return False, {} - (l, b), (r, t) = self.axes.transAxes.transform([(0, 0), (1, 1)]) - inaxis = 0 <= yaxes <= 1 and ( - l - self._pickradius < x < l or - r < x < r + self._pickradius) - return inaxis, {} - - def set_label_position(self, position): - """ - Set the label position (left or right) - - Parameters - ---------- - position : {'left', 'right'} - """ - self.label.set_rotation_mode('anchor') - self.label.set_verticalalignment(_api.check_getitem({ - 'left': 'bottom', 'right': 'top', - }, position=position)) - self.label_position = position - self.stale = True - - def _update_label_position(self, renderer): - """ - Update the label position based on the bounding box enclosing - all the ticklabels and axis spine - """ - if not self._autolabelpos: - return - - # get bounding boxes for this axis and any siblings - # that have been set by `fig.align_ylabels()` - bboxes, bboxes2 = self._get_tick_boxes_siblings(renderer=renderer) - x, y = self.label.get_position() - if self.label_position == 'left': - try: - spine = self.axes.spines['left'] - spinebbox = spine.get_window_extent() - except KeyError: - # use Axes if spine doesn't exist - spinebbox = self.axes.bbox - bbox = mtransforms.Bbox.union(bboxes + [spinebbox]) - left = bbox.x0 - self.label.set_position( - (left - self.labelpad * self.figure.dpi / 72, y) - ) - - else: - try: - spine = self.axes.spines['right'] - spinebbox = spine.get_window_extent() - except KeyError: - # use Axes if spine doesn't exist - spinebbox = self.axes.bbox - - bbox = mtransforms.Bbox.union(bboxes2 + [spinebbox]) - right = bbox.x1 - self.label.set_position( - (right + self.labelpad * self.figure.dpi / 72, y) - ) - - def _update_offset_text_position(self, bboxes, bboxes2): - """ - Update the offset_text position based on the sequence of bounding - boxes of all the ticklabels - """ - x, _ = self.offsetText.get_position() - if 'outline' in self.axes.spines: - # Special case for colorbars: - bbox = self.axes.spines['outline'].get_window_extent() - else: - bbox = self.axes.bbox - top = bbox.ymax - self.offsetText.set_position( - (x, top + self.OFFSETTEXTPAD * self.figure.dpi / 72) - ) - - def set_offset_position(self, position): - """ - Parameters - ---------- - position : {'left', 'right'} - """ - x, y = self.offsetText.get_position() - x = _api.check_getitem({'left': 0, 'right': 1}, position=position) - - self.offsetText.set_ha(position) - self.offsetText.set_position((x, y)) - self.stale = True - - def set_ticks_position(self, position): - """ - Set the ticks position. - - Parameters - ---------- - position : {'left', 'right', 'both', 'default', 'none'} - 'both' sets the ticks to appear on both positions, but does not - change the tick labels. 'default' resets the tick positions to - the default: ticks on both positions, labels at left. 'none' - can be used if you don't want any ticks. 'none' and 'both' - affect only the ticks, not the labels. - """ - if position == 'right': - self.set_tick_params(which='both', right=True, labelright=True, - left=False, labelleft=False) - self.set_offset_position(position) - elif position == 'left': - self.set_tick_params(which='both', right=False, labelright=False, - left=True, labelleft=True) - self.set_offset_position(position) - elif position == 'both': - self.set_tick_params(which='both', right=True, - left=True) - elif position == 'none': - self.set_tick_params(which='both', right=False, - left=False) - elif position == 'default': - self.set_tick_params(which='both', right=True, labelright=False, - left=True, labelleft=True) - else: - _api.check_in_list(['left', 'right', 'both', 'default', 'none'], - position=position) - self.stale = True - - def tick_right(self): - """ - Move ticks and ticklabels (if present) to the right of the Axes. - """ - label = True - if 'label1On' in self._major_tick_kw: - label = (self._major_tick_kw['label1On'] - or self._major_tick_kw['label2On']) - self.set_ticks_position('right') - # if labels were turned off before this was called - # leave them off - self.set_tick_params(which='both', labelright=label) - - def tick_left(self): - """ - Move ticks and ticklabels (if present) to the left of the Axes. - """ - label = True - if 'label1On' in self._major_tick_kw: - label = (self._major_tick_kw['label1On'] - or self._major_tick_kw['label2On']) - self.set_ticks_position('left') - # if labels were turned off before this was called - # leave them off - self.set_tick_params(which='both', labelleft=label) - - def get_ticks_position(self): - """ - Return the ticks position ("left", "right", "default", or "unknown"). - """ - return {1: "left", 2: "right", - "default": "default", "unknown": "unknown"}[ - self._get_ticks_position()] - - get_view_interval, set_view_interval = _make_getset_interval( - "view", "viewLim", "intervaly") - get_data_interval, set_data_interval = _make_getset_interval( - "data", "dataLim", "intervaly") - - def get_minpos(self): - return self.axes.dataLim.minposy - - def set_default_intervals(self): - # docstring inherited - # only change view if dataLim has not changed and user has - # not changed the view: - if (not self.axes.dataLim.mutatedy() and - not self.axes.viewLim.mutatedy()): - if self.converter is not None: - info = self.converter.axisinfo(self.units, self) - if info.default_limits is not None: - ymin, ymax = self.convert_units(info.default_limits) - self.axes.viewLim.intervaly = ymin, ymax - self.stale = True - - def get_tick_space(self): - ends = mtransforms.Bbox.unit().transformed( - self.axes.transAxes - self.figure.dpi_scale_trans) - length = ends.height * 72 - # Having a spacing of at least 2 just looks good. - size = self._get_tick_label_size('y') * 2 - if size > 0: - return int(np.floor(length / size)) - else: - return 2**31 - 1 diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backends/backend_wx.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backends/backend_wx.py deleted file mode 100644 index 218be89476951a3c632d69d8f70f3c1269805d41..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backends/backend_wx.py +++ /dev/null @@ -1,1332 +0,0 @@ -""" -A wxPython backend for matplotlib. - -Originally contributed by Jeremy O'Donoghue (jeremy@o-donoghue.com) and John -Hunter (jdhunter@ace.bsd.uchicago.edu). - -Copyright (C) Jeremy O'Donoghue & John Hunter, 2003-4. -""" - -import functools -import logging -import math -import pathlib -import sys -import weakref - -import numpy as np -import PIL.Image - -import matplotlib as mpl -from matplotlib.backend_bases import ( - _Backend, FigureCanvasBase, FigureManagerBase, - GraphicsContextBase, MouseButton, NavigationToolbar2, RendererBase, - TimerBase, ToolContainerBase, cursors, - CloseEvent, KeyEvent, LocationEvent, MouseEvent, ResizeEvent) - -from matplotlib import _api, cbook, backend_tools -from matplotlib._pylab_helpers import Gcf -from matplotlib.path import Path -from matplotlib.transforms import Affine2D - -import wx - -_log = logging.getLogger(__name__) - -# the True dots per inch on the screen; should be display dependent; see -# http://groups.google.com/d/msg/comp.lang.postscript/-/omHAc9FEuAsJ?hl=en -# for some info about screen dpi -PIXELS_PER_INCH = 75 - - -# lru_cache holds a reference to the App and prevents it from being gc'ed. -@functools.lru_cache(1) -def _create_wxapp(): - wxapp = wx.App(False) - wxapp.SetExitOnFrameDelete(True) - cbook._setup_new_guiapp() - return wxapp - - -class TimerWx(TimerBase): - """Subclass of `.TimerBase` using wx.Timer events.""" - - def __init__(self, *args, **kwargs): - self._timer = wx.Timer() - self._timer.Notify = self._on_timer - super().__init__(*args, **kwargs) - - def _timer_start(self): - self._timer.Start(self._interval, self._single) - - def _timer_stop(self): - self._timer.Stop() - - def _timer_set_interval(self): - if self._timer.IsRunning(): - self._timer_start() # Restart with new interval. - - -@_api.deprecated( - "2.0", name="wx", obj_type="backend", removal="the future", - alternative="wxagg", - addendum="See the Matplotlib usage FAQ for more info on backends.") -class RendererWx(RendererBase): - """ - The renderer handles all the drawing primitives using a graphics - context instance that controls the colors/styles. It acts as the - 'renderer' instance used by many classes in the hierarchy. - """ - # In wxPython, drawing is performed on a wxDC instance, which will - # generally be mapped to the client area of the window displaying - # the plot. Under wxPython, the wxDC instance has a wx.Pen which - # describes the colour and weight of any lines drawn, and a wxBrush - # which describes the fill colour of any closed polygon. - - # Font styles, families and weight. - fontweights = { - 100: wx.FONTWEIGHT_LIGHT, - 200: wx.FONTWEIGHT_LIGHT, - 300: wx.FONTWEIGHT_LIGHT, - 400: wx.FONTWEIGHT_NORMAL, - 500: wx.FONTWEIGHT_NORMAL, - 600: wx.FONTWEIGHT_NORMAL, - 700: wx.FONTWEIGHT_BOLD, - 800: wx.FONTWEIGHT_BOLD, - 900: wx.FONTWEIGHT_BOLD, - 'ultralight': wx.FONTWEIGHT_LIGHT, - 'light': wx.FONTWEIGHT_LIGHT, - 'normal': wx.FONTWEIGHT_NORMAL, - 'medium': wx.FONTWEIGHT_NORMAL, - 'semibold': wx.FONTWEIGHT_NORMAL, - 'bold': wx.FONTWEIGHT_BOLD, - 'heavy': wx.FONTWEIGHT_BOLD, - 'ultrabold': wx.FONTWEIGHT_BOLD, - 'black': wx.FONTWEIGHT_BOLD, - } - fontangles = { - 'italic': wx.FONTSTYLE_ITALIC, - 'normal': wx.FONTSTYLE_NORMAL, - 'oblique': wx.FONTSTYLE_SLANT, - } - - # wxPython allows for portable font styles, choosing them appropriately for - # the target platform. Map some standard font names to the portable styles. - # QUESTION: Is it wise to agree to standard fontnames across all backends? - fontnames = { - 'Sans': wx.FONTFAMILY_SWISS, - 'Roman': wx.FONTFAMILY_ROMAN, - 'Script': wx.FONTFAMILY_SCRIPT, - 'Decorative': wx.FONTFAMILY_DECORATIVE, - 'Modern': wx.FONTFAMILY_MODERN, - 'Courier': wx.FONTFAMILY_MODERN, - 'courier': wx.FONTFAMILY_MODERN, - } - - def __init__(self, bitmap, dpi): - """Initialise a wxWindows renderer instance.""" - super().__init__() - _log.debug("%s - __init__()", type(self)) - self.width = bitmap.GetWidth() - self.height = bitmap.GetHeight() - self.bitmap = bitmap - self.fontd = {} - self.dpi = dpi - self.gc = None - - def flipy(self): - # docstring inherited - return True - - def get_text_width_height_descent(self, s, prop, ismath): - # docstring inherited - - if ismath: - s = cbook.strip_math(s) - - if self.gc is None: - gc = self.new_gc() - else: - gc = self.gc - gfx_ctx = gc.gfx_ctx - font = self.get_wx_font(s, prop) - gfx_ctx.SetFont(font, wx.BLACK) - w, h, descent, leading = gfx_ctx.GetFullTextExtent(s) - - return w, h, descent - - def get_canvas_width_height(self): - # docstring inherited - return self.width, self.height - - def handle_clip_rectangle(self, gc): - new_bounds = gc.get_clip_rectangle() - if new_bounds is not None: - new_bounds = new_bounds.bounds - gfx_ctx = gc.gfx_ctx - if gfx_ctx._lastcliprect != new_bounds: - gfx_ctx._lastcliprect = new_bounds - if new_bounds is None: - gfx_ctx.ResetClip() - else: - gfx_ctx.Clip(new_bounds[0], - self.height - new_bounds[1] - new_bounds[3], - new_bounds[2], new_bounds[3]) - - @staticmethod - def convert_path(gfx_ctx, path, transform): - wxpath = gfx_ctx.CreatePath() - for points, code in path.iter_segments(transform): - if code == Path.MOVETO: - wxpath.MoveToPoint(*points) - elif code == Path.LINETO: - wxpath.AddLineToPoint(*points) - elif code == Path.CURVE3: - wxpath.AddQuadCurveToPoint(*points) - elif code == Path.CURVE4: - wxpath.AddCurveToPoint(*points) - elif code == Path.CLOSEPOLY: - wxpath.CloseSubpath() - return wxpath - - def draw_path(self, gc, path, transform, rgbFace=None): - # docstring inherited - gc.select() - self.handle_clip_rectangle(gc) - gfx_ctx = gc.gfx_ctx - transform = transform + \ - Affine2D().scale(1.0, -1.0).translate(0.0, self.height) - wxpath = self.convert_path(gfx_ctx, path, transform) - if rgbFace is not None: - gfx_ctx.SetBrush(wx.Brush(gc.get_wxcolour(rgbFace))) - gfx_ctx.DrawPath(wxpath) - else: - gfx_ctx.StrokePath(wxpath) - gc.unselect() - - def draw_image(self, gc, x, y, im): - bbox = gc.get_clip_rectangle() - if bbox is not None: - l, b, w, h = bbox.bounds - else: - l = 0 - b = 0 - w = self.width - h = self.height - rows, cols = im.shape[:2] - bitmap = wx.Bitmap.FromBufferRGBA(cols, rows, im.tobytes()) - gc.select() - gc.gfx_ctx.DrawBitmap(bitmap, int(l), int(self.height - b), - int(w), int(-h)) - gc.unselect() - - def draw_text(self, gc, x, y, s, prop, angle, ismath=False, mtext=None): - # docstring inherited - - if ismath: - s = cbook.strip_math(s) - _log.debug("%s - draw_text()", type(self)) - gc.select() - self.handle_clip_rectangle(gc) - gfx_ctx = gc.gfx_ctx - - font = self.get_wx_font(s, prop) - color = gc.get_wxcolour(gc.get_rgb()) - gfx_ctx.SetFont(font, color) - - w, h, d = self.get_text_width_height_descent(s, prop, ismath) - x = int(x) - y = int(y - h) - - if angle == 0.0: - gfx_ctx.DrawText(s, x, y) - else: - rads = math.radians(angle) - xo = h * math.sin(rads) - yo = h * math.cos(rads) - gfx_ctx.DrawRotatedText(s, x - xo, y - yo, rads) - - gc.unselect() - - def new_gc(self): - # docstring inherited - _log.debug("%s - new_gc()", type(self)) - self.gc = GraphicsContextWx(self.bitmap, self) - self.gc.select() - self.gc.unselect() - return self.gc - - def get_wx_font(self, s, prop): - """Return a wx font. Cache font instances for efficiency.""" - _log.debug("%s - get_wx_font()", type(self)) - key = hash(prop) - font = self.fontd.get(key) - if font is not None: - return font - size = self.points_to_pixels(prop.get_size_in_points()) - # Font colour is determined by the active wx.Pen - # TODO: It may be wise to cache font information - self.fontd[key] = font = wx.Font( # Cache the font and gc. - pointSize=round(size), - family=self.fontnames.get(prop.get_name(), wx.ROMAN), - style=self.fontangles[prop.get_style()], - weight=self.fontweights[prop.get_weight()]) - return font - - def points_to_pixels(self, points): - # docstring inherited - return points * (PIXELS_PER_INCH / 72.0 * self.dpi / 72.0) - - -class GraphicsContextWx(GraphicsContextBase): - """ - The graphics context provides the color, line styles, etc. - - This class stores a reference to a wxMemoryDC, and a - wxGraphicsContext that draws to it. Creating a wxGraphicsContext - seems to be fairly heavy, so these objects are cached based on the - bitmap object that is passed in. - - The base GraphicsContext stores colors as an RGB tuple on the unit - interval, e.g., (0.5, 0.0, 1.0). wxPython uses an int interval, but - since wxPython colour management is rather simple, I have not chosen - to implement a separate colour manager class. - """ - _capd = {'butt': wx.CAP_BUTT, - 'projecting': wx.CAP_PROJECTING, - 'round': wx.CAP_ROUND} - - _joind = {'bevel': wx.JOIN_BEVEL, - 'miter': wx.JOIN_MITER, - 'round': wx.JOIN_ROUND} - - _cache = weakref.WeakKeyDictionary() - - def __init__(self, bitmap, renderer): - super().__init__() - # assert self.Ok(), "wxMemoryDC not OK to use" - _log.debug("%s - __init__(): %s", type(self), bitmap) - - dc, gfx_ctx = self._cache.get(bitmap, (None, None)) - if dc is None: - dc = wx.MemoryDC(bitmap) - gfx_ctx = wx.GraphicsContext.Create(dc) - gfx_ctx._lastcliprect = None - self._cache[bitmap] = dc, gfx_ctx - - self.bitmap = bitmap - self.dc = dc - self.gfx_ctx = gfx_ctx - self._pen = wx.Pen('BLACK', 1, wx.SOLID) - gfx_ctx.SetPen(self._pen) - self.renderer = renderer - - def select(self): - """Select the current bitmap into this wxDC instance.""" - if sys.platform == 'win32': - self.dc.SelectObject(self.bitmap) - self.IsSelected = True - - def unselect(self): - """Select a Null bitmap into this wxDC instance.""" - if sys.platform == 'win32': - self.dc.SelectObject(wx.NullBitmap) - self.IsSelected = False - - def set_foreground(self, fg, isRGBA=None): - # docstring inherited - # Implementation note: wxPython has a separate concept of pen and - # brush - the brush fills any outline trace left by the pen. - # Here we set both to the same colour - if a figure is not to be - # filled, the renderer will set the brush to be transparent - # Same goes for text foreground... - _log.debug("%s - set_foreground()", type(self)) - self.select() - super().set_foreground(fg, isRGBA) - - self._pen.SetColour(self.get_wxcolour(self.get_rgb())) - self.gfx_ctx.SetPen(self._pen) - self.unselect() - - def set_linewidth(self, w): - # docstring inherited - w = float(w) - _log.debug("%s - set_linewidth()", type(self)) - self.select() - if 0 < w < 1: - w = 1 - super().set_linewidth(w) - lw = int(self.renderer.points_to_pixels(self._linewidth)) - if lw == 0: - lw = 1 - self._pen.SetWidth(lw) - self.gfx_ctx.SetPen(self._pen) - self.unselect() - - def set_capstyle(self, cs): - # docstring inherited - _log.debug("%s - set_capstyle()", type(self)) - self.select() - super().set_capstyle(cs) - self._pen.SetCap(GraphicsContextWx._capd[self._capstyle]) - self.gfx_ctx.SetPen(self._pen) - self.unselect() - - def set_joinstyle(self, js): - # docstring inherited - _log.debug("%s - set_joinstyle()", type(self)) - self.select() - super().set_joinstyle(js) - self._pen.SetJoin(GraphicsContextWx._joind[self._joinstyle]) - self.gfx_ctx.SetPen(self._pen) - self.unselect() - - def get_wxcolour(self, color): - """Convert an RGB(A) color to a wx.Colour.""" - _log.debug("%s - get_wx_color()", type(self)) - return wx.Colour(*[int(255 * x) for x in color]) - - -class _FigureCanvasWxBase(FigureCanvasBase, wx.Panel): - """ - The FigureCanvas contains the figure and does event handling. - - In the wxPython backend, it is derived from wxPanel, and (usually) lives - inside a frame instantiated by a FigureManagerWx. The parent window - probably implements a wx.Sizer to control the displayed control size - but - we give a hint as to our preferred minimum size. - """ - - required_interactive_framework = "wx" - _timer_cls = TimerWx - manager_class = _api.classproperty(lambda cls: FigureManagerWx) - - keyvald = { - wx.WXK_CONTROL: 'control', - wx.WXK_SHIFT: 'shift', - wx.WXK_ALT: 'alt', - wx.WXK_CAPITAL: 'caps_lock', - wx.WXK_LEFT: 'left', - wx.WXK_UP: 'up', - wx.WXK_RIGHT: 'right', - wx.WXK_DOWN: 'down', - wx.WXK_ESCAPE: 'escape', - wx.WXK_F1: 'f1', - wx.WXK_F2: 'f2', - wx.WXK_F3: 'f3', - wx.WXK_F4: 'f4', - wx.WXK_F5: 'f5', - wx.WXK_F6: 'f6', - wx.WXK_F7: 'f7', - wx.WXK_F8: 'f8', - wx.WXK_F9: 'f9', - wx.WXK_F10: 'f10', - wx.WXK_F11: 'f11', - wx.WXK_F12: 'f12', - wx.WXK_SCROLL: 'scroll_lock', - wx.WXK_PAUSE: 'break', - wx.WXK_BACK: 'backspace', - wx.WXK_RETURN: 'enter', - wx.WXK_INSERT: 'insert', - wx.WXK_DELETE: 'delete', - wx.WXK_HOME: 'home', - wx.WXK_END: 'end', - wx.WXK_PAGEUP: 'pageup', - wx.WXK_PAGEDOWN: 'pagedown', - wx.WXK_NUMPAD0: '0', - wx.WXK_NUMPAD1: '1', - wx.WXK_NUMPAD2: '2', - wx.WXK_NUMPAD3: '3', - wx.WXK_NUMPAD4: '4', - wx.WXK_NUMPAD5: '5', - wx.WXK_NUMPAD6: '6', - wx.WXK_NUMPAD7: '7', - wx.WXK_NUMPAD8: '8', - wx.WXK_NUMPAD9: '9', - wx.WXK_NUMPAD_ADD: '+', - wx.WXK_NUMPAD_SUBTRACT: '-', - wx.WXK_NUMPAD_MULTIPLY: '*', - wx.WXK_NUMPAD_DIVIDE: '/', - wx.WXK_NUMPAD_DECIMAL: 'dec', - wx.WXK_NUMPAD_ENTER: 'enter', - wx.WXK_NUMPAD_UP: 'up', - wx.WXK_NUMPAD_RIGHT: 'right', - wx.WXK_NUMPAD_DOWN: 'down', - wx.WXK_NUMPAD_LEFT: 'left', - wx.WXK_NUMPAD_PAGEUP: 'pageup', - wx.WXK_NUMPAD_PAGEDOWN: 'pagedown', - wx.WXK_NUMPAD_HOME: 'home', - wx.WXK_NUMPAD_END: 'end', - wx.WXK_NUMPAD_INSERT: 'insert', - wx.WXK_NUMPAD_DELETE: 'delete', - } - - def __init__(self, parent, id, figure=None): - """ - Initialize a FigureWx instance. - - - Initialize the FigureCanvasBase and wxPanel parents. - - Set event handlers for resize, paint, and keyboard and mouse - interaction. - """ - - FigureCanvasBase.__init__(self, figure) - w, h = map(math.ceil, self.figure.bbox.size) - # Set preferred window size hint - helps the sizer, if one is connected - wx.Panel.__init__(self, parent, id, size=wx.Size(w, h)) - # Create the drawing bitmap - self.bitmap = wx.Bitmap(w, h) - _log.debug("%s - __init__() - bitmap w:%d h:%d", type(self), w, h) - self._isDrawn = False - self._rubberband_rect = None - self._rubberband_pen_black = wx.Pen('BLACK', 1, wx.PENSTYLE_SHORT_DASH) - self._rubberband_pen_white = wx.Pen('WHITE', 1, wx.PENSTYLE_SOLID) - - self.Bind(wx.EVT_SIZE, self._on_size) - self.Bind(wx.EVT_PAINT, self._on_paint) - self.Bind(wx.EVT_CHAR_HOOK, self._on_key_down) - self.Bind(wx.EVT_KEY_UP, self._on_key_up) - self.Bind(wx.EVT_LEFT_DOWN, self._on_mouse_button) - self.Bind(wx.EVT_LEFT_DCLICK, self._on_mouse_button) - self.Bind(wx.EVT_LEFT_UP, self._on_mouse_button) - self.Bind(wx.EVT_MIDDLE_DOWN, self._on_mouse_button) - self.Bind(wx.EVT_MIDDLE_DCLICK, self._on_mouse_button) - self.Bind(wx.EVT_MIDDLE_UP, self._on_mouse_button) - self.Bind(wx.EVT_RIGHT_DOWN, self._on_mouse_button) - self.Bind(wx.EVT_RIGHT_DCLICK, self._on_mouse_button) - self.Bind(wx.EVT_RIGHT_UP, self._on_mouse_button) - self.Bind(wx.EVT_MOUSE_AUX1_DOWN, self._on_mouse_button) - self.Bind(wx.EVT_MOUSE_AUX1_UP, self._on_mouse_button) - self.Bind(wx.EVT_MOUSE_AUX2_DOWN, self._on_mouse_button) - self.Bind(wx.EVT_MOUSE_AUX2_UP, self._on_mouse_button) - self.Bind(wx.EVT_MOUSE_AUX1_DCLICK, self._on_mouse_button) - self.Bind(wx.EVT_MOUSE_AUX2_DCLICK, self._on_mouse_button) - self.Bind(wx.EVT_MOUSEWHEEL, self._on_mouse_wheel) - self.Bind(wx.EVT_MOTION, self._on_motion) - self.Bind(wx.EVT_ENTER_WINDOW, self._on_enter) - self.Bind(wx.EVT_LEAVE_WINDOW, self._on_leave) - - self.Bind(wx.EVT_MOUSE_CAPTURE_CHANGED, self._on_capture_lost) - self.Bind(wx.EVT_MOUSE_CAPTURE_LOST, self._on_capture_lost) - - self.SetBackgroundStyle(wx.BG_STYLE_PAINT) # Reduce flicker. - self.SetBackgroundColour(wx.WHITE) - - def Copy_to_Clipboard(self, event=None): - """Copy bitmap of canvas to system clipboard.""" - bmp_obj = wx.BitmapDataObject() - bmp_obj.SetBitmap(self.bitmap) - - if not wx.TheClipboard.IsOpened(): - open_success = wx.TheClipboard.Open() - if open_success: - wx.TheClipboard.SetData(bmp_obj) - wx.TheClipboard.Flush() - wx.TheClipboard.Close() - - def draw_idle(self): - # docstring inherited - _log.debug("%s - draw_idle()", type(self)) - self._isDrawn = False # Force redraw - # Triggering a paint event is all that is needed to defer drawing - # until later. The platform will send the event when it thinks it is - # a good time (usually as soon as there are no other events pending). - self.Refresh(eraseBackground=False) - - def flush_events(self): - # docstring inherited - wx.Yield() - - def start_event_loop(self, timeout=0): - # docstring inherited - if hasattr(self, '_event_loop'): - raise RuntimeError("Event loop already running") - timer = wx.Timer(self, id=wx.ID_ANY) - if timeout > 0: - timer.Start(int(timeout * 1000), oneShot=True) - self.Bind(wx.EVT_TIMER, self.stop_event_loop, id=timer.GetId()) - # Event loop handler for start/stop event loop - self._event_loop = wx.GUIEventLoop() - self._event_loop.Run() - timer.Stop() - - def stop_event_loop(self, event=None): - # docstring inherited - if hasattr(self, '_event_loop'): - if self._event_loop.IsRunning(): - self._event_loop.Exit() - del self._event_loop - - def _get_imagesave_wildcards(self): - """Return the wildcard string for the filesave dialog.""" - default_filetype = self.get_default_filetype() - filetypes = self.get_supported_filetypes_grouped() - sorted_filetypes = sorted(filetypes.items()) - wildcards = [] - extensions = [] - filter_index = 0 - for i, (name, exts) in enumerate(sorted_filetypes): - ext_list = ';'.join(['*.%s' % ext for ext in exts]) - extensions.append(exts[0]) - wildcard = f'{name} ({ext_list})|{ext_list}' - if default_filetype in exts: - filter_index = i - wildcards.append(wildcard) - wildcards = '|'.join(wildcards) - return wildcards, extensions, filter_index - - def gui_repaint(self, drawDC=None): - """ - Update the displayed image on the GUI canvas, using the supplied - wx.PaintDC device context. - """ - _log.debug("%s - gui_repaint()", type(self)) - # The "if self" check avoids a "wrapped C/C++ object has been deleted" - # RuntimeError if doing things after window is closed. - if not (self and self.IsShownOnScreen()): - return - if not drawDC: # not called from OnPaint use a ClientDC - drawDC = wx.ClientDC(self) - # For 'WX' backend on Windows, the bitmap cannot be in use by another - # DC (see GraphicsContextWx._cache). - bmp = (self.bitmap.ConvertToImage().ConvertToBitmap() - if wx.Platform == '__WXMSW__' - and isinstance(self.figure.canvas.get_renderer(), RendererWx) - else self.bitmap) - drawDC.DrawBitmap(bmp, 0, 0) - if self._rubberband_rect is not None: - # Some versions of wx+python don't support numpy.float64 here. - x0, y0, x1, y1 = map(round, self._rubberband_rect) - rect = [(x0, y0, x1, y0), (x1, y0, x1, y1), - (x0, y0, x0, y1), (x0, y1, x1, y1)] - drawDC.DrawLineList(rect, self._rubberband_pen_white) - drawDC.DrawLineList(rect, self._rubberband_pen_black) - - filetypes = { - **FigureCanvasBase.filetypes, - 'bmp': 'Windows bitmap', - 'jpeg': 'JPEG', - 'jpg': 'JPEG', - 'pcx': 'PCX', - 'png': 'Portable Network Graphics', - 'tif': 'Tagged Image Format File', - 'tiff': 'Tagged Image Format File', - 'xpm': 'X pixmap', - } - - def _on_paint(self, event): - """Called when wxPaintEvt is generated.""" - _log.debug("%s - _on_paint()", type(self)) - drawDC = wx.PaintDC(self) - if not self._isDrawn: - self.draw(drawDC=drawDC) - else: - self.gui_repaint(drawDC=drawDC) - drawDC.Destroy() - - def _on_size(self, event): - """ - Called when wxEventSize is generated. - - In this application we attempt to resize to fit the window, so it - is better to take the performance hit and redraw the whole window. - """ - - _log.debug("%s - _on_size()", type(self)) - sz = self.GetParent().GetSizer() - if sz: - si = sz.GetItem(self) - if sz and si and not si.Proportion and not si.Flag & wx.EXPAND: - # managed by a sizer, but with a fixed size - size = self.GetMinSize() - else: - # variable size - size = self.GetClientSize() - # Do not allow size to become smaller than MinSize - size.IncTo(self.GetMinSize()) - if getattr(self, "_width", None): - if size == (self._width, self._height): - # no change in size - return - self._width, self._height = size - self._isDrawn = False - - if self._width <= 1 or self._height <= 1: - return # Empty figure - - # Create a new, correctly sized bitmap - self.bitmap = wx.Bitmap(self._width, self._height) - - dpival = self.figure.dpi - winch = self._width / dpival - hinch = self._height / dpival - self.figure.set_size_inches(winch, hinch, forward=False) - - # Rendering will happen on the associated paint event - # so no need to do anything here except to make sure - # the whole background is repainted. - self.Refresh(eraseBackground=False) - ResizeEvent("resize_event", self)._process() - self.draw_idle() - - @staticmethod - def _mpl_modifiers(event=None, *, exclude=None): - mod_table = [ - ("ctrl", wx.MOD_CONTROL, wx.WXK_CONTROL), - ("alt", wx.MOD_ALT, wx.WXK_ALT), - ("shift", wx.MOD_SHIFT, wx.WXK_SHIFT), - ] - if event is not None: - modifiers = event.GetModifiers() - return [name for name, mod, key in mod_table - if modifiers & mod and exclude != key] - else: - return [name for name, mod, key in mod_table - if wx.GetKeyState(key)] - - def _get_key(self, event): - keyval = event.KeyCode - if keyval in self.keyvald: - key = self.keyvald[keyval] - elif keyval < 256: - key = chr(keyval) - # wx always returns an uppercase, so make it lowercase if the shift - # key is not depressed (NOTE: this will not handle Caps Lock) - if not event.ShiftDown(): - key = key.lower() - else: - return None - mods = self._mpl_modifiers(event, exclude=keyval) - if "shift" in mods and key.isupper(): - mods.remove("shift") - return "+".join([*mods, key]) - - def _mpl_coords(self, pos=None): - """ - Convert a wx position, defaulting to the current cursor position, to - Matplotlib coordinates. - """ - if pos is None: - pos = wx.GetMouseState() - x, y = self.ScreenToClient(pos.X, pos.Y) - else: - x, y = pos.X, pos.Y - # flip y so y=0 is bottom of canvas - return x, self.figure.bbox.height - y - - def _on_key_down(self, event): - """Capture key press.""" - KeyEvent("key_press_event", self, - self._get_key(event), *self._mpl_coords(), - guiEvent=event)._process() - if self: - event.Skip() - - def _on_key_up(self, event): - """Release key.""" - KeyEvent("key_release_event", self, - self._get_key(event), *self._mpl_coords(), - guiEvent=event)._process() - if self: - event.Skip() - - def set_cursor(self, cursor): - # docstring inherited - cursor = wx.Cursor(_api.check_getitem({ - cursors.MOVE: wx.CURSOR_HAND, - cursors.HAND: wx.CURSOR_HAND, - cursors.POINTER: wx.CURSOR_ARROW, - cursors.SELECT_REGION: wx.CURSOR_CROSS, - cursors.WAIT: wx.CURSOR_WAIT, - cursors.RESIZE_HORIZONTAL: wx.CURSOR_SIZEWE, - cursors.RESIZE_VERTICAL: wx.CURSOR_SIZENS, - }, cursor=cursor)) - self.SetCursor(cursor) - self.Refresh() - - def _set_capture(self, capture=True): - """Control wx mouse capture.""" - if self.HasCapture(): - self.ReleaseMouse() - if capture: - self.CaptureMouse() - - def _on_capture_lost(self, event): - """Capture changed or lost""" - self._set_capture(False) - - def _on_mouse_button(self, event): - """Start measuring on an axis.""" - event.Skip() - self._set_capture(event.ButtonDown() or event.ButtonDClick()) - x, y = self._mpl_coords(event) - button_map = { - wx.MOUSE_BTN_LEFT: MouseButton.LEFT, - wx.MOUSE_BTN_MIDDLE: MouseButton.MIDDLE, - wx.MOUSE_BTN_RIGHT: MouseButton.RIGHT, - wx.MOUSE_BTN_AUX1: MouseButton.BACK, - wx.MOUSE_BTN_AUX2: MouseButton.FORWARD, - } - button = event.GetButton() - button = button_map.get(button, button) - modifiers = self._mpl_modifiers(event) - if event.ButtonDown(): - MouseEvent("button_press_event", self, x, y, button, - modifiers=modifiers, guiEvent=event)._process() - elif event.ButtonDClick(): - MouseEvent("button_press_event", self, x, y, button, - dblclick=True, modifiers=modifiers, - guiEvent=event)._process() - elif event.ButtonUp(): - MouseEvent("button_release_event", self, x, y, button, - modifiers=modifiers, guiEvent=event)._process() - - def _on_mouse_wheel(self, event): - """Translate mouse wheel events into matplotlib events""" - x, y = self._mpl_coords(event) - # Convert delta/rotation/rate into a floating point step size - step = event.LinesPerAction * event.WheelRotation / event.WheelDelta - # Done handling event - event.Skip() - # Mac gives two events for every wheel event; skip every second one. - if wx.Platform == '__WXMAC__': - if not hasattr(self, '_skipwheelevent'): - self._skipwheelevent = True - elif self._skipwheelevent: - self._skipwheelevent = False - return # Return without processing event - else: - self._skipwheelevent = True - MouseEvent("scroll_event", self, x, y, step=step, - modifiers=self._mpl_modifiers(event), - guiEvent=event)._process() - - def _on_motion(self, event): - """Start measuring on an axis.""" - event.Skip() - MouseEvent("motion_notify_event", self, - *self._mpl_coords(event), - modifiers=self._mpl_modifiers(event), - guiEvent=event)._process() - - def _on_enter(self, event): - """Mouse has entered the window.""" - event.Skip() - LocationEvent("figure_enter_event", self, - *self._mpl_coords(event), - modifiers=self._mpl_modifiers(), - guiEvent=event)._process() - - def _on_leave(self, event): - """Mouse has left the window.""" - event.Skip() - LocationEvent("figure_leave_event", self, - *self._mpl_coords(event), - modifiers=self._mpl_modifiers(), - guiEvent=event)._process() - - -class FigureCanvasWx(_FigureCanvasWxBase): - # Rendering to a Wx canvas using the deprecated Wx renderer. - - def draw(self, drawDC=None): - """ - Render the figure using RendererWx instance renderer, or using a - previously defined renderer if none is specified. - """ - _log.debug("%s - draw()", type(self)) - self.renderer = RendererWx(self.bitmap, self.figure.dpi) - self.figure.draw(self.renderer) - self._isDrawn = True - self.gui_repaint(drawDC=drawDC) - - def _print_image(self, filetype, filename): - bitmap = wx.Bitmap(math.ceil(self.figure.bbox.width), - math.ceil(self.figure.bbox.height)) - self.figure.draw(RendererWx(bitmap, self.figure.dpi)) - saved_obj = (bitmap.ConvertToImage() - if cbook.is_writable_file_like(filename) - else bitmap) - if not saved_obj.SaveFile(filename, filetype): - raise RuntimeError(f'Could not save figure to {filename}') - # draw() is required here since bits of state about the last renderer - # are strewn about the artist draw methods. Do not remove the draw - # without first verifying that these have been cleaned up. The artist - # contains() methods will fail otherwise. - if self._isDrawn: - self.draw() - # The "if self" check avoids a "wrapped C/C++ object has been deleted" - # RuntimeError if doing things after window is closed. - if self: - self.Refresh() - - print_bmp = functools.partialmethod( - _print_image, wx.BITMAP_TYPE_BMP) - print_jpeg = print_jpg = functools.partialmethod( - _print_image, wx.BITMAP_TYPE_JPEG) - print_pcx = functools.partialmethod( - _print_image, wx.BITMAP_TYPE_PCX) - print_png = functools.partialmethod( - _print_image, wx.BITMAP_TYPE_PNG) - print_tiff = print_tif = functools.partialmethod( - _print_image, wx.BITMAP_TYPE_TIF) - print_xpm = functools.partialmethod( - _print_image, wx.BITMAP_TYPE_XPM) - - -class FigureFrameWx(wx.Frame): - def __init__(self, num, fig, *, canvas_class): - # On non-Windows platform, explicitly set the position - fix - # positioning bug on some Linux platforms - if wx.Platform == '__WXMSW__': - pos = wx.DefaultPosition - else: - pos = wx.Point(20, 20) - super().__init__(parent=None, id=-1, pos=pos) - # Frame will be sized later by the Fit method - _log.debug("%s - __init__()", type(self)) - _set_frame_icon(self) - - self.canvas = canvas_class(self, -1, fig) - # Auto-attaches itself to self.canvas.manager - manager = FigureManagerWx(self.canvas, num, self) - - toolbar = self.canvas.manager.toolbar - if toolbar is not None: - self.SetToolBar(toolbar) - - # On Windows, canvas sizing must occur after toolbar addition; - # otherwise the toolbar further resizes the canvas. - w, h = map(math.ceil, fig.bbox.size) - self.canvas.SetInitialSize(wx.Size(w, h)) - self.canvas.SetMinSize((2, 2)) - self.canvas.SetFocus() - - self.Fit() - - self.Bind(wx.EVT_CLOSE, self._on_close) - - def _on_close(self, event): - _log.debug("%s - on_close()", type(self)) - CloseEvent("close_event", self.canvas)._process() - self.canvas.stop_event_loop() - # set FigureManagerWx.frame to None to prevent repeated attempts to - # close this frame from FigureManagerWx.destroy() - self.canvas.manager.frame = None - # remove figure manager from Gcf.figs - Gcf.destroy(self.canvas.manager) - try: # See issue 2941338. - self.canvas.mpl_disconnect(self.canvas.toolbar._id_drag) - except AttributeError: # If there's no toolbar. - pass - # Carry on with close event propagation, frame & children destruction - event.Skip() - - -class FigureManagerWx(FigureManagerBase): - """ - Container/controller for the FigureCanvas and GUI frame. - - It is instantiated by Gcf whenever a new figure is created. Gcf is - responsible for managing multiple instances of FigureManagerWx. - - Attributes - ---------- - canvas : `FigureCanvas` - a FigureCanvasWx(wx.Panel) instance - window : wxFrame - a wxFrame instance - wxpython.org/Phoenix/docs/html/Frame.html - """ - - def __init__(self, canvas, num, frame): - _log.debug("%s - __init__()", type(self)) - self.frame = self.window = frame - super().__init__(canvas, num) - - @classmethod - def create_with_canvas(cls, canvas_class, figure, num): - # docstring inherited - wxapp = wx.GetApp() or _create_wxapp() - frame = FigureFrameWx(num, figure, canvas_class=canvas_class) - manager = figure.canvas.manager - if mpl.is_interactive(): - manager.frame.Show() - figure.canvas.draw_idle() - return manager - - @classmethod - def start_main_loop(cls): - if not wx.App.IsMainLoopRunning(): - wxapp = wx.GetApp() - if wxapp is not None: - wxapp.MainLoop() - - def show(self): - # docstring inherited - self.frame.Show() - self.canvas.draw() - if mpl.rcParams['figure.raise_window']: - self.frame.Raise() - - def destroy(self, *args): - # docstring inherited - _log.debug("%s - destroy()", type(self)) - frame = self.frame - if frame: # Else, may have been already deleted, e.g. when closing. - # As this can be called from non-GUI thread from plt.close use - # wx.CallAfter to ensure thread safety. - wx.CallAfter(frame.Close) - - def full_screen_toggle(self): - # docstring inherited - self.frame.ShowFullScreen(not self.frame.IsFullScreen()) - - def get_window_title(self): - # docstring inherited - return self.window.GetTitle() - - def set_window_title(self, title): - # docstring inherited - self.window.SetTitle(title) - - def resize(self, width, height): - # docstring inherited - # Directly using SetClientSize doesn't handle the toolbar on Windows. - self.window.SetSize(self.window.ClientToWindowSize(wx.Size( - math.ceil(width), math.ceil(height)))) - - -def _load_bitmap(filename): - """ - Load a wx.Bitmap from a file in the "images" directory of the Matplotlib - data. - """ - return wx.Bitmap(str(cbook._get_data_path('images', filename))) - - -def _set_frame_icon(frame): - bundle = wx.IconBundle() - for image in ('matplotlib.png', 'matplotlib_large.png'): - icon = wx.Icon(_load_bitmap(image)) - if not icon.IsOk(): - return - bundle.AddIcon(icon) - frame.SetIcons(bundle) - - -class NavigationToolbar2Wx(NavigationToolbar2, wx.ToolBar): - def __init__(self, canvas, coordinates=True, *, style=wx.TB_BOTTOM): - wx.ToolBar.__init__(self, canvas.GetParent(), -1, style=style) - - if 'wxMac' in wx.PlatformInfo: - self.SetToolBitmapSize((24, 24)) - self.wx_ids = {} - for text, tooltip_text, image_file, callback in self.toolitems: - if text is None: - self.AddSeparator() - continue - self.wx_ids[text] = ( - self.AddTool( - -1, - bitmap=self._icon(f"{image_file}.png"), - bmpDisabled=wx.NullBitmap, - label=text, shortHelp=tooltip_text, - kind=(wx.ITEM_CHECK if text in ["Pan", "Zoom"] - else wx.ITEM_NORMAL)) - .Id) - self.Bind(wx.EVT_TOOL, getattr(self, callback), - id=self.wx_ids[text]) - - self._coordinates = coordinates - if self._coordinates: - self.AddStretchableSpace() - self._label_text = wx.StaticText(self, style=wx.ALIGN_RIGHT) - self.AddControl(self._label_text) - - self.Realize() - - NavigationToolbar2.__init__(self, canvas) - - @staticmethod - def _icon(name): - """ - Construct a `wx.Bitmap` suitable for use as icon from an image file - *name*, including the extension and relative to Matplotlib's "images" - data directory. - """ - pilimg = PIL.Image.open(cbook._get_data_path("images", name)) - # ensure RGBA as wx BitMap expects RGBA format - image = np.array(pilimg.convert("RGBA")) - try: - dark = wx.SystemSettings.GetAppearance().IsDark() - except AttributeError: # wxpython < 4.1 - # copied from wx's IsUsingDarkBackground / GetLuminance. - bg = wx.SystemSettings.GetColour(wx.SYS_COLOUR_WINDOW) - fg = wx.SystemSettings.GetColour(wx.SYS_COLOUR_WINDOWTEXT) - # See wx.Colour.GetLuminance. - bg_lum = (.299 * bg.red + .587 * bg.green + .114 * bg.blue) / 255 - fg_lum = (.299 * fg.red + .587 * fg.green + .114 * fg.blue) / 255 - dark = fg_lum - bg_lum > .2 - if dark: - fg = wx.SystemSettings.GetColour(wx.SYS_COLOUR_WINDOWTEXT) - black_mask = (image[..., :3] == 0).all(axis=-1) - image[black_mask, :3] = (fg.Red(), fg.Green(), fg.Blue()) - return wx.Bitmap.FromBufferRGBA( - image.shape[1], image.shape[0], image.tobytes()) - - def _update_buttons_checked(self): - if "Pan" in self.wx_ids: - self.ToggleTool(self.wx_ids["Pan"], self.mode.name == "PAN") - if "Zoom" in self.wx_ids: - self.ToggleTool(self.wx_ids["Zoom"], self.mode.name == "ZOOM") - - def zoom(self, *args): - super().zoom(*args) - self._update_buttons_checked() - - def pan(self, *args): - super().pan(*args) - self._update_buttons_checked() - - def save_figure(self, *args): - # Fetch the required filename and file type. - filetypes, exts, filter_index = self.canvas._get_imagesave_wildcards() - default_file = self.canvas.get_default_filename() - dialog = wx.FileDialog( - self.canvas.GetParent(), "Save to file", - mpl.rcParams["savefig.directory"], default_file, filetypes, - wx.FD_SAVE | wx.FD_OVERWRITE_PROMPT) - dialog.SetFilterIndex(filter_index) - if dialog.ShowModal() == wx.ID_OK: - path = pathlib.Path(dialog.GetPath()) - _log.debug('%s - Save file path: %s', type(self), path) - fmt = exts[dialog.GetFilterIndex()] - ext = path.suffix[1:] - if ext in self.canvas.get_supported_filetypes() and fmt != ext: - # looks like they forgot to set the image type drop - # down, going with the extension. - _log.warning('extension %s did not match the selected ' - 'image type %s; going with %s', - ext, fmt, ext) - fmt = ext - # Save dir for next time, unless empty str (which means use cwd). - if mpl.rcParams["savefig.directory"]: - mpl.rcParams["savefig.directory"] = str(path.parent) - try: - self.canvas.figure.savefig(path, format=fmt) - except Exception as e: - dialog = wx.MessageDialog( - parent=self.canvas.GetParent(), message=str(e), - caption='Matplotlib error') - dialog.ShowModal() - dialog.Destroy() - - def draw_rubberband(self, event, x0, y0, x1, y1): - height = self.canvas.figure.bbox.height - self.canvas._rubberband_rect = (x0, height - y0, x1, height - y1) - self.canvas.Refresh() - - def remove_rubberband(self): - self.canvas._rubberband_rect = None - self.canvas.Refresh() - - def set_message(self, s): - if self._coordinates: - self._label_text.SetLabel(s) - - def set_history_buttons(self): - can_backward = self._nav_stack._pos > 0 - can_forward = self._nav_stack._pos < len(self._nav_stack) - 1 - if 'Back' in self.wx_ids: - self.EnableTool(self.wx_ids['Back'], can_backward) - if 'Forward' in self.wx_ids: - self.EnableTool(self.wx_ids['Forward'], can_forward) - - -# tools for matplotlib.backend_managers.ToolManager: - -class ToolbarWx(ToolContainerBase, wx.ToolBar): - def __init__(self, toolmanager, parent=None, style=wx.TB_BOTTOM): - if parent is None: - parent = toolmanager.canvas.GetParent() - ToolContainerBase.__init__(self, toolmanager) - wx.ToolBar.__init__(self, parent, -1, style=style) - self._space = self.AddStretchableSpace() - self._label_text = wx.StaticText(self, style=wx.ALIGN_RIGHT) - self.AddControl(self._label_text) - self._toolitems = {} - self._groups = {} # Mapping of groups to the separator after them. - - def _get_tool_pos(self, tool): - """ - Find the position (index) of a wx.ToolBarToolBase in a ToolBar. - - ``ToolBar.GetToolPos`` is not useful because wx assigns the same Id to - all Separators and StretchableSpaces. - """ - pos, = [pos for pos in range(self.ToolsCount) - if self.GetToolByPos(pos) == tool] - return pos - - def add_toolitem(self, name, group, position, image_file, description, - toggle): - # Find or create the separator that follows this group. - if group not in self._groups: - self._groups[group] = self.InsertSeparator( - self._get_tool_pos(self._space)) - sep = self._groups[group] - # List all separators. - seps = [t for t in map(self.GetToolByPos, range(self.ToolsCount)) - if t.IsSeparator() and not t.IsStretchableSpace()] - # Find where to insert the tool. - if position >= 0: - # Find the start of the group by looking for the separator - # preceding this one; then move forward from it. - start = (0 if sep == seps[0] - else self._get_tool_pos(seps[seps.index(sep) - 1]) + 1) - else: - # Move backwards from this separator. - start = self._get_tool_pos(sep) + 1 - idx = start + position - if image_file: - bmp = NavigationToolbar2Wx._icon(image_file) - kind = wx.ITEM_NORMAL if not toggle else wx.ITEM_CHECK - tool = self.InsertTool(idx, -1, name, bmp, wx.NullBitmap, kind, - description or "") - else: - size = (self.GetTextExtent(name)[0] + 10, -1) - if toggle: - control = wx.ToggleButton(self, -1, name, size=size) - else: - control = wx.Button(self, -1, name, size=size) - tool = self.InsertControl(idx, control, label=name) - self.Realize() - - def handler(event): - self.trigger_tool(name) - - if image_file: - self.Bind(wx.EVT_TOOL, handler, tool) - else: - control.Bind(wx.EVT_LEFT_DOWN, handler) - - self._toolitems.setdefault(name, []) - self._toolitems[name].append((tool, handler)) - - def toggle_toolitem(self, name, toggled): - if name not in self._toolitems: - return - for tool, handler in self._toolitems[name]: - if not tool.IsControl(): - self.ToggleTool(tool.Id, toggled) - else: - tool.GetControl().SetValue(toggled) - self.Refresh() - - def remove_toolitem(self, name): - for tool, handler in self._toolitems[name]: - self.DeleteTool(tool.Id) - del self._toolitems[name] - - def set_message(self, s): - self._label_text.SetLabel(s) - - -@backend_tools._register_tool_class(_FigureCanvasWxBase) -class ConfigureSubplotsWx(backend_tools.ConfigureSubplotsBase): - def trigger(self, *args): - NavigationToolbar2Wx.configure_subplots(self) - - -@backend_tools._register_tool_class(_FigureCanvasWxBase) -class SaveFigureWx(backend_tools.SaveFigureBase): - def trigger(self, *args): - NavigationToolbar2Wx.save_figure( - self._make_classic_style_pseudo_toolbar()) - - -@backend_tools._register_tool_class(_FigureCanvasWxBase) -class RubberbandWx(backend_tools.RubberbandBase): - def draw_rubberband(self, x0, y0, x1, y1): - NavigationToolbar2Wx.draw_rubberband( - self._make_classic_style_pseudo_toolbar(), None, x0, y0, x1, y1) - - def remove_rubberband(self): - NavigationToolbar2Wx.remove_rubberband( - self._make_classic_style_pseudo_toolbar()) - - -class _HelpDialog(wx.Dialog): - _instance = None # a reference to an open dialog singleton - headers = [("Action", "Shortcuts", "Description")] - widths = [100, 140, 300] - - def __init__(self, parent, help_entries): - super().__init__(parent, title="Help", - style=wx.DEFAULT_DIALOG_STYLE | wx.RESIZE_BORDER) - - sizer = wx.BoxSizer(wx.VERTICAL) - grid_sizer = wx.FlexGridSizer(0, 3, 8, 6) - # create and add the entries - bold = self.GetFont().MakeBold() - for r, row in enumerate(self.headers + help_entries): - for (col, width) in zip(row, self.widths): - label = wx.StaticText(self, label=col) - if r == 0: - label.SetFont(bold) - label.Wrap(width) - grid_sizer.Add(label, 0, 0, 0) - # finalize layout, create button - sizer.Add(grid_sizer, 0, wx.ALL, 6) - ok = wx.Button(self, wx.ID_OK) - sizer.Add(ok, 0, wx.ALIGN_CENTER_HORIZONTAL | wx.ALL, 8) - self.SetSizer(sizer) - sizer.Fit(self) - self.Layout() - self.Bind(wx.EVT_CLOSE, self._on_close) - ok.Bind(wx.EVT_BUTTON, self._on_close) - - def _on_close(self, event): - _HelpDialog._instance = None # remove global reference - self.DestroyLater() - event.Skip() - - @classmethod - def show(cls, parent, help_entries): - # if no dialog is shown, create one; otherwise just re-raise it - if cls._instance: - cls._instance.Raise() - return - cls._instance = cls(parent, help_entries) - cls._instance.Show() - - -@backend_tools._register_tool_class(_FigureCanvasWxBase) -class HelpWx(backend_tools.ToolHelpBase): - def trigger(self, *args): - _HelpDialog.show(self.figure.canvas.GetTopLevelParent(), - self._get_help_entries()) - - -@backend_tools._register_tool_class(_FigureCanvasWxBase) -class ToolCopyToClipboardWx(backend_tools.ToolCopyToClipboardBase): - def trigger(self, *args, **kwargs): - if not self.canvas._isDrawn: - self.canvas.draw() - if not self.canvas.bitmap.IsOk() or not wx.TheClipboard.Open(): - return - try: - wx.TheClipboard.SetData(wx.BitmapDataObject(self.canvas.bitmap)) - finally: - wx.TheClipboard.Close() - - -FigureManagerWx._toolbar2_class = NavigationToolbar2Wx -FigureManagerWx._toolmanager_toolbar_class = ToolbarWx - - -@_Backend.export -class _BackendWx(_Backend): - FigureCanvas = FigureCanvasWx - FigureManager = FigureManagerWx - mainloop = FigureManagerWx.start_main_loop diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_font_manager.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_font_manager.py deleted file mode 100644 index ec901452ee20b10f4192b0802afda01ce261d12d..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_font_manager.py +++ /dev/null @@ -1,348 +0,0 @@ -from io import BytesIO, StringIO -import gc -import multiprocessing -import os -from pathlib import Path -from PIL import Image -import shutil -import subprocess -import sys -import warnings - -import numpy as np -import pytest - -from matplotlib.font_manager import ( - findfont, findSystemFonts, FontEntry, FontProperties, fontManager, - json_dump, json_load, get_font, is_opentype_cff_font, - MSUserFontDirectories, _get_fontconfig_fonts, ttfFontProperty) -from matplotlib import cbook, ft2font, pyplot as plt, rc_context, figure as mfigure - -has_fclist = shutil.which('fc-list') is not None - - -def test_font_priority(): - with rc_context(rc={ - 'font.sans-serif': - ['cmmi10', 'Bitstream Vera Sans']}): - fontfile = findfont(FontProperties(family=["sans-serif"])) - assert Path(fontfile).name == 'cmmi10.ttf' - - # Smoketest get_charmap, which isn't used internally anymore - font = get_font(fontfile) - cmap = font.get_charmap() - assert len(cmap) == 131 - assert cmap[8729] == 30 - - -def test_score_weight(): - assert 0 == fontManager.score_weight("regular", "regular") - assert 0 == fontManager.score_weight("bold", "bold") - assert (0 < fontManager.score_weight(400, 400) < - fontManager.score_weight("normal", "bold")) - assert (0 < fontManager.score_weight("normal", "regular") < - fontManager.score_weight("normal", "bold")) - assert (fontManager.score_weight("normal", "regular") == - fontManager.score_weight(400, 400)) - - -def test_json_serialization(tmpdir): - # Can't open a NamedTemporaryFile twice on Windows, so use a temporary - # directory instead. - path = Path(tmpdir, "fontlist.json") - json_dump(fontManager, path) - copy = json_load(path) - with warnings.catch_warnings(): - warnings.filterwarnings('ignore', 'findfont: Font family.*not found') - for prop in ({'family': 'STIXGeneral'}, - {'family': 'Bitstream Vera Sans', 'weight': 700}, - {'family': 'no such font family'}): - fp = FontProperties(**prop) - assert (fontManager.findfont(fp, rebuild_if_missing=False) == - copy.findfont(fp, rebuild_if_missing=False)) - - -def test_otf(): - fname = '/usr/share/fonts/opentype/freefont/FreeMono.otf' - if Path(fname).exists(): - assert is_opentype_cff_font(fname) - for f in fontManager.ttflist: - if 'otf' in f.fname: - with open(f.fname, 'rb') as fd: - res = fd.read(4) == b'OTTO' - assert res == is_opentype_cff_font(f.fname) - - -@pytest.mark.skipif(sys.platform == "win32" or not has_fclist, - reason='no fontconfig installed') -def test_get_fontconfig_fonts(): - assert len(_get_fontconfig_fonts()) > 1 - - -@pytest.mark.parametrize('factor', [2, 4, 6, 8]) -def test_hinting_factor(factor): - font = findfont(FontProperties(family=["sans-serif"])) - - font1 = get_font(font, hinting_factor=1) - font1.clear() - font1.set_size(12, 100) - font1.set_text('abc') - expected = font1.get_width_height() - - hinted_font = get_font(font, hinting_factor=factor) - hinted_font.clear() - hinted_font.set_size(12, 100) - hinted_font.set_text('abc') - # Check that hinting only changes text layout by a small (10%) amount. - np.testing.assert_allclose(hinted_font.get_width_height(), expected, - rtol=0.1) - - -def test_utf16m_sfnt(): - try: - # seguisbi = Microsoft Segoe UI Semibold - entry = next(entry for entry in fontManager.ttflist - if Path(entry.fname).name == "seguisbi.ttf") - except StopIteration: - pytest.skip("Couldn't find seguisbi.ttf font to test against.") - else: - # Check that we successfully read "semibold" from the font's sfnt table - # and set its weight accordingly. - assert entry.weight == 600 - - -def test_find_ttc(): - fp = FontProperties(family=["WenQuanYi Zen Hei"]) - if Path(findfont(fp)).name != "wqy-zenhei.ttc": - pytest.skip("Font wqy-zenhei.ttc may be missing") - fig, ax = plt.subplots() - ax.text(.5, .5, "\N{KANGXI RADICAL DRAGON}", fontproperties=fp) - for fmt in ["raw", "svg", "pdf", "ps"]: - fig.savefig(BytesIO(), format=fmt) - - -def test_find_noto(): - fp = FontProperties(family=["Noto Sans CJK SC", "Noto Sans CJK JP"]) - name = Path(findfont(fp)).name - if name not in ("NotoSansCJKsc-Regular.otf", "NotoSansCJK-Regular.ttc"): - pytest.skip(f"Noto Sans CJK SC font may be missing (found {name})") - - fig, ax = plt.subplots() - ax.text(0.5, 0.5, 'Hello, 你好', fontproperties=fp) - for fmt in ["raw", "svg", "pdf", "ps"]: - fig.savefig(BytesIO(), format=fmt) - - -def test_find_invalid(tmpdir): - tmp_path = Path(tmpdir) - - with pytest.raises(FileNotFoundError): - get_font(tmp_path / 'non-existent-font-name.ttf') - - with pytest.raises(FileNotFoundError): - get_font(str(tmp_path / 'non-existent-font-name.ttf')) - - with pytest.raises(FileNotFoundError): - get_font(bytes(tmp_path / 'non-existent-font-name.ttf')) - - # Not really public, but get_font doesn't expose non-filename constructor. - from matplotlib.ft2font import FT2Font - with pytest.raises(TypeError, match='font file or a binary-mode file'): - FT2Font(StringIO()) # type: ignore[arg-type] - - -@pytest.mark.skipif(sys.platform != 'linux' or not has_fclist, - reason='only Linux with fontconfig installed') -def test_user_fonts_linux(tmpdir, monkeypatch): - font_test_file = 'mpltest.ttf' - - # Precondition: the test font should not be available - fonts = findSystemFonts() - if any(font_test_file in font for font in fonts): - pytest.skip(f'{font_test_file} already exists in system fonts') - - # Prepare a temporary user font directory - user_fonts_dir = tmpdir.join('fonts') - user_fonts_dir.ensure(dir=True) - shutil.copyfile(Path(__file__).parent / font_test_file, - user_fonts_dir.join(font_test_file)) - - with monkeypatch.context() as m: - m.setenv('XDG_DATA_HOME', str(tmpdir)) - _get_fontconfig_fonts.cache_clear() - # Now, the font should be available - fonts = findSystemFonts() - assert any(font_test_file in font for font in fonts) - - # Make sure the temporary directory is no longer cached. - _get_fontconfig_fonts.cache_clear() - - -def test_addfont_as_path(): - """Smoke test that addfont() accepts pathlib.Path.""" - font_test_file = 'mpltest.ttf' - path = Path(__file__).parent / font_test_file - try: - fontManager.addfont(path) - added, = [font for font in fontManager.ttflist - if font.fname.endswith(font_test_file)] - fontManager.ttflist.remove(added) - finally: - to_remove = [font for font in fontManager.ttflist - if font.fname.endswith(font_test_file)] - for font in to_remove: - fontManager.ttflist.remove(font) - - -@pytest.mark.skipif(sys.platform != 'win32', reason='Windows only') -def test_user_fonts_win32(): - if not (os.environ.get('APPVEYOR') or os.environ.get('TF_BUILD')): - pytest.xfail("This test should only run on CI (appveyor or azure) " - "as the developer's font directory should remain " - "unchanged.") - pytest.xfail("We need to update the registry for this test to work") - font_test_file = 'mpltest.ttf' - - # Precondition: the test font should not be available - fonts = findSystemFonts() - if any(font_test_file in font for font in fonts): - pytest.skip(f'{font_test_file} already exists in system fonts') - - user_fonts_dir = MSUserFontDirectories[0] - - # Make sure that the user font directory exists (this is probably not the - # case on Windows versions < 1809) - os.makedirs(user_fonts_dir) - - # Copy the test font to the user font directory - shutil.copy(Path(__file__).parent / font_test_file, user_fonts_dir) - - # Now, the font should be available - fonts = findSystemFonts() - assert any(font_test_file in font for font in fonts) - - -def _model_handler(_): - fig, ax = plt.subplots() - fig.savefig(BytesIO(), format="pdf") - plt.close() - - -@pytest.mark.skipif(not hasattr(os, "register_at_fork"), - reason="Cannot register at_fork handlers") -def test_fork(): - _model_handler(0) # Make sure the font cache is filled. - ctx = multiprocessing.get_context("fork") - with ctx.Pool(processes=2) as pool: - pool.map(_model_handler, range(2)) - - -def test_missing_family(caplog): - plt.rcParams["font.sans-serif"] = ["this-font-does-not-exist"] - with caplog.at_level("WARNING"): - findfont("sans") - assert [rec.getMessage() for rec in caplog.records] == [ - "findfont: Font family ['sans'] not found. " - "Falling back to DejaVu Sans.", - "findfont: Generic family 'sans' not found because none of the " - "following families were found: this-font-does-not-exist", - ] - - -def _test_threading(): - import threading - from matplotlib.ft2font import LOAD_NO_HINTING - import matplotlib.font_manager as fm - - N = 10 - b = threading.Barrier(N) - - def bad_idea(n): - b.wait() - for j in range(100): - font = fm.get_font(fm.findfont("DejaVu Sans")) - font.set_text(str(n), 0.0, flags=LOAD_NO_HINTING) - - threads = [ - threading.Thread(target=bad_idea, name=f"bad_thread_{j}", args=(j,)) - for j in range(N) - ] - - for t in threads: - t.start() - - for t in threads: - t.join() - - -def test_fontcache_thread_safe(): - pytest.importorskip('threading') - import inspect - - proc = subprocess.run( - [sys.executable, "-c", - inspect.getsource(_test_threading) + '\n_test_threading()'] - ) - if proc.returncode: - pytest.fail("The subprocess returned with non-zero exit status " - f"{proc.returncode}.") - - -def test_fontentry_dataclass(): - fontent = FontEntry(name='font-name') - - png = fontent._repr_png_() - img = Image.open(BytesIO(png)) - assert img.width > 0 - assert img.height > 0 - - html = fontent._repr_html_() - assert html.startswith(" 0 and isinstance(args[0], types.FunctionType): - return args[0] - - return lambda x: x - - -# We don't use the pytest parametrizing function, since it seems to break -# with unittest.TestCase subclasses. -def parametrize(field_names, field_values): - # If we're not given a list of field names, we make it. - if not isinstance(field_names, (tuple, list)): - field_names = (field_names,) - field_values = [(val,) for val in field_values] - - # Create a decorator that saves this list of field names and values on the - # function for later parametrizing. - def decorator(func): - func.__dict__['param_names'] = field_names - func.__dict__['param_values'] = field_values - return func - - return decorator - - -# This is a metaclass that actually performs the parametrization. -class ParametrizingMetaclass(type): - IDENTIFIER_RE = re.compile('[^A-Za-z0-9]') - - def __new__(klass, name, bases, attrs): - new_attrs = attrs.copy() - for attr_name, attr in attrs.items(): - # We only care about functions - if not isinstance(attr, types.FunctionType): - continue - - param_names = attr.__dict__.pop('param_names', None) - param_values = attr.__dict__.pop('param_values', None) - if param_names is None or param_values is None: - continue - - # Create multiple copies of the function. - for i, values in enumerate(param_values): - assert len(param_names) == len(values) - - # Get a repr of the values, and fix it to be a valid identifier - human = '_'.join( - [klass.IDENTIFIER_RE.sub('', repr(x)) for x in values] - ) - - # Create a new name. - # new_name = attr.__name__ + "_%d" % i - new_name = attr.__name__ + "__" + human - - # Create a replacement function. - def create_new_func(func, names, values): - # Create a kwargs dictionary. - kwargs = dict(zip(names, values)) - - @functools.wraps(func) - def new_func(self): - return func(self, **kwargs) - - # Manually set the name and return the new function. - new_func.__name__ = new_name - return new_func - - # Actually create the new function. - new_func = create_new_func(attr, param_names, values) - - # Save this new function in our attrs dict. - new_attrs[new_name] = new_func - - # Remove the old attribute from our new dictionary. - del new_attrs[attr_name] - - # We create the class as normal, except we use our new attributes. - return type.__new__(klass, name, bases, new_attrs) - - -# This is a class decorator that actually applies the above metaclass. -def parametrize_class(klass): - return ParametrizingMetaclass(klass.__name__, - klass.__bases__, - klass.__dict__) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/internals/base.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/internals/base.py deleted file mode 100644 index 677dd369fa4ee9ed2e4545e102f9324fdce1e805..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/internals/base.py +++ /dev/null @@ -1,376 +0,0 @@ -""" -Base class for the internal managers. Both BlockManager and ArrayManager -inherit from this class. -""" -from __future__ import annotations - -from typing import ( - TYPE_CHECKING, - Any, - Literal, - cast, - final, -) - -import numpy as np - -from pandas._config import using_copy_on_write - -from pandas._libs import ( - algos as libalgos, - lib, -) -from pandas.errors import AbstractMethodError -from pandas.util._validators import validate_bool_kwarg - -from pandas.core.dtypes.cast import ( - find_common_type, - np_can_hold_element, -) -from pandas.core.dtypes.dtypes import ( - ExtensionDtype, - SparseDtype, -) - -from pandas.core.base import PandasObject -from pandas.core.construction import extract_array -from pandas.core.indexes.api import ( - Index, - default_index, -) - -if TYPE_CHECKING: - from pandas._typing import ( - ArrayLike, - AxisInt, - DtypeObj, - Self, - Shape, - ) - - -class DataManager(PandasObject): - # TODO share more methods/attributes - - axes: list[Index] - - @property - def items(self) -> Index: - raise AbstractMethodError(self) - - @final - def __len__(self) -> int: - return len(self.items) - - @property - def ndim(self) -> int: - return len(self.axes) - - @property - def shape(self) -> Shape: - return tuple(len(ax) for ax in self.axes) - - @final - def _validate_set_axis(self, axis: AxisInt, new_labels: Index) -> None: - # Caller is responsible for ensuring we have an Index object. - old_len = len(self.axes[axis]) - new_len = len(new_labels) - - if axis == 1 and len(self.items) == 0: - # If we are setting the index on a DataFrame with no columns, - # it is OK to change the length. - pass - - elif new_len != old_len: - raise ValueError( - f"Length mismatch: Expected axis has {old_len} elements, new " - f"values have {new_len} elements" - ) - - def reindex_indexer( - self, - new_axis, - indexer, - axis: AxisInt, - fill_value=None, - allow_dups: bool = False, - copy: bool = True, - only_slice: bool = False, - ) -> Self: - raise AbstractMethodError(self) - - @final - def reindex_axis( - self, - new_index: Index, - axis: AxisInt, - fill_value=None, - only_slice: bool = False, - ) -> Self: - """ - Conform data manager to new index. - """ - new_index, indexer = self.axes[axis].reindex(new_index) - - return self.reindex_indexer( - new_index, - indexer, - axis=axis, - fill_value=fill_value, - copy=False, - only_slice=only_slice, - ) - - def _equal_values(self, other: Self) -> bool: - """ - To be implemented by the subclasses. Only check the column values - assuming shape and indexes have already been checked. - """ - raise AbstractMethodError(self) - - @final - def equals(self, other: object) -> bool: - """ - Implementation for DataFrame.equals - """ - if not isinstance(other, DataManager): - return False - - self_axes, other_axes = self.axes, other.axes - if len(self_axes) != len(other_axes): - return False - if not all(ax1.equals(ax2) for ax1, ax2 in zip(self_axes, other_axes)): - return False - - return self._equal_values(other) - - def apply( - self, - f, - align_keys: list[str] | None = None, - **kwargs, - ) -> Self: - raise AbstractMethodError(self) - - def apply_with_block( - self, - f, - align_keys: list[str] | None = None, - **kwargs, - ) -> Self: - raise AbstractMethodError(self) - - @final - def isna(self, func) -> Self: - return self.apply("apply", func=func) - - @final - def fillna(self, value, limit: int | None, inplace: bool, downcast) -> Self: - if limit is not None: - # Do this validation even if we go through one of the no-op paths - limit = libalgos.validate_limit(None, limit=limit) - - return self.apply_with_block( - "fillna", - value=value, - limit=limit, - inplace=inplace, - downcast=downcast, - using_cow=using_copy_on_write(), - ) - - @final - def where(self, other, cond, align: bool) -> Self: - if align: - align_keys = ["other", "cond"] - else: - align_keys = ["cond"] - other = extract_array(other, extract_numpy=True) - - return self.apply_with_block( - "where", - align_keys=align_keys, - other=other, - cond=cond, - using_cow=using_copy_on_write(), - ) - - @final - def putmask(self, mask, new, align: bool = True) -> Self: - if align: - align_keys = ["new", "mask"] - else: - align_keys = ["mask"] - new = extract_array(new, extract_numpy=True) - - return self.apply_with_block( - "putmask", - align_keys=align_keys, - mask=mask, - new=new, - using_cow=using_copy_on_write(), - ) - - @final - def round(self, decimals: int, using_cow: bool = False) -> Self: - return self.apply_with_block( - "round", - decimals=decimals, - using_cow=using_cow, - ) - - @final - def replace(self, to_replace, value, inplace: bool) -> Self: - inplace = validate_bool_kwarg(inplace, "inplace") - # NDFrame.replace ensures the not-is_list_likes here - assert not lib.is_list_like(to_replace) - assert not lib.is_list_like(value) - return self.apply_with_block( - "replace", - to_replace=to_replace, - value=value, - inplace=inplace, - using_cow=using_copy_on_write(), - ) - - @final - def replace_regex(self, **kwargs) -> Self: - return self.apply_with_block( - "_replace_regex", **kwargs, using_cow=using_copy_on_write() - ) - - @final - def replace_list( - self, - src_list: list[Any], - dest_list: list[Any], - inplace: bool = False, - regex: bool = False, - ) -> Self: - """do a list replace""" - inplace = validate_bool_kwarg(inplace, "inplace") - - bm = self.apply_with_block( - "replace_list", - src_list=src_list, - dest_list=dest_list, - inplace=inplace, - regex=regex, - using_cow=using_copy_on_write(), - ) - bm._consolidate_inplace() - return bm - - def interpolate(self, inplace: bool, **kwargs) -> Self: - return self.apply_with_block( - "interpolate", inplace=inplace, **kwargs, using_cow=using_copy_on_write() - ) - - def pad_or_backfill(self, inplace: bool, **kwargs) -> Self: - return self.apply_with_block( - "pad_or_backfill", - inplace=inplace, - **kwargs, - using_cow=using_copy_on_write(), - ) - - def shift(self, periods: int, fill_value) -> Self: - if fill_value is lib.no_default: - fill_value = None - - return self.apply_with_block("shift", periods=periods, fill_value=fill_value) - - # -------------------------------------------------------------------- - # Consolidation: No-ops for all but BlockManager - - def is_consolidated(self) -> bool: - return True - - def consolidate(self) -> Self: - return self - - def _consolidate_inplace(self) -> None: - return - - -class SingleDataManager(DataManager): - @property - def ndim(self) -> Literal[1]: - return 1 - - @final - @property - def array(self) -> ArrayLike: - """ - Quick access to the backing array of the Block or SingleArrayManager. - """ - # error: "SingleDataManager" has no attribute "arrays"; maybe "array" - return self.arrays[0] # type: ignore[attr-defined] - - def setitem_inplace(self, indexer, value) -> None: - """ - Set values with indexer. - - For Single[Block/Array]Manager, this backs s[indexer] = value - - This is an inplace version of `setitem()`, mutating the manager/values - in place, not returning a new Manager (and Block), and thus never changing - the dtype. - """ - arr = self.array - - # EAs will do this validation in their own __setitem__ methods. - if isinstance(arr, np.ndarray): - # Note: checking for ndarray instead of np.dtype means we exclude - # dt64/td64, which do their own validation. - value = np_can_hold_element(arr.dtype, value) - - if isinstance(value, np.ndarray) and value.ndim == 1 and len(value) == 1: - # NumPy 1.25 deprecation: https://github.com/numpy/numpy/pull/10615 - value = value[0, ...] - - arr[indexer] = value - - def grouped_reduce(self, func): - arr = self.array - res = func(arr) - index = default_index(len(res)) - - mgr = type(self).from_array(res, index) - return mgr - - @classmethod - def from_array(cls, arr: ArrayLike, index: Index): - raise AbstractMethodError(cls) - - -def interleaved_dtype(dtypes: list[DtypeObj]) -> DtypeObj | None: - """ - Find the common dtype for `blocks`. - - Parameters - ---------- - blocks : List[DtypeObj] - - Returns - ------- - dtype : np.dtype, ExtensionDtype, or None - None is returned when `blocks` is empty. - """ - if not len(dtypes): - return None - - return find_common_type(dtypes) - - -def ensure_np_dtype(dtype: DtypeObj) -> np.dtype: - # TODO: https://github.com/pandas-dev/pandas/issues/22791 - # Give EAs some input on what happens here. Sparse needs this. - if isinstance(dtype, SparseDtype): - dtype = dtype.subtype - dtype = cast(np.dtype, dtype) - elif isinstance(dtype, ExtensionDtype): - dtype = np.dtype("object") - elif dtype == np.dtype(str): - dtype = np.dtype("object") - return dtype diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/excel/_odswriter.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/excel/_odswriter.py deleted file mode 100644 index 0bc335a9b75b64408ad1c509d0a1df3ebb4236e1..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/excel/_odswriter.py +++ /dev/null @@ -1,347 +0,0 @@ -from __future__ import annotations - -from collections import defaultdict -import datetime -from typing import ( - TYPE_CHECKING, - Any, - DefaultDict, - cast, - overload, -) - -from pandas._libs import json - -from pandas.io.excel._base import ExcelWriter -from pandas.io.excel._util import ( - combine_kwargs, - validate_freeze_panes, -) - -if TYPE_CHECKING: - from pandas._typing import ( - ExcelWriterIfSheetExists, - FilePath, - StorageOptions, - WriteExcelBuffer, - ) - - from pandas.io.formats.excel import ExcelCell - - -class ODSWriter(ExcelWriter): - _engine = "odf" - _supported_extensions = (".ods",) - - def __init__( - self, - path: FilePath | WriteExcelBuffer | ExcelWriter, - engine: str | None = None, - date_format: str | None = None, - datetime_format=None, - mode: str = "w", - storage_options: StorageOptions | None = None, - if_sheet_exists: ExcelWriterIfSheetExists | None = None, - engine_kwargs: dict[str, Any] | None = None, - **kwargs, - ) -> None: - from odf.opendocument import OpenDocumentSpreadsheet - - if mode == "a": - raise ValueError("Append mode is not supported with odf!") - - engine_kwargs = combine_kwargs(engine_kwargs, kwargs) - self._book = OpenDocumentSpreadsheet(**engine_kwargs) - - super().__init__( - path, - mode=mode, - storage_options=storage_options, - if_sheet_exists=if_sheet_exists, - engine_kwargs=engine_kwargs, - ) - - self._style_dict: dict[str, str] = {} - - @property - def book(self): - """ - Book instance of class odf.opendocument.OpenDocumentSpreadsheet. - - This attribute can be used to access engine-specific features. - """ - return self._book - - @property - def sheets(self) -> dict[str, Any]: - """Mapping of sheet names to sheet objects.""" - from odf.table import Table - - result = { - sheet.getAttribute("name"): sheet - for sheet in self.book.getElementsByType(Table) - } - return result - - def _save(self) -> None: - """ - Save workbook to disk. - """ - for sheet in self.sheets.values(): - self.book.spreadsheet.addElement(sheet) - self.book.save(self._handles.handle) - - def _write_cells( - self, - cells: list[ExcelCell], - sheet_name: str | None = None, - startrow: int = 0, - startcol: int = 0, - freeze_panes: tuple[int, int] | None = None, - ) -> None: - """ - Write the frame cells using odf - """ - from odf.table import ( - Table, - TableCell, - TableRow, - ) - from odf.text import P - - sheet_name = self._get_sheet_name(sheet_name) - assert sheet_name is not None - - if sheet_name in self.sheets: - wks = self.sheets[sheet_name] - else: - wks = Table(name=sheet_name) - self.book.spreadsheet.addElement(wks) - - if validate_freeze_panes(freeze_panes): - freeze_panes = cast(tuple[int, int], freeze_panes) - self._create_freeze_panes(sheet_name, freeze_panes) - - for _ in range(startrow): - wks.addElement(TableRow()) - - rows: DefaultDict = defaultdict(TableRow) - col_count: DefaultDict = defaultdict(int) - - for cell in sorted(cells, key=lambda cell: (cell.row, cell.col)): - # only add empty cells if the row is still empty - if not col_count[cell.row]: - for _ in range(startcol): - rows[cell.row].addElement(TableCell()) - - # fill with empty cells if needed - for _ in range(cell.col - col_count[cell.row]): - rows[cell.row].addElement(TableCell()) - col_count[cell.row] += 1 - - pvalue, tc = self._make_table_cell(cell) - rows[cell.row].addElement(tc) - col_count[cell.row] += 1 - p = P(text=pvalue) - tc.addElement(p) - - # add all rows to the sheet - if len(rows) > 0: - for row_nr in range(max(rows.keys()) + 1): - wks.addElement(rows[row_nr]) - - def _make_table_cell_attributes(self, cell) -> dict[str, int | str]: - """Convert cell attributes to OpenDocument attributes - - Parameters - ---------- - cell : ExcelCell - Spreadsheet cell data - - Returns - ------- - attributes : Dict[str, Union[int, str]] - Dictionary with attributes and attribute values - """ - attributes: dict[str, int | str] = {} - style_name = self._process_style(cell.style) - if style_name is not None: - attributes["stylename"] = style_name - if cell.mergestart is not None and cell.mergeend is not None: - attributes["numberrowsspanned"] = max(1, cell.mergestart) - attributes["numbercolumnsspanned"] = cell.mergeend - return attributes - - def _make_table_cell(self, cell) -> tuple[object, Any]: - """Convert cell data to an OpenDocument spreadsheet cell - - Parameters - ---------- - cell : ExcelCell - Spreadsheet cell data - - Returns - ------- - pvalue, cell : Tuple[str, TableCell] - Display value, Cell value - """ - from odf.table import TableCell - - attributes = self._make_table_cell_attributes(cell) - val, fmt = self._value_with_fmt(cell.val) - pvalue = value = val - if isinstance(val, bool): - value = str(val).lower() - pvalue = str(val).upper() - if isinstance(val, datetime.datetime): - # Fast formatting - value = val.isoformat() - # Slow but locale-dependent - pvalue = val.strftime("%c") - return ( - pvalue, - TableCell(valuetype="date", datevalue=value, attributes=attributes), - ) - elif isinstance(val, datetime.date): - # Fast formatting - value = f"{val.year}-{val.month:02d}-{val.day:02d}" - # Slow but locale-dependent - pvalue = val.strftime("%x") - return ( - pvalue, - TableCell(valuetype="date", datevalue=value, attributes=attributes), - ) - else: - class_to_cell_type = { - str: "string", - int: "float", - float: "float", - bool: "boolean", - } - return ( - pvalue, - TableCell( - valuetype=class_to_cell_type[type(val)], - value=value, - attributes=attributes, - ), - ) - - @overload - def _process_style(self, style: dict[str, Any]) -> str: - ... - - @overload - def _process_style(self, style: None) -> None: - ... - - def _process_style(self, style: dict[str, Any] | None) -> str | None: - """Convert a style dictionary to a OpenDocument style sheet - - Parameters - ---------- - style : Dict - Style dictionary - - Returns - ------- - style_key : str - Unique style key for later reference in sheet - """ - from odf.style import ( - ParagraphProperties, - Style, - TableCellProperties, - TextProperties, - ) - - if style is None: - return None - style_key = json.ujson_dumps(style) - if style_key in self._style_dict: - return self._style_dict[style_key] - name = f"pd{len(self._style_dict)+1}" - self._style_dict[style_key] = name - odf_style = Style(name=name, family="table-cell") - if "font" in style: - font = style["font"] - if font.get("bold", False): - odf_style.addElement(TextProperties(fontweight="bold")) - if "borders" in style: - borders = style["borders"] - for side, thickness in borders.items(): - thickness_translation = {"thin": "0.75pt solid #000000"} - odf_style.addElement( - TableCellProperties( - attributes={f"border{side}": thickness_translation[thickness]} - ) - ) - if "alignment" in style: - alignment = style["alignment"] - horizontal = alignment.get("horizontal") - if horizontal: - odf_style.addElement(ParagraphProperties(textalign=horizontal)) - vertical = alignment.get("vertical") - if vertical: - odf_style.addElement(TableCellProperties(verticalalign=vertical)) - self.book.styles.addElement(odf_style) - return name - - def _create_freeze_panes( - self, sheet_name: str, freeze_panes: tuple[int, int] - ) -> None: - """ - Create freeze panes in the sheet. - - Parameters - ---------- - sheet_name : str - Name of the spreadsheet - freeze_panes : tuple of (int, int) - Freeze pane location x and y - """ - from odf.config import ( - ConfigItem, - ConfigItemMapEntry, - ConfigItemMapIndexed, - ConfigItemMapNamed, - ConfigItemSet, - ) - - config_item_set = ConfigItemSet(name="ooo:view-settings") - self.book.settings.addElement(config_item_set) - - config_item_map_indexed = ConfigItemMapIndexed(name="Views") - config_item_set.addElement(config_item_map_indexed) - - config_item_map_entry = ConfigItemMapEntry() - config_item_map_indexed.addElement(config_item_map_entry) - - config_item_map_named = ConfigItemMapNamed(name="Tables") - config_item_map_entry.addElement(config_item_map_named) - - config_item_map_entry = ConfigItemMapEntry(name=sheet_name) - config_item_map_named.addElement(config_item_map_entry) - - config_item_map_entry.addElement( - ConfigItem(name="HorizontalSplitMode", type="short", text="2") - ) - config_item_map_entry.addElement( - ConfigItem(name="VerticalSplitMode", type="short", text="2") - ) - config_item_map_entry.addElement( - ConfigItem( - name="HorizontalSplitPosition", type="int", text=str(freeze_panes[0]) - ) - ) - config_item_map_entry.addElement( - ConfigItem( - name="VerticalSplitPosition", type="int", text=str(freeze_panes[1]) - ) - ) - config_item_map_entry.addElement( - ConfigItem(name="PositionRight", type="int", text=str(freeze_panes[0])) - ) - config_item_map_entry.addElement( - ConfigItem(name="PositionBottom", type="int", text=str(freeze_panes[1])) - ) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/test_datetimes.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/test_datetimes.py deleted file mode 100644 index c2d68a79f32d4c7b80013300c254c2ae73fff8bf..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/test_datetimes.py +++ /dev/null @@ -1,760 +0,0 @@ -""" -Tests for DatetimeArray -""" -from __future__ import annotations - -from datetime import timedelta -import operator - -try: - from zoneinfo import ZoneInfo -except ImportError: - # Cannot assign to a type - ZoneInfo = None # type: ignore[misc, assignment] - -import numpy as np -import pytest - -from pandas._libs.tslibs import ( - npy_unit_to_abbrev, - tz_compare, -) - -from pandas.core.dtypes.dtypes import DatetimeTZDtype - -import pandas as pd -import pandas._testing as tm -from pandas.core.arrays import ( - DatetimeArray, - TimedeltaArray, -) - - -class TestNonNano: - @pytest.fixture(params=["s", "ms", "us"]) - def unit(self, request): - """Fixture returning parametrized time units""" - return request.param - - @pytest.fixture - def dtype(self, unit, tz_naive_fixture): - tz = tz_naive_fixture - if tz is None: - return np.dtype(f"datetime64[{unit}]") - else: - return DatetimeTZDtype(unit=unit, tz=tz) - - @pytest.fixture - def dta_dti(self, unit, dtype): - tz = getattr(dtype, "tz", None) - - dti = pd.date_range("2016-01-01", periods=55, freq="D", tz=tz) - if tz is None: - arr = np.asarray(dti).astype(f"M8[{unit}]") - else: - arr = np.asarray(dti.tz_convert("UTC").tz_localize(None)).astype( - f"M8[{unit}]" - ) - - dta = DatetimeArray._simple_new(arr, dtype=dtype) - return dta, dti - - @pytest.fixture - def dta(self, dta_dti): - dta, dti = dta_dti - return dta - - def test_non_nano(self, unit, dtype): - arr = np.arange(5, dtype=np.int64).view(f"M8[{unit}]") - dta = DatetimeArray._simple_new(arr, dtype=dtype) - - assert dta.dtype == dtype - assert dta[0].unit == unit - assert tz_compare(dta.tz, dta[0].tz) - assert (dta[0] == dta[:1]).all() - - @pytest.mark.parametrize( - "field", DatetimeArray._field_ops + DatetimeArray._bool_ops - ) - def test_fields(self, unit, field, dtype, dta_dti): - dta, dti = dta_dti - - assert (dti == dta).all() - - res = getattr(dta, field) - expected = getattr(dti._data, field) - tm.assert_numpy_array_equal(res, expected) - - def test_normalize(self, unit): - dti = pd.date_range("2016-01-01 06:00:00", periods=55, freq="D") - arr = np.asarray(dti).astype(f"M8[{unit}]") - - dta = DatetimeArray._simple_new(arr, dtype=arr.dtype) - - assert not dta.is_normalized - - # TODO: simplify once we can just .astype to other unit - exp = np.asarray(dti.normalize()).astype(f"M8[{unit}]") - expected = DatetimeArray._simple_new(exp, dtype=exp.dtype) - - res = dta.normalize() - tm.assert_extension_array_equal(res, expected) - - def test_simple_new_requires_match(self, unit): - arr = np.arange(5, dtype=np.int64).view(f"M8[{unit}]") - dtype = DatetimeTZDtype(unit, "UTC") - - dta = DatetimeArray._simple_new(arr, dtype=dtype) - assert dta.dtype == dtype - - wrong = DatetimeTZDtype("ns", "UTC") - with pytest.raises(AssertionError, match=""): - DatetimeArray._simple_new(arr, dtype=wrong) - - def test_std_non_nano(self, unit): - dti = pd.date_range("2016-01-01", periods=55, freq="D") - arr = np.asarray(dti).astype(f"M8[{unit}]") - - dta = DatetimeArray._simple_new(arr, dtype=arr.dtype) - - # we should match the nano-reso std, but floored to our reso. - res = dta.std() - assert res._creso == dta._creso - assert res == dti.std().floor(unit) - - @pytest.mark.filterwarnings("ignore:Converting to PeriodArray.*:UserWarning") - def test_to_period(self, dta_dti): - dta, dti = dta_dti - result = dta.to_period("D") - expected = dti._data.to_period("D") - - tm.assert_extension_array_equal(result, expected) - - def test_iter(self, dta): - res = next(iter(dta)) - expected = dta[0] - - assert type(res) is pd.Timestamp - assert res._value == expected._value - assert res._creso == expected._creso - assert res == expected - - def test_astype_object(self, dta): - result = dta.astype(object) - assert all(x._creso == dta._creso for x in result) - assert all(x == y for x, y in zip(result, dta)) - - def test_to_pydatetime(self, dta_dti): - dta, dti = dta_dti - - result = dta.to_pydatetime() - expected = dti.to_pydatetime() - tm.assert_numpy_array_equal(result, expected) - - @pytest.mark.parametrize("meth", ["time", "timetz", "date"]) - def test_time_date(self, dta_dti, meth): - dta, dti = dta_dti - - result = getattr(dta, meth) - expected = getattr(dti, meth) - tm.assert_numpy_array_equal(result, expected) - - def test_format_native_types(self, unit, dtype, dta_dti): - # In this case we should get the same formatted values with our nano - # version dti._data as we do with the non-nano dta - dta, dti = dta_dti - - res = dta._format_native_types() - exp = dti._data._format_native_types() - tm.assert_numpy_array_equal(res, exp) - - def test_repr(self, dta_dti, unit): - dta, dti = dta_dti - - assert repr(dta) == repr(dti._data).replace("[ns", f"[{unit}") - - # TODO: tests with td64 - def test_compare_mismatched_resolutions(self, comparison_op): - # comparison that numpy gets wrong bc of silent overflows - op = comparison_op - - iinfo = np.iinfo(np.int64) - vals = np.array([iinfo.min, iinfo.min + 1, iinfo.max], dtype=np.int64) - - # Construct so that arr2[1] < arr[1] < arr[2] < arr2[2] - arr = np.array(vals).view("M8[ns]") - arr2 = arr.view("M8[s]") - - left = DatetimeArray._simple_new(arr, dtype=arr.dtype) - right = DatetimeArray._simple_new(arr2, dtype=arr2.dtype) - - if comparison_op is operator.eq: - expected = np.array([False, False, False]) - elif comparison_op is operator.ne: - expected = np.array([True, True, True]) - elif comparison_op in [operator.lt, operator.le]: - expected = np.array([False, False, True]) - else: - expected = np.array([False, True, False]) - - result = op(left, right) - tm.assert_numpy_array_equal(result, expected) - - result = op(left[1], right) - tm.assert_numpy_array_equal(result, expected) - - if op not in [operator.eq, operator.ne]: - # check that numpy still gets this wrong; if it is fixed we may be - # able to remove compare_mismatched_resolutions - np_res = op(left._ndarray, right._ndarray) - tm.assert_numpy_array_equal(np_res[1:], ~expected[1:]) - - def test_add_mismatched_reso_doesnt_downcast(self): - # https://github.com/pandas-dev/pandas/pull/48748#issuecomment-1260181008 - td = pd.Timedelta(microseconds=1) - dti = pd.date_range("2016-01-01", periods=3) - td - dta = dti._data.as_unit("us") - - res = dta + td.as_unit("us") - # even though the result is an even number of days - # (so we _could_ downcast to unit="s"), we do not. - assert res.unit == "us" - - @pytest.mark.parametrize( - "scalar", - [ - timedelta(hours=2), - pd.Timedelta(hours=2), - np.timedelta64(2, "h"), - np.timedelta64(2 * 3600 * 1000, "ms"), - pd.offsets.Minute(120), - pd.offsets.Hour(2), - ], - ) - def test_add_timedeltalike_scalar_mismatched_reso(self, dta_dti, scalar): - dta, dti = dta_dti - - td = pd.Timedelta(scalar) - exp_reso = max(dta._creso, td._creso) - exp_unit = npy_unit_to_abbrev(exp_reso) - - expected = (dti + td)._data.as_unit(exp_unit) - result = dta + scalar - tm.assert_extension_array_equal(result, expected) - - result = scalar + dta - tm.assert_extension_array_equal(result, expected) - - expected = (dti - td)._data.as_unit(exp_unit) - result = dta - scalar - tm.assert_extension_array_equal(result, expected) - - def test_sub_datetimelike_scalar_mismatch(self): - dti = pd.date_range("2016-01-01", periods=3) - dta = dti._data.as_unit("us") - - ts = dta[0].as_unit("s") - - result = dta - ts - expected = (dti - dti[0])._data.as_unit("us") - assert result.dtype == "m8[us]" - tm.assert_extension_array_equal(result, expected) - - def test_sub_datetime64_reso_mismatch(self): - dti = pd.date_range("2016-01-01", periods=3) - left = dti._data.as_unit("s") - right = left.as_unit("ms") - - result = left - right - exp_values = np.array([0, 0, 0], dtype="m8[ms]") - expected = TimedeltaArray._simple_new( - exp_values, - dtype=exp_values.dtype, - ) - tm.assert_extension_array_equal(result, expected) - result2 = right - left - tm.assert_extension_array_equal(result2, expected) - - -class TestDatetimeArrayComparisons: - # TODO: merge this into tests/arithmetic/test_datetime64 once it is - # sufficiently robust - - def test_cmp_dt64_arraylike_tznaive(self, comparison_op): - # arbitrary tz-naive DatetimeIndex - op = comparison_op - - dti = pd.date_range("2016-01-1", freq="MS", periods=9, tz=None) - arr = DatetimeArray(dti) - assert arr.freq == dti.freq - assert arr.tz == dti.tz - - right = dti - - expected = np.ones(len(arr), dtype=bool) - if comparison_op.__name__ in ["ne", "gt", "lt"]: - # for these the comparisons should be all-False - expected = ~expected - - result = op(arr, arr) - tm.assert_numpy_array_equal(result, expected) - for other in [ - right, - np.array(right), - list(right), - tuple(right), - right.astype(object), - ]: - result = op(arr, other) - tm.assert_numpy_array_equal(result, expected) - - result = op(other, arr) - tm.assert_numpy_array_equal(result, expected) - - -class TestDatetimeArray: - def test_astype_non_nano_tznaive(self): - dti = pd.date_range("2016-01-01", periods=3) - - res = dti.astype("M8[s]") - assert res.dtype == "M8[s]" - - dta = dti._data - res = dta.astype("M8[s]") - assert res.dtype == "M8[s]" - assert isinstance(res, pd.core.arrays.DatetimeArray) # used to be ndarray - - def test_astype_non_nano_tzaware(self): - dti = pd.date_range("2016-01-01", periods=3, tz="UTC") - - res = dti.astype("M8[s, US/Pacific]") - assert res.dtype == "M8[s, US/Pacific]" - - dta = dti._data - res = dta.astype("M8[s, US/Pacific]") - assert res.dtype == "M8[s, US/Pacific]" - - # from non-nano to non-nano, preserving reso - res2 = res.astype("M8[s, UTC]") - assert res2.dtype == "M8[s, UTC]" - assert not tm.shares_memory(res2, res) - - res3 = res.astype("M8[s, UTC]", copy=False) - assert res2.dtype == "M8[s, UTC]" - assert tm.shares_memory(res3, res) - - def test_astype_to_same(self): - arr = DatetimeArray._from_sequence( - ["2000"], dtype=DatetimeTZDtype(tz="US/Central") - ) - result = arr.astype(DatetimeTZDtype(tz="US/Central"), copy=False) - assert result is arr - - @pytest.mark.parametrize("dtype", ["datetime64[ns]", "datetime64[ns, UTC]"]) - @pytest.mark.parametrize( - "other", ["datetime64[ns]", "datetime64[ns, UTC]", "datetime64[ns, CET]"] - ) - def test_astype_copies(self, dtype, other): - # https://github.com/pandas-dev/pandas/pull/32490 - ser = pd.Series([1, 2], dtype=dtype) - orig = ser.copy() - - err = False - if (dtype == "datetime64[ns]") ^ (other == "datetime64[ns]"): - # deprecated in favor of tz_localize - err = True - - if err: - if dtype == "datetime64[ns]": - msg = "Use obj.tz_localize instead or series.dt.tz_localize instead" - else: - msg = "from timezone-aware dtype to timezone-naive dtype" - with pytest.raises(TypeError, match=msg): - ser.astype(other) - else: - t = ser.astype(other) - t[:] = pd.NaT - tm.assert_series_equal(ser, orig) - - @pytest.mark.parametrize("dtype", [int, np.int32, np.int64, "uint32", "uint64"]) - def test_astype_int(self, dtype): - arr = DatetimeArray._from_sequence([pd.Timestamp("2000"), pd.Timestamp("2001")]) - - if np.dtype(dtype) != np.int64: - with pytest.raises(TypeError, match=r"Do obj.astype\('int64'\)"): - arr.astype(dtype) - return - - result = arr.astype(dtype) - expected = arr._ndarray.view("i8") - tm.assert_numpy_array_equal(result, expected) - - def test_astype_to_sparse_dt64(self): - # GH#50082 - dti = pd.date_range("2016-01-01", periods=4) - dta = dti._data - result = dta.astype("Sparse[datetime64[ns]]") - - assert result.dtype == "Sparse[datetime64[ns]]" - assert (result == dta).all() - - def test_tz_setter_raises(self): - arr = DatetimeArray._from_sequence( - ["2000"], dtype=DatetimeTZDtype(tz="US/Central") - ) - with pytest.raises(AttributeError, match="tz_localize"): - arr.tz = "UTC" - - def test_setitem_str_impute_tz(self, tz_naive_fixture): - # Like for getitem, if we are passed a naive-like string, we impute - # our own timezone. - tz = tz_naive_fixture - - data = np.array([1, 2, 3], dtype="M8[ns]") - dtype = data.dtype if tz is None else DatetimeTZDtype(tz=tz) - arr = DatetimeArray(data, dtype=dtype) - expected = arr.copy() - - ts = pd.Timestamp("2020-09-08 16:50").tz_localize(tz) - setter = str(ts.tz_localize(None)) - - # Setting a scalar tznaive string - expected[0] = ts - arr[0] = setter - tm.assert_equal(arr, expected) - - # Setting a listlike of tznaive strings - expected[1] = ts - arr[:2] = [setter, setter] - tm.assert_equal(arr, expected) - - def test_setitem_different_tz_raises(self): - # pre-2.0 we required exact tz match, in 2.0 we require only - # tzawareness-match - data = np.array([1, 2, 3], dtype="M8[ns]") - arr = DatetimeArray(data, copy=False, dtype=DatetimeTZDtype(tz="US/Central")) - with pytest.raises(TypeError, match="Cannot compare tz-naive and tz-aware"): - arr[0] = pd.Timestamp("2000") - - ts = pd.Timestamp("2000", tz="US/Eastern") - arr[0] = ts - assert arr[0] == ts.tz_convert("US/Central") - - def test_setitem_clears_freq(self): - a = DatetimeArray(pd.date_range("2000", periods=2, freq="D", tz="US/Central")) - a[0] = pd.Timestamp("2000", tz="US/Central") - assert a.freq is None - - @pytest.mark.parametrize( - "obj", - [ - pd.Timestamp("2021-01-01"), - pd.Timestamp("2021-01-01").to_datetime64(), - pd.Timestamp("2021-01-01").to_pydatetime(), - ], - ) - def test_setitem_objects(self, obj): - # make sure we accept datetime64 and datetime in addition to Timestamp - dti = pd.date_range("2000", periods=2, freq="D") - arr = dti._data - - arr[0] = obj - assert arr[0] == obj - - def test_repeat_preserves_tz(self): - dti = pd.date_range("2000", periods=2, freq="D", tz="US/Central") - arr = DatetimeArray(dti) - - repeated = arr.repeat([1, 1]) - - # preserves tz and values, but not freq - expected = DatetimeArray(arr.asi8, freq=None, dtype=arr.dtype) - tm.assert_equal(repeated, expected) - - def test_value_counts_preserves_tz(self): - dti = pd.date_range("2000", periods=2, freq="D", tz="US/Central") - arr = DatetimeArray(dti).repeat([4, 3]) - - result = arr.value_counts() - - # Note: not tm.assert_index_equal, since `freq`s do not match - assert result.index.equals(dti) - - arr[-2] = pd.NaT - result = arr.value_counts(dropna=False) - expected = pd.Series([4, 2, 1], index=[dti[0], dti[1], pd.NaT], name="count") - tm.assert_series_equal(result, expected) - - @pytest.mark.parametrize("method", ["pad", "backfill"]) - def test_fillna_preserves_tz(self, method): - dti = pd.date_range("2000-01-01", periods=5, freq="D", tz="US/Central") - arr = DatetimeArray(dti, copy=True) - arr[2] = pd.NaT - - fill_val = dti[1] if method == "pad" else dti[3] - expected = DatetimeArray._from_sequence( - [dti[0], dti[1], fill_val, dti[3], dti[4]], - dtype=DatetimeTZDtype(tz="US/Central"), - ) - - result = arr._pad_or_backfill(method=method) - tm.assert_extension_array_equal(result, expected) - - # assert that arr and dti were not modified in-place - assert arr[2] is pd.NaT - assert dti[2] == pd.Timestamp("2000-01-03", tz="US/Central") - - def test_fillna_2d(self): - dti = pd.date_range("2016-01-01", periods=6, tz="US/Pacific") - dta = dti._data.reshape(3, 2).copy() - dta[0, 1] = pd.NaT - dta[1, 0] = pd.NaT - - res1 = dta._pad_or_backfill(method="pad") - expected1 = dta.copy() - expected1[1, 0] = dta[0, 0] - tm.assert_extension_array_equal(res1, expected1) - - res2 = dta._pad_or_backfill(method="backfill") - expected2 = dta.copy() - expected2 = dta.copy() - expected2[1, 0] = dta[2, 0] - expected2[0, 1] = dta[1, 1] - tm.assert_extension_array_equal(res2, expected2) - - # with different ordering for underlying ndarray; behavior should - # be unchanged - dta2 = dta._from_backing_data(dta._ndarray.copy(order="F")) - assert dta2._ndarray.flags["F_CONTIGUOUS"] - assert not dta2._ndarray.flags["C_CONTIGUOUS"] - tm.assert_extension_array_equal(dta, dta2) - - res3 = dta2._pad_or_backfill(method="pad") - tm.assert_extension_array_equal(res3, expected1) - - res4 = dta2._pad_or_backfill(method="backfill") - tm.assert_extension_array_equal(res4, expected2) - - # test the DataFrame method while we're here - df = pd.DataFrame(dta) - res = df.ffill() - expected = pd.DataFrame(expected1) - tm.assert_frame_equal(res, expected) - - res = df.bfill() - expected = pd.DataFrame(expected2) - tm.assert_frame_equal(res, expected) - - def test_array_interface_tz(self): - tz = "US/Central" - data = DatetimeArray(pd.date_range("2017", periods=2, tz=tz)) - result = np.asarray(data) - - expected = np.array( - [ - pd.Timestamp("2017-01-01T00:00:00", tz=tz), - pd.Timestamp("2017-01-02T00:00:00", tz=tz), - ], - dtype=object, - ) - tm.assert_numpy_array_equal(result, expected) - - result = np.asarray(data, dtype=object) - tm.assert_numpy_array_equal(result, expected) - - result = np.asarray(data, dtype="M8[ns]") - - expected = np.array( - ["2017-01-01T06:00:00", "2017-01-02T06:00:00"], dtype="M8[ns]" - ) - tm.assert_numpy_array_equal(result, expected) - - def test_array_interface(self): - data = DatetimeArray(pd.date_range("2017", periods=2)) - expected = np.array( - ["2017-01-01T00:00:00", "2017-01-02T00:00:00"], dtype="datetime64[ns]" - ) - - result = np.asarray(data) - tm.assert_numpy_array_equal(result, expected) - - result = np.asarray(data, dtype=object) - expected = np.array( - [pd.Timestamp("2017-01-01T00:00:00"), pd.Timestamp("2017-01-02T00:00:00")], - dtype=object, - ) - tm.assert_numpy_array_equal(result, expected) - - @pytest.mark.parametrize("index", [True, False]) - def test_searchsorted_different_tz(self, index): - data = np.arange(10, dtype="i8") * 24 * 3600 * 10**9 - arr = DatetimeArray(data, freq="D").tz_localize("Asia/Tokyo") - if index: - arr = pd.Index(arr) - - expected = arr.searchsorted(arr[2]) - result = arr.searchsorted(arr[2].tz_convert("UTC")) - assert result == expected - - expected = arr.searchsorted(arr[2:6]) - result = arr.searchsorted(arr[2:6].tz_convert("UTC")) - tm.assert_equal(result, expected) - - @pytest.mark.parametrize("index", [True, False]) - def test_searchsorted_tzawareness_compat(self, index): - data = np.arange(10, dtype="i8") * 24 * 3600 * 10**9 - arr = DatetimeArray(data, freq="D") - if index: - arr = pd.Index(arr) - - mismatch = arr.tz_localize("Asia/Tokyo") - - msg = "Cannot compare tz-naive and tz-aware datetime-like objects" - with pytest.raises(TypeError, match=msg): - arr.searchsorted(mismatch[0]) - with pytest.raises(TypeError, match=msg): - arr.searchsorted(mismatch) - - with pytest.raises(TypeError, match=msg): - mismatch.searchsorted(arr[0]) - with pytest.raises(TypeError, match=msg): - mismatch.searchsorted(arr) - - @pytest.mark.parametrize( - "other", - [ - 1, - np.int64(1), - 1.0, - np.timedelta64("NaT"), - pd.Timedelta(days=2), - "invalid", - np.arange(10, dtype="i8") * 24 * 3600 * 10**9, - np.arange(10).view("timedelta64[ns]") * 24 * 3600 * 10**9, - pd.Timestamp("2021-01-01").to_period("D"), - ], - ) - @pytest.mark.parametrize("index", [True, False]) - def test_searchsorted_invalid_types(self, other, index): - data = np.arange(10, dtype="i8") * 24 * 3600 * 10**9 - arr = DatetimeArray(data, freq="D") - if index: - arr = pd.Index(arr) - - msg = "|".join( - [ - "searchsorted requires compatible dtype or scalar", - "value should be a 'Timestamp', 'NaT', or array of those. Got", - ] - ) - with pytest.raises(TypeError, match=msg): - arr.searchsorted(other) - - def test_shift_fill_value(self): - dti = pd.date_range("2016-01-01", periods=3) - - dta = dti._data - expected = DatetimeArray(np.roll(dta._ndarray, 1)) - - fv = dta[-1] - for fill_value in [fv, fv.to_pydatetime(), fv.to_datetime64()]: - result = dta.shift(1, fill_value=fill_value) - tm.assert_datetime_array_equal(result, expected) - - dta = dta.tz_localize("UTC") - expected = expected.tz_localize("UTC") - fv = dta[-1] - for fill_value in [fv, fv.to_pydatetime()]: - result = dta.shift(1, fill_value=fill_value) - tm.assert_datetime_array_equal(result, expected) - - def test_shift_value_tzawareness_mismatch(self): - dti = pd.date_range("2016-01-01", periods=3) - - dta = dti._data - - fv = dta[-1].tz_localize("UTC") - for invalid in [fv, fv.to_pydatetime()]: - with pytest.raises(TypeError, match="Cannot compare"): - dta.shift(1, fill_value=invalid) - - dta = dta.tz_localize("UTC") - fv = dta[-1].tz_localize(None) - for invalid in [fv, fv.to_pydatetime(), fv.to_datetime64()]: - with pytest.raises(TypeError, match="Cannot compare"): - dta.shift(1, fill_value=invalid) - - def test_shift_requires_tzmatch(self): - # pre-2.0 we required exact tz match, in 2.0 we require just - # matching tzawareness - dti = pd.date_range("2016-01-01", periods=3, tz="UTC") - dta = dti._data - - fill_value = pd.Timestamp("2020-10-18 18:44", tz="US/Pacific") - - result = dta.shift(1, fill_value=fill_value) - expected = dta.shift(1, fill_value=fill_value.tz_convert("UTC")) - tm.assert_equal(result, expected) - - def test_tz_localize_t2d(self): - dti = pd.date_range("1994-05-12", periods=12, tz="US/Pacific") - dta = dti._data.reshape(3, 4) - result = dta.tz_localize(None) - - expected = dta.ravel().tz_localize(None).reshape(dta.shape) - tm.assert_datetime_array_equal(result, expected) - - roundtrip = expected.tz_localize("US/Pacific") - tm.assert_datetime_array_equal(roundtrip, dta) - - easts = ["US/Eastern", "dateutil/US/Eastern"] - if ZoneInfo is not None: - try: - tz = ZoneInfo("US/Eastern") - except KeyError: - # no tzdata - pass - else: - # Argument 1 to "append" of "list" has incompatible type "ZoneInfo"; - # expected "str" - easts.append(tz) # type: ignore[arg-type] - - @pytest.mark.parametrize("tz", easts) - def test_iter_zoneinfo_fold(self, tz): - # GH#49684 - utc_vals = np.array( - [1320552000, 1320555600, 1320559200, 1320562800], dtype=np.int64 - ) - utc_vals *= 1_000_000_000 - - dta = DatetimeArray(utc_vals).tz_localize("UTC").tz_convert(tz) - - left = dta[2] - right = list(dta)[2] - assert str(left) == str(right) - # previously there was a bug where with non-pytz right would be - # Timestamp('2011-11-06 01:00:00-0400', tz='US/Eastern') - # while left would be - # Timestamp('2011-11-06 01:00:00-0500', tz='US/Eastern') - # The .value's would match (so they would compare as equal), - # but the folds would not - assert left.utcoffset() == right.utcoffset() - - # The same bug in ints_to_pydatetime affected .astype, so we test - # that here. - right2 = dta.astype(object)[2] - assert str(left) == str(right2) - assert left.utcoffset() == right2.utcoffset() - - -def test_factorize_sort_without_freq(): - dta = DatetimeArray._from_sequence([0, 2, 1]) - - msg = r"call pd.factorize\(obj, sort=True\) instead" - with pytest.raises(NotImplementedError, match=msg): - dta.factorize(sort=True) - - # Do TimedeltaArray while we're here - tda = dta - dta[0] - with pytest.raises(NotImplementedError, match=msg): - tda.factorize(sort=True) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_set_index.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_set_index.py deleted file mode 100644 index 5984e591dd6c113c50939ed5aa69a9481951271e..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_set_index.py +++ /dev/null @@ -1,702 +0,0 @@ -""" -See also: test_reindex.py:TestReindexSetIndex -""" - -from datetime import ( - datetime, - timedelta, -) - -import numpy as np -import pytest - -from pandas import ( - Categorical, - DataFrame, - DatetimeIndex, - Index, - MultiIndex, - Series, - date_range, - period_range, - to_datetime, -) -import pandas._testing as tm - - -class TestSetIndex: - def test_set_index_multiindex(self): - # segfault in GH#3308 - d = {"t1": [2, 2.5, 3], "t2": [4, 5, 6]} - df = DataFrame(d) - tuples = [(0, 1), (0, 2), (1, 2)] - df["tuples"] = tuples - - index = MultiIndex.from_tuples(df["tuples"]) - # it works! - df.set_index(index) - - def test_set_index_empty_column(self): - # GH#1971 - df = DataFrame( - [ - {"a": 1, "p": 0}, - {"a": 2, "m": 10}, - {"a": 3, "m": 11, "p": 20}, - {"a": 4, "m": 12, "p": 21}, - ], - columns=["a", "m", "p", "x"], - ) - - result = df.set_index(["a", "x"]) - - expected = df[["m", "p"]] - expected.index = MultiIndex.from_arrays([df["a"], df["x"]], names=["a", "x"]) - tm.assert_frame_equal(result, expected) - - def test_set_index_empty_dataframe(self): - # GH#38419 - df1 = DataFrame( - {"a": Series(dtype="datetime64[ns]"), "b": Series(dtype="int64"), "c": []} - ) - - df2 = df1.set_index(["a", "b"]) - result = df2.index.to_frame().dtypes - expected = df1[["a", "b"]].dtypes - tm.assert_series_equal(result, expected) - - def test_set_index_multiindexcolumns(self): - columns = MultiIndex.from_tuples([("foo", 1), ("foo", 2), ("bar", 1)]) - df = DataFrame( - np.random.default_rng(2).standard_normal((3, 3)), columns=columns - ) - - result = df.set_index(df.columns[0]) - - expected = df.iloc[:, 1:] - expected.index = df.iloc[:, 0].values - expected.index.names = [df.columns[0]] - tm.assert_frame_equal(result, expected) - - def test_set_index_timezone(self): - # GH#12358 - # tz-aware Series should retain the tz - idx = DatetimeIndex(["2014-01-01 10:10:10"], tz="UTC").tz_convert("Europe/Rome") - df = DataFrame({"A": idx}) - assert df.set_index(idx).index[0].hour == 11 - assert DatetimeIndex(Series(df.A))[0].hour == 11 - assert df.set_index(df.A).index[0].hour == 11 - - def test_set_index_cast_datetimeindex(self): - df = DataFrame( - { - "A": [datetime(2000, 1, 1) + timedelta(i) for i in range(1000)], - "B": np.random.default_rng(2).standard_normal(1000), - } - ) - - idf = df.set_index("A") - assert isinstance(idf.index, DatetimeIndex) - - def test_set_index_dst(self): - di = date_range("2006-10-29 00:00:00", periods=3, freq="H", tz="US/Pacific") - - df = DataFrame(data={"a": [0, 1, 2], "b": [3, 4, 5]}, index=di).reset_index() - # single level - res = df.set_index("index") - exp = DataFrame( - data={"a": [0, 1, 2], "b": [3, 4, 5]}, - index=Index(di, name="index"), - ) - exp.index = exp.index._with_freq(None) - tm.assert_frame_equal(res, exp) - - # GH#12920 - res = df.set_index(["index", "a"]) - exp_index = MultiIndex.from_arrays([di, [0, 1, 2]], names=["index", "a"]) - exp = DataFrame({"b": [3, 4, 5]}, index=exp_index) - tm.assert_frame_equal(res, exp) - - def test_set_index(self, float_string_frame): - df = float_string_frame - idx = Index(np.arange(len(df))[::-1]) - - df = df.set_index(idx) - tm.assert_index_equal(df.index, idx) - with pytest.raises(ValueError, match="Length mismatch"): - df.set_index(idx[::2]) - - def test_set_index_names(self): - df = tm.makeDataFrame() - df.index.name = "name" - - assert df.set_index(df.index).index.names == ["name"] - - mi = MultiIndex.from_arrays(df[["A", "B"]].T.values, names=["A", "B"]) - mi2 = MultiIndex.from_arrays( - df[["A", "B", "A", "B"]].T.values, names=["A", "B", "C", "D"] - ) - - df = df.set_index(["A", "B"]) - - assert df.set_index(df.index).index.names == ["A", "B"] - - # Check that set_index isn't converting a MultiIndex into an Index - assert isinstance(df.set_index(df.index).index, MultiIndex) - - # Check actual equality - tm.assert_index_equal(df.set_index(df.index).index, mi) - - idx2 = df.index.rename(["C", "D"]) - - # Check that [MultiIndex, MultiIndex] yields a MultiIndex rather - # than a pair of tuples - assert isinstance(df.set_index([df.index, idx2]).index, MultiIndex) - - # Check equality - tm.assert_index_equal(df.set_index([df.index, idx2]).index, mi2) - - # A has duplicate values, C does not - @pytest.mark.parametrize("keys", ["A", "C", ["A", "B"], ("tuple", "as", "label")]) - @pytest.mark.parametrize("inplace", [True, False]) - @pytest.mark.parametrize("drop", [True, False]) - def test_set_index_drop_inplace(self, frame_of_index_cols, drop, inplace, keys): - df = frame_of_index_cols - - if isinstance(keys, list): - idx = MultiIndex.from_arrays([df[x] for x in keys], names=keys) - else: - idx = Index(df[keys], name=keys) - expected = df.drop(keys, axis=1) if drop else df - expected.index = idx - - if inplace: - result = df.copy() - return_value = result.set_index(keys, drop=drop, inplace=True) - assert return_value is None - else: - result = df.set_index(keys, drop=drop) - - tm.assert_frame_equal(result, expected) - - # A has duplicate values, C does not - @pytest.mark.parametrize("keys", ["A", "C", ["A", "B"], ("tuple", "as", "label")]) - @pytest.mark.parametrize("drop", [True, False]) - def test_set_index_append(self, frame_of_index_cols, drop, keys): - df = frame_of_index_cols - - keys = keys if isinstance(keys, list) else [keys] - idx = MultiIndex.from_arrays( - [df.index] + [df[x] for x in keys], names=[None] + keys - ) - expected = df.drop(keys, axis=1) if drop else df.copy() - expected.index = idx - - result = df.set_index(keys, drop=drop, append=True) - - tm.assert_frame_equal(result, expected) - - # A has duplicate values, C does not - @pytest.mark.parametrize("keys", ["A", "C", ["A", "B"], ("tuple", "as", "label")]) - @pytest.mark.parametrize("drop", [True, False]) - def test_set_index_append_to_multiindex(self, frame_of_index_cols, drop, keys): - # append to existing multiindex - df = frame_of_index_cols.set_index(["D"], drop=drop, append=True) - - keys = keys if isinstance(keys, list) else [keys] - expected = frame_of_index_cols.set_index(["D"] + keys, drop=drop, append=True) - - result = df.set_index(keys, drop=drop, append=True) - - tm.assert_frame_equal(result, expected) - - def test_set_index_after_mutation(self): - # GH#1590 - df = DataFrame({"val": [0, 1, 2], "key": ["a", "b", "c"]}) - expected = DataFrame({"val": [1, 2]}, Index(["b", "c"], name="key")) - - df2 = df.loc[df.index.map(lambda indx: indx >= 1)] - result = df2.set_index("key") - tm.assert_frame_equal(result, expected) - - # MultiIndex constructor does not work directly on Series -> lambda - # Add list-of-list constructor because list is ambiguous -> lambda - # also test index name if append=True (name is duplicate here for B) - @pytest.mark.parametrize( - "box", - [ - Series, - Index, - np.array, - list, - lambda x: [list(x)], - lambda x: MultiIndex.from_arrays([x]), - ], - ) - @pytest.mark.parametrize( - "append, index_name", [(True, None), (True, "B"), (True, "test"), (False, None)] - ) - @pytest.mark.parametrize("drop", [True, False]) - def test_set_index_pass_single_array( - self, frame_of_index_cols, drop, append, index_name, box - ): - df = frame_of_index_cols - df.index.name = index_name - - key = box(df["B"]) - if box == list: - # list of strings gets interpreted as list of keys - msg = "['one', 'two', 'three', 'one', 'two']" - with pytest.raises(KeyError, match=msg): - df.set_index(key, drop=drop, append=append) - else: - # np.array/list-of-list "forget" the name of B - name_mi = getattr(key, "names", None) - name = [getattr(key, "name", None)] if name_mi is None else name_mi - - result = df.set_index(key, drop=drop, append=append) - - # only valid column keys are dropped - # since B is always passed as array above, nothing is dropped - expected = df.set_index(["B"], drop=False, append=append) - expected.index.names = [index_name] + name if append else name - - tm.assert_frame_equal(result, expected) - - # MultiIndex constructor does not work directly on Series -> lambda - # also test index name if append=True (name is duplicate here for A & B) - @pytest.mark.parametrize( - "box", [Series, Index, np.array, list, lambda x: MultiIndex.from_arrays([x])] - ) - @pytest.mark.parametrize( - "append, index_name", - [(True, None), (True, "A"), (True, "B"), (True, "test"), (False, None)], - ) - @pytest.mark.parametrize("drop", [True, False]) - def test_set_index_pass_arrays( - self, frame_of_index_cols, drop, append, index_name, box - ): - df = frame_of_index_cols - df.index.name = index_name - - keys = ["A", box(df["B"])] - # np.array/list "forget" the name of B - names = ["A", None if box in [np.array, list, tuple, iter] else "B"] - - result = df.set_index(keys, drop=drop, append=append) - - # only valid column keys are dropped - # since B is always passed as array above, only A is dropped, if at all - expected = df.set_index(["A", "B"], drop=False, append=append) - expected = expected.drop("A", axis=1) if drop else expected - expected.index.names = [index_name] + names if append else names - - tm.assert_frame_equal(result, expected) - - # MultiIndex constructor does not work directly on Series -> lambda - # We also emulate a "constructor" for the label -> lambda - # also test index name if append=True (name is duplicate here for A) - @pytest.mark.parametrize( - "box2", - [ - Series, - Index, - np.array, - list, - iter, - lambda x: MultiIndex.from_arrays([x]), - lambda x: x.name, - ], - ) - @pytest.mark.parametrize( - "box1", - [ - Series, - Index, - np.array, - list, - iter, - lambda x: MultiIndex.from_arrays([x]), - lambda x: x.name, - ], - ) - @pytest.mark.parametrize( - "append, index_name", [(True, None), (True, "A"), (True, "test"), (False, None)] - ) - @pytest.mark.parametrize("drop", [True, False]) - def test_set_index_pass_arrays_duplicate( - self, frame_of_index_cols, drop, append, index_name, box1, box2 - ): - df = frame_of_index_cols - df.index.name = index_name - - keys = [box1(df["A"]), box2(df["A"])] - result = df.set_index(keys, drop=drop, append=append) - - # if either box is iter, it has been consumed; re-read - keys = [box1(df["A"]), box2(df["A"])] - - # need to adapt first drop for case that both keys are 'A' -- - # cannot drop the same column twice; - # plain == would give ambiguous Boolean error for containers - first_drop = ( - False - if ( - isinstance(keys[0], str) - and keys[0] == "A" - and isinstance(keys[1], str) - and keys[1] == "A" - ) - else drop - ) - # to test against already-tested behaviour, we add sequentially, - # hence second append always True; must wrap keys in list, otherwise - # box = list would be interpreted as keys - expected = df.set_index([keys[0]], drop=first_drop, append=append) - expected = expected.set_index([keys[1]], drop=drop, append=True) - tm.assert_frame_equal(result, expected) - - @pytest.mark.parametrize("append", [True, False]) - @pytest.mark.parametrize("drop", [True, False]) - def test_set_index_pass_multiindex(self, frame_of_index_cols, drop, append): - df = frame_of_index_cols - keys = MultiIndex.from_arrays([df["A"], df["B"]], names=["A", "B"]) - - result = df.set_index(keys, drop=drop, append=append) - - # setting with a MultiIndex will never drop columns - expected = df.set_index(["A", "B"], drop=False, append=append) - - tm.assert_frame_equal(result, expected) - - def test_construction_with_categorical_index(self): - ci = tm.makeCategoricalIndex(10) - ci.name = "B" - - # with Categorical - df = DataFrame( - {"A": np.random.default_rng(2).standard_normal(10), "B": ci.values} - ) - idf = df.set_index("B") - tm.assert_index_equal(idf.index, ci) - - # from a CategoricalIndex - df = DataFrame({"A": np.random.default_rng(2).standard_normal(10), "B": ci}) - idf = df.set_index("B") - tm.assert_index_equal(idf.index, ci) - - # round-trip - idf = idf.reset_index().set_index("B") - tm.assert_index_equal(idf.index, ci) - - def test_set_index_preserve_categorical_dtype(self): - # GH#13743, GH#13854 - df = DataFrame( - { - "A": [1, 2, 1, 1, 2], - "B": [10, 16, 22, 28, 34], - "C1": Categorical(list("abaab"), categories=list("bac"), ordered=False), - "C2": Categorical(list("abaab"), categories=list("bac"), ordered=True), - } - ) - for cols in ["C1", "C2", ["A", "C1"], ["A", "C2"], ["C1", "C2"]]: - result = df.set_index(cols).reset_index() - result = result.reindex(columns=df.columns) - tm.assert_frame_equal(result, df) - - def test_set_index_datetime(self): - # GH#3950 - df = DataFrame( - { - "label": ["a", "a", "a", "b", "b", "b"], - "datetime": [ - "2011-07-19 07:00:00", - "2011-07-19 08:00:00", - "2011-07-19 09:00:00", - "2011-07-19 07:00:00", - "2011-07-19 08:00:00", - "2011-07-19 09:00:00", - ], - "value": range(6), - } - ) - df.index = to_datetime(df.pop("datetime"), utc=True) - df.index = df.index.tz_convert("US/Pacific") - - expected = DatetimeIndex( - ["2011-07-19 07:00:00", "2011-07-19 08:00:00", "2011-07-19 09:00:00"], - name="datetime", - ) - expected = expected.tz_localize("UTC").tz_convert("US/Pacific") - - df = df.set_index("label", append=True) - tm.assert_index_equal(df.index.levels[0], expected) - tm.assert_index_equal(df.index.levels[1], Index(["a", "b"], name="label")) - assert df.index.names == ["datetime", "label"] - - df = df.swaplevel(0, 1) - tm.assert_index_equal(df.index.levels[0], Index(["a", "b"], name="label")) - tm.assert_index_equal(df.index.levels[1], expected) - assert df.index.names == ["label", "datetime"] - - df = DataFrame(np.random.default_rng(2).random(6)) - idx1 = DatetimeIndex( - [ - "2011-07-19 07:00:00", - "2011-07-19 08:00:00", - "2011-07-19 09:00:00", - "2011-07-19 07:00:00", - "2011-07-19 08:00:00", - "2011-07-19 09:00:00", - ], - tz="US/Eastern", - ) - idx2 = DatetimeIndex( - [ - "2012-04-01 09:00", - "2012-04-01 09:00", - "2012-04-01 09:00", - "2012-04-02 09:00", - "2012-04-02 09:00", - "2012-04-02 09:00", - ], - tz="US/Eastern", - ) - idx3 = date_range("2011-01-01 09:00", periods=6, tz="Asia/Tokyo") - idx3 = idx3._with_freq(None) - - df = df.set_index(idx1) - df = df.set_index(idx2, append=True) - df = df.set_index(idx3, append=True) - - expected1 = DatetimeIndex( - ["2011-07-19 07:00:00", "2011-07-19 08:00:00", "2011-07-19 09:00:00"], - tz="US/Eastern", - ) - expected2 = DatetimeIndex( - ["2012-04-01 09:00", "2012-04-02 09:00"], tz="US/Eastern" - ) - - tm.assert_index_equal(df.index.levels[0], expected1) - tm.assert_index_equal(df.index.levels[1], expected2) - tm.assert_index_equal(df.index.levels[2], idx3) - - # GH#7092 - tm.assert_index_equal(df.index.get_level_values(0), idx1) - tm.assert_index_equal(df.index.get_level_values(1), idx2) - tm.assert_index_equal(df.index.get_level_values(2), idx3) - - def test_set_index_period(self): - # GH#6631 - df = DataFrame(np.random.default_rng(2).random(6)) - idx1 = period_range("2011-01-01", periods=3, freq="M") - idx1 = idx1.append(idx1) - idx2 = period_range("2013-01-01 09:00", periods=2, freq="H") - idx2 = idx2.append(idx2).append(idx2) - idx3 = period_range("2005", periods=6, freq="A") - - df = df.set_index(idx1) - df = df.set_index(idx2, append=True) - df = df.set_index(idx3, append=True) - - expected1 = period_range("2011-01-01", periods=3, freq="M") - expected2 = period_range("2013-01-01 09:00", periods=2, freq="H") - - tm.assert_index_equal(df.index.levels[0], expected1) - tm.assert_index_equal(df.index.levels[1], expected2) - tm.assert_index_equal(df.index.levels[2], idx3) - - tm.assert_index_equal(df.index.get_level_values(0), idx1) - tm.assert_index_equal(df.index.get_level_values(1), idx2) - tm.assert_index_equal(df.index.get_level_values(2), idx3) - - -class TestSetIndexInvalid: - def test_set_index_verify_integrity(self, frame_of_index_cols): - df = frame_of_index_cols - - with pytest.raises(ValueError, match="Index has duplicate keys"): - df.set_index("A", verify_integrity=True) - # with MultiIndex - with pytest.raises(ValueError, match="Index has duplicate keys"): - df.set_index([df["A"], df["A"]], verify_integrity=True) - - @pytest.mark.parametrize("append", [True, False]) - @pytest.mark.parametrize("drop", [True, False]) - def test_set_index_raise_keys(self, frame_of_index_cols, drop, append): - df = frame_of_index_cols - - with pytest.raises(KeyError, match="['foo', 'bar', 'baz']"): - # column names are A-E, as well as one tuple - df.set_index(["foo", "bar", "baz"], drop=drop, append=append) - - # non-existent key in list with arrays - with pytest.raises(KeyError, match="X"): - df.set_index([df["A"], df["B"], "X"], drop=drop, append=append) - - msg = "[('foo', 'foo', 'foo', 'bar', 'bar')]" - # tuples always raise KeyError - with pytest.raises(KeyError, match=msg): - df.set_index(tuple(df["A"]), drop=drop, append=append) - - # also within a list - with pytest.raises(KeyError, match=msg): - df.set_index(["A", df["A"], tuple(df["A"])], drop=drop, append=append) - - @pytest.mark.parametrize("append", [True, False]) - @pytest.mark.parametrize("drop", [True, False]) - @pytest.mark.parametrize("box", [set], ids=["set"]) - def test_set_index_raise_on_type(self, frame_of_index_cols, box, drop, append): - df = frame_of_index_cols - - msg = 'The parameter "keys" may be a column key, .*' - # forbidden type, e.g. set - with pytest.raises(TypeError, match=msg): - df.set_index(box(df["A"]), drop=drop, append=append) - - # forbidden type in list, e.g. set - with pytest.raises(TypeError, match=msg): - df.set_index(["A", df["A"], box(df["A"])], drop=drop, append=append) - - # MultiIndex constructor does not work directly on Series -> lambda - @pytest.mark.parametrize( - "box", - [Series, Index, np.array, iter, lambda x: MultiIndex.from_arrays([x])], - ids=["Series", "Index", "np.array", "iter", "MultiIndex"], - ) - @pytest.mark.parametrize("length", [4, 6], ids=["too_short", "too_long"]) - @pytest.mark.parametrize("append", [True, False]) - @pytest.mark.parametrize("drop", [True, False]) - def test_set_index_raise_on_len( - self, frame_of_index_cols, box, length, drop, append - ): - # GH 24984 - df = frame_of_index_cols # has length 5 - - values = np.random.default_rng(2).integers(0, 10, (length,)) - - msg = "Length mismatch: Expected 5 rows, received array of length.*" - - # wrong length directly - with pytest.raises(ValueError, match=msg): - df.set_index(box(values), drop=drop, append=append) - - # wrong length in list - with pytest.raises(ValueError, match=msg): - df.set_index(["A", df.A, box(values)], drop=drop, append=append) - - -class TestSetIndexCustomLabelType: - def test_set_index_custom_label_type(self): - # GH#24969 - - class Thing: - def __init__(self, name, color) -> None: - self.name = name - self.color = color - - def __str__(self) -> str: - return f"" - - # necessary for pretty KeyError - __repr__ = __str__ - - thing1 = Thing("One", "red") - thing2 = Thing("Two", "blue") - df = DataFrame({thing1: [0, 1], thing2: [2, 3]}) - expected = DataFrame({thing1: [0, 1]}, index=Index([2, 3], name=thing2)) - - # use custom label directly - result = df.set_index(thing2) - tm.assert_frame_equal(result, expected) - - # custom label wrapped in list - result = df.set_index([thing2]) - tm.assert_frame_equal(result, expected) - - # missing key - thing3 = Thing("Three", "pink") - msg = "" - with pytest.raises(KeyError, match=msg): - # missing label directly - df.set_index(thing3) - - with pytest.raises(KeyError, match=msg): - # missing label in list - df.set_index([thing3]) - - def test_set_index_custom_label_hashable_iterable(self): - # GH#24969 - - # actual example discussed in GH 24984 was e.g. for shapely.geometry - # objects (e.g. a collection of Points) that can be both hashable and - # iterable; using frozenset as a stand-in for testing here - - class Thing(frozenset): - # need to stabilize repr for KeyError (due to random order in sets) - def __repr__(self) -> str: - tmp = sorted(self) - joined_reprs = ", ".join(map(repr, tmp)) - # double curly brace prints one brace in format string - return f"frozenset({{{joined_reprs}}})" - - thing1 = Thing(["One", "red"]) - thing2 = Thing(["Two", "blue"]) - df = DataFrame({thing1: [0, 1], thing2: [2, 3]}) - expected = DataFrame({thing1: [0, 1]}, index=Index([2, 3], name=thing2)) - - # use custom label directly - result = df.set_index(thing2) - tm.assert_frame_equal(result, expected) - - # custom label wrapped in list - result = df.set_index([thing2]) - tm.assert_frame_equal(result, expected) - - # missing key - thing3 = Thing(["Three", "pink"]) - msg = r"frozenset\(\{'Three', 'pink'\}\)" - with pytest.raises(KeyError, match=msg): - # missing label directly - df.set_index(thing3) - - with pytest.raises(KeyError, match=msg): - # missing label in list - df.set_index([thing3]) - - def test_set_index_custom_label_type_raises(self): - # GH#24969 - - # purposefully inherit from something unhashable - class Thing(set): - def __init__(self, name, color) -> None: - self.name = name - self.color = color - - def __str__(self) -> str: - return f"" - - thing1 = Thing("One", "red") - thing2 = Thing("Two", "blue") - df = DataFrame([[0, 2], [1, 3]], columns=[thing1, thing2]) - - msg = 'The parameter "keys" may be a column key, .*' - - with pytest.raises(TypeError, match=msg): - # use custom label directly - df.set_index(thing2) - - with pytest.raises(TypeError, match=msg): - # custom label wrapped in list - df.set_index([thing2]) - - def test_set_index_periodindex(self): - # GH#6631 - df = DataFrame(np.random.default_rng(2).random(6)) - idx1 = period_range("2011/01/01", periods=6, freq="M") - idx2 = period_range("2013", periods=6, freq="A") - - df = df.set_index(idx1) - tm.assert_index_equal(df.index, idx1) - df = df.set_index(idx2) - tm.assert_index_equal(df.index, idx2) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/highlighter.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/highlighter.py deleted file mode 100644 index 8afdd017b6e5c11458b3dd61af3bbe9c20ba8ea6..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/highlighter.py +++ /dev/null @@ -1,147 +0,0 @@ -from abc import ABC, abstractmethod -from typing import List, Union - -from .text import Text - - -def _combine_regex(*regexes: str) -> str: - """Combine a number of regexes in to a single regex. - - Returns: - str: New regex with all regexes ORed together. - """ - return "|".join(regexes) - - -class Highlighter(ABC): - """Abstract base class for highlighters.""" - - def __call__(self, text: Union[str, Text]) -> Text: - """Highlight a str or Text instance. - - Args: - text (Union[str, ~Text]): Text to highlight. - - Raises: - TypeError: If not called with text or str. - - Returns: - Text: A test instance with highlighting applied. - """ - if isinstance(text, str): - highlight_text = Text(text) - elif isinstance(text, Text): - highlight_text = text.copy() - else: - raise TypeError(f"str or Text instance required, not {text!r}") - self.highlight(highlight_text) - return highlight_text - - @abstractmethod - def highlight(self, text: Text) -> None: - """Apply highlighting in place to text. - - Args: - text (~Text): A text object highlight. - """ - - -class NullHighlighter(Highlighter): - """A highlighter object that doesn't highlight. - - May be used to disable highlighting entirely. - - """ - - def highlight(self, text: Text) -> None: - """Nothing to do""" - - -class RegexHighlighter(Highlighter): - """Applies highlighting from a list of regular expressions.""" - - highlights: List[str] = [] - base_style: str = "" - - def highlight(self, text: Text) -> None: - """Highlight :class:`rich.text.Text` using regular expressions. - - Args: - text (~Text): Text to highlighted. - - """ - - highlight_regex = text.highlight_regex - for re_highlight in self.highlights: - highlight_regex(re_highlight, style_prefix=self.base_style) - - -class ReprHighlighter(RegexHighlighter): - """Highlights the text typically produced from ``__repr__`` methods.""" - - base_style = "repr." - highlights = [ - r"(?P\<)(?P[\w\-\.\:]*)(?P[\w\W]*?)(?P\>)", - r"(?P[\w_]{1,50})=(?P\"?[\w_]+\"?)?", - r"(?P[\{\[\(\)\]\}])", - _combine_regex( - r"(?P[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})", - r"(?P([A-Fa-f0-9]{1,4}::?){1,7}[A-Fa-f0-9]{1,4})", - r"(?P(?:[0-9A-Fa-f]{1,2}-){7}[0-9A-Fa-f]{1,2}|(?:[0-9A-Fa-f]{1,2}:){7}[0-9A-Fa-f]{1,2}|(?:[0-9A-Fa-f]{4}\.){3}[0-9A-Fa-f]{4})", - r"(?P(?:[0-9A-Fa-f]{1,2}-){5}[0-9A-Fa-f]{1,2}|(?:[0-9A-Fa-f]{1,2}:){5}[0-9A-Fa-f]{1,2}|(?:[0-9A-Fa-f]{4}\.){2}[0-9A-Fa-f]{4})", - r"(?P[\w\.]*?)\(", - r"\b(?PTrue)\b|\b(?PFalse)\b|\b(?PNone)\b", - r"(?P\.\.\.)", - r"(?P(?\B(\/[\w\.\-\_\+]+)*\/)(?P[\w\.\-\_\+]*)?", - r"(?b?\'\'\'.*?(?[a-fA-F0-9]{8}\-[a-fA-F0-9]{4}\-[a-fA-F0-9]{4}\-[a-fA-F0-9]{4}\-[a-fA-F0-9]{12})", - r"(?P(file|https|http|ws|wss):\/\/[0-9a-zA-Z\$\-\_\+\!`\(\)\,\.\?\/\;\:\&\=\%\#]*)", - ), - ] - - -class JSONHighlighter(RegexHighlighter): - """Highlights JSON""" - - base_style = "json." - highlights = [ - _combine_regex( - r"(?P[\{\[\(\)\]\}])", - r"\b(?Ptrue)\b|\b(?Pfalse)\b|\b(?Pnull)\b", - r"(?P(?b?\".*?(?b?\".*?(? None: - """Before call strategy that does nothing.""" - - -def before_log(logger: "logging.Logger", log_level: int) -> typing.Callable[["RetryCallState"], None]: - """Before call strategy that logs to some logger the attempt.""" - - def log_it(retry_state: "RetryCallState") -> None: - logger.log( - log_level, - f"Starting call to '{_utils.get_callback_name(retry_state.fn)}', " - f"this is the {_utils.to_ordinal(retry_state.attempt_number)} time calling it.", - ) - - return log_it diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/styles/sas.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/styles/sas.py deleted file mode 100644 index 86d7ed38f83f3c6f0273c67658f1a2138dc1a6a1..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/styles/sas.py +++ /dev/null @@ -1,41 +0,0 @@ -""" - pygments.styles.sas - ~~~~~~~~~~~~~~~~~~~ - - Style inspired by SAS' enhanced program editor. Note This is not - meant to be a complete style. It's merely meant to mimic SAS' - program editor syntax highlighting. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pygments.style import Style -from pygments.token import Keyword, Name, Comment, String, Error, \ - Number, Other, Whitespace, Generic - - -class SasStyle(Style): - """ - Style inspired by SAS' enhanced program editor. Note This is not - meant to be a complete style. It's merely meant to mimic SAS' - program editor syntax highlighting. - """ - - styles = { - Whitespace: '#bbbbbb', - Comment: 'italic #008800', - String: '#800080', - Number: 'bold #2c8553', - Other: 'bg:#ffffe0', - Keyword: '#2c2cff', - Keyword.Reserved: 'bold #353580', - Keyword.Constant: 'bold', - Name.Builtin: '#2c2cff', - Name.Function: 'bold italic', - Name.Variable: 'bold #2c2cff', - Generic: '#2c2cff', - Generic.Emph: '#008800', - Generic.Error: '#d30202', - Error: 'bg:#e3d2d2 #a61717' - } diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/websockets/legacy/http.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/websockets/legacy/http.py deleted file mode 100644 index 2ac7f7092d58c8caf2b7289dc3ce334f167327b1..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/websockets/legacy/http.py +++ /dev/null @@ -1,201 +0,0 @@ -from __future__ import annotations - -import asyncio -import re -from typing import Tuple - -from ..datastructures import Headers -from ..exceptions import SecurityError - - -__all__ = ["read_request", "read_response"] - -MAX_HEADERS = 128 -MAX_LINE = 8192 - - -def d(value: bytes) -> str: - """ - Decode a bytestring for interpolating into an error message. - - """ - return value.decode(errors="backslashreplace") - - -# See https://www.rfc-editor.org/rfc/rfc7230.html#appendix-B. - -# Regex for validating header names. - -_token_re = re.compile(rb"[-!#$%&\'*+.^_`|~0-9a-zA-Z]+") - -# Regex for validating header values. - -# We don't attempt to support obsolete line folding. - -# Include HTAB (\x09), SP (\x20), VCHAR (\x21-\x7e), obs-text (\x80-\xff). - -# The ABNF is complicated because it attempts to express that optional -# whitespace is ignored. We strip whitespace and don't revalidate that. - -# See also https://www.rfc-editor.org/errata_search.php?rfc=7230&eid=4189 - -_value_re = re.compile(rb"[\x09\x20-\x7e\x80-\xff]*") - - -async def read_request(stream: asyncio.StreamReader) -> Tuple[str, Headers]: - """ - Read an HTTP/1.1 GET request and return ``(path, headers)``. - - ``path`` isn't URL-decoded or validated in any way. - - ``path`` and ``headers`` are expected to contain only ASCII characters. - Other characters are represented with surrogate escapes. - - :func:`read_request` doesn't attempt to read the request body because - WebSocket handshake requests don't have one. If the request contains a - body, it may be read from ``stream`` after this coroutine returns. - - Args: - stream: Input to read the request from. - - Raises: - EOFError: If the connection is closed without a full HTTP request. - SecurityError: If the request exceeds a security limit. - ValueError: If the request isn't well formatted. - - """ - # https://www.rfc-editor.org/rfc/rfc7230.html#section-3.1.1 - - # Parsing is simple because fixed values are expected for method and - # version and because path isn't checked. Since WebSocket software tends - # to implement HTTP/1.1 strictly, there's little need for lenient parsing. - - try: - request_line = await read_line(stream) - except EOFError as exc: - raise EOFError("connection closed while reading HTTP request line") from exc - - try: - method, raw_path, version = request_line.split(b" ", 2) - except ValueError: # not enough values to unpack (expected 3, got 1-2) - raise ValueError(f"invalid HTTP request line: {d(request_line)}") from None - - if method != b"GET": - raise ValueError(f"unsupported HTTP method: {d(method)}") - if version != b"HTTP/1.1": - raise ValueError(f"unsupported HTTP version: {d(version)}") - path = raw_path.decode("ascii", "surrogateescape") - - headers = await read_headers(stream) - - return path, headers - - -async def read_response(stream: asyncio.StreamReader) -> Tuple[int, str, Headers]: - """ - Read an HTTP/1.1 response and return ``(status_code, reason, headers)``. - - ``reason`` and ``headers`` are expected to contain only ASCII characters. - Other characters are represented with surrogate escapes. - - :func:`read_request` doesn't attempt to read the response body because - WebSocket handshake responses don't have one. If the response contains a - body, it may be read from ``stream`` after this coroutine returns. - - Args: - stream: Input to read the response from. - - Raises: - EOFError: If the connection is closed without a full HTTP response. - SecurityError: If the response exceeds a security limit. - ValueError: If the response isn't well formatted. - - """ - # https://www.rfc-editor.org/rfc/rfc7230.html#section-3.1.2 - - # As in read_request, parsing is simple because a fixed value is expected - # for version, status_code is a 3-digit number, and reason can be ignored. - - try: - status_line = await read_line(stream) - except EOFError as exc: - raise EOFError("connection closed while reading HTTP status line") from exc - - try: - version, raw_status_code, raw_reason = status_line.split(b" ", 2) - except ValueError: # not enough values to unpack (expected 3, got 1-2) - raise ValueError(f"invalid HTTP status line: {d(status_line)}") from None - - if version != b"HTTP/1.1": - raise ValueError(f"unsupported HTTP version: {d(version)}") - try: - status_code = int(raw_status_code) - except ValueError: # invalid literal for int() with base 10 - raise ValueError(f"invalid HTTP status code: {d(raw_status_code)}") from None - if not 100 <= status_code < 1000: - raise ValueError(f"unsupported HTTP status code: {d(raw_status_code)}") - if not _value_re.fullmatch(raw_reason): - raise ValueError(f"invalid HTTP reason phrase: {d(raw_reason)}") - reason = raw_reason.decode() - - headers = await read_headers(stream) - - return status_code, reason, headers - - -async def read_headers(stream: asyncio.StreamReader) -> Headers: - """ - Read HTTP headers from ``stream``. - - Non-ASCII characters are represented with surrogate escapes. - - """ - # https://www.rfc-editor.org/rfc/rfc7230.html#section-3.2 - - # We don't attempt to support obsolete line folding. - - headers = Headers() - for _ in range(MAX_HEADERS + 1): - try: - line = await read_line(stream) - except EOFError as exc: - raise EOFError("connection closed while reading HTTP headers") from exc - if line == b"": - break - - try: - raw_name, raw_value = line.split(b":", 1) - except ValueError: # not enough values to unpack (expected 2, got 1) - raise ValueError(f"invalid HTTP header line: {d(line)}") from None - if not _token_re.fullmatch(raw_name): - raise ValueError(f"invalid HTTP header name: {d(raw_name)}") - raw_value = raw_value.strip(b" \t") - if not _value_re.fullmatch(raw_value): - raise ValueError(f"invalid HTTP header value: {d(raw_value)}") - - name = raw_name.decode("ascii") # guaranteed to be ASCII at this point - value = raw_value.decode("ascii", "surrogateescape") - headers[name] = value - - else: - raise SecurityError("too many HTTP headers") - - return headers - - -async def read_line(stream: asyncio.StreamReader) -> bytes: - """ - Read a single line from ``stream``. - - CRLF is stripped from the return value. - - """ - # Security: this is bounded by the StreamReader's limit (default = 32 KiB). - line = await stream.readline() - # Security: this guarantees header values are small (hard-coded = 8 KiB) - if len(line) > MAX_LINE: - raise SecurityError("line too long") - # Not mandatory but safe - https://www.rfc-editor.org/rfc/rfc7230.html#section-3.5 - if not line.endswith(b"\r\n"): - raise EOFError("line without CRLF") - return line[:-2] diff --git a/spaces/pyesonekyaw/faceforgerydetection/README.md b/spaces/pyesonekyaw/faceforgerydetection/README.md deleted file mode 100644 index 4358ae28c15046d559b3b2e5e7bf2c97d2fbee1b..0000000000000000000000000000000000000000 --- a/spaces/pyesonekyaw/faceforgerydetection/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Generalizable Face Forgery Detection with Self-Blended Consistency Learning -emoji: 🐰 -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.9 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git "a/spaces/qingxu98/gpt-academic/crazy_functions/Latex\345\205\250\346\226\207\346\266\246\350\211\262.py" "b/spaces/qingxu98/gpt-academic/crazy_functions/Latex\345\205\250\346\226\207\346\266\246\350\211\262.py" deleted file mode 100644 index 462f965758a49cb7283faa0e78e074135fdfbfe2..0000000000000000000000000000000000000000 --- "a/spaces/qingxu98/gpt-academic/crazy_functions/Latex\345\205\250\346\226\207\346\266\246\350\211\262.py" +++ /dev/null @@ -1,245 +0,0 @@ -from toolbox import update_ui, trimmed_format_exc, promote_file_to_downloadzone, get_log_folder -from toolbox import CatchException, report_execption, write_history_to_file, zip_folder - - -class PaperFileGroup(): - def __init__(self): - self.file_paths = [] - self.file_contents = [] - self.sp_file_contents = [] - self.sp_file_index = [] - self.sp_file_tag = [] - - # count_token - from request_llm.bridge_all import model_info - enc = model_info["gpt-3.5-turbo"]['tokenizer'] - def get_token_num(txt): return len(enc.encode(txt, disallowed_special=())) - self.get_token_num = get_token_num - - def run_file_split(self, max_token_limit=1900): - """ - 将长文本分离开来 - """ - for index, file_content in enumerate(self.file_contents): - if self.get_token_num(file_content) < max_token_limit: - self.sp_file_contents.append(file_content) - self.sp_file_index.append(index) - self.sp_file_tag.append(self.file_paths[index]) - else: - from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf - segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit) - for j, segment in enumerate(segments): - self.sp_file_contents.append(segment) - self.sp_file_index.append(index) - self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.tex") - - print('Segmentation: done') - def merge_result(self): - self.file_result = ["" for _ in range(len(self.file_paths))] - for r, k in zip(self.sp_file_result, self.sp_file_index): - self.file_result[k] += r - - def write_result(self): - manifest = [] - for path, res in zip(self.file_paths, self.file_result): - with open(path + '.polish.tex', 'w', encoding='utf8') as f: - manifest.append(path + '.polish.tex') - f.write(res) - return manifest - - def zip_result(self): - import os, time - folder = os.path.dirname(self.file_paths[0]) - t = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) - zip_folder(folder, get_log_folder(), f'{t}-polished.zip') - - -def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en', mode='polish'): - import time, os, re - from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency - - - # <-------- 读取Latex文件,删除其中的所有注释 ----------> - pfg = PaperFileGroup() - - for index, fp in enumerate(file_manifest): - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - # 定义注释的正则表达式 - comment_pattern = r'(? - pfg.run_file_split(max_token_limit=1024) - n_split = len(pfg.sp_file_contents) - - - # <-------- 多线程润色开始 ----------> - if language == 'en': - if mode == 'polish': - inputs_array = ["Below is a section from an academic paper, polish this section to meet the academic standard, " + - "improve the grammar, clarity and overall readability, do not modify any latex command such as \section, \cite and equations:" + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - else: - inputs_array = [r"Below is a section from an academic paper, proofread this section." + - r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " + - r"Answer me only with the revised text:" + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - inputs_show_user_array = [f"Polish {f}" for f in pfg.sp_file_tag] - sys_prompt_array = ["You are a professional academic paper writer." for _ in range(n_split)] - elif language == 'zh': - if mode == 'polish': - inputs_array = [f"以下是一篇学术论文中的一段内容,请将此部分润色以满足学术标准,提高语法、清晰度和整体可读性,不要修改任何LaTeX命令,例如\section,\cite和方程式:" + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - else: - inputs_array = [f"以下是一篇学术论文中的一段内容,请对这部分内容进行语法矫正。不要修改任何LaTeX命令,例如\section,\cite和方程式:" + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - inputs_show_user_array = [f"润色 {f}" for f in pfg.sp_file_tag] - sys_prompt_array=["你是一位专业的中文学术论文作家。" for _ in range(n_split)] - - - gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array=inputs_array, - inputs_show_user_array=inputs_show_user_array, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history_array=[[""] for _ in range(n_split)], - sys_prompt_array=sys_prompt_array, - # max_workers=5, # 并行任务数量限制,最多同时执行5个,其他的排队等待 - scroller_max_len = 80 - ) - - # <-------- 文本碎片重组为完整的tex文件,整理结果为压缩包 ----------> - try: - pfg.sp_file_result = [] - for i_say, gpt_say in zip(gpt_response_collection[0::2], gpt_response_collection[1::2]): - pfg.sp_file_result.append(gpt_say) - pfg.merge_result() - pfg.write_result() - pfg.zip_result() - except: - print(trimmed_format_exc()) - - # <-------- 整理结果,退出 ----------> - create_report_file_name = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + f"-chatgpt.polish.md" - res = write_history_to_file(gpt_response_collection, file_basename=create_report_file_name) - promote_file_to_downloadzone(res, chatbot=chatbot) - - history = gpt_response_collection - chatbot.append((f"{fp}完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - -@CatchException -def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky。(注意,此插件不调用Latex,如果有Latex环境,请使用“Latex英文纠错+高亮”插件)"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import tiktoken - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en') - - - - - - -@CatchException -def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import tiktoken - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='zh') - - - - -@CatchException -def Latex英文纠错(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "对整个Latex项目进行纠错。函数插件贡献者: Binary-Husky"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import tiktoken - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en', mode='proofread') - - - diff --git a/spaces/qingxu98/gpt-academic/docs/README.md.German.md b/spaces/qingxu98/gpt-academic/docs/README.md.German.md deleted file mode 100644 index d514de30f54bd8931568c029a3bbd3aa3eacdbb1..0000000000000000000000000000000000000000 --- a/spaces/qingxu98/gpt-academic/docs/README.md.German.md +++ /dev/null @@ -1,307 +0,0 @@ -> **Hinweis** -> -> Bei der Installation von Abhängigkeiten sollten nur die in **requirements.txt** **angegebenen Versionen** streng ausgewählt werden. -> -> `pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/` - -# GPT Akademisch optimiert (GPT Academic) - -**Wenn Ihnen dieses Projekt gefällt, geben Sie ihm bitte einen Stern; wenn Sie bessere Tastenkombinationen oder Funktions-Plugins entwickelt haben, können Sie gerne einen Pull Request eröffnen.** - -Wenn Sie dieses Projekt mögen, geben Sie ihm bitte einen Stern. Wenn Sie weitere nützliche wissenschaftliche Abkürzungen oder funktionale Plugins entwickelt haben, können Sie gerne ein Problem oder eine Pull-Anforderung öffnen. Wir haben auch ein README in [Englisch|](docs/README_EN.md)[日本語|](docs/README_JP.md)[한국어|](https://github.com/mldljyh/ko_gpt_academic)[Русский|](docs/README_RS.md)[Français](docs/README_FR.md), das von diesem Projekt selbst übersetzt wurde. -Um dieses Projekt in eine beliebige Sprache mit GPT zu übersetzen, lesen Sie `multi_language.py` (experimentell). - -> **Hinweis** -> -> 1. Beachten Sie bitte, dass nur Funktionserweiterungen (Schaltflächen) mit **roter Farbe** Dateien lesen können und einige Erweiterungen im **Dropdown-Menü** des Erweiterungsbereichs zu finden sind. Außerdem begrüßen wir jede neue Funktionserweiterung mit **höchster Priorität** und bearbeiten sie. -> -> 2. Die Funktionalität jeder Datei in diesem Projekt wird in der Selbstanalyse [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) detailliert beschrieben. Mit der Weiterentwicklung der Versionen können Sie jederzeit die zugehörigen Funktions-Erweiterungen aufrufen, um durch Aufruf von GPT einen Selbstanalysebericht des Projekts zu erstellen. Häufig gestellte Fragen finden Sie in der [`Wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Installationsanweisungen](#Installation). -> -> 3. Dieses Projekt ist kompatibel und fördert die Verwendung von inländischen Sprachmodellen wie ChatGLM und RWKV, Pangu, etc. Es unterstützt das Vorhandensein mehrerer api-keys, die in der Konfigurationsdatei wie folgt angegeben werden können: `API_KEY="openai-key1,openai-key2,api2d-key3"`. Wenn ein `API_KEY` temporär geändert werden muss, geben Sie den temporären `API_KEY` im Eingabebereich ein und drücken Sie dann die Eingabetaste, um ihn zu übernehmen.Funktion | Beschreibung ---- | --- -Ein-Klick-Polieren | Unterstützt ein-Klick-Polieren und ein-Klick-Suche nach grammatikalischen Fehlern in wissenschaftlichen Arbeiten -Ein-Klick Chinesisch-Englisch Übersetzung | Ein-Klick Chinesisch-Englisch Übersetzung -Ein-Klick-Code-Erklärung | Zeigt Code, erklärt Code, erzeugt Code und fügt Kommentare zum Code hinzu -[Benutzerdefinierte Tastenkombinationen](https://www.bilibili.com/video/BV14s4y1E7jN) | Unterstützt benutzerdefinierte Tastenkombinationen -Modulare Gestaltung | Unterstützt leistungsstarke individuelle [Funktions-Plugins](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions). Plugins unterstützen [Hot-Updates](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97) -[Selbstprogramm-Analyse](https://www.bilibili.com/video/BV1cj411A7VW) | [Funktions-Plugin] [Ein-Klick Verstehen](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) der Quellcode dieses Projekts -[Programmanalyse](https://www.bilibili.com/video/BV1cj411A7VW) | [Funktions-Plugin] Ein-Klick-Analyse des Projektbaums anderer Python/C/C++/Java/Lua/...-Projekte -Lesen von Papieren, [Übersetzen](https://www.bilibili.com/video/BV1KT411x7Wn) von Papieren | [Funktions-Plugin] Ein-Klick Erklärung des gesamten LaTeX/PDF-Artikels und Erstellung einer Zusammenfassung -LaTeX-Volltext-Übersetzung und [Polieren](https://www.bilibili.com/video/BV1FT411H7c5/) | [Funktions-Plugin] Ein-Klick-Übersetzung oder-Polieren des LaTeX-Artikels -Bulk-Kommentargenerierung | [Funktions-Plugin] Ein-Klick Massenerstellung von Funktionskommentaren -Markdown [Chinesisch-Englisch Übersetzung](https://www.bilibili.com/video/BV1yo4y157jV/) | [Funktions-Plugin] Haben Sie die [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) in den oben genannten 5 Sprachen gesehen? -Analyse-Berichtserstellung von chat | [Funktions-Plugin] Automatische Zusammenfassung nach der Ausführung -[Funktion zur vollständigen Übersetzung von PDF-Artikeln](https://www.bilibili.com/video/BV1KT411x7Wn) | [Funktions-Plugin] Extrahiert Titel und Zusammenfassung der PDF-Artikel und übersetzt den gesamten Text (mehrere Threads) -[Arxiv-Assistent](https://www.bilibili.com/video/BV1LM4y1279X) | [Funktions-Plugin] Geben Sie die Arxiv-Artikel-URL ein und klicken Sie auf Eine-Klick-Übersetzung-Zusammenfassung + PDF-Download -[Google Scholar Integrations-Assistent](https://www.bilibili.com/video/BV19L411U7ia) | [Funktions-Plugin] Geben Sie eine beliebige Google Scholar Such-URL ein und lassen Sie gpt Ihnen bei der Erstellung von [relatedworks](https://www.bilibili.com/video/BV1GP411U7Az/) helfen -Internet-Informationen Aggregation + GPT | [Funktions-Plugin] Lassen Sie GPT eine Frage beantworten, indem es [zuerst Informationen aus dem Internet](https://www.bilibili.com/video/BV1om4y127ck/) sammelt und so die Informationen nie veralten -Anzeige von Formeln / Bildern / Tabellen | Zeigt Formeln in beiden Formen, [TeX-Format und gerendeter Form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png), unterstützt Formeln und Code-Highlights -Unterstützung von PlugIns mit mehreren Threads | Unterstützt den Aufruf mehrerer Threads in Chatgpt, um Text oder Programme [Batch zu verarbeiten](https://www.bilibili.com/video/BV1FT411H7c5/) -Starten Sie das dunkle Gradio-[Thema](https://github.com/binary-husky/gpt_academic/issues/173) | Fügen Sie ```/?__theme=dark``` an das Ende der Browser-URL an, um das dunkle Thema zu aktivieren -[Unterstützung für mehrere LLM-Modelle](https://www.bilibili.com/video/BV1wT411p7yf), [API2D](https://api2d.com/) Interface-Unterstützung | Das Gefühl, gleichzeitig von GPT3.5, GPT4, [Tshinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B), [Fudan MOSS](https://github.com/OpenLMLab/MOSS) bedient zu werden, muss toll sein, oder? -Zugriff auf weitere LLM-Modelle, Unterstützung von [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) | Hinzufügen der Newbing-Schnittstelle (neues Bing), Einführung der Unterstützung von [Jittorllms](https://github.com/Jittor/JittorLLMs) der Tsinghua-Universität, [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) und [Pangu alpha](https://openi.org.cn/pangu/) -Weitere neue Funktionen (wie Bildgenerierung) …… | Siehe Ende dieses Dokuments …… - -- Neue Oberfläche (Ändern Sie die LAYOUT-Option in `config.py`, um zwischen "Seitenlayout" und "Oben-unten-Layout" zu wechseln) -
    - -
    - All buttons are dynamically generated by reading `functional.py`, and custom functions can be easily added, freeing up the clipboard. -
    - -
    - -- Proofreading/Correcting -
    - -
    - -- If the output contains formulas, they will be displayed in both tex format and rendered format for easy copying and reading. -
    - -
    - -- Don't feel like reading the project code? Show off the entire project to chatgpt. -
    - -
    - -- Multiple large language models are mixed and called together (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4). -
    - -
    - ---- -# Installation -## Installation-Method 1: Run directly (Windows, Linux or MacOS) - -1. Download the project -```sh -git clone https://github.com/binary-husky/gpt_academic.git -cd gpt_academic -``` - -2. Configure API_KEY - -Configure API KEY and other settings in `config.py`. [Special Network Environment Settings](https://github.com/binary-husky/gpt_academic/issues/1). - -(P.S. When the program is running, it will first check whether there is a "config_private.py" private configuration file, and use the configuration defined in it to override the configuration of "config.py". Therefore, if you understand our configuration reading logic, we strongly recommend that you create a new configuration file named "config_private.py" next to "config.py" and transfer (copy) the configurations in "config.py" to "config_private.py". "config_private.py" is not controlled by git, which can make your privacy information more secure. P.S. The project also supports configuring most options through `environment variables`, and the writing format of environment variables refers to the `docker-compose` file. Reading priority: `environment variable` > `config_private.py` >`config.py`) - - -3. Install dependencies -```sh -# (Option I: If familar with Python) (Python version 3.9 or above, the newer the better), Note: Use the official pip source or Ali pip source, temporary switching method: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ -python -m pip install -r requirements.txt - -# (Option II: If not familiar with Python) Use anaconda with similar steps (https://www.bilibili.com/video/BV1rc411W7Dr): -conda create -n gptac_venv python=3.11 # Create an anaconda environment -conda activate gptac_venv # Activate the anaconda environment -python -m pip install -r requirements.txt # Same step as pip installation -``` - -
    Click to expand if supporting Tsinghua ChatGLM/Fudan MOSS as backend -

    - -[Optional Step] If supporting Tsinghua ChatGLM/Fudan MOSS as backend, additional dependencies need to be installed (Prerequisites: Familiar with Python + Used Pytorch + Sufficient computer configuration): -```sh -# [Optional Step I] Support Tsinghua ChatGLM. Remark: If encountering "Call ChatGLM fail Cannot load ChatGLM parameters", please refer to the following: 1: The above default installation is torch+cpu version. To use cuda, uninstall torch and reinstall torch+cuda; 2: If the model cannot be loaded due to insufficient machine configuration, you can modify the model precision in `request_llm/bridge_chatglm.py`, and modify all AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True) -python -m pip install -r request_llm/requirements_chatglm.txt - -# [Optional Step II] Support Fudan MOSS -python -m pip install -r request_llm/requirements_moss.txt -git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # When executing this line of code, you must be in the project root path - -# [Optional Step III] Make sure the AVAIL_LLM_MODELS in the config.py configuration file contains the expected models. Currently supported models are as follows (jittorllms series currently only supports docker solutions): -AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"] -``` - -

    -
    - - - -4. Run -```sh -python main.py -```5. Testing Function Plugin -``` -- Test function plugin template function (requires gpt to answer what happened today in history), you can use this function as a template to implement more complex functions - Click "[Function Plugin Template Demo] Today in History" -``` - -## Installation-Method 2: Using Docker - -1. Only ChatGPT (Recommended for most people) - -``` sh -git clone https://github.com/binary-husky/gpt_academic.git # Download the project -cd gpt_academic # Enter the path -nano config.py # Edit config.py with any text editor, Configure "Proxy","API_KEY"and"WEB_PORT" (e.g 50923) etc. -docker build -t gpt-academic . # Install - -# (Last step-option 1) Under Linux environment, use `--net=host` is more convenient and quick -docker run --rm -it --net=host gpt-academic -# (Last step-option 2) Under macOS/windows environment, can only use the -p option to expose the container's port(eg.50923) to the port on the host. -docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic -``` - -2. ChatGPT + ChatGLM + MOSS (Requires familiarity with Docker) - -``` sh -# Modify docker-compose.yml, delete solution 1 and solution 3, and retain solution 2. Modify the configuration of solution 2 in docker-compose.yml, referring to the comments in it. -docker-compose up -``` - -3. ChatGPT+LLAMA+Pangu+RWKV(Requires familiarity with Docker) -``` sh -# Modify docker-compose.yml, delete solution 1 and solution 2, and retain solution 3. Modify the configuration of solution 3 in docker-compose.yml, referring to the comments in it. -docker-compose up -``` - - -## Installation-Method 3: Other Deployment Options - -1. How to use reverse proxy URL/Microsoft Azure API -Configure API_URL_REDIRECT according to the instructions in `config.py`. - -2. Remote cloud server deployment (requires cloud server knowledge and experience) -Please visit [Deployment wiki-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97) - -3. Using WSL 2 (Windows subsystem for Linux) -Please visit [Deployment wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2) - -4. How to run at a secondary URL (such as `http://localhost/subpath`) -Please visit [FastAPI operating instructions](docs/WithFastapi.md) - -5. Use docker-compose to run -Please read docker-compose.yml and follow the prompts to operate. - ---- -# Advanced Usage -## Customize new convenience buttons / custom function plugins. - -1. Customize new convenience buttons (Academic Shortcut Keys) -Open `core_functional.py` with any text editor, add an entry as follows, and then restart the program. (If the button has been added successfully and is visible, then the prefix and suffix can be hot-modified, and it will take effect without restarting the program.) -For example -``` -"Super English to Chinese": { - # Prefix, will be added before your input. For example, used to describe your requirements, such as translation, explaining code, polishing, etc. - "Prefix": "Please translate the following content into Chinese, and then use a markdown table to explain the proper nouns that appear in the text one by one:\n\n", - - # Suffix, will be added after your input. For example, combined with prefix, you can enclose your input content in quotes. - "Suffix": "", -}, -``` -
    - -
    - -2. Custom function plugins - -Write powerful function plugins to perform any task you want and can't think of. -The difficulty of plugin writing and debugging is very low in this project. As long as you have a certain knowledge of Python, you can implement your own plugin functions by imitating the template we provided. -For more information, please refer to the [Function Plugin Guide](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97). - ---- -# Latest Update -## New feature dynamics1. Funktion zur Speicherung von Dialogen. Rufen Sie im Bereich der Funktions-Plugins "Aktuellen Dialog speichern" auf, um den aktuellen Dialog als lesbares und wiederherstellbares HTML-Datei zu speichern. Darüber hinaus können Sie im Funktions-Plugin-Bereich (Dropdown-Menü) "Laden von Dialogverlauf" aufrufen, um den vorherigen Dialog wiederherzustellen. Tipp: Wenn Sie keine Datei angeben und stattdessen direkt auf "Laden des Dialogverlaufs" klicken, können Sie das HTML-Cache-Archiv anzeigen. Durch Klicken auf "Löschen aller lokalen Dialogverlaufsdatensätze" können alle HTML-Archiv-Caches gelöscht werden. -
    - -
    - -2. Berichterstellung. Die meisten Plugins generieren nach Abschluss der Ausführung einen Arbeitsbericht. -
    - - - -
    - -3. Modularisierte Funktionsgestaltung, einfache Schnittstellen mit leistungsstarken Funktionen. -
    - - -
    - -4. Dies ist ein Open-Source-Projekt, das sich "selbst übersetzen" kann. -
    - -
    - -5. Die Übersetzung anderer Open-Source-Projekte ist kein Problem. -
    - -
    - -
    - -
    - -6. Dekorieren Sie [`live2d`](https://github.com/fghrsh/live2d_demo) mit kleinen Funktionen (standardmäßig deaktiviert, Änderungen an `config.py` erforderlich). -
    - -
    - -7. Neue MOSS-Sprachmodellunterstützung. -
    - -
    - -8. OpenAI-Bildgenerierung. -
    - -
    - -9. OpenAI-Audio-Analyse und Zusammenfassung. -
    - -
    - -10. Latex-Proofreading des gesamten Textes. -
    - -
    - - -## Version: -- Version 3.5 (Todo): Rufen Sie alle Funktionserweiterungen dieses Projekts mit natürlicher Sprache auf (hohe Priorität). -- Version 3.4 (Todo): Verbesserte Unterstützung mehrerer Threads für Local Large Model (LLM). -- Version 3.3: + Internet-Informationssynthese-Funktion -- Version 3.2: Funktionserweiterungen unterstützen mehr Parameter-Schnittstellen (Speicherung von Dialogen, Interpretation beliebigen Sprachcodes + gleichzeitige Abfrage jeder LLM-Kombination) -- Version 3.1: Unterstützung mehrerer GPT-Modelle gleichzeitig! Unterstützung für API2D, Unterstützung für Lastenausgleich von mehreren API-Schlüsseln. -- Version 3.0: Unterstützung von Chatglm und anderen kleinen LLMs -- Version 2.6: Umstrukturierung der Plugin-Struktur zur Verbesserung der Interaktivität, Einführung weiterer Plugins -- Version 2.5: Automatische Aktualisierung, Problembehebung bei Quelltexten großer Projekte, wenn der Text zu lang ist oder Token überlaufen. -- Version 2.4: (1) Neue Funktion zur Übersetzung des gesamten PDF-Texts; (2) Neue Funktion zum Wechseln der Position des Eingabebereichs; (3) Neue Option für vertikales Layout; (4) Optimierung von Multithread-Funktions-Plugins. -- Version 2.3: Verbesserte Interaktivität mit mehreren Threads -- Version 2.2: Funktionserweiterungen unterstützen "Hot-Reload" -- Version 2.1: Faltbares Layout -- Version 2.0: Einführung von modularisierten Funktionserweiterungen -- Version 1.0: Grundlegende Funktionengpt_academic Entwickler QQ-Gruppe-2: 610599535 - -- Bekannte Probleme - - Einige Browser-Übersetzungs-Plugins können die Frontend-Ausführung dieser Software stören. - - Sowohl eine zu hohe als auch eine zu niedrige Version von Gradio führt zu verschiedenen Ausnahmen. - -## Referenz und Lernen - -``` -Der Code bezieht sich auf viele Designs von anderen herausragenden Projekten, insbesondere: - -# Projekt 1: ChatGLM-6B der Tsinghua Universität: -https://github.com/THUDM/ChatGLM-6B - -# Projekt 2: JittorLLMs der Tsinghua Universität: -https://github.com/Jittor/JittorLLMs - -# Projekt 3: Edge-GPT: -https://github.com/acheong08/EdgeGPT - -# Projekt 4: ChuanhuChatGPT: -https://github.com/GaiZhenbiao/ChuanhuChatGPT - -# Projekt 5: ChatPaper: -https://github.com/kaixindelele/ChatPaper - -# Mehr: -https://github.com/gradio-app/gradio -https://github.com/fghrsh/live2d_demo -``` \ No newline at end of file diff --git a/spaces/qq12122211/Real-CUGAN/README.md b/spaces/qq12122211/Real-CUGAN/README.md deleted file mode 100644 index d673114edadba73e80f33a3c71bc0dbee8758cc8..0000000000000000000000000000000000000000 --- a/spaces/qq12122211/Real-CUGAN/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Real CUGAN -emoji: 🐢 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false -license: gpl-3.0 -duplicated_from: DianXian/Real-CUGAN ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Elipse Scada Hardkey Crack Fixl.md b/spaces/quidiaMuxgu/Expedit-SAM/Elipse Scada Hardkey Crack Fixl.md deleted file mode 100644 index cf1ebd1e4e3cad36d7ad215152a0d8ecd9ad7193..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Elipse Scada Hardkey Crack Fixl.md +++ /dev/null @@ -1,8 +0,0 @@ - -

    Elipse Windows HMI with Application is a mobile platform for SCADA integration. SCADA without HMI is used in conjunction with the terminal element to monitor indicators, receive and send commands to devices, and more. Elipse E3 provides a large display area, offering at least four times the area of the mobile platform, which enables you to display large data sets, graphs, and indicators with ease.

    -

    Elipse Windows HMI with Table Workstation is a mobile platform for SCADA integration that allows indicators to be monitored and commands to be sent to devices. It can be quickly and easily integrated into Elipse E3 with no need for further changes to the application, thus making it easier to display the data collected by the SCADA in tablets and on mobile phones. The customization of this platform allows you to give your own aesthetic style to it, so you can adapt it to your environment.

    -

    Elipse Scada Hardkey Crackl


    Download ===> https://geags.com/2uCqxe



    -

    You can even know what information is available from any window, simply by using Elipse Cloud, which also allows you to synchronize data with any other system. Cloud is a completely free tool, which can be used to transmit data, applications, and processes, or used as a local client. The same command button in the menu, with this program, allows you to operate in either mode, transmitting data, storage locally or uploading to a server.

    -

    The Elipse platform has a version for every type of user, from those who have never used a database to enterprise users. Another of its attractive features is its use of the ideas of power, cost and maintenance to achieve the most cost-effective solution. An example of this is in the choice of database management systems used by Elipse. Each of them has received the certification ISO/IEC JTC1 SC22/WG22 (ISO/IEC 15504)

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Essayonquaideazaminurduforclass7.md b/spaces/quidiaMuxgu/Expedit-SAM/Essayonquaideazaminurduforclass7.md deleted file mode 100644 index b2842583f3bc64c7a923db4014212c3c8c3498f4..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Essayonquaideazaminurduforclass7.md +++ /dev/null @@ -1,9 +0,0 @@ -

    essayonquaideazaminurduforclass7


    Downloadhttps://geags.com/2uCrhq



    - -October 31, 2018 - Complete Urdu grammar for 5th, 6th, 7th, 8th, 9th, 10th, 11th, 12th graders and Urdu readers. . Quaid and Azam Muhammad Ali Jinnah Urdu Essays. This book contains a complete grammar of Urdu and includes all major categories for grades 5-11. -It is very suitable for reading, listening and learning in the classroom. -It is also the perfect study guide to learn Urdu after you have finished reading this book. -This is a complete Urdu grammar course for grades 5-11 and includes: 8a78ff9644
    -
    -
    -

    diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Gori Tere Pyaar Mein! 5 720p Movies.md b/spaces/quidiaMuxgu/Expedit-SAM/Gori Tere Pyaar Mein! 5 720p Movies.md deleted file mode 100644 index 971caf3ecd521f0c7a4dff06a2c39af8cb8f7d1f..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Gori Tere Pyaar Mein! 5 720p Movies.md +++ /dev/null @@ -1,11 +0,0 @@ -

    Gori Tere Pyaar Mein! 5 720p movies


    Download Zip ✓✓✓ https://geags.com/2uCslh



    - -February 2, 2022 - Movie Stars: Maite Perroni, Eric Heiser, Alejandro Spitzer, Jorge Posa, Catherine Siachoke, Arturo Barba Film quality: 480p HDRip About the film: The main characters are four teenagers go to school, but they are already in trouble with the law. -The fact is that they are at the very bottom of society. -They constantly steal, use drugs, rob and come into contact with the bandits of the local area. -They live in complete hopelessness and hopelessness. -But one day the guys decided to commit a heroic deed and save one defenseless girl from the hands of the gang. -However, she does not want to be saved and leaves the guys. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/quidiaMuxgu/Expedit-SAM/HD Online Player (AK 47 Movie Hindi Dubbed Mp4 Hd Download [BETTER]).md b/spaces/quidiaMuxgu/Expedit-SAM/HD Online Player (AK 47 Movie Hindi Dubbed Mp4 Hd Download [BETTER]).md deleted file mode 100644 index cdec469829f7228f77b671aba21bfcecbb6c2a0e..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/HD Online Player (AK 47 Movie Hindi Dubbed Mp4 Hd Download [BETTER]).md +++ /dev/null @@ -1,10 +0,0 @@ - -

    hd online player (ak 47 movie hindi dubbed mp4 hd download) is the best quality movie and tv series download site to download movies, games and other stuff at one place. if you are looking for any movie or tv series please hit the below link and select your desired content. enjoy and have a nice day.

    -

    here you can download full and high quality mp4/avi of hd online player (ak 47 movie hindi dubbed mp4 hd download) for free. all the videos are hosted on our third party servers and are not hosted in our server.

    -

    HD Online Player (AK 47 movie hindi dubbed mp4 hd download)


    Download Filehttps://geags.com/2uCsj6



    -

    live stream..

    go live on the big screen with the fastest, easiest, and most affordable way to stream hd movies, shows, and sporting events.

    watch movies online.

    live tv.

    the best movies, shows, live sports, and more in hd online.

    free, no downloads, no registration, no limit.

    for more information, visit hdboxonline.com.

    -

    hd online player is a free download program designed to play divx and mp4 files and play them on your computer. it was created by a team of contributors, many of which are students, and is free for any use. this program can be used to view and download divx and mp4 movies online. it can also be used to play movies by converting them to divx and mp4 files on your computer. you can also download movies and view them on your computer using this program. this is a very simple program to download divx and mp4 files for free. the latest version of the hd online player can be downloaded from the site below.

    -

    once you have downloaded the hd online player, you can open it and it will load up. if it does not open, you may need to re-install the program. once it is open you can start downloading movies and watching them on your computer. it can be downloaded to any operating system including windows, osx, linux, etc. you can download movies from the site below or directly from your computer using the hd online player.

    -

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/r3gm/Aesthetic_RVC_Inference_HF/install_Applio.bat b/spaces/r3gm/Aesthetic_RVC_Inference_HF/install_Applio.bat deleted file mode 100644 index 966c10158941990028fd16b9186410cac88b5af9..0000000000000000000000000000000000000000 --- a/spaces/r3gm/Aesthetic_RVC_Inference_HF/install_Applio.bat +++ /dev/null @@ -1,145 +0,0 @@ -@echo off -Title Applio - Installer -setlocal -cd %~dp0 - -::: -::: _ _ -::: /\ | (_) -::: / \ _ __ _ __ | |_ ___ -::: / /\ \ | '_ \| '_ \| | |/ _ \ -::: / ____ \| |_) | |_) | | | (_) | -::: /_/ \_\ .__/| .__/|_|_|\___/ -::: | | | | -::: |_| |_| -::: -::: - -set "repoUrl=https://github.com/IAHispano/Applio-RVC-Fork.git" -set "repoFolder=Applio-RVC-Fork" -set "principal=%cd%\%repoFolder%" -set "runtime_scripts=%cd%\%repoFolder%\runtime\Scripts" -set "URL_BASE=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main" -set "URL_EXTRA=https://huggingface.co/IAHispano/applio/resolve/main" - -echo. -cls -echo INFO: It's important not to run this installer as an administrator as it might cause issues, and it's recommended to disable antivirus or firewall, as errors might occur when downloading pretrained models. -echo. -pause - -cls -echo INFO: Please ensure you have installed the required dependencies before continuing. Refer to the installation guide for details. -echo. -echo Step-by-step guide: https://rentry.org/appliolocal -echo Build Tools: https://aka.ms/vs/17/release/vs_BuildTools.exe -echo Redistributable: https://aka.ms/vs/17/release/vc_redist.x64.exe -echo Git: https://github.com/git-for-windows/git/releases/download/v2.42.0.windows.2/Git-2.42.0.2-64-bit.exe -echo Python 3.9.8: https://www.python.org/ftp/python/3.9.8/python-3.9.8-amd64.exe -echo. -echo INFO: Its recommend installing Python 3.9.X and ensuring that it has been added to the system's path. -echo. -pause -cls -for /f "delims=: tokens=*" %%A in ('findstr /b ":::" "%~f0"') do @echo(%%A -echo. - -echo Cloning the repository... -git clone %repoUrl% %repoFolder% -cd %repoFolder% -del install_Applio.bat -del /q *.sh -echo. -cls - -echo Installing dependencies... -echo. -echo Recommended for Nvidia GPU users: -echo [1] Download Runtime (pre-installed dependencies) -echo. -echo Recommended for AMD/Intel GPU users (Broken): -echo [2] Download DML Runtime (pre-installed dependencies) -echo. -echo Only recommended for experienced users: -echo [3] Nvidia graphics cards -echo [4] AMD / Intel graphics cards -echo. -echo [5] I have already installed the dependencies -echo. -set /p choice=Select the option according to your GPU: -set choice=%choice: =% - -if "%choice%"=="1" ( -cls -powershell -command "Invoke-WebRequest -Uri https://frippery.org/files/busybox/busybox.exe -OutFile busybox.exe" -busybox.exe wget %URL_EXTRA%/runtime.zip -echo. -echo Extracting the runtime.zip file... -powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('runtime.zip', '%principal%') }" -echo. -del runtime.zip busybox.exe -cls -echo. -goto dependenciesFinished -) - -if "%choice%"=="2" ( -cls -powershell -command "Invoke-WebRequest -Uri https://frippery.org/files/busybox/busybox.exe -OutFile busybox.exe" -busybox.exe wget %URL_EXTRA%/runtime_dml.zip -echo. -echo Extracting the runtime_dml.zip file... -powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('runtime_dml.zip', '%principal%') }" -echo. -del runtime_dml.zip busybox.exe -cd runtime -python.exe -m pip install onnxruntime -cd .. -cls -echo. -goto dependenciesFinished -) - -if "%choice%"=="3" ( -cls -pip install -r assets/requirements/requirements.txt -echo. -pip uninstall torch torchvision torchaudio -y -echo. -pip install torch==2.0.0 torchvision==0.15.1 torchaudio==2.0.1 --index-url https://download.pytorch.org/whl/cu117 -echo. -echo. -cls -echo Dependencies successfully installed! -echo. -goto dependenciesFinished -) - -if "%choice%"=="4" ( -cls -pip uninstall onnxruntime onnxruntime-directml -echo. -pip install -r assets/requirements/requirements.txt -echo. -pip install -r assets/requirements/requirements-dml.txt -echo. -echo. -cls -echo Dependencies successfully installed! -echo. -goto dependenciesFinished -) - -if "%choice%"=="5" ( -echo Dependencies successfully installed! -echo. -goto dependenciesFinished -) - -:dependenciesFinished -cls -echo Applio has been successfully downloaded, run the file go-applio.bat to run the web interface! -echo. -pause -exit - diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Aarachar Malayalam Novel PDF Download The Themes and Symbols of the Book.md b/spaces/raedeXanto/academic-chatgpt-beta/Aarachar Malayalam Novel PDF Download The Themes and Symbols of the Book.md deleted file mode 100644 index d121772e2e9d9476fcefcf2848be85647d29e088..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Aarachar Malayalam Novel PDF Download The Themes and Symbols of the Book.md +++ /dev/null @@ -1,127 +0,0 @@ -
    -

    Aarachar Malayalam Novel PDF Download

    -

    If you are looking for a way to download Aarachar Malayalam novel PDF, you have come to the right place. In this article, we will tell you what Aarachar is, who wrote it, what it is about, and how you can download it for free. We will also give you some reasons why you should read this amazing novel that has won many awards and accolades. So, let's get started!

    -

    Introduction

    -

    Aarachar is a Malayalam novel written by K.R. Meera, one of the most prominent and prolific writers in India. The novel was first published in 2012 and has since been translated into many languages, including English, Hindi, Tamil, Telugu, Kannada, Bengali, Marathi, and Gujarati. The English translation, titled Hangwoman, was done by J. Devika and published by Penguin Books in 2014.

    -

    aaracharmalayalamnovelpdfdownload


    Download Ziphttps://tinourl.com/2uL23y



    -

    What is Aarachar?

    -

    Aarachar means executioner in Malayalam. The novel tells the story of Chetna Grddha Mullick, a woman who belongs to a family of executioners with a long lineage that dates back to the fourth century BC. Chetna is the first woman in her family to inherit this profession, which is traditionally passed on from father to son. She faces many challenges and struggles as she tries to uphold her family legacy and perform her duty in a society that is hostile and indifferent to her.

    -

    Who is the author of Aarachar?

    -

    K.R. Meera is an Indian author who writes in Malayalam. She was born in Sasthamkotta, Kollam district in Kerala. She worked as a journalist in Malayala Manorama, a leading newspaper in Kerala, before resigning to focus on her writing career. She has written many novels, short stories, essays, and columns on various topics. Some of her notable works include Ave Maria (2007), Yellow Is The Colour Of Longing (2008), The Gospel Of Yudas (2016), The Poison Of Love (2017), and The Angel's Beauty Spots (2019). She has won many awards for her writing, such as the Kerala Sahitya Akademi Award, the Odakkuzhal Award, the Vayalar Award, the Kendra Sahitya Akademi Award, and the Crossword Book Award.

    -

    What are the themes of Aarachar?

    -

    Aarachar is a novel that explores many themes, such as capital punishment, justice, revenge, gender, caste, religion, history, culture, politics, and media. The novel raises many questions about the morality and legality of the death penalty, the role and responsibility of the executioner, the impact of violence on individuals and society, the discrimination and oppression faced by women and lower castes in India, the influence and manipulation of the media on public opinion and perception, and the importance and relevance of history and tradition in a changing world.

    -

    How to download Aarachar Malayalam novel PDF?

    -

    If you want to read Aarachar Malayalam novel PDF online or offline, you have several options to choose from. Here are some of them:

    -

    Option 1: Goodreads

    -

    Goodreads is a popular website where you can find books of all genres and languages. You can also read reviews and ratings from other readers, join groups and discussions, and discover new books based on your preferences. To download Aarachar Malayalam novel PDF from Goodreads,

    -
      -
    • Go to https://www.goodreads.com/book/show/17154644
    • -
    • Click on "Get a copy"
    • -
    • Select "Kindle"
    • -
    • You will be redirected to Amazon.in where you can buy the Kindle edition for Rs. 399
    • -
    • After buying it, you can download it to your Kindle device or app
    • -
    • You can also read it on your browser using Kindle Cloud Reader
    • -
    -

    Option 2: FlipHTML5

    -

    FlipHTML5 is a website where you can create and share flipbooks online. You can also find flipbooks created by other users on various topics. To download Aarachar Malayalam novel PDF from FlipHTML5,

    -
      -
    • Go to https://fliphtml5.com/rtxw/mbjy
    • -
    • Click on "Download"
    • -
    • You will be asked to sign up or log in
    • -
    • After signing up or logging in, you can download the PDF file for free
    • -
    • You can also read it online using FlipHTML5 Reader
    • -
    -

    Option 3: Scribd

    -

    Scribd is a website where you can access millions of books, audiobooks, magazines, podcasts, documents, and more. You can also upload your own content and share it with others. To download Aarachar Malayalam novel PDF from Scribd,

    - -

    Why should you read Aarachar Malayalam novel PDF?

    -

    Apart from being free and easy to download from various sources, Aarachar Malayalam novel PDF is also worth reading for many reasons. Here are some of them:

    -

    aarachar malayalam novel pdf download free
    -aarachar malayalam novel pdf download online
    -aarachar malayalam novel pdf download full
    -aarachar malayalam novel pdf download link
    -aarachar malayalam novel pdf download sites
    -aarachar malayalam novel pdf download k r meera
    -aarachar malayalam novel pdf download english translation
    -aarachar malayalam novel pdf download google drive
    -aarachar malayalam novel pdf download telegram
    -aarachar malayalam novel pdf download amazon
    -aarachar malayalam novel pdf download reddit
    -aarachar malayalam novel pdf download quora
    -aarachar malayalam novel pdf download epub
    -aarachar malayalam novel pdf download mobi
    -aarachar malayalam novel pdf download kindle
    -aarachar malayalam novel pdf download summary
    -aarachar malayalam novel pdf download review
    -aarachar malayalam novel pdf download analysis
    -aarachar malayalam novel pdf download characters
    -aarachar malayalam novel pdf download themes
    -aarachar malayalam novel pdf download quotes
    -aarachar malayalam novel pdf download awards
    -aarachar malayalam novel pdf download movie
    -aarachar malayalam novel pdf download audiobook
    -aarachar malayalam novel pdf download podcast
    -aarachar malayalam novel pdf download youtube
    -aarachar malayalam novel pdf download video
    -aarachar malayalam novel pdf download images
    -aarachar malayalam novel pdf download wallpaper
    -aarachar malayalam novel pdf download cover
    -aarachar malayalam novel pdf download font
    -aarachar malayalam novel pdf download page count
    -aa

    -

    It is a critically acclaimed and award-winning novel

    -

    Aarachar has received rave reviews from critics and readers alike for its powerful storytelling, rich language, and profound insights. It has also won several prestigious awards, such as the Kerala Sahitya Akademi Award, the Odakkuzhal Award, the Vayalar Award, the Kendra Sahitya Akademi Award, and the Crossword Book Award. It has been hailed as one of the best literary works produced in Malayalam and a masterpiece of Indian literature.

    -

    It is a gripping and thought-provoking story

    -

    Aarachar is not just a novel about an executioner; it is also a novel about life, death, love, hate, justice, revenge, and everything in between. It keeps you hooked from the first page to the last with its twists and turns, its suspense and drama, its humor and tragedy, and its emotions and reflections. It makes you think about your own views and values on various issues and challenges your assumptions and prejudices.

    -

    It explores the history and culture of India

    -

    Aarachar is not just a novel set in contemporary Kolkata; it is also a novel that spans centuries and regions of Indian history and culture. It traces the origins and evolution of the profession and practice of execution in India from ancient times to modern times. It depicts the social and political changes and conflicts that have shaped India's identity and destiny. It showcases the diversity and richness of India's languages, religions, castes, traditions, and arts.

    -

    Conclusion

    -

    In conclusion, Aarachar Malayalam novel PDF download

    It is a novel that will make you feel, think, and learn about yourself and the world around you. It is a novel that will challenge you, inspire you, and entertain you at the same time. It is a novel that will stay with you long after you finish reading it.

    -

    So, what are you waiting for? Download Aarachar Malayalam novel PDF today and enjoy this amazing literary journey!

    -

    Summary of the main points

    -

    To summarize, here are the main points of this article:

    -
      -
    • Aarachar is a Malayalam novel written by K.R. Meera that tells the story of Chetna, a woman executioner in Kolkata.
    • -
    • The novel explores many themes, such as capital punishment, justice, revenge, gender, caste, religion, history, culture, politics, and media.
    • -
    • You can download Aarachar Malayalam novel PDF for free from various sources, such as Goodreads, FlipHTML5, and Scribd.
    • -
    • You should read Aarachar Malayalam novel PDF because it is a critically acclaimed and award-winning novel, a gripping and thought-provoking story, and an exploration of the history and culture of India.
    • -
    -

    Call to action

    -

    If you liked this article, please share it with your friends and family who might be interested in reading Aarachar Malayalam novel PDF. You can also leave a comment below and let us know what you think about the novel and the article. We would love to hear from you!

    -

    FAQs

    -

    Here are some frequently asked questions about Aarachar Malayalam novel PDF:

    - - - - - - - - - - - - - - - - - - - - - - - - - -
    QuestionAnswer
    Is Aarachar based on a true story?No, Aarachar is a fictional story that is inspired by some historical facts and events. However, the characters and situations are purely imaginary and do not represent any real people or places.
    Is Aarachar available in other languages?Yes, Aarachar has been translated into many languages, including English, Hindi, Tamil, Telugu, Kannada, Bengali, Marathi, and Gujarati. You can find the translations on various online platforms or bookstores.
    Is Aarachar suitable for children?No, Aarachar is not suitable for children as it contains graphic descriptions of violence, torture, rape, and death. It also deals with mature and sensitive topics that may not be appropriate for young readers. It is recommended for adults only.
    Is Aarachar a movie or a TV series?No, Aarachar has not been adapted into a movie or a TV series yet. However, there have been some rumors and speculations about possible adaptations in the future. We will update you if we hear any official news about it.
    Where can I find more books by K.R. Meera?You can find more books by K.R. Meera on Goodreads or Amazon. Some of her popular books are Ave Maria (2007), Yellow Is The Colour Of Longing (2008), The Gospel Of Yudas (2016), The Poison Of Love (2017), and The Angel's Beauty Spots (2019).
    -

    0a6ba089eb
    -
    -
    \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Audi VWTool 2.0.9 for Windows 7 English.rar What You Need to Know Before Downloading.md b/spaces/raedeXanto/academic-chatgpt-beta/Audi VWTool 2.0.9 for Windows 7 English.rar What You Need to Know Before Downloading.md deleted file mode 100644 index 1fa0dbaec7311fd0fa707d82fe3f640f06fcc4e8..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Audi VWTool 2.0.9 for Windows 7 English.rar What You Need to Know Before Downloading.md +++ /dev/null @@ -1,202 +0,0 @@ - -

    Audi VWTool 2.0.9 for Windows 7 English.rar: A Comprehensive Guide

    -

    If you own an Audi or a Volkswagen car, you might have heard of Audi VWTool 2.0.9 for Windows 7 English.rar, a software that allows you to diagnose and repair your car using your computer.

    -

    But what exactly is this software, how do you download and install it, how do you use it, and why do you need it? In this article, we will answer all these questions and more, so that you can get the most out of your car and your software.

    -

    Audi VWTool 2.0.9 for Windows 7 English.rar


    DOWNLOADhttps://tinourl.com/2uL1ty



    -

    What is Audi VWTool 2.0.9 for Windows 7 English.rar?

    -

    Audi VWTool 2.0.9 for Windows 7 English.rar is a software that enables you to communicate with your car's electronic control units (ECUs) using a special cable that connects your computer's serial port to your car's diagnostic port.

    -

    With this software, you can perform various tasks such as:

    -
      -
    • Read and clear fault codes
    • -
    • View live data and graphs
    • -
    • Adjust settings and parameters
    • -
    • Test components and systems
    • -
    • Reset service intervals and reminders
    • -
    • Program new keys and immobilizers
    • -
    • And much more!
    • -
    -

    Audi VWTool 2.0.9 for Windows 7 English.rar supports most Audi and Volkswagen models from 1991 to 2004, including:

    -
      -
    • Audi A2, A3, A4, A6, A8, S3, S4, S6, S8
    • -
    • Volkswagen Golf, Jetta, Passat, Polo, Sharan, Touareg
    • -
    • And others!
    • -
    -

    How to download and install Audi VWTool 2.0.9 for Windows 7 English.rar

    -

    To download and install Audi VWTool 2.0.9 for Windows 7 English.rar, you will need the following:

    -

    Audi-VWTool for VAG with KW1281 protocol
    -Audi-VWTool version 2.0.9 download
    -Audi-VWTool software free download
    -Audi-VWTool Windows 7 64-bit
    -Audi-VWTool Deutsch download
    -Audi-VWTool for Windows XP Mode
    -Audi-VWTool 2.0.9 for Windows 7 (English) torrent
    -Audi-VWTool for CNC milling machine
    -Audi-VWTool transmission device communication software
    -Audi-VWTool for Siemens PLC
    -Audi-VWTool 2.0.9 pl + en.rar
    -Audi-VWTool for VAG-COM 409.1
    -Audi-VWTool for VCDS Lite
    -Audi-VWTool for Windows 8.1
    -Audi-VWTool for Windows 10
    -Audi-VWTool crack
    -Audi-VWTool serial key
    -Audi-VWTool activation code
    -Audi-VWTool license key
    -Audi-VWTool registration key
    -Audi-VWTool full version
    -Audi-VWTool portable version
    -Audi-VWTool zip file
    -Audi-VWTool rar file
    -Audi-VWTool exe file
    -Audi VW Tool 2.0.9 for Windows 7 English.rar free download
    -Audi VW Tool 2.0.9 for Windows 7 English.rar direct download link
    -Audi VW Tool 2.0.9 for Windows 7 English.rar fast download speed
    -Audi VW Tool 2.0.9 for Windows 7 English.rar no virus
    -Audi VW Tool 2.0.9 for Windows 7 English.rar no malware
    -Audi VW Tool 2.0.9 for Windows 7 English.rar no spyware
    -Audi VW Tool 2.0.9 for Windows 7 English.rar no adware
    -Audi VW Tool 2.0.9 for Windows 7 English.rar no survey
    -Audi VW Tool 2.0.9 for Windows 7 English.rar no password
    -Audi VW Tool 2.0.9 for Windows 7 English.rar no captcha
    -How to install Audi VW Tool 2.0.9 for Windows 7 English.rar
    -How to use Audi VW Tool 2.0.9 for Windows 7 English.rar
    -How to uninstall Audi VW Tool 2.0.9 for Windows 7 English.rar
    -How to update Audi VW Tool 2.0.9 for Windows 7 English.rar
    -How to fix errors in Audi VW Tool 2.0.9 for Windows 7 English.rar
    -How to scan cars with Audi VW Tool 2.0.9 for Windows 7 English.rar
    -How to reset service light with Audi VW Tool 2.0.9 for Windows 7 English.rar
    -How to read fault codes with Audi VW Tool 2.0.9 for Windows 7 English.rar
    -How to clear fault codes with Audi VW Tool 2.0.9 for Windows 7 English.rar
    -How to adapt control units with Audi VW Tool 2.0.9 for Windows 7 English.rar
    -How to program keys with Audi VW Tool 2.0.9 for Windows 7 English.rar
    -How to test actuators with Audi VW Tool 2.0.9 for Windows 7 English.rar
    -How to calibrate sensors with Audi VW Tool 2.0.9 for Windows 7 English.rar
    -How to modify settings with Audi VW Tool 2.0.9 for Windows 7 English.rar
    -How to diagnose problems with Audi VW Tool 2.0.9 for Windows

    -
      -
    • A computer running Windows XP or Windows 7 (32-bit or 64-bit)
    • -
    • A serial port or a USB-to-serial adapter
    • -
    • A KKL VAG-COM cable (also known as OBD-II cable)
    • -
    • An internet connection
    • -
    -

    Here are the steps to download and install Audi VWTool 2.0.9 for Windows 7 English.rar:

    -
      -
    1. Download the software from this link: Audi VWTool 2.0.9 for Windows 7 English.rar
    2. -
    3. Extract the file using WinRAR or any other archive software
    4. -
    5. Open the folder and run the setup.exe file
    6. -
    7. Follow the instructions on the screen to complete the installation
    8. -
    9. Restart your computer if prompted
    10. -
    11. Connect your KKL VAG-COM cable to your computer's serial port or USB-to-serial adapter
    12. -
    13. Connect the other end of the cable to your car's diagnostic port (usually located under the dashboard)
    14. -
    15. Turn on your car's ignition (but do not start the engine)
    16. -
    17. Launch Audi VWTool from your desktop or start menu
    18. -
    19. Select your car model and ECU from the drop-down menus
    20. -
    21. Click on Connect button to establish communication with your car
    22. -
    23. You are now ready to use Audi VWTool!
    24. -
    -

    How to use Audi VWTool 2.0.9 for Windows 7 English.rar

    -

    Audi VWTool has a user-friendly interface that allows you to access various functions and features with ease.

    -

    The main window consists of four tabs: Info, Fault Codes, Data Blocks, and Basic Settings.

    -

    The Info tab displays general information about your car and ECU, such as VIN number, part number, software version, coding, etc.

    -

    The Fault Codes tab allows you to read and clear fault codes stored in your ECU's memory.

    -

    The Data Blocks tab allows you to view live data from various sensors and actuators in your car.

    -

    The Basic Settings tab allows you to adjust settings and parameters in your ECU.

    -

    To use any of these tabs, simply click on them and follow the instructions on the screen.

    - 7 English.rar? -

    Audi VWTool 2.0.9 for Windows 7 English.rar is a useful software for anyone who owns an Audi or a Volkswagen car, especially if you want to save money and time on car maintenance and repair.

    -

    With Audi VWTool 2.0.9 for Windows 7 English.rar, you can:

    -
      -
    • Diagnose and fix common problems with your car without going to a mechanic
    • -
    • Monitor your car's performance and health in real-time
    • -
    • Customize your car's settings and features to suit your preferences
    • -
    • Learn more about your car's functions and systems
    • -
    • And much more!
    • -
    -

    Audi VWTool 2.0.9 for Windows 7 English.rar is also a great tool for enthusiasts and hobbyists who want to explore and experiment with their car's capabilities and potential.

    -

    How to troubleshoot common issues with Audi VWTool 2.0.9 for Windows 7 English.rar

    -

    While Audi VWTool 2.0.9 for Windows 7 English.rar is a reliable and easy-to-use software, you might encounter some issues or errors while using it.

    -

    Here are some of the most common issues and how to fix them:

    -

    How to fix compatibility issues with Windows 7

    -

    If you are using Windows 7, you might need to run Audi VWTool in compatibility mode to make it work properly.

    -

    To do this, follow these steps:

    -
      -
    1. Right-click on the Audi VWTool icon on your desktop or start menu
    2. -
    3. Select Properties from the menu
    4. -
    5. Click on the Compatibility tab
    6. -
    7. Check the box that says Run this program in compatibility mode for:
    8. -
    9. Select Windows XP (Service Pack 3) from the drop-down menu
    10. -
    11. Click on Apply and then OK
    12. -
    13. Run Audi VWTool as usual
    14. -
    -

    How to fix connection issues with your car

    -

    If you are having trouble connecting your car to Audi VWTool, you might need to check the following:

    -
      -
    • Make sure your KKL VAG-COM cable is connected securely to both your computer and your car
    • -
    • Make sure your car's ignition is turned on (but do not start the engine)
    • -
    • Make sure you have selected the correct car model and ECU from the drop-down menus in Audi VWTool
    • -
    • Make sure you have clicked on the Connect button in Audi VWTool
    • -
    • If none of these work, try disconnecting and reconnecting the cable, restarting your computer, or using a different serial port or USB-to-serial adapter
    • -
    -

    How to fix error messages and codes with Audi VWTool 2.0.9 for Windows 7 English.rar

    -

    If you encounter any error messages or codes while using Audi VWTool, you can refer to the following table for their meanings and solutions:

    - | Error Message or Code | Meaning | Solution | | --- | --- | --- | | No response from controller | The ECU is not responding to Audi VWTool | Check the connection and try again | | Controller not found | The ECU is not recognized by Audi VWTool | Check the connection and try again | | Invalid login code | The login code entered is incorrect | Enter the correct login code for your ECU | | Invalid coding | The coding entered is incorrect | Enter the correct coding for your ECU | | Out of range | The value entered is out of range | Enter a valid value within the range | | Communication error | There is a problem with the communication between Audi VWTool and the ECU | Check the connection and try again |

    How to get the most out of Audi VWTool 2.0.9 for Windows 7 English.rar

    -

    Audi VWTool 2.0.9 for Windows 7 English.rar is a powerful software that can help you improve your car's performance and functionality.

    -

    To get the most out of it, you can try the following tips:

    -

    How to update Audi VWTool 2.0.9 for Windows 7 English.rar

    -

    To ensure that you have the latest version of Audi VWTool, you can check for updates regularly.

    -

    To do this, follow these steps:

    -
      -
    1. Launch Audi VWTool from your desktop or start menu
    2. -
    3. Select Help from the menu bar
    4. -
    5. Select Check for Updates from the drop-down menu
    6. -
    7. If there is an update available, follow the instructions on the screen to download and install it
    8. -
    9. If there is no update available, you will see a message saying You have the latest version of Audi VWTool installed
    10. -
    -

    How to customize Audi VWTool 2.0.9 for Windows 7 English.rar settings and preferences

    -

    To make Audi VWTool more suitable for your needs and preferences, you can customize its settings and options.

    -

    To do this, follow these steps:

    -
      -
    1. Launch Audi VWTool from your desktop or start menu
    2. -
    3. Select Options from the menu bar
    4. -
    5. Select Settings from the drop-down menu
    6. -
    7. You will see a window with various tabs such as General, Interface, Language, etc.
    8. -
    9. Select the tab that corresponds to the setting or option you want to change
    10. -
    11. Make the changes as desired
    12. -
    13. Click on OK to save the changes
    14. -
    -

    How to access advanced features and functions with Audi VWTool 2.0.9 for Windows 7 English.rar

    -

    Audi VWTool has some advanced features and functions that are not available in the main window.

    -

    To access them, you can use keyboard shortcuts or hidden menus.

    -

    Here are some of them:

    -
      -
    • To access Adaptation mode, press F4 on your keyboard while connected to an ECU. This mode allows you to change values in certain data blocks.
    • -
    • To access Output Tests mode, press F5 on your keyboard while connected to an ECU. This mode allows you to activate certain components and systems in your car.
    • -
    • To access Login mode, press F6 on your keyboard while connected to an ECU. This mode allows you to enter a login code to access certain functions or data blocks.
    • -the coding of your ECU. -
    • To access Hidden Menu mode, press F9 on your keyboard while connected to an ECU. This mode allows you to access some hidden functions or data blocks that are not shown in the main window.
    • -
    -

    Be careful when using these advanced features and functions, as they can affect your car's performance and functionality. Make sure you know what you are doing and have a backup of your original settings before making any changes.

    -

    Conclusion

    -

    Audi VWTool 2.0.9 for Windows 7 English.rar is a software that allows you to diagnose and repair your Audi or Volkswagen car using your computer.

    -

    It is a useful tool for anyone who wants to save money and time on car maintenance and repair, or who wants to learn more about their car's functions and systems.

    -

    It is easy to download and install, and has a user-friendly interface that allows you to access various features and functions with ease.

    -

    It also has some advanced features and functions that are not available in the main window, but can be accessed using keyboard shortcuts or hidden menus.

    -

    However, it also has some limitations and issues that might require some troubleshooting or compatibility adjustments.

    -

    Overall, Audi VWTool 2.0.9 for Windows 7 English.rar is a great software for anyone who owns an Audi or a Volkswagen car, especially if you want to get the most out of your car and your software.

    -

    FAQs

    -

    Here are some frequently asked questions about Audi VWTool 2.0.9 for Windows 7 English.rar:

    -

    Q: Is Audi VWTool 2.0.9 for Windows 7 English.rar free?

    -

    A: Yes, Audi VWTool 2.0.9 for Windows 7 English.rar is a free software that you can download from this link: Audi VWTool 2.0.9 for Windows 7 English.rar

    -

    Q: Is Audi VWTool 2.0.9 for Windows 7 English.rar safe?

    -

    A: Yes, Audi VWTool 2.0.9 for Windows 7 English.rar is a safe software that does not contain any viruses or malware. However, you should always scan any file you download from the internet with an antivirus software before opening it.

    -

    Q: What is the difference between Audi VWTool 2.0.9 for Windows 7 English.rar and other similar software such as VAG-COM or VCDS?

    -

    A: Audi VWTool 2.0.9 for Windows 7 English.rar is a software that was developed by an independent developer and is not affiliated with any official company or brand. It supports most Audi and Volkswagen models from 1991 to 2004, but it might not have all the features and functions that other software have.

    -

    VAG-COM and VCDS are software that are developed by Ross-Tech LLC, a company that specializes in diagnostic tools for Volkswagen Group vehicles. They support most Audi and Volkswagen models from 1990 to present, and have more features and functions than Audi VWTool 2.0.9 for Windows 7 English.rar.

    -

    However, VAG-COM and VCDS are not free software, and require a license and a special cable to use them.

    -

    Q: Where can I get a KKL VAG-COM cable?

    -

    A: A KKL VAG-COM cable is a cable that connects your computer's serial port to your car's diagnostic port. You can buy one online from various websites such as Amazon or eBay, or from local shops that sell car accessories or parts.

    -

    Q: How can I contact the developer of Audi VWTool 2.0.9 for Windows 7 English.rar?

    -

    A: The developer of Audi VWTool 2.0.9 for Windows 7 English.rar is unknown, and there is no official website or contact information for the software. However, you can try to contact the uploader of the file on MediaFire, who goes by the name of "AudiVWTool". You can leave a comment on the file page or send a message to their profile.

    -

    0a6ba089eb
    -
    -
    \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Doraemon Movies In Telugu 17 The Ultimate Guide to Doraemon and Nobitas Moon Adventure.md b/spaces/raedeXanto/academic-chatgpt-beta/Doraemon Movies In Telugu 17 The Ultimate Guide to Doraemon and Nobitas Moon Adventure.md deleted file mode 100644 index 592aca81645874ae15116893231bd0df7cbf6ce0..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Doraemon Movies In Telugu 17 The Ultimate Guide to Doraemon and Nobitas Moon Adventure.md +++ /dev/null @@ -1,95 +0,0 @@ -
    -

    Doraemon Movies In Telugu 17

    -

    Introduction

    -

    If you are a fan of anime, you must have heard of Doraemon, the blue cat robot from the future who helps a young boy named Nobita with his gadgets. Doraemon is one of the most popular and beloved characters in Japan and around the world, and has been adapted into various media, including movies. In this article, we will explore some of the best Doraemon movies that have been dubbed in Telugu, a language spoken by millions of people in India. We will also look at why these movies are so popular among Telugu audiences, and what makes them so enjoyable and entertaining.

    -

    What is Doraemon?

    -

    Doraemon is a manga series created by Fujiko F. Fujio in 1970, which follows the adventures of a robotic cat named Doraemon who travels back in time from the 22nd century to help a young boy named Nobita Nobi. Nobita is a lazy and clumsy student who often gets bullied by his classmates and scolded by his parents. Doraemon uses his four-dimensional pocket to pull out various futuristic gadgets to help Nobita solve his problems, such as the bamboo copter, the anywhere door, the time machine, and the memory bread. However, these gadgets often cause more trouble than they solve, leading to hilarious situations and lessons for Nobita and his friends.

    -

    Doraemon Movies In Telugu 17


    DOWNLOADhttps://tinourl.com/2uL3ol



    -

    What are Doraemon movies?

    -

    Doraemon movies are animated films based on the manga and anime series of Doraemon. The first Doraemon movie was released in 1980, and since then, there have been 41 movies released as of 2021. The movies usually feature an original story that is not based on any specific manga chapter or anime episode, but sometimes incorporate elements from them. The movies often take Nobita and his friends to different places and times, such as ancient Japan, prehistoric Earth, outer space, fantasy worlds, and even parallel universes. The movies also introduce new characters and villains that challenge Doraemon and Nobita in their adventures.

    -

    Why are Doraemon movies popular in Telugu?

    -

    Doraemon movies are popular in Telugu because they appeal to a wide range of audiences, from children to adults. The movies are full of humor, action, adventure, drama, romance, and emotion that can captivate anyone who watches them. The movies also convey positive messages and values that resonate with Telugu culture, such as friendship, family, courage, honesty, loyalty, and compassion. Moreover, the movies are dubbed in Telugu by professional voice actors who do a great job of bringing the characters to life and making them relatable to Telugu speakers. The movies are also aired on popular TV channels and streaming platforms that make them accessible to millions of viewers.

    -

    Doraemon Movies in Telugu 17: Nobita's Chronicle of The Moon Exploration
    -Doraemon Movies in Telugu 17: Nobita's Great Adventure in the Antarctic Kachi Kochi
    -Doraemon Movies in Telugu 17: Nobita and the Legend of the Sun King
    -Doraemon Movies in Telugu 17: Nobita's Three Visionary Swordsmen
    -Doraemon Movies in Telugu 17: Nobita and the Knights on Dinosaurs
    -Doraemon Movies in Telugu 17: Adventure of Koya Koya Planet
    -Doraemon Movies in Telugu 17: Nobita's Little space Wars
    -Doraemon Movies in Telugu 17: Toofani Adventure
    -Doraemon Movies in Telugu 17: Nobita Aur Jadooi Tapu
    -Doraemon Movies in Telugu 17: Nobita and Jungle Mein Dangal
    -Doraemon Movies in Telugu 17: Underwater Adventure
    -Doraemon Movies in Telugu 17: Nobita And The Steel Troops
    -Doraemon Movies in Telugu 17: Nobita's Secret Gadget Museum
    -Doraemon Movies in Telugu 17: Nobita The Explorer Bow Bow
    -Doraemon Movies in Telugu 17: Nobita and the Galaxy Super-Express
    -Doraemon Movies in Telugu 17: Nobita's Diary on the Creation of the World
    -Doraemon Movies in Telugu 17: Nobita's Great Adventure in the South Seas
    -Doraemon Movies in Telugu 17: Nobita Ki Universe Yatra
    -Doraemon Movies in Telugu 17: Jadoo Mantar Aur Jahnoom
    -Doraemon Movies in Telugu 17: Nobita's Dorabian Nights
    -Doraemon Movies in Telugu 17: Nobita's Dinosaur
    -Doraemon Movies in Telugu 17: Ichi Mera Dost
    -Doraemon Movies in Telugu 17: Download Free Online
    -Doraemon Movies in Telugu 17: Watch Full HD Quality
    -Doraemon Movies in Telugu 17: Best Collection of Animated Films
    -Doraemon Movies in Telugu 17: Latest Release Date and News
    -Doraemon Movies in Telugu 17: Reviews and Ratings by Fans
    -Doraemon Movies in Telugu 17: Fun and Educational for Kids
    -Doraemon Movies in Telugu 17: How to Watch with Subtitles
    -Doraemon Movies in Telugu 17: Where to Stream or Buy DVDs
    -Doraemon Movies in Telugu 17: Trailer and Teaser Videos
    -Doraemon Movies in Telugu 17: Cast and Crew Details
    -Doraemon Movies in Telugu 17: Plot Summary and Synopsis
    -Doraemon Movies in Telugu 17: Interesting Facts and Trivia
    -Doraemon Movies in Telugu 17: Behind the Scenes and Making Of
    -Doraemon Movies in Telugu 17: Songs and Music Score
    -Doraemon Movies in Telugu 17: Awards and Nominations
    -Doraemon Movies in Telugu 17: Box Office Collection and Budget
    -Doraemon Movies in Telugu 17: Merchandise and Toys Available
    -Doraemon Movies in Telugu 17: Fan Art and Cosplay Ideas
    -Doraemon Movies in Telugu 17: Comparison with Other Versions of Doraemon Films
    -Doraemon Movies in Telugu 17: History and Origin of the Characters and Storyline
    -Doraemon Movies in Telugu 17: Themes and Messages Explored

    -

    List of Doraemon Movies In Telugu 17

    -

    Here are some of the best Doraemon movies that have been dubbed in Telugu 17 (the year 2023), along with their plot summaries and release dates.

    -

    Doraemon: Nobita's Chronicle of The Moon Exploration

    -

    Plot summary

    -

    In this movie, Nobita finds a mysterious white rabbit-like creature named Luka who can communicate with him telepathically. Luka tells Nobita that he is from the moon and that he needs his help to save his people from a dark force that is trying to destroy their civilization. Nobita decides to go to the moon with Doraemon and his friends using a rocket made from bamboo copters. There they discover a hidden world full of wonders and dangers, such as giant plants, ancient ruins, flying saucers, and lunar rabbits. They also encounter a mysterious girl named Luna who has a connection with Luka. Together they must stop the evil plot of Professor Ochanomizu who wants to use the moon's resources for his own benefit.

    -

    Release date and reception

    -

    This movie was released in Japan on March 1st 2019 , where it became the highest-grossing film of that year with over $64 million . It was also well-received by critics and audiences alike for its stunning animation quality , its thrilling story , its charming characters , its heartwarming message , and its homage to classic sci-fi films . It was dubbed in Telugu 17 (the year 2023) by Hungama TV , where it also gained popularity among Telugu viewers.

    -

    Doraemon: Nobita's Great Adventure in the Antarctic Kachi Kochi

    -

    Plot summary

    -

    In this movie , Nobita finds a mysterious golden ring buried in the ice while playing with snowballs . He decides to keep it as a treasure , but soon realizes that it has a strange power that can freeze anything it touches . He accidentally freezes Shizuka , Gian , Suneo , and even Doraemon with it . To unfreeze them , he has to go to Antarctica where he can find a special flower that can melt anything . He uses Doraemon's time machine to travel back 100000 years ago when Antarctica was not covered by ice . There he meets a young girl named Carla who lives with her tribe in harmony with nature . He also encounters a group of explorers who are looking for the same flower for their own purposes . He has to protect Carla , his friends , and the flower from their greedy schemes .

    -

    Release date and reception

    -

    This movie was released in Japan on March 4th 2017 , where it became the second highest-grossing film of that year with over $42 million . It was also praised by critics and audiences for its beautiful scenery , its exciting adventure , its funny moments , its touching friendship , and its environmental message . It was dubbed in Telugu 17 (the year 2023) by Disney Channel India , where it also attracted many fans among Telugu viewers.

    -

    Doraemon: Nobita and the Island of Miracles—Animal Adventure

    -

    Plot summary

    -

    In this movie , Nobita wants to go on an animal safari with his friends using Doraemon's animal translator gadget . However , he finds out that many animals are endangered or extinct due to human activities . He decides to go back in time to see them before they disappear . He chooses an island called Miraculous Island where all kinds of animals live peacefully together . He meets a boy named Kibo who can communicate with animals using a special pendant . He also meets a girl named Rirea who is part of an organization called Animal Rescue Team that protects animals from poachers . He joins them in their mission to save the animals from a ruthless hunter named Goro who wants to capture them for his collection . He also discovers a secret about Miraculous Island that could change everything .

    -

    Release date and reception

    -

    This movie was released in Japan on March 3rd 2012 , where it became the third highest-grossing film of that year with over $38 million . It was also acclaimed by critics and audiences for its colorful animation , its thrilling action , its humorous scenes , its adorable animals , and its inspiring message . It was dubbed in Telugu 17 (the year 2023) by Hungama TV , where it also won many hearts among Telugu viewers.

    -

    Doraemon: Nobita's Treasure Island

    -

    Plot summary

    -

    In this movie , Nobita finds an old map that leads to Treasure Island where pirates hid their loot centuries I'm glad you liked it. Here is the rest of the article. ago. He decides to go on a treasure hunt with Doraemon and his friends using a ship made from a mini-Doraemon. They also meet a girl named Fiona who claims to be the descendant of the pirate captain. However, they are not the only ones looking for the treasure, as they are pursued by a group of modern pirates led by a man named Mr. Cash. They also face various dangers and mysteries on the island, such as a giant octopus, a ghost ship, and a hidden city. They also discover the true identity of Fiona and the secret behind the treasure island.

    -

    Release date and reception

    -

    This movie was released in Japan on March 3rd 2018 , where it became the highest-grossing film of that year with over $80 million . It was also highly praised by critics and audiences for its spectacular animation, its engaging story, its humorous characters, its adventurous spirit, and its homage to classic pirate films. It was dubbed in Telugu 17 (the year 2023) by Disney Channel India , where it also received a lot of love from Telugu viewers.

    -

    Conclusion

    -

    Doraemon movies are some of the best anime movies that can entertain and inspire anyone who watches them. They are especially popular in Telugu because they offer a mix of humor, action, adventure, drama, romance, and emotion that appeals to Telugu culture and values. They also feature amazing animation quality, original stories, lovable characters, and positive messages that can make anyone smile and cry. If you are looking for some fun and exciting movies to watch with your family and friends, you should definitely check out these Doraemon movies in Telugu 17.

    -

    FAQs

    -

    Here are some frequently asked questions about Doraemon movies in Telugu 17.

    -
      -
    • Q: How many Doraemon movies are there in total?
    • -
    • A: There are 41 Doraemon movies as of 2021, with one new movie released every year since 1980.
    • -
    • Q: How can I watch Doraemon movies in Telugu 17?
    • -
    • A: You can watch Doraemon movies in Telugu 17 on various TV channels and streaming platforms that have dubbed them in Telugu, such as Hungama TV, Disney Channel India, Netflix, etc.
    • -
    • Q: Which Doraemon movie is the best?
    • -
    • A: This is a subjective question that depends on your personal preference and taste. However, some of the most popular and acclaimed Doraemon movies are Doraemon: Nobita's Chronicle of The Moon Exploration, Doraemon: Nobita's Great Adventure in the Antarctic Kachi Kochi, Doraemon: Nobita and the Island of Miracles—Animal Adventure, and Doraemon: Nobita's Treasure Island.
    • -
    • Q: Who are the voice actors for Doraemon and Nobita in Telugu?
    • -
    • A: The voice actors for Doraemon and Nobita in Telugu are Sanket Mhatre and Sonal Kaushal respectively. They have been voicing these characters since 2005.
    • -
    • Q: What is the theme song for Doraemon movies?
    • -
    • A: The theme song for Doraemon movies is "Doraemon" by Gen Hoshino , a Japanese singer-songwriter who is also a fan of Doraemon. He composed and performed this song for the 2018 movie Doraemon: Nobita's Treasure Island , and it has been used as the ending theme song for all subsequent movies.
    • -
    -

    0a6ba089eb
    -
    -
    \ No newline at end of file diff --git a/spaces/ramiin2/AutoGPT/autogpt/commands/audio_text.py b/spaces/ramiin2/AutoGPT/autogpt/commands/audio_text.py deleted file mode 100644 index cae32d4eb78c4268bf6ef1bae3c15a399af046bf..0000000000000000000000000000000000000000 --- a/spaces/ramiin2/AutoGPT/autogpt/commands/audio_text.py +++ /dev/null @@ -1,36 +0,0 @@ -import json - -import requests - -from autogpt.config import Config -from autogpt.workspace import path_in_workspace - -cfg = Config() - - -def read_audio_from_file(audio_path): - audio_path = path_in_workspace(audio_path) - with open(audio_path, "rb") as audio_file: - audio = audio_file.read() - return read_audio(audio) - - -def read_audio(audio): - model = cfg.huggingface_audio_to_text_model - api_url = f"https://api-inference.huggingface.co/models/{model}" - api_token = cfg.huggingface_api_token - headers = {"Authorization": f"Bearer {api_token}"} - - if api_token is None: - raise ValueError( - "You need to set your Hugging Face API token in the config file." - ) - - response = requests.post( - api_url, - headers=headers, - data=audio, - ) - - text = json.loads(response.content.decode("utf-8"))["text"] - return "The audio says: " + text diff --git a/spaces/ramiin2/AutoGPT/scripts/check_requirements.py b/spaces/ramiin2/AutoGPT/scripts/check_requirements.py deleted file mode 100644 index e4eab024a6280c0d54110c69b2e03de639325fa6..0000000000000000000000000000000000000000 --- a/spaces/ramiin2/AutoGPT/scripts/check_requirements.py +++ /dev/null @@ -1,32 +0,0 @@ -import sys - -import pkg_resources - - -def main(): - requirements_file = sys.argv[1] - with open(requirements_file, "r") as f: - required_packages = [ - line.strip().split("#")[0].strip() for line in f.readlines() - ] - - installed_packages = [package.key for package in pkg_resources.working_set] - - missing_packages = [] - for package in required_packages: - if not package: # Skip empty lines - continue - package_name = package.strip().split("==")[0] - if package_name.lower() not in installed_packages: - missing_packages.append(package_name) - - if missing_packages: - print("Missing packages:") - print(", ".join(missing_packages)) - sys.exit(1) - else: - print("All packages are installed.") - - -if __name__ == "__main__": - main() diff --git a/spaces/rayman-studio/README/README.md b/spaces/rayman-studio/README/README.md deleted file mode 100644 index 20b292c24ef0a647b919908a8882b978cc901419..0000000000000000000000000000000000000000 --- a/spaces/rayman-studio/README/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: README -emoji: 😻 -colorFrom: gray -colorTo: blue -sdk: static -pinned: false ---- - -Edit this `README.md` markdown file to author your organization card. diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (Young Malang Movie 720p Download Uto).md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (Young Malang Movie 720p Download Uto).md deleted file mode 100644 index b57f28b0a9e285e0e15387a14f96dd297fb6fcf5..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (Young Malang Movie 720p Download Uto).md +++ /dev/null @@ -1,68 +0,0 @@ -

    HD Online Player (Young Malang Movie 720p Download Uto)


    DOWNLOAD ✫✫✫ https://urlgoal.com/2uCMES



    - -They fall in love and spend a romantic month getting to know each other. The film is based on a Gujarati novel of the same name. The original Gujarati version had a release in 2013. - -In 2016, Satish Bahl's music album of the film was released. - -Plot - -Advait is an introvert from Mumbai who comes to Goa to work as a tourist guide for the day. There he meets Sara, a free spirited girl from London who has come to India for the first time to live life and who has come with an open mind. He falls in love with Sara and they spend a romantic month getting to know each other. After a month of fun and festivities, Sara is about to return to London, but wants to spend one last month with Advait. He promises he will not play with her feelings and will only be with her to say goodbye. However, he finds it difficult to leave her alone. As they say goodbye they promise to write to each other, and give each other gifts. - -Cast - - Kalki Subramaniam as Advait - - Saina Nehwal as Sara - - Niharika Acharya as Indian Airlines cabin crew member - - Sudha Kongar as Ms. Shobhana - - Menaka Deshpande as Richa - - Sridhar Rao as Ganesh - - Pallavi Subramaniam as Shobhana's Wife - - Vibha Kharadi as Shobhana's Daughter - - Sushma Ravi as Gaurav - - Rishikesh Seth as Hotel Manager - - Raghunath Shetty as Ganesh's Father - - Ramana Dev as Raghu - - Daya Krishna as Richa's Mother - -Production - -Filming was completed in September 2015. The first teaser was released on 19 January 2016. - -Awards - -The film won many awards including best actress for Kalki Subramaniam at the Zee Gaurav Awards 2016. It was also nominated for best picture, best cinematographer, best editing and best song, and won best dialogues. - -References - -External links - - - -Category:2016 films - -Category:Films shot in Goa - -Category:Indian romance films - -Category:Films shot in Mumbai - -Category:Films set in Mumbai - -Category:Films based on Indian novelsEthnopharmacological relevance of crude extracts from Mexican plants used in traditional medicine. - -To review the current knowledge of the phyt 4fefd39f24
    -
    -
    -

    diff --git a/spaces/rizmyabdulla/tiny-Question-answering/README.md b/spaces/rizmyabdulla/tiny-Question-answering/README.md deleted file mode 100644 index e712bc6a06df1e62faec45d48eea9e4bb36f6638..0000000000000000000000000000000000000000 --- a/spaces/rizmyabdulla/tiny-Question-answering/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Tiny Question Answering -emoji: 😻 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/robjm16/domain_specific_ChatGPT/app.py b/spaces/robjm16/domain_specific_ChatGPT/app.py deleted file mode 100644 index d172219d7604417217f6545eeea2c7e1b71a51ac..0000000000000000000000000000000000000000 --- a/spaces/robjm16/domain_specific_ChatGPT/app.py +++ /dev/null @@ -1,350 +0,0 @@ -""" -This program demonstrates how openAI's ChatGPT language model can be used to answer questions in specific domain areas. -The program asks a user for a question in a prescribed domain area. The program then compares the user's query against -pre-loaded domain content to identify the most useful sections of content. The program answers the question by leveraging -ChatGPT's powerful general capabilities with the newly incorporated domain knowledge. Such an approach might be used, -for example, to provide a customized chat box for an insurance company's customers, where the company's policy materials -are brought in as domain content. For this example, I compiled the 2023 investment outlook summaries posted on the websites of -Morgan Stanley (https://www.morganstanley.com/ideas/global-investment-strategy-outlook-2023), -JPMorgan (https://www.jpmorgan.com/insights/research/market-outlook) and -Goldman Sachs (https://www.goldmansachs.com/insights/pages/gs-research/macro-outlook-2023-this-cycle-is-different/report.pdf). -Far more robust domain-specific responses are possible with further customization/retraining of ChatGPT. -""" - -################################# LOAD LIBRARIES/IMPORTS ######################################### - -# !pip install openai -# ! pip install transformers -# ! pip install gradio -# ! pip install PyPDF2 -# ! pip install python-docx -# ! pip install pandas - - -import docx -import pandas as pd -import numpy as np -import openai -import gradio as gr -import pickle -import os -from transformers import GPT2TokenizerFast -# import openai_secret_manager - -################################# VARIABLES ######################################### - -USE_INTERFACE = True # Change to False if you want to run the code without the Gradio interface, and instead see a single pre-supplied question -filepath = '2023_investment_outlook.docx' - # Path to document containing domain content. Initial cleaning of domain content - # can be done inside (eg, using Python) or outside (eg, using Word) this program, - # depending on needs and circumstances. -# emb_filepath = 'PATH HERE' # Path to document containing saved content embeddings, if applicable -COMPLETIONS_MODEL = "text-davinci-003" -# Get the value of confidential OpenAI API key; register at OpenAI for keys -openai.api_key = os.environ["API-KEY"] -MODEL_NAME = "curie" -DOC_EMBEDDINGS_MODEL = f"text-search-{MODEL_NAME}-doc-001" -QUERY_EMBEDDINGS_MODEL = f"text-search-{MODEL_NAME}-query-001" -MAX_SECTION_LEN =1100 # The API limits total tokens -- for the prompt containing the question and domain-specific content and the answer -- to 2048 tokens, or about 1500 words. -SEPARATOR = "\n* " # A string called SEPARATOR is defined as the newline character followed by an asterisk and a space. This string will be used as a separator between different pieces of text. -tokenizer = GPT2TokenizerFast.from_pretrained("gpt2") -separator_len = len(tokenizer.tokenize(SEPARATOR)) -COMPLETIONS_API_PARAMS = { - # We use temperature of 0.0 because it gives the most predictable, factual answer. - "temperature": 0.0, - "max_tokens": 300, - "model": COMPLETIONS_MODEL, -} - -################################# FUNCTIONS ######################################### - -def load_text(filepath): - """ - Loads a Microsoft Word document and returns a DataFrame containing the text of each paragraph in the document. - - Input: - filepath (str): the filepath to the Microsoft Word document. - - Returns: - df (pandas.DataFrame): a DataFrame containing the 'content' column with the text of each paragraph in the document. - """ - # Open the Word document - doc = docx.Document(filepath) - - # Create an empty pandas DataFrame - df = pd.DataFrame() - - # Iterate through the paragraphs in the document and add each to the df - for i, p in enumerate(doc.paragraphs): - - # Add the paragraph text [and index to the DataFrame] - df.loc[i, 'content'] = p.text - # df.loc[i, 'paragraph_index'] = i - - # Delete empty paragraphs - df['content'] = df['content'].replace('', np.nan) - df = df.dropna(axis=0, subset=['content']).reset_index(drop=True) - - return df - -def count_tokens(row): - """count the number of tokens in a string""" - return len(tokenizer.encode(row)) - -def truncate_text(df): - """ - Truncates the text in the 'content' column of the input DataFrame if the number of tokens - in the text exceeds a specified maximum number. It will set the truncated text and the - number of tokens in the 'content' and 'tokens' columns, respectively. - - Input: - df (pandas.DataFrame): a DataFrame containing the 'content' column - - Returns: - df (pandas.DataFrame): the input DataFrame with modified 'content' and 'tokens' columns. - - """ - for i in range(len(df)): - if df['tokens'][i] > 590: - text = df['content'][i] - tokens = tokenizer.encode(text) - truncated_tokens = tokens[:590] - truncated_text = tokenizer.decode(truncated_tokens) - df.at[i, 'content'] = truncated_text - df.at[i, 'tokens'] = len(truncated_tokens) - return df - - -def get_embedding(text, model): - """ - Generates an embedding for the given text using the specified OpenAI model. - - Args: - text (str): The text for which to generate an embedding. - model (str): The name of the OpenAI model to use for generating the embedding. - - Returns: - numpy.ndarray: The embedding for the given text. - """ - result = openai.Embedding.create( - model=model, - input=[text] - ) - return result["data"][0]["embedding"] - -def get_doc_embedding(text): - """ - Generates an embedding for the given text using the OpenAI document embeddings model. - - Args: - text (str): The text for which to generate an embedding. - - Returns: - numpy.ndarray: The embedding for the given text. - """ - return get_embedding(text, DOC_EMBEDDINGS_MODEL) - -def get_query_embedding(text): - """ - Generates an embedding for the given text using the OpenAI query embeddings model. - - Args: - text (str): The text for which to generate an embedding. - - Returns: - numpy.ndarray: The embedding for the given text. - """ - return get_embedding(text, QUERY_EMBEDDINGS_MODEL) - -def compute_doc_embeddings(df): - """ - Generate embeddings for each row in a Pandas DataFrame using the OpenAI document embeddings model. - - Args: - df (pandas.DataFrame): The DataFrame for which to generate embeddings. - - Returns: - dict: A dictionary that maps the embedding vectors to the indices of the rows that they correspond to. - """ - return { - idx: get_doc_embedding(r.content.replace("\n", " ")) for idx, r in df.iterrows() # r here refers to each row - } - -def load_embeddings(fname): - """ - Load document embeddings and their keys from a CSV file. Only if embeddings are pre-loaded. - - Args: - fname (str): The path to the CSV file. The file must have exactly these named columns: - "title", "heading", "0", "1", ... up to the length of the embedding vectors. - - Returns: - dict: A dictionary that maps the embedding vectors to tuples of the form (title, heading). - """ - - df = pd.read_csv(fname, header=0) - max_dim = max([int(c) for c in df.columns if c != "title" and c != "heading"]) - return { - (r.title, r.heading): [r[str(i)] for i in range(max_dim + 1)] for _, r in df.iterrows() - } - -def vector_similarity(x, y): - """ - Calculate the similarity between two vectors using dot product. - - Args: - x (iterable): The first vector. - y (iterable): The second vector. - - Returns: - float: The dot product of the two vectors. - """ - return np.dot(np.array(x), np.array(y)) - -def order_document_sections_by_query_similarity(query, contexts): - """ - Find the query embedding for the given query, and compare it against all of the pre-calculated document embeddings - to find the most relevant sections. - - Args: - query (str): The query for which to find relevant document sections. - contexts (dict): A dictionary mapping document embeddings to their indices. - - Returns: - list: A list of tuples, each containing the similarity score and index of a document section, sorted in descending - order of relevance. - """ - query_embedding = get_query_embedding(query) - document_similarities = sorted([(vector_similarity(query_embedding, doc_embedding), doc_index) for doc_index, doc_embedding in contexts.items() - ], reverse=True) - - return document_similarities - -def construct_prompt(question, context_embeddings, df): - """ - Construct a prompt for answering a question using the most relevant document sections. - - Args: - question (str): The question to answer. - context_embeddings (dict): A dictionary mapping document embeddings to their indices. - df (pandas.DataFrame): A DataFrame containing the document sections. - - Returns: - str: The prompt, including the question and the relevant context. - """ - most_relevant_document_sections = order_document_sections_by_query_similarity(question, context_embeddings) - - chosen_sections = [] - chosen_sections_len = 0 - chosen_sections_indexes = [] - - for _, section_index in most_relevant_document_sections: - # Add contexts until we run out of space. - document_section = df.loc[section_index] - - chosen_sections_len += document_section.tokens + separator_len - if chosen_sections_len > MAX_SECTION_LEN: - break - - chosen_sections.append(SEPARATOR + document_section.content.replace("\n", " ")) - chosen_sections_indexes.append(str(section_index)) - - # # Useful diagnostic information -- FOR TESTING PURPOSES - # print(f"Selected {len(chosen_sections)} document sections:") - # print("\n".join(chosen_sections_indexes)) - - header = """Answer the question as truthfully as possible using the provided context, and if the answer is not contained within the text below, say "Sorry, I don't know."\n\nContext:\n""" - - full_prompt = header + "".join(chosen_sections) + "\n\n Q: " + question + "\n A:" - - # print(full_prompt) # FOR TESTING PURPOSES - - return full_prompt - - -def answer_query_with_context( - query, - df, - document_embeddings, - show_prompt: bool = False): - prompt = construct_prompt( - query, - document_embeddings, - df - ) - """ - Answer a query using relevant context from a DataFrame. - - Args: - query (str): The query to answer. - df (pandas.DataFrame): A DataFrame containing the document sections. - document_embeddings (dict): A dictionary mapping document embeddings to their indices. - show_prompt (bool, optional): If `True`, print the prompt before generating a response. - - Returns: - str: The generated response to the query. - """ - if show_prompt: - print(prompt) - - response = openai.Completion.create( - prompt=prompt, - **COMPLETIONS_API_PARAMS - ) - - return response["choices"][0]["text"].strip(" \n") - -######################### MAIN PROGRAM ######################################### - -# Load the text into dataframe -df = load_text(filepath) -# print(df.head()) # FOR TESTING PURPOSES - -# Count the tokens -df = df.copy() -df['tokens'] = df['content'].apply(count_tokens) - -# print(df.head(10)) # FOR TESTING PURPOSES -# print(df['content'][3]) # FOR TESTING PURPOSES - -# Call the truncate_text function on the dataframe -df = df.copy() -df = truncate_text(df) - -# print(df.head(10)) # FOR TESTING PURPOSES -# print(df['content'][3]) # FOR TESTING PURPOSES - -# Use code below only if importing embeddings from file, rather than creating in real time through OpenAI API -# document_embeddings = load_embeddings(empb_filepath) - -# Use code below if calculating the embeddings in real time via OpenAI API -document_embeddings = compute_doc_embeddings(df[:33]) # Can limit size (eg, df[:10] if run into limit on free-of-charge usage - -# Embedding; embedding have 4096 dimensions, FOR TESTING ONLY -# example_entry = list(document_embeddings.items())[4] -# print(example_entry) -# print ("Length of example embedding = ", len(example_entry[1])) - -if USE_INTERFACE: - demo = gr.Interface( - fn=lambda query: answer_query_with_context(query, df, document_embeddings), - inputs=gr.Textbox(lines=2, label="Query", placeholder="Type Question Here..."), - outputs=gr.Textbox(lines=2, label="Answer"), - description="Example of a domain-specific chatbot, using ChatGPT with supplemental content added.
    \ - Here, the content relates to the investment outlook for 2023, according to Morgan Stanley, JPMorgan and Goldman Sachs.
    \ - Sample queries: What is Goldman's outlook for inflation? What about the bond market? What does JPMorgan think about 2023?
    \ - NOTE: High-level demo only. Supplemental content used here limited to about 30 paragraphs, due to limits on free-of-charge usage of ChatGPT.
    \ - More robust domain-specific responses are possible.", - title="Domain-Specific Chatbot",) - # Launch the interface - demo.launch() -else: - prompt = construct_prompt( - 'What is the outlook for inflation?', - document_embeddings, - df - ) - - # print("===\n", prompt) # FOR TESTING ONLY - - answer_query_with_context("What is Goldman's outlook for inflation?", df, document_embeddings) diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/deformable_detr_head.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/deformable_detr_head.py deleted file mode 100644 index 31290dbb51b2991514fe00effadce97d5df6ce01..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/deformable_detr_head.py +++ /dev/null @@ -1,318 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import Linear, bias_init_with_prob, constant_init -from mmcv.runner import force_fp32 - -from mmdet.core import multi_apply -from mmdet.models.utils.transformer import inverse_sigmoid -from ..builder import HEADS -from .detr_head import DETRHead - - -@HEADS.register_module() -class DeformableDETRHead(DETRHead): - """Head of DeformDETR: Deformable DETR: Deformable Transformers for End-to- - End Object Detection. - - Code is modified from the `official github repo - `_. - - More details can be found in the `paper - `_ . - - Args: - with_box_refine (bool): Whether to refine the reference points - in the decoder. Defaults to False. - as_two_stage (bool) : Whether to generate the proposal from - the outputs of encoder. - transformer (obj:`ConfigDict`): ConfigDict is used for building - the Encoder and Decoder. - """ - - def __init__(self, - *args, - with_box_refine=False, - as_two_stage=False, - transformer=None, - **kwargs): - self.with_box_refine = with_box_refine - self.as_two_stage = as_two_stage - if self.as_two_stage: - transformer['as_two_stage'] = self.as_two_stage - - super(DeformableDETRHead, self).__init__( - *args, transformer=transformer, **kwargs) - - def _init_layers(self): - """Initialize classification branch and regression branch of head.""" - - fc_cls = Linear(self.embed_dims, self.cls_out_channels) - reg_branch = [] - for _ in range(self.num_reg_fcs): - reg_branch.append(Linear(self.embed_dims, self.embed_dims)) - reg_branch.append(nn.ReLU()) - reg_branch.append(Linear(self.embed_dims, 4)) - reg_branch = nn.Sequential(*reg_branch) - - def _get_clones(module, N): - return nn.ModuleList([copy.deepcopy(module) for i in range(N)]) - - # last reg_branch is used to generate proposal from - # encode feature map when as_two_stage is True. - num_pred = (self.transformer.decoder.num_layers + 1) if \ - self.as_two_stage else self.transformer.decoder.num_layers - - if self.with_box_refine: - self.cls_branches = _get_clones(fc_cls, num_pred) - self.reg_branches = _get_clones(reg_branch, num_pred) - else: - - self.cls_branches = nn.ModuleList( - [fc_cls for _ in range(num_pred)]) - self.reg_branches = nn.ModuleList( - [reg_branch for _ in range(num_pred)]) - - if not self.as_two_stage: - self.query_embedding = nn.Embedding(self.num_query, - self.embed_dims * 2) - - def init_weights(self): - """Initialize weights of the DeformDETR head.""" - self.transformer.init_weights() - if self.loss_cls.use_sigmoid: - bias_init = bias_init_with_prob(0.01) - for m in self.cls_branches: - nn.init.constant_(m.bias, bias_init) - for m in self.reg_branches: - constant_init(m[-1], 0, bias=0) - nn.init.constant_(self.reg_branches[0][-1].bias.data[2:], -2.0) - if self.as_two_stage: - for m in self.reg_branches: - nn.init.constant_(m[-1].bias.data[2:], 0.0) - - def forward(self, mlvl_feats, img_metas): - """Forward function. - - Args: - mlvl_feats (tuple[Tensor]): Features from the upstream - network, each is a 4D-tensor with shape - (N, C, H, W). - img_metas (list[dict]): List of image information. - - Returns: - all_cls_scores (Tensor): Outputs from the classification head, \ - shape [nb_dec, bs, num_query, cls_out_channels]. Note \ - cls_out_channels should includes background. - all_bbox_preds (Tensor): Sigmoid outputs from the regression \ - head with normalized coordinate format (cx, cy, w, h). \ - Shape [nb_dec, bs, num_query, 4]. - enc_outputs_class (Tensor): The score of each point on encode \ - feature map, has shape (N, h*w, num_class). Only when \ - as_two_stage is True it would be returned, otherwise \ - `None` would be returned. - enc_outputs_coord (Tensor): The proposal generate from the \ - encode feature map, has shape (N, h*w, 4). Only when \ - as_two_stage is True it would be returned, otherwise \ - `None` would be returned. - """ - - batch_size = mlvl_feats[0].size(0) - input_img_h, input_img_w = img_metas[0]['batch_input_shape'] - img_masks = mlvl_feats[0].new_ones( - (batch_size, input_img_h, input_img_w)) - for img_id in range(batch_size): - img_h, img_w, _ = img_metas[img_id]['img_shape'] - img_masks[img_id, :img_h, :img_w] = 0 - - mlvl_masks = [] - mlvl_positional_encodings = [] - for feat in mlvl_feats: - mlvl_masks.append( - F.interpolate(img_masks[None], - size=feat.shape[-2:]).to(torch.bool).squeeze(0)) - mlvl_positional_encodings.append( - self.positional_encoding(mlvl_masks[-1])) - - query_embeds = None - if not self.as_two_stage: - query_embeds = self.query_embedding.weight - hs, init_reference, inter_references, \ - enc_outputs_class, enc_outputs_coord = self.transformer( - mlvl_feats, - mlvl_masks, - query_embeds, - mlvl_positional_encodings, - reg_branches=self.reg_branches if self.with_box_refine else None, # noqa:E501 - cls_branches=self.cls_branches if self.as_two_stage else None # noqa:E501 - ) - hs = hs.permute(0, 2, 1, 3) - outputs_classes = [] - outputs_coords = [] - - for lvl in range(hs.shape[0]): - if lvl == 0: - reference = init_reference - else: - reference = inter_references[lvl - 1] - reference = inverse_sigmoid(reference) - outputs_class = self.cls_branches[lvl](hs[lvl]) - tmp = self.reg_branches[lvl](hs[lvl]) - if reference.shape[-1] == 4: - tmp += reference - else: - assert reference.shape[-1] == 2 - tmp[..., :2] += reference - outputs_coord = tmp.sigmoid() - outputs_classes.append(outputs_class) - outputs_coords.append(outputs_coord) - - outputs_classes = torch.stack(outputs_classes) - outputs_coords = torch.stack(outputs_coords) - if self.as_two_stage: - return outputs_classes, outputs_coords, \ - enc_outputs_class, \ - enc_outputs_coord.sigmoid() - else: - return outputs_classes, outputs_coords, \ - None, None - - @force_fp32(apply_to=('all_cls_scores', 'all_bbox_preds')) - def loss(self, - all_cls_scores, - all_bbox_preds, - enc_cls_scores, - enc_bbox_preds, - gt_bboxes_list, - gt_labels_list, - img_metas, - gt_bboxes_ignore=None): - """"Loss function. - - Args: - all_cls_scores (Tensor): Classification score of all - decoder layers, has shape - [nb_dec, bs, num_query, cls_out_channels]. - all_bbox_preds (Tensor): Sigmoid regression - outputs of all decode layers. Each is a 4D-tensor with - normalized coordinate format (cx, cy, w, h) and shape - [nb_dec, bs, num_query, 4]. - enc_cls_scores (Tensor): Classification scores of - points on encode feature map , has shape - (N, h*w, num_classes). Only be passed when as_two_stage is - True, otherwise is None. - enc_bbox_preds (Tensor): Regression results of each points - on the encode feature map, has shape (N, h*w, 4). Only be - passed when as_two_stage is True, otherwise is None. - gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image - with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels_list (list[Tensor]): Ground truth class indices for each - image with shape (num_gts, ). - img_metas (list[dict]): List of image meta information. - gt_bboxes_ignore (list[Tensor], optional): Bounding boxes - which can be ignored for each image. Default None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert gt_bboxes_ignore is None, \ - f'{self.__class__.__name__} only supports ' \ - f'for gt_bboxes_ignore setting to None.' - - num_dec_layers = len(all_cls_scores) - all_gt_bboxes_list = [gt_bboxes_list for _ in range(num_dec_layers)] - all_gt_labels_list = [gt_labels_list for _ in range(num_dec_layers)] - all_gt_bboxes_ignore_list = [ - gt_bboxes_ignore for _ in range(num_dec_layers) - ] - img_metas_list = [img_metas for _ in range(num_dec_layers)] - - losses_cls, losses_bbox, losses_iou = multi_apply( - self.loss_single, all_cls_scores, all_bbox_preds, - all_gt_bboxes_list, all_gt_labels_list, img_metas_list, - all_gt_bboxes_ignore_list) - - loss_dict = dict() - # loss of proposal generated from encode feature map. - if enc_cls_scores is not None: - binary_labels_list = [ - torch.zeros_like(gt_labels_list[i]) - for i in range(len(img_metas)) - ] - enc_loss_cls, enc_losses_bbox, enc_losses_iou = \ - self.loss_single(enc_cls_scores, enc_bbox_preds, - gt_bboxes_list, binary_labels_list, - img_metas, gt_bboxes_ignore) - loss_dict['enc_loss_cls'] = enc_loss_cls - loss_dict['enc_loss_bbox'] = enc_losses_bbox - loss_dict['enc_loss_iou'] = enc_losses_iou - - # loss from the last decoder layer - loss_dict['loss_cls'] = losses_cls[-1] - loss_dict['loss_bbox'] = losses_bbox[-1] - loss_dict['loss_iou'] = losses_iou[-1] - # loss from other decoder layers - num_dec_layer = 0 - for loss_cls_i, loss_bbox_i, loss_iou_i in zip(losses_cls[:-1], - losses_bbox[:-1], - losses_iou[:-1]): - loss_dict[f'd{num_dec_layer}.loss_cls'] = loss_cls_i - loss_dict[f'd{num_dec_layer}.loss_bbox'] = loss_bbox_i - loss_dict[f'd{num_dec_layer}.loss_iou'] = loss_iou_i - num_dec_layer += 1 - return loss_dict - - @force_fp32(apply_to=('all_cls_scores', 'all_bbox_preds')) - def get_bboxes(self, - all_cls_scores, - all_bbox_preds, - enc_cls_scores, - enc_bbox_preds, - img_metas, - rescale=False): - """Transform network outputs for a batch into bbox predictions. - - Args: - all_cls_scores (Tensor): Classification score of all - decoder layers, has shape - [nb_dec, bs, num_query, cls_out_channels]. - all_bbox_preds (Tensor): Sigmoid regression - outputs of all decode layers. Each is a 4D-tensor with - normalized coordinate format (cx, cy, w, h) and shape - [nb_dec, bs, num_query, 4]. - enc_cls_scores (Tensor): Classification scores of - points on encode feature map , has shape - (N, h*w, num_classes). Only be passed when as_two_stage is - True, otherwise is None. - enc_bbox_preds (Tensor): Regression results of each points - on the encode feature map, has shape (N, h*w, 4). Only be - passed when as_two_stage is True, otherwise is None. - img_metas (list[dict]): Meta information of each image. - rescale (bool, optional): If True, return boxes in original - image space. Default False. - - Returns: - list[list[Tensor, Tensor]]: Each item in result_list is 2-tuple. \ - The first item is an (n, 5) tensor, where the first 4 columns \ - are bounding box positions (tl_x, tl_y, br_x, br_y) and the \ - 5-th column is a score between 0 and 1. The second item is a \ - (n,) tensor where each item is the predicted class label of \ - the corresponding box. - """ - cls_scores = all_cls_scores[-1] - bbox_preds = all_bbox_preds[-1] - - result_list = [] - for img_id in range(len(img_metas)): - cls_score = cls_scores[img_id] - bbox_pred = bbox_preds[img_id] - img_shape = img_metas[img_id]['img_shape'] - scale_factor = img_metas[img_id]['scale_factor'] - proposals = self._get_bboxes_single(cls_score, bbox_pred, - img_shape, scale_factor, - rescale) - result_list.append(proposals) - return result_list diff --git a/spaces/rorallitri/biomedical-language-models/logs/Free Download NI LabWindows CVI 2012 Crack UPD And Keygen Added.md b/spaces/rorallitri/biomedical-language-models/logs/Free Download NI LabWindows CVI 2012 Crack UPD And Keygen Added.md deleted file mode 100644 index 8468af44925e44f3d2100179bd4ed164a0de5866..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Free Download NI LabWindows CVI 2012 Crack UPD And Keygen Added.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Free Download NI LabWindows CVI 2012 Crack And Keygen Added


    DOWNLOAD > https://tinurll.com/2uznGx



    - -Using pirated/cracked software is an easy way to infect your ... Autodesk BIM 360 Glue AutoCAD 2015 Add-in 64 bit (HKLM\. ... HHD Software Free Hex Editor Neo 6.44 ... Microsoft Visual C++ 2012 Redistributable (x64) - 11.0.61030 ... NI LabWindows/CVI 2015 DLL Builder for LabVIEW (HKLM-x32\. 1fdad05405
    -
    -
    -

    diff --git a/spaces/samuelinferences/TabPFN/TabPFN/scripts/model_configs.py b/spaces/samuelinferences/TabPFN/TabPFN/scripts/model_configs.py deleted file mode 100644 index 89e2decf949b6b102f373c817c56999346a3844d..0000000000000000000000000000000000000000 --- a/spaces/samuelinferences/TabPFN/TabPFN/scripts/model_configs.py +++ /dev/null @@ -1,210 +0,0 @@ -from copy import deepcopy -from priors.utils import uniform_int_sampler_f -from priors.differentiable_prior import DifferentiableHyperparameter -from ConfigSpace import hyperparameters as CSH -import torch -from priors.differentiable_prior import replace_differentiable_distributions - -import ConfigSpace as CS - -def get_general_config(max_features, bptt, eval_positions=None): - """" - Returns the general PFN training hyperparameters. - """ - config_general = { - "lr": CSH.UniformFloatHyperparameter('lr', lower=0.00002, upper=0.0002, log=True), - "dropout": CSH.CategoricalHyperparameter('dropout', [0.0]), - "emsize": CSH.CategoricalHyperparameter('emsize', [2 ** i for i in range(8, 9)]), ## upper bound is -1 - "batch_size": CSH.CategoricalHyperparameter('batch_size', [2 ** i for i in range(8, 9)]), - "nlayers": CSH.CategoricalHyperparameter('nlayers', [12]), - "num_features": max_features, - "nhead": CSH.CategoricalHyperparameter('nhead', [4]), - "nhid_factor": 2, - "bptt": bptt, - "eval_positions": None, - "seq_len_used": bptt, - "sampling": 'normal',#hp.choice('sampling', ['mixed', 'normal']), # uniform - "epochs": 80, - "num_steps": 100, - "verbose": False, - "pre_sample_causes": True, # This is MLP - "mix_activations": False,#hp.choice('mix_activations', [True, False]), - } - - return config_general - -def get_flexible_categorical_config(max_features): - """" - Returns the configuration parameters for the tabular multiclass wrapper. - """ - config_flexible_categorical = { - "nan_prob_unknown_reason_reason_prior": CSH.CategoricalHyperparameter('nan_prob_unknown_reason_reason_prior', [1.0]), - "categorical_feature_p": CSH.CategoricalHyperparameter('categorical_feature_p', [0.0]), - "nan_prob_no_reason": CSH.CategoricalHyperparameter('nan_prob_no_reason', [0.0, 0.1, 0.2]), - "nan_prob_unknown_reason": CSH.CategoricalHyperparameter('nan_prob_unknown_reason', [0.0]), - "nan_prob_a_reason": CSH.CategoricalHyperparameter('nan_prob_a_reason', [0.0]), - # "num_classes": lambda : random.randint(2, 10), "balanced": False, - "max_num_classes": 2, - "num_classes": 2, - "noise_type": CSH.CategoricalHyperparameter('noise_type', ["Gaussian"]), # NN - "balanced": True, - "normalize_to_ranking": CSH.CategoricalHyperparameter('normalize_to_ranking', [False]), - "set_value_to_nan": CSH.CategoricalHyperparameter('set_value_to_nan', [0.5, 0.2, 0.0]), - "normalize_by_used_features": True, - "num_features_used": - {'uniform_int_sampler_f(3,max_features)': uniform_int_sampler_f(1, max_features)} - # hp.choice('conv_activation', [{'distribution': 'uniform', 'min': 2.0, 'max': 8.0}, None]), - } - return config_flexible_categorical - -def get_diff_flex(): - """" - Returns the configuration parameters for a differentiable wrapper around the tabular multiclass wrapper. - """ - diff_flex = { - # "ordinal_pct": {'distribution': 'uniform', 'min': 0.0, 'max': 0.5}, - # "num_categorical_features_sampler_a": hp.choice('num_categorical_features_sampler_a', - # [{'distribution': 'uniform', 'min': 0.3, 'max': 0.9}, None]), - # "num_categorical_features_sampler_b": {'distribution': 'uniform', 'min': 0.3, 'max': 0.9}, - "output_multiclass_ordered_p": {'distribution': 'uniform', 'min': 0.0, 'max': 0.5}, #CSH.CategoricalHyperparameter('output_multiclass_ordered_p', [0.0, 0.1, 0.2]), - "multiclass_type": {'distribution': 'meta_choice', 'choice_values': ['value', 'rank']}, - } - - return diff_flex - -def get_diff_gp(): - """" - Returns the configuration parameters for a differentiable wrapper around GP. - """ - diff_gp = { - 'outputscale': {'distribution': 'meta_trunc_norm_log_scaled', 'max_mean': 10., 'min_mean': 0.00001, 'round': False, - 'lower_bound': 0}, - 'lengthscale': {'distribution': 'meta_trunc_norm_log_scaled', 'max_mean': 10., 'min_mean': 0.00001, 'round': False, - 'lower_bound': 0}, - 'noise': {'distribution': 'meta_choice', 'choice_values': [0.00001, 0.0001, 0.01]} - } - - return diff_gp - -def get_diff_causal(): - """" - Returns the configuration parameters for a differentiable wrapper around MLP / Causal mixture. - """ - diff_causal = { - "num_layers": {'distribution': 'meta_trunc_norm_log_scaled', 'max_mean': 6, 'min_mean': 1, 'round': True, - 'lower_bound': 2}, - # Better beta? - "prior_mlp_hidden_dim": {'distribution': 'meta_trunc_norm_log_scaled', 'max_mean': 130, 'min_mean': 5, - 'round': True, 'lower_bound': 4}, - - "prior_mlp_dropout_prob": {'distribution': 'meta_beta', 'scale': 0.9, 'min': 0.1, 'max': 5.0}, - # This mustn't be too high since activations get too large otherwise - - "noise_std": {'distribution': 'meta_trunc_norm_log_scaled', 'max_mean': .3, 'min_mean': 0.0001, 'round': False, - 'lower_bound': 0.0}, - "init_std": {'distribution': 'meta_trunc_norm_log_scaled', 'max_mean': 10.0, 'min_mean': 0.01, 'round': False, - 'lower_bound': 0.0}, - "num_causes": {'distribution': 'meta_trunc_norm_log_scaled', 'max_mean': 12, 'min_mean': 1, 'round': True, - 'lower_bound': 1}, - "is_causal": {'distribution': 'meta_choice', 'choice_values': [True, False]}, - "pre_sample_weights": {'distribution': 'meta_choice', 'choice_values': [True, False]}, - "y_is_effect": {'distribution': 'meta_choice', 'choice_values': [True, False]}, - "prior_mlp_activations": {'distribution': 'meta_choice_mixed', 'choice_values': [ - torch.nn.Tanh - , torch.nn.ReLU - , torch.nn.Identity - , lambda : torch.nn.LeakyReLU(negative_slope=0.1) - , torch.nn.ELU - ]}, - "block_wise_dropout": {'distribution': 'meta_choice', 'choice_values': [True, False]}, - "sort_features": {'distribution': 'meta_choice', 'choice_values': [True, False]}, - "in_clique": {'distribution': 'meta_choice', 'choice_values': [True, False]}, - } - - return diff_causal - -def get_diff_prior_bag(): - """" - Returns the configuration parameters for a GP and MLP / Causal mixture. - """ - diff_prior_bag = { - 'prior_bag_exp_weights_1': {'distribution': 'uniform', 'min': 100000., 'max': 100001.}, - # MLP Weight (Biased, since MLP works better, 1.0 is weight for prior number 0) - } - - return diff_prior_bag - -def get_diff_config(): - """" - Returns the configuration parameters for a differentiable wrapper around GP and MLP / Causal mixture priors. - """ - diff_prior_bag = get_diff_prior_bag() - diff_causal = get_diff_causal() - diff_gp = get_diff_gp() - diff_flex = get_diff_flex() - - config_diff = {'differentiable_hyperparameters': {**diff_prior_bag, **diff_causal, **diff_gp, **diff_flex}} - - return config_diff - - -def sample_differentiable(config): - """" - Returns sampled hyperparameters from a differentiable wrapper, that is it makes a non-differentiable out of - differentiable. - """ - # config is a dict of dicts, dicts that have a 'distribution' key are treated as distributions to be sampled - result = deepcopy(config) - del result['differentiable_hyperparameters'] - - for k, v in config['differentiable_hyperparameters'].items(): - s_indicator, s_hp = DifferentiableHyperparameter(**v, embedding_dim=None, - device=None)() # both of these are actually not used to the best of my knowledge - result[k] = s_hp - - return result - -def list_all_hps_in_nested(config): - """" - Returns a list of hyperparameters from a neszed dict of hyperparameters. - """ - - if isinstance(config, CSH.Hyperparameter): - return [config] - elif isinstance(config, dict): - result = [] - for k, v in config.items(): - result += list_all_hps_in_nested(v) - return result - else: - return [] - -def create_configspace_from_hierarchical(config): - cs = CS.ConfigurationSpace() - for hp in list_all_hps_in_nested(config): - cs.add_hyperparameter(hp) - return cs - -def fill_in_configsample(config, configsample): - # config is our dict that defines config distribution - # configsample is a CS.Configuration - hierarchical_configsample = deepcopy(config) - for k, v in config.items(): - if isinstance(v, CSH.Hyperparameter): - hierarchical_configsample[k] = configsample[v.name] - elif isinstance(v, dict): - hierarchical_configsample[k] = fill_in_configsample(v, configsample) - return hierarchical_configsample - - -def evaluate_hypers(config, sample_diff_hps=False): - """" - Samples a hyperparameter configuration from a sampleable configuration (can be used in HP search). - """ - if sample_diff_hps: - # I do a deepcopy here, such that the config stays the same and can still be used with diff. hps - config = deepcopy(config) - replace_differentiable_distributions(config) - cs = create_configspace_from_hierarchical(config) - cs_sample = cs.sample_configuration() - return fill_in_configsample(config, cs_sample) diff --git a/spaces/sasaro/webui/README.md b/spaces/sasaro/webui/README.md deleted file mode 100644 index 013d12c9f3a56698056ae1bdbbfb0ec009805237..0000000000000000000000000000000000000000 --- a/spaces/sasaro/webui/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Stable Diffusion Web UI -emoji: 🚧 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.9 -app_file: app.py -pinned: false -duplicated_from: camenduru/webui ---- - -## Stable Diffusion Web UI -[https://github.com/AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) - -## Documentation -[https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki) - -## Models License -https://huggingface.co/spaces/CompVis/stable-diffusion-license \ No newline at end of file diff --git a/spaces/sayakpaul/sidd-denoising-maxim/maxim/__init__.py b/spaces/sayakpaul/sidd-denoising-maxim/maxim/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/sccstandardteam/ChuanhuChatGPT/modules/config.py b/spaces/sccstandardteam/ChuanhuChatGPT/modules/config.py deleted file mode 100644 index c5ae0b3ad061f1088d5cf9cb739dbe96254a503b..0000000000000000000000000000000000000000 --- a/spaces/sccstandardteam/ChuanhuChatGPT/modules/config.py +++ /dev/null @@ -1,186 +0,0 @@ -from collections import defaultdict -from contextlib import contextmanager -import os -import logging -import sys -import commentjson as json - -from . import shared -from . import presets - - -__all__ = [ - "my_api_key", - "authflag", - "auth_list", - "dockerflag", - "retrieve_proxy", - "log_level", - "advance_docs", - "update_doc_config", - "render_latex", - "usage_limit", - "multi_api_key", - "server_name", - "server_port", - "share", - "hide_history_when_not_logged_in" -] - -# 添加一个统一的config文件,避免文件过多造成的疑惑(优先级最低) -# 同时,也可以为后续支持自定义功能提供config的帮助 -if os.path.exists("config.json"): - with open("config.json", "r", encoding='utf-8') as f: - config = json.load(f) -else: - config = {} - -lang_config = config.get("language", "auto") -language = os.environ.get("LANGUAGE", lang_config) - -hide_history_when_not_logged_in = config.get("hide_history_when_not_logged_in", False) - -if os.path.exists("api_key.txt"): - logging.info("检测到api_key.txt文件,正在进行迁移...") - with open("api_key.txt", "r") as f: - config["openai_api_key"] = f.read().strip() - os.rename("api_key.txt", "api_key(deprecated).txt") - with open("config.json", "w", encoding='utf-8') as f: - json.dump(config, f, indent=4) - -if os.path.exists("auth.json"): - logging.info("检测到auth.json文件,正在进行迁移...") - auth_list = [] - with open("auth.json", "r", encoding='utf-8') as f: - auth = json.load(f) - for _ in auth: - if auth[_]["username"] and auth[_]["password"]: - auth_list.append((auth[_]["username"], auth[_]["password"])) - else: - logging.error("请检查auth.json文件中的用户名和密码!") - sys.exit(1) - config["users"] = auth_list - os.rename("auth.json", "auth(deprecated).json") - with open("config.json", "w", encoding='utf-8') as f: - json.dump(config, f, indent=4) - -## 处理docker if we are running in Docker -dockerflag = config.get("dockerflag", False) -if os.environ.get("dockerrun") == "yes": - dockerflag = True - -## 处理 api-key 以及 允许的用户列表 -my_api_key = config.get("openai_api_key", "") -my_api_key = os.environ.get("OPENAI_API_KEY", my_api_key) - -xmchat_api_key = config.get("xmchat_api_key", "") -os.environ["XMCHAT_API_KEY"] = xmchat_api_key - -render_latex = config.get("render_latex", True) - -if render_latex: - os.environ["RENDER_LATEX"] = "yes" -else: - os.environ["RENDER_LATEX"] = "no" - -usage_limit = os.environ.get("USAGE_LIMIT", config.get("usage_limit", 120)) - -## 多账户机制 -multi_api_key = config.get("multi_api_key", False) # 是否开启多账户机制 -if multi_api_key: - api_key_list = config.get("api_key_list", []) - if len(api_key_list) == 0: - logging.error("多账号模式已开启,但api_key_list为空,请检查config.json") - sys.exit(1) - shared.state.set_api_key_queue(api_key_list) - -auth_list = config.get("users", []) # 实际上是使用者的列表 -authflag = len(auth_list) > 0 # 是否开启认证的状态值,改为判断auth_list长度 - -# 处理自定义的api_host,优先读环境变量的配置,如果存在则自动装配 -api_host = os.environ.get("api_host", config.get("api_host", "")) -if api_host: - shared.state.set_api_host(api_host) - -@contextmanager -def retrieve_openai_api(api_key = None): - old_api_key = os.environ.get("OPENAI_API_KEY", "") - if api_key is None: - os.environ["OPENAI_API_KEY"] = my_api_key - yield my_api_key - else: - os.environ["OPENAI_API_KEY"] = api_key - yield api_key - os.environ["OPENAI_API_KEY"] = old_api_key - -## 处理log -log_level = config.get("log_level", "INFO") -logging.basicConfig( - level=log_level, - format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s", -) - -## 处理代理: -http_proxy = config.get("http_proxy", "") -https_proxy = config.get("https_proxy", "") -http_proxy = os.environ.get("HTTP_PROXY", http_proxy) -https_proxy = os.environ.get("HTTPS_PROXY", https_proxy) - -# 重置系统变量,在不需要设置的时候不设置环境变量,以免引起全局代理报错 -os.environ["HTTP_PROXY"] = "" -os.environ["HTTPS_PROXY"] = "" - -local_embedding = config.get("local_embedding", False) # 是否使用本地embedding - -@contextmanager -def retrieve_proxy(proxy=None): - """ - 1, 如果proxy = NONE,设置环境变量,并返回最新设置的代理 - 2,如果proxy != NONE,更新当前的代理配置,但是不更新环境变量 - """ - global http_proxy, https_proxy - if proxy is not None: - http_proxy = proxy - https_proxy = proxy - yield http_proxy, https_proxy - else: - old_var = os.environ["HTTP_PROXY"], os.environ["HTTPS_PROXY"] - os.environ["HTTP_PROXY"] = http_proxy - os.environ["HTTPS_PROXY"] = https_proxy - yield http_proxy, https_proxy # return new proxy - - # return old proxy - os.environ["HTTP_PROXY"], os.environ["HTTPS_PROXY"] = old_var - - -## 处理advance docs -advance_docs = defaultdict(lambda: defaultdict(dict)) -advance_docs.update(config.get("advance_docs", {})) -def update_doc_config(two_column_pdf): - global advance_docs - advance_docs["pdf"]["two_column"] = two_column_pdf - - logging.info(f"更新后的文件参数为:{advance_docs}") - -## 处理gradio.launch参数 -server_name = config.get("server_name", None) -server_port = config.get("server_port", None) -if server_name is None: - if dockerflag: - server_name = "0.0.0.0" - else: - server_name = "127.0.0.1" -if server_port is None: - if dockerflag: - server_port = 7860 - -assert server_port is None or type(server_port) == int, "要求port设置为int类型" - -# 设置默认model -default_model = config.get("default_model", "") -try: - presets.DEFAULT_MODEL = presets.MODELS.index(default_model) -except ValueError: - pass - -share = config.get("share", False) diff --git a/spaces/scedlatioru/img-to-music/example/DICIONARIO AURELIO SEC XXI Free Download UPD.md b/spaces/scedlatioru/img-to-music/example/DICIONARIO AURELIO SEC XXI Free Download UPD.md deleted file mode 100644 index 53c537fcf808dbc30541d8e9b70c6949aef39f3d..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/DICIONARIO AURELIO SEC XXI Free Download UPD.md +++ /dev/null @@ -1,94 +0,0 @@ - -

    DICIONARIO AURELIO SEC XXI free download: How to Get the Best Portuguese Dictionary for Your PC

    - -

    If you are looking for a reliable and comprehensive Portuguese dictionary for your PC, you might want to consider downloading DICIONARIO AURELIO SEC XXI. This is an electronic version of the famous Aurélio dictionary, which is one of the most respected and widely used dictionaries in Brazil and other Portuguese-speaking countries.

    -

    DICIONARIO AURELIO SEC XXI free download


    Downloadhttps://gohhs.com/2uEzHx



    - -

    DICIONARIO AURELIO SEC XXI is a digital dictionary that contains more than 435,000 entries, definitions, synonyms, antonyms, examples, idioms, expressions, and etymologies. It covers both the European and Brazilian variants of Portuguese, as well as regionalisms and colloquialisms. It also includes grammatical information, such as gender, number, verb conjugations, and irregular plurals.

    - -

    DICIONARIO AURELIO SEC XXI is easy to use and install on your PC. You can search for words by typing them in the search box or by browsing through the alphabetical index. You can also access the dictionary from any Windows application by using a hotkey or a mouse click. You can customize the appearance and functionality of the dictionary according to your preferences.

    - -

    In this article, we will explain how to get DICIONARIO AURELIO SEC XXI free download and install it on your PC. We will also explain what are the benefits and features of this dictionary and why you should use it.

    - -

    How to get DICIONARIO AURELIO SEC XXI free download?

    - -

    If you want to get DICIONARIO AURELIO SEC XXI free download, you will need to follow these steps:

    -

    - -
      -
    1. Go to a reliable source that offers DICIONARIO AURELIO SEC XXI free download. You can find several websites that offer this service by searching on Google or other search engines. Make sure you choose a reputable and secure site that does not contain viruses or malware.
    2. -
    3. Click on the download link or button and wait for the download process to start. You will need to have enough space on your PC to download the file, which is about 200 MB in size.
    4. -
    5. Once the download is complete, locate the file on your PC and double-click on it to open it. You will see a setup wizard that will guide you through the installation process.
    6. -
    7. Follow the instructions on the screen and choose where you want to install the dictionary on your PC. You can also choose what language you want to use for the interface and what hotkey you want to use to access the dictionary from other applications.
    8. -
    9. Once the installation is done, you can launch the dictionary from the desktop shortcut or from the start menu. You can also access it from any Windows application by using the hotkey or the mouse click.
    10. -
    - -

    Congratulations! You have successfully downloaded and installed DICIONARIO AURELIO SEC XXI on your PC. Now you can enjoy using this dictionary whenever you need it.

    - -

    What are the benefits and features of DICIONARIO AURELIO SEC XXI?

    - -

    By downloading DICIONARIO AURELIO SEC XXI on your PC, you will enjoy several benefits and features that will make your learning and communication in Portuguese easier and more effective. Here are some of them:

    - -
      -
    • You will have access to a reliable and comprehensive Portuguese dictionary that contains more than 435,000 entries, definitions, synonyms, antonyms, examples, idioms, expressions, and etymologies.
    • -
    • You will be able to learn both the European and Brazilian variants of Portuguese, as well as regionalisms and colloquialisms. You will also be able to learn about the history and evolution of the Portuguese language.
    • -
    • You will be able to improve your grammar and spelling by using the grammatical information provided by the dictionary, such as gender, number, verb conjugations, and irregular plurals.
    • -
    • You will be able to use the dictionary easily and conveniently from any Windows application by using a hotkey or a mouse click. You will also be able to customize the appearance and functionality of the dictionary according to your preferences.
    • -
    • You will be able to save time and money by downloading DICIONARIO AURELIO SEC XXI for free instead of buying a physical copy of the dictionary or subscribing to an online service.
    • -
    - -

    These are some of the benefits and features of DICIONARIO AURELIO SEC XXI that make it a great choice for anyone who wants to learn or improve their Portuguese skills.

    -

    How to update DICIONARIO AURELIO SEC XXI?

    - -

    If you have downloaded and installed DICIONARIO AURELIO SEC XXI on your PC, you might want to update it from time to time to get the latest features and improvements. Here are some tips on how to update it:

    - -
      -
    1. Check for updates online. You can check for updates online by visiting the official website of the dictionary or by searching on Google or other search engines. You can also check for updates from within the dictionary by clicking on the help menu and selecting check for updates.
    2. -
    3. Download the update file. If there is an update available, you can download the update file from the source that you have chosen. Make sure you download the file from a reliable and secure site that does not contain viruses or malware.
    4. -
    5. Run the update file. Once you have downloaded the update file, locate it on your PC and double-click on it to run it. You will see a setup wizard that will guide you through the update process.
    6. -
    7. Follow the instructions on the screen and choose whether you want to install the update over your existing version or in a different location. You can also choose what language you want to use for the interface and what hotkey you want to use to access the dictionary from other applications.
    8. -
    9. Once the update is done, you can launch the dictionary from the desktop shortcut or from the start menu. You can also access it from any Windows application by using the hotkey or the mouse click.
    10. -
    - -

    These are some tips on how to update DICIONARIO AURELIO SEC XXI on your PC. By updating your dictionary, you will be able to enjoy the latest features and improvements that will make your learning and communication in Portuguese easier and more effective.

    - -

    What are some alternatives to DICIONARIO AURELIO SEC XXI?

    - -

    If you are looking for some alternatives to DICIONARIO AURELIO SEC XXI, here are some suggestions that you might want to consider:

    - -
      -
    • Dicionário Priberam da Língua Portuguesa: This is an online dictionary that contains more than 110,000 entries, definitions, synonyms, antonyms, examples, idioms, expressions, and etymologies. It covers both the European and Brazilian variants of Portuguese, as well as regionalisms and colloquialisms. It also includes grammatical information, such as gender, number, verb conjugations, and irregular plurals.
    • -
    • Dicionário Michaelis Português: This is an electronic dictionary that contains more than 200,000 entries, definitions, synonyms, antonyms, examples, idioms, expressions, and etymologies. It covers both the European and Brazilian variants of Portuguese, as well as regionalisms and colloquialisms. It also includes grammatical information, such as gender, number, verb conjugations, and irregular plurals.
    • -
    • Dicionário Houaiss da Língua Portuguesa: This is a physical dictionary that contains more than 228,000 entries, definitions, synonyms, antonyms, examples, idioms, expressions, and etymologies. It covers both the European and Brazilian variants of Portuguese, as well as regionalisms and colloquialisms. It also includes grammatical information, such as gender, number, verb conjugations, and irregular plurals.
    • -
    - -

    These are some of the alternatives to DICIONARIO AURELIO SEC XXI that you might want to consider. These dictionaries are also reliable and comprehensive sources of information for your learning and communication in Portuguese.

    -

    How to uninstall DICIONARIO AURELIO SEC XXI?

    - -

    If you want to uninstall DICIONARIO AURELIO SEC XXI from your PC, you will need to follow these steps:

    - -
      -
    1. Go to the control panel of your PC and select programs and features.
    2. -
    3. Find DICIONARIO AURELIO SEC XXI in the list of programs and click on it.
    4. -
    5. Click on the uninstall button and confirm your choice.
    6. -
    7. Wait for the uninstall process to finish and restart your PC if necessary.
    8. -
    - -

    These are some steps on how to uninstall DICIONARIO AURELIO SEC XXI from your PC. By uninstalling your dictionary, you will free up some space on your PC and remove any traces of the program.

    - -

    What are some reviews of DICIONARIO AURELIO SEC XXI?

    - -

    If you are curious about what other users think of DICIONARIO AURELIO SEC XXI, here are some reviews that we have found online:

    - -
      -
    • "I have been using DICIONARIO AURELIO SEC XXI for a long time and I love it. It is very complete and accurate, and it helps me a lot with my studies and work. I recommend it to anyone who wants to learn or improve their Portuguese."
    • -
    • "DICIONARIO AURELIO SEC XXI is a great dictionary for Portuguese speakers. It has a lot of entries, definitions, synonyms, antonyms, examples, idioms, expressions, and etymologies. It also has grammatical information, such as gender, number, verb conjugations, and irregular plurals. It is easy to use and install, and it works well with any Windows application."
    • -
    • "DICIONARIO AURELIO SEC XXI is a good dictionary for Portuguese learners. It covers both the European and Brazilian variants of Portuguese, as well as regionalisms and colloquialisms. It also includes the history and evolution of the Portuguese language. However, it is a bit outdated and it does not have some new words or expressions that are used nowadays."
    • -
    - -

    These are some of the reviews of DICIONARIO AURELIO SEC XXI that we have found online. As you can see, most users are satisfied with this dictionary and find it useful and reliable.

    -

    Conclusion

    - -

    DICIONARIO AURELIO SEC XXI free download is a version of the game that offers a lot of benefits and features for Portuguese speakers and learners. It is a reliable and comprehensive dictionary that contains more than 435,000 entries, definitions, synonyms, antonyms, examples, idioms, expressions, and etymologies. It covers both the European and Brazilian variants of Portuguese, as well as regionalisms and colloquialisms. It also includes grammatical information, such as gender, number, verb conjugations, and irregular plurals. It is easy to download and install on your PC, and you can use it from any Windows application by using a hotkey or a mouse click. You can also customize the appearance and functionality of the dictionary according to your preferences. You can also update it from time to time to get the latest features and improvements. If you are looking for a reliable and comprehensive Portuguese dictionary for your PC, you should definitely try DICIONARIO AURELIO SEC XXI free download.

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Karyalaya Krama Sinhala Pdf Free LINK.md b/spaces/scedlatioru/img-to-music/example/Karyalaya Krama Sinhala Pdf Free LINK.md deleted file mode 100644 index 4c1a743425ebdfe34f92eec8457482d74791bfb3..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Karyalaya Krama Sinhala Pdf Free LINK.md +++ /dev/null @@ -1,6 +0,0 @@ -

    karyalaya krama sinhala pdf free


    Download File ✏ ✏ ✏ https://gohhs.com/2uEACY



    -
    -Pushkin katha pdf free download 2017 sinhala wela 2018 wal katha karyalaya krama sinhala pdf 34 typed pdf version of book is available for download books ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/e2e_asr_maskctc.py b/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/e2e_asr_maskctc.py deleted file mode 100644 index c283f7de5bbee736e6b95cccc52d6b67bf83307d..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/e2e_asr_maskctc.py +++ /dev/null @@ -1,249 +0,0 @@ -# Copyright 2020 Johns Hopkins University (Shinji Watanabe) -# Waseda University (Yosuke Higuchi) -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -""" -Mask CTC based non-autoregressive speech recognition model (pytorch). - -See https://arxiv.org/abs/2005.08700 for the detail. - -""" - -from itertools import groupby -import logging -import math - -from distutils.util import strtobool -import numpy -import torch - -from espnet.nets.pytorch_backend.conformer.encoder import Encoder -from espnet.nets.pytorch_backend.conformer.argument import ( - add_arguments_conformer_common, # noqa: H301 -) -from espnet.nets.pytorch_backend.e2e_asr import CTC_LOSS_THRESHOLD -from espnet.nets.pytorch_backend.e2e_asr_transformer import E2E as E2ETransformer -from espnet.nets.pytorch_backend.maskctc.add_mask_token import mask_uniform -from espnet.nets.pytorch_backend.maskctc.mask import square_mask -from espnet.nets.pytorch_backend.nets_utils import make_non_pad_mask -from espnet.nets.pytorch_backend.nets_utils import th_accuracy - - -class E2E(E2ETransformer): - """E2E module. - - :param int idim: dimension of inputs - :param int odim: dimension of outputs - :param Namespace args: argument Namespace containing options - - """ - - @staticmethod - def add_arguments(parser): - """Add arguments.""" - E2ETransformer.add_arguments(parser) - E2E.add_maskctc_arguments(parser) - - return parser - - @staticmethod - def add_maskctc_arguments(parser): - """Add arguments for maskctc model.""" - group = parser.add_argument_group("maskctc specific setting") - - group.add_argument( - "--maskctc-use-conformer-encoder", - default=False, - type=strtobool, - ) - group = add_arguments_conformer_common(group) - - return parser - - def __init__(self, idim, odim, args, ignore_id=-1): - """Construct an E2E object. - - :param int idim: dimension of inputs - :param int odim: dimension of outputs - :param Namespace args: argument Namespace containing options - """ - odim += 1 # for the mask token - - super().__init__(idim, odim, args, ignore_id) - assert 0.0 <= self.mtlalpha < 1.0, "mtlalpha should be [0.0, 1.0)" - - self.mask_token = odim - 1 - self.sos = odim - 2 - self.eos = odim - 2 - self.odim = odim - - if args.maskctc_use_conformer_encoder: - if args.transformer_attn_dropout_rate is None: - args.transformer_attn_dropout_rate = args.conformer_dropout_rate - self.encoder = Encoder( - idim=idim, - attention_dim=args.adim, - attention_heads=args.aheads, - linear_units=args.eunits, - num_blocks=args.elayers, - input_layer=args.transformer_input_layer, - dropout_rate=args.dropout_rate, - positional_dropout_rate=args.dropout_rate, - attention_dropout_rate=args.transformer_attn_dropout_rate, - pos_enc_layer_type=args.transformer_encoder_pos_enc_layer_type, - selfattention_layer_type=args.transformer_encoder_selfattn_layer_type, - activation_type=args.transformer_encoder_activation_type, - macaron_style=args.macaron_style, - use_cnn_module=args.use_cnn_module, - cnn_module_kernel=args.cnn_module_kernel, - ) - self.reset_parameters(args) - - def forward(self, xs_pad, ilens, ys_pad): - """E2E forward. - - :param torch.Tensor xs_pad: batch of padded source sequences (B, Tmax, idim) - :param torch.Tensor ilens: batch of lengths of source sequences (B) - :param torch.Tensor ys_pad: batch of padded target sequences (B, Lmax) - :return: ctc loss value - :rtype: torch.Tensor - :return: attention loss value - :rtype: torch.Tensor - :return: accuracy in attention decoder - :rtype: float - """ - # 1. forward encoder - xs_pad = xs_pad[:, : max(ilens)] # for data parallel - src_mask = make_non_pad_mask(ilens.tolist()).to(xs_pad.device).unsqueeze(-2) - hs_pad, hs_mask = self.encoder(xs_pad, src_mask) - self.hs_pad = hs_pad - - # 2. forward decoder - ys_in_pad, ys_out_pad = mask_uniform( - ys_pad, self.mask_token, self.eos, self.ignore_id - ) - ys_mask = square_mask(ys_in_pad, self.eos) - pred_pad, pred_mask = self.decoder(ys_in_pad, ys_mask, hs_pad, hs_mask) - self.pred_pad = pred_pad - - # 3. compute attention loss - loss_att = self.criterion(pred_pad, ys_out_pad) - self.acc = th_accuracy( - pred_pad.view(-1, self.odim), ys_out_pad, ignore_label=self.ignore_id - ) - - # 4. compute ctc loss - loss_ctc, cer_ctc = None, None - if self.mtlalpha > 0: - batch_size = xs_pad.size(0) - hs_len = hs_mask.view(batch_size, -1).sum(1) - loss_ctc = self.ctc(hs_pad.view(batch_size, -1, self.adim), hs_len, ys_pad) - if self.error_calculator is not None: - ys_hat = self.ctc.argmax(hs_pad.view(batch_size, -1, self.adim)).data - cer_ctc = self.error_calculator(ys_hat.cpu(), ys_pad.cpu(), is_ctc=True) - # for visualization - if not self.training: - self.ctc.softmax(hs_pad) - - # 5. compute cer/wer - if self.training or self.error_calculator is None or self.decoder is None: - cer, wer = None, None - else: - ys_hat = pred_pad.argmax(dim=-1) - cer, wer = self.error_calculator(ys_hat.cpu(), ys_pad.cpu()) - - alpha = self.mtlalpha - if alpha == 0: - self.loss = loss_att - loss_att_data = float(loss_att) - loss_ctc_data = None - else: - self.loss = alpha * loss_ctc + (1 - alpha) * loss_att - loss_att_data = float(loss_att) - loss_ctc_data = float(loss_ctc) - - loss_data = float(self.loss) - if loss_data < CTC_LOSS_THRESHOLD and not math.isnan(loss_data): - self.reporter.report( - loss_ctc_data, loss_att_data, self.acc, cer_ctc, cer, wer, loss_data - ) - else: - logging.warning("loss (=%f) is not correct", loss_data) - return self.loss - - def recognize(self, x, recog_args, char_list=None, rnnlm=None): - """Recognize input speech. - - :param ndnarray x: input acoustic feature (B, T, D) or (T, D) - :param Namespace recog_args: argment Namespace contraining options - :param list char_list: list of characters - :param torch.nn.Module rnnlm: language model module - :return: decoding result - :rtype: list - """ - - def num2str(char_list, mask_token, mask_char="_"): - def f(yl): - cl = [char_list[y] if y != mask_token else mask_char for y in yl] - return "".join(cl).replace("", " ") - - return f - - n2s = num2str(char_list, self.mask_token) - - self.eval() - h = self.encode(x).unsqueeze(0) - - # greedy ctc outputs - ctc_probs, ctc_ids = torch.exp(self.ctc.log_softmax(h)).max(dim=-1) - y_hat = torch.stack([x[0] for x in groupby(ctc_ids[0])]) - y_idx = torch.nonzero(y_hat != 0).squeeze(-1) - - # calculate token-level ctc probabilities by taking - # the maximum probability of consecutive frames with - # the same ctc symbols - probs_hat = [] - cnt = 0 - for i, y in enumerate(y_hat.tolist()): - probs_hat.append(-1) - while cnt < ctc_ids.shape[1] and y == ctc_ids[0][cnt]: - if probs_hat[i] < ctc_probs[0][cnt]: - probs_hat[i] = ctc_probs[0][cnt].item() - cnt += 1 - probs_hat = torch.from_numpy(numpy.array(probs_hat)) - - # mask ctc outputs based on ctc probabilities - p_thres = recog_args.maskctc_probability_threshold - mask_idx = torch.nonzero(probs_hat[y_idx] < p_thres).squeeze(-1) - confident_idx = torch.nonzero(probs_hat[y_idx] >= p_thres).squeeze(-1) - mask_num = len(mask_idx) - - y_in = torch.zeros(1, len(y_idx), dtype=torch.long) + self.mask_token - y_in[0][confident_idx] = y_hat[y_idx][confident_idx] - - logging.info("ctc:{}".format(n2s(y_in[0].tolist()))) - - # iterative decoding - if not mask_num == 0: - K = recog_args.maskctc_n_iterations - num_iter = K if mask_num >= K and K > 0 else mask_num - - for t in range(num_iter - 1): - pred, _ = self.decoder(y_in, None, h, None) - pred_score, pred_id = pred[0][mask_idx].max(dim=-1) - cand = torch.topk(pred_score, mask_num // num_iter, -1)[1] - y_in[0][mask_idx[cand]] = pred_id[cand] - mask_idx = torch.nonzero(y_in[0] == self.mask_token).squeeze(-1) - - logging.info("msk:{}".format(n2s(y_in[0].tolist()))) - - # predict leftover masks (|masks| < mask_num // num_iter) - pred, pred_mask = self.decoder(y_in, None, h, None) - y_in[0][mask_idx] = pred[0][mask_idx].argmax(dim=-1) - - logging.info("msk:{}".format(n2s(y_in[0].tolist()))) - - ret = y_in.tolist()[0] - hyp = {"score": 0.0, "yseq": [self.sos] + ret + [self.eos]} - - return [hyp] diff --git a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/docs/README.en.md b/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/docs/README.en.md deleted file mode 100644 index 6fa77f3ec1cfa55a6e91c16eed2012f05f694b98..0000000000000000000000000000000000000000 --- a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/docs/README.en.md +++ /dev/null @@ -1,102 +0,0 @@ -
    - -

    Retrieval-based-Voice-Conversion-WebUI

    -An easy-to-use SVC framework based on VITS.

    - -[![madewithlove](https://forthebadge.com/images/badges/built-with-love.svg)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI) - -
    - -[![Open In Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&color=525252)](https://colab.research.google.com/github/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Retrieval_based_Voice_Conversion_WebUI.ipynb) -[![Licence](https://img.shields.io/github/license/liujing04/Retrieval-based-Voice-Conversion-WebUI?style=for-the-badge)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/%E4%BD%BF%E7%94%A8%E9%9C%80%E9%81%B5%E5%AE%88%E7%9A%84%E5%8D%8F%E8%AE%AE-LICENSE.txt) -[![Huggingface](https://img.shields.io/badge/🤗%20-Spaces-yellow.svg?style=for-the-badge)](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/) - -[![Discord](https://img.shields.io/badge/RVC%20Developers-Discord-7289DA?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/HcsmBBGyVk) - -
    - ------- -[**Changelog**](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Changelog_CN.md) - -[**English**](./README.en.md) | [**中文简体**](../README.md) | [**日本語**](./README.ja.md) | [**한국어**](./README.ko.md) ([**韓國語**](./README.ko.han.md)) - -> Check our [Demo Video](https://www.bilibili.com/video/BV1pm4y1z7Gm/) here! - -> Realtime Voice Conversion Software using RVC : [w-okada/voice-changer](https://github.com/w-okada/voice-changer) - -> The dataset for the pre-training model uses nearly 50 hours of high quality VCTK open source dataset. - -> High quality licensed song datasets will be added to training-set one after another for your use, without worrying about copyright infringement. -## Summary -This repository has the following features: -+ Reduce tone leakage by replacing source feature to training-set feature using top1 retrieval; -+ Easy and fast training, even on relatively poor graphics cards; -+ Training with a small amount of data also obtains relatively good results (>=10min low noise speech recommended); -+ Supporting model fusion to change timbres (using ckpt processing tab->ckpt merge); -+ Easy-to-use Webui interface; -+ Use the UVR5 model to quickly separate vocals and instruments. -## Preparing the environment -We recommend you install the dependencies through poetry. - -The following commands need to be executed in the environment of Python version 3.8 or higher: -```bash -# Install PyTorch-related core dependencies, skip if installed -# Reference: https://pytorch.org/get-started/locally/ -pip install torch torchvision torchaudio - -#For Windows + Nvidia Ampere Architecture(RTX30xx), you need to specify the cuda version corresponding to pytorch according to the experience of https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/issues/21 -#pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117 - -# Install the Poetry dependency management tool, skip if installed -# Reference: https://python-poetry.org/docs/#installation -curl -sSL https://install.python-poetry.org | python3 - - -# Install the project dependencies -poetry install -``` -You can also use pip to install the dependencies - -**Notice**: `faiss 1.7.2` will raise Segmentation Fault: 11 under `MacOS`, please use `pip install faiss-cpu==1.7.0` if you use pip to install it manually. - -```bash -pip install -r requirements.txt -``` - -## Preparation of other Pre-models -RVC requires other pre-models to infer and train. - -You need to download them from our [Huggingface space](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/). - -Here's a list of Pre-models and other files that RVC needs: -```bash -hubert_base.pt - -./pretrained - -./uvr5_weights - -#If you are using Windows, you may also need this dictionary, skip if FFmpeg is installed -ffmpeg.exe -``` -Then use this command to start Webui: -```bash -python infer-web.py -``` -If you are using Windows, you can download and extract `RVC-beta.7z` to use RVC directly and use `go-web.bat` to start Webui. - -There's also a tutorial on RVC in Chinese and you can check it out if needed. - -## Credits -+ [ContentVec](https://github.com/auspicious3000/contentvec/) -+ [VITS](https://github.com/jaywalnut310/vits) -+ [HIFIGAN](https://github.com/jik876/hifi-gan) -+ [Gradio](https://github.com/gradio-app/gradio) -+ [FFmpeg](https://github.com/FFmpeg/FFmpeg) -+ [Ultimate Vocal Remover](https://github.com/Anjok07/ultimatevocalremovergui) -+ [audio-slicer](https://github.com/openvpi/audio-slicer) -## Thanks to all contributors for their efforts - - - - - diff --git a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/gui.py b/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/gui.py deleted file mode 100644 index 3101cf303865883ce592728d7bf4ea699bb8cf2c..0000000000000000000000000000000000000000 --- a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/gui.py +++ /dev/null @@ -1,570 +0,0 @@ -import os, sys - -now_dir = os.getcwd() -sys.path.append(now_dir) -import PySimpleGUI as sg -import sounddevice as sd -import noisereduce as nr -import numpy as np -from fairseq import checkpoint_utils -import librosa, torch, pyworld, faiss, time, threading -import torch.nn.functional as F -import torchaudio.transforms as tat -import scipy.signal as signal - -# import matplotlib.pyplot as plt -from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono -from i18n import I18nAuto - -i18n = I18nAuto() -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - - -class RVC: - def __init__( - self, key, hubert_path, pth_path, index_path, npy_path, index_rate - ) -> None: - """ - 初始化 - """ - try: - self.f0_up_key = key - self.time_step = 160 / 16000 * 1000 - self.f0_min = 50 - self.f0_max = 1100 - self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700) - self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700) - self.sr = 16000 - self.window = 160 - if index_rate != 0: - self.index = faiss.read_index(index_path) - # self.big_npy = np.load(npy_path) - self.big_npy = index.reconstruct_n(0, self.index.ntotal) - print("index search enabled") - self.index_rate = index_rate - model_path = hubert_path - print("load model(s) from {}".format(model_path)) - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [model_path], - suffix="", - ) - self.model = models[0] - self.model = self.model.to(device) - self.model = self.model.half() - self.model.eval() - cpt = torch.load(pth_path, map_location="cpu") - self.tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - self.if_f0 = cpt.get("f0", 1) - if self.if_f0 == 1: - self.net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=True) - else: - self.net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - del self.net_g.enc_q - print(self.net_g.load_state_dict(cpt["weight"], strict=False)) - self.net_g.eval().to(device) - self.net_g.half() - except Exception as e: - print(e) - - def get_f0(self, x, f0_up_key, inp_f0=None): - x_pad = 1 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - f0, t = pyworld.harvest( - x.astype(np.double), - fs=self.sr, - f0_ceil=f0_max, - f0_floor=f0_min, - frame_period=10, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr) - f0 = signal.medfilt(f0, 3) - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)].shape[0] - f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)] = replace_f0[:shape] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak # 1-0 - - def infer(self, feats: torch.Tensor) -> np.ndarray: - """ - 推理函数 - """ - audio = feats.clone().cpu().numpy() - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).fill_(False) - inputs = { - "source": feats.half().to(device), - "padding_mask": padding_mask.to(device), - "output_layer": 9, # layer 9 - } - torch.cuda.synchronize() - with torch.no_grad(): - logits = self.model.extract_features(**inputs) - feats = self.model.final_proj(logits[0]) - - ####索引优化 - if hasattr(self, "index") and hasattr(self, "big_npy") and self.index_rate != 0: - npy = feats[0].cpu().numpy().astype("float32") - - # _, I = self.index.search(npy, 1) - # npy = self.big_npy[I.squeeze()].astype("float16") - - score, ix = index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1).astype( - "float16" - ) - - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(device) * self.index_rate - + (1 - self.index_rate) * feats - ) - else: - print("index search FAIL or disabled") - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - torch.cuda.synchronize() - print(feats.shape) - if self.if_f0 == 1: - pitch, pitchf = self.get_f0(audio, self.f0_up_key) - p_len = min(feats.shape[1], 13000, pitch.shape[0]) # 太大了爆显存 - else: - pitch, pitchf = None, None - p_len = min(feats.shape[1], 13000) # 太大了爆显存 - torch.cuda.synchronize() - # print(feats.shape,pitch.shape) - feats = feats[:, :p_len, :] - if self.if_f0 == 1: - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - pitch = torch.LongTensor(pitch).unsqueeze(0).to(device) - pitchf = torch.FloatTensor(pitchf).unsqueeze(0).to(device) - p_len = torch.LongTensor([p_len]).to(device) - ii = 0 # sid - sid = torch.LongTensor([ii]).to(device) - with torch.no_grad(): - if self.if_f0 == 1: - infered_audio = ( - self.net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0] - .data.cpu() - .float() - ) - else: - infered_audio = ( - self.net_g.infer(feats, p_len, sid)[0][0, 0].data.cpu().float() - ) - torch.cuda.synchronize() - return infered_audio - - -class Config: - def __init__(self) -> None: - self.hubert_path: str = "" - self.pth_path: str = "" - self.index_path: str = "" - self.npy_path: str = "" - self.pitch: int = 12 - self.samplerate: int = 44100 - self.block_time: float = 1.0 # s - self.buffer_num: int = 1 - self.threhold: int = -30 - self.crossfade_time: float = 0.08 - self.extra_time: float = 0.04 - self.I_noise_reduce = False - self.O_noise_reduce = False - self.index_rate = 0.3 - - -class GUI: - def __init__(self) -> None: - self.config = Config() - self.flag_vc = False - - self.launcher() - - def launcher(self): - sg.theme("LightBlue3") - input_devices, output_devices, _, _ = self.get_devices() - layout = [ - [ - sg.Frame( - title=i18n("加载模型"), - layout=[ - [ - sg.Input(default_text="hubert_base.pt", key="hubert_path"), - sg.FileBrowse(i18n("Hubert模型")), - ], - [ - sg.Input(default_text="TEMP\\atri.pth", key="pth_path"), - sg.FileBrowse(i18n("选择.pth文件")), - ], - [ - sg.Input( - default_text="TEMP\\added_IVF512_Flat_atri_baseline_src_feat.index", - key="index_path", - ), - sg.FileBrowse(i18n("选择.index文件")), - ], - [ - sg.Input( - default_text="你不需要填写这个You don't need write this.", - key="npy_path", - ), - sg.FileBrowse(i18n("选择.npy文件")), - ], - ], - ) - ], - [ - sg.Frame( - layout=[ - [ - sg.Text(i18n("输入设备")), - sg.Combo( - input_devices, - key="sg_input_device", - default_value=input_devices[sd.default.device[0]], - ), - ], - [ - sg.Text(i18n("输出设备")), - sg.Combo( - output_devices, - key="sg_output_device", - default_value=output_devices[sd.default.device[1]], - ), - ], - ], - title=i18n("音频设备(请使用同种类驱动)"), - ) - ], - [ - sg.Frame( - layout=[ - [ - sg.Text(i18n("响应阈值")), - sg.Slider( - range=(-60, 0), - key="threhold", - resolution=1, - orientation="h", - default_value=-30, - ), - ], - [ - sg.Text(i18n("音调设置")), - sg.Slider( - range=(-24, 24), - key="pitch", - resolution=1, - orientation="h", - default_value=12, - ), - ], - [ - sg.Text(i18n("Index Rate")), - sg.Slider( - range=(0.0, 1.0), - key="index_rate", - resolution=0.01, - orientation="h", - default_value=0.5, - ), - ], - ], - title=i18n("常规设置"), - ), - sg.Frame( - layout=[ - [ - sg.Text(i18n("采样长度")), - sg.Slider( - range=(0.1, 3.0), - key="block_time", - resolution=0.1, - orientation="h", - default_value=1.0, - ), - ], - [ - sg.Text(i18n("淡入淡出长度")), - sg.Slider( - range=(0.01, 0.15), - key="crossfade_length", - resolution=0.01, - orientation="h", - default_value=0.08, - ), - ], - [ - sg.Text(i18n("额外推理时长")), - sg.Slider( - range=(0.05, 3.00), - key="extra_time", - resolution=0.01, - orientation="h", - default_value=0.05, - ), - ], - [ - sg.Checkbox(i18n("输入降噪"), key="I_noise_reduce"), - sg.Checkbox(i18n("输出降噪"), key="O_noise_reduce"), - ], - ], - title=i18n("性能设置"), - ), - ], - [ - sg.Button(i18n("开始音频转换"), key="start_vc"), - sg.Button(i18n("停止音频转换"), key="stop_vc"), - sg.Text(i18n("推理时间(ms):")), - sg.Text("0", key="infer_time"), - ], - ] - - self.window = sg.Window("RVC - GUI", layout=layout) - self.event_handler() - - def event_handler(self): - while True: - event, values = self.window.read() - if event == sg.WINDOW_CLOSED: - self.flag_vc = False - exit() - if event == "start_vc" and self.flag_vc == False: - self.set_values(values) - print(str(self.config.__dict__)) - print("using_cuda:" + str(torch.cuda.is_available())) - self.start_vc() - if event == "stop_vc" and self.flag_vc == True: - self.flag_vc = False - - def set_values(self, values): - self.set_devices(values["sg_input_device"], values["sg_output_device"]) - self.config.hubert_path = values["hubert_path"] - self.config.pth_path = values["pth_path"] - self.config.index_path = values["index_path"] - self.config.npy_path = values["npy_path"] - self.config.threhold = values["threhold"] - self.config.pitch = values["pitch"] - self.config.block_time = values["block_time"] - self.config.crossfade_time = values["crossfade_length"] - self.config.extra_time = values["extra_time"] - self.config.I_noise_reduce = values["I_noise_reduce"] - self.config.O_noise_reduce = values["O_noise_reduce"] - self.config.index_rate = values["index_rate"] - - def start_vc(self): - torch.cuda.empty_cache() - self.flag_vc = True - self.block_frame = int(self.config.block_time * self.config.samplerate) - self.crossfade_frame = int(self.config.crossfade_time * self.config.samplerate) - self.sola_search_frame = int(0.012 * self.config.samplerate) - self.delay_frame = int(0.01 * self.config.samplerate) # 往前预留0.02s - self.extra_frame = int(self.config.extra_time * self.config.samplerate) - self.rvc = None - self.rvc = RVC( - self.config.pitch, - self.config.hubert_path, - self.config.pth_path, - self.config.index_path, - self.config.npy_path, - self.config.index_rate, - ) - self.input_wav: np.ndarray = np.zeros( - self.extra_frame - + self.crossfade_frame - + self.sola_search_frame - + self.block_frame, - dtype="float32", - ) - self.output_wav: torch.Tensor = torch.zeros( - self.block_frame, device=device, dtype=torch.float32 - ) - self.sola_buffer: torch.Tensor = torch.zeros( - self.crossfade_frame, device=device, dtype=torch.float32 - ) - self.fade_in_window: torch.Tensor = torch.linspace( - 0.0, 1.0, steps=self.crossfade_frame, device=device, dtype=torch.float32 - ) - self.fade_out_window: torch.Tensor = 1 - self.fade_in_window - self.resampler1 = tat.Resample( - orig_freq=self.config.samplerate, new_freq=16000, dtype=torch.float32 - ) - self.resampler2 = tat.Resample( - orig_freq=self.rvc.tgt_sr, - new_freq=self.config.samplerate, - dtype=torch.float32, - ) - thread_vc = threading.Thread(target=self.soundinput) - thread_vc.start() - - def soundinput(self): - """ - 接受音频输入 - """ - with sd.Stream( - callback=self.audio_callback, - blocksize=self.block_frame, - samplerate=self.config.samplerate, - dtype="float32", - ): - while self.flag_vc: - time.sleep(self.config.block_time) - print("Audio block passed.") - print("ENDing VC") - - def audio_callback( - self, indata: np.ndarray, outdata: np.ndarray, frames, times, status - ): - """ - 音频处理 - """ - start_time = time.perf_counter() - indata = librosa.to_mono(indata.T) - if self.config.I_noise_reduce: - indata[:] = nr.reduce_noise(y=indata, sr=self.config.samplerate) - - """noise gate""" - frame_length = 2048 - hop_length = 1024 - rms = librosa.feature.rms( - y=indata, frame_length=frame_length, hop_length=hop_length - ) - db_threhold = librosa.amplitude_to_db(rms, ref=1.0)[0] < self.config.threhold - # print(rms.shape,db.shape,db) - for i in range(db_threhold.shape[0]): - if db_threhold[i]: - indata[i * hop_length : (i + 1) * hop_length] = 0 - self.input_wav[:] = np.append(self.input_wav[self.block_frame :], indata) - - # infer - print("input_wav:" + str(self.input_wav.shape)) - # print('infered_wav:'+str(infer_wav.shape)) - infer_wav: torch.Tensor = self.resampler2( - self.rvc.infer(self.resampler1(torch.from_numpy(self.input_wav))) - )[-self.crossfade_frame - self.sola_search_frame - self.block_frame :].to( - device - ) - print("infer_wav:" + str(infer_wav.shape)) - - # SOLA algorithm from https://github.com/yxlllc/DDSP-SVC - cor_nom = F.conv1d( - infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame], - self.sola_buffer[None, None, :], - ) - cor_den = torch.sqrt( - F.conv1d( - infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame] - ** 2, - torch.ones(1, 1, self.crossfade_frame, device=device), - ) - + 1e-8 - ) - sola_offset = torch.argmax(cor_nom[0, 0] / cor_den[0, 0]) - print("sola offset: " + str(int(sola_offset))) - - # crossfade - self.output_wav[:] = infer_wav[sola_offset : sola_offset + self.block_frame] - self.output_wav[: self.crossfade_frame] *= self.fade_in_window - self.output_wav[: self.crossfade_frame] += self.sola_buffer[:] - if sola_offset < self.sola_search_frame: - self.sola_buffer[:] = ( - infer_wav[ - -self.sola_search_frame - - self.crossfade_frame - + sola_offset : -self.sola_search_frame - + sola_offset - ] - * self.fade_out_window - ) - else: - self.sola_buffer[:] = ( - infer_wav[-self.crossfade_frame :] * self.fade_out_window - ) - - if self.config.O_noise_reduce: - outdata[:] = np.tile( - nr.reduce_noise( - y=self.output_wav[:].cpu().numpy(), sr=self.config.samplerate - ), - (2, 1), - ).T - else: - outdata[:] = self.output_wav[:].repeat(2, 1).t().cpu().numpy() - total_time = time.perf_counter() - start_time - self.window["infer_time"].update(int(total_time * 1000)) - print("infer time:" + str(total_time)) - - def get_devices(self, update: bool = True): - """获取设备列表""" - if update: - sd._terminate() - sd._initialize() - devices = sd.query_devices() - hostapis = sd.query_hostapis() - for hostapi in hostapis: - for device_idx in hostapi["devices"]: - devices[device_idx]["hostapi_name"] = hostapi["name"] - input_devices = [ - f"{d['name']} ({d['hostapi_name']})" - for d in devices - if d["max_input_channels"] > 0 - ] - output_devices = [ - f"{d['name']} ({d['hostapi_name']})" - for d in devices - if d["max_output_channels"] > 0 - ] - input_devices_indices = [ - d["index"] for d in devices if d["max_input_channels"] > 0 - ] - output_devices_indices = [ - d["index"] for d in devices if d["max_output_channels"] > 0 - ] - return ( - input_devices, - output_devices, - input_devices_indices, - output_devices_indices, - ) - - def set_devices(self, input_device, output_device): - """设置输出设备""" - ( - input_devices, - output_devices, - input_device_indices, - output_device_indices, - ) = self.get_devices() - sd.default.device[0] = input_device_indices[input_devices.index(input_device)] - sd.default.device[1] = output_device_indices[ - output_devices.index(output_device) - ] - print("input device:" + str(sd.default.device[0]) + ":" + str(input_device)) - print("output device:" + str(sd.default.device[1]) + ":" + str(output_device)) - - -gui = GUI() diff --git a/spaces/shiyuleixia/yolov8-segmentation/app.py b/spaces/shiyuleixia/yolov8-segmentation/app.py deleted file mode 100644 index fa0c845018fa95f7c7585a11d17d465ad745c292..0000000000000000000000000000000000000000 --- a/spaces/shiyuleixia/yolov8-segmentation/app.py +++ /dev/null @@ -1,147 +0,0 @@ -import gradio as gr -import torch -from ultralyticsplus import YOLO, render_result -import numpy as np -from PIL import Image -import cv2 - -model_names = [ - "yolov8n-seg.pt", - "yolov8s-seg.pt", - "yolov8m-seg.pt", - "yolov8l-seg.pt", - "yolov8x-seg.pt", -] - -current_model_name = "yolov8m-seg.pt" -model = YOLO(current_model_name) - -def sort_instance_masks_by_centroid(instances_mask, reverse=False): - # Calculate centroid of each instance mask - centroids = [] - for mask in instances_mask: - # Find contours of the mask - mask_np = mask.astype(np.uint8) - #mask_np[mask_np !=0] = 255 - contours, hierarchy = cv2.findContours(mask_np, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) - - # Calculate moments of the contour - moments = cv2.moments(contours[0]) - - # Calculate centroid coordinates - c_x = int(moments["m10"] / moments["m00"]) - c_y = int(moments["m01"] / moments["m00"]) - centroids.append((c_x, c_y)) - - # Sort instance masks by centroid coordinates - sorted_instances_mask = [instance_mask for _, instance_mask in - sorted(zip(centroids, instances_mask), reverse=reverse)] - - return sorted_instances_mask - -def visualize_masks(masks): - masks = masks.detach().cpu().numpy() - height, width = masks.shape[1:] - # 计算有多少个 mask - num_masks = masks.shape[0] - masks = sort_instance_masks_by_centroid(masks) - - - # 创建一个空白图像,背景颜色为黑色 sfasf - img = Image.new('RGB', (width, height),(0,0,0)) - #img.putpalette([0, 0, 0] * 256) - img_array = np.array(img) - colors = [] - - # 将每个 mask 标记为不同的颜色 - for i in range(num_masks): - color = np.random.randint(0, 256, size=3) - colors.append(tuple(color)) - #colorimg.paste - #colorimg = Image.new('RGB', (width,height), color=tuple(np.random.randint(0, 256, size=3))) - #mask_img_tmp = Image.fromarray(masks[i]).convert('RGB') - #mask_array = Image.fromarray(masks[i]) - img_array[masks[i] != 0,:] = color - #mask_img = mask_img.putpalette(color) - #img.paste(mask_img,(0,0),mask_img_tmp) - - #img.putpalette(color + (0,) * 253) - - # 将 mask 根据颜色映射显示为 RGB 图像 - img_rgb = Image.fromarray(img_array) - return img_rgb,colors - - - -def yolov8_inference( - image = None, - model_name = None, - dest_width = 512, - dest_height = 512, - conf_threshold = 0.25, - iou_threshold = 0.45, -): - """ - YOLOv8 inference function - Args: - image: Input image - model_name: Name of the model - image_size: Image size - conf_threshold: Confidence threshold - iou_threshold: IOU threshold - Returns: - Rendered image - """ - global model - global current_model_name - if model_name != current_model_name: - model = YOLO(model_name) - current_model_name = model_name - model.overrides["conf"] = conf_threshold - model.overrides["iou"] = iou_threshold - model.overrides["classes"] = [0] - results = model.predict(image) - renders = [] - colorarray = [] - for image_results in model.predict(image): - #print("predict results: ",type(image_results.masks)) - #render = render_result( - # model=model, image=image, result=image_results - #) - render ,colors= visualize_masks(image_results.masks.data) - render = render.resize((dest_width,dest_height)) - renders.append(render) - colorarray.append(colors) - - return renders[0],','.join(['#%02x%02x%02x' % row for row in colorarray[0]]) - -inputs = [ - gr.Image(type="filepath", label="Input Image"), - gr.Dropdown( - model_names, - value=current_model_name, - label="Model type", - ), - gr.inputs.Slider(minimum=128, maximum=2048, step=64, default=512, label="Width"), - gr.inputs.Slider(minimum=128, maximum=2048, step=64, default=512, label="Height"), - - gr.Slider( - minimum=0.0, maximum=1.0, value=0.25, step=0.05, label="Confidence Threshold" - ), - gr.Slider(minimum=0.0, maximum=1.0, value=0.45, step=0.05, label="IOU Threshold"), -] - -outputs = [gr.Image(type="filepath", label="Output Image"),gr.Textbox(label="Output Text")] -title = "Ultralytics YOLOv8 Segmentation For HumanBody Only Now" - - -demo_app = gr.Interface( - fn=yolov8_inference, - inputs=inputs, - outputs=outputs, - title=title, - examples=None, - cache_examples=False, - theme="default", -) -demo_app.launch(debug=True, enable_queue=True) \ No newline at end of file diff --git a/spaces/showlab/Show-1/README.md b/spaces/showlab/Show-1/README.md deleted file mode 100644 index 8a7c28e8dd2e134304b82eb0b1fecbc06a2da74f..0000000000000000000000000000000000000000 --- a/spaces/showlab/Show-1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Show-1 -emoji: 🎬 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/simonwalo/Histwords-Webapp/README.md b/spaces/simonwalo/Histwords-Webapp/README.md deleted file mode 100644 index 241123b30a8c58591bcbd55352b997a86356e123..0000000000000000000000000000000000000000 --- a/spaces/simonwalo/Histwords-Webapp/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Histwords Webapp -emoji: 📉 -colorFrom: yellow -colorTo: blue -sdk: streamlit -sdk_version: 1.10.0 -app_file: Home.py -pinned: false -license: unknown ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Alight Motion Pro Mod APK 5.1.1 - The Ultimate Video Editing Tool for Android.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Alight Motion Pro Mod APK 5.1.1 - The Ultimate Video Editing Tool for Android.md deleted file mode 100644 index 4a38a20eb08867587cd73fecb6d0c5d7b5fa2f2b..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Alight Motion Pro Mod APK 5.1.1 - The Ultimate Video Editing Tool for Android.md +++ /dev/null @@ -1,99 +0,0 @@ -
    -

    Alight Motion 5.1 1 Mod APK Download: A Powerful Video Editing App

    -

    Do you love making videos and animations? Do you want to unleash your creativity and impress your audience? If yes, then you need a powerful and easy-to-use video editing app that can help you create stunning videos and animations. One such app is Alight Motion.

    -

    alight motion 5.1 1 mod apk download


    Download Ziphttps://ssurll.com/2uNQoC



    -

    What is Alight Motion?

    -

    Alight Motion is a professional video and animation editor app that allows you to create amazing videos and animations with your smartphone. You can edit your videos with multiple layers, add effects, adjust colors, animate objects, and much more. You can also export your videos in high-quality formats such as MP4, GIF, or PNG.

    -

    Features of Alight Motion

    -

    Alight Motion has many features that make it one of the best video editing apps available. Here are some of them:

    -

    Multiple layers of graphics, video, and audio

    -

    You can add as many layers as you want to your project and edit them individually. You can also group layers together for easier management. You can adjust the opacity, blending mode, position, rotation, scale, and other properties of each layer.

    -

    Vector and bitmap support

    -

    You can import and edit vector graphics and bitmap images in Alight Motion. You can also create your own vector shapes and text using the built-in tools. You can edit the paths, fill, stroke, gradient, shadow, and other attributes of vector graphics.

    -

    Keyframe animation

    -

    You can animate any layer or property using keyframes. You can set the timing, easing, interpolation, and other parameters of each keyframe. You can also use motion blur, motion paths, and velocity graphs to make your animations smoother and more realistic.

    -

    Color adjustment and effects

    -

    You can enhance the look of your videos and animations by applying color adjustment tools such as brightness, contrast, saturation, hue, temperature, tint, curves, levels, etc. You can also add various effects such as blur, glow, distortion, noise, gradient, etc.

    -

    Export in MP4, GIF, or PNG formats

    -

    You can export your videos and animations in different formats depending on your needs. You can choose the resolution, frame rate, bitrate, quality, and other options for each format. You can also preview your export before saving it.

    -

    alight motion pro mod apk download 5.1 1
    -alight motion 5.1 1 mod apk without watermark
    -alight motion video and animation editor mod apk 5.1 1
    -alight motion premium mod apk download 5.1 1
    -alight motion mod apk latest version 5.1 1
    -alight motion 5.1 1 mod apk unlocked everything
    -alight motion no watermark mod apk download 5.1 1
    -alight motion full unlocked mod apk 5.1 1
    -alight motion paid mod apk free download 5.1 1
    -alight motion hack mod apk download 5.1 1
    -alight motion pro apk download for android 5.1 1
    -alight motion mod apk free subscription 5.1 1
    -alight motion cracked mod apk download 5.1 1
    -alight motion video editor mod apk download 5.1 1
    -alight motion mod apk all fonts unlocked 5.1 1
    -alight motion pro mod apk revdl 5.1 1
    -alight motion mod apk rexdl 5.1 1
    -alight motion pro mod apk happymod 5.1 1
    -alight motion pro mod apk android 11
    -alight motion pro mod apk android republic
    -alight motion pro mod apk android zone
    -alight motion pro mod apk android palace
    -alight motion pro mod apk android oyun club
    -alight motion pro mod apk android offline
    -alight motion pro mod apk android online
    -alight motion pro mod apk android games room
    -alight motion pro mod apk android games hack
    -alight motion pro mod apk android games blogspot
    -alight motion pro mod apk android games heaven
    -alight motion pro mod apk android games hvga
    -alight motion pro mod apk android games and apps center
    -alight motion pro mod apk android games free download full version
    -alight motion pro mod apk android games free download for mobile phone
    -alight motion pro mod apk android games free download for tablet pc
    -alight motion pro mod apk android games free download for samsung galaxy y duos s6102b

    -

    What is Alight Motion Mod APK?

    -

    Alight Motion Mod APK is a modified version of the original Alight Motion app that gives you access to some premium features for free. These features are normally available only to paid subscribers of Alight Motion.

    -

    Benefits of Alight Motion Mod APK

    -

    Here are some of the benefits of using Alight Motion Mod APK:

    -

    No watermark

    -

    When you export your videos and animations using the original Alight Motion app, you will see a watermark on them that says "Made with Alight Motion". This watermark can be annoying and unprofessional. However, with Alight Motion Mod APK, you can remove this watermark completely and have a clean output.

    -

    Premium features unlocked

    -

    Some of the features of Alight Motion are only available to paid subscribers. These include advanced effects such as chroma keying (green screen), light leaks (lens flare), glitch (digital distortion), etc. With Alight Motion Mod APK, you can use these features for free and create amazing videos and animations with them.

    -

    No ads

    -

    The original Alight Motion app may show you some ads while you are using it. These ads can be distracting and annoying. However, with Alight Motion Mod APK, you can enjoy an ad-free experience and focus on your creativity.

    -

    How to download and install Alight Motion Mod APK?

    -

    If you want to download and install Alight Motion Mod APK on your Android device, you need to follow these steps:

    -

    Steps to download and install Alight Motion Mod APK

    -
      -
    1. First, you need to enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
    2. -
    3. Next, you need to download the Alight Motion Mod APK file from a trusted source. You can use this link to download the latest version of Alight Motion Mod APK.
    4. -
    5. After downloading the file, locate it in your file manager and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to complete.
    6. -
    7. Once the installation is done, you can launch the Alight Motion Mod APK app from your app drawer or home screen and start creating amazing videos and animations.
    8. -
    -

    Conclusion

    -

    Alight Motion is a powerful and easy-to-use video editing app that allows you to create stunning videos and animations with your smartphone. You can edit your videos with multiple layers, add effects, adjust colors, animate objects, and much more. You can also export your videos in high-quality formats such as MP4, GIF, or PNG.

    -

    Alight Motion Mod APK is a modified version of the original Alight Motion app that gives you access to some premium features for free. These features include no watermark, premium effects unlocked, and no ads. You can download and install Alight Motion Mod APK on your Android device by following the steps mentioned above.

    -

    If you are looking for a video editing app that can help you create amazing videos and animations with your smartphone, you should try Alight Motion Mod APK. It is one of the best video editing apps available for Android users.

    -

    FAQs

    -

    Here are some of the frequently asked questions about Alight Motion Mod APK:

    -
      -
    1. Is Alight Motion Mod APK safe to use?
    2. -

      Yes, Alight Motion Mod APK is safe to use as long as you download it from a trusted source. However, you should always be careful when installing apps from unknown sources and scan them for viruses or malware before installing them.

      -
    3. Do I need to root my device to use Alight Motion Mod APK?
    4. -

      No, you do not need to root your device to use Alight Motion Mod APK. You can install it on any Android device that meets the minimum requirements of the app.

      -
    5. What are the minimum requirements of Alight Motion Mod APK?
    6. -

      The minimum requirements of Alight Motion Mod APK are as follows:

      -
        -
      • Android version: 6.0 or higher
      • -
      • RAM: 2 GB or more
      • -
      • Storage: 100 MB or more
      • -
      • Internet connection: Required for downloading and installing the app
      • -
      -
    7. Can I use Alight Motion Mod APK on PC?
    8. -

      No, you cannot use Alight Motion Mod APK on PC directly. However, you can use an Android emulator such as Bluestacks or Nox Player to run Alight Motion Mod APK on your PC. You will need to download and install the emulator first and then follow the same steps as mentioned above for installing the app on your PC.

      -
    9. Can I update Alight Motion Mod APK?
    10. -

      No, you cannot update Alight Motion Mod APK from the Google Play Store or any other source. If you want to update the app, you will need to uninstall the current version and download and install the latest version of Alight Motion Mod APK from a trusted source.

      -
    - : https://www.mediafire.com/file/9w7q8v8z5y9jy0n/Alight_Motion_5.1_1_Mod_APK.apk/file

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download FS 21 APK and Start Your Farming Adventure Today.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download FS 21 APK and Start Your Farming Adventure Today.md deleted file mode 100644 index a3b9335e760aa064eb4902615a189930f555ec61..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download FS 21 APK and Start Your Farming Adventure Today.md +++ /dev/null @@ -1,171 +0,0 @@ - -

    FS 21 APK: A Farming Simulator Game for Android Devices

    -

    If you are looking for a realistic and immersive farming simulator game for your Android device, you might want to check out FS 21 APK. This game lets you manage your own virtual farm, use modern harvesters, tractors, and other machines to grow and harvest crops, raise animals, sell your products, and expand your fields. In this article, we will tell you what FS 21 APK is, why you should play it, and what are some of the alternatives to it.

    -

    What is FS 21 APK?

    -

    FS 21 APK is the short name for Farm Sim 21 PRO - Tractor Farming Simulator 3D APK, which is an Android game developed by War Damage Games. It is a farm simulation game that simulates the life of a modern farmer in a 3D environment. You can choose from a selection of crops to grow, such as wheat, maize, rice, cotton, sugarcane, and more. You can also buy and raise animals like cows, chickens, pigs, and sell their milk, eggs, and meat for a profit. You can drive realistic farming machines like tractors, harvesters, loaders, cultivators, and weeders to plow, seed, water, fertilize, and harvest your crops. You can also explore your surroundings in an open-world simulation with dynamic weather and day/night cycles.

    -

    fs 21 apk


    Download Filehttps://ssurll.com/2uNQuc



    -

    Features of FS 21 APK

    -

    Some of the features of FS 21 APK that make it stand out from other farming simulator games are:

    -
      -
    • It has stunning 3D graphics and animations that make farming even more exciting.
    • -
    • It has an AUTO mode that takes over farming duties when you need a break.
    • -
    • It has a geo map that helps you track your current position and destination.
    • -
    • It has a level system that unlocks rewards and new vehicles as you progress.
    • -
    • It has up to 21 vehicles to choose from, each with different functions and capabilities.
    • -
    • It is free to play and does not require any in-app purchases.
    • -
    • It is offline playable and does not need an internet connection.
    • -
    -

    How to download and install FS 21 APK

    -

    To download and install FS 21 APK on your Android device, you need to follow these steps:

    -
      -
    1. Go to the official website of FS 21 APK or any other trusted source that provides the download link for the game.
    2. -
    3. Click on the download button and wait for the file to be downloaded on your device.
    4. -
    5. Once the file is downloaded, locate it in your file manager and tap on it to start the installation process.
    6. -
    7. If you see a warning message that says "Install blocked", go to your device settings and enable the option "Allow installation from unknown sources".
    8. -
    9. Follow the instructions on the screen and complete the installation process.
    10. -
    11. Launch the game from your app drawer and enjoy playing it.
    12. -
    -

    Why should you play FS 21 APK?

    -

    FS 21 APK is not just a game but also a learning experience that teaches you about the modern and traditional techniques used by real farmers. By playing this game, you can:

    -

    Benefits of playing FS 21 APK

    -
      -
    • Learn how to manage your own farm and make smart decisions.
    • -

      Benefits of playing FS 21 APK

      -
        -
      • Learn how to manage your own farm and make smart decisions.
      • -
      • Improve your skills in driving, operating, and maintaining various farming machines.
      • -
      • Enjoy the realistic and relaxing atmosphere of farming and nature.
      • -
      • Have fun with the different challenges and missions that test your farming abilities.
      • -
      • Share your achievements and screenshots with your friends and other players online.
      • -
      -

      Tips and tricks for playing FS 21 APK

      -

      To make the most out of your farming experience, you can follow these tips and tricks:

      -

      fs 21 apk download free
      -fs 21 apk mod unlimited money
      -fs 21 apk obb data
      -fs 21 apk latest version
      -fs 21 apk android 1
      -fs 21 apk offline
      -fs 21 apk revdl
      -fs 21 apk hack
      -fs 21 apk pure
      -fs 21 apk uptodown
      -fs 21 apk andropalace
      -fs 21 apk rexdl
      -fs 21 apk for pc
      -fs 21 apk no verification
      -fs 21 apk highly compressed
      -fs 21 apk farming simulator
      -fs 21 apk tractor driving
      -fs 21 apk realistic graphics
      -fs 21 apk pro version
      -fs 21 apk full unlocked
      -fs 21 apk with cheats
      -fs 21 apk new update
      -fs 21 apk original
      -fs 21 apk mirror
      -fs 21 apk mob.org
      -fs 21 apk gameplay
      -fs 21 apk best settings
      -fs 21 apk system requirements
      -fs 21 apk how to install
      -fs 21 apk tips and tricks
      -fs 21 apk features
      -fs 21 apk review
      -fs 21 apk size
      -fs 21 apk old version
      -fs 21 apk beta version
      -fs 21 apk online multiplayer
      -fs 21 apk custom maps
      -fs 21 apk vehicles and tools
      -fs 21 apk animals and crops
      -fs 21 apk weather and seasons
      -fs 21 apk missions and challenges
      -fs 21 apk global market
      -fs 21 apk career mode
      -fs 21 apk sandbox mode
      -fs 21 apk controller support
      -fs 21 apk sound effects and music
      -fs 21 apk bugs and glitches
      -fs 21 apk ratings and feedbacks
      -fs 21 apk alternatives and competitors

      -
        -
      • Plan ahead and choose the crops that suit your climate and soil conditions.
      • -
      • Use the right machines for the right tasks and upgrade them when possible.
      • -
      • Keep an eye on your fuel, water, and fertilizer levels and refill them when needed.
      • -
      • Sell your products at the best prices and invest in new fields and equipment.
      • -
      • Take care of your animals and feed them regularly.
      • -
      -

      What are the alternatives to FS 21 APK?

      -

      If you are looking for other farming simulator games for your Android device, you have plenty of options to choose from. Some of the popular ones are:

      -

      Other farming simulator games for Android devices

      - - - - - - - -
      NameDescription
      Farming Simulator 20A game by GIANTS Software that lets you farm in North America with over 100 vehicles and tools from famous brands like John Deere, Case IH, New Holland, etc.
      FarmVille 2: Country EscapeA game by Zynga that lets you build your own farm, join a co-op, trade with other players, and explore new areas.
      Farm Town: Happy Farming DayA game by Foranj that lets you grow crops, fruits, vegetables, flowers, raise animals, fish, cook, craft, and decorate your farm.
      Farm Frenzy Free: Time Management GameA game by HeroCraft Ltd. that lets you run your own farm, produce goods, sell them in the market, and fend off bears.
      Farm Story 2A game by Storm8 Studios that lets you harvest crops, raise animals, mine for gems, bake pies, make ice cream, and more.
      -

      Comparison of FS 21 APK with other games

      -

      To help you decide which farming simulator game is best for you, here is a comparison of FS 21 APK with other games based on some criteria:

      - - - - - -< -
      CriteriaFS 21 APKFarming Simulator 20FarmVille 2: Country EscapeFarm Town: Happy Farming DayFarm Frenzy Free: Time Management GameFarm Story 2
      Graphics qualityHighHighMediumLowLowMedium
      Gameplay realismHighHighLowLowLowLow
      Variety of crops and animalsMediumMediumHighHighMediumHigh
      Variety of vehicles and toolsHighHighLowLow
      - - - - -< - - - - -< -
      CriteriaFS 21 APKFarming Simulator 20FarmVille 2: Country EscapeFarm Town: Happy Farming DayFarm Frenzy Free: Time Management GameFarm Story 2
      Graphics qualityHighHighMediumLowLowMedium
      Gameplay realismHighHighLowLowLowLow
      Variety of crops and animalsMediumMediumHighHighMediumHigh
      Variety of vehicles and toolsHighHighLowLow -
      CriteriaFS 21 APKFarming Simulator 20FarmVille 2: Country EscapeFarm Town: Happy Farming DayFarm Frenzy Free: Time Management GameFarm Story 2
      Graphics qualityHighHighMediumLowLowMedium
      Gameplay realismHighHighLowLowLowLow
      Variety of crops and animalsMediumMediumHighHighMediumHigh
      Variety of vehicles and toolsHighH igh - - - -< -Variety of vehicles and toolsH igh - - -
      CriteriaFS 21 APKFarming Simulator 20FarmVille 2: Country EscapeFarm Town: Happy Farming DayFarm Frenzy Free: Time Management GameFarm Story 2
      Graphics qualityHighHighMediumLowLowMedium
      Gameplay realismHighHighLowLowLowLow
      Variety of crops and animalsMediumM edium H igh H igh M edium H igh
      H igh L ow L ow L ow L ow
      Game size and compatibility -Sma ll and compatible with most Android devices -Lar ge and compatible with some Android devices -Moderate and compatible with most Android devices -Sma ll and compatible with most Android devices -Sma ll and compatible with most Android devices -Moderate and compatible with most Android devices -
      User rating and reviews -4.1 out of 5 stars with over 10,000 reviews on Google Play Store -4.3 out of 5 stars with over 100,000 reviews on Google Play Store -4.3 out of 5 stars with over 3 million reviews on Google Play Store -4.4 out of 5 stars with over 1 million reviews on Google Play Store -4.2 out of 5 stars with over 300,000 reviews on Google Play Store -4.1 out of 5 stars with over 600,000 reviews on Google Play Store -
      -

      Conclusion

      -

      In conclusion, FS 21 APK is a farming simulator game for Android devices that offers a realistic and immersive farming experience. You can grow and harvest various crops, raise and sell animals, drive and operate different farming machines, and explore an open-world environment. You can also enjoy the benefits of playing this game, such as learning about farming techniques, improving your skills, relaxing in nature, having fun with challenges, and sharing your achievements. You can download and install this game for free from the official website or any other trusted source. You can also compare this game with other farming simulator games based on graphics quality, gameplay realism, variety of crops and animals, variety of vehicles and tools, game size and compatibility, user rating and reviews.

      -

      FAQs

      -

      Here are some frequently asked questions about FS 21 APK:

      -
        -
      1. Q: Is FS 21 APK safe to download and install?
      2. -
      3. A: Yes, FS 21 APK is safe to download and install as long as you get it from the official website or any other trusted source. However, you should always scan the file for viruses before installing it on your device.
      4. -
      5. Q: How can I update FS 21 APK to the latest version?
      6. -
      7. A: You can update FS 21 APK to the latest version by visiting the official website or any other trusted source that provides the download link for the game. You can also check for updates within the game settings.
      8. -
      9. Q: How can I contact the developer of FS 21 APK for feedback or support?
      10. -
      11. A: You can contact the developer of FS 21 APK by sending an email to wardamagegames@gmail.com or by visiting their Facebook page at https://www.facebook.com/WarDamageGames/.
      12. -
      13. Q: How can I play FS 21 APK on PC or iOS devices?
      14. -
      15. A: Unfortunately, FS 21 APK is only available for Android devices at the moment. There is no official version of the game for PC or iOS devices. However, you can try using an Android emulator on your PC or a third-party app store on your iOS device to play the game. However, this is not recommended as it may cause compatibility issues, performance problems, or security risks.
      16. -
      17. Q: How can I get more coins and gems in FS 21 APK?
      18. -
      19. A: You can get more coins and gems in FS 21 APK by completing missions, selling your products, leveling up, watching ads, or using the daily bonus feature. You can also use some cheats or hacks to get unlimited coins and gems, but this is not advised as it may ruin the fun of the game, violate the terms of service, or get your account banned.
      20. -
      -

      I hope you enjoyed reading this article and learned something new about FS 21 APK. If you have any questions or comments, feel free to leave them below. Happy farming!

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download True Skate IPA for Free and Master All the Skateboarding Skills.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download True Skate IPA for Free and Master All the Skateboarding Skills.md deleted file mode 100644 index e8ce5769b9a01256aceede2c69a1cb768f3f71ec..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download True Skate IPA for Free and Master All the Skateboarding Skills.md +++ /dev/null @@ -1,129 +0,0 @@ - -

      True Skate IPA Free Download: How to Enjoy the Ultimate Skateboarding Sim on Your iPhone or iPad

      -

      If you are a fan of skateboarding, you probably have heard of True Skate, one of the most popular and realistic skateboarding games for mobile devices. True Skate lets you experience the thrill and challenge of skating on your fingertips, with stunning graphics, physics-based controls, and authentic skateparks. Whether you want to practice your skills, compete with other players, or just have fun, True Skate has something for everyone.

      -

      true skate ipa free download


      Download Zip >>>>> https://ssurll.com/2uNQIT



      -

      But what if you want to play True Skate on your iPhone or iPad without paying for it? Is there a way to download True Skate IPA for free and enjoy all its features and content? The answer is yes, and in this article, we will show you how. We will also give you some tips and tricks on how to play True Skate and master the tricks. So read on and get ready to skate like a pro.

      -

      What is True Skate and why you should download it

      -

      True Skate is a 3D, sports, skateboarding, and single-player simulation game developed by True Axis for mobile platforms such as Android and iOS. The game was released in 2012 and has since become one of the best-selling and highest-rated skateboarding games on the App Store and Google Play. It has also been featured as the official Street League Skateboarding mobile game.

      -

      True Skate is not your typical arcade-style skateboarding game. It is a game of skill that takes 10 minutes to understand but a lifetime to master. You use your fingers like you would your feet on a real skateboard. You flick the board to make it react exactly how you would expect, and drag your finger on the ground to push. The skateboard reacts instantly as foot and finger feel truly connected whether pushing, popping, flipping, or grinding.

      -

      The game features a realistic physics system that listens for swipe, position, direction, and strength from the player and processes how the skateboard should respond in real-time. So the same flick in two different points of the skateboard will react very differently. Literally any trick is possible with true control of the skateboard, so if you can dream it, you can do it.

      -

      The game also features stunning graphics that create a immersive environment for skating. You can choose from over 20 real-world spots including The Berrics, SPoT, Love Park, MACBA, and Street League Skateboarding Championship Courses from 2012. You can also customize your skater and setup with decks and grips from Santa Cruz, DGK , Plan B, and more. You can also change the color of your wheels, trucks, and bearings.

      -

      true skate ipa download for ios
      -true skate ipa cracked version
      -true skate ipa file free
      -true skate ipa no jailbreak
      -true skate ipa latest update
      -true skate ipa modded apk
      -true skate ipa unlimited money
      -true skate ipa all maps unlocked
      -true skate ipa hack tool
      -true skate ipa cheats codes
      -true skate ipa online multiplayer
      -true skate ipa best tricks
      -true skate ipa realistic physics
      -true skate ipa custom boards
      -true skate ipa street league
      -true skate ipa touch arcade
      -true skate ipa review ratings
      -true skate ipa gameplay videos
      -true skate ipa tips and tricks
      -true skate ipa tutorial guide
      -true skate ipa support forum
      -true skate ipa official website
      -true skate ipa developer contact
      -true skate ipa alternative apps
      -true skate ipa similar games
      -true skate ipa compatible devices
      -true skate ipa system requirements
      -true skate ipa installation instructions
      -true skate ipa troubleshooting issues
      -true skate ipa refund policy
      -true skate ipa privacy policy
      -true skate ipa terms and conditions
      -true skate ipa feedback and suggestions
      -true skate ipa frequently asked questions
      -true skate ipa bug reports and fixes
      -true skate ipa new features and improvements
      -true skate ipa upcoming updates and events
      -true skate ipa fan community and groups
      -true skate ipa merchandise and accessories
      -true skate ipa wallpapers and stickers
      -true skate ipa soundtracks and music
      -true skate ipa awards and achievements
      -true skate ipa trivia and facts
      -true skate ipa history and origins
      -true skate ipa inspiration and influences
      -true skate ipa behind the scenes and secrets
      -true skate ipa fun and entertainment
      -true skate ipa challenge and competition
      -true skate ipa skill and creativity

      -

      True Skate is not just a game, it is a community of skaters who share their passion and creativity. You can record your best tricks and edit them with slow motion, reverse, and frame-by-frame options. You can also upload your videos to the True Skate community or share them on social media. You can also watch other players' videos and learn from their techniques. You can also compete with other players in global leaderboards and events.

      -

      True Skate has received rave reviews from users and critics alike. It has a 4.5-star rating on the App Store and a 4.3-star rating on Google Play. It has also been praised by reputable sources such as TouchArcade, IGN, Pocket Gamer, and AppSpy. Here are some of the comments from satisfied players:

      -
        -
      • "This is hands down the best skateboarding game on any platform. The physics are spot on, the graphics are amazing, and the controls are intuitive and responsive."
      • -
      • "This game is so addictive and fun. I love how you can customize your board and skater, and how you can skate in different locations. The tricks are realistic and challenging."
      • -
      • "This game is a masterpiece. It is the closest thing to real skateboarding on a mobile device. The physics are incredible, the graphics are stunning, and the gameplay is smooth and satisfying."
      • -
      -

      If you love skateboarding or want to learn more about it, True Skate is the game for you. It will give you hours of entertainment and challenge, as well as a sense of accomplishment and creativity. You will feel like you are skating for real, without the risk of injury or damage.

      -

      How to download True Skate IPA for free

      -

      True Skate is a paid game on the App Store, which means you have to pay $1.99 to download it on your iPhone or iPad. However, there is a way to get True Skate IPA for free, which is the file format for iOS apps. By downloading True Skate IPA for free, you can enjoy all the features and content of the game without spending a dime.

      -

      But how do you download True Skate IPA for free? The answer is simple: you need to find a reliable source that offers True Skate IPA for free download. There are many websites that claim to provide free IPA files for various apps and games, but not all of them are trustworthy. Some of them may contain viruses, malware, or spyware that can harm your device or steal your personal information. Some of them may also offer fake or outdated files that won't work on your device.

      -

      To avoid these risks, you need to be careful and selective when choosing a source for downloading True Skate IPA for free. One of the best sources that we recommend is [IPA Library], which is a website that offers a huge collection of free IPA files for iOS apps and games. IPA Library is safe, secure, and updated regularly with new and working files. You can easily find True Skate IPA on IPA Library by using the search function or browsing the categories.

      -

      Before you download True Skate IPA for free, you need to make sure that your device and iOS version are compatible with the game. True Skate requires iOS 10.0 or later and works on iPhone 5s or newer, iPad Air or newer, iPad mini 2 or newer, and iPod touch 6th generation or newer. If your device or iOS version does not meet these requirements, you may not be able to install or run True Skate properly.

      -

      How to install True Skate IPA on your device

      -

      After you download True Skate IPA for free from IPA Library or another source, you need to install it on your device. However, you cannot install True Skate IPA directly from your device's browser or file manager, because it is not an official app from the App Store. You need to use a third-party tool that can sideload IPA files onto your device without jailbreaking it.

      -

      There are two tools that we recommend for sideloading True Skate IPA onto your device: Cydia Impactor and AltStore. Cydia Impactor is a desktop application that can transfer IPA files from your computer to your device via USB cable. AltStore is an alternative app store that can install IPA files from your device's browser or file manager via Wi-Fi.

      -

      Both tools are easy to use and do not require jailbreaking your device. However, they have some limitations and drawbacks that you need to be aware of. For example, Cydia Impactor requires an Apple ID and password to sign the IPA file before installing it on your device. AltStore requires an AltServer app running on your computer to refresh the IPA file every seven days to prevent it from expiring. You can find more information about these tools and their pros and cons on their official websites or online forums.

      -

      To install True Skate IPA on your device using Cydia Impactor, you need to follow these steps:

      -
        -
      1. Download Cydia Impactor from [here] and install it on your computer.
      2. -
      3. Connect your device to your computer via USB cable and launch Cydia Impactor.
      4. -
      5. Drag and drop the True Skate IPA file that you downloaded onto the Cydia Impactor window.
      6. -
      7. Enter your Apple ID and password when prompted. This is only used to sign the IPA file and is not stored by Cydia Impactor or any third party.
      8. -
      9. Wait for Cydia Impactor to install True Skate IPA on your device. You will see a progress bar and a message saying "Complete" when it is done.
      10. -
      11. Disconnect your device from your computer and go to Settings > General > Device Management on your device.
      12. -
      13. Find and tap on the profile that matches your Apple ID and trust it.
      14. -
      15. Go to your home screen and launch True Skate. Enjoy the game!
      16. -
      -

      To install True Skate IPA on your device using AltStore, you need to follow these steps:

      -
        -
      1. Download AltServer from [here] and install it on your computer.
      2. -
      3. Download AltStore from [here] and install it on your device.
      4. -
      5. Connect your device to the same Wi-Fi network as your computer and launch AltServer on your computer.
      6. -
      7. Go to Settings > General > Device Management on your device and trust the AltStore profile.
      8. -
      9. Launch AltStore on your device and sign in with your Apple ID and password. This is only used to sign the IPA file and is not stored by AltStore or any third party.
      10. -
      11. Tap on the + icon at the top left corner of the AltStore app and browse to the True Skate IPA file that you downloaded.
      12. -
      13. Tap on the True Skate IPA file and wait for AltStore to install it on your device. You will see a progress bar and a message saying "Done" when it is done.
      14. -
      15. Go to your home screen and launch True Skate. Enjoy the game!
      16. -
      -

      How to play True Skate and master the tricks

      -

      Now that you have installed True Skate IPA on your device, you are ready to play the game and master the tricks. True Skate is a game that requires practice, patience, and precision, but it is also very rewarding and fun. Here are some tips and tricks to help you get started:

      -
        -
      • The game has two modes: Free Skate and Missions. In Free Skate, you can skate freely in any of the skateparks that you have unlocked or purchased. In Missions, you can complete various challenges and objectives in each skatepark, such as scoring a certain amount of points, performing specific tricks, or following a line. You can earn credits by completing missions, which you can use to buy new skateparks, decks, grips, or wheels.
      • -
      • The game has a tutorial that teaches you the basics of skating, such as pushing, steering, ollieing, flipping, grinding, sliding, and landing. You can access the tutorial by tapping on the menu icon at the top right corner of the screen and selecting Tutorial. You can also watch video tutorials by tapping on the TV icon at the bottom left corner of the screen.
      • -
      • The game uses touch-based controls that mimic real skateboarding movements. You can swipe on the board to make it flip or spin, tap on the tail or nose to pop it up, drag your finger on the ground to push or brake, tilt your device to steer or balance, and double-tap on objects to grind or slide on them. You can also customize the controls by tapping on the menu icon at the top right corner of the screen and selecting Options > Control Options.
      • -
      • The game has a trick list that shows you how to perform various tricks, such as kickflips, heelflips, shuvits, impossibles, tre flips, hardflips, varials, inwards, outwards, 360s, 540s, 720s, big spins, laser flips, late flips, double flips, triple flips , and more. You can access the trick list by tapping on the menu icon at the top right corner of the screen and selecting Trick List. You can also view the trick names and scores by tapping on the eye icon at the bottom right corner of the screen.
      • -
      • The game has a replay editor that allows you to record, edit, and share your best tricks. You can access the replay editor by tapping on the camera icon at the bottom right corner of the screen. You can use the buttons at the bottom of the screen to play, pause, rewind, fast forward, slow motion, reverse, or frame-by-frame your replay. You can also use the slider at the top of the screen to adjust the camera angle, zoom, or focus. You can save your replay by tapping on the disk icon at the top right corner of the screen. You can also upload your replay to the True Skate community or share it on social media by tapping on the share icon at the top left corner of the screen.
      • -
      • The game has a lot of content and features that you can unlock or purchase with credits or real money. You can earn credits by completing missions, watching ads, or buying them with real money. You can use credits to buy new skateparks, decks, grips, wheels, or customizations. You can also buy special packs that include exclusive content and bonuses. You can access the store by tapping on the menu icon at the top right corner of the screen and selecting Store.
      • -
      -

      Conclusion

      -

      True Skate is a game that will make you feel like you are skateboarding for real, without leaving your couch. It is a game that will challenge your skills, creativity, and style, as well as entertain you for hours. It is a game that will let you explore different skateparks, customize your skater and setup, and share your tricks with other players. It is a game that will make you love skateboarding even more.

      -

      If you want to play True Skate on your iPhone or iPad for free, you can download True Skate IPA for free from a reliable source like IPA Library and install it on your device using Cydia Impactor or AltStore. These tools will allow you to sideload True Skate IPA onto your device without jailbreaking it. However, you need to be careful and follow the instructions carefully to avoid any issues or errors.

      -

      True Skate is a game that deserves a try, especially if you are a skateboarding fan or enthusiast. It is a game that will impress you with its realistic physics, graphics, and controls. It is a game that will teach you how to perform various tricks, grinds, slides, and flips. It is a game that will inspire you to skate like a pro.

      -

      So what are you waiting for? Download True Skate IPA for free today and enjoy the ultimate skateboarding sim on your iPhone or iPad.

      -

      FAQs

      -

      Here are some frequently asked questions about True Skate IPA free download:

      -
        -
      1. Q: Is True Skate IPA free download legal?
      2. -
      3. A: True Skate IPA free download is not legal, as it violates the terms and conditions of the App Store and the developer of True Skate. However, it is unlikely that you will face any legal consequences for downloading True Skate IPA for free, as long as you do not distribute or sell it to others.
      4. -
      5. Q: Is True Skate IPA free download safe?
      6. -
      7. A: True Skate IPA free download is safe if you download it from a trusted source like IPA Library and install it using a reputable tool like Cydia Impactor or AltStore. However, you need to be careful and avoid downloading from untrusted sources or using unknown tools, as they may contain viruses, malware, or spyware that can harm your device or steal your personal information.
      8. -
      9. Q: Is True Skate IPA free download compatible with my device and iOS version?
      10. -
      11. A: True Skate IPA free download is compatible with iOS 10.0 or later and works on iPhone 5s or newer, iPad Air or newer, iPad mini 2 or newer , and iPod touch 6th generation or newer. If your device or iOS version does not meet these requirements, you may not be able to install or run True Skate properly.
      12. -
      13. Q: How can I update True Skate IPA free download?
      14. -
      15. A: True Skate IPA free download cannot be updated automatically or manually from the App Store, as it is not an official app. You need to check the source where you downloaded True Skate IPA for free for any updates and download the latest version of the IPA file. Then, you need to uninstall the old version of True Skate from your device and install the new version using Cydia Impactor or AltStore.
      16. -
      17. Q: How can I delete True Skate IPA free download?
      18. -
      19. A: You can delete True Skate IPA free download from your device by following the same steps as deleting any other app. You can either tap and hold on the True Skate icon on your home screen and select Delete App, or go to Settings > General > iPhone Storage and find True Skate and tap on Delete App.
      20. -

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/sklearn-docs/plot-k-means-digits/README.md b/spaces/sklearn-docs/plot-k-means-digits/README.md deleted file mode 100644 index 934967afc1f8afe17e222307a686589582d0926b..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/plot-k-means-digits/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Plot K Means Digits -emoji: 🐢 -colorFrom: pink -colorTo: pink -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/skura/sk-06-SL-AI-Image-Music-Video-UI-UX-URL/README.md b/spaces/skura/sk-06-SL-AI-Image-Music-Video-UI-UX-URL/README.md deleted file mode 100644 index 521e5580485d75cdb302125e549d09d43b7d9e31..0000000000000000000000000000000000000000 --- a/spaces/skura/sk-06-SL-AI-Image-Music-Video-UI-UX-URL/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Sk 06 SL AI Image Music Video UI UX URL -emoji: 🔥 -colorFrom: red -colorTo: green -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/skyler36237/vits-uma-genshin-honkai/models.py b/spaces/skyler36237/vits-uma-genshin-honkai/models.py deleted file mode 100644 index 52e15d1b9775038fd6e82b2efe6f95f51c66802d..0000000000000000000000000000000000000000 --- a/spaces/skyler36237/vits-uma-genshin-honkai/models.py +++ /dev/null @@ -1,534 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - device = next(self.parameters()).device # 获取模型所在的设备 - x, m_p, logs_p, x_mask = self.enc_p(x.to(device), x_lengths.to(device)) - if self.n_speakers > 0: - g = self.emb_g(sid.to(device)).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." - g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/society-ethics/model-card-regulatory-check/tests/cards/microsoft___layoutlmv3-base.md b/spaces/society-ethics/model-card-regulatory-check/tests/cards/microsoft___layoutlmv3-base.md deleted file mode 100644 index 9be8565124925f706475230a1a9795f86732c8d2..0000000000000000000000000000000000000000 --- a/spaces/society-ethics/model-card-regulatory-check/tests/cards/microsoft___layoutlmv3-base.md +++ /dev/null @@ -1,29 +0,0 @@ -# LayoutLMv3 - -[Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/) | [GitHub](https://aka.ms/layoutlmv3) - -## Model description - -LayoutLMv3 is a pre-trained multimodal Transformer for Document AI with unified text and image masking. The simple unified architecture and training objectives make LayoutLMv3 a general-purpose pre-trained model. For example, LayoutLMv3 can be fine-tuned for both text-centric tasks, including form understanding, receipt understanding, and document visual question answering, and image-centric tasks such as document image classification and document layout analysis. - -[LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) -Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei, ACM Multimedia 2022. - -## Citation - -If you find LayoutLM useful in your research, please cite the following paper: - -``` -@inproceedings{huang2022layoutlmv3, - author={Yupan Huang and Tengchao Lv and Lei Cui and Yutong Lu and Furu Wei}, - title={LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking}, - booktitle={Proceedings of the 30th ACM International Conference on Multimedia}, - year={2022} -} -``` - -## License - -The content of this project itself is licensed under the [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). -Portions of the source code are based on the [transformers](https://github.com/huggingface/transformers) project. -[Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct) \ No newline at end of file diff --git a/spaces/softcatala/comparativa-tts-catala/app.py b/spaces/softcatala/comparativa-tts-catala/app.py deleted file mode 100644 index 2f1e71fb64d04a5d72b8c569e17d3a8af496f3cf..0000000000000000000000000000000000000000 --- a/spaces/softcatala/comparativa-tts-catala/app.py +++ /dev/null @@ -1,147 +0,0 @@ -import tempfile -import gradio as gr -import os -from TTS.utils.synthesizer import Synthesizer -from espeak_phonemizer import Phonemizer -from engine import Piper -from festival import festival_synthesize -from mms import MMS - -MAX_TXT_LEN = 325 - -fonemitzador = Phonemizer("ca") - -def carrega_bsc(): - model_path = os.getcwd() + "/models/bsc/best_model.pth" - config_path = os.getcwd() + "/models/bsc/config.json" - speakers_file_path = os.getcwd() + "/models/bsc/speakers.pth" - vocoder_path = None - vocoder_config_path = None - - synthesizer = Synthesizer( - model_path, config_path, speakers_file_path, None, vocoder_path, vocoder_config_path, - ) - - return synthesizer - -def carrega_collectivat(): - model_path = os.getcwd() + "/models/collectivat/fast-speech_best_model.pth" - config_path = os.getcwd() + "/models/collectivat/fast-speech_config.json" - vocoder_path = os.getcwd() + "/models/collectivat/ljspeech--hifigan_v2_model_file.pth" - vocoder_config_path = os.getcwd() + "/models/collectivat/ljspeech--hifigan_v2_config.json" - synthesizer = Synthesizer( - model_path, config_path, None, None, vocoder_path, vocoder_config_path - ) - - return synthesizer - -def carrega_piper(): - return Piper(os.getcwd() + "/models/piper/ca-upc_ona-x-low.onnx") - -def carrega_mms(): - return MMS(os.getcwd() + "/models/mms") - - -model_bsc = carrega_bsc() -SPEAKERS = model_bsc.tts_model.speaker_manager.speaker_names - -model_collectivat = carrega_collectivat() - -model_piper = carrega_piper() - -model_mms = carrega_mms() - -request_count = 0 - -def tts(text, festival_voice, speaker_idx): - if len(text) > MAX_TXT_LEN: - text = text[:MAX_TXT_LEN] - print(f"Input text was cutoff since it went over the {MAX_TXT_LEN} character limit.") - print(text) - - # synthesize - wav_bsc = model_bsc.tts(text, speaker_idx) - wav_coll = model_collectivat.tts(text) - wav_piper = model_piper.synthesize(text) - - fp_bsc = "" - with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as fp: - model_bsc.save_wav(wav_bsc, fp) - fp_bsc = fp.name - - fp_coll = "" - with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as fp: - model_collectivat.save_wav(wav_coll, fp) - fp_coll = fp.name - - fp_piper = "" - with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as fp: - fp.write(wav_piper) - fp_piper = fp.name - - fp_mms = "" - with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as fp: - model_mms.synthesize(fp.name, text) - fp_mms = fp.name - - fonemes = fonemitzador.phonemize(text, keep_clause_breakers=True) - - fp_festival = festival_synthesize(text, festival_voice) - - global request_count - request_count += 1 - print(f"Requests: {request_count}") - return fonemes, fp_festival, fp_bsc, fp_coll, fp_piper, fp_mms - - -description=""" -Amb aquesta aplicació podeu sintetitzar text a veu amb els últims models neuronals lliures pel català i amb el motor Festival. - -1. Model multi-parlant VITS entrenat pel BSC (Projecte Aina) [enllaç](https://huggingface.co/projecte-aina/tts-ca-coqui-vits-multispeaker) -2. Model Fastspeech entrenat per Col·lectivat [enllaç](https://github.com/CollectivaT-dev/TTS-API) -3. Model VITS entrenat per Piper/Home Assistant [enllaç](https://github.com/rhasspy/piper) -3. Model VITS entrenat per Meta (llicència CC-BY-NC) [enllaç](https://github.com/facebookresearch/fairseq/tree/main/examples/mms) - -El primer model ha estat entrenat amb totes les veus de FestCAT, els talls de Common Voice 8 i un altre corpus pel que conté moltes veus de qualitat variable. La veu d'Ona està seleccionada per defecte per la comparativa però podeu provar les altres. -Els models 2 i 3 han estat entrenats amb la veu d'Ona de FestCAT. -El model 4, anomenat MMS, de Meta (Facebook) ha estat entrenat a partir de dades d'un [audiollibre](http://live.bible.is/bible/CATBSS/LUK/1) de la Bíblia - -Aquesta aplicació fa servir l'últim estat de l'espeak millorat per Carme Armentano del BSC -https://github.com/projecte-aina/espeak-ng - -NOTA: El model de col·lectivat treballa amb grafemes pel que no fa servir espeak com a fonemitzador. Festival conté les seves pròpies normes fonètiques. -""" -article= "" - -iface = gr.Interface( - fn=tts, - inputs=[ - gr.Textbox( - label="Text", - value="L'Èlia i l'Alí a l'aula. L'oli i l'ou. Lulú olorava la lila.", - ), - gr.Dropdown(label="Parlant del motor Festival", choices=["ona", "pau"], value="ona"), - gr.Dropdown(label="Parlant del model VITS multi-parlant del BSC", choices=SPEAKERS, value="ona") - ], - outputs=[ - gr.Markdown(label="Fonemes"), - gr.Audio(label="Festival",type="filepath"), - gr.Audio(label="BSC VITS",type="filepath"), - gr.Audio(label="Collectivat Fastspeech",type="filepath"), - gr.Audio(label="Piper VITS",type="filepath"), - gr.Audio(label="Meta MMS VITS",type="filepath") - ], - title="Comparativa de síntesi lliure en català️", - description=description, - article=article, - allow_flagging="never", - layout="vertical", - live=False, - examples=[ - ["Duc pà sec al sac, m'assec on sóc i el suco amb suc", "ona", "ona"], - ["Un plat pla blanc, ple de pebre negre n’era. Un plat blanc pla, ple de pebre negre està", "ona", "ona"], - ["Visc al bosc i busco vesc i visc del vesc que busco al bosc", "ona", "ona"], - ["Una polla xica, pica, pellarica, camatorta i becarica va tenir sis polls xics, pics, pellarics, camacurts i becarics. Si la polla no hagués sigut xica, pica, pellarica, camatorta i becarica, els sis polls no haurien sigut xics, pics, pellarics, camacurts i becarics.", "ona", "ona"] - ] -) -iface.launch(server_name="0.0.0.0", server_port=7860) diff --git a/spaces/sparanoid/milky-green-svc/README.md b/spaces/sparanoid/milky-green-svc/README.md deleted file mode 100644 index b930db4589cf372ef4894ccda03df0c389cda9ca..0000000000000000000000000000000000000000 --- a/spaces/sparanoid/milky-green-svc/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Milky Green SOVITS -emoji: 🍵 -colorFrom: cyan -colorTo: green -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sqc1729/bingi/src/components/ui/icons.tsx b/spaces/sqc1729/bingi/src/components/ui/icons.tsx deleted file mode 100644 index 742b489b50437c5b64c86082f2ebc712eeb6a2b0..0000000000000000000000000000000000000000 --- a/spaces/sqc1729/bingi/src/components/ui/icons.tsx +++ /dev/null @@ -1,504 +0,0 @@ -'use client' - -import * as React from 'react' - -import { cn } from '@/lib/utils' - -function IconNextChat({ - className, - inverted, - ...props -}: React.ComponentProps<'svg'> & { inverted?: boolean }) { - const id = React.useId() - - return ( - - - - - - - - - - - - - - - - - - - - - - ) -} - -function IconOpenAI({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - OpenAI icon - - - ) -} - -function IconGitHub({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - GitHub - - - ) -} - -function IconSeparator({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - ) -} - -function IconArrowDown({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconArrowRight({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconUser({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconPlus({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconArrowElbow({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSpinner({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMessage({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconTrash({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMore({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconRefresh({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconStop({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSidebar({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMoon({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSun({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconCopy({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconCheck({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconDownload({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconClose({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconEdit({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconShare({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconUsers({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconExternalLink({ - className, - ...props -}: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconChevronUpDown({ - className, - ...props -}: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -export { - IconEdit, - IconNextChat, - IconOpenAI, - IconGitHub, - IconSeparator, - IconArrowDown, - IconArrowRight, - IconUser, - IconPlus, - IconArrowElbow, - IconSpinner, - IconMessage, - IconTrash, - IconMore, - IconRefresh, - IconStop, - IconSidebar, - IconMoon, - IconSun, - IconCopy, - IconCheck, - IconDownload, - IconClose, - IconShare, - IconUsers, - IconExternalLink, - IconChevronUpDown -} diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/textless_nlp/gslm/unit2speech/README.md b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/textless_nlp/gslm/unit2speech/README.md deleted file mode 100644 index 57104230655c7c517d25904e634c53b6159ee60f..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/textless_nlp/gslm/unit2speech/README.md +++ /dev/null @@ -1,42 +0,0 @@ -# Unit to Speech Model (unit2speech) - -Unit to speech model is modified Tacotron2 model that learns to synthesize speech from discrete speech units. All models are trained on quantized [LJSpeech](https://keithito.com/LJ-Speech-Dataset/). - -Upstream Units | Download Link -|-|- -Log Mel Filterbank + KM50 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/tts_km50/tts_checkpoint_best.pt) -Log Mel Filterbank + KM100 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/tts_km100/tts_checkpoint_best.pt) -Log Mel Filterbank + KM200 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/tts_km200/tts_checkpoint_best.pt) -Log Mel Filterbank + KM500 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/tts_km500/tts_checkpoint_best.pt) -Modified CPC + KM50 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/tts_km50/tts_checkpoint_best.pt) -Modified CPC + KM100 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/tts_km100/tts_checkpoint_best.pt) -Modified CPC + KM200 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/tts_km200/tts_checkpoint_best.pt) -Modified CPC + KM500 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/tts_km500/tts_checkpoint_best.pt) -HuBERT Base + KM50 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/tts_km50/tts_checkpoint_best.pt) -HuBERT Base + KM100 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/tts_km100/tts_checkpoint_best.pt) -HuBERT Base + KM200 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/tts_km200/tts_checkpoint_best.pt) -HuBERT Base + KM500 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/tts_km500/tts_checkpoint_best.pt) -wav2vec 2.0 Large + KM50 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/tts_km50/tts_checkpoint_best.pt) -wav2vec 2.0 Large + KM100 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/tts_km100/tts_checkpoint_best.pt) -wav2vec 2.0 Large + KM200 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/tts_km200/tts_checkpoint_best.pt) -wav2vec 2.0 Large + KM500 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/tts_km500/tts_checkpoint_best.pt) - -## Run inference using a unit2speech model -* Install librosa, unidecode and inflect using `pip install librosa, unidecode, inflect` -* Download [Waveglow checkpoint](https://dl.fbaipublicfiles.com/textless_nlp/gslm/waveglow_256channels_new.pt). This is the vocoder. - -Sample commnd to run inference using trained unit2speech models. Please note that the quantized audio to synthesized should be using the same units as the unit2speech model was trained with. -``` -FAIRSEQ_ROOT= -TTS_MODEL_PATH= -QUANTIZED_UNIT_PATH= -OUT_DIR= -WAVEGLOW_PATH= - -PYTHONPATH=${FAIRSEQ_ROOT}:${FAIRSEQ_ROOT}/examples/textless_nlp/gslm/unit2speech python ${FAIRSEQ_ROOT}/examples/textless_nlp/gslm/unit2speech/synthesize_audio_from_units.py \ - --tts_model_path $TTS_MODEL_PATH \ - --quantized_unit_path $QUANTIZED_UNIT_PATH \ - --out_audio_dir $OUT_DIR \ - --waveglow_path $WAVEGLOW_PATH \ - --max_decoder_steps 2000 -``` \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Battlefield 3 Game File Part 18.rar 238 Mb.rar.md b/spaces/stomexserde/gpt4-ui/Examples/Battlefield 3 Game File Part 18.rar 238 Mb.rar.md deleted file mode 100644 index c9f118b2bedf9ff45ade7502d1eab6070dec37e2..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Battlefield 3 Game File Part 18.rar 238 Mb.rar.md +++ /dev/null @@ -1,23 +0,0 @@ -
      -

      Battlefield 3: How to Download and Install the Game on PC

      -

      Battlefield 3 is a first-person shooter video game that was released in 2011 by Electronic Arts. It is the sequel to Battlefield 2 and the eleventh installment in the Battlefield franchise. The game features a single-player campaign, a co-operative mode, and a multiplayer mode with up to 64 players on PC. The game also introduces the Battlelog, a free social service that allows players to communicate, track their stats, and join games with friends.

      -

      battlefield 3 game file part 18.rar 238 mb.rar


      Download File ►►► https://urlgoal.com/2uI8sl



      -

      If you want to play Battlefield 3 on your PC, you will need to download and install the game files. There are different ways to do this, depending on where you bought the game from. Here are some of the options:

      -
        -
      • If you bought the game from Steam, you can download and install the game through the Steam client. You will need to log in to your Steam account, go to your library, find Battlefield 3, and click on "Install". The Steam client will automatically download and install the game files for you.
      • -
      • If you bought the game from Origin, you can download and install the game through the Origin client. You will need to log in to your Origin account, go to your library, find Battlefield 3, and click on "Download". The Origin client will automatically download and install the game files for you.
      • -
      • If you bought the game from another online store or have a physical copy of the game, you will need to download and install the game files manually. You will need to find the game file part 18.rar 238 mb.rar, which is one of the compressed files that contain the game data. You will also need to download and install a program that can extract RAR files, such as WinRAR or 7-Zip. Once you have downloaded and installed the program, you can right-click on the game file part 18.rar 238 mb.rar and select "Extract here" or "Extract to folder". This will create a folder with the extracted game files. You will need to repeat this process for all the other RAR files that contain the game data. Once you have extracted all the game files, you can run the setup.exe file inside the folder and follow the instructions to install the game.
      • -
      -

      After you have downloaded and installed the game files, you can launch Battlefield 3 from your desktop shortcut or from your Steam or Origin library. You will need to log in to your EA account and activate your product key if you haven't done so already. You will also need to update your game to the latest version if there are any available patches. Then, you can enjoy playing Battlefield 3 on your PC!

      - -

      Tips for Playing Battlefield 3

      -

      Battlefield 3 is a game that requires teamwork, strategy, and skill to succeed. Whether you are playing the single-player campaign, the co-operative mode, or the multiplayer mode, you will need to know how to use your weapons, vehicles, gadgets, and classes effectively. Here are some tips that can help you improve your performance and have more fun in Battlefield 3.

      -

      -

      Always Be Spotting

      -

      This is the most important tip for Battlefield 3, because it can make a huge difference for your team. By highlighting an enemy and pressing Back on Xbox 360, Select on PS3, or Q on PC, you will mark the enemy with an orange triangle for your entire team. This will allow your teammates to see the enemy's location, movement, and health status. Spotting can also help you earn more points, as you will get assists for every kill that your teammates make on your spotted enemies. Spotting can also reveal enemy vehicles, equipment, and explosives. You should spot every enemy you see, even if you are not going to engage them yourself. Spotting can save your life and your team's life.

      -

      Know Your Role

      -

      Battlefield 3 has four classes: Assault, Support, Engineer, and Recon. Each class has its own specialties, weapons, and gadgets that can help you and your team in different situations. You should choose a class that suits your playstyle and the map you are playing on. You should also be aware of what your class can do and what it cannot do. For example, the Assault class can heal and revive teammates with the Medkit and Defibrillator, but it cannot repair vehicles or destroy enemy armor. The Support class can resupply ammo and suppress enemies with the Ammo Box and Light Machine Guns, but it cannot snipe or spot enemies from afar. The Engineer class can repair vehicles and damage enemy armor with the Repair Tool and Rocket Launchers, but it cannot heal or revive teammates or provide ammo. The Recon class can snipe and spot enemies from afar with the Sniper Rifle and MAV (Micro Air Vehicle), but it cannot resupply or repair anything or engage in close combat effectively.

      -

      Forget About Kill/Death Ratios

      -

      Battlefield 3 is not a game where kills are everything. It is a game where objectives are everything. Whether you are playing Rush, Conquest, Team Deathmatch, or any other mode, you should always focus on completing the objectives rather than getting kills. Objectives can be capturing flags, arming or defusing M-COM stations, destroying vehicles, or simply staying alive. Completing objectives will earn you more points than kills, and will also help your team win the match. Kills are important, but they are not the main goal of the game. You should not worry about dying too much or having a low kill/death ratio. You should worry about helping your team and having fun.

      81aa517590
      -
      -
      \ No newline at end of file diff --git a/spaces/studiobrn/SplitTrack/audiocraft/models/builders.py b/spaces/studiobrn/SplitTrack/audiocraft/models/builders.py deleted file mode 100644 index 77ee5f96fea2e3c9e475fe961bc1a5ee473ed8eb..0000000000000000000000000000000000000000 --- a/spaces/studiobrn/SplitTrack/audiocraft/models/builders.py +++ /dev/null @@ -1,218 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -All the functions to build the relevant models and modules -from the Hydra config. -""" - -import typing as tp -import warnings - -import audiocraft -import omegaconf -import torch - -from .encodec import CompressionModel, EncodecModel, FlattenedCompressionModel # noqa -from .lm import LMModel -from ..modules.codebooks_patterns import ( - CodebooksPatternProvider, - DelayedPatternProvider, - ParallelPatternProvider, - UnrolledPatternProvider, - VALLEPattern, - MusicLMPattern, -) -from ..modules.conditioners import ( - BaseConditioner, - ConditioningProvider, - LUTConditioner, - T5Conditioner, - ConditionFuser, - ChromaStemConditioner, -) -from .. import quantization as qt -from ..utils.utils import dict_from_config - - -def get_quantizer(quantizer: str, cfg: omegaconf.DictConfig, dimension: int) -> qt.BaseQuantizer: - klass = { - 'no_quant': qt.DummyQuantizer, - 'rvq': qt.ResidualVectorQuantizer - }[quantizer] - kwargs = dict_from_config(getattr(cfg, quantizer)) - if quantizer != 'no_quant': - kwargs['dimension'] = dimension - return klass(**kwargs) - - -def get_encodec_autoencoder(encoder_name: str, cfg: omegaconf.DictConfig): - if encoder_name == 'seanet': - kwargs = dict_from_config(getattr(cfg, 'seanet')) - encoder_override_kwargs = kwargs.pop('encoder') - decoder_override_kwargs = kwargs.pop('decoder') - encoder_kwargs = {**kwargs, **encoder_override_kwargs} - decoder_kwargs = {**kwargs, **decoder_override_kwargs} - encoder = audiocraft.modules.SEANetEncoder(**encoder_kwargs) - decoder = audiocraft.modules.SEANetDecoder(**decoder_kwargs) - return encoder, decoder - else: - raise KeyError(f'Unexpected compression model {cfg.compression_model}') - - -def get_compression_model(cfg: omegaconf.DictConfig) -> CompressionModel: - """Instantiate a compression model. - """ - if cfg.compression_model == 'encodec': - kwargs = dict_from_config(getattr(cfg, 'encodec')) - encoder_name = kwargs.pop('autoencoder') - quantizer_name = kwargs.pop('quantizer') - encoder, decoder = get_encodec_autoencoder(encoder_name, cfg) - quantizer = get_quantizer(quantizer_name, cfg, encoder.dimension) - frame_rate = kwargs['sample_rate'] // encoder.hop_length - renormalize = kwargs.pop('renormalize', None) - renorm = kwargs.pop('renorm') - if renormalize is None: - renormalize = renorm is not None - warnings.warn("You are using a deprecated EnCodec model. Please migrate to new renormalization.") - return EncodecModel(encoder, decoder, quantizer, - frame_rate=frame_rate, renormalize=renormalize, **kwargs).to(cfg.device) - else: - raise KeyError(f'Unexpected compression model {cfg.compression_model}') - - -def get_lm_model(cfg: omegaconf.DictConfig) -> LMModel: - """Instantiate a transformer LM. - """ - if cfg.lm_model == 'transformer_lm': - kwargs = dict_from_config(getattr(cfg, 'transformer_lm')) - n_q = kwargs['n_q'] - q_modeling = kwargs.pop('q_modeling', None) - codebooks_pattern_cfg = getattr(cfg, 'codebooks_pattern') - attribute_dropout = dict_from_config(getattr(cfg, 'attribute_dropout')) - cls_free_guidance = dict_from_config(getattr(cfg, 'classifier_free_guidance')) - cfg_prob, cfg_coef = cls_free_guidance["training_dropout"], cls_free_guidance["inference_coef"] - fuser = get_condition_fuser(cfg) - condition_provider = get_conditioner_provider(kwargs["dim"], cfg).to(cfg.device) - if len(fuser.fuse2cond['cross']) > 0: # enforce cross-att programatically - kwargs['cross_attention'] = True - if codebooks_pattern_cfg.modeling is None: - assert q_modeling is not None, \ - 'LM model should either have a codebook pattern defined or transformer_lm.q_modeling' - codebooks_pattern_cfg = omegaconf.OmegaConf.create( - {'modeling': q_modeling, 'delay': {'delays': list(range(n_q))}} - ) - pattern_provider = get_codebooks_pattern_provider(n_q, codebooks_pattern_cfg) - return LMModel( - pattern_provider=pattern_provider, - condition_provider=condition_provider, - fuser=fuser, - cfg_dropout=cfg_prob, - cfg_coef=cfg_coef, - attribute_dropout=attribute_dropout, - dtype=getattr(torch, cfg.dtype), - device=cfg.device, - **kwargs - ).to(cfg.device) - else: - raise KeyError(f'Unexpected LM model {cfg.lm_model}') - - -def get_conditioner_provider(output_dim: int, cfg: omegaconf.DictConfig) -> ConditioningProvider: - """Instantiate a conditioning model. - """ - device = cfg.device - duration = cfg.dataset.segment_duration - cfg = getattr(cfg, "conditioners") - cfg = omegaconf.OmegaConf.create({}) if cfg is None else cfg - conditioners: tp.Dict[str, BaseConditioner] = {} - with omegaconf.open_dict(cfg): - condition_provider_args = cfg.pop('args', {}) - for cond, cond_cfg in cfg.items(): - model_type = cond_cfg["model"] - model_args = cond_cfg[model_type] - if model_type == "t5": - conditioners[str(cond)] = T5Conditioner(output_dim=output_dim, device=device, **model_args) - elif model_type == "lut": - conditioners[str(cond)] = LUTConditioner(output_dim=output_dim, **model_args) - elif model_type == "chroma_stem": - model_args.pop('cache_path', None) - conditioners[str(cond)] = ChromaStemConditioner( - output_dim=output_dim, - duration=duration, - device=device, - **model_args - ) - else: - raise ValueError(f"unrecognized conditioning model: {model_type}") - conditioner = ConditioningProvider(conditioners, device=device, **condition_provider_args) - return conditioner - - -def get_condition_fuser(cfg: omegaconf.DictConfig) -> ConditionFuser: - """Instantiate a condition fuser object. - """ - fuser_cfg = getattr(cfg, "fuser") - fuser_methods = ["sum", "cross", "prepend", "input_interpolate"] - fuse2cond = {k: fuser_cfg[k] for k in fuser_methods} - kwargs = {k: v for k, v in fuser_cfg.items() if k not in fuser_methods} - fuser = ConditionFuser(fuse2cond=fuse2cond, **kwargs) - return fuser - - -def get_codebooks_pattern_provider(n_q: int, cfg: omegaconf.DictConfig) -> CodebooksPatternProvider: - """Instantiate a codebooks pattern provider object. - """ - pattern_providers = { - 'parallel': ParallelPatternProvider, - 'delay': DelayedPatternProvider, - 'unroll': UnrolledPatternProvider, - 'valle': VALLEPattern, - 'musiclm': MusicLMPattern, - } - name = cfg.modeling - kwargs = dict_from_config(cfg.get(name)) if hasattr(cfg, name) else {} - klass = pattern_providers[name] - return klass(n_q, **kwargs) - - -def get_debug_compression_model(device='cpu'): - """Instantiate a debug compression model to be used for unit tests. - """ - seanet_kwargs = { - 'n_filters': 4, - 'n_residual_layers': 1, - 'dimension': 32, - 'ratios': [10, 8, 16] # 25 Hz at 32kHz - } - encoder = audiocraft.modules.SEANetEncoder(**seanet_kwargs) - decoder = audiocraft.modules.SEANetDecoder(**seanet_kwargs) - quantizer = qt.ResidualVectorQuantizer(dimension=32, bins=400, n_q=4) - init_x = torch.randn(8, 32, 128) - quantizer(init_x, 1) # initialize kmeans etc. - compression_model = EncodecModel( - encoder, decoder, quantizer, - frame_rate=25, sample_rate=32000, channels=1).to(device) - return compression_model.eval() - - -def get_debug_lm_model(device='cpu'): - """Instantiate a debug LM to be used for unit tests. - """ - pattern = DelayedPatternProvider(n_q=4) - dim = 16 - providers = { - 'description': LUTConditioner(n_bins=128, dim=dim, output_dim=dim, tokenizer="whitespace"), - } - condition_provider = ConditioningProvider(providers) - fuser = ConditionFuser( - {'cross': ['description'], 'prepend': [], - 'sum': [], 'input_interpolate': []}) - lm = LMModel( - pattern, condition_provider, fuser, - n_q=4, card=400, dim=dim, num_heads=4, custom=True, num_layers=2, - cross_attention=True, causal=True) - return lm.to(device).eval() diff --git a/spaces/sub314xxl/zeroscope/share_btn.py b/spaces/sub314xxl/zeroscope/share_btn.py deleted file mode 100644 index e52053dcf5728969d6d51fd75e98fe56573f7ed8..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/zeroscope/share_btn.py +++ /dev/null @@ -1,72 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - - async function getVideoBlobFile(videoEL){ - const res = await fetch(videoEL.src); - const blob = await res.blob(); - const videoId = Date.now() % 200; - const fileName = `vid-zeroscope-${{videoId}}.mp4`; - const videoBlob = new File([blob], fileName, { type: 'video/mp4' }); - console.log(videoBlob); - return videoBlob; - } - - const gradioEl = document.querySelector("gradio-app").shadowRoot || document.querySelector('body > gradio-app'); - const captionTxt = gradioEl.querySelector('#prompt-in textarea').value; - const outputVideo = gradioEl.querySelector('#video-output video'); - - - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - if(!outputVideo){ - return; - }; - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - - - const videoOutFile = await getVideoBlobFile(outputVideo); - const dataOutputVid = await uploadFile(videoOutFile); - - const descriptionMd = ` -#### Prompt: -${captionTxt} - -#### Zeroscope video result: -${dataOutputVid} - -`; - const params = new URLSearchParams({ - title: captionTxt, - description: descriptionMd, - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/fffiloni/zeroscope/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/subhajitmaji/MusicGen/tests/modules/__init__.py b/spaces/subhajitmaji/MusicGen/tests/modules/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/subhajitmaji/MusicGen/tests/modules/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/subhc/Guess-What-Moves/utils/grid.py b/spaces/subhc/Guess-What-Moves/utils/grid.py deleted file mode 100644 index 52d71b8fd2377cf36c210cc3b8641a4484ef45bb..0000000000000000000000000000000000000000 --- a/spaces/subhc/Guess-What-Moves/utils/grid.py +++ /dev/null @@ -1,9 +0,0 @@ -import torch - - -def get_meshgrid(resolution, device): - grid_x, grid_y = torch.meshgrid(torch.arange(resolution[0]).float() / resolution[0], - torch.arange(resolution[1]).float() / resolution[1], indexing='ij') - grid_x = grid_x.to(device) - grid_y = grid_y.to(device) - return grid_x, grid_y diff --git a/spaces/sukh28/toxic_gradio_app/README.md b/spaces/sukh28/toxic_gradio_app/README.md deleted file mode 100644 index f71e1e9ee46c612ea3de44d15d8b9dd9bfdbab34..0000000000000000000000000000000000000000 --- a/spaces/sukh28/toxic_gradio_app/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Toxic Gradio App -emoji: 🦀 -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/supercyx3/nova/README.md b/spaces/supercyx3/nova/README.md deleted file mode 100644 index a0d16c86c995f73ac641d0fc0b20823ada2e38a8..0000000000000000000000000000000000000000 --- a/spaces/supercyx3/nova/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: ChatGPT-Next-Web-Nova -emoji: 🌍 -colorFrom: blue -colorTo: yellow -sdk: docker -pinned: false -license: mit -app_port: 3000 -duplicated_from: dongsiqie/nova ---- -免费key的来源:https://nova-oss.com - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Download JSpy RAT V0.08 Full Version.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Download JSpy RAT V0.08 Full Version.md deleted file mode 100644 index 26afe6fb6a80e104b4b5d63658e51b13fc5ba146..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Download JSpy RAT V0.08 Full Version.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Download jSpy RAT v0.08 Full Version


      Downloadhttps://cinurl.com/2uEYiY



      - -July 29, 2018 - Download jSpy RAT v0.08 full version: Password: EHT Click here to download jSpy rat v0.08 full version. The password is EHT. July 30, 2018 - Download jSpy RAT v0.08 full version: Password: EHT Click here to download jSpy rat v0.08 full version. The password is EHT. August 2, 2018 - Download full version of jSpy RAT v0.08: Password: EHT Click here to download full version of jSpy rat v0.08. The password is EHT. August 3, 2018 - Download full version of jSpy RAT v0.08: Password: EHT Click here to download full version of jSpy rat v0.08. Password - EHT. 8a78ff9644
      -
      -
      -

      diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/National Book Font Free Download BEST.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/National Book Font Free Download BEST.md deleted file mode 100644 index 745d0ae0202be1ae2a16a783c925ef01ff464a25..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/National Book Font Free Download BEST.md +++ /dev/null @@ -1,22 +0,0 @@ -

      National Book Font Free Download


      DOWNLOAD ✫✫✫ https://cinurl.com/2uEYxz



      - -National-Book Typeface. The typeface was made with several goal goals in mind, with much of its distinguishing features being large letters to make the font attractive and easy to read. - -Carey: After you have your idea for a typeface, it is important to have someone to create the typeface for you. Developing a typeface is like the making of any good product. So, when you need to get ideas on how to make your own typeface, you need to turn to the internet. - -You will find numerous resources online with typeface ideas, like these typefaces, which are made by Christian Schwartz. What is your take on the typeface that Christian made here?. Caret (\): Where two or more letters follow each other, the ones on top usually typefaces is in italics. So, if you are going to use a larger font size, you can make the ones below italic. - -When using boldface on italic type, you will want to use a larger font size, which means you will want to use the backslash character, or backward slash, or it can be accessed by CTRL + G. - -Carey: After you have your idea for a typeface, it is important to have someone to create the typeface for you. - -Developing a typeface is like the making of any good product. So, when you need to get ideas on how to make your own typeface, you need to turn to the internet. - -You will find numerous resources online with typeface ideas, like these typefaces, which are made by Christian Schwartz. What is your take on the typeface that Christian made here?. - -Carey: After you have your idea for a typeface, it is important to have someone to create the typeface for you. Caret (\): Where two or more letters follow each other, the ones on top usually typefaces is in italics. So, if you are going to use a larger font size, you can make the ones below italic. - -Carey: After you have your idea for a typeface, it is important to have someone to create the typeface for you. Developing a typeface is like the making 4fefd39f24
      -
      -
      -

      diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Tableau Desktop 2019.3.1 Crack Torrent Product Keys.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Tableau Desktop 2019.3.1 Crack Torrent Product Keys.md deleted file mode 100644 index db6be875575cd360f0bbda71adf84df1989a9797..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Tableau Desktop 2019.3.1 Crack Torrent Product Keys.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Tableau Desktop 2019.3.1 Crack Torrent Product Keys


      Download Filehttps://cinurl.com/2uEYMt



      - -Tableau Desktop 2019.3.1 Crack Torrent Product Keys tableau desktop manage product keys, tableau desktop product key, tableau desktop ... 1fdad05405
      -
      -
      -

      diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/((FULL)) Free Download Service Tool Mp287.zip.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/((FULL)) Free Download Service Tool Mp287.zip.md deleted file mode 100644 index 5eca37444f084d4109af15b0fd87376288e19805..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/((FULL)) Free Download Service Tool Mp287.zip.md +++ /dev/null @@ -1,6 +0,0 @@ -

      free download service tool mp287.zip


      DOWNLOADhttps://urluss.com/2uCFqK



      -
      -When you are done, save the changes and then close the Zip Tool. ... being branded as spyware is actually a Windows Telemetry service deployed by HP, called ... Free Download Driver Printer Canon Pixma MP287 for Windows XP, Vista, ... 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/parallel/registry.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/parallel/registry.py deleted file mode 100644 index a204a07fba10e614223f090d1a57cf9c4d74d4a1..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/parallel/registry.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from torch.nn.parallel import DataParallel, DistributedDataParallel - -from annotator.uniformer.mmcv.utils import Registry - -MODULE_WRAPPERS = Registry('module wrapper') -MODULE_WRAPPERS.register_module(module=DataParallel) -MODULE_WRAPPERS.register_module(module=DistributedDataParallel) diff --git a/spaces/taesiri/ChatGPT-ImageCaptioner/tools/preprocess_imagenet22k.py b/spaces/taesiri/ChatGPT-ImageCaptioner/tools/preprocess_imagenet22k.py deleted file mode 100644 index 6dda56c222a30c7be23fafbdab4be3fe611597e2..0000000000000000000000000000000000000000 --- a/spaces/taesiri/ChatGPT-ImageCaptioner/tools/preprocess_imagenet22k.py +++ /dev/null @@ -1,148 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. - -import os -import numpy as np -import sys - -sys.path.insert(0, 'third_party/CenterNet2/projects/CenterNet2/') -sys.path.insert(0, 'third_party/Deformable-DETR') -from detic.data.tar_dataset import _TarDataset, DiskTarDataset -import pickle -import io -import gzip -import time - - -class _RawTarDataset(object): - - def __init__(self, filename, indexname, preload=False): - self.filename = filename - self.names = [] - self.offsets = [] - - for l in open(indexname): - ll = l.split() - a, b, c = ll[:3] - offset = int(b[:-1]) - if l.endswith('** Block of NULs **\n'): - self.offsets.append(offset) - break - else: - if c.endswith('JPEG'): - self.names.append(c) - self.offsets.append(offset) - else: - # ignore directories - pass - if preload: - self.data = np.memmap(filename, mode='r', dtype='uint8') - else: - self.data = None - - def __len__(self): - return len(self.names) - - def __getitem__(self, idx): - if self.data is None: - self.data = np.memmap(self.filename, mode='r', dtype='uint8') - ofs = self.offsets[idx] * 512 - fsize = 512 * (self.offsets[idx + 1] - self.offsets[idx]) - data = self.data[ofs:ofs + fsize] - - if data[:13].tostring() == '././@LongLink': - data = data[3 * 512:] - else: - data = data[512:] - - # just to make it more fun a few JPEGs are GZIP compressed... - # catch this case - if tuple(data[:2]) == (0x1f, 0x8b): - s = io.StringIO(data.tostring()) - g = gzip.GzipFile(None, 'r', 0, s) - sdata = g.read() - else: - sdata = data.tostring() - return sdata - - - -def preprocess(): - # Follow https://github.com/Alibaba-MIIL/ImageNet21K/blob/main/dataset_preprocessing/processing_script.sh - # Expect 12358684 samples with 11221 classes - # ImageNet folder has 21841 classes (synsets) - - i22kdir = '/datasets01/imagenet-22k/062717/' - i22ktarlogs = '/checkpoint/imisra/datasets/imagenet-22k/tarindex' - class_names_file = '/checkpoint/imisra/datasets/imagenet-22k/words.txt' - - output_dir = '/checkpoint/zhouxy/Datasets/ImageNet/metadata-22k/' - i22knpytarlogs = '/checkpoint/zhouxy/Datasets/ImageNet/metadata-22k/tarindex_npy' - print('Listing dir') - log_files = os.listdir(i22ktarlogs) - log_files = [x for x in log_files if x.endswith(".tarlog")] - log_files.sort() - chunk_datasets = [] - dataset_lens = [] - min_count = 0 - create_npy_tarlogs = True - print('Creating folders') - if create_npy_tarlogs: - os.makedirs(i22knpytarlogs, exist_ok=True) - for log_file in log_files: - syn = log_file.replace(".tarlog", "") - dataset = _RawTarDataset(os.path.join(i22kdir, syn + ".tar"), - os.path.join(i22ktarlogs, syn + ".tarlog"), - preload=False) - names = np.array(dataset.names) - offsets = np.array(dataset.offsets, dtype=np.int64) - np.save(os.path.join(i22knpytarlogs, f"{syn}_names.npy"), names) - np.save(os.path.join(i22knpytarlogs, f"{syn}_offsets.npy"), offsets) - - os.makedirs(output_dir, exist_ok=True) - - start_time = time.time() - for log_file in log_files: - syn = log_file.replace(".tarlog", "") - dataset = _TarDataset(os.path.join(i22kdir, syn + ".tar"), i22knpytarlogs) - # dataset = _RawTarDataset(os.path.join(i22kdir, syn + ".tar"), - # os.path.join(i22ktarlogs, syn + ".tarlog"), - # preload=False) - dataset_lens.append(len(dataset)) - end_time = time.time() - print(f"Time {end_time - start_time}") - - - dataset_lens = np.array(dataset_lens) - dataset_valid = dataset_lens > min_count - - syn2class = {} - with open(class_names_file) as fh: - for line in fh: - line = line.strip().split("\t") - syn2class[line[0]] = line[1] - - tarlog_files = [] - class_names = [] - tar_files = [] - for k in range(len(dataset_valid)): - if not dataset_valid[k]: - continue - syn = log_files[k].replace(".tarlog", "") - tarlog_files.append(os.path.join(i22ktarlogs, syn + ".tarlog")) - tar_files.append(os.path.join(i22kdir, syn + ".tar")) - class_names.append(syn2class[syn]) - - tarlog_files = np.array(tarlog_files) - tar_files = np.array(tar_files) - class_names = np.array(class_names) - print(f"Have {len(class_names)} classes and {dataset_lens[dataset_valid].sum()} samples") - - np.save(os.path.join(output_dir, "tarlog_files.npy"), tarlog_files) - np.save(os.path.join(output_dir, "tar_files.npy"), tar_files) - np.save(os.path.join(output_dir, "class_names.npy"), class_names) - np.save(os.path.join(output_dir, "tar_files.npy"), tar_files) - - -if __name__ == "__main__": - preprocess() diff --git a/spaces/tensorflow/yamnet/app.py b/spaces/tensorflow/yamnet/app.py deleted file mode 100644 index 3c8f006d4f4e3d6faf7ba981e5add94db42a1cff..0000000000000000000000000000000000000000 --- a/spaces/tensorflow/yamnet/app.py +++ /dev/null @@ -1,64 +0,0 @@ -import tensorflow as tf -import tensorflow_hub as hub -import numpy as np -import csv - -import matplotlib.pyplot as plt -from IPython.display import Audio -from scipy.io import wavfile - -import os - -import gradio as gr - -# Load the model. -model = hub.load('https://tfhub.dev/google/yamnet/1') - -# Find the name of the class with the top score when mean-aggregated across frames. -def class_names_from_csv(class_map_csv_text): - """Returns list of class names corresponding to score vector.""" - class_names = [] - with tf.io.gfile.GFile(class_map_csv_text) as csvfile: - reader = csv.DictReader(csvfile) - for row in reader: - class_names.append(row['display_name']) - - return class_names - -class_map_path = model.class_map_path().numpy() -class_names = class_names_from_csv(class_map_path) - - -def ensure_sample_rate(original_sample_rate, waveform, - desired_sample_rate=16000): - """Resample waveform if required.""" - if original_sample_rate != desired_sample_rate: - desired_length = int(round(float(len(waveform)) / - original_sample_rate * desired_sample_rate)) - waveform = scipy.signal.resample(waveform, desired_length) - return desired_sample_rate, waveform - -os.system("wget https://storage.googleapis.com/audioset/miaow_16k.wav") - -def inference(audio): - # wav_file_name = 'speech_whistling2.wav' - wav_file_name = audio - sample_rate, wav_data = wavfile.read(wav_file_name, 'rb') - sample_rate, wav_data = ensure_sample_rate(sample_rate, wav_data) - - waveform = wav_data / tf.int16.max - - # Run the model, check the output. - scores, embeddings, spectrogram = model(waveform) - - scores_np = scores.numpy() - spectrogram_np = spectrogram.numpy() - infered_class = class_names[scores_np.mean(axis=0).argmax()] - - return f'The main sound is: {infered_class}' - -examples=[['miaow_16k.wav']] -title="yamnet" -description="An audio event classifier trained on the AudioSet dataset to predict audio events from the AudioSet ontology." -gr.Interface(inference,gr.inputs.Audio(type="filepath"),"text",examples=examples,title=title,description=description).launch(enable_queue=True) - \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Aktivasi Windows 8 Release Preview Build 8400 12 WORK.md b/spaces/terfces0erbo/CollegeProjectV2/Aktivasi Windows 8 Release Preview Build 8400 12 WORK.md deleted file mode 100644 index ee1903520d4dec9821e353799f095b0aa1ee170b..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Aktivasi Windows 8 Release Preview Build 8400 12 WORK.md +++ /dev/null @@ -1,6 +0,0 @@ - -

      after this problem finally got the message, it takes around 15-20 minutes and it has to be done at least twice for it to go away.
      1. go to control panel –> apps and features –> turn windows features on or off
      1. select "windows 10 features"
      2. in the "turn windows features on or off" dialog box, click the "change" button next to "windows 10 feature (s)"
      2. click "optional updates"
      2. in the "optional updates" dialog box, click "add", and then click the "add/remove features" link
      2. in the "add/remove features" dialog box, click the small arrow next to "microsoft windows", and then click the "move up/down" link, and then type in "microsoft.windows.welcomeredirectui" in the "feature name" box
      3. click "ok" to return to the "change/remove features" dialog box
      3. click "ok" to return to the "windows 10 feature (s)" dialog box
      3. click "ok" to return to the "turn windows features on or off" dialog box
      3. click "ok" to turn the windows 10 feature(s) back on.

      -

      Aktivasi windows 8 release preview build 8400 12


      Download File ->>->>->> https://bytlly.com/2uGjw7



      -

      i have just recently re-installed windows 10 and i keep getting an error message when i try to activate. i first tried moving it to another partition but to no avail. i have already done a clean install of windows as per the guidance in the article i am posting. i didn't use the product key that was on the drive. i did the following in cmd. put in the following line. "activate-windowsfeature -name net-ad-ds-server -disablemodule:windowsfirewallmodule -include:*" <-- already tried both "net-ad-ds-server" and "net-ad-ds-server-*"> -all /featurename: (remove: net-ad-ds-server))<--- but still no luck. <-- tried net-ad-ds-server and net-ad-ds-server-*> c:\windows\features\<---remove: net-ad-ds-server) -all /featurename:net-ad-ds-server<--- still no luck. activating all at the end of the cmd line.. done.<- have been doing for the last 3 days and still no luck. device id's: 00000000-0000-0000-0000-000000000000. i have seen on many other articles that windows 10 has a new problem. i have reinstalled windows 10 which didn't work and then i tried switching to another partition and i still got the same message. i tried the suggestion from the comments and still no go. keep getting a validation error. things to note before you start the process: on the previous version of the 64 bit windows 7, only windows features that only included the net-ad-ds-server module were activated. for x64, in addition, the windows firewall module may be activated. this module is a mandatory prerequisite for installation of any other features, and cannot be deactivated.

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Conceptdraw Pro 10 Full Version [NEW].md b/spaces/terfces0erbo/CollegeProjectV2/Conceptdraw Pro 10 Full Version [NEW].md deleted file mode 100644 index c1e7af636c23ccecdc0928963c5c700e2897e4a5..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Conceptdraw Pro 10 Full Version [NEW].md +++ /dev/null @@ -1,6 +0,0 @@ -

      Conceptdraw Pro 10 Full Version


      DOWNLOAD ===> https://bytlly.com/2uGiQ8



      - - 3cee63e6c2
      -
      -
      -

      diff --git a/spaces/theintuitiveye/modernartstyle/app.py b/spaces/theintuitiveye/modernartstyle/app.py deleted file mode 100644 index 27e7dd35cc869a2ce30feac535c49e3863452454..0000000000000000000000000000000000000000 --- a/spaces/theintuitiveye/modernartstyle/app.py +++ /dev/null @@ -1,137 +0,0 @@ -from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler -import gradio as gr -import torch -from PIL import Image - -model_id = 'theintuitiveye/modernartstyle' -prefix = 'modernartst' - -scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler") - -pipe = StableDiffusionPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -if torch.cuda.is_available(): - pipe = pipe.to("cuda") - pipe_i2i = pipe_i2i.to("cuda") - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - -def inference(prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt="", auto_prefix=False): - - generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None - prompt = f"{prefix} {prompt}" if auto_prefix else prompt - - try: - if img is not None: - return img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None - else: - return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator), None - except Exception as e: - return None, error_str(e) - -def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator): - - result = pipe( - prompt, - negative_prompt = neg_prompt, - num_inference_steps = int(steps), - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -def img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator): - - ratio = min(height / img.height, width / img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS) - result = pipe_i2i( - prompt, - negative_prompt = neg_prompt, - init_image = img, - num_inference_steps = int(steps), - strength = strength, - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -css = """.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem} -""" -with gr.Blocks(css=css) as demo: - gr.HTML( - f""" -
      -
      -

      Modernartstyle

      -
      -

      - Demo for Modernartstyle Stable Diffusion model.
      - {"Add the following tokens to your prompts for the model to work properly: prefix" if prefix else ""} -

      - Running on {"GPU 🔥" if torch.cuda.is_available() else f"CPU 🥶. For faster inference it is recommended to upgrade to GPU in Settings"} after duplicating the space

      - Duplicate Space -
      - """ - ) - with gr.Row(): - - with gr.Column(scale=55): - with gr.Group(): - with gr.Row(): - prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2,placeholder=f"{prefix} [your prompt]").style(container=False) - generate = gr.Button(value="Generate").style(rounded=(False, True, True, False)) - - image_out = gr.Image(height=512) - error_output = gr.Markdown() - - with gr.Column(scale=45): - with gr.Tab("Options"): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") - auto_prefix = gr.Checkbox(label="Prefix styling tokens automatically (modernartst)", value=prefix, visible=prefix) - - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=25, minimum=2, maximum=75, step=1) - - with gr.Row(): - width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8) - - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - with gr.Tab("Image to image"): - with gr.Group(): - image = gr.Image(label="Image", height=256, tool="editor", type="pil") - strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5) - - auto_prefix.change(lambda x: gr.update(placeholder=f"{prefix} [your prompt]" if x else "[Your prompt]"), inputs=auto_prefix, outputs=prompt, queue=False) - - inputs = [prompt, guidance, steps, width, height, seed, image, strength, neg_prompt, auto_prefix] - outputs = [image_out, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs) - - gr.HTML(""" -
      -
      -

      This space was created using SD Space Creator.

      -
      - """) - -demo.queue(concurrency_count=1) -demo.launch() diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/CorelDRAW X8 Free Download The Best Options for Windows 8.1 Users.md b/spaces/tialenAdioni/chat-gpt-api/logs/CorelDRAW X8 Free Download The Best Options for Windows 8.1 Users.md deleted file mode 100644 index d2c694de27ae144851f0cd9e780416fa3e01c79d..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/CorelDRAW X8 Free Download The Best Options for Windows 8.1 Users.md +++ /dev/null @@ -1,26 +0,0 @@ -
      -

      How to Download CorelDRAW X8 Full Version with Crack for Windows 8.1

      -

      If you are looking for a powerful and versatile graphics software that can handle vector illustration, photo editing, page layout, logo design and more, you might be interested in CorelDRAW X8. This is the latest version of CorelDRAW Graphics Suite, a comprehensive suite of applications that can help you create stunning graphics and designs for any purpose. However, before you download CorelDRAW X8 full version with crack for Windows 8.1, you should be aware of the risks and consequences of using pirated software.

      -

      corel draw x8 free download full version with crack for windows 8.1


      Download File > https://urlcod.com/2uK6lP



      -

      What is CorelDRAW X8 Crack?

      -

      A crack is a program or a file that modifies or bypasses the original security features of a software, such as activation, registration or licensing. By using a crack, you can access the full functionality of a software without paying for it or following the terms and conditions of the developer. A CorelDRAW X8 crack is a modified version of CorelDRAW X8 that allows you to use it without purchasing a license or entering a serial number.

      -

      Why You Should Not Use CorelDRAW X8 Crack?

      -

      While it might be tempting to download CorelDRAW X8 full version with crack for Windows 8.1 and save some money, there are many reasons why you should not do so. Here are some of the disadvantages and dangers of using CorelDRAW X8 crack:

      -
        -
      • It is illegal. Using a cracked software is a violation of the intellectual property rights of the developer and the distributor. You can face legal actions, fines or even imprisonment if you are caught using or distributing pirated software.
      • -
      • It is unsafe. Cracked software often contains viruses, malware, spyware or other malicious code that can harm your computer, steal your personal information, damage your files or compromise your security. You can also expose yourself to cyberattacks, identity theft or fraud by using cracked software.
      • -
      • It is unreliable. Cracked software often has bugs, errors, glitches or compatibility issues that can affect its performance and functionality. You can also miss out on the latest updates, patches, features and improvements that the official version offers. You can also lose your technical support, customer service and warranty from the developer if you use cracked software.
      • -
      • It is unethical. Using cracked software is unfair to the developers who spend time, money and effort to create quality software that meets the needs and expectations of the users. By using cracked software, you are depriving them of their rightful income and recognition. You are also hurting the software industry and discouraging innovation and creativity.
      • -
      -

      How to Download CorelDRAW X8 Legally?

      -

      If you want to use CorelDRAW X8 without risking any legal, security or quality issues, you should download it from the official website of Corel Corporation. You can choose from different options depending on your needs and budget:

      -

      -
        -
      • Free trial. You can download a 15-day free trial of CorelDRAW X8 and test its features and performance before you decide to buy it. You can access all the tools and functions of the software without any limitations or watermarks. You can also get online tutorials, tips and tricks to help you get started.
      • -
      • Subscription. You can subscribe to CorelDRAW X8 for a monthly or annual fee and get access to the latest version of the software as well as updates, support and cloud storage. You can also cancel your subscription anytime without any penalty.
      • -
      • Perpetual license. You can buy a one-time perpetual license of CorelDRAW X8 and own it forever without any recurring payments or expiration dates. You can also get updates and support for one year after your purchase.
      • -
      -

      Conclusion

      -

      CorelDRAW X8 is a powerful and versatile graphics software that can help you create amazing graphics and designs for any purpose. However, you should not download CorelDRAW X8 full version with crack for Windows 8.1 because it is illegal, unsafe, unreliable and unethical. Instead, you should download CorelDRAW X8 from the official website of Corel Corporation and choose the option that suits your needs and budget.

      ddb901b051
      -
      -
      \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Cubase Pro 10.5 Crack Torrent Full Version (New).md b/spaces/tialenAdioni/chat-gpt-api/logs/Cubase Pro 10.5 Crack Torrent Full Version (New).md deleted file mode 100644 index e63a6d045ecb0a6abfc0a442835699335b8fafb2..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Cubase Pro 10.5 Crack Torrent Full Version (New).md +++ /dev/null @@ -1,31 +0,0 @@ - -I'll try to create that. -Here is the title and article I generated for you: - -

      Cubase Pro 10.5 Crack Torrent Full Version (New)

      -

      If you are looking for a powerful and versatile digital audio workstation, you might want to check out Cubase Pro 10.5. This software is one of the most popular and widely used music production tools in the industry. It offers a comprehensive set of features and functions that can help you create, record, edit, mix and master your music projects.

      -

      However, Cubase Pro 10.5 is not a cheap software. It costs $559.99 for the full version, which might be too expensive for some users. That's why some people resort to downloading Cubase Pro 10.5 crack torrent files from the internet. These files claim to provide a free and easy way to activate the full version of Cubase Pro 10.5 without paying anything.

      -

      Cubase Pro 10.5 Crack Torrent Full Version (New)


      DOWNLOAD ✫✫✫ https://urlcod.com/2uK7Bh



      -

      But is it really worth it to download Cubase Pro 10.5 crack torrent files? What are the risks and consequences of using cracked software? In this article, we will answer these questions and more. We will also show you a better and safer alternative to Cubase Pro 10.5 crack torrent files.

      - -

      What are Cubase Pro 10.5 crack torrent files?

      -

      Cubase Pro 10.5 crack torrent files are illegal and unauthorized copies of the original software that have been modified or hacked by third-party sources. These files usually contain a crack or a keygen program that can generate a serial number or a license key to bypass the activation process of Cubase Pro 10.5.

      -

      By using these files, you can supposedly get access to the full version of Cubase Pro 10.5 without paying anything. You can download these files from various torrent sites or file-sharing platforms on the internet. However, these sources are often unreliable and untrustworthy, as they may contain malware, viruses, spyware or other harmful elements that can damage your computer or compromise your personal data.

      - -

      What are the risks and consequences of using Cubase Pro 10.5 crack torrent files?

      -

      Using Cubase Pro 10.5 crack torrent files is not only illegal but also risky and unethical. Here are some of the possible risks and consequences of using cracked software:

      -
        -
      • Legal issues: Downloading and using Cubase Pro 10.5 crack torrent files is a violation of the intellectual property rights of Steinberg, the developer of Cubase Pro 10.5. This can result in legal actions such as fines, lawsuits or even criminal charges against you.
      • -
      • Security issues: As mentioned earlier, Cubase Pro 10.5 crack torrent files may contain malware, viruses, spyware or other harmful elements that can infect your computer or steal your personal data. These elements can also interfere with the performance and functionality of Cubase Pro 10.5 or other programs on your computer.
      • -
      • Quality issues: Cubase Pro 10.5 crack torrent files are not guaranteed to work properly or at all. They may have bugs, errors, glitches or compatibility issues that can affect the quality and reliability of your music projects. They may also lack some features or updates that are available in the original software.
      • -
      • Ethical issues: By using Cubase Pro 10.5 crack torrent files, you are depriving Steinberg of their rightful income and recognition for their hard work and innovation in developing Cubase Pro 10.5. You are also disrespecting the music industry and other artists who use legitimate software to create their music.
      • -
      - -

      What is a better and safer alternative to Cubase Pro 10.5 crack torrent files?

      -

      If you want to use Cubase Pro 10.5 without risking any legal, security, quality or ethical issues, you should avoid downloading and using Cubase Pro 10.5 crack torrent files at all costs. Instead, you should opt for a better and safer alternative: buying the original software from the official website of Steinberg.

      -

      By buying the original software from Steinberg, you can enjoy the following benefits:

      -
        -
      • Legal benefits: You can use Cubase Pro 10.5 legally and without any fear of legal actions from Steinberg or other authorities. -

        7196e7f11a
        -
        -
        \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/EasyBits Magic Desktop 9.5.0.213 Full Crack The Ultimate Solution for Creating Video Discs.md b/spaces/tialenAdioni/chat-gpt-api/logs/EasyBits Magic Desktop 9.5.0.213 Full Crack The Ultimate Solution for Creating Video Discs.md deleted file mode 100644 index b89812b4a1aee6bba09c2d060664858c72004d73..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/EasyBits Magic Desktop 9.5.0.213 Full Crack The Ultimate Solution for Creating Video Discs.md +++ /dev/null @@ -1,109 +0,0 @@ -
        -

        EasyBits Magic Desktop 9.5.0.213 Full Crack: A Safe and Stimulating Environment for Kids

        - -

        If you are looking for a software that can create a safe and stimulating environment for your kids to improve their computer literacy at their own pace, you might want to try EasyBits Magic Desktop 9.5.0.213 Full Crack. This software is a platform that has been developed for teaching children with hundreds of different exercises and simulators, and an endless amount of educational, informative, and entertaining material.

        -

        EasyBits Magic Desktop 9.5.0.213 Full Crack


        Download Zip ✺✺✺ https://urlcod.com/2uK58L



        - -

        In this article, we will show you how to download, install, and activate EasyBits Magic Desktop 9.5.0.213 Full Crack for free on your PC, without any registration or payment required. We will also give you some features and benefits of this software, as well as some drawbacks and alternatives.

        - -

        How to Download EasyBits Magic Desktop 9 .5 .0 .213 Full Crack for Free on Your PC

        - -

        There are several ways to download EasyBits Magic Desktop 9 .5 .0 .213 Full Crack for free on your PC . Here are some of them:

        - -
          -
        • 4DOWNLOAD: This is a website that offers a collection of software that you can download and install on your PC for free . You can find EasyBits Magic Desktop v9 .5 .0 .214 Full version on this website . To download it , simply click on the download button and follow the instructions.
        • -
        • KoLomPC: This is another website that offers a collection of software that you can download and install on your PC for free . You can find EasyBits Magic Desktop 9 .5 .0 .219 Crack on this website . To download it , simply click on the download button and follow the instructions.
        • -
        • PIRATEPC: This is yet another website that offers a collection of software that you can download and install on your PC for free . You can find EasyBits Magic Desktop 9 .5 .0 .213 Incl.Full Crack on this website . To download it , simply click on the download button and follow the instructions.
        • -
        - -

        How to Install and Activate EasyBits Magic Desktop 9 .5 .0 .213 Full Crack on Your PC

        - -

        After downloading EasyBits Magic Desktop 9 .5 .0 .213 Full Crack from one of the websites above , you need to install and activate it on your PC to use it without any limitations or restrictions . Here are the steps to do so:

        - -
          -
        1. Extract the downloaded file: You will get a compressed file in RAR or ZIP format that contains the setup file and the crack file of EasyBits Magic Desktop 9 .5 .0 .213 Full Crack . You need to extract this file using a program like WinRAR or 7-Zip.
        2. -
        3. Run the setup file: You will get a file named "MagicDesktopSetup.exe" or something similar that is the setup file of EasyBits Magic Desktop 9 .5 .0 .213 Full Crack . You need to run this file as administrator by right-clicking on it and choosing "Run as administrator".
        4. -
        5. Follow the installation wizard: You will see a window that guides you through the installation process of EasyBits Magic Desktop 9 .5 .0 .213 Full Crack on your PC . You need to follow the instructions and choose the options that suit your preference.
        6. -
        7. Copy and paste the crack file: You will get a file named "MagicDesktop.exe" or something similar that is the crack file of EasyBits Magic Desktop 9 .5 .0 .213 Full Crack . You need to copy this file and paste it into the installation folder of EasyBits Magic Desktop 9 .5 .0 .213 Full Crack on your PC , which is usually located at "C:\Program Files (x86)\EasyBits For Kids\Magic Desktop". You need to replace the original file with the crack file.
        8. -
        9. Launch the program: You can now launch EasyBits Magic Desktop 9

          -

          What are the Features and Benefits of EasyBits Magic Desktop 9 .5 .0 .213 Full Crack

          - -

          EasyBits Magic Desktop 9 .5 .0 .213 Full Crack is a platform that has been developed for teaching children with hundreds of different exercises and simulators , and an endless amount of educational , informative , and entertaining material.

          - -

          Some of its features and benefits are:

          -

          How to download EasyBits Magic Desktop 9.5.0.213 with crack
          -EasyBits Magic Desktop 9.5.0.213 full version free download
          -EasyBits Magic Desktop 9.5.0.213 activation key generator
          -EasyBits Magic Desktop 9.5.0.213 cracked software for windows
          -EasyBits Magic Desktop 9.5.0.213 license code crack
          -EasyBits Magic Desktop 9.5.0.213 serial number crack
          -EasyBits Magic Desktop 9.5.0.213 patch file download
          -EasyBits Magic Desktop 9.5.0.213 keygen crack
          -EasyBits Magic Desktop 9.5.0.213 registration code crack
          -EasyBits Magic Desktop 9.5.0.213 full crack download link
          -EasyBits Magic Desktop 9.5.0.213 crack torrent download
          -EasyBits Magic Desktop 9.5.0.213 crack direct download
          -EasyBits Magic Desktop 9.5.0.213 crack no survey
          -EasyBits Magic Desktop 9.5.0.213 crack no password
          -EasyBits Magic Desktop 9.5.0.213 crack online activation
          -EasyBits Magic Desktop 9.5.0.213 crack offline activation
          -EasyBits Magic Desktop 9.5.0.213 crack installation guide
          -EasyBits Magic Desktop 9.5.0.213 crack review
          -EasyBits Magic Desktop 9.5.0.213 crack features
          -EasyBits Magic Desktop 9.5.0.213 crack benefits
          -EasyBits Magic Desktop 9.5.0.213 crack pros and cons
          -EasyBits Magic Desktop 9.5.0.213 crack comparison
          -EasyBits Magic Desktop 9.5.0.213 crack alternatives
          -EasyBits Magic Desktop 9.5.0.213 crack vs other software
          -EasyBits Magic Desktop 9.5.0.213 crack testimonials
          -EasyBits Magic Desktop 9.5

          - -
            -
          • The safest browser: EasyBits Magic Desktop 9 .5 .0 .213 Full Crack comes with My First Browser , which is the safest kid’s Internet browser in existence! It allows you to hand-select your favorite kid-friendly websites and allow navigation on parent-approved sites only.
          • -
          • Parental Control: EasyBits Magic Desktop 9 .5 .0 .213 Full Crack allows you to control not only which programs your child may access but also when and how kids access approved programs . With a "No Homework-No Play!" focus , technology becomes your new best friend.
          • -
          • Delight and Entertain: EasyBits Magic Desktop 9 .5 .0 .213 Full Crack comes with a sensational collection of kid-friendly games , photo and drawing tools , and Web content – with new updates every month!
          • -
          • Computer Protection: EasyBits Magic Desktop 9 .5 .0 .213 Full Crack safeguards important system settings and data files from accidental interference . So your PC stays in peak working condition.
          • -
          • Kids love it!: Children have fun while learning with an assortment of popular activities and applications . Millions of young Magic Desktop daily users can’t be wrong.
          • -
          • Peace-of-mind: Parents love the peace-of-mind EasyBits Magic Desktop 9 .5 .0 .213 Full Crack provides , with no more worrying about mishaps or deleted files on the family PC.
          • -
          • The Safest Web: EasyBits Magic Desktop 9 .5 .0 .213 Full Crack protects young , innocent eyes with parent-managed Web browsing . It offers fresh updates of child-friendly content every month.
          • -
          • Early Learning: EasyBits Magic Desktop 9 .5 .0 .213 Full Crack gives your child a head start by teaching them how computers work and encouraging exploration . It is suitable for kids as young as toddlers.
          • -
          • Unleash Creativity: EasyBits Magic Desktop 9 .5 .0 .213 Full Crack provides a multitude of creation tools that help children express their creativity and share their creations with supportive friends and family.
          • -
          • Family Fun: EasyBits Magic Desktop 9 -

            What are the Drawbacks and Alternatives of EasyBits Magic Desktop 9 .5 .0 .213 Full Crack

            - -

            EasyBits Magic Desktop 9 .5 .0 .213 Full Crack is not without its drawbacks and alternatives . Here are some of them:

            - -
              -
            • It can be outdated: EasyBits Magic Desktop 9 .5 .0 .213 Full Crack is not updated as frequently as other platforms by other developers . Some of its features and content may be outdated or incompatible with newer versions of Windows or other software.
            • -
            • It can be expensive: EasyBits Magic Desktop 9 .5 .0 .213 Full Crack is free to download , install , and activate with a crack , but if you want to use it legally , you have to pay for a license , which can cost up to $39 per year per PC.
            • -
            • It can be limited: EasyBits Magic Desktop 9 .5 .0 .213 Full Crack has a lot of features , but it may not have everything that you or your child need or want in terms of education , entertainment , or customization . You may have to look for other platforms or programs that offer more options or variety.
            • -
            • Kano OS: This is an alternative platform that creates a safe , fun , and creative environment for kids to learn about coding , art , music , games , and more on their own computers or devices . It is free to download , install , and use , but it requires a Kano kit or device , which can cost up to $299.
            • -
            • Zoodles Kid Mode: This is another alternative platform that creates a safe , fun , and educational environment for kids to access thousands of games -

              How to Use EasyBits Magic Desktop 9 .5 .0 .213 Full Crack

              - -

              Once you have installed and activated EasyBits Magic Desktop 9 .5 .0 .213 Full Crack on your PC , you can start using it to create a safe and stimulating environment for your kids . Here are some tips on how to use it:

              - -
                -
              • Choose a theme: You can choose from different themes that suit your child's age and preference , such as Animals , Fairy Tales , Space , Sports , and more . You can also customize the theme with your own wallpapers , icons , sounds , and colors.
              • -
              • Choose a profile: You can create different profiles for each child that uses EasyBits Magic Desktop 9 .5 .0 .213 Full Crack , with their own name , avatar , password , and settings . You can also monitor their progress and activity with reports and statistics.
              • -
              • Choose a program: You can access different programs that are designed for learning , playing , creating , and exploring . Some of the programs are:
              • -
                  -
                • Magic Learning: This is a program that offers hundreds of educational games and activities that cover various subjects , such as math , language , science , geography , music , and more . It also adapts to your child's level and pace of learning.
                • -
                • Magic Games: This is a program that offers dozens of fun and engaging games that challenge your child's logic , memory , coordination , and creativity . It also rewards your child with stars and trophies for their achievements.
                • -
                • Magic Studio: This is a program that offers various tools for drawing , painting , coloring , animating , and editing photos and videos . It also allows your child to share their creations with friends and family via e-mail or print.
                • -
                • Magic Mail: This is a program that offers a safe and easy way for your child to communicate with their approved contacts via e-mail . It also allows your child to send and receive attachments , such as photos , videos , drawings , and stickers.
                • -
                • Magic Web: This is a program that offers a safe and kid-friendly web browser that only allows access to parent-approved websites . It also offers fresh updates of child-friendly content every month.
                • -
                -
              • Choose a setting: You can adjust various settings that affect the performance and appearance of EasyBits Magic Desktop 9 .5 .0 .213 Full Crack , such as language , resolution , sound , mouse speed , parental control , and more.
              • -
              - -

              Conclusion

              - -

              EasyBits Magic Desktop 9 .5 .0 .213 Full Crack is a software that can create a safe and stimulating environment for your kids to improve their computer literacy at their own pace. It offers hundreds of different exercises and simulators, and an endless amount of educational, informative, and entertaining material. It is easy to download, install, and activate for free on your PC, without any registration or payment required. However, it also has some drawbacks and alternatives that you may want to consider before using it.

              - -

              We hope this article has helped you learn how to get EasyBits Magic Desktop 9 .5 .0 .213 Full Crack for free on your PC and how to use it. If you have any questions or comments, feel free to leave them below.

              -

              EasyBits Magic Desktop 9 .5 .0 .213 Full Crack is a software that can create a safe and stimulating environment for your kids to improve their computer literacy at their own pace. It offers hundreds of different exercises and simulators, and an endless amount of educational, informative, and entertaining material. It is easy to download, install, and activate for free on your PC, without any registration or payment required. However, it also has some drawbacks and alternatives that you may want to consider before using it.

              - -

              We hope this article has helped you learn how to get EasyBits Magic Desktop 9 .5 .0 .213 Full Crack for free on your PC and how to use it. If you have any questions or comments, feel free to leave them below.

              679dcb208e
              -
              -
              \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Alto 39s Adventure Game __EXCLUSIVE__ Download For Pc.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Alto 39s Adventure Game __EXCLUSIVE__ Download For Pc.md deleted file mode 100644 index 4e054ca7dc036d07103a97ff706895eae563769e..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Alto 39s Adventure Game __EXCLUSIVE__ Download For Pc.md +++ /dev/null @@ -1,99 +0,0 @@ -
              -

              How to Download Alto's Adventure for PC

              -

              Alto's Adventure is a beautiful and relaxing endless runner game that has won many awards and accolades since its release in 2015. The game follows Alto and his friends as they snowboard across stunning alpine landscapes, rescuing llamas, performing tricks, and avoiding obstacles. The game features fluid physics-based gameplay, dynamic weather effects, a soothing soundtrack, and a minimalist art style.

              -

              alto 39;s adventure game download for pc


              Download File ››› https://bltlly.com/2uOp6V



              -

              If you are looking for a game that can calm your mind and challenge your skills, Alto's Adventure is a great choice. But how can you play it on your PC? In this article, we will show you three easy ways to download and install Alto's Adventure on your Windows computer. We will also compare the pros and cons of each method and help you decide which one is best for you.

              -

              How to Download Alto's Adventure from the Microsoft Store

              -

              The Microsoft Store is a convenient way to get games for your PC. You can browse, buy, and download games directly from the store app on your Windows device. You can also access your game library, achievements, and friends list from the store. Here are the steps to download Alto's Adventure from the Microsoft Store:

              -
                -
              1. Open the Microsoft Store app on your PC. You can find it by typing "Microsoft Store" in the search box on the taskbar.
              2. -
              3. Click on the Gaming category in the sidebar. You can also use the search box to look for "Alto's Adventure".
              4. -
              5. Select Alto's Adventure from the list of games. You will see a page with more information about the game, such as screenshots, videos, reviews, and system requirements.
              6. -
              7. Click on the Buy button to purchase the game. The game costs $4.99 as of writing this article. You can also try it for free for an hour before buying it.
              8. -
              9. After purchasing the game, click on the Install button to start downloading it. You will need about 300 MB of free space on your hard drive.
              10. -
              11. Once the download is complete, you can launch the game from the store app or from your Start menu.
              12. -
              -

              The pros of this method are:

              -
                -
              • You can enjoy a smooth and secure purchase process with your Microsoft account.
              • -
              • You can access other features of the Microsoft Store, such as updates, refunds, ratings, and support.
              • -
              • You can play the game offline once you install it.
              • -
              -

              The cons of this method are:

              -
                -
              • You need to have a Microsoft account and sign in to the store app.
              • -
              • You need to have Windows 10 or later as your operating system.
              • -
              • You may encounter some compatibility issues with older hardware or software.
              • -
              -

              How to Download Alto's Adventure from the Epic Games Store

              -

              The Epic Games Store is another popular platform for downloading PC games. It offers a variety of games from different genres and publishers, including some exclusive titles. It also gives away free games every week that you can keep forever. Here are the steps to download Alto's Adventure from the Epic Games Store:

              -
                -
              1. Open your web browser and go to https://store.epicgames.com/en-US/news/how-to-download-pc-games. This is the official website of the Epic Games Store.
              2. -
              3. Click on the Sign In button at the top right corner of the page. You will need to create an Epic Games account or use your Google, Facebook, or console accounts to. sign in.
              4. -
              5. Once you are signed in, click on the Store tab at the top of the page. You will see a list of featured games and categories that you can browse.
              6. -
              7. Use the search box to look for "Alto's Adventure". You can also filter the results by genre, price, platform, and rating.
              8. -
              9. Select Alto's Adventure from the list of games. You will see a page with more information about the game, such as screenshots, videos, reviews, and system requirements.
              10. -
              11. Click on the Get button to add the game to your library. The game is free to download as of writing this article. You can also add it to your wishlist or share it with your friends.
              12. -
              13. After adding the game to your library, click on the Library tab at the top of the page. You will see a list of games that you own or have access to.
              14. -
              15. Select Alto's Adventure from your library and click on the Install button to start downloading it. You will need about 300 MB of free space on your hard drive.
              16. -
              17. Once the download is complete, you can launch the game from your library or from your desktop shortcut.
              18. -
              -

              The pros of this method are:

              -
                -
              • You can get the game for free if you claim it during the promotional period.
              • -
              • You can access other features of the Epic Games Store, such as updates, refunds, ratings, and support.
              • -
              • You can play the game offline once you install it.
              • -
              -

              The cons of this method are:

              -

              -
                -
              • You need to have an Epic Games account and sign in to the store website or app.
              • -
              • You need to have Windows 7 or later as your operating system.
              • -
              • You may encounter some compatibility issues with older hardware or software.
              • -
              -

              How to Download Alto's Adventure from the Game's Official Website

              -

              The third way to download Alto's Adventure for PC is to go directly to the game's official website. This is the simplest and most straightforward method, as you don't need any third-party platforms or accounts. Here are the steps to download Alto's Adventure from the game's official website:

              -
                -
              1. Open your web browser and go to https://altosadventure.com/. This is the official website of Alto's Adventure.
              2. -
              3. Click on the Windows icon at the top right corner of the page. You will see a pop-up window with a link to download the game.
              4. -
              5. Click on the Download button to start downloading the game. You will need about 300 MB of free space on your hard drive.
              6. -
              7. Once the download is complete, open the downloaded file and follow the instructions to install the game on your PC.
              8. -
              9. After installing the game, you can launch it from your Start menu or from your desktop shortcut.
              10. -
              -

              The pros of this method are:

              -
                -
              • You don't need any other platforms or accounts to get the game.
              • -
              • You can get the latest version of the game directly from the developers.
              • -
              • You can play the game offline once you install it.
              • -
              -

              The cons of this method are:

              -
                -
              • You need to pay $4.99 for the game via PayPal or credit card.
              • -
              • You may not get any updates, refunds, ratings, or support from the developers.
              • -
              • You may encounter some compatibility issues with older hardware or software.
              • -
              -

              Comparison Table of the Three Methods

              - - - - - -

              Conclusion

              -

              In this article, we have shown you three easy ways to download and play Alto's Adventure on your PC. Each method has its own pros and cons, so you should choose the one that suits your preferences and needs the best. We hope this article has helped you enjoy this amazing game on your PC.

              -

              If you have any questions, comments, or feedback, please feel free to share them with us in the comment section below. We would love to hear from you and learn from your experiences. Happy snowboarding!

              -

              FAQs

              -

              Here are some frequently asked questions and answers about Alto's Adventure:

              -
                -
              1. Is Alto's Adventure a multiplayer game?
              2. -

                No, Alto's Adventure is a single-player game. You can play it by yourself or compete with your friends on the leaderboards.

                -
              3. How many characters are there in Alto's Adventure?
              4. -

                There are six characters in Alto's Adventure: Alto, Maya, Paz, Izel, Felipe, and Tupa. Each character has their own attributes and abilities that affect their gameplay.

                -
              5. How do I unlock new characters and items in Alto's Adventure?
              6. -

                You can unlock new characters and items by completing goals, collecting coins, and finding secrets. You can also buy some items with real money if you want to support the developers.

                -
              7. How do I perform tricks and combos in Alto's Adventure?
              8. -

                You can perform tricks and combos by tapping the screen to jump and holding it to flip. The longer you hold, the more flips you can do. You can also grind on rails, bunting lines, and rooftops to extend your combos. Tricks and combos increase your speed, score, and multiplier.

                -
              9. What are the benefits of rescuing llamas in Alto's Adventure?
              10. -

                Rescuing llamas is one of the main objectives of the game. Llamas give you points and coins, which you can use to buy new items and upgrade your abilities. Llamas also make cute noises and follow you around.

                -

              401be4b1e0
              -
              -
              \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Dreamweaver Download _VERIFIED_ Full Version Crack.md b/spaces/tioseFevbu/cartoon-converter/scripts/Dreamweaver Download _VERIFIED_ Full Version Crack.md deleted file mode 100644 index 6e6f4e00c7109fa5793849d7cd79e7259b4c506f..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Dreamweaver Download _VERIFIED_ Full Version Crack.md +++ /dev/null @@ -1,29 +0,0 @@ - -

              How to Download and Install Adobe Dreamweaver 2021 Full Version with Crack

              -

              If you are looking for a web design software that supports HTML, CSS, JavaScript, and more, you might want to try Adobe Dreamweaver 2021. This software allows you to create, code, and manage dynamic websites easily with a smart, simplified coding engine. You can also preview your sites and edits in real time to make sure they look and work the way you want before you publish them.

              -

              Dreamweaver Download Full Version Crack


              Download File ✒ ✒ ✒ https://urlcod.com/2uHyAC



              -

              However, Adobe Dreamweaver 2021 is not a free software. You need to pay a monthly or annual subscription fee to use it. If you don't want to spend money on this software, you might be tempted to look for a cracked version online. But is it safe and legal to do so?

              -

              The Risks of Downloading a Cracked Version of Dreamweaver

              -

              Downloading a cracked version of Dreamweaver means that you are using an illegal copy of the software that has been modified by someone else to bypass the activation process. This might seem like an easy way to get the software for free, but it comes with many risks and disadvantages.

              -
                -
              • Malware infection: The crack files that you download from unknown sources might contain viruses, trojans, worms, spyware, ransomware, or other malicious programs that can harm your computer and compromise your personal data. You might end up losing your files, passwords, bank accounts, or even your identity.
              • -
              • Lack of updates and support: A cracked version of Dreamweaver will not receive any updates or patches from Adobe. This means that you will miss out on the latest features, bug fixes, security enhancements, and compatibility improvements. You will also not be able to access any online services or customer support from Adobe if you encounter any problems or issues with the software.
              • -
              • Legal consequences: Downloading and using a cracked version of Dreamweaver is a violation of Adobe's terms of service and intellectual property rights. You are exposing yourself to potential lawsuits, fines, or even criminal charges if you are caught using pirated software. You are also disrespecting the hard work and creativity of the developers who created the software.
              • -
              -

              The Benefits of Using a Genuine Version of Dreamweaver

              -

              Instead of risking your computer, data, and reputation by downloading a cracked version of Dreamweaver, you should consider using a genuine version of the software. Here are some of the benefits of doing so:

              -
                -
              • Safety and security: A genuine version of Dreamweaver is free from any malware or viruses that can harm your computer or data. You can also enjoy the latest updates and patches from Adobe that will keep your software running smoothly and securely.
              • -
              • Features and functionality: A genuine version of Dreamweaver will give you access to all the features and functionality that the software has to offer. You can also use the online services and cloud storage that Adobe provides for its customers. You can also get help and support from Adobe's experts and community if you have any questions or issues with the software.
              • -
              • Ethics and legality: A genuine version of Dreamweaver is a legal and ethical way to use the software. You are respecting Adobe's terms of service and intellectual property rights by paying for their product. You are also supporting the development and innovation of the software by providing feedback and suggestions.
              • -
              -

              How to Download and Install Adobe Dreamweaver 2021 Full Version with Crack

              -

              If you still want to download and install Adobe Dreamweaver 2021 full version with crack, here are the steps that you need to follow:

              -

              -
                -
              1. Download the setup file from one of the links below[^1^] [^2^] [^3^] [^4^]. Make sure you have a reliable antivirus program installed on your computer before downloading anything from unknown sources.
              2. -
              3. Disable your antivirus program and internet connection before proceeding with the installation.
              4. -
              5. Run the setup file and follow the instructions on the screen to install Dreamweaver 2021 on your computer.
              6. -
              7. Copy the crack file from the downloaded folder

                cec2833e83
                -
                -
                \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Microbiologia Medica Murray Pdf Italiano 14.md b/spaces/tioseFevbu/cartoon-converter/scripts/Microbiologia Medica Murray Pdf Italiano 14.md deleted file mode 100644 index 62f1acff007853d78d19ba532412dc982af09b16..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Microbiologia Medica Murray Pdf Italiano 14.md +++ /dev/null @@ -1,25 +0,0 @@ - -Here is a possible title and article with HTML formatting for the keyword "Microbiologia Medica Murray Pdf Italiano 14": - -``` -

                Microbiologia Medica Murray Pdf Italiano 14: un libro di testo per gli studenti di medicina

                -

                La microbiologia medica è la scienza che studia i microrganismi patogeni per l'uomo, le loro caratteristiche, le malattie che provocano e le strategie per prevenirle e curarle. È una disciplina fondamentale per la formazione degli studenti di medicina e per la pratica clinica.

                -

                Un libro di testo che offre una trattazione completa ed aggiornata della microbiologia medica è Microbiologia Medica di Patrick R. Murray, Ken S. Rosenthal e Michael A. Pfaller, pubblicato in italiano dalla casa editrice Elsevier. Si tratta della settima edizione di un'opera di riferimento a livello internazionale, che copre tutti gli aspetti della materia, dalla biologia molecolare alla diagnosi di laboratorio, dalla patogenesi alla terapia antimicrobica.

                -

                Microbiologia Medica Murray Pdf Italiano 14


                DOWNLOADhttps://urlcod.com/2uHwpf



                -

                Il libro è suddiviso in quattro sezioni: la prima introduce i concetti generali della microbiologia medica, la seconda descrive i principali gruppi di microrganismi patogeni (batteri, virus, funghi e parassiti), la terza approfondisce le infezioni dei vari sistemi e apparati del corpo umano e la quarta illustra le basi della prevenzione e del controllo delle infezioni.

                -

                Ogni capitolo presenta una struttura chiara e uniforme, con obiettivi di apprendimento, sommario, definizioni dei termini chiave, box riassuntivi, casi clinici, domande di autovalutazione e bibliografia. Il testo è arricchito da oltre 900 immagini a colori, tra fotografie, disegni schematizzati e tabelle sinottiche. Inoltre, il libro è corredato da un sito web dedicato, che offre risorse didattiche aggiuntive per gli studenti e per i docenti.

                -

                Microbiologia Medica è un libro di testo indispensabile per gli studenti di medicina che vogliono acquisire una solida conoscenza della microbiologia medica e per i medici che vogliono aggiornarsi sulle novità scientifiche e cliniche in questo campo. Il libro è disponibile in formato PDF italiano al seguente link: https://jinyurl.com/2sBokY.

                -```Here is a possible continuation of the article with HTML formatting: - -``` -

                Se si vuole approfondire ulteriormente lo studio della microbiologia medica, esistono altri libri di testo che offrono una visione diversa o complementare a quella di Murray e collaboratori. Tra questi, possiamo citare:

                -
                  -
                • Brock Biology of Microorganisms di Michael T. Madigan, John M. Martinko, Kelly S. Bender, Daniel H. Buckley e David A. Stahl, pubblicato in inglese dalla casa editrice Pearson. Si tratta della quattordicesima edizione di un classico della microbiologia, che tratta sia gli aspetti generali che quelli applicati della disciplina, con una forte enfasi sulla biologia molecolare e sulla diversità microbica.
                • -
                • Microbiology: An Introduction di Gerard J. Tortora, Berdell R. Funke e Christine L. Case, pubblicato in inglese dalla casa editrice Pearson. Si tratta della tredicesima edizione di un testo introduttivo alla microbiologia, che copre i principi fondamentali e le applicazioni cliniche della materia, con un approccio didattico e motivante.
                • -
                • Clinical Microbiology Made Ridiculously Simple di Mark Gladwin, William Trattler e C. Scott Mahan, pubblicato in inglese dalla casa editrice MedMaster. Si tratta della sesta edizione di un testo semplificato e divertente sulla microbiologia clinica, che utilizza schemi, cartoni animati e mnemoniche per facilitare la memorizzazione dei concetti chiave.
                • -
                -

                Questi e altri libri di testo sulla microbiologia medica sono disponibili in formato PDF inglese al seguente link: https://microbiologyinfo.com/top-and-best-microbiology-books/.

                -```

                -

                7196e7f11a
                -
                -
                \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pygments/lexer.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pygments/lexer.py deleted file mode 100644 index ec7f4de32cfdf58f2bf54cc9fec089ac78b2a276..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pygments/lexer.py +++ /dev/null @@ -1,882 +0,0 @@ -""" - pygments.lexer - ~~~~~~~~~~~~~~ - - Base lexer classes. - - :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import re -import sys -import time - -from pip._vendor.pygments.filter import apply_filters, Filter -from pip._vendor.pygments.filters import get_filter_by_name -from pip._vendor.pygments.token import Error, Text, Other, _TokenType -from pip._vendor.pygments.util import get_bool_opt, get_int_opt, get_list_opt, \ - make_analysator, Future, guess_decode -from pip._vendor.pygments.regexopt import regex_opt - -__all__ = ['Lexer', 'RegexLexer', 'ExtendedRegexLexer', 'DelegatingLexer', - 'LexerContext', 'include', 'inherit', 'bygroups', 'using', 'this', - 'default', 'words'] - - -_encoding_map = [(b'\xef\xbb\xbf', 'utf-8'), - (b'\xff\xfe\0\0', 'utf-32'), - (b'\0\0\xfe\xff', 'utf-32be'), - (b'\xff\xfe', 'utf-16'), - (b'\xfe\xff', 'utf-16be')] - -_default_analyse = staticmethod(lambda x: 0.0) - - -class LexerMeta(type): - """ - This metaclass automagically converts ``analyse_text`` methods into - static methods which always return float values. - """ - - def __new__(mcs, name, bases, d): - if 'analyse_text' in d: - d['analyse_text'] = make_analysator(d['analyse_text']) - return type.__new__(mcs, name, bases, d) - - -class Lexer(metaclass=LexerMeta): - """ - Lexer for a specific language. - - Basic options recognized: - ``stripnl`` - Strip leading and trailing newlines from the input (default: True). - ``stripall`` - Strip all leading and trailing whitespace from the input - (default: False). - ``ensurenl`` - Make sure that the input ends with a newline (default: True). This - is required for some lexers that consume input linewise. - - .. versionadded:: 1.3 - - ``tabsize`` - If given and greater than 0, expand tabs in the input (default: 0). - ``encoding`` - If given, must be an encoding name. This encoding will be used to - convert the input string to Unicode, if it is not already a Unicode - string (default: ``'guess'``, which uses a simple UTF-8 / Locale / - Latin1 detection. Can also be ``'chardet'`` to use the chardet - library, if it is installed. - ``inencoding`` - Overrides the ``encoding`` if given. - """ - - #: Name of the lexer - name = None - - #: URL of the language specification/definition - url = None - - #: Shortcuts for the lexer - aliases = [] - - #: File name globs - filenames = [] - - #: Secondary file name globs - alias_filenames = [] - - #: MIME types - mimetypes = [] - - #: Priority, should multiple lexers match and no content is provided - priority = 0 - - def __init__(self, **options): - self.options = options - self.stripnl = get_bool_opt(options, 'stripnl', True) - self.stripall = get_bool_opt(options, 'stripall', False) - self.ensurenl = get_bool_opt(options, 'ensurenl', True) - self.tabsize = get_int_opt(options, 'tabsize', 0) - self.encoding = options.get('encoding', 'guess') - self.encoding = options.get('inencoding') or self.encoding - self.filters = [] - for filter_ in get_list_opt(options, 'filters', ()): - self.add_filter(filter_) - - def __repr__(self): - if self.options: - return '' % (self.__class__.__name__, - self.options) - else: - return '' % self.__class__.__name__ - - def add_filter(self, filter_, **options): - """ - Add a new stream filter to this lexer. - """ - if not isinstance(filter_, Filter): - filter_ = get_filter_by_name(filter_, **options) - self.filters.append(filter_) - - def analyse_text(text): - """ - Has to return a float between ``0`` and ``1`` that indicates - if a lexer wants to highlight this text. Used by ``guess_lexer``. - If this method returns ``0`` it won't highlight it in any case, if - it returns ``1`` highlighting with this lexer is guaranteed. - - The `LexerMeta` metaclass automatically wraps this function so - that it works like a static method (no ``self`` or ``cls`` - parameter) and the return value is automatically converted to - `float`. If the return value is an object that is boolean `False` - it's the same as if the return values was ``0.0``. - """ - - def get_tokens(self, text, unfiltered=False): - """ - Return an iterable of (tokentype, value) pairs generated from - `text`. If `unfiltered` is set to `True`, the filtering mechanism - is bypassed even if filters are defined. - - Also preprocess the text, i.e. expand tabs and strip it if - wanted and applies registered filters. - """ - if not isinstance(text, str): - if self.encoding == 'guess': - text, _ = guess_decode(text) - elif self.encoding == 'chardet': - try: - from pip._vendor import chardet - except ImportError as e: - raise ImportError('To enable chardet encoding guessing, ' - 'please install the chardet library ' - 'from http://chardet.feedparser.org/') from e - # check for BOM first - decoded = None - for bom, encoding in _encoding_map: - if text.startswith(bom): - decoded = text[len(bom):].decode(encoding, 'replace') - break - # no BOM found, so use chardet - if decoded is None: - enc = chardet.detect(text[:1024]) # Guess using first 1KB - decoded = text.decode(enc.get('encoding') or 'utf-8', - 'replace') - text = decoded - else: - text = text.decode(self.encoding) - if text.startswith('\ufeff'): - text = text[len('\ufeff'):] - else: - if text.startswith('\ufeff'): - text = text[len('\ufeff'):] - - # text now *is* a unicode string - text = text.replace('\r\n', '\n') - text = text.replace('\r', '\n') - if self.stripall: - text = text.strip() - elif self.stripnl: - text = text.strip('\n') - if self.tabsize > 0: - text = text.expandtabs(self.tabsize) - if self.ensurenl and not text.endswith('\n'): - text += '\n' - - def streamer(): - for _, t, v in self.get_tokens_unprocessed(text): - yield t, v - stream = streamer() - if not unfiltered: - stream = apply_filters(stream, self.filters, self) - return stream - - def get_tokens_unprocessed(self, text): - """ - Return an iterable of (index, tokentype, value) pairs where "index" - is the starting position of the token within the input text. - - In subclasses, implement this method as a generator to - maximize effectiveness. - """ - raise NotImplementedError - - -class DelegatingLexer(Lexer): - """ - This lexer takes two lexer as arguments. A root lexer and - a language lexer. First everything is scanned using the language - lexer, afterwards all ``Other`` tokens are lexed using the root - lexer. - - The lexers from the ``template`` lexer package use this base lexer. - """ - - def __init__(self, _root_lexer, _language_lexer, _needle=Other, **options): - self.root_lexer = _root_lexer(**options) - self.language_lexer = _language_lexer(**options) - self.needle = _needle - Lexer.__init__(self, **options) - - def get_tokens_unprocessed(self, text): - buffered = '' - insertions = [] - lng_buffer = [] - for i, t, v in self.language_lexer.get_tokens_unprocessed(text): - if t is self.needle: - if lng_buffer: - insertions.append((len(buffered), lng_buffer)) - lng_buffer = [] - buffered += v - else: - lng_buffer.append((i, t, v)) - if lng_buffer: - insertions.append((len(buffered), lng_buffer)) - return do_insertions(insertions, - self.root_lexer.get_tokens_unprocessed(buffered)) - - -# ------------------------------------------------------------------------------ -# RegexLexer and ExtendedRegexLexer -# - - -class include(str): # pylint: disable=invalid-name - """ - Indicates that a state should include rules from another state. - """ - pass - - -class _inherit: - """ - Indicates the a state should inherit from its superclass. - """ - def __repr__(self): - return 'inherit' - -inherit = _inherit() # pylint: disable=invalid-name - - -class combined(tuple): # pylint: disable=invalid-name - """ - Indicates a state combined from multiple states. - """ - - def __new__(cls, *args): - return tuple.__new__(cls, args) - - def __init__(self, *args): - # tuple.__init__ doesn't do anything - pass - - -class _PseudoMatch: - """ - A pseudo match object constructed from a string. - """ - - def __init__(self, start, text): - self._text = text - self._start = start - - def start(self, arg=None): - return self._start - - def end(self, arg=None): - return self._start + len(self._text) - - def group(self, arg=None): - if arg: - raise IndexError('No such group') - return self._text - - def groups(self): - return (self._text,) - - def groupdict(self): - return {} - - -def bygroups(*args): - """ - Callback that yields multiple actions for each group in the match. - """ - def callback(lexer, match, ctx=None): - for i, action in enumerate(args): - if action is None: - continue - elif type(action) is _TokenType: - data = match.group(i + 1) - if data: - yield match.start(i + 1), action, data - else: - data = match.group(i + 1) - if data is not None: - if ctx: - ctx.pos = match.start(i + 1) - for item in action(lexer, - _PseudoMatch(match.start(i + 1), data), ctx): - if item: - yield item - if ctx: - ctx.pos = match.end() - return callback - - -class _This: - """ - Special singleton used for indicating the caller class. - Used by ``using``. - """ - -this = _This() - - -def using(_other, **kwargs): - """ - Callback that processes the match with a different lexer. - - The keyword arguments are forwarded to the lexer, except `state` which - is handled separately. - - `state` specifies the state that the new lexer will start in, and can - be an enumerable such as ('root', 'inline', 'string') or a simple - string which is assumed to be on top of the root state. - - Note: For that to work, `_other` must not be an `ExtendedRegexLexer`. - """ - gt_kwargs = {} - if 'state' in kwargs: - s = kwargs.pop('state') - if isinstance(s, (list, tuple)): - gt_kwargs['stack'] = s - else: - gt_kwargs['stack'] = ('root', s) - - if _other is this: - def callback(lexer, match, ctx=None): - # if keyword arguments are given the callback - # function has to create a new lexer instance - if kwargs: - # XXX: cache that somehow - kwargs.update(lexer.options) - lx = lexer.__class__(**kwargs) - else: - lx = lexer - s = match.start() - for i, t, v in lx.get_tokens_unprocessed(match.group(), **gt_kwargs): - yield i + s, t, v - if ctx: - ctx.pos = match.end() - else: - def callback(lexer, match, ctx=None): - # XXX: cache that somehow - kwargs.update(lexer.options) - lx = _other(**kwargs) - - s = match.start() - for i, t, v in lx.get_tokens_unprocessed(match.group(), **gt_kwargs): - yield i + s, t, v - if ctx: - ctx.pos = match.end() - return callback - - -class default: - """ - Indicates a state or state action (e.g. #pop) to apply. - For example default('#pop') is equivalent to ('', Token, '#pop') - Note that state tuples may be used as well. - - .. versionadded:: 2.0 - """ - def __init__(self, state): - self.state = state - - -class words(Future): - """ - Indicates a list of literal words that is transformed into an optimized - regex that matches any of the words. - - .. versionadded:: 2.0 - """ - def __init__(self, words, prefix='', suffix=''): - self.words = words - self.prefix = prefix - self.suffix = suffix - - def get(self): - return regex_opt(self.words, prefix=self.prefix, suffix=self.suffix) - - -class RegexLexerMeta(LexerMeta): - """ - Metaclass for RegexLexer, creates the self._tokens attribute from - self.tokens on the first instantiation. - """ - - def _process_regex(cls, regex, rflags, state): - """Preprocess the regular expression component of a token definition.""" - if isinstance(regex, Future): - regex = regex.get() - return re.compile(regex, rflags).match - - def _process_token(cls, token): - """Preprocess the token component of a token definition.""" - assert type(token) is _TokenType or callable(token), \ - 'token type must be simple type or callable, not %r' % (token,) - return token - - def _process_new_state(cls, new_state, unprocessed, processed): - """Preprocess the state transition action of a token definition.""" - if isinstance(new_state, str): - # an existing state - if new_state == '#pop': - return -1 - elif new_state in unprocessed: - return (new_state,) - elif new_state == '#push': - return new_state - elif new_state[:5] == '#pop:': - return -int(new_state[5:]) - else: - assert False, 'unknown new state %r' % new_state - elif isinstance(new_state, combined): - # combine a new state from existing ones - tmp_state = '_tmp_%d' % cls._tmpname - cls._tmpname += 1 - itokens = [] - for istate in new_state: - assert istate != new_state, 'circular state ref %r' % istate - itokens.extend(cls._process_state(unprocessed, - processed, istate)) - processed[tmp_state] = itokens - return (tmp_state,) - elif isinstance(new_state, tuple): - # push more than one state - for istate in new_state: - assert (istate in unprocessed or - istate in ('#pop', '#push')), \ - 'unknown new state ' + istate - return new_state - else: - assert False, 'unknown new state def %r' % new_state - - def _process_state(cls, unprocessed, processed, state): - """Preprocess a single state definition.""" - assert type(state) is str, "wrong state name %r" % state - assert state[0] != '#', "invalid state name %r" % state - if state in processed: - return processed[state] - tokens = processed[state] = [] - rflags = cls.flags - for tdef in unprocessed[state]: - if isinstance(tdef, include): - # it's a state reference - assert tdef != state, "circular state reference %r" % state - tokens.extend(cls._process_state(unprocessed, processed, - str(tdef))) - continue - if isinstance(tdef, _inherit): - # should be processed already, but may not in the case of: - # 1. the state has no counterpart in any parent - # 2. the state includes more than one 'inherit' - continue - if isinstance(tdef, default): - new_state = cls._process_new_state(tdef.state, unprocessed, processed) - tokens.append((re.compile('').match, None, new_state)) - continue - - assert type(tdef) is tuple, "wrong rule def %r" % tdef - - try: - rex = cls._process_regex(tdef[0], rflags, state) - except Exception as err: - raise ValueError("uncompilable regex %r in state %r of %r: %s" % - (tdef[0], state, cls, err)) from err - - token = cls._process_token(tdef[1]) - - if len(tdef) == 2: - new_state = None - else: - new_state = cls._process_new_state(tdef[2], - unprocessed, processed) - - tokens.append((rex, token, new_state)) - return tokens - - def process_tokendef(cls, name, tokendefs=None): - """Preprocess a dictionary of token definitions.""" - processed = cls._all_tokens[name] = {} - tokendefs = tokendefs or cls.tokens[name] - for state in list(tokendefs): - cls._process_state(tokendefs, processed, state) - return processed - - def get_tokendefs(cls): - """ - Merge tokens from superclasses in MRO order, returning a single tokendef - dictionary. - - Any state that is not defined by a subclass will be inherited - automatically. States that *are* defined by subclasses will, by - default, override that state in the superclass. If a subclass wishes to - inherit definitions from a superclass, it can use the special value - "inherit", which will cause the superclass' state definition to be - included at that point in the state. - """ - tokens = {} - inheritable = {} - for c in cls.__mro__: - toks = c.__dict__.get('tokens', {}) - - for state, items in toks.items(): - curitems = tokens.get(state) - if curitems is None: - # N.b. because this is assigned by reference, sufficiently - # deep hierarchies are processed incrementally (e.g. for - # A(B), B(C), C(RegexLexer), B will be premodified so X(B) - # will not see any inherits in B). - tokens[state] = items - try: - inherit_ndx = items.index(inherit) - except ValueError: - continue - inheritable[state] = inherit_ndx - continue - - inherit_ndx = inheritable.pop(state, None) - if inherit_ndx is None: - continue - - # Replace the "inherit" value with the items - curitems[inherit_ndx:inherit_ndx+1] = items - try: - # N.b. this is the index in items (that is, the superclass - # copy), so offset required when storing below. - new_inh_ndx = items.index(inherit) - except ValueError: - pass - else: - inheritable[state] = inherit_ndx + new_inh_ndx - - return tokens - - def __call__(cls, *args, **kwds): - """Instantiate cls after preprocessing its token definitions.""" - if '_tokens' not in cls.__dict__: - cls._all_tokens = {} - cls._tmpname = 0 - if hasattr(cls, 'token_variants') and cls.token_variants: - # don't process yet - pass - else: - cls._tokens = cls.process_tokendef('', cls.get_tokendefs()) - - return type.__call__(cls, *args, **kwds) - - -class RegexLexer(Lexer, metaclass=RegexLexerMeta): - """ - Base for simple stateful regular expression-based lexers. - Simplifies the lexing process so that you need only - provide a list of states and regular expressions. - """ - - #: Flags for compiling the regular expressions. - #: Defaults to MULTILINE. - flags = re.MULTILINE - - #: At all time there is a stack of states. Initially, the stack contains - #: a single state 'root'. The top of the stack is called "the current state". - #: - #: Dict of ``{'state': [(regex, tokentype, new_state), ...], ...}`` - #: - #: ``new_state`` can be omitted to signify no state transition. - #: If ``new_state`` is a string, it is pushed on the stack. This ensure - #: the new current state is ``new_state``. - #: If ``new_state`` is a tuple of strings, all of those strings are pushed - #: on the stack and the current state will be the last element of the list. - #: ``new_state`` can also be ``combined('state1', 'state2', ...)`` - #: to signify a new, anonymous state combined from the rules of two - #: or more existing ones. - #: Furthermore, it can be '#pop' to signify going back one step in - #: the state stack, or '#push' to push the current state on the stack - #: again. Note that if you push while in a combined state, the combined - #: state itself is pushed, and not only the state in which the rule is - #: defined. - #: - #: The tuple can also be replaced with ``include('state')``, in which - #: case the rules from the state named by the string are included in the - #: current one. - tokens = {} - - def get_tokens_unprocessed(self, text, stack=('root',)): - """ - Split ``text`` into (tokentype, text) pairs. - - ``stack`` is the initial stack (default: ``['root']``) - """ - pos = 0 - tokendefs = self._tokens - statestack = list(stack) - statetokens = tokendefs[statestack[-1]] - while 1: - for rexmatch, action, new_state in statetokens: - m = rexmatch(text, pos) - if m: - if action is not None: - if type(action) is _TokenType: - yield pos, action, m.group() - else: - yield from action(self, m) - pos = m.end() - if new_state is not None: - # state transition - if isinstance(new_state, tuple): - for state in new_state: - if state == '#pop': - if len(statestack) > 1: - statestack.pop() - elif state == '#push': - statestack.append(statestack[-1]) - else: - statestack.append(state) - elif isinstance(new_state, int): - # pop, but keep at least one state on the stack - # (random code leading to unexpected pops should - # not allow exceptions) - if abs(new_state) >= len(statestack): - del statestack[1:] - else: - del statestack[new_state:] - elif new_state == '#push': - statestack.append(statestack[-1]) - else: - assert False, "wrong state def: %r" % new_state - statetokens = tokendefs[statestack[-1]] - break - else: - # We are here only if all state tokens have been considered - # and there was not a match on any of them. - try: - if text[pos] == '\n': - # at EOL, reset state to "root" - statestack = ['root'] - statetokens = tokendefs['root'] - yield pos, Text, '\n' - pos += 1 - continue - yield pos, Error, text[pos] - pos += 1 - except IndexError: - break - - -class LexerContext: - """ - A helper object that holds lexer position data. - """ - - def __init__(self, text, pos, stack=None, end=None): - self.text = text - self.pos = pos - self.end = end or len(text) # end=0 not supported ;-) - self.stack = stack or ['root'] - - def __repr__(self): - return 'LexerContext(%r, %r, %r)' % ( - self.text, self.pos, self.stack) - - -class ExtendedRegexLexer(RegexLexer): - """ - A RegexLexer that uses a context object to store its state. - """ - - def get_tokens_unprocessed(self, text=None, context=None): - """ - Split ``text`` into (tokentype, text) pairs. - If ``context`` is given, use this lexer context instead. - """ - tokendefs = self._tokens - if not context: - ctx = LexerContext(text, 0) - statetokens = tokendefs['root'] - else: - ctx = context - statetokens = tokendefs[ctx.stack[-1]] - text = ctx.text - while 1: - for rexmatch, action, new_state in statetokens: - m = rexmatch(text, ctx.pos, ctx.end) - if m: - if action is not None: - if type(action) is _TokenType: - yield ctx.pos, action, m.group() - ctx.pos = m.end() - else: - yield from action(self, m, ctx) - if not new_state: - # altered the state stack? - statetokens = tokendefs[ctx.stack[-1]] - # CAUTION: callback must set ctx.pos! - if new_state is not None: - # state transition - if isinstance(new_state, tuple): - for state in new_state: - if state == '#pop': - if len(ctx.stack) > 1: - ctx.stack.pop() - elif state == '#push': - ctx.stack.append(ctx.stack[-1]) - else: - ctx.stack.append(state) - elif isinstance(new_state, int): - # see RegexLexer for why this check is made - if abs(new_state) >= len(ctx.stack): - del ctx.stack[1:] - else: - del ctx.stack[new_state:] - elif new_state == '#push': - ctx.stack.append(ctx.stack[-1]) - else: - assert False, "wrong state def: %r" % new_state - statetokens = tokendefs[ctx.stack[-1]] - break - else: - try: - if ctx.pos >= ctx.end: - break - if text[ctx.pos] == '\n': - # at EOL, reset state to "root" - ctx.stack = ['root'] - statetokens = tokendefs['root'] - yield ctx.pos, Text, '\n' - ctx.pos += 1 - continue - yield ctx.pos, Error, text[ctx.pos] - ctx.pos += 1 - except IndexError: - break - - -def do_insertions(insertions, tokens): - """ - Helper for lexers which must combine the results of several - sublexers. - - ``insertions`` is a list of ``(index, itokens)`` pairs. - Each ``itokens`` iterable should be inserted at position - ``index`` into the token stream given by the ``tokens`` - argument. - - The result is a combined token stream. - - TODO: clean up the code here. - """ - insertions = iter(insertions) - try: - index, itokens = next(insertions) - except StopIteration: - # no insertions - yield from tokens - return - - realpos = None - insleft = True - - # iterate over the token stream where we want to insert - # the tokens from the insertion list. - for i, t, v in tokens: - # first iteration. store the position of first item - if realpos is None: - realpos = i - oldi = 0 - while insleft and i + len(v) >= index: - tmpval = v[oldi:index - i] - if tmpval: - yield realpos, t, tmpval - realpos += len(tmpval) - for it_index, it_token, it_value in itokens: - yield realpos, it_token, it_value - realpos += len(it_value) - oldi = index - i - try: - index, itokens = next(insertions) - except StopIteration: - insleft = False - break # not strictly necessary - if oldi < len(v): - yield realpos, t, v[oldi:] - realpos += len(v) - oldi - - # leftover tokens - while insleft: - # no normal tokens, set realpos to zero - realpos = realpos or 0 - for p, t, v in itokens: - yield realpos, t, v - realpos += len(v) - try: - index, itokens = next(insertions) - except StopIteration: - insleft = False - break # not strictly necessary - - -class ProfilingRegexLexerMeta(RegexLexerMeta): - """Metaclass for ProfilingRegexLexer, collects regex timing info.""" - - def _process_regex(cls, regex, rflags, state): - if isinstance(regex, words): - rex = regex_opt(regex.words, prefix=regex.prefix, - suffix=regex.suffix) - else: - rex = regex - compiled = re.compile(rex, rflags) - - def match_func(text, pos, endpos=sys.maxsize): - info = cls._prof_data[-1].setdefault((state, rex), [0, 0.0]) - t0 = time.time() - res = compiled.match(text, pos, endpos) - t1 = time.time() - info[0] += 1 - info[1] += t1 - t0 - return res - return match_func - - -class ProfilingRegexLexer(RegexLexer, metaclass=ProfilingRegexLexerMeta): - """Drop-in replacement for RegexLexer that does profiling of its regexes.""" - - _prof_data = [] - _prof_sort_index = 4 # defaults to time per call - - def get_tokens_unprocessed(self, text, stack=('root',)): - # this needs to be a stack, since using(this) will produce nested calls - self.__class__._prof_data.append({}) - yield from RegexLexer.get_tokens_unprocessed(self, text, stack) - rawdata = self.__class__._prof_data.pop() - data = sorted(((s, repr(r).strip('u\'').replace('\\\\', '\\')[:65], - n, 1000 * t, 1000 * t / n) - for ((s, r), (n, t)) in rawdata.items()), - key=lambda x: x[self._prof_sort_index], - reverse=True) - sum_total = sum(x[3] for x in data) - - print() - print('Profiling result for %s lexing %d chars in %.3f ms' % - (self.__class__.__name__, len(text), sum_total)) - print('=' * 110) - print('%-20s %-64s ncalls tottime percall' % ('state', 'regex')) - print('-' * 110) - for d in data: - print('%-20s %-65s %5d %8.4f %8.4f' % d) - print('=' * 110) diff --git a/spaces/tomaseo2022/Traductor-Voz-de-Video/__init__.py b/spaces/tomaseo2022/Traductor-Voz-de-Video/__init__.py deleted file mode 100644 index fae7a0d6924f0841507c15a37000fc22a370697d..0000000000000000000000000000000000000000 --- a/spaces/tomaseo2022/Traductor-Voz-de-Video/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -"""Free Google Translate API for Python. Translates totally free of charge.""" -__all__ = 'Translator', -__version__ = '3.0.0' - - -import client -from client import Translator -import constants -from constants import LANGCODES, LANGUAGES # noqa diff --git a/spaces/tomofi/MMOCR/demo/README.md b/spaces/tomofi/MMOCR/demo/README.md deleted file mode 100644 index 321a8dc5c58eaaa5356cc171c75e9feda35e116f..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/demo/README.md +++ /dev/null @@ -1,251 +0,0 @@ -# Demo - -We provide an easy-to-use API for the demo and application purpose in [ocr.py](https://github.com/open-mmlab/mmocr/blob/main/mmocr/utils/ocr.py) script. - -The API can be called through command line (CL) or by calling it from another python script. - ---- - -## Example 1: Text Detection - -
                -
                -
                -
                - -**Instruction:** Perform detection inference on an image with the TextSnake recognition model, export the result in a json file (default) and save the visualization file. - -- CL interface: - -```shell -python mmocr/utils/ocr.py demo/demo_text_det.jpg --output demo/det_out.jpg --det TextSnake --recog None --export demo/ -``` - -- Python interface: - -```python -from mmocr.utils.ocr import MMOCR - -# Load models into memory -ocr = MMOCR(det='TextSnake', recog=None) - -# Inference -results = ocr.readtext('demo/demo_text_det.jpg', output='demo/det_out.jpg', export='demo/') -``` - -## Example 2: Text Recognition - -
                -
                -
                -
                - -**Instruction:** Perform batched recognition inference on a folder with hundreds of image with the CRNN_TPS recognition model and save the visualization results in another folder. -*Batch size is set to 10 to prevent out of memory CUDA runtime errors.* - -- CL interface: - -```shell -python mmocr/utils/ocr.py %INPUT_FOLDER_PATH% --det None --recog CRNN_TPS --batch-mode --single-batch-size 10 --output %OUPUT_FOLDER_PATH% -``` - -- Python interface: - -```python -from mmocr.utils.ocr import MMOCR - -# Load models into memory -ocr = MMOCR(det=None, recog='CRNN_TPS') - -# Inference -results = ocr.readtext(%INPUT_FOLDER_PATH%, output = %OUTPUT_FOLDER_PATH%, batch_mode=True, single_batch_size = 10) -``` - -## Example 3: Text Detection + Recognition - -
                -
                -
                -
                - -**Instruction:** Perform ocr (det + recog) inference on the demo/demo_text_det.jpg image with the PANet_IC15 (default) detection model and SAR (default) recognition model, print the result in the terminal and show the visualization. - -- CL interface: - -```shell -python mmocr/utils/ocr.py demo/demo_text_ocr.jpg --print-result --imshow -``` - -:::{note} - -When calling the script from the command line, the script assumes configs are saved in the `configs/` folder. User can customize the directory by specifying the value of `config_dir`. - -::: - -- Python interface: - -```python -from mmocr.utils.ocr import MMOCR - -# Load models into memory -ocr = MMOCR() - -# Inference -results = ocr.readtext('demo/demo_text_ocr.jpg', print_result=True, imshow=True) -``` - ---- - -## Example 4: Text Detection + Recognition + Key Information Extraction - -
                -
                -
                -
                - -**Instruction:** Perform end-to-end ocr (det + recog) inference first with PS_CTW detection model and SAR recognition model, then run KIE inference with SDMGR model on the ocr result and show the visualization. - -- CL interface: - -```shell -python mmocr/utils/ocr.py demo/demo_kie.jpeg --det PS_CTW --recog SAR --kie SDMGR --print-result --imshow -``` - -:::{note} - -Note: When calling the script from the command line, the script assumes configs are saved in the `configs/` folder. User can customize the directory by specifying the value of `config_dir`. - -::: - -- Python interface: - -```python -from mmocr.utils.ocr import MMOCR - -# Load models into memory -ocr = MMOCR(det='PS_CTW', recog='SAR', kie='SDMGR') - -# Inference -results = ocr.readtext('demo/demo_kie.jpeg', print_result=True, imshow=True) -``` - ---- - -## API Arguments - -The API has an extensive list of arguments that you can use. The following tables are for the python interface. - -**MMOCR():** - -| Arguments | Type | Default | Description | -| -------------- | --------------------- | ------------- | ----------------------------------------------------------- | -| `det` | see [models](#models) | PANet_IC15 | Text detection algorithm | -| `recog` | see [models](#models) | SAR | Text recognition algorithm | -| `kie` [1] | see [models](#models) | None | Key information extraction algorithm | -| `config_dir` | str | configs/ | Path to the config directory where all the config files are located | -| `det_config` | str | None | Path to the custom config file of the selected det model | -| `det_ckpt` | str | None | Path to the custom checkpoint file of the selected det model | -| `recog_config` | str | None | Path to the custom config file of the selected recog model | -| `recog_ckpt` | str | None | Path to the custom checkpoint file of the selected recog model | -| `kie_config` | str | None | Path to the custom config file of the selected kie model | -| `kie_ckpt` | str | None | Path to the custom checkpoint file of the selected kie model | -| `device` | str | None | Device used for inference, accepting all allowed strings by `torch.device`. E.g., 'cuda:0' or 'cpu'. | - -[1]: `kie` is only effective when both text detection and recognition models are specified. - -:::{note} - -User can use default pretrained models by specifying `det` and/or `recog`, which is equivalent to specifying their corresponding `*_config` and `*_ckpt`. However, manually specifying `*_config` and `*_ckpt` will always override values set by `det` and/or `recog`. Similar rules also apply to `kie`, `kie_config` and `kie_ckpt`. - -::: - -### readtext() - -| Arguments | Type | Default | Description | -| ------------------- | ----------------------- | ------------ | ---------------------------------------------------------------------- | -| `img` | str/list/tuple/np.array | **required** | img, folder path, np array or list/tuple (with img paths or np arrays) | -| `output` | str | None | Output result visualization - img path or folder path | -| `batch_mode` | bool | False | Whether use batch mode for inference [1] | -| `det_batch_size` | int | 0 | Batch size for text detection (0 for max size) | -| `recog_batch_size` | int | 0 | Batch size for text recognition (0 for max size) | -| `single_batch_size` | int | 0 | Batch size for only detection or recognition | -| `export` | str | None | Folder where the results of each image are exported | -| `export_format` | str | json | Format of the exported result file(s) | -| `details` | bool | False | Whether include the text boxes coordinates and confidence values | -| `imshow` | bool | False | Whether to show the result visualization on screen | -| `print_result` | bool | False | Whether to show the result for each image | -| `merge` | bool | False | Whether to merge neighboring boxes [2] | -| `merge_xdist` | float | 20 | The maximum x-axis distance to merge boxes | - -[1]: Make sure that the model is compatible with batch mode. - -[2]: Only effective when the script is running in det + recog mode. - -All arguments are the same for the cli, all you need to do is add 2 hyphens at the beginning of the argument and replace underscores by hyphens. -(*Example:* `det_batch_size` becomes `--det-batch-size`) - -For bool type arguments, putting the argument in the command stores it as true. -(*Example:* `python mmocr/utils/ocr.py demo/demo_text_det.jpg --batch_mode --print_result` -means that `batch_mode` and `print_result` are set to `True`) - ---- - -## Models - -**Text detection:** - -| Name | Reference | `batch_mode` inference support | -| ------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------: | :------------------: | -| DB_r18 | [link](https://mmocr.readthedocs.io/en/latest/textdet_models.html#real-time-scene-text-detection-with-differentiable-binarization) | :x: | -| DB_r50 | [link](https://mmocr.readthedocs.io/en/latest/textdet_models.html#real-time-scene-text-detection-with-differentiable-binarization) | :x: | -| DRRG | [link](https://mmocr.readthedocs.io/en/latest/textdet_models.html#drrg) | :x: | -| FCE_IC15 | [link](https://mmocr.readthedocs.io/en/latest/textdet_models.html#fourier-contour-embedding-for-arbitrary-shaped-text-detection) | :x: | -| FCE_CTW_DCNv2 | [link](https://mmocr.readthedocs.io/en/latest/textdet_models.html#fourier-contour-embedding-for-arbitrary-shaped-text-detection) | :x: | -| MaskRCNN_CTW | [link](https://mmocr.readthedocs.io/en/latest/textdet_models.html#mask-r-cnn) | :x: | -| MaskRCNN_IC15 | [link](https://mmocr.readthedocs.io/en/latest/textdet_models.html#mask-r-cnn) | :x: | -| MaskRCNN_IC17 | [link](https://mmocr.readthedocs.io/en/latest/textdet_models.html#mask-r-cnn) | :x: | -| PANet_CTW | [link](https://mmocr.readthedocs.io/en/latest/textdet_models.html#efficient-and-accurate-arbitrary-shaped-text-detection-with-pixel-aggregation-network) | :heavy_check_mark: | -| PANet_IC15 | [link](https://mmocr.readthedocs.io/en/latest/textdet_models.html#efficient-and-accurate-arbitrary-shaped-text-detection-with-pixel-aggregation-network) | :heavy_check_mark: | -| PS_CTW | [link](https://mmocr.readthedocs.io/en/latest/textdet_models.html#psenet) | :x: | -| PS_IC15 | [link](https://mmocr.readthedocs.io/en/latest/textdet_models.html#psenet) | :x: | -| TextSnake | [link](https://mmocr.readthedocs.io/en/latest/textdet_models.html#textsnake) | :heavy_check_mark: | - -**Text recognition:** - -| Name | Reference | `batch_mode` inference support | -| ------------- | :--------------------------------------------------------------------------------------------------------------------------------: | :------------------: | -| ABINet | [link](https://mmocr.readthedocs.io/en/latest/textrecog_models.html#read-like-humans-autonomous-bidirectional-and-iterative-language-modeling-for-scene-text-recognition) | :heavy_check_mark: | -| CRNN | [link](https://mmocr.readthedocs.io/en/latest/textrecog_models.html#an-end-to-end-trainable-neural-network-for-image-based-sequence-recognition-and-its-application-to-scene-text-recognition) | :x: | -| SAR | [link](https://mmocr.readthedocs.io/en/latest/textrecog_models.html#show-attend-and-read-a-simple-and-strong-baseline-for-irregular-text-recognition) | :heavy_check_mark: | -| SAR_CN * | [link](https://mmocr.readthedocs.io/en/latest/textrecog_models.html#show-attend-and-read-a-simple-and-strong-baseline-for-irregular-text-recognition) | :heavy_check_mark: | -| NRTR_1/16-1/8 | [link](https://mmocr.readthedocs.io/en/latest/textrecog_models.html#nrtr) | :heavy_check_mark: | -| NRTR_1/8-1/4 | [link](https://mmocr.readthedocs.io/en/latest/textrecog_models.html#nrtr) | :heavy_check_mark: | -| RobustScanner | [link](https://mmocr.readthedocs.io/en/latest/textrecog_models.html#robustscanner-dynamically-enhancing-positional-clues-for-robust-text-recognition) | :heavy_check_mark: | -| SATRN | [link](https://mmocr.readthedocs.io/en/latest/textrecog_models.html#satrn) | :heavy_check_mark: | -| SATRN_sm | [link](https://mmocr.readthedocs.io/en/latest/textrecog_models.html#satrn) | :heavy_check_mark: | -| SEG | [link](https://mmocr.readthedocs.io/en/latest/textrecog_models.html#segocr-simple-baseline) | :x: | -| CRNN_TPS | [link](https://mmocr.readthedocs.io/en/latest/textrecog_models.html#crnn-with-tps-based-stn) | :heavy_check_mark: | - -:::{warning} - -SAR_CN is the only model that supports Chinese character recognition and it requires -a Chinese dictionary. Please download the dictionary from [here](https://mmocr.readthedocs.io/en/latest/textrecog_models.html#chinese-dataset) for a successful run. - -::: - -**Key information extraction:** - -| Name | Reference | `batch_mode` support | -| ------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------: | :------------------: | -| SDMGR | [link](https://mmocr.readthedocs.io/en/latest/kie_models.html#spatial-dual-modality-graph-reasoning-for-key-information-extraction) | :heavy_check_mark: | ---- - -## Additional info - -- To perform det + recog inference (end2end ocr), both the `det` and `recog` arguments must be defined. -- To perform only detection set the `recog` argument to `None`. -- To perform only recognition set the `det` argument to `None`. -- `details` argument only works with end2end ocr. -- `det_batch_size` and `recog_batch_size` arguments define the number of images you want to forward to the model at the same time. For maximum speed, set this to the highest number you can. The max batch size is limited by the model complexity and the GPU VRAM size. - -If you have any suggestions for new features, feel free to open a thread or even PR :) diff --git a/spaces/tomofi/MMOCR/mmocr/models/textrecog/layers/robust_scanner_fusion_layer.py b/spaces/tomofi/MMOCR/mmocr/models/textrecog/layers/robust_scanner_fusion_layer.py deleted file mode 100644 index af2568743874d4c6b9a8e804485a0665f6d29c2d..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/mmocr/models/textrecog/layers/robust_scanner_fusion_layer.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.runner import BaseModule - - -class RobustScannerFusionLayer(BaseModule): - - def __init__(self, dim_model, dim=-1, init_cfg=None): - super().__init__(init_cfg=init_cfg) - - self.dim_model = dim_model - self.dim = dim - - self.linear_layer = nn.Linear(dim_model * 2, dim_model * 2) - self.glu_layer = nn.GLU(dim=dim) - - def forward(self, x0, x1): - assert x0.size() == x1.size() - fusion_input = torch.cat([x0, x1], self.dim) - output = self.linear_layer(fusion_input) - output = self.glu_layer(output) - - return output diff --git a/spaces/tomofi/MMOCR/tests/test_dataset/test_uniform_concat_dataset.py b/spaces/tomofi/MMOCR/tests/test_dataset/test_uniform_concat_dataset.py deleted file mode 100644 index 0b0acb34f11d5fad76be0a6fdf88b2f4def22097..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/tests/test_dataset/test_uniform_concat_dataset.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy - -from mmocr.datasets import UniformConcatDataset -from mmocr.utils import list_from_file - - -def test_dataset_warpper(): - pipeline1 = [dict(type='LoadImageFromFile')] - pipeline2 = [dict(type='LoadImageFromFile'), dict(type='ColorJitter')] - - img_prefix = 'tests/data/ocr_toy_dataset/imgs' - ann_file = 'tests/data/ocr_toy_dataset/label.txt' - train1 = dict( - type='OCRDataset', - img_prefix=img_prefix, - ann_file=ann_file, - loader=dict( - type='HardDiskLoader', - repeat=1, - parser=dict( - type='LineStrParser', - keys=['filename', 'text'], - keys_idx=[0, 1], - separator=' ')), - pipeline=None, - test_mode=False) - - train2 = {key: value for key, value in train1.items()} - train2['pipeline'] = pipeline2 - - # pipeline is 1d list - copy_train1 = copy.deepcopy(train1) - copy_train2 = copy.deepcopy(train2) - tmp_dataset = UniformConcatDataset( - datasets=[copy_train1, copy_train2], - pipeline=pipeline1, - force_apply=True) - - assert len(tmp_dataset) == 2 * len(list_from_file(ann_file)) - assert len(tmp_dataset.datasets[0].pipeline.transforms) == len( - tmp_dataset.datasets[1].pipeline.transforms) - - # pipeline is None - copy_train2 = copy.deepcopy(train2) - tmp_dataset = UniformConcatDataset(datasets=[copy_train2], pipeline=None) - assert len(tmp_dataset.datasets[0].pipeline.transforms) == len(pipeline2) - - copy_train2 = copy.deepcopy(train2) - tmp_dataset = UniformConcatDataset( - datasets=[[copy_train2], [copy_train2]], pipeline=None) - assert len(tmp_dataset.datasets[0].pipeline.transforms) == len(pipeline2) - - # pipeline is 2d list - copy_train1 = copy.deepcopy(train1) - copy_train2 = copy.deepcopy(train2) - tmp_dataset = UniformConcatDataset( - datasets=[[copy_train1], [copy_train2]], - pipeline=[pipeline1, pipeline2]) - assert len(tmp_dataset.datasets[0].pipeline.transforms) == len(pipeline1) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/yolact/yolact_r50_1x8_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/yolact/yolact_r50_1x8_coco.py deleted file mode 100644 index d0e5ace280e1377ce4bb772df7e132427143bf34..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/yolact/yolact_r50_1x8_coco.py +++ /dev/null @@ -1,160 +0,0 @@ -_base_ = '../_base_/default_runtime.py' - -# model settings -img_size = 550 -model = dict( - type='YOLACT', - pretrained='torchvision://resnet50', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, # do not freeze stem - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=False, # update the statistics of bn - zero_init_residual=False, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - start_level=1, - add_extra_convs='on_input', - num_outs=5, - upsample_cfg=dict(mode='bilinear')), - bbox_head=dict( - type='YOLACTHead', - num_classes=80, - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=3, - scales_per_octave=1, - base_sizes=[8, 16, 32, 64, 128], - ratios=[0.5, 1.0, 2.0], - strides=[550.0 / x for x in [69, 35, 18, 9, 5]], - centers=[(550 * 0.5 / x, 550 * 0.5 / x) - for x in [69, 35, 18, 9, 5]]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[0.1, 0.1, 0.2, 0.2]), - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - reduction='none', - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.5), - num_head_convs=1, - num_protos=32, - use_ohem=True), - mask_head=dict( - type='YOLACTProtonet', - in_channels=256, - num_protos=32, - num_classes=80, - max_masks_to_train=100, - loss_mask_weight=6.125), - segm_head=dict( - type='YOLACTSegmHead', - num_classes=80, - in_channels=256, - loss_segm=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0)), - # training and testing settings - train_cfg=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.4, - min_pos_iou=0., - ignore_iof_thr=-1, - gt_max_assign_all=False), - # smoothl1_beta=1., - allowed_border=-1, - pos_weight=-1, - neg_pos_ratio=3, - debug=False), - test_cfg=dict( - nms_pre=1000, - min_bbox_size=0, - score_thr=0.05, - iou_thr=0.5, - top_k=200, - max_per_img=100)) -# dataset settings -dataset_type = 'CocoDataset' -data_root = 'data/coco/' -img_norm_cfg = dict( - mean=[123.68, 116.78, 103.94], std=[58.40, 57.12, 57.38], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile', to_float32=True), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='FilterAnnotations', min_gt_bbox_wh=(4.0, 4.0)), - dict( - type='PhotoMetricDistortion', - brightness_delta=32, - contrast_range=(0.5, 1.5), - saturation_range=(0.5, 1.5), - hue_delta=18), - dict( - type='Expand', - mean=img_norm_cfg['mean'], - to_rgb=img_norm_cfg['to_rgb'], - ratio_range=(1, 4)), - dict( - type='MinIoURandomCrop', - min_ious=(0.1, 0.3, 0.5, 0.7, 0.9), - min_crop_size=0.3), - dict(type='Resize', img_scale=(img_size, img_size), keep_ratio=False), - dict(type='Normalize', **img_norm_cfg), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(img_size, img_size), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=False), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - train=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_train2017.json', - img_prefix=data_root + 'train2017/', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_val2017.json', - img_prefix=data_root + 'val2017/', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_val2017.json', - img_prefix=data_root + 'val2017/', - pipeline=test_pipeline)) -# optimizer -optimizer = dict(type='SGD', lr=1e-3, momentum=0.9, weight_decay=5e-4) -optimizer_config = dict() -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=0.1, - step=[20, 42, 49, 52]) -runner = dict(type='EpochBasedRunner', max_epochs=55) -cudnn_benchmark = True -evaluation = dict(metric=['bbox', 'segm']) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/bbox/coder/pseudo_bbox_coder.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/bbox/coder/pseudo_bbox_coder.py deleted file mode 100644 index 1c8346f4ae2c7db9719a70c7dc0244e088a9965b..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/bbox/coder/pseudo_bbox_coder.py +++ /dev/null @@ -1,18 +0,0 @@ -from ..builder import BBOX_CODERS -from .base_bbox_coder import BaseBBoxCoder - - -@BBOX_CODERS.register_module() -class PseudoBBoxCoder(BaseBBoxCoder): - """Pseudo bounding box coder.""" - - def __init__(self, **kwargs): - super(BaseBBoxCoder, self).__init__(**kwargs) - - def encode(self, bboxes, gt_bboxes): - """torch.Tensor: return the given ``bboxes``""" - return gt_bboxes - - def decode(self, bboxes, pred_bboxes): - """torch.Tensor: return the given ``pred_bboxes``""" - return pred_bboxes diff --git a/spaces/tornadoslims/instruct-pix2pix/stable_diffusion/scripts/train_searcher.py b/spaces/tornadoslims/instruct-pix2pix/stable_diffusion/scripts/train_searcher.py deleted file mode 100644 index 1e7904889c0145f9fb740fd4ae8e45c08728b255..0000000000000000000000000000000000000000 --- a/spaces/tornadoslims/instruct-pix2pix/stable_diffusion/scripts/train_searcher.py +++ /dev/null @@ -1,147 +0,0 @@ -import os, sys -import numpy as np -import scann -import argparse -import glob -from multiprocessing import cpu_count -from tqdm import tqdm - -from ldm.util import parallel_data_prefetch - - -def search_bruteforce(searcher): - return searcher.score_brute_force().build() - - -def search_partioned_ah(searcher, dims_per_block, aiq_threshold, reorder_k, - partioning_trainsize, num_leaves, num_leaves_to_search): - return searcher.tree(num_leaves=num_leaves, - num_leaves_to_search=num_leaves_to_search, - training_sample_size=partioning_trainsize). \ - score_ah(dims_per_block, anisotropic_quantization_threshold=aiq_threshold).reorder(reorder_k).build() - - -def search_ah(searcher, dims_per_block, aiq_threshold, reorder_k): - return searcher.score_ah(dims_per_block, anisotropic_quantization_threshold=aiq_threshold).reorder( - reorder_k).build() - -def load_datapool(dpath): - - - def load_single_file(saved_embeddings): - compressed = np.load(saved_embeddings) - database = {key: compressed[key] for key in compressed.files} - return database - - def load_multi_files(data_archive): - database = {key: [] for key in data_archive[0].files} - for d in tqdm(data_archive, desc=f'Loading datapool from {len(data_archive)} individual files.'): - for key in d.files: - database[key].append(d[key]) - - return database - - print(f'Load saved patch embedding from "{dpath}"') - file_content = glob.glob(os.path.join(dpath, '*.npz')) - - if len(file_content) == 1: - data_pool = load_single_file(file_content[0]) - elif len(file_content) > 1: - data = [np.load(f) for f in file_content] - prefetched_data = parallel_data_prefetch(load_multi_files, data, - n_proc=min(len(data), cpu_count()), target_data_type='dict') - - data_pool = {key: np.concatenate([od[key] for od in prefetched_data], axis=1)[0] for key in prefetched_data[0].keys()} - else: - raise ValueError(f'No npz-files in specified path "{dpath}" is this directory existing?') - - print(f'Finished loading of retrieval database of length {data_pool["embedding"].shape[0]}.') - return data_pool - - -def train_searcher(opt, - metric='dot_product', - partioning_trainsize=None, - reorder_k=None, - # todo tune - aiq_thld=0.2, - dims_per_block=2, - num_leaves=None, - num_leaves_to_search=None,): - - data_pool = load_datapool(opt.database) - k = opt.knn - - if not reorder_k: - reorder_k = 2 * k - - # normalize - # embeddings = - searcher = scann.scann_ops_pybind.builder(data_pool['embedding'] / np.linalg.norm(data_pool['embedding'], axis=1)[:, np.newaxis], k, metric) - pool_size = data_pool['embedding'].shape[0] - - print(*(['#'] * 100)) - print('Initializing scaNN searcher with the following values:') - print(f'k: {k}') - print(f'metric: {metric}') - print(f'reorder_k: {reorder_k}') - print(f'anisotropic_quantization_threshold: {aiq_thld}') - print(f'dims_per_block: {dims_per_block}') - print(*(['#'] * 100)) - print('Start training searcher....') - print(f'N samples in pool is {pool_size}') - - # this reflects the recommended design choices proposed at - # https://github.com/google-research/google-research/blob/aca5f2e44e301af172590bb8e65711f0c9ee0cfd/scann/docs/algorithms.md - if pool_size < 2e4: - print('Using brute force search.') - searcher = search_bruteforce(searcher) - elif 2e4 <= pool_size and pool_size < 1e5: - print('Using asymmetric hashing search and reordering.') - searcher = search_ah(searcher, dims_per_block, aiq_thld, reorder_k) - else: - print('Using using partioning, asymmetric hashing search and reordering.') - - if not partioning_trainsize: - partioning_trainsize = data_pool['embedding'].shape[0] // 10 - if not num_leaves: - num_leaves = int(np.sqrt(pool_size)) - - if not num_leaves_to_search: - num_leaves_to_search = max(num_leaves // 20, 1) - - print('Partitioning params:') - print(f'num_leaves: {num_leaves}') - print(f'num_leaves_to_search: {num_leaves_to_search}') - # self.searcher = self.search_ah(searcher, dims_per_block, aiq_thld, reorder_k) - searcher = search_partioned_ah(searcher, dims_per_block, aiq_thld, reorder_k, - partioning_trainsize, num_leaves, num_leaves_to_search) - - print('Finish training searcher') - searcher_savedir = opt.target_path - os.makedirs(searcher_savedir, exist_ok=True) - searcher.serialize(searcher_savedir) - print(f'Saved trained searcher under "{searcher_savedir}"') - -if __name__ == '__main__': - sys.path.append(os.getcwd()) - parser = argparse.ArgumentParser() - parser.add_argument('--database', - '-d', - default='data/rdm/retrieval_databases/openimages', - type=str, - help='path to folder containing the clip feature of the database') - parser.add_argument('--target_path', - '-t', - default='data/rdm/searchers/openimages', - type=str, - help='path to the target folder where the searcher shall be stored.') - parser.add_argument('--knn', - '-k', - default=20, - type=int, - help='number of nearest neighbors, for which the searcher shall be optimized') - - opt, _ = parser.parse_known_args() - - train_searcher(opt,) \ No newline at end of file diff --git a/spaces/trysem/image-matting-app/ppmatting/ml/methods.py b/spaces/trysem/image-matting-app/ppmatting/ml/methods.py deleted file mode 100644 index 61d5fea2475552c14d29fe44fd08cf436e55bdbd..0000000000000000000000000000000000000000 --- a/spaces/trysem/image-matting-app/ppmatting/ml/methods.py +++ /dev/null @@ -1,97 +0,0 @@ -# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import pymatting -from paddleseg.cvlibs import manager - - -class BaseMLMatting(object): - def __init__(self, alpha_estimator, **kargs): - self.alpha_estimator = alpha_estimator - self.kargs = kargs - - def __call__(self, image, trimap): - image = self.__to_float64(image) - trimap = self.__to_float64(trimap) - alpha_matte = self.alpha_estimator(image, trimap, **self.kargs) - return alpha_matte - - def __to_float64(self, x): - x_dtype = x.dtype - assert x_dtype in ["float32", "float64"] - x = x.astype("float64") - return x - - -@manager.MODELS.add_component -class CloseFormMatting(BaseMLMatting): - def __init__(self, **kargs): - cf_alpha_estimator = pymatting.estimate_alpha_cf - super().__init__(cf_alpha_estimator, **kargs) - - -@manager.MODELS.add_component -class KNNMatting(BaseMLMatting): - def __init__(self, **kargs): - knn_alpha_estimator = pymatting.estimate_alpha_knn - super().__init__(knn_alpha_estimator, **kargs) - - -@manager.MODELS.add_component -class LearningBasedMatting(BaseMLMatting): - def __init__(self, **kargs): - lbdm_alpha_estimator = pymatting.estimate_alpha_lbdm - super().__init__(lbdm_alpha_estimator, **kargs) - - -@manager.MODELS.add_component -class FastMatting(BaseMLMatting): - def __init__(self, **kargs): - lkm_alpha_estimator = pymatting.estimate_alpha_lkm - super().__init__(lkm_alpha_estimator, **kargs) - - -@manager.MODELS.add_component -class RandomWalksMatting(BaseMLMatting): - def __init__(self, **kargs): - rw_alpha_estimator = pymatting.estimate_alpha_rw - super().__init__(rw_alpha_estimator, **kargs) - - -if __name__ == "__main__": - from pymatting.util.util import load_image, save_image, stack_images - from pymatting.foreground.estimate_foreground_ml import estimate_foreground_ml - import cv2 - - root = "/mnt/liuyi22/PaddlePaddle/PaddleSeg/Matting/data/examples/" - image_path = root + "lemur.png" - trimap_path = root + "lemur_trimap.png" - cutout_path = root + "lemur_cutout.png" - image = cv2.cvtColor( - cv2.imread(image_path).astype("float64"), cv2.COLOR_BGR2RGB) / 255.0 - - cv2.imwrite("image.png", (image * 255).astype('uint8')) - trimap = load_image(trimap_path, "GRAY") - print(image.shape, trimap.shape) - print(image.dtype, trimap.dtype) - cf = CloseFormMatting() - alpha = cf(image, trimap) - - # alpha = pymatting.estimate_alpha_lkm(image, trimap) - - foreground = estimate_foreground_ml(image, alpha) - - cutout = stack_images(foreground, alpha) - - save_image(cutout_path, cutout) diff --git a/spaces/tsi-org/tts/README.md b/spaces/tsi-org/tts/README.md deleted file mode 100644 index 047e44165b7abf68b7e650fea8679961e5a0b377..0000000000000000000000000000000000000000 --- a/spaces/tsi-org/tts/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: ElevenLabs TTS -emoji: 🗣️ -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/tsinghua-ee/SALMONN-7B-gradio/beats/quantizer.py b/spaces/tsinghua-ee/SALMONN-7B-gradio/beats/quantizer.py deleted file mode 100644 index 704be4c357bce7ee425ea2b6737b536333a5a63c..0000000000000000000000000000000000000000 --- a/spaces/tsinghua-ee/SALMONN-7B-gradio/beats/quantizer.py +++ /dev/null @@ -1,215 +0,0 @@ -# -------------------------------------------------------- -# BEATs: Audio Pre-Training with Acoustic Tokenizers (https://arxiv.org/abs/2212.09058) -# Github source: https://github.com/microsoft/unilm/tree/master/beats -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] -# Based on VQGAN code bases -# https://github.com/CompVis/taming-transformers -# --------------------------------------------------------' - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.distributed as distributed - -try: - from einops import rearrange, repeat -except ImportError: - pass - - -def l2norm(t): - return F.normalize(t, p=2, dim=-1) - - -def ema_inplace(moving_avg, new, decay): - moving_avg.data.mul_(decay).add_(new, alpha=(1 - decay)) - - -def sample_vectors(samples, num): - num_samples, device = samples.shape[0], samples.device - - if num_samples >= num: - indices = torch.randperm(num_samples, device=device)[:num] - else: - indices = torch.randint(0, num_samples, (num,), device=device) - - return samples[indices] - - -def kmeans(samples, num_clusters, num_iters=10, use_cosine_sim=False): - dim, dtype, device = samples.shape[-1], samples.dtype, samples.device - - means = sample_vectors(samples, num_clusters) - - for _ in range(num_iters): - if use_cosine_sim: - dists = samples @ means.t() - else: - diffs = rearrange(samples, 'n d -> n () d') \ - - rearrange(means, 'c d -> () c d') - dists = -(diffs ** 2).sum(dim=-1) - - buckets = dists.max(dim=-1).indices - bins = torch.bincount(buckets, minlength=num_clusters) - zero_mask = bins == 0 - bins_min_clamped = bins.masked_fill(zero_mask, 1) - - new_means = buckets.new_zeros(num_clusters, dim, dtype=dtype) - new_means.scatter_add_(0, repeat(buckets, 'n -> n d', d=dim), samples) - new_means = new_means / bins_min_clamped[..., None] - - if use_cosine_sim: - new_means = l2norm(new_means) - - means = torch.where(zero_mask[..., None], means, new_means) - - return means, bins - - -class EmbeddingEMA(nn.Module): - def __init__(self, num_tokens, codebook_dim, decay=0.99, eps=1e-5, kmeans_init=True, codebook_init_path=''): - super().__init__() - self.num_tokens = num_tokens - self.codebook_dim = codebook_dim - self.decay = decay - self.eps = eps - if codebook_init_path == '': - if not kmeans_init: - weight = torch.randn(num_tokens, codebook_dim) - weight = l2norm(weight) - else: - weight = torch.zeros(num_tokens, codebook_dim) - self.register_buffer('initted', torch.Tensor([not kmeans_init])) - else: - print(f"load init codebook weight from {codebook_init_path}") - codebook_ckpt_weight = torch.load(codebook_init_path, map_location='cpu') - weight = codebook_ckpt_weight.clone() - self.register_buffer('initted', torch.Tensor([True])) - - self.weight = nn.Parameter(weight, requires_grad=False) - self.cluster_size = nn.Parameter(torch.zeros(num_tokens), requires_grad=False) - self.embed_avg = nn.Parameter(weight.clone(), requires_grad=False) - # self.register_buffer('initted', torch.Tensor([not kmeans_init])) - self.update = True - - @torch.jit.ignore - def init_embed_(self, data): - if self.initted: - return - print("Performing Kemans init for codebook") - embed, cluster_size = kmeans(data, self.num_tokens, 10, use_cosine_sim=True) - self.weight.data.copy_(embed) - self.cluster_size.data.copy_(cluster_size) - self.initted.data.copy_(torch.Tensor([True])) - - def forward(self, embed_id): - return F.embedding(embed_id, self.weight) - - def cluster_size_ema_update(self, new_cluster_size): - self.cluster_size.data.mul_(self.decay).add_(new_cluster_size, alpha=1 - self.decay) - - def embed_avg_ema_update(self, new_embed_avg): - self.embed_avg.data.mul_(self.decay).add_(new_embed_avg, alpha=1 - self.decay) - - def weight_update(self, num_tokens): - n = self.cluster_size.sum() - smoothed_cluster_size = ( - (self.cluster_size + self.eps) / (n + num_tokens * self.eps) * n - ) - # normalize embedding average with smoothed cluster size - embed_normalized = self.embed_avg / smoothed_cluster_size.unsqueeze(1) - # embed_normalized = l2norm(self.embed_avg / smoothed_cluster_size.unsqueeze(1)) - self.weight.data.copy_(embed_normalized) - - -def norm_ema_inplace(moving_avg, new, decay): - moving_avg.data.mul_(decay).add_(new, alpha=(1 - decay)) - moving_avg.data.copy_(l2norm(moving_avg.data)) - - -class NormEMAVectorQuantizer(nn.Module): - def __init__(self, n_embed, embedding_dim, beta, decay=0.99, eps=1e-5, - statistic_code_usage=True, kmeans_init=False, codebook_init_path=''): - super().__init__() - self.codebook_dim = embedding_dim - self.num_tokens = n_embed - self.beta = beta - self.decay = decay - - # learnable = True if orthogonal_reg_weight > 0 else False - self.embedding = EmbeddingEMA(self.num_tokens, self.codebook_dim, decay, eps, kmeans_init, codebook_init_path) - - self.statistic_code_usage = statistic_code_usage - if statistic_code_usage: - self.register_buffer('cluster_size', torch.zeros(n_embed)) - if distributed.is_available() and distributed.is_initialized(): - print("ddp is enable, so use ddp_reduce to sync the statistic_code_usage for each gpu!") - self.all_reduce_fn = distributed.all_reduce - else: - self.all_reduce_fn = nn.Identity() - - def reset_cluster_size(self, device): - if self.statistic_code_usage: - self.register_buffer('cluster_size', torch.zeros(self.num_tokens)) - self.cluster_size = self.cluster_size.to(device) - - def forward(self, z): - # reshape z -> (batch, height, width, channel) and flatten - # z, 'b c h w -> b h w c' - # z = rearrange(z, 'b c h w -> b h w c') - # z = z.transpose(1, 2) - z = l2norm(z) - z_flattened = z.reshape(-1, self.codebook_dim) - - self.embedding.init_embed_(z_flattened) - - d = z_flattened.pow(2).sum(dim=1, keepdim=True) + \ - self.embedding.weight.pow(2).sum(dim=1) - 2 * \ - torch.einsum('bd,nd->bn', z_flattened, self.embedding.weight) # 'n d -> d n' - - encoding_indices = torch.argmin(d, dim=1) - - z_q = self.embedding(encoding_indices).view(z.shape) - - encodings = F.one_hot(encoding_indices, self.num_tokens).type(z.dtype) - - if not self.training: - with torch.no_grad(): - cluster_size = encodings.sum(0) - self.all_reduce_fn(cluster_size) - ema_inplace(self.cluster_size, cluster_size, self.decay) - - if self.training and self.embedding.update: - # EMA cluster size - - bins = encodings.sum(0) - self.all_reduce_fn(bins) - - # self.embedding.cluster_size_ema_update(bins) - ema_inplace(self.cluster_size, bins, self.decay) - - zero_mask = (bins == 0) - bins = bins.masked_fill(zero_mask, 1.) - - embed_sum = z_flattened.t() @ encodings - self.all_reduce_fn(embed_sum) - - embed_normalized = (embed_sum / bins.unsqueeze(0)).t() - embed_normalized = l2norm(embed_normalized) - - embed_normalized = torch.where(zero_mask[..., None], self.embedding.weight, - embed_normalized) - norm_ema_inplace(self.embedding.weight, embed_normalized, self.decay) - - # compute loss for embedding - loss = self.beta * F.mse_loss(z_q.detach(), z) - - # preserve gradients - z_q = z + (z_q - z).detach() - - # reshape back to match original input shape - # z_q, 'b h w c -> b c h w' - # z_q = rearrange(z_q, 'b h w c -> b c h w') - # z_q = z_q.transpose(1, 2) - return z_q, loss, encoding_indices \ No newline at end of file diff --git a/spaces/umoubuton/atri-bert-vits2/text/__init__.py b/spaces/umoubuton/atri-bert-vits2/text/__init__.py deleted file mode 100644 index a45b650424306b6e077d7013e93e2c9bd1e073c2..0000000000000000000000000000000000000000 --- a/spaces/umoubuton/atri-bert-vits2/text/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -from text.symbols import * - -_symbol_to_id = {s: i for i, s in enumerate(symbols)} - - -def cleaned_text_to_sequence(cleaned_text, tones, language): - """Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - """ - phones = [_symbol_to_id[symbol] for symbol in cleaned_text] - tone_start = language_tone_start_map[language] - tones = [i + tone_start for i in tones] - lang_id = language_id_map[language] - lang_ids = [lang_id for i in phones] - return phones, tones, lang_ids - - -def get_bert(norm_text, word2ph, language, device): - from .chinese_bert import get_bert_feature as zh_bert - from .english_bert_mock import get_bert_feature as en_bert - from .japanese_bert import get_bert_feature as jp_bert - - lang_bert_func_map = {"ZH": zh_bert, "EN": en_bert, "JP": jp_bert} - bert = lang_bert_func_map[language](norm_text, word2ph, device) - return bert diff --git a/spaces/unstructuredio/unstructured-invoices/README.md b/spaces/unstructuredio/unstructured-invoices/README.md deleted file mode 100644 index e0a818c5ed8d9c4086063a46e81b5469803691a0..0000000000000000000000000000000000000000 --- a/spaces/unstructuredio/unstructured-invoices/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Invoices Parser -emoji: ⚡ -colorFrom: purple -colorTo: red -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/BeschneidungDerSklavinNora185318.md b/spaces/usbethFlerru/sovits-modelsV2/example/BeschneidungDerSklavinNora185318.md deleted file mode 100644 index 9ba3c4c42c86c49abb0e7e3cc470ed1c8c6d7251..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/BeschneidungDerSklavinNora185318.md +++ /dev/null @@ -1,6 +0,0 @@ -

                BeschneidungDerSklavinNora185318


                Download File ::: https://urlcod.com/2uyUPT



                - - aaccfb2cb3
                -
                -
                -

                diff --git a/spaces/uwx/waveformer/app.py b/spaces/uwx/waveformer/app.py deleted file mode 100644 index e4bef5f78ff8269453e3af92460ec6c68fee1c30..0000000000000000000000000000000000000000 --- a/spaces/uwx/waveformer/app.py +++ /dev/null @@ -1,65 +0,0 @@ -import argparse -import os -import json - -import wget -import torch -import torchaudio -import gradio as gr - -from dcc_tf import Net as Waveformer - -TARGETS = [ - "Acoustic_guitar", "Applause", "Bark", "Bass_drum", - "Burping_or_eructation", "Bus", "Cello", "Chime", "Clarinet", - "Computer_keyboard", "Cough", "Cowbell", "Double_bass", - "Drawer_open_or_close", "Electric_piano", "Fart", "Finger_snapping", - "Fireworks", "Flute", "Glockenspiel", "Gong", "Gunshot_or_gunfire", - "Harmonica", "Hi-hat", "Keys_jangling", "Knock", "Laughter", "Meow", - "Microwave_oven", "Oboe", "Saxophone", "Scissors", "Shatter", - "Snare_drum", "Squeak", "Tambourine", "Tearing", "Telephone", - "Trumpet", "Violin_or_fiddle", "Writing" -] - -if not os.path.exists('default_config.json'): - config_url = 'https://targetsound.cs.washington.edu/files/default_config.json' - print("Downloading model configuration from %s:" % config_url) - wget.download(config_url) - -if not os.path.exists('default_ckpt.pt'): - ckpt_url = 'https://targetsound.cs.washington.edu/files/default_ckpt.pt' - print("\nDownloading the checkpoint from %s:" % ckpt_url) - wget.download(ckpt_url) - -# Instantiate model -with open('default_config.json') as f: - params = json.load(f) -model = Waveformer(**params['model_params']) -model.load_state_dict( - torch.load('default_ckpt.pt', map_location=torch.device('cpu'))['model_state_dict']) -model.eval() - -def waveformer(audio, label_choices): - # Read input audio - fs, mixture = audio - if fs != 44100: - raise ValueError("Sampling rate must be 44100, but got %d" % fs) - mixture = torch.from_numpy( - mixture).unsqueeze(0).unsqueeze(0).to(torch.float) / (2.0 ** 15) - - # Construct the query vector - query = torch.zeros(1, len(TARGETS)) - for t in label_choices: - query[0, TARGETS.index(t)] = 1. - - with torch.no_grad(): - output = (2.0 ** 15) * model(mixture, query) - - return fs, output.squeeze(0).squeeze(0).to(torch.short).numpy() - - -input_audio = gr.Audio(label="Input audio") -label_checkbox = gr.CheckboxGroup(choices=TARGETS, label="Input target selection(s)") -output_audio = gr.Audio(label="Output audio") -demo = gr.Interface(fn=waveformer, inputs=[input_audio, label_checkbox], outputs=output_audio) -demo.launch(show_error=True) diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/hub/auth.md b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/hub/auth.md deleted file mode 100644 index 2098e8eb5dcfeba776339c65a595c82fd47782dd..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/hub/auth.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -description: Learn how to use Ultralytics hub authentication in your projects with examples and guidelines from the Auth page on Ultralytics Docs. -keywords: Ultralytics, ultralytics hub, api keys, authentication, collab accounts, requests, hub management, monitoring ---- - -## Auth ---- -### ::: ultralytics.hub.auth.Auth -

                diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/v8/classify/predict.py b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/v8/classify/predict.py deleted file mode 100644 index fb486e292e40671a410199b7de27e05213a57341..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/v8/classify/predict.py +++ /dev/null @@ -1,51 +0,0 @@ -# Ultralytics YOLO 🚀, AGPL-3.0 license - -import torch - -from ultralytics.yolo.engine.predictor import BasePredictor -from ultralytics.yolo.engine.results import Results -from ultralytics.yolo.utils import DEFAULT_CFG, ROOT - - -class ClassificationPredictor(BasePredictor): - - def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None): - super().__init__(cfg, overrides, _callbacks) - self.args.task = 'classify' - - def preprocess(self, img): - """Converts input image to model-compatible data type.""" - if not isinstance(img, torch.Tensor): - img = torch.stack([self.transforms(im) for im in img], dim=0) - img = (img if isinstance(img, torch.Tensor) else torch.from_numpy(img)).to(self.model.device) - return img.half() if self.model.fp16 else img.float() # uint8 to fp16/32 - - def postprocess(self, preds, img, orig_imgs): - """Postprocesses predictions to return Results objects.""" - results = [] - for i, pred in enumerate(preds): - orig_img = orig_imgs[i] if isinstance(orig_imgs, list) else orig_imgs - path = self.batch[0] - img_path = path[i] if isinstance(path, list) else path - results.append(Results(orig_img=orig_img, path=img_path, names=self.model.names, probs=pred)) - - return results - - -def predict(cfg=DEFAULT_CFG, use_python=False): - """Run YOLO model predictions on input images/videos.""" - model = cfg.model or 'yolov8n-cls.pt' # or "resnet18" - source = cfg.source if cfg.source is not None else ROOT / 'assets' if (ROOT / 'assets').exists() \ - else 'https://ultralytics.com/images/bus.jpg' - - args = dict(model=model, source=source) - if use_python: - from ultralytics import YOLO - YOLO(model)(**args) - else: - predictor = ClassificationPredictor(overrides=args) - predictor.predict_cli() - - -if __name__ == '__main__': - predict() diff --git a/spaces/valhalla/minDALLE/dalle/models/stage2/layers.py b/spaces/valhalla/minDALLE/dalle/models/stage2/layers.py deleted file mode 100644 index 43b7c9d584f35eb0e6fc8a7a4477a72bec58caa9..0000000000000000000000000000000000000000 --- a/spaces/valhalla/minDALLE/dalle/models/stage2/layers.py +++ /dev/null @@ -1,140 +0,0 @@ -# ------------------------------------------------------------------------------------ -# Minimal DALL-E -# Copyright (c) 2021 KakaoBrain. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------------------ -# Modified from minGPT (https://github.com/karpathy/minGPT) -# Copyright (c) 2020 Andrej Karpathy. All Rights Reserved. -# ------------------------------------------------------------------------------------ - -import math -import torch -import torch.nn as nn -from torch.nn import functional as F - - -class GELU(nn.Module): - def __init__(self, use_approx=False): - super().__init__() - self.use_approx = use_approx - - def forward(self, x): - if self.use_approx: - return x * torch.sigmoid(1.702 * x) - else: - return F.gelu(x) - - -class MultiHeadSelfAttention(nn.Module): - - def __init__(self, - ctx_len: int, - embed_dim: int, - n_heads: int, - resid_pdrop: float, - attn_pdrop: float, - attn_bias: bool, - use_mask: bool = True): - super().__init__() - assert embed_dim % n_heads == 0 - - # key, query, value projections for all heads - self.key = nn.Linear(embed_dim, embed_dim, bias=attn_bias) - self.query = nn.Linear(embed_dim, embed_dim, bias=attn_bias) - self.value = nn.Linear(embed_dim, embed_dim, bias=attn_bias) - - # regularization - self.attn_drop = nn.Dropout(attn_pdrop) - self.resid_drop = nn.Dropout(resid_pdrop) - - # output projection - self.proj = nn.Linear(embed_dim, embed_dim, attn_bias) - - self.n_heads = n_heads - self.ctx_len = ctx_len - self.use_mask = use_mask - if self.use_mask: - self.register_buffer("mask", torch.ones(ctx_len, ctx_len), persistent=False) - self.mask = torch.tril(self.mask).view(1, ctx_len, ctx_len) - - def forward(self, x, use_cache=False, layer_past=None): - B, T, C = x.shape - x = x.transpose(0, 1).contiguous() # (B, T, C) -> (T, B, C) - - # calculate query, key, values for all heads in batch and move head forward to be the batch dim - k = self.key(x).view(T, B*self.n_heads, C//self.n_heads).transpose(0, 1) # (B*nh, T, hs) - q = self.query(x).view(T, B*self.n_heads, C//self.n_heads).transpose(0, 1) # (B*nh, T, hs) - v = self.value(x).view(T, B*self.n_heads, C//self.n_heads).transpose(0, 1) # (B*nh, T, hs) - - if use_cache: - present = torch.stack([k, v]) - - if layer_past is not None: - past_key, past_value = layer_past - k = torch.cat([past_key, k], dim=-2) - v = torch.cat([past_value, v], dim=-2) - - if use_cache and layer_past is not None: - # Tensor shape below: (B * nh, 1, hs) X (B * nh, hs, K) -> (B * nh, 1, K) - att = torch.bmm(q, (k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))) - att = F.softmax(att, dim=-1) - att = self.attn_drop(att) - y = torch.bmm(att, v) # (B*nh, 1, K) X (B*nh, K, hs) -> (B*nh, 1, hs) - else: - # Tensor shape below: (B * nh, T, hs) X (B * nh, hs, T) -> (B * nh, T, T) - att = torch.bmm(q, (k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))) - if self.use_mask: - mask = self.mask if T == self.ctx_len else self.mask[:, :T, :T] - att = att.masked_fill(mask == 0, float('-inf')) - att = F.softmax(att, dim=-1) - att = self.attn_drop(att) - y = torch.bmm(att, v) # (B*nh, T, T) X (B*nh, T, hs) -> (B*nh, T, hs) - y = y.transpose(0, 1).contiguous().view(T, B, C) # re-assemble all head outputs side by side - - # output projection - y = self.resid_drop(self.proj(y)) - if use_cache: - return y.transpose(0, 1).contiguous(), present # (T, B, C) -> (B, T, C) - else: - return y.transpose(0, 1).contiguous() # (T, B, C) -> (B, T, C) - - -class Block(nn.Module): - - def __init__(self, - ctx_len: int, - embed_dim: int, - n_heads: int, - mlp_bias: bool, - attn_bias: bool, - resid_pdrop: bool, - attn_pdrop: bool, - gelu_use_approx: bool): - super().__init__() - self.ln1 = nn.LayerNorm(embed_dim) - self.ln2 = nn.LayerNorm(embed_dim) - - self.attn = MultiHeadSelfAttention(ctx_len=ctx_len, - embed_dim=embed_dim, - n_heads=n_heads, - attn_pdrop=attn_pdrop, - resid_pdrop=resid_pdrop, - attn_bias=attn_bias, - use_mask=True) - self.mlp = nn.Sequential( - nn.Linear(embed_dim, 4 * embed_dim, bias=mlp_bias), - GELU(gelu_use_approx), - nn.Linear(4 * embed_dim, embed_dim, bias=mlp_bias), - nn.Dropout(resid_pdrop), - ) - - def forward(self, x): - x = x + self.attn(self.ln1(x)) - x = x + self.mlp(self.ln2(x)) - return x - - def sample(self, x, layer_past=None): - attn, present = self.attn(self.ln1(x), use_cache=True, layer_past=layer_past) - x = x + attn - x = x + self.mlp(self.ln2(x)) - return x, present diff --git a/spaces/vonbarnekowa/stable-diffusion/ldm/modules/diffusionmodules/upscaling.py b/spaces/vonbarnekowa/stable-diffusion/ldm/modules/diffusionmodules/upscaling.py deleted file mode 100644 index 03816662098ce1ffac79bd939b892e867ab91988..0000000000000000000000000000000000000000 --- a/spaces/vonbarnekowa/stable-diffusion/ldm/modules/diffusionmodules/upscaling.py +++ /dev/null @@ -1,81 +0,0 @@ -import torch -import torch.nn as nn -import numpy as np -from functools import partial - -from ldm.modules.diffusionmodules.util import extract_into_tensor, make_beta_schedule -from ldm.util import default - - -class AbstractLowScaleModel(nn.Module): - # for concatenating a downsampled image to the latent representation - def __init__(self, noise_schedule_config=None): - super(AbstractLowScaleModel, self).__init__() - if noise_schedule_config is not None: - self.register_schedule(**noise_schedule_config) - - def register_schedule(self, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end, - cosine_s=cosine_s) - alphas = 1. - betas - alphas_cumprod = np.cumprod(alphas, axis=0) - alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1]) - - timesteps, = betas.shape - self.num_timesteps = int(timesteps) - self.linear_start = linear_start - self.linear_end = linear_end - assert alphas_cumprod.shape[0] == self.num_timesteps, 'alphas have to be defined for each timestep' - - to_torch = partial(torch.tensor, dtype=torch.float32) - - self.register_buffer('betas', to_torch(betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1))) - - def q_sample(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start + - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise) - - def forward(self, x): - return x, None - - def decode(self, x): - return x - - -class SimpleImageConcat(AbstractLowScaleModel): - # no noise level conditioning - def __init__(self): - super(SimpleImageConcat, self).__init__(noise_schedule_config=None) - self.max_noise_level = 0 - - def forward(self, x): - # fix to constant noise level - return x, torch.zeros(x.shape[0], device=x.device).long() - - -class ImageConcatWithNoiseAugmentation(AbstractLowScaleModel): - def __init__(self, noise_schedule_config, max_noise_level=1000, to_cuda=False): - super().__init__(noise_schedule_config=noise_schedule_config) - self.max_noise_level = max_noise_level - - def forward(self, x, noise_level=None): - if noise_level is None: - noise_level = torch.randint(0, self.max_noise_level, (x.shape[0],), device=x.device).long() - else: - assert isinstance(noise_level, torch.Tensor) - z = self.q_sample(x, noise_level) - return z, noise_level - - - diff --git a/spaces/vrajeshbhatt/Automated-Ticket-Management-System/templates/retrain_data_preview.html b/spaces/vrajeshbhatt/Automated-Ticket-Management-System/templates/retrain_data_preview.html deleted file mode 100644 index 0fe66aa9339a3375493f8a8c2d6cd4fe700eaf7a..0000000000000000000000000000000000000000 --- a/spaces/vrajeshbhatt/Automated-Ticket-Management-System/templates/retrain_data_preview.html +++ /dev/null @@ -1,76 +0,0 @@ - - - - - - - - - - - - - - - - Data Preview - - - - - - -
                - - - -
                -
                Prediction Model is built on File {{name}}
                - -
                -
                - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/wdnmd12/Real-CUGAN/README.md b/spaces/wdnmd12/Real-CUGAN/README.md deleted file mode 100644 index d673114edadba73e80f33a3c71bc0dbee8758cc8..0000000000000000000000000000000000000000 --- a/spaces/wdnmd12/Real-CUGAN/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Real CUGAN -emoji: 🐢 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false -license: gpl-3.0 -duplicated_from: DianXian/Real-CUGAN ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/weibinke/vits-simple-api/bert_vits2/attentions.py b/spaces/weibinke/vits-simple-api/bert_vits2/attentions.py deleted file mode 100644 index 80df44f4f28f355f50c85c9f273eadfa414a51ff..0000000000000000000000000000000000000000 --- a/spaces/weibinke/vits-simple-api/bert_vits2/attentions.py +++ /dev/null @@ -1,352 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F -from bert_vits2 import commons -from torch.nn.utils import weight_norm, remove_weight_norm - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, - isflow=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - if isflow: - cond_layer = torch.nn.Conv1d(256, 2 * hidden_channels * n_layers, 1) - self.cond_pre = torch.nn.Conv1d(hidden_channels, 2 * hidden_channels, 1) - self.cond_layer = weight_norm(cond_layer, name='weight') - self.gin_channels = 256 - self.cond_layer_idx = self.n_layers - if 'gin_channels' in kwargs: - self.gin_channels = kwargs['gin_channels'] - if self.gin_channels != 0: - self.spk_emb_linear = nn.Linear(self.gin_channels, self.hidden_channels) - # vits2 says 3rd block, so idx is 2 by default - self.cond_layer_idx = kwargs['cond_layer_idx'] if 'cond_layer_idx' in kwargs else 2 - # print(self.gin_channels, self.cond_layer_idx) - assert self.cond_layer_idx < self.n_layers, 'cond_layer_idx should be less than n_layers' - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, - window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, g=None): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - if i == self.cond_layer_idx and g is not None: - g = self.spk_emb_linear(g.transpose(1, 2)) - g = g.transpose(1, 2) - x = x + g - x = x * x_mask - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., - proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, - proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, - block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels ** -0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query / math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:, slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[:, :, :length, length - 1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]])) - x_flat = x.view([batch, heads, length ** 2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, - causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/wenpeng/Sod_Inpaint/sod/infer_model.py b/spaces/wenpeng/Sod_Inpaint/sod/infer_model.py deleted file mode 100644 index 4e4e99e9d5877ac67cad744609c533f3740a1618..0000000000000000000000000000000000000000 --- a/spaces/wenpeng/Sod_Inpaint/sod/infer_model.py +++ /dev/null @@ -1,89 +0,0 @@ -import os -import cv2 -import numpy as np -import torch -import torch.nn.functional as F - -import sys -sys.path.insert(0, '../') -sys.dont_write_bytecode = True -from .PGNet import PGNet - -class Normalize(object): - def __init__(self, mean, std): - self.mean = mean - self.std = std - - def __call__(self, image): - image = (image - self.mean)/self.std - return image - -class Config(object): - def __init__(self, **kwargs): - self.kwargs = kwargs - self.mean = np.array([[[124.55, 118.90, 102.94]]]) - self.std = np.array([[[ 56.77, 55.97, 57.50]]]) - print('\nParameters...') - for k, v in self.kwargs.items(): - print('%-10s: %s'%(k, v)) - - def __getattr__(self, name): - if name in self.kwargs: - return self.kwargs[name] - else: - return None - -class IVModel(): - def __init__(self, device=torch.device('cuda:0')): - super(IVModel, self).__init__() - self.device = device - checkpoint_path = 'sod/weights/PGNet_DUT+HR-model-31.pth' - self.cfg = Config(snapshot=checkpoint_path, mode='test') - if not os.path.exists(checkpoint_path): - print('未找到模型文件!') - self.net = PGNet(self.cfg) - self.net.train(False) - self.net.to(device) - self.normalize = Normalize(mean=self.cfg.mean, std=self.cfg.std) - - self.__first_forward__() - - - def __first_forward__(self, input_size=(512, 512, 3)): - # 调用forward()严格控制最大显存 - print('initialize Sod Model...') - _ = self.forward(np.random.rand(*input_size) * 255, None) - print('initialize Complete!') - - def __resize_tensor__(self, image, max_size=512): - h, w = image.size()[2:] - if max(h, w) > max_size: - if h < w: - h, w = int(max_size * h / w)//8*8, max_size - else: - h, w = max_size, int(max_size * w / h)//8*8 - image = F.interpolate(image, (h, w), mode='area') - return image - - def input_preprocess_tensor(self, img): - img = self.normalize(img) - img_t = torch.from_numpy(img.astype(np.float32)) # .to(self.device) - img_t = img_t.permute(2, 0, 1).unsqueeze(0) - img_t = self.__resize_tensor__(img_t).to(self.device) # 为了控制最大显存容量 - return img_t - - def forward(self, img, json_data): - img_t = self.input_preprocess_tensor(img) - shape = [torch.as_tensor([img_t.shape[2]]), torch.as_tensor([img_t.shape[3]])] - h, w = img_t.shape[2], img_t.shape[3] - img_t_temp = F.interpolate(img_t, (512, 512), mode='area') - with torch.no_grad(): - res = self.net(img_t_temp, shape=shape) - res = F.interpolate(res[0],size=shape, mode='bilinear') - res = torch.sigmoid(res) - # print(res.shape, img_t.shape, res.expand_as(img_t).shape) - res = torch.cat([img_t, res.expand_as(img_t)], dim=3) - res = (res[0].permute(1,2,0)).cpu().numpy() - res[:,:w,:] = res[:,:w,:] * self.cfg.std + self.cfg.mean - res[:,w:,:] = res[:,w:,:] * 255 - return res \ No newline at end of file diff --git a/spaces/whitphx/gradio-static-test/dist/assets/csv-b0b7514a.js b/spaces/whitphx/gradio-static-test/dist/assets/csv-b0b7514a.js deleted file mode 100644 index 511b34b2aed1552447a6605d45d0760eccb992ab..0000000000000000000000000000000000000000 --- a/spaces/whitphx/gradio-static-test/dist/assets/csv-b0b7514a.js +++ /dev/null @@ -1,2 +0,0 @@ -import{d as a}from"./dsv-576afacd.js";var s=a(","),v=s.parse,o=s.parseRows;export{v as a,o as c}; -//# sourceMappingURL=csv-b0b7514a.js.map diff --git a/spaces/wonderit-safeai/tts-announcer/attentions.py b/spaces/wonderit-safeai/tts-announcer/attentions.py deleted file mode 100644 index 4e0b0c1fd48c962e21e1fbe60b23fc574927435c..0000000000000000000000000000000000000000 --- a/spaces/wonderit-safeai/tts-announcer/attentions.py +++ /dev/null @@ -1,303 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/wukevin/foldingdiff/app.py b/spaces/wukevin/foldingdiff/app.py deleted file mode 100644 index a346c4652899ecfc5c66e1063e5f3d4ac8ef23b3..0000000000000000000000000000000000000000 --- a/spaces/wukevin/foldingdiff/app.py +++ /dev/null @@ -1,113 +0,0 @@ -""" -foldingdiff implements a diffusion model for generating protein structures. Inspired by the biological folding process, -we perform diffusion on the angles between amino acid residues rather than the absolute 3D coordinates of each residue. -By effectively treating each residue as its own reference frame, we shift the equivariance constraints into the -representation space itself; this allows us to use a vanilla transformer model as our model. Here, we provide a simple -online interface for generating single backbones with a given length, starting from a given random seed. - -Tips for generating proteins: -* The maximum sequence sequence length this model has been trained on is 128 residues. The shorter a sequence is, the more likely it will be "designable" (see our preprint). -* FoldingDiff does *not* generate the amino acid sequence for its structures, it simply fills the structure with Glycine residues; use a tool like ESM-IF1 to generate amino acids corresponding to generated structure. - -See our preprint at https://arxiv.org/abs/2209.15611 and our full codebase at https://github.com/microsoft/foldingdiff -""" - -import os -import gradio as gr - -import torch -from foldingdiff import sampling -from foldingdiff import angles_and_coords as ac - -def read_mol(molpath: str) -> str: - with open(molpath, "r") as fp: - lines = fp.readlines() - mol = "" - for l in lines: - mol += l - return mol - -def molecule(input_pdb: str) -> str: - """Get the string to view the given pdb in 3dmol.js""" - mol = read_mol(input_pdb) - - x = ( - """ - - - - - - - -
                - - - """ - ) - - return f"""""" - -def sample_at_length(l:int, seed:int): - """ - Sample a single structure at the given length - """ - torch.manual_seed(seed) - l = int(l) - - # Sample the angles - s = sampling.sample_simple("wukevin/foldingdiff_cath", n=1, sweep_lengths=(l, l+1)).pop() - - # Create a PDB file after building out the structure in 3D coordinates - outdir = os.path.join(os.getcwd(), "output") - os.makedirs(outdir, exist_ok=True) - pdb_file = ac.create_new_chain_nerf(os.path.join(outdir, "generated.pdb"), s) - - return molecule(pdb_file), pdb_file - -interface = gr.Interface( - fn=sample_at_length, - title="foldingdiff - protein backbone structure generation with diffusion models", - description=__doc__, - inputs=[ - gr.Number(value=85, label="Protein backbone length to generate", show_label=True, precision=0), - gr.Number(value=123, label="Random seed", show_label=True, precision=0), - ], - outputs=[ - gr.HTML(), - gr.File(label="Generated structure in PDB format (cartesian coordinates)"), - # gr.Dataframe(label="Generated angles defining structure", max_rows=8), - ], -) -interface.launch() \ No newline at end of file diff --git a/spaces/wzq10314/VITS-Umamusume-voice-synthesizer1/text/cantonese.py b/spaces/wzq10314/VITS-Umamusume-voice-synthesizer1/text/cantonese.py deleted file mode 100644 index b66d12138b81b70b86f18217d24a08fce76305c0..0000000000000000000000000000000000000000 --- a/spaces/wzq10314/VITS-Umamusume-voice-synthesizer1/text/cantonese.py +++ /dev/null @@ -1,59 +0,0 @@ -import re -import cn2an -import opencc - - -converter = opencc.OpenCC('jyutjyu') - -# List of (Latin alphabet, ipa) pairs: -_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('A', 'ei˥'), - ('B', 'biː˥'), - ('C', 'siː˥'), - ('D', 'tiː˥'), - ('E', 'iː˥'), - ('F', 'e˥fuː˨˩'), - ('G', 'tsiː˥'), - ('H', 'ɪk̚˥tsʰyː˨˩'), - ('I', 'ɐi˥'), - ('J', 'tsei˥'), - ('K', 'kʰei˥'), - ('L', 'e˥llou˨˩'), - ('M', 'ɛːm˥'), - ('N', 'ɛːn˥'), - ('O', 'ou˥'), - ('P', 'pʰiː˥'), - ('Q', 'kʰiːu˥'), - ('R', 'aː˥lou˨˩'), - ('S', 'ɛː˥siː˨˩'), - ('T', 'tʰiː˥'), - ('U', 'juː˥'), - ('V', 'wiː˥'), - ('W', 'tʊk̚˥piː˥juː˥'), - ('X', 'ɪk̚˥siː˨˩'), - ('Y', 'waːi˥'), - ('Z', 'iː˨sɛːt̚˥') -]] - - -def number_to_cantonese(text): - return re.sub(r'\d+(?:\.?\d+)?', lambda x: cn2an.an2cn(x.group()), text) - - -def latin_to_ipa(text): - for regex, replacement in _latin_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def cantonese_to_ipa(text): - text = number_to_cantonese(text.upper()) - text = converter.convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text) - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! ', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/xfys/yolov5_tracking/val_utils/trackeval/metrics/clear.py b/spaces/xfys/yolov5_tracking/val_utils/trackeval/metrics/clear.py deleted file mode 100644 index 8b5e291fc0c5375efa8b548c3ad81bad73865b1c..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/val_utils/trackeval/metrics/clear.py +++ /dev/null @@ -1,186 +0,0 @@ - -import numpy as np -from scipy.optimize import linear_sum_assignment -from ._base_metric import _BaseMetric -from .. import _timing -from .. import utils - -class CLEAR(_BaseMetric): - """Class which implements the CLEAR metrics""" - - @staticmethod - def get_default_config(): - """Default class config values""" - default_config = { - 'THRESHOLD': 0.5, # Similarity score threshold required for a TP match. Default 0.5. - 'PRINT_CONFIG': True, # Whether to print the config information on init. Default: False. - } - return default_config - - def __init__(self, config=None): - super().__init__() - main_integer_fields = ['CLR_TP', 'CLR_FN', 'CLR_FP', 'IDSW', 'MT', 'PT', 'ML', 'Frag'] - extra_integer_fields = ['CLR_Frames'] - self.integer_fields = main_integer_fields + extra_integer_fields - main_float_fields = ['MOTA', 'MOTP', 'MODA', 'CLR_Re', 'CLR_Pr', 'MTR', 'PTR', 'MLR', 'sMOTA'] - extra_float_fields = ['CLR_F1', 'FP_per_frame', 'MOTAL', 'MOTP_sum'] - self.float_fields = main_float_fields + extra_float_fields - self.fields = self.float_fields + self.integer_fields - self.summed_fields = self.integer_fields + ['MOTP_sum'] - self.summary_fields = main_float_fields + main_integer_fields - - # Configuration options: - self.config = utils.init_config(config, self.get_default_config(), self.get_name()) - self.threshold = float(self.config['THRESHOLD']) - - - @_timing.time - def eval_sequence(self, data): - """Calculates CLEAR metrics for one sequence""" - # Initialise results - res = {} - for field in self.fields: - res[field] = 0 - - # Return result quickly if tracker or gt sequence is empty - if data['num_tracker_dets'] == 0: - res['CLR_FN'] = data['num_gt_dets'] - res['ML'] = data['num_gt_ids'] - res['MLR'] = 1.0 - return res - if data['num_gt_dets'] == 0: - res['CLR_FP'] = data['num_tracker_dets'] - res['MLR'] = 1.0 - return res - - # Variables counting global association - num_gt_ids = data['num_gt_ids'] - gt_id_count = np.zeros(num_gt_ids) # For MT/ML/PT - gt_matched_count = np.zeros(num_gt_ids) # For MT/ML/PT - gt_frag_count = np.zeros(num_gt_ids) # For Frag - - # Note that IDSWs are counted based on the last time each gt_id was present (any number of frames previously), - # but are only used in matching to continue current tracks based on the gt_id in the single previous timestep. - prev_tracker_id = np.nan * np.zeros(num_gt_ids) # For scoring IDSW - prev_timestep_tracker_id = np.nan * np.zeros(num_gt_ids) # For matching IDSW - - # Calculate scores for each timestep - for t, (gt_ids_t, tracker_ids_t) in enumerate(zip(data['gt_ids'], data['tracker_ids'])): - # Deal with the case that there are no gt_det/tracker_det in a timestep. - if len(gt_ids_t) == 0: - res['CLR_FP'] += len(tracker_ids_t) - continue - if len(tracker_ids_t) == 0: - res['CLR_FN'] += len(gt_ids_t) - gt_id_count[gt_ids_t] += 1 - continue - - # Calc score matrix to first minimise IDSWs from previous frame, and then maximise MOTP secondarily - similarity = data['similarity_scores'][t] - score_mat = (tracker_ids_t[np.newaxis, :] == prev_timestep_tracker_id[gt_ids_t[:, np.newaxis]]) - score_mat = 1000 * score_mat + similarity - score_mat[similarity < self.threshold - np.finfo('float').eps] = 0 - - # Hungarian algorithm to find best matches - match_rows, match_cols = linear_sum_assignment(-score_mat) - actually_matched_mask = score_mat[match_rows, match_cols] > 0 + np.finfo('float').eps - match_rows = match_rows[actually_matched_mask] - match_cols = match_cols[actually_matched_mask] - - matched_gt_ids = gt_ids_t[match_rows] - matched_tracker_ids = tracker_ids_t[match_cols] - - # Calc IDSW for MOTA - prev_matched_tracker_ids = prev_tracker_id[matched_gt_ids] - is_idsw = (np.logical_not(np.isnan(prev_matched_tracker_ids))) & ( - np.not_equal(matched_tracker_ids, prev_matched_tracker_ids)) - res['IDSW'] += np.sum(is_idsw) - - # Update counters for MT/ML/PT/Frag and record for IDSW/Frag for next timestep - gt_id_count[gt_ids_t] += 1 - gt_matched_count[matched_gt_ids] += 1 - not_previously_tracked = np.isnan(prev_timestep_tracker_id) - prev_tracker_id[matched_gt_ids] = matched_tracker_ids - prev_timestep_tracker_id[:] = np.nan - prev_timestep_tracker_id[matched_gt_ids] = matched_tracker_ids - currently_tracked = np.logical_not(np.isnan(prev_timestep_tracker_id)) - gt_frag_count += np.logical_and(not_previously_tracked, currently_tracked) - - # Calculate and accumulate basic statistics - num_matches = len(matched_gt_ids) - res['CLR_TP'] += num_matches - res['CLR_FN'] += len(gt_ids_t) - num_matches - res['CLR_FP'] += len(tracker_ids_t) - num_matches - if num_matches > 0: - res['MOTP_sum'] += sum(similarity[match_rows, match_cols]) - - # Calculate MT/ML/PT/Frag/MOTP - tracked_ratio = gt_matched_count[gt_id_count > 0] / gt_id_count[gt_id_count > 0] - res['MT'] = np.sum(np.greater(tracked_ratio, 0.8)) - res['PT'] = np.sum(np.greater_equal(tracked_ratio, 0.2)) - res['MT'] - res['ML'] = num_gt_ids - res['MT'] - res['PT'] - res['Frag'] = np.sum(np.subtract(gt_frag_count[gt_frag_count > 0], 1)) - res['MOTP'] = res['MOTP_sum'] / np.maximum(1.0, res['CLR_TP']) - - res['CLR_Frames'] = data['num_timesteps'] - - # Calculate final CLEAR scores - res = self._compute_final_fields(res) - return res - - def combine_sequences(self, all_res): - """Combines metrics across all sequences""" - res = {} - for field in self.summed_fields: - res[field] = self._combine_sum(all_res, field) - res = self._compute_final_fields(res) - return res - - def combine_classes_det_averaged(self, all_res): - """Combines metrics across all classes by averaging over the detection values""" - res = {} - for field in self.summed_fields: - res[field] = self._combine_sum(all_res, field) - res = self._compute_final_fields(res) - return res - - def combine_classes_class_averaged(self, all_res, ignore_empty_classes=False): - """Combines metrics across all classes by averaging over the class values. - If 'ignore_empty_classes' is True, then it only sums over classes with at least one gt or predicted detection. - """ - res = {} - for field in self.integer_fields: - if ignore_empty_classes: - res[field] = self._combine_sum( - {k: v for k, v in all_res.items() if v['CLR_TP'] + v['CLR_FN'] + v['CLR_FP'] > 0}, field) - else: - res[field] = self._combine_sum({k: v for k, v in all_res.items()}, field) - for field in self.float_fields: - if ignore_empty_classes: - res[field] = np.mean( - [v[field] for v in all_res.values() if v['CLR_TP'] + v['CLR_FN'] + v['CLR_FP'] > 0], axis=0) - else: - res[field] = np.mean([v[field] for v in all_res.values()], axis=0) - return res - - @staticmethod - def _compute_final_fields(res): - """Calculate sub-metric ('field') values which only depend on other sub-metric values. - This function is used both for both per-sequence calculation, and in combining values across sequences. - """ - num_gt_ids = res['MT'] + res['ML'] + res['PT'] - res['MTR'] = res['MT'] / np.maximum(1.0, num_gt_ids) - res['MLR'] = res['ML'] / np.maximum(1.0, num_gt_ids) - res['PTR'] = res['PT'] / np.maximum(1.0, num_gt_ids) - res['CLR_Re'] = res['CLR_TP'] / np.maximum(1.0, res['CLR_TP'] + res['CLR_FN']) - res['CLR_Pr'] = res['CLR_TP'] / np.maximum(1.0, res['CLR_TP'] + res['CLR_FP']) - res['MODA'] = (res['CLR_TP'] - res['CLR_FP']) / np.maximum(1.0, res['CLR_TP'] + res['CLR_FN']) - res['MOTA'] = (res['CLR_TP'] - res['CLR_FP'] - res['IDSW']) / np.maximum(1.0, res['CLR_TP'] + res['CLR_FN']) - res['MOTP'] = res['MOTP_sum'] / np.maximum(1.0, res['CLR_TP']) - res['sMOTA'] = (res['MOTP_sum'] - res['CLR_FP'] - res['IDSW']) / np.maximum(1.0, res['CLR_TP'] + res['CLR_FN']) - - res['CLR_F1'] = res['CLR_TP'] / np.maximum(1.0, res['CLR_TP'] + 0.5*res['CLR_FN'] + 0.5*res['CLR_FP']) - res['FP_per_frame'] = res['CLR_FP'] / np.maximum(1.0, res['CLR_Frames']) - safe_log_idsw = np.log10(res['IDSW']) if res['IDSW'] > 0 else res['IDSW'] - res['MOTAL'] = (res['CLR_TP'] - res['CLR_FP'] - safe_log_idsw) / np.maximum(1.0, res['CLR_TP'] + res['CLR_FN']) - return res diff --git a/spaces/xiang-wuu/yolov5/utils/metrics.py b/spaces/xiang-wuu/yolov5/utils/metrics.py deleted file mode 100644 index 9bf084c788549eb4d18656a46d6fe1fc97fa1089..0000000000000000000000000000000000000000 --- a/spaces/xiang-wuu/yolov5/utils/metrics.py +++ /dev/null @@ -1,361 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Model validation metrics -""" - -import math -import warnings -from pathlib import Path - -import matplotlib.pyplot as plt -import numpy as np -import torch - - -def fitness(x): - # Model fitness as a weighted combination of metrics - w = [0.0, 0.0, 0.1, 0.9] # weights for [P, R, mAP@0.5, mAP@0.5:0.95] - return (x[:, :4] * w).sum(1) - - -def smooth(y, f=0.05): - # Box filter of fraction f - nf = round(len(y) * f * 2) // 2 + 1 # number of filter elements (must be odd) - p = np.ones(nf // 2) # ones padding - yp = np.concatenate((p * y[0], y, p * y[-1]), 0) # y padded - return np.convolve(yp, np.ones(nf) / nf, mode='valid') # y-smoothed - - -def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='.', names=(), eps=1e-16): - """ Compute the average precision, given the recall and precision curves. - Source: https://github.com/rafaelpadilla/Object-Detection-Metrics. - # Arguments - tp: True positives (nparray, nx1 or nx10). - conf: Objectness value from 0-1 (nparray). - pred_cls: Predicted object classes (nparray). - target_cls: True object classes (nparray). - plot: Plot precision-recall curve at mAP@0.5 - save_dir: Plot save directory - # Returns - The average precision as computed in py-faster-rcnn. - """ - - # Sort by objectness - i = np.argsort(-conf) - tp, conf, pred_cls = tp[i], conf[i], pred_cls[i] - - # Find unique classes - unique_classes, nt = np.unique(target_cls, return_counts=True) - nc = unique_classes.shape[0] # number of classes, number of detections - - # Create Precision-Recall curve and compute AP for each class - px, py = np.linspace(0, 1, 1000), [] # for plotting - ap, p, r = np.zeros((nc, tp.shape[1])), np.zeros((nc, 1000)), np.zeros((nc, 1000)) - for ci, c in enumerate(unique_classes): - i = pred_cls == c - n_l = nt[ci] # number of labels - n_p = i.sum() # number of predictions - if n_p == 0 or n_l == 0: - continue - - # Accumulate FPs and TPs - fpc = (1 - tp[i]).cumsum(0) - tpc = tp[i].cumsum(0) - - # Recall - recall = tpc / (n_l + eps) # recall curve - r[ci] = np.interp(-px, -conf[i], recall[:, 0], left=0) # negative x, xp because xp decreases - - # Precision - precision = tpc / (tpc + fpc) # precision curve - p[ci] = np.interp(-px, -conf[i], precision[:, 0], left=1) # p at pr_score - - # AP from recall-precision curve - for j in range(tp.shape[1]): - ap[ci, j], mpre, mrec = compute_ap(recall[:, j], precision[:, j]) - if plot and j == 0: - py.append(np.interp(px, mrec, mpre)) # precision at mAP@0.5 - - # Compute F1 (harmonic mean of precision and recall) - f1 = 2 * p * r / (p + r + eps) - names = [v for k, v in names.items() if k in unique_classes] # list: only classes that have data - names = dict(enumerate(names)) # to dict - if plot: - plot_pr_curve(px, py, ap, Path(save_dir) / 'PR_curve.png', names) - plot_mc_curve(px, f1, Path(save_dir) / 'F1_curve.png', names, ylabel='F1') - plot_mc_curve(px, p, Path(save_dir) / 'P_curve.png', names, ylabel='Precision') - plot_mc_curve(px, r, Path(save_dir) / 'R_curve.png', names, ylabel='Recall') - - i = smooth(f1.mean(0), 0.1).argmax() # max F1 index - p, r, f1 = p[:, i], r[:, i], f1[:, i] - tp = (r * nt).round() # true positives - fp = (tp / (p + eps) - tp).round() # false positives - return tp, fp, p, r, f1, ap, unique_classes.astype(int) - - -def compute_ap(recall, precision): - """ Compute the average precision, given the recall and precision curves - # Arguments - recall: The recall curve (list) - precision: The precision curve (list) - # Returns - Average precision, precision curve, recall curve - """ - - # Append sentinel values to beginning and end - mrec = np.concatenate(([0.0], recall, [1.0])) - mpre = np.concatenate(([1.0], precision, [0.0])) - - # Compute the precision envelope - mpre = np.flip(np.maximum.accumulate(np.flip(mpre))) - - # Integrate area under curve - method = 'interp' # methods: 'continuous', 'interp' - if method == 'interp': - x = np.linspace(0, 1, 101) # 101-point interp (COCO) - ap = np.trapz(np.interp(x, mrec, mpre), x) # integrate - else: # 'continuous' - i = np.where(mrec[1:] != mrec[:-1])[0] # points where x axis (recall) changes - ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) # area under curve - - return ap, mpre, mrec - - -class ConfusionMatrix: - # Updated version of https://github.com/kaanakan/object_detection_confusion_matrix - def __init__(self, nc, conf=0.25, iou_thres=0.45): - self.matrix = np.zeros((nc + 1, nc + 1)) - self.nc = nc # number of classes - self.conf = conf - self.iou_thres = iou_thres - - def process_batch(self, detections, labels): - """ - Return intersection-over-union (Jaccard index) of boxes. - Both sets of boxes are expected to be in (x1, y1, x2, y2) format. - Arguments: - detections (Array[N, 6]), x1, y1, x2, y2, conf, class - labels (Array[M, 5]), class, x1, y1, x2, y2 - Returns: - None, updates confusion matrix accordingly - """ - if detections is None: - gt_classes = labels.int() - for i, gc in enumerate(gt_classes): - self.matrix[self.nc, gc] += 1 # background FN - return - - detections = detections[detections[:, 4] > self.conf] - gt_classes = labels[:, 0].int() - detection_classes = detections[:, 5].int() - iou = box_iou(labels[:, 1:], detections[:, :4]) - - x = torch.where(iou > self.iou_thres) - if x[0].shape[0]: - matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy() - if x[0].shape[0] > 1: - matches = matches[matches[:, 2].argsort()[::-1]] - matches = matches[np.unique(matches[:, 1], return_index=True)[1]] - matches = matches[matches[:, 2].argsort()[::-1]] - matches = matches[np.unique(matches[:, 0], return_index=True)[1]] - else: - matches = np.zeros((0, 3)) - - n = matches.shape[0] > 0 - m0, m1, _ = matches.transpose().astype(int) - for i, gc in enumerate(gt_classes): - j = m0 == i - if n and sum(j) == 1: - self.matrix[detection_classes[m1[j]], gc] += 1 # correct - else: - self.matrix[self.nc, gc] += 1 # background FP - - if n: - for i, dc in enumerate(detection_classes): - if not any(m1 == i): - self.matrix[dc, self.nc] += 1 # background FN - - def matrix(self): - return self.matrix - - def tp_fp(self): - tp = self.matrix.diagonal() # true positives - fp = self.matrix.sum(1) - tp # false positives - # fn = self.matrix.sum(0) - tp # false negatives (missed detections) - return tp[:-1], fp[:-1] # remove background class - - def plot(self, normalize=True, save_dir='', names=()): - try: - import seaborn as sn - - array = self.matrix / ((self.matrix.sum(0).reshape(1, -1) + 1E-9) if normalize else 1) # normalize columns - array[array < 0.005] = np.nan # don't annotate (would appear as 0.00) - - fig = plt.figure(figsize=(12, 9), tight_layout=True) - nc, nn = self.nc, len(names) # number of classes, names - sn.set(font_scale=1.0 if nc < 50 else 0.8) # for label size - labels = (0 < nn < 99) and (nn == nc) # apply names to ticklabels - with warnings.catch_warnings(): - warnings.simplefilter('ignore') # suppress empty matrix RuntimeWarning: All-NaN slice encountered - sn.heatmap(array, - annot=nc < 30, - annot_kws={ - "size": 8}, - cmap='Blues', - fmt='.2f', - square=True, - vmin=0.0, - xticklabels=names + ['background FP'] if labels else "auto", - yticklabels=names + ['background FN'] if labels else "auto").set_facecolor((1, 1, 1)) - fig.axes[0].set_xlabel('True') - fig.axes[0].set_ylabel('Predicted') - fig.savefig(Path(save_dir) / 'confusion_matrix.png', dpi=250) - plt.close() - except Exception as e: - print(f'WARNING: ConfusionMatrix plot failure: {e}') - - def print(self): - for i in range(self.nc + 1): - print(' '.join(map(str, self.matrix[i]))) - - -def bbox_iou(box1, box2, xywh=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7): - # Returns Intersection over Union (IoU) of box1(1,4) to box2(n,4) - - # Get the coordinates of bounding boxes - if xywh: # transform from xywh to xyxy - (x1, y1, w1, h1), (x2, y2, w2, h2) = box1.chunk(4, 1), box2.chunk(4, 1) - w1_, h1_, w2_, h2_ = w1 / 2, h1 / 2, w2 / 2, h2 / 2 - b1_x1, b1_x2, b1_y1, b1_y2 = x1 - w1_, x1 + w1_, y1 - h1_, y1 + h1_ - b2_x1, b2_x2, b2_y1, b2_y2 = x2 - w2_, x2 + w2_, y2 - h2_, y2 + h2_ - else: # x1, y1, x2, y2 = box1 - b1_x1, b1_y1, b1_x2, b1_y2 = box1.chunk(4, 1) - b2_x1, b2_y1, b2_x2, b2_y2 = box2.chunk(4, 1) - w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 - w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 - - # Intersection area - inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \ - (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0) - - # Union Area - union = w1 * h1 + w2 * h2 - inter + eps - - # IoU - iou = inter / union - if CIoU or DIoU or GIoU: - cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width - ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height - if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1 - c2 = cw ** 2 + ch ** 2 + eps # convex diagonal squared - rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 + (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4 # center dist ** 2 - if CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47 - v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / (h2 + eps)) - torch.atan(w1 / (h1 + eps)), 2) - with torch.no_grad(): - alpha = v / (v - iou + (1 + eps)) - return iou - (rho2 / c2 + v * alpha) # CIoU - return iou - rho2 / c2 # DIoU - c_area = cw * ch + eps # convex area - return iou - (c_area - union) / c_area # GIoU https://arxiv.org/pdf/1902.09630.pdf - return iou # IoU - - -def box_area(box): - # box = xyxy(4,n) - return (box[2] - box[0]) * (box[3] - box[1]) - - -def box_iou(box1, box2, eps=1e-7): - # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py - """ - Return intersection-over-union (Jaccard index) of boxes. - Both sets of boxes are expected to be in (x1, y1, x2, y2) format. - Arguments: - box1 (Tensor[N, 4]) - box2 (Tensor[M, 4]) - Returns: - iou (Tensor[N, M]): the NxM matrix containing the pairwise - IoU values for every element in boxes1 and boxes2 - """ - - # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2) - (a1, a2), (b1, b2) = box1[:, None].chunk(2, 2), box2.chunk(2, 1) - inter = (torch.min(a2, b2) - torch.max(a1, b1)).clamp(0).prod(2) - - # IoU = inter / (area1 + area2 - inter) - return inter / (box_area(box1.T)[:, None] + box_area(box2.T) - inter + eps) - - -def bbox_ioa(box1, box2, eps=1e-7): - """ Returns the intersection over box2 area given box1, box2. Boxes are x1y1x2y2 - box1: np.array of shape(4) - box2: np.array of shape(nx4) - returns: np.array of shape(n) - """ - - # Get the coordinates of bounding boxes - b1_x1, b1_y1, b1_x2, b1_y2 = box1 - b2_x1, b2_y1, b2_x2, b2_y2 = box2.T - - # Intersection area - inter_area = (np.minimum(b1_x2, b2_x2) - np.maximum(b1_x1, b2_x1)).clip(0) * \ - (np.minimum(b1_y2, b2_y2) - np.maximum(b1_y1, b2_y1)).clip(0) - - # box2 area - box2_area = (b2_x2 - b2_x1) * (b2_y2 - b2_y1) + eps - - # Intersection over box2 area - return inter_area / box2_area - - -def wh_iou(wh1, wh2, eps=1e-7): - # Returns the nxm IoU matrix. wh1 is nx2, wh2 is mx2 - wh1 = wh1[:, None] # [N,1,2] - wh2 = wh2[None] # [1,M,2] - inter = torch.min(wh1, wh2).prod(2) # [N,M] - return inter / (wh1.prod(2) + wh2.prod(2) - inter + eps) # iou = inter / (area1 + area2 - inter) - - -# Plots ---------------------------------------------------------------------------------------------------------------- - - -def plot_pr_curve(px, py, ap, save_dir=Path('pr_curve.png'), names=()): - # Precision-recall curve - fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True) - py = np.stack(py, axis=1) - - if 0 < len(names) < 21: # display per-class legend if < 21 classes - for i, y in enumerate(py.T): - ax.plot(px, y, linewidth=1, label=f'{names[i]} {ap[i, 0]:.3f}') # plot(recall, precision) - else: - ax.plot(px, py, linewidth=1, color='grey') # plot(recall, precision) - - ax.plot(px, py.mean(1), linewidth=3, color='blue', label='all classes %.3f mAP@0.5' % ap[:, 0].mean()) - ax.set_xlabel('Recall') - ax.set_ylabel('Precision') - ax.set_xlim(0, 1) - ax.set_ylim(0, 1) - plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left") - fig.savefig(save_dir, dpi=250) - plt.close() - - -def plot_mc_curve(px, py, save_dir=Path('mc_curve.png'), names=(), xlabel='Confidence', ylabel='Metric'): - # Metric-confidence curve - fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True) - - if 0 < len(names) < 21: # display per-class legend if < 21 classes - for i, y in enumerate(py): - ax.plot(px, y, linewidth=1, label=f'{names[i]}') # plot(confidence, metric) - else: - ax.plot(px, py.T, linewidth=1, color='grey') # plot(confidence, metric) - - y = smooth(py.mean(0), 0.05) - ax.plot(px, y, linewidth=3, color='blue', label=f'all classes {y.max():.2f} at {px[y.argmax()]:.3f}') - ax.set_xlabel(xlabel) - ax.set_ylabel(ylabel) - ax.set_xlim(0, 1) - ax.set_ylim(0, 1) - plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left") - fig.savefig(save_dir, dpi=250) - plt.close() diff --git a/spaces/xin/PatentSolver/App/bin/ParameterExtractor.py b/spaces/xin/PatentSolver/App/bin/ParameterExtractor.py deleted file mode 100644 index 455ec9f8138d5b437a48bb3ebb6185db34a96f8e..0000000000000000000000000000000000000000 --- a/spaces/xin/PatentSolver/App/bin/ParameterExtractor.py +++ /dev/null @@ -1,51 +0,0 @@ -# -*- coding: utf-8 -*- - -import re -import nltk -import Levenshtein -from App.bin import constants - -class ParameterExtractor(object): - - def __init__(self, sentence): - self.sentence = sentence - - def clean_parameter(self, parameter): - line = re.sub(r'\s[a-zA-Z]$', r'', parameter) - line = line.strip() - return line - - def extract_parameters(self): - sentence = self.sentence - parameters_list = [] - with open(constants.ASSETS + "parameter_core", 'r') as l: - words_list = l.read().splitlines() - match_word = re.compile(r'(\b(?:%s)\b)' % '|'.join(words_list)) - - with open(constants.ASSETS + "exclude_from_parameters", 'r') as m: - not_included_words_list = m.read().splitlines() - match_not_included_word = re.compile(r'(\b(?:%s)\b)' % '|'.join(not_included_words_list)) - - parameter_indice = re.search(match_word, sentence) - if parameter_indice: - words = nltk.word_tokenize(sentence) - sentence = nltk.pos_tag(words) - grammar = """PARAMETER:{+
                ?+} - {+} - """ - parameter_parser = nltk.RegexpParser(grammar) - tree = parameter_parser.parse(sentence) - for subtree in tree.subtrees(): - if subtree.label() == 'PARAMETER': - parameter_candidate = " ".join(word for word, tag in subtree.leaves()) - parameter_candidate_indice = re.search(match_word, parameter_candidate) - not_parameter = re.search(match_not_included_word, parameter_candidate) - if parameter_candidate_indice and not not_parameter : - #parameter_candidate=self.clean_parameter(parameter_candidate) - parameters_list.append(parameter_candidate) - parameters_list = list(set(parameters_list)) - - - - return list(parameters_list) - diff --git a/spaces/xiongjie/u2net_rgba/README.md b/spaces/xiongjie/u2net_rgba/README.md deleted file mode 100644 index bd85100e4821602324f10771482dece7ba735302..0000000000000000000000000000000000000000 --- a/spaces/xiongjie/u2net_rgba/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: U2net_rgba -emoji: 📉 -colorFrom: yellow -colorTo: yellow -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/xp3857/Image_Restoration_Colorization/Global/util/visualizer.py b/spaces/xp3857/Image_Restoration_Colorization/Global/util/visualizer.py deleted file mode 100644 index 1a88df203aa95750ba911c77b32f6234863b8e79..0000000000000000000000000000000000000000 --- a/spaces/xp3857/Image_Restoration_Colorization/Global/util/visualizer.py +++ /dev/null @@ -1,143 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. - -import numpy as np -import os -import ntpath -import time -from . import util -#from . import html -import scipy.misc -try: - from StringIO import StringIO # Python 2.7 -except ImportError: - from io import BytesIO # Python 3.x - -class Visualizer(): - def __init__(self, opt): - # self.opt = opt - self.tf_log = opt.tf_log - self.use_html = opt.isTrain and not opt.no_html - self.win_size = opt.display_winsize - self.name = opt.name - if self.tf_log: - import tensorflow as tf - self.tf = tf - self.log_dir = os.path.join(opt.checkpoints_dir, opt.name, 'logs') - self.writer = tf.summary.FileWriter(self.log_dir) - - if self.use_html: - self.web_dir = os.path.join(opt.checkpoints_dir, opt.name, 'web') - self.img_dir = os.path.join(self.web_dir, 'images') - print('create web directory %s...' % self.web_dir) - util.mkdirs([self.web_dir, self.img_dir]) - self.log_name = os.path.join(opt.checkpoints_dir, opt.name, 'loss_log.txt') - with open(self.log_name, "a") as log_file: - now = time.strftime("%c") - log_file.write('================ Training Loss (%s) ================\n' % now) - - # |visuals|: dictionary of images to display or save - def display_current_results(self, visuals, epoch, step): - if self.tf_log: # show images in tensorboard output - img_summaries = [] - for label, image_numpy in visuals.items(): - # Write the image to a string - try: - s = StringIO() - except: - s = BytesIO() - scipy.misc.toimage(image_numpy).save(s, format="jpeg") - # Create an Image object - img_sum = self.tf.Summary.Image(encoded_image_string=s.getvalue(), height=image_numpy.shape[0], width=image_numpy.shape[1]) - # Create a Summary value - img_summaries.append(self.tf.Summary.Value(tag=label, image=img_sum)) - - # Create and write Summary - summary = self.tf.Summary(value=img_summaries) - self.writer.add_summary(summary, step) - - if self.use_html: # save images to a html file - for label, image_numpy in visuals.items(): - if isinstance(image_numpy, list): - for i in range(len(image_numpy)): - img_path = os.path.join(self.img_dir, 'epoch%.3d_%s_%d.jpg' % (epoch, label, i)) - util.save_image(image_numpy[i], img_path) - else: - img_path = os.path.join(self.img_dir, 'epoch%.3d_%s.jpg' % (epoch, label)) - util.save_image(image_numpy, img_path) - - # update website - webpage = html.HTML(self.web_dir, 'Experiment name = %s' % self.name, refresh=30) - for n in range(epoch, 0, -1): - webpage.add_header('epoch [%d]' % n) - ims = [] - txts = [] - links = [] - - for label, image_numpy in visuals.items(): - if isinstance(image_numpy, list): - for i in range(len(image_numpy)): - img_path = 'epoch%.3d_%s_%d.jpg' % (n, label, i) - ims.append(img_path) - txts.append(label+str(i)) - links.append(img_path) - else: - img_path = 'epoch%.3d_%s.jpg' % (n, label) - ims.append(img_path) - txts.append(label) - links.append(img_path) - if len(ims) < 10: - webpage.add_images(ims, txts, links, width=self.win_size) - else: - num = int(round(len(ims)/2.0)) - webpage.add_images(ims[:num], txts[:num], links[:num], width=self.win_size) - webpage.add_images(ims[num:], txts[num:], links[num:], width=self.win_size) - webpage.save() - - # errors: dictionary of error labels and values - def plot_current_errors(self, errors, step): - if self.tf_log: - for tag, value in errors.items(): - summary = self.tf.Summary(value=[self.tf.Summary.Value(tag=tag, simple_value=value)]) - self.writer.add_summary(summary, step) - - # errors: same format as |errors| of plotCurrentErrors - def print_current_errors(self, epoch, i, errors, t, lr): - message = '(epoch: %d, iters: %d, time: %.3f lr: %.5f) ' % (epoch, i, t, lr) - for k, v in errors.items(): - if v != 0: - message += '%s: %.3f ' % (k, v) - - print(message) - with open(self.log_name, "a") as log_file: - log_file.write('%s\n' % message) - - - def print_save(self,message): - - print(message) - - with open(self.log_name,"a") as log_file: - log_file.write('%s\n'%message) - - - # save image to the disk - def save_images(self, webpage, visuals, image_path): - image_dir = webpage.get_image_dir() - short_path = ntpath.basename(image_path[0]) - name = os.path.splitext(short_path)[0] - - webpage.add_header(name) - ims = [] - txts = [] - links = [] - - for label, image_numpy in visuals.items(): - image_name = '%s_%s.jpg' % (name, label) - save_path = os.path.join(image_dir, image_name) - util.save_image(image_numpy, save_path) - - ims.append(image_name) - txts.append(label) - links.append(image_name) - webpage.add_images(ims, txts, links, width=self.win_size) diff --git a/spaces/xswu/HPSv2/src/open_clip/model.py b/spaces/xswu/HPSv2/src/open_clip/model.py deleted file mode 100644 index e347c42fc8df6464ca28e59adadba61e53a38add..0000000000000000000000000000000000000000 --- a/spaces/xswu/HPSv2/src/open_clip/model.py +++ /dev/null @@ -1,461 +0,0 @@ -""" CLIP Model - -Adapted from https://github.com/openai/CLIP. Originally MIT License, Copyright (c) 2021 OpenAI. -""" -from dataclasses import dataclass -import logging -import math -from typing import Optional, Tuple, Union - -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn -from torch.utils.checkpoint import checkpoint - -from .hf_model import HFTextEncoder -from .modified_resnet import ModifiedResNet -from .timm_model import TimmModel -from .transformer import LayerNormFp32, LayerNorm, QuickGELU, Attention, VisionTransformer, TextTransformer -from .utils import to_2tuple - - -@dataclass -class CLIPVisionCfg: - layers: Union[Tuple[int, int, int, int], int] = 12 - width: int = 768 - head_width: int = 64 - mlp_ratio: float = 4.0 - patch_size: int = 16 - image_size: Union[Tuple[int, int], int] = 224 - ls_init_value: Optional[float] = None # layer scale initial value - patch_dropout: float = 0. # what fraction of patches to dropout during training (0 would mean disabled and no patches dropped) - 0.5 to 0.75 recommended in the paper for optimal results - input_patchnorm: bool = False # whether to use dual patchnorm - would only apply the input layernorm on each patch, as post-layernorm already exist in original clip vit design - global_average_pool: bool = False # whether to global average pool the last embedding layer, instead of using CLS token (https://arxiv.org/abs/2205.01580) - attentional_pool: bool = False # whether to use attentional pooler in the last embedding layer - n_queries: int = 256 # n_queries for attentional pooler - attn_pooler_heads: int = 8 # n heads for attentional_pooling - timm_model_name: str = None # a valid model name overrides layers, width, patch_size - timm_model_pretrained: bool = False # use (imagenet) pretrained weights for named model - timm_pool: str = 'avg' # feature pooling for timm model ('abs_attn', 'rot_attn', 'avg', '') - timm_proj: str = 'linear' # linear projection for timm model output ('linear', 'mlp', '') - timm_proj_bias: bool = False # enable bias final projection - timm_drop: float = 0. # head dropout - timm_drop_path: Optional[float] = None # backbone stochastic depth - output_tokens: bool = False - - -@dataclass -class CLIPTextCfg: - context_length: int = 77 - vocab_size: int = 49408 - width: int = 512 - heads: int = 8 - layers: int = 12 - ls_init_value: Optional[float] = None # layer scale initial value - hf_model_name: str = None - hf_tokenizer_name: str = None - hf_model_pretrained: bool = True - proj: str = 'mlp' - pooler_type: str = 'mean_pooler' - embed_cls: bool = False - pad_id: int = 0 - output_tokens: bool = False - - -def get_cast_dtype(precision: str): - cast_dtype = None - if precision == 'bf16': - cast_dtype = torch.bfloat16 - elif precision == 'fp16': - cast_dtype = torch.float16 - return cast_dtype - - -def _build_vision_tower( - embed_dim: int, - vision_cfg: CLIPVisionCfg, - quick_gelu: bool = False, - cast_dtype: Optional[torch.dtype] = None -): - if isinstance(vision_cfg, dict): - vision_cfg = CLIPVisionCfg(**vision_cfg) - - # OpenAI models are pretrained w/ QuickGELU but native nn.GELU is both faster and more - # memory efficient in recent PyTorch releases (>= 1.10). - # NOTE: timm models always use native GELU regardless of quick_gelu flag. - act_layer = QuickGELU if quick_gelu else nn.GELU - - if vision_cfg.timm_model_name: - visual = TimmModel( - vision_cfg.timm_model_name, - pretrained=vision_cfg.timm_model_pretrained, - pool=vision_cfg.timm_pool, - proj=vision_cfg.timm_proj, - proj_bias=vision_cfg.timm_proj_bias, - drop=vision_cfg.timm_drop, - drop_path=vision_cfg.timm_drop_path, - embed_dim=embed_dim, - image_size=vision_cfg.image_size, - ) - act_layer = nn.GELU # so that text transformer doesn't use QuickGELU w/ timm models - elif isinstance(vision_cfg.layers, (tuple, list)): - vision_heads = vision_cfg.width * 32 // vision_cfg.head_width - visual = ModifiedResNet( - layers=vision_cfg.layers, - output_dim=embed_dim, - heads=vision_heads, - image_size=vision_cfg.image_size, - width=vision_cfg.width, - ) - else: - vision_heads = vision_cfg.width // vision_cfg.head_width - norm_layer = LayerNormFp32 if cast_dtype in (torch.float16, torch.bfloat16) else LayerNorm - visual = VisionTransformer( - image_size=vision_cfg.image_size, - patch_size=vision_cfg.patch_size, - width=vision_cfg.width, - layers=vision_cfg.layers, - heads=vision_heads, - mlp_ratio=vision_cfg.mlp_ratio, - ls_init_value=vision_cfg.ls_init_value, - patch_dropout=vision_cfg.patch_dropout, - input_patchnorm=vision_cfg.input_patchnorm, - global_average_pool=vision_cfg.global_average_pool, - attentional_pool=vision_cfg.attentional_pool, - n_queries=vision_cfg.n_queries, - attn_pooler_heads=vision_cfg.attn_pooler_heads, - output_tokens=vision_cfg.output_tokens, - output_dim=embed_dim, - act_layer=act_layer, - norm_layer=norm_layer, - ) - - return visual - - -def _build_text_tower( - embed_dim: int, - text_cfg: CLIPTextCfg, - quick_gelu: bool = False, - cast_dtype: Optional[torch.dtype] = None, -): - if isinstance(text_cfg, dict): - text_cfg = CLIPTextCfg(**text_cfg) - - if text_cfg.hf_model_name: - text = HFTextEncoder( - text_cfg.hf_model_name, - output_dim=embed_dim, - proj=text_cfg.proj, - pooler_type=text_cfg.pooler_type, - pretrained=text_cfg.hf_model_pretrained, - output_tokens=text_cfg.output_tokens, - ) - else: - act_layer = QuickGELU if quick_gelu else nn.GELU - norm_layer = LayerNormFp32 if cast_dtype in (torch.float16, torch.bfloat16) else LayerNorm - - text = TextTransformer( - context_length=text_cfg.context_length, - vocab_size=text_cfg.vocab_size, - width=text_cfg.width, - heads=text_cfg.heads, - layers=text_cfg.layers, - ls_init_value=text_cfg.ls_init_value, - output_dim=embed_dim, - embed_cls=text_cfg.embed_cls, - output_tokens=text_cfg.output_tokens, - pad_id=text_cfg.pad_id, - act_layer=act_layer, - norm_layer=norm_layer, - ) - return text - - -class CLIP(nn.Module): - output_dict: torch.jit.Final[bool] - - def __init__( - self, - embed_dim: int, - vision_cfg: CLIPVisionCfg, - text_cfg: CLIPTextCfg, - quick_gelu: bool = False, - cast_dtype: Optional[torch.dtype] = None, - output_dict: bool = False, - ): - super().__init__() - self.output_dict = output_dict - self.visual = _build_vision_tower(embed_dim, vision_cfg, quick_gelu, cast_dtype) - - text = _build_text_tower(embed_dim, text_cfg, quick_gelu, cast_dtype) - self.transformer = text.transformer - self.vocab_size = text.vocab_size - self.token_embedding = text.token_embedding - self.positional_embedding = text.positional_embedding - self.ln_final = text.ln_final - self.text_projection = text.text_projection - self.register_buffer('attn_mask', text.attn_mask, persistent=False) - - self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0.07)) - - def lock_image_tower(self, unlocked_groups=0, freeze_bn_stats=False): - # lock image tower as per LiT - https://arxiv.org/abs/2111.07991 - self.visual.lock(unlocked_groups=unlocked_groups, freeze_bn_stats=freeze_bn_stats) - - def lock_text_tower(self, unlocked_layers: int = 0, freeze_layer_norm: bool = True): - locked_layers = [] - locked_layers.append(self.token_embedding) - self.positional_embedding.requires_grad = False - if unlocked_layers > 0: - locked_layers.append(self.transformer.resblocks[:-unlocked_layers]) - else: - locked_layers.append(self.transformer) - locked_layers.append(self.ln_final) - self.text_projection.requires_grad = False - - # freeze layers - for module in locked_layers: - for n, p in module.named_parameters(): - p.requires_grad = (not freeze_layer_norm) if "LayerNorm" in n.split(".") else False - - @torch.jit.ignore - def set_grad_checkpointing(self, enable=True): - self.visual.set_grad_checkpointing(enable) - self.transformer.grad_checkpointing = enable - - def encode_image(self, image, normalize: bool = False): - features = self.visual(image) - return F.normalize(features, dim=-1) if normalize else features - - def encode_text(self, text, normalize: bool = False): - cast_dtype = self.transformer.get_cast_dtype() - - x = self.token_embedding(text).to(cast_dtype) # [batch_size, n_ctx, d_model] - - x = x + self.positional_embedding.to(cast_dtype) - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x, attn_mask=self.attn_mask) - x = x.permute(1, 0, 2) # LND -> NLD - x = self.ln_final(x) # [batch_size, n_ctx, transformer.width] - # take features from the eot embedding (eot_token is the highest number in each sequence) - x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection - return F.normalize(x, dim=-1) if normalize else x - - def forward(self, image, text): - image_features = self.encode_image(image, normalize=True) - text_features = self.encode_text(text, normalize=True) - if self.output_dict: - return { - "image_features": image_features, - "text_features": text_features, - "logit_scale": self.logit_scale.exp() - } - return image_features, text_features, self.logit_scale.exp() - - -class CustomTextCLIP(nn.Module): - output_dict: torch.jit.Final[bool] - - def __init__( - self, - embed_dim: int, - vision_cfg: CLIPVisionCfg, - text_cfg: CLIPTextCfg, - quick_gelu: bool = False, - cast_dtype: Optional[torch.dtype] = None, - output_dict: bool = False, - ): - super().__init__() - self.output_dict = output_dict - self.visual = _build_vision_tower(embed_dim, vision_cfg, quick_gelu, cast_dtype) - self.text = _build_text_tower(embed_dim, text_cfg, quick_gelu, cast_dtype) - self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0.07)) - - def lock_image_tower(self, unlocked_groups=0, freeze_bn_stats=False): - # lock image tower as per LiT - https://arxiv.org/abs/2111.07991 - self.visual.lock(unlocked_groups=unlocked_groups, freeze_bn_stats=freeze_bn_stats) - - def lock_text_tower(self, unlocked_layers: int = 0, freeze_layer_norm: bool = True): - self.text.lock(unlocked_layers, freeze_layer_norm) - - @torch.jit.ignore - def set_grad_checkpointing(self, enable=True): - self.visual.set_grad_checkpointing(enable) - self.text.set_grad_checkpointing(enable) - - def encode_image(self, image, normalize: bool = False): - features = self.visual(image) - return F.normalize(features, dim=-1) if normalize else features - - def encode_text(self, text, normalize: bool = False): - features = self.text(text) - return F.normalize(features, dim=-1) if normalize else features - - def forward(self, image, text): - image_features = self.encode_image(image, normalize=True) - text_features = self.encode_text(text, normalize=True) - if self.output_dict: - return { - "image_features": image_features, - "text_features": text_features, - "logit_scale": self.logit_scale.exp() - } - return image_features, text_features, self.logit_scale.exp() - - -def convert_weights_to_lp(model: nn.Module, dtype=torch.float16): - """Convert applicable model parameters to low-precision (bf16 or fp16)""" - - def _convert_weights(l): - if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Linear)): - l.weight.data = l.weight.data.to(dtype) - if l.bias is not None: - l.bias.data = l.bias.data.to(dtype) - - if isinstance(l, (nn.MultiheadAttention, Attention)): - for attr in [*[f"{s}_proj_weight" for s in ["in", "q", "k", "v"]], "in_proj_bias", "bias_k", "bias_v"]: - tensor = getattr(l, attr) - if tensor is not None: - tensor.data = tensor.data.to(dtype) - - for name in ["text_projection", "proj"]: - if hasattr(l, name): - attr = getattr(l, name) - if attr is not None: - attr.data = attr.data.to(dtype) - - model.apply(_convert_weights) - - -convert_weights_to_fp16 = convert_weights_to_lp # backwards compat - - -# used to maintain checkpoint compatibility -def convert_to_custom_text_state_dict(state_dict: dict): - if 'text_projection' in state_dict: - # old format state_dict, move text tower -> .text - new_state_dict = {} - for k, v in state_dict.items(): - if any(k.startswith(p) for p in ( - 'text_projection', - 'positional_embedding', - 'token_embedding', - 'transformer', - 'ln_final', - )): - k = 'text.' + k - new_state_dict[k] = v - return new_state_dict - return state_dict - - -def build_model_from_openai_state_dict( - state_dict: dict, - quick_gelu=True, - cast_dtype=torch.float16, -): - vit = "visual.proj" in state_dict - - if vit: - vision_width = state_dict["visual.conv1.weight"].shape[0] - vision_layers = len( - [k for k in state_dict.keys() if k.startswith("visual.") and k.endswith(".attn.in_proj_weight")]) - vision_patch_size = state_dict["visual.conv1.weight"].shape[-1] - grid_size = round((state_dict["visual.positional_embedding"].shape[0] - 1) ** 0.5) - image_size = vision_patch_size * grid_size - else: - counts: list = [ - len(set(k.split(".")[2] for k in state_dict if k.startswith(f"visual.layer{b}"))) for b in [1, 2, 3, 4]] - vision_layers = tuple(counts) - vision_width = state_dict["visual.layer1.0.conv1.weight"].shape[0] - output_width = round((state_dict["visual.attnpool.positional_embedding"].shape[0] - 1) ** 0.5) - vision_patch_size = None - assert output_width ** 2 + 1 == state_dict["visual.attnpool.positional_embedding"].shape[0] - image_size = output_width * 32 - - embed_dim = state_dict["text_projection"].shape[1] - context_length = state_dict["positional_embedding"].shape[0] - vocab_size = state_dict["token_embedding.weight"].shape[0] - transformer_width = state_dict["ln_final.weight"].shape[0] - transformer_heads = transformer_width // 64 - transformer_layers = len(set(k.split(".")[2] for k in state_dict if k.startswith(f"transformer.resblocks"))) - - vision_cfg = CLIPVisionCfg( - layers=vision_layers, - width=vision_width, - patch_size=vision_patch_size, - image_size=image_size, - ) - text_cfg = CLIPTextCfg( - context_length=context_length, - vocab_size=vocab_size, - width=transformer_width, - heads=transformer_heads, - layers=transformer_layers, - ) - model = CLIP( - embed_dim, - vision_cfg=vision_cfg, - text_cfg=text_cfg, - quick_gelu=quick_gelu, # OpenAI models were trained with QuickGELU - cast_dtype=cast_dtype, - ) - - for key in ["input_resolution", "context_length", "vocab_size"]: - state_dict.pop(key, None) - - convert_weights_to_fp16(model) # OpenAI state dicts are partially converted to float16 - model.load_state_dict(state_dict) - return model.eval() - - -def trace_model(model, batch_size=256, device=torch.device('cpu')): - model.eval() - image_size = model.visual.image_size - example_images = torch.ones((batch_size, 3, image_size, image_size), device=device) - example_text = torch.zeros((batch_size, model.context_length), dtype=torch.int, device=device) - model = torch.jit.trace_module( - model, - inputs=dict( - forward=(example_images, example_text), - encode_text=(example_text,), - encode_image=(example_images,) - )) - model.visual.image_size = image_size - return model - - -def resize_pos_embed(state_dict, model, interpolation: str = 'bicubic', antialias: bool = True): - # Rescale the grid of position embeddings when loading from state_dict - old_pos_embed = state_dict.get('visual.positional_embedding', None) - if old_pos_embed is None or not hasattr(model.visual, 'grid_size'): - return - grid_size = to_2tuple(model.visual.grid_size) - extra_tokens = 1 # FIXME detect different token configs (ie no class token, or more) - new_seq_len = grid_size[0] * grid_size[1] + extra_tokens - if new_seq_len == old_pos_embed.shape[0]: - return - - if extra_tokens: - pos_emb_tok, pos_emb_img = old_pos_embed[:extra_tokens], old_pos_embed[extra_tokens:] - else: - pos_emb_tok, pos_emb_img = None, old_pos_embed - old_grid_size = to_2tuple(int(math.sqrt(len(pos_emb_img)))) - - logging.info('Resizing position embedding grid-size from %s to %s', old_grid_size, grid_size) - pos_emb_img = pos_emb_img.reshape(1, old_grid_size[0], old_grid_size[1], -1).permute(0, 3, 1, 2) - pos_emb_img = F.interpolate( - pos_emb_img, - size=grid_size, - mode=interpolation, - antialias=antialias, - align_corners=False, - ) - pos_emb_img = pos_emb_img.permute(0, 2, 3, 1).reshape(1, grid_size[0] * grid_size[1], -1)[0] - if pos_emb_tok is not None: - new_pos_embed = torch.cat([pos_emb_tok, pos_emb_img], dim=0) - else: - new_pos_embed = pos_emb_img - state_dict['visual.positional_embedding'] = new_pos_embed diff --git a/spaces/xuetao/bingo3/src/lib/hooks/use-bing.ts b/spaces/xuetao/bingo3/src/lib/hooks/use-bing.ts deleted file mode 100644 index dcdb1667ced0cba299b0825c0e91c4732411308c..0000000000000000000000000000000000000000 --- a/spaces/xuetao/bingo3/src/lib/hooks/use-bing.ts +++ /dev/null @@ -1,173 +0,0 @@ -'use client' - -import { useState, useCallback, useEffect, useMemo } from 'react' -import { useAtom, useAtomValue } from 'jotai' -import { chatFamily, bingConversationStyleAtom, GreetMessages, hashAtom, voiceAtom } from '@/state' -import { setConversationMessages } from './chat-history' -import { ChatMessageModel, BotId, FileItem } from '@/lib/bots/bing/types' -import { nanoid } from '../utils' -import { TTS } from '../bots/bing/tts' - -export function useBing(botId: BotId = 'bing') { - const chatAtom = useMemo(() => chatFamily({ botId, page: 'singleton' }), [botId]) - const [enableTTS] = useAtom(voiceAtom) - const speaker = useMemo(() => new TTS(), []) - const [hash, setHash] = useAtom(hashAtom) - const bingConversationStyle = useAtomValue(bingConversationStyleAtom) - const [chatState, setChatState] = useAtom(chatAtom) - const [input, setInput] = useState('') - const [attachmentList, setAttachmentList] = useState([]) - - const updateMessage = useCallback( - (messageId: string, updater: (message: ChatMessageModel) => void) => { - setChatState((draft) => { - const message = draft.messages.find((m) => m.id === messageId) - if (message) { - updater(message) - } - }) - }, - [setChatState], - ) - - const sendMessage = useCallback( - async (input: string, options = {}) => { - const botMessageId = nanoid() - const imageUrl = attachmentList?.[0]?.status === 'loaded' ? attachmentList[0].url : undefined - setChatState((draft) => { - const text = imageUrl ? `${input}\n\n![image](${imageUrl})` : input - draft.messages.push({ id: nanoid(), text, author: 'user' }, { id: botMessageId, text: '', author: 'bot' }) - setAttachmentList([]) - }) - const abortController = new AbortController() - setChatState((draft) => { - draft.generatingMessageId = botMessageId - draft.abortController = abortController - }) - speaker.reset() - await chatState.bot.sendMessage({ - prompt: input, - imageUrl: /\?bcid=([^&]+)/.test(imageUrl ?? '') ? `https://www.bing.com/images/blob?bcid=${RegExp.$1}` : imageUrl, - options: { - ...options, - bingConversationStyle, - }, - signal: abortController.signal, - onEvent(event) { - if (event.type === 'UPDATE_ANSWER') { - updateMessage(botMessageId, (message) => { - if (event.data.text.length > message.text.length) { - message.text = event.data.text - } - - if (event.data.spokenText && enableTTS) { - speaker.speak(event.data.spokenText) - } - - message.throttling = event.data.throttling || message.throttling - message.sourceAttributions = event.data.sourceAttributions || message.sourceAttributions - message.suggestedResponses = event.data.suggestedResponses || message.suggestedResponses - }) - } else if (event.type === 'ERROR') { - updateMessage(botMessageId, (message) => { - message.error = event.error - }) - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - }) - } else if (event.type === 'DONE') { - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - }) - } - }, - }) - }, - [botId, attachmentList, chatState.bot, setChatState, updateMessage], - ) - - const uploadImage = useCallback(async (imgUrl: string) => { - setAttachmentList([{ url: imgUrl, status: 'loading' }]) - const response = await chatState.bot.uploadImage(imgUrl, bingConversationStyle) - if (response?.blobId) { - setAttachmentList([{ url: `/api/blob?bcid=${response.blobId}`, status: 'loaded' }]) - } else { - setAttachmentList([{ url: imgUrl, status: 'error' }]) - } - }, [chatState.bot]) - - const resetConversation = useCallback(() => { - chatState.bot.resetConversation() - speaker.abort() - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - draft.messages = [{ author: 'bot', text: GreetMessages[Math.floor(GreetMessages.length * Math.random())], id: nanoid() }] - draft.conversationId = nanoid() - }) - }, [chatState.bot, setChatState]) - - const stopGenerating = useCallback(() => { - chatState.abortController?.abort() - if (chatState.generatingMessageId) { - updateMessage(chatState.generatingMessageId, (message) => { - if (!message.text && !message.error) { - message.text = 'Cancelled' - } - }) - } - setChatState((draft) => { - draft.generatingMessageId = '' - }) - }, [chatState.abortController, chatState.generatingMessageId, setChatState, updateMessage]) - - useEffect(() => { - if (chatState.messages.length) { - setConversationMessages(botId, chatState.conversationId, chatState.messages) - } - }, [botId, chatState.conversationId, chatState.messages]) - - useEffect(() => { - if (hash === 'reset') { - resetConversation() - setHash('') - } - }, [hash, setHash]) - - const chat = useMemo( - () => ({ - botId, - bot: chatState.bot, - isSpeaking: speaker.isSpeaking, - messages: chatState.messages, - sendMessage, - setInput, - input, - resetConversation, - generating: !!chatState.generatingMessageId, - stopGenerating, - uploadImage, - setAttachmentList, - attachmentList, - }), - [ - botId, - bingConversationStyle, - chatState.bot, - chatState.generatingMessageId, - chatState.messages, - speaker.isSpeaking, - setInput, - input, - setAttachmentList, - attachmentList, - resetConversation, - sendMessage, - stopGenerating, - ], - ) - - return chat -} diff --git a/spaces/ybelkada/interfacegan_pp/models/stylegan_tf_official/dnnlib/tflib/autosummary.py b/spaces/ybelkada/interfacegan_pp/models/stylegan_tf_official/dnnlib/tflib/autosummary.py deleted file mode 100644 index 43154f792e5ebe15ee6045a5acdfb279cebefcaa..0000000000000000000000000000000000000000 --- a/spaces/ybelkada/interfacegan_pp/models/stylegan_tf_official/dnnlib/tflib/autosummary.py +++ /dev/null @@ -1,184 +0,0 @@ -# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved. -# -# This work is licensed under the Creative Commons Attribution-NonCommercial -# 4.0 International License. To view a copy of this license, visit -# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to -# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA. - -"""Helper for adding automatically tracked values to Tensorboard. - -Autosummary creates an identity op that internally keeps track of the input -values and automatically shows up in TensorBoard. The reported value -represents an average over input components. The average is accumulated -constantly over time and flushed when save_summaries() is called. - -Notes: -- The output tensor must be used as an input for something else in the - graph. Otherwise, the autosummary op will not get executed, and the average - value will not get accumulated. -- It is perfectly fine to include autosummaries with the same name in - several places throughout the graph, even if they are executed concurrently. -- It is ok to also pass in a python scalar or numpy array. In this case, it - is added to the average immediately. -""" - -from collections import OrderedDict -import numpy as np -import tensorflow as tf -from tensorboard import summary as summary_lib -from tensorboard.plugins.custom_scalar import layout_pb2 - -from . import tfutil -from .tfutil import TfExpression -from .tfutil import TfExpressionEx - -_dtype = tf.float64 -_vars = OrderedDict() # name => [var, ...] -_immediate = OrderedDict() # name => update_op, update_value -_finalized = False -_merge_op = None - - -def _create_var(name: str, value_expr: TfExpression) -> TfExpression: - """Internal helper for creating autosummary accumulators.""" - assert not _finalized - name_id = name.replace("/", "_") - v = tf.cast(value_expr, _dtype) - - if v.shape.is_fully_defined(): - size = np.prod(tfutil.shape_to_list(v.shape)) - size_expr = tf.constant(size, dtype=_dtype) - else: - size = None - size_expr = tf.reduce_prod(tf.cast(tf.shape(v), _dtype)) - - if size == 1: - if v.shape.ndims != 0: - v = tf.reshape(v, []) - v = [size_expr, v, tf.square(v)] - else: - v = [size_expr, tf.reduce_sum(v), tf.reduce_sum(tf.square(v))] - v = tf.cond(tf.is_finite(v[1]), lambda: tf.stack(v), lambda: tf.zeros(3, dtype=_dtype)) - - with tfutil.absolute_name_scope("Autosummary/" + name_id), tf.control_dependencies(None): - var = tf.Variable(tf.zeros(3, dtype=_dtype), trainable=False) # [sum(1), sum(x), sum(x**2)] - update_op = tf.cond(tf.is_variable_initialized(var), lambda: tf.assign_add(var, v), lambda: tf.assign(var, v)) - - if name in _vars: - _vars[name].append(var) - else: - _vars[name] = [var] - return update_op - - -def autosummary(name: str, value: TfExpressionEx, passthru: TfExpressionEx = None) -> TfExpressionEx: - """Create a new autosummary. - - Args: - name: Name to use in TensorBoard - value: TensorFlow expression or python value to track - passthru: Optionally return this TF node without modifications but tack an autosummary update side-effect to this node. - - Example use of the passthru mechanism: - - n = autosummary('l2loss', loss, passthru=n) - - This is a shorthand for the following code: - - with tf.control_dependencies([autosummary('l2loss', loss)]): - n = tf.identity(n) - """ - tfutil.assert_tf_initialized() - name_id = name.replace("/", "_") - - if tfutil.is_tf_expression(value): - with tf.name_scope("summary_" + name_id), tf.device(value.device): - update_op = _create_var(name, value) - with tf.control_dependencies([update_op]): - return tf.identity(value if passthru is None else passthru) - - else: # python scalar or numpy array - if name not in _immediate: - with tfutil.absolute_name_scope("Autosummary/" + name_id), tf.device(None), tf.control_dependencies(None): - update_value = tf.placeholder(_dtype) - update_op = _create_var(name, update_value) - _immediate[name] = update_op, update_value - - update_op, update_value = _immediate[name] - tfutil.run(update_op, {update_value: value}) - return value if passthru is None else passthru - - -def finalize_autosummaries() -> None: - """Create the necessary ops to include autosummaries in TensorBoard report. - Note: This should be done only once per graph. - """ - global _finalized - tfutil.assert_tf_initialized() - - if _finalized: - return None - - _finalized = True - tfutil.init_uninitialized_vars([var for vars_list in _vars.values() for var in vars_list]) - - # Create summary ops. - with tf.device(None), tf.control_dependencies(None): - for name, vars_list in _vars.items(): - name_id = name.replace("/", "_") - with tfutil.absolute_name_scope("Autosummary/" + name_id): - moments = tf.add_n(vars_list) - moments /= moments[0] - with tf.control_dependencies([moments]): # read before resetting - reset_ops = [tf.assign(var, tf.zeros(3, dtype=_dtype)) for var in vars_list] - with tf.name_scope(None), tf.control_dependencies(reset_ops): # reset before reporting - mean = moments[1] - std = tf.sqrt(moments[2] - tf.square(moments[1])) - tf.summary.scalar(name, mean) - tf.summary.scalar("xCustomScalars/" + name + "/margin_lo", mean - std) - tf.summary.scalar("xCustomScalars/" + name + "/margin_hi", mean + std) - - # Group by category and chart name. - cat_dict = OrderedDict() - for series_name in sorted(_vars.keys()): - p = series_name.split("/") - cat = p[0] if len(p) >= 2 else "" - chart = "/".join(p[1:-1]) if len(p) >= 3 else p[-1] - if cat not in cat_dict: - cat_dict[cat] = OrderedDict() - if chart not in cat_dict[cat]: - cat_dict[cat][chart] = [] - cat_dict[cat][chart].append(series_name) - - # Setup custom_scalar layout. - categories = [] - for cat_name, chart_dict in cat_dict.items(): - charts = [] - for chart_name, series_names in chart_dict.items(): - series = [] - for series_name in series_names: - series.append(layout_pb2.MarginChartContent.Series( - value=series_name, - lower="xCustomScalars/" + series_name + "/margin_lo", - upper="xCustomScalars/" + series_name + "/margin_hi")) - margin = layout_pb2.MarginChartContent(series=series) - charts.append(layout_pb2.Chart(title=chart_name, margin=margin)) - categories.append(layout_pb2.Category(title=cat_name, chart=charts)) - layout = summary_lib.custom_scalar_pb(layout_pb2.Layout(category=categories)) - return layout - -def save_summaries(file_writer, global_step=None): - """Call FileWriter.add_summary() with all summaries in the default graph, - automatically finalizing and merging them on the first call. - """ - global _merge_op - tfutil.assert_tf_initialized() - - if _merge_op is None: - layout = finalize_autosummaries() - if layout is not None: - file_writer.add_summary(layout) - with tf.device(None), tf.control_dependencies(None): - _merge_op = tf.summary.merge_all() - - file_writer.add_summary(_merge_op.eval(), global_step) diff --git a/spaces/ygangang/VToonify/vtoonify/model/raft/train_standard.sh b/spaces/ygangang/VToonify/vtoonify/model/raft/train_standard.sh deleted file mode 100644 index 7f559b386b6b596ec14a94f0d8c13974309b7d80..0000000000000000000000000000000000000000 --- a/spaces/ygangang/VToonify/vtoonify/model/raft/train_standard.sh +++ /dev/null @@ -1,6 +0,0 @@ -#!/bin/bash -mkdir -p checkpoints -python -u train.py --name raft-chairs --stage chairs --validation chairs --gpus 0 1 --num_steps 100000 --batch_size 10 --lr 0.0004 --image_size 368 496 --wdecay 0.0001 -python -u train.py --name raft-things --stage things --validation sintel --restore_ckpt checkpoints/raft-chairs.pth --gpus 0 1 --num_steps 100000 --batch_size 6 --lr 0.000125 --image_size 400 720 --wdecay 0.0001 -python -u train.py --name raft-sintel --stage sintel --validation sintel --restore_ckpt checkpoints/raft-things.pth --gpus 0 1 --num_steps 100000 --batch_size 6 --lr 0.000125 --image_size 368 768 --wdecay 0.00001 --gamma=0.85 -python -u train.py --name raft-kitti --stage kitti --validation kitti --restore_ckpt checkpoints/raft-sintel.pth --gpus 0 1 --num_steps 50000 --batch_size 6 --lr 0.0001 --image_size 288 960 --wdecay 0.00001 --gamma=0.85 diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/bertweet/tokenization_bertweet.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/bertweet/tokenization_bertweet.py deleted file mode 100644 index 75975680dde522d99fe3f17a5093d3869b0edb22..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/bertweet/tokenization_bertweet.py +++ /dev/null @@ -1,782 +0,0 @@ -# coding=utf-8 -# Copyright (c) 2020, VinAI Research and the HuggingFace Inc. team. -# Copyright 2018 The Open AI Team Authors and The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" Tokenization classes for BERTweet""" - - -import html -import os -import re -from shutil import copyfile -from typing import List, Optional, Tuple - -import regex - -from ...tokenization_utils import PreTrainedTokenizer -from ...utils import logging - - -logger = logging.get_logger(__name__) - -VOCAB_FILES_NAMES = { - "vocab_file": "vocab.txt", - "merges_file": "bpe.codes", -} - -PRETRAINED_VOCAB_FILES_MAP = { - "vocab_file": { - "vinai/bertweet-base": "https://huggingface.co/vinai/bertweet-base/resolve/main/vocab.txt", - }, - "merges_file": { - "vinai/bertweet-base": "https://huggingface.co/vinai/bertweet-base/resolve/main/bpe.codes", - }, -} - -PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = { - "vinai/bertweet-base": 128, -} - - -def get_pairs(word): - """ - Return set of symbol pairs in a word. - - Word is represented as tuple of symbols (symbols being variable-length strings). - """ - pairs = set() - prev_char = word[0] - for char in word[1:]: - pairs.add((prev_char, char)) - prev_char = char - - pairs = set(pairs) - return pairs - - -class BertweetTokenizer(PreTrainedTokenizer): - """ - Constructs a BERTweet tokenizer, using Byte-Pair-Encoding. - - This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to - this superclass for more information regarding those methods. - - Args: - vocab_file (`str`): - Path to the vocabulary file. - merges_file (`str`): - Path to the merges file. - normalization (`bool`, *optional*, defaults to `False`): - Whether or not to apply a normalization preprocess. - bos_token (`str`, *optional*, defaults to `""`): - The beginning of sequence token that was used during pretraining. Can be used a sequence classifier token. - - - - When building a sequence using special tokens, this is not the token that is used for the beginning of - sequence. The token used is the `cls_token`. - - - - eos_token (`str`, *optional*, defaults to `""`): - The end of sequence token. - - - - When building a sequence using special tokens, this is not the token that is used for the end of sequence. - The token used is the `sep_token`. - - - - sep_token (`str`, *optional*, defaults to `""`): - The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for - sequence classification or for a text and a question for question answering. It is also used as the last - token of a sequence built with special tokens. - cls_token (`str`, *optional*, defaults to `""`): - The classifier token which is used when doing sequence classification (classification of the whole sequence - instead of per-token classification). It is the first token of the sequence when built with special tokens. - unk_token (`str`, *optional*, defaults to `""`): - The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this - token instead. - pad_token (`str`, *optional*, defaults to `""`): - The token used for padding, for example when batching sequences of different lengths. - mask_token (`str`, *optional*, defaults to `""`): - The token used for masking values. This is the token used when training this model with masked language - modeling. This is the token which the model will try to predict. - """ - - vocab_files_names = VOCAB_FILES_NAMES - pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP - max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES - - def __init__( - self, - vocab_file, - merges_file, - normalization=False, - bos_token="", - eos_token="", - sep_token="", - cls_token="", - unk_token="", - pad_token="", - mask_token="", - **kwargs, - ): - try: - from emoji import demojize - - self.demojizer = demojize - except ImportError: - logger.warning( - "emoji is not installed, thus not converting emoticons or emojis into text. Install emoji: pip3" - " install emoji==0.6.0" - ) - self.demojizer = None - - self.vocab_file = vocab_file - self.merges_file = merges_file - - self.encoder = {} - self.encoder[bos_token] = 0 - self.encoder[pad_token] = 1 - self.encoder[eos_token] = 2 - self.encoder[unk_token] = 3 - - self.add_from_file(vocab_file) - - self.decoder = {v: k for k, v in self.encoder.items()} - - with open(merges_file, encoding="utf-8") as merges_handle: - merges = merges_handle.read().split("\n")[:-1] - merges = [tuple(merge.split()[:-1]) for merge in merges] - self.bpe_ranks = dict(zip(merges, range(len(merges)))) - self.cache = {} - - self.normalization = normalization - self.tweetPreprocessor = TweetTokenizer() - self.special_puncts = {"’": "'", "…": "..."} - - super().__init__( - normalization=normalization, - bos_token=bos_token, - eos_token=eos_token, - sep_token=sep_token, - cls_token=cls_token, - unk_token=unk_token, - pad_token=pad_token, - mask_token=mask_token, - **kwargs, - ) - - def build_inputs_with_special_tokens( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None - ) -> List[int]: - """ - Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and - adding special tokens. A BERTweet sequence has the following format: - - - single sequence: ` X ` - - pair of sequences: ` A B ` - - Args: - token_ids_0 (`List[int]`): - List of IDs to which the special tokens will be added. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - - Returns: - `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. - """ - - if token_ids_1 is None: - return [self.cls_token_id] + token_ids_0 + [self.sep_token_id] - cls = [self.cls_token_id] - sep = [self.sep_token_id] - return cls + token_ids_0 + sep + sep + token_ids_1 + sep - - def get_special_tokens_mask( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False - ) -> List[int]: - """ - Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding - special tokens using the tokenizer `prepare_for_model` method. - - Args: - token_ids_0 (`List[int]`): - List of IDs. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - already_has_special_tokens (`bool`, *optional*, defaults to `False`): - Whether or not the token list is already formatted with special tokens for the model. - - Returns: - `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. - """ - - if already_has_special_tokens: - return super().get_special_tokens_mask( - token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True - ) - - if token_ids_1 is None: - return [1] + ([0] * len(token_ids_0)) + [1] - return [1] + ([0] * len(token_ids_0)) + [1, 1] + ([0] * len(token_ids_1)) + [1] - - def create_token_type_ids_from_sequences( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None - ) -> List[int]: - """ - Create a mask from the two sequences passed to be used in a sequence-pair classification task. BERTweet does - not make use of token type ids, therefore a list of zeros is returned. - - Args: - token_ids_0 (`List[int]`): - List of IDs. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - - Returns: - `List[int]`: List of zeros. - """ - - sep = [self.sep_token_id] - cls = [self.cls_token_id] - - if token_ids_1 is None: - return len(cls + token_ids_0 + sep) * [0] - return len(cls + token_ids_0 + sep + sep + token_ids_1 + sep) * [0] - - @property - def vocab_size(self): - return len(self.encoder) - - def get_vocab(self): - return dict(self.encoder, **self.added_tokens_encoder) - - def bpe(self, token): - if token in self.cache: - return self.cache[token] - word = tuple(token) - word = tuple(list(word[:-1]) + [word[-1] + ""]) - pairs = get_pairs(word) - - if not pairs: - return token - - while True: - bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf"))) - if bigram not in self.bpe_ranks: - break - first, second = bigram - new_word = [] - i = 0 - while i < len(word): - try: - j = word.index(first, i) - except ValueError: - new_word.extend(word[i:]) - break - else: - new_word.extend(word[i:j]) - i = j - - if word[i] == first and i < len(word) - 1 and word[i + 1] == second: - new_word.append(first + second) - i += 2 - else: - new_word.append(word[i]) - i += 1 - new_word = tuple(new_word) - word = new_word - if len(word) == 1: - break - else: - pairs = get_pairs(word) - word = "@@ ".join(word) - word = word[:-4] - self.cache[token] = word - return word - - def _tokenize(self, text): - """Tokenize a string.""" - if self.normalization: # Perform Tweet normalization before performing BPE - text = self.normalizeTweet(text) - - split_tokens = [] - words = re.findall(r"\S+\n?", text) - for token in words: - split_tokens.extend(list(self.bpe(token).split(" "))) - return split_tokens - - def normalizeTweet(self, tweet): - """ - Normalize a raw Tweet - """ - for punct in self.special_puncts: - tweet = tweet.replace(punct, self.special_puncts[punct]) - - tokens = self.tweetPreprocessor.tokenize(tweet) - normTweet = " ".join([self.normalizeToken(token) for token in tokens]) - - normTweet = ( - normTweet.replace("cannot ", "can not ") - .replace("n't ", " n't ") - .replace("n 't ", " n't ") - .replace("ca n't", "can't") - .replace("ai n't", "ain't") - ) - normTweet = ( - normTweet.replace("'m ", " 'm ") - .replace("'re ", " 're ") - .replace("'s ", " 's ") - .replace("'ll ", " 'll ") - .replace("'d ", " 'd ") - .replace("'ve ", " 've ") - ) - normTweet = ( - normTweet.replace(" p . m .", " p.m.") - .replace(" p . m ", " p.m ") - .replace(" a . m .", " a.m.") - .replace(" a . m ", " a.m ") - ) - - return " ".join(normTweet.split()) - - def normalizeToken(self, token): - """ - Normalize tokens in a Tweet - """ - lowercased_token = token.lower() - if token.startswith("@"): - return "@USER" - elif lowercased_token.startswith("http") or lowercased_token.startswith("www"): - return "HTTPURL" - elif len(token) == 1: - if token in self.special_puncts: - return self.special_puncts[token] - if self.demojizer is not None: - return self.demojizer(token) - else: - return token - else: - return token - - def _convert_token_to_id(self, token): - """Converts a token (str) in an id using the vocab.""" - return self.encoder.get(token, self.encoder.get(self.unk_token)) - - def _convert_id_to_token(self, index): - """Converts an index (integer) in a token (str) using the vocab.""" - return self.decoder.get(index, self.unk_token) - - def convert_tokens_to_string(self, tokens): - """Converts a sequence of tokens (string) in a single string.""" - out_string = " ".join(tokens).replace("@@ ", "").strip() - return out_string - - def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]: - if not os.path.isdir(save_directory): - logger.error(f"Vocabulary path ({save_directory}) should be a directory") - return - out_vocab_file = os.path.join( - save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"] - ) - out_merge_file = os.path.join( - save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["merges_file"] - ) - - if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file) and os.path.isfile(self.vocab_file): - copyfile(self.vocab_file, out_vocab_file) - elif not os.path.isfile(self.vocab_file): - with open(out_vocab_file, "wb") as fi: - content_spiece_model = self.sp_model.serialized_model_proto() - fi.write(content_spiece_model) - - if os.path.abspath(self.merges_file) != os.path.abspath(out_merge_file): - copyfile(self.merges_file, out_merge_file) - - return out_vocab_file, out_merge_file - - # def decode(self, token_ids, skip_special_tokens=False, clean_up_tokenization_spaces=True): - # filtered_tokens = ' '.join(self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip_special_tokens)) - # tokens_generated_so_far = re.sub('(@@ )', '', string=filtered_tokens) - # tokens_generated_so_far = re.sub('(@@ ?$)', '', string=tokens_generated_so_far) - # return ''.join(tokens_generated_so_far) - - def add_from_file(self, f): - """ - Loads a pre-existing dictionary from a text file and adds its symbols to this instance. - """ - if isinstance(f, str): - try: - with open(f, "r", encoding="utf-8") as fd: - self.add_from_file(fd) - except FileNotFoundError as fnfe: - raise fnfe - except UnicodeError: - raise Exception(f"Incorrect encoding detected in {f}, please rebuild the dataset") - return - - lines = f.readlines() - for lineTmp in lines: - line = lineTmp.strip() - idx = line.rfind(" ") - if idx == -1: - raise ValueError("Incorrect dictionary format, expected ' '") - word = line[:idx] - self.encoder[word] = len(self.encoder) - - -# Natural Language Toolkit: Twitter Tokenizer -# -# Copyright (C) 2001-2020 NLTK Project -# Author: Christopher Potts -# Ewan Klein (modifications) -# Pierpaolo Pantone <> (modifications) -# URL: http://nltk.org/ -# For license information, see LICENSE.TXT -# - - -""" -Twitter-aware tokenizer, designed to be flexible and easy to adapt to new domains and tasks. The basic logic is this: - -1. The tuple regex_strings defines a list of regular expression strings. - -2. The regex_strings strings are put, in order, into a compiled regular expression object called word_re. - -3. The tokenization is done by word_re.findall(s), where s is the user-supplied string, inside the tokenize() method of - the class Tokenizer. - -4. When instantiating Tokenizer objects, there is a single option: preserve_case. By default, it is set to True. If it - is set to False, then the tokenizer will lowercase everything except for emoticons. - -""" - - -###################################################################### -# -# import regex # https://github.com/nltk/nltk/issues/2409 -# import html -# -###################################################################### -# The following strings are components in the regular expression -# that is used for tokenizing. It's important that phone_number -# appears first in the final regex (since it can contain whitespace). -# It also could matter that tags comes after emoticons, due to the -# possibility of having text like -# -# <:| and some text >:) -# -# Most importantly, the final element should always be last, since it -# does a last ditch whitespace-based tokenization of whatever is left. - -# ToDo: Update with http://en.wikipedia.org/wiki/List_of_emoticons ? - -# This particular element is used in a couple ways, so we define it -# with a name: -# docstyle-ignore -EMOTICONS = r""" - (?: - [<>]? - [:;=8] # eyes - [\-o\*\']? # optional nose - [\)\]\(\[dDpP/\:\}\{@\|\\] # mouth - | - [\)\]\(\[dDpP/\:\}\{@\|\\] # mouth - [\-o\*\']? # optional nose - [:;=8] # eyes - [<>]? - | - <3 # heart - )""" - -# URL pattern due to John Gruber, modified by Tom Winzig. See -# https://gist.github.com/winzig/8894715 -# docstyle-ignore -URLS = r""" # Capture 1: entire matched URL - (?: - https?: # URL protocol and colon - (?: - /{1,3} # 1-3 slashes - | # or - [a-z0-9%] # Single letter or digit or '%' - # (Trying not to match e.g. "URI::Escape") - ) - | # or - # looks like domain name followed by a slash: - [a-z0-9.\-]+[.] - (?:[a-z]{2,13}) - / - ) - (?: # One or more: - [^\s()<>{}\[\]]+ # Run of non-space, non-()<>{}[] - | # or - \([^\s()]*?\([^\s()]+\)[^\s()]*?\) # balanced parens, one level deep: (...(...)...) - | - \([^\s]+?\) # balanced parens, non-recursive: (...) - )+ - (?: # End with: - \([^\s()]*?\([^\s()]+\)[^\s()]*?\) # balanced parens, one level deep: (...(...)...) - | - \([^\s]+?\) # balanced parens, non-recursive: (...) - | # or - [^\s`!()\[\]{};:'".,<>?«»“”‘’] # not a space or one of these punct chars - ) - | # OR, the following to match naked domains: - (?: - (?\s]+>""", - # ASCII Arrows - r"""[\-]+>|<[\-]+""", - # Twitter username: - r"""(?:@[\w_]+)""", - # Twitter hashtags: - r"""(?:\#+[\w_]+[\w\'_\-]*[\w_]+)""", - # email addresses - r"""[\w.+-]+@[\w-]+\.(?:[\w-]\.?)+[\w-]""", - # docstyle-ignore - # Remaining word types: - r""" - (?:[^\W\d_](?:[^\W\d_]|['\-_])+[^\W\d_]) # Words with apostrophes or dashes. - | - (?:[+\-]?\d+[,/.:-]\d+[+\-]?) # Numbers, including fractions, decimals. - | - (?:[\w_]+) # Words without apostrophes or dashes. - | - (?:\.(?:\s*\.){1,}) # Ellipsis dots. - | - (?:\S) # Everything else that isn't whitespace. - """, -) - -###################################################################### -# This is the core tokenizing regex: - -WORD_RE = regex.compile(r"""(%s)""" % "|".join(REGEXPS), regex.VERBOSE | regex.I | regex.UNICODE) - -# WORD_RE performs poorly on these patterns: -HANG_RE = regex.compile(r"([^a-zA-Z0-9])\1{3,}") - -# The emoticon string gets its own regex so that we can preserve case for -# them as needed: -EMOTICON_RE = regex.compile(EMOTICONS, regex.VERBOSE | regex.I | regex.UNICODE) - -# These are for regularizing HTML entities to Unicode: -ENT_RE = regex.compile(r"&(#?(x?))([^&;\s]+);") - - -###################################################################### -# Functions for converting html entities -###################################################################### - - -def _str_to_unicode(text, encoding=None, errors="strict"): - if encoding is None: - encoding = "utf-8" - if isinstance(text, bytes): - return text.decode(encoding, errors) - return text - - -def _replace_html_entities(text, keep=(), remove_illegal=True, encoding="utf-8"): - """ - Remove entities from text by converting them to their corresponding unicode character. - - Args: - text: - A unicode string or a byte string encoded in the given *encoding* (which defaults to 'utf-8'). - keep (list): - List of entity names which should not be replaced. This supports both numeric entities (`&#nnnn;` and - `&#hhhh;`) and named entities (such as ` ` or `>`). - remove_illegal (bool): - If `True`, entities that can't be converted are removed. Otherwise, entities that can't be converted are - kept "as is". - - Returns: A unicode string with the entities removed. - - See https://github.com/scrapy/w3lib/blob/master/w3lib/html.py - - Examples: - - ```python - >>> from nltk.tokenize.casual import _replace_html_entities - - >>> _replace_html_entities(b"Price: £100") - 'Price: \\xa3100' - - >>> print(_replace_html_entities(b"Price: £100")) - Price: £100 - ```""" - - def _convert_entity(match): - entity_body = match.group(3) - if match.group(1): - try: - if match.group(2): - number = int(entity_body, 16) - else: - number = int(entity_body, 10) - # Numeric character references in the 80-9F range are typically - # interpreted by browsers as representing the characters mapped - # to bytes 80-9F in the Windows-1252 encoding. For more info - # see: https://en.wikipedia.org/wiki/ISO/IEC_8859-1#Similar_character_sets - if 0x80 <= number <= 0x9F: - return bytes((number,)).decode("cp1252") - except ValueError: - number = None - else: - if entity_body in keep: - return match.group(0) - else: - number = html.entities.name2codepoint.get(entity_body) - if number is not None: - try: - return chr(number) - except (ValueError, OverflowError): - pass - - return "" if remove_illegal else match.group(0) - - return ENT_RE.sub(_convert_entity, _str_to_unicode(text, encoding)) - - -###################################################################### - - -class TweetTokenizer: - r""" - Examples: - - ```python - >>> # Tokenizer for tweets. - >>> from nltk.tokenize import TweetTokenizer - - >>> tknzr = TweetTokenizer() - >>> s0 = "This is a cooool #dummysmiley: :-) :-P <3 and some arrows < > -> <--" - >>> tknzr.tokenize(s0) - ['This', 'is', 'a', 'cooool', '#dummysmiley', ':', ':-)', ':-P', '<3', 'and', 'some', 'arrows', '<', '>', '->', '<--'] - - >>> # Examples using *strip_handles* and *reduce_len parameters*: - >>> tknzr = TweetTokenizer(strip_handles=True, reduce_len=True) - >>> s1 = "@remy: This is waaaaayyyy too much for you!!!!!!" - >>> tknzr.tokenize(s1) - [':', 'This', 'is', 'waaayyy', 'too', 'much', 'for', 'you', '!', '!', '!'] - ```""" - - def __init__(self, preserve_case=True, reduce_len=False, strip_handles=False): - self.preserve_case = preserve_case - self.reduce_len = reduce_len - self.strip_handles = strip_handles - - def tokenize(self, text): - """ - Args: - text: str - - Returns: list(str) A tokenized list of strings; concatenating this list returns the original string if - `preserve_case=False` - """ - # Fix HTML character entities: - text = _replace_html_entities(text) - # Remove username handles - if self.strip_handles: - text = remove_handles(text) - # Normalize word lengthening - if self.reduce_len: - text = reduce_lengthening(text) - # Shorten problematic sequences of characters - safe_text = HANG_RE.sub(r"\1\1\1", text) - # Tokenize: - words = WORD_RE.findall(safe_text) - # Possibly alter the case, but avoid changing emoticons like :D into :d: - if not self.preserve_case: - words = [x if EMOTICON_RE.search(x) else x.lower() for x in words] - return words - - -###################################################################### -# Normalization Functions -###################################################################### - - -def reduce_lengthening(text): - """ - Replace repeated character sequences of length 3 or greater with sequences of length 3. - """ - pattern = regex.compile(r"(.)\1{2,}") - return pattern.sub(r"\1\1\1", text) - - -def remove_handles(text): - """ - Remove Twitter username handles from text. - """ - pattern = regex.compile( - r"(? { - // Convert both to lowercase and remove punctuation and whitespace - const normalizedFileName = fileName - .toLowerCase() - .replace(/[.,/#!$%^&*;:{}=-_~()\s]/g, ""); - const normalizedStr = str - .toLowerCase() - .replace(/[.,/#!$%^&*;:{}=-_~()\s]/g, ""); - - // Return true if the normalized file name is included in the normalized string - return normalizedStr.includes(normalizedFileName); -}; diff --git a/spaces/youplala/StoreCopilot/src/components/layout.py b/spaces/youplala/StoreCopilot/src/components/layout.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/zetavg/LLaMA-LoRA-Tuner-UI-Demo/__init__.py b/spaces/zetavg/LLaMA-LoRA-Tuner-UI-Demo/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/zhang-wei-jian/docker/node_modules/nodemon/lib/rules/add.js b/spaces/zhang-wei-jian/docker/node_modules/nodemon/lib/rules/add.js deleted file mode 100644 index de85bb7fd52451a2848fab838d8eb5cc3ef21ab8..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/nodemon/lib/rules/add.js +++ /dev/null @@ -1,89 +0,0 @@ -'use strict'; - -var utils = require('../utils'); - -// internal -var reEscComments = /\\#/g; -// note that '^^' is used in place of escaped comments -var reUnescapeComments = /\^\^/g; -var reComments = /#.*$/; -var reEscapeChars = /[.|\-[\]()\\]/g; -var reAsterisk = /\*/g; - -module.exports = add; - -/** - * Converts file patterns or regular expressions to nodemon - * compatible RegExp matching rules. Note: the `rules` argument - * object is modified to include the new rule and new RegExp - * - * ### Example: - * - * var rules = { watch: [], ignore: [] }; - * add(rules, 'watch', '*.js'); - * add(rules, 'ignore', '/public/'); - * add(rules, 'watch', ':(\d)*\.js'); // note: string based regexp - * add(rules, 'watch', /\d*\.js/); - * - * @param {Object} rules containing `watch` and `ignore`. Also updated during - * execution - * @param {String} which must be either "watch" or "ignore" - * @param {String|RegExp} the actual rule. - */ -function add(rules, which, rule) { - if (!{ ignore: 1, watch: 1}[which]) { - throw new Error('rules/index.js#add requires "ignore" or "watch" as the ' + - 'first argument'); - } - - if (Array.isArray(rule)) { - rule.forEach(function (rule) { - add(rules, which, rule); - }); - return; - } - - // support the rule being a RegExp, but reformat it to - // the custom : format that we're working with. - if (rule instanceof RegExp) { - // rule = ':' + rule.toString().replace(/^\/(.*?)\/$/g, '$1'); - utils.log.error('RegExp format no longer supported, but globs are.'); - return; - } - - // remove comments and trim lines - // this mess of replace methods is escaping "\#" to allow for emacs temp files - - // first up strip comments and remove blank head or tails - rule = (rule || '').replace(reEscComments, '^^') - .replace(reComments, '') - .replace(reUnescapeComments, '#').trim(); - - var regexp = false; - - if (typeof rule === 'string' && rule.substring(0, 1) === ':') { - rule = rule.substring(1); - utils.log.error('RegExp no longer supported: ' + rule); - regexp = true; - } else if (rule.length === 0) { - // blank line (or it was a comment) - return; - } - - if (regexp) { - // rules[which].push(rule); - } else { - // rule = rule.replace(reEscapeChars, '\\$&') - // .replace(reAsterisk, '.*'); - - rules[which].push(rule); - // compile a regexp of all the rules for this ignore or watch - var re = rules[which].map(function (rule) { - return rule.replace(reEscapeChars, '\\$&') - .replace(reAsterisk, '.*'); - }).join('|'); - - // used for the directory matching - rules[which].re = new RegExp(re); - } -} diff --git a/spaces/zjxchina/vits_seki/utils.py b/spaces/zjxchina/vits_seki/utils.py deleted file mode 100644 index cc4897b77fbdcd2e2b11e6257a1b96f25577684f..0000000000000000000000000000000000000000 --- a/spaces/zjxchina/vits_seki/utils.py +++ /dev/null @@ -1,275 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict= {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})" .format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path, enable_logs=True): - if enable_logs: - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict(), - 'learning_rate': learning_rate}, checkpoint_path) - - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def oldest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[0] - print(x) - return x - - -def number_of_checkpoints(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - return len(f_list) - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10,2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - parser.add_argument('-d', '--dir', type=str, default="./logs", - help='Directory for output') - - args = parser.parse_args() - model_dir = os.path.join(args.dir, args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - hparams.model_name = args.model - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__()
              MethodPriceAvailabilityFeaturesRequirements
              Microsoft Store$4.99WorldwideUpdates, refunds, ratings, supportWindows 10 or later, Microsoft account
              Epic Games StoreFree (for a limited time)WorldwideUpdates, refunds, ratings, supportWindows 7 or later, Epic Games account
              Game's official website$4.99WorldwideNo updates, refunds, ratings, supportWindows XP or later, PayPal or credit card