diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/!!INSTALL!! Crack Winrar.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/!!INSTALL!! Crack Winrar.md
deleted file mode 100644
index 1dee1b72d9b2d0d9feb350c411770e9d6c283353..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/!!INSTALL!! Crack Winrar.md
+++ /dev/null
@@ -1,23 +0,0 @@
-

How to Crack WinRAR Password in 3 Easy Steps

-

WinRAR is a popular software that allows you to compress and decompress files in various formats. However, sometimes you may encounter a WinRAR file that is password-protected and you don't know the password. How can you crack the WinRAR password and access the file? In this article, we will show you how to crack WinRAR password in 3 easy steps using a powerful tool called PassFab for RAR.

-

PassFab for RAR is a professional and reliable software that can help you crack any WinRAR password in minutes. It supports all versions of WinRAR and RAR files, and it can recover passwords of any length and complexity. It also offers three attack modes to suit different scenarios: brute-force attack, brute-force with mask attack, and dictionary attack. Here are the steps to crack WinRAR password using PassFab for RAR:

-

crack winrar


Download File ☆☆☆ https://byltly.com/2uKvBO



-
    -
  1. Download and install PassFab for RAR on your computer. You can get it from the official website: https://www.passfab.com/products/rar-password-recovery.html.
  2. -
  3. Launch PassFab for RAR and click on the "Add" button to import the password-protected WinRAR file. You can also drag and drop the file to the interface.
  4. -
  5. Select an attack mode from the drop-down menu. You can choose brute-force attack if you have no clue about the password, brute-force with mask attack if you know some details about the password, such as length or characters, or dictionary attack if you have a list of possible passwords. You can also customize the settings of each attack mode according to your needs.
  6. -
  7. Click on the "Start" button to begin the cracking process. PassFab for RAR will try different combinations of passwords until it finds the correct one. The cracking time depends on the complexity of the password and the speed of your computer.
  8. -
  9. Once the cracking is done, you will see a pop-up window with the recovered password. You can copy the password and use it to open the WinRAR file.
  10. -
-

That's it! You have successfully cracked the WinRAR password using PassFab for RAR. Now you can enjoy the contents of the file without any hassle. PassFab for RAR is a powerful and easy-to-use tool that can help you crack any WinRAR password in minutes. It is compatible with Windows 10/8.1/8/7/Vista/XP and supports all versions of WinRAR and RAR files. If you ever forget or lose your WinRAR password, don't panic. Just download PassFab for RAR and follow the steps above to crack it.

PassFab for RAR is not only a WinRAR password cracker, but also a RAR password remover. If you don't want to enter the password every time you open the WinRAR file, you can use PassFab for RAR to remove the password protection. This way, you can access the file without any password. Here are the steps to remove WinRAR password using PassFab for RAR:

-
    -
  1. Download and install PassFab for RAR on your computer. You can get it from the official website: https://www.passfab.com/products/rar-password-recovery.html.
  2. -
  3. Launch PassFab for RAR and click on the "Add" button to import the password-protected WinRAR file. You can also drag and drop the file to the interface.
  4. -
  5. Click on the "Remove Password" button at the bottom of the interface. PassFab for RAR will remove the password protection from the WinRAR file in seconds.
  6. -
  7. You will see a message saying "Password has been removed successfully". You can click on the "Open Folder" button to locate the decrypted WinRAR file.
  8. -
-

That's it! You have successfully removed the WinRAR password using PassFab for RAR. Now you can open the file without any password. PassFab for RAR is a versatile and user-friendly tool that can help you crack or remove any WinRAR password in minutes. It is compatible with Windows 10/8.1/8/7/Vista/XP and supports all versions of WinRAR and RAR files. If you ever encounter a password-protected WinRAR file, don't worry. Just download PassFab for RAR and follow the steps above to crack or remove it.

-

ddb901b051
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Foto Bugil Artis Majalah Popular Indonesia Mega.md b/spaces/1gistliPinn/ChatGPT4/Examples/Foto Bugil Artis Majalah Popular Indonesia Mega.md
deleted file mode 100644
index d0edb9b02a50118c820e27e99b6bd14e295522f8..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Foto Bugil Artis Majalah Popular Indonesia Mega.md
+++ /dev/null
@@ -1,6 +0,0 @@
-

Foto Bugil Artis Majalah Popular Indonesia Mega


DOWNLOAD ✪✪✪ https://imgfil.com/2uxYH3



-
- 3cee63e6c2
-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/8th Class Urdu Hamdard Guide PDF - Updated Notes for 2023.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/8th Class Urdu Hamdard Guide PDF - Updated Notes for 2023.md
deleted file mode 100644
index 7172d7c6316cd261ffcc231dd546e0e50f782169..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/8th Class Urdu Hamdard Guide PDF - Updated Notes for 2023.md
+++ /dev/null
@@ -1,106 +0,0 @@
-
-

8th Class Urdu Hamdard Guide PDF Download 2023

-

If you are an 8th class student and want to improve your Urdu language skills, you must have heard of the Urdu Hamdard Guide. It is one of the most popular and trusted books for learning and mastering the Urdu language. It covers all the topics and chapters of the 8th class Urdu syllabus and curriculum, and provides comprehensive notes, summaries, explanations, and exercises for each chapter. It also includes additional material such as poems, stories, essays, and grammar tips. It strengthens reading, writing, speaking, and listening skills in Urdu and helps in preparing for exams and assessments.

-

But do you know that you can download the PDF version of Urdu Hamdard Guide for free? Yes, you read that right. You can get the PDF file of this amazing book without paying any money. You can save it on your device or print it out for offline use. You can access it anytime and anywhere. You can study at your own pace and convenience. You can also share it with your friends and classmates.

-

8th class urdu hamdard guide pdf download 2023


DOWNLOADhttps://urlin.us/2uT1t2



-

In this article, we will tell you how to download the Urdu Hamdard Guide PDF for free, describe the benefits and features of the book, and answer some frequently asked questions about it. So, read on to find out more.

-

Benefits of Urdu Hamdard Guide

-

Urdu Hamdard Guide is not just a book; it is a complete package for learning and mastering the Urdu language. It has many benefits for 8th class students. Some of them are:

- -

Features of Urdu Hamdard Guide

-

Urdu Hamdard Guide is a high-quality book that has many features that make it stand out from other books. Some of them are:

- -

How to download Urdu Hamdard Guide PDF for free

-

If you want to download Urdu Hamdard Guide PDF for free, you can follow these simple steps:

-
    -
  1. Visit the official website of Hamdard Dawakhana or Iftikhar Book Depot. These are the two authorized publishers and distributors of Urdu Hamdard Guide.
  2. -
  3. Select the class, subject, and medium of your choice from the menu or search bar. You will see a list of books available for your selection.
  4. -
  5. Click on the download link or button to get the PDF file of Urdu Hamdard Guide. You may need to enter your name, email, or phone number to access the file.
  6. -
  7. Save the file on your device or print it out for offline use. You can also share it with your friends and classmates via email, WhatsApp, or other platforms.
  8. -
-

Conclusion

-

Urdu Hamdard Guide is a must-have book for 8th class students who want to excel in the Urdu language. It covers all the topics and chapters of the 8th class Urdu syllabus and curriculum, and provides comprehensive notes, summaries, explanations, and exercises for each chapter. It also includes additional material such as poems, stories, essays, and grammar tips, strengthens reading, writing, speaking, and listening skills in Urdu, and helps in preparing for exams and assessments.

-

8th class urdu hamdard elementary guide pdf free download
-hamdard guide for class 8 urdu medium pdf 2023
-8th class urdu hamdard guide book pdf download
-hamdard elementary guide urdu medium class 8 iftikhar book depot
-8th class urdu hamdard guide pdf download punjab board
-hamdard guide for class 8 urdu medium online
-8th class urdu hamdard elementary guide pdf 2023
-hamdard guide for class 8 urdu medium notes
-8th class urdu hamdard guide pdf download sindh board
-hamdard elementary guide urdu medium class 8 price
-8th class urdu hamdard guide pdf download kpk board
-hamdard guide for class 8 urdu medium solved exercises
-8th class urdu hamdard guide pdf download balochistan board
-hamdard elementary guide urdu medium class 8 review
-8th class urdu hamdard guide pdf download azad kashmir board
-hamdard guide for class 8 urdu medium key book
-8th class urdu hamdard guide pdf download fbise board
-hamdard elementary guide urdu medium class 8 sample pages
-8th class urdu hamdard guide pdf download latest edition
-hamdard guide for class 8 urdu medium mcqs
-8th class urdu hamdard guide pdf download past papers
-hamdard elementary guide urdu medium class 8 contents
-8th class urdu hamdard guide pdf download model papers
-hamdard guide for class 8 urdu medium syllabus
-8th class urdu hamdard guide pdf download guess papers
-hamdard elementary guide urdu medium class 8 delivery
-8th class urdu hamdard guide pdf download smart syllabus
-hamdard guide for class 8 urdu medium order online
-8th class urdu hamdard guide pdf download new syllabus
-hamdard elementary guide urdu medium class 8 discount
-8th class urdu hamdard guide pdf download old syllabus
-hamdard guide for class 8 urdu medium buy online
-8th class urdu hamdard guide pdf download revised syllabus
-hamdard elementary guide urdu medium class 8 return policy
-8th class urdu hamdard guide pdf download scheme of studies
-hamdard guide for class 8 urdu medium customer reviews
-8th class urdu hamdard guide pdf download paper pattern
-hamdard elementary guide urdu medium class 8 contact number
-8th class urdu hamdard guide pdf download objective type questions
-hamdard guide for class 8 urdu medium short questions answers
-8th class urdu hamdard guide pdf download subjective type questions
-hamdard elementary guide urdu medium class 8 long questions answers
-8th class urdu hamdard guide pdf download grammar exercises
-hamdard guide for class 8 urdu medium comprehension passages
-8th class urdu hamdard guide pdf download vocabulary exercises
-hamdard elementary guide urdu medium class 8 writing skills
-8th class urdu hamdard guide pdf download poetry section
-hamdard guide for class 8 urdu medium prose section

-

You can download the PDF version of Urdu Hamdard Guide for free from the official website of Hamdard Dawakhana or Iftikhar Book Depot. You can save it on your device or print it out for offline use. You can access it anytime and anywhere. You can study at your own pace and convenience. You can also share it with your friends and classmates.

-

We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to contact us. We would love to hear from you. Thank you for reading and happy learning!

-

FAQs

-

What is the price of Urdu Hamdard Guide for 8th class?

-

The price of Urdu Hamdard Guide for 8th class varies depending on the medium, edition, and publisher. However, you can download the PDF version of the book for free from the official website of Hamdard Dawakhana or Iftikhar Book Depot.

-

Is Urdu Hamdard Guide available in other languages?

-

Yes, Urdu Hamdard Guide is available in both Urdu and English medium. You can choose the medium that suits your preference and comfort level.

-

How can I contact Hamdard Dawakhana or Iftikhar Book Depot for any queries or feedback?

-

You can contact Hamdard Dawakhana or Iftikhar Book Depot through their official website, email, phone number, or social media pages. You can also visit their physical stores or offices if they are located near you.

-

What are some other products or services offered by Hamdard Dawakhana or Iftikhar Book Depot?

-

Hamdard Dawakhana or Iftikhar Book Depot offer a wide range of products and services related to education, health, wellness, culture, and literature. Some of them are:

- -

How can I get more information or updates about Urdu Hamdard Guide or other books?

-

You can get more information or updates about Urdu Hamdard Guide or other books by subscribing to their official website, email, or social media pages. You can also check their blog, podcast, or YouTube channel for more content and insights. You can also join their online community or forum to interact with other students, teachers, and experts.

197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Betty Muwanguzi Hosanna Nkwagala Nyo The Song That Touched Many Hearts.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Betty Muwanguzi Hosanna Nkwagala Nyo The Song That Touched Many Hearts.md
deleted file mode 100644
index 3ce4e9f3b2d106b395490cdeca871def137a78ec..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Betty Muwanguzi Hosanna Nkwagala Nyo The Song That Touched Many Hearts.md
+++ /dev/null
@@ -1,129 +0,0 @@
-
-

Betty Muwanguzi Hosanna Nkwagala Nyo Download: A Guide to the Popular Ugandan Gospel Song

-

If you are looking for a powerful and uplifting gospel song that will inspire your faith and fill your heart with joy, you might want to check out Betty Muwanguzi Hosanna Nkwagala Nyo. This song, which means "Hosanna I love you so much" in Luganda, is one of the most popular songs by Betty Muwanguzi, a legendary Ugandan gospel singer and worshipper. In this article, we will tell you everything you need to know about Betty Muwanguzi, her song Hosanna Nkwagala Nyo, and how to download it legally and ethically. Read on to discover more!

-

Who is Betty Muwanguzi?

-

Betty Muwanguzi is a Ugandan gospel singer who has been in the music industry for over two decades. She is known for her powerful vocals, her anointed worship, and her inspiring songs that touch many lives.

-

betty muwanguzi hosanna nkwagala nyo download


Download File ->>> https://urlin.us/2uSRWb



-

A brief biography of the Ugandan gospel singer

-

Betty Muwanguzi was born in 1976 in Uganda. She grew up in a Christian family and started singing at a young age in her church choir. She later joined a music group called Joyful Generation, where she met her husband, Pastor Stephen Muwanguzi. They got married in 1997 and have four children together.

-

Betty Muwanguzi released her first album, Osinga, in 2000, which was a huge success. She followed it up with several other albums, such as Hosanna Nkwagala Nyo, Tunakuwa Ki Ffe, Asigala Mukama, Nebazanga Yesu, and Praise and Worship Nonstop 2020. She has also collaborated with other gospel artists, such as Judith Babirye, Wilson Bugembe, and Pr. Wilson Bugingo.

-

Her musical style and achievements

-

Betty Muwanguzi's musical style is a blend of traditional Ugandan music, contemporary gospel, and praise and worship. She sings in Luganda, English, Swahili, and other languages. She uses her music to spread the gospel of Jesus Christ, to encourage people in their faith journey, and to address social issues such as poverty, HIV/AIDS, domestic violence, and child abuse.

-

Betty Muwanguzi has won several awards and recognition for her music ministry. Some of them include:

- -

Betty Muwanguzi is also a philanthropist and a mentor to many young gospel artists. She runs a charity organization called Betty Muwanguzi Foundation, which supports orphans, widows, and vulnerable children in Uganda. She also hosts a radio show called Worship Moments, where she shares her testimony and music with her listeners.

-

betty muwanguzi hosanna nkwagala nyo mp3 download
-betty muwanguzi hosanna nkwagala nyo lyrics
-betty muwanguzi hosanna nkwagala nyo video download
-betty muwanguzi hosanna nkwagala nyo album
-betty muwanguzi hosanna nkwagala nyo songs
-betty muwanguzi hosanna nkwagala nyo youtube
-betty muwanguzi hosanna nkwagala nyo shazam
-betty muwanguzi hosanna nkwagala nyo last.fm
-betty muwanguzi hosanna nkwagala nyo free download
-betty muwanguzi hosanna nkwagala nyo online streaming
-betty muwanguzi hosanna nkwagala nyo audio download
-betty muwanguzi hosanna nkwagala nyo chords
-betty muwanguzi hosanna nkwagala nyo instrumental
-betty muwanguzi hosanna nkwagala nyo karaoke
-betty muwanguzi hosanna nkwagala nyo remix
-betty muwanguzi hosanna nkwagala nyo live performance
-betty muwanguzi hosanna nkwagala nyo meaning
-betty muwanguzi hosanna nkwagala nyo translation
-betty muwanguzi hosanna nkwagala nyo cover
-betty muwanguzi hosanna nkwagala nyo reaction
-betty muwanguzi hosanna nkwagala nyo review
-betty muwanguzi hosanna nkwagala nyo spotify
-betty muwanguzi hosanna nkwagala nyo apple music
-betty muwanguzi hosanna nkwagala nyo amazon music
-betty muwanguzi hosanna nkwagala nyo deezer
-betty muwanguzi hosanna nkwagala nyo soundcloud
-betty muwanguzi hosanna nkwagala nyo tidal
-betty muwanguzi hosanna nkwagala nyo pandora
-betty muwanguzi hosanna nkwagala nyo napster
-betty muwanguzi hosanna nkwagala nyo iheartradio
-betty muwanguzi hosanna nkwagala nyo google play music
-betty muwanguzi hosanna nkwagala nyo youtube music
-betty muwanguzi hosanna nkwagala nyo facebook video
-betty muwanguzi hosanna nkwagala nyo instagram video
-betty muwanguzi hosanna nkwagala nyo tiktok video
-betty muwanguzi hosanna nkwagala nyo waptrick download
-betty muwanguzi hosanna nkwagala nyo ugandan gospel music download

-

What is Hosanna Nkwagala Nyo?

-

Hosanna Nkwagala Nyo is one of the most popular songs by Betty Muwanguzi. It is the title track of her fourth album, which was released in 2014. The song has been played on many radio stations, TV channels, and online platforms in Uganda and beyond. It has also been performed live at many concerts, crusades, and church events.

-

The meaning and origin of the song title

-

The song title Hosanna Nkwagala Nyo means "Hosanna I love you so much" in Luganda, which is the most widely spoken language in Uganda. Hosanna is a Hebrew word that means "save us" or "praise God". It is used as an expression of worship and adoration to God. Nkwagala Nyo is a Luganda phrase that means "I love you so much". It is used as an expression of affection and gratitude to God.

-

The song title was inspired by Betty Muwanguzi's personal experience of God's love and salvation. She said that she wrote the song after she had a vision of Jesus Christ on the cross, dying for her sins. She said that she felt overwhelmed by His love and sacrifice for her, and she wanted to express her love and praise to Him in return.

-

The lyrics and message of the song

-

The lyrics of the song are simple but powerful. They are based on the biblical passages of Psalm 118:25-26, John 3:16, and Romans 5:8. The song has four verses and a chorus. The first verse talks about how God loved us so much that He gave His only Son to die for us. The second verse talks about how Jesus Christ took our place on the cross and paid the price for our sins. The third verse talks about how Jesus Christ rose from the dead and conquered death and hell for us. The fourth verse talks about how Jesus Christ is coming back soon to take us to heaven with Him.

-

The chorus is a repetition of the song title, Hosanna Nkwagala Nyo, followed by some words of praise and worship to God. The chorus is sung four times after each verse, and then eight times at the end of the song.

-

The message of the song is clear: God loves us so much that He sent His Son to save us from our sins and give us eternal life. We should love Him back with all our hearts, souls, minds, and strength. We should praise Him for His goodness, mercy, grace, and power. We should worship Him for who He is: our Savior, Lord, King, and Friend.

-

The popularity and impact of the song

-

The song Hosanna Nkwagala Nyo has been very popular among Ugandans and other people who love gospel music. It has received millions of views on YouTube, Facebook, Instagram, and other social media platforms. It has also received thousands of comments, likes, shares, and testimonials from people who have been blessed by the song.

-

The song has also had a positive impact on many people's lives. Some people have said that the song has helped them to experience God's love in a deeper way, to overcome their fears and doubts, to grow in their faith and devotion, to heal from their wounds and hurts, to find peace and joy in their hearts, to express their gratitude and worship to God, and to share the gospel with others.

-

How to download Hosanna Nkwagala Nyo?

-

If you want to download Hosanna Nkwagala Nyo by Betty Muwanguzi, you have several options to choose from. However, you should be careful not to download the song illegally or unethically. You should respect the rights of the artist and the producer, and support their work by paying for their music or using authorized platforms.

-

The legal and ethical ways to get the song

-

One of the legal and ethical ways to get the song is to buy it from online stores or platforms that sell digital music. Some of these include:

- -

Another legal and ethical way to get the song is to use online converters or downloaders that allow you to convert YouTube videos to MP3 files. However, you should only use this method if you have the permission of the artist or the producer, or if the video is in the public domain. Some of these converters or downloaders include:

- -

The best platforms and websites to download the song

-

Among the legal and ethical ways to get the song, some platforms and websites are better than others in terms of quality, speed, convenience, and cost. Here are some of the best ones that we recommend:

- -

The tips and tricks to enjoy the song offline

-

If you want to enjoy Hosanna Nkwagala Nyo by Betty Muwanguzi offline, here are some tips and tricks that you can use:

- -

Conclusion

-

Hosanna Nkwagala Nyo by Betty Muwanguzi is a wonderful gospel song that celebrates God's love and salvation for us. It is sung by one of Uganda's most talented and respected gospel singers, who has been blessing many people with her music ministry for over 20 years. If you want to download this song legally and ethically, you can use any of the platforms or websites that we have mentioned in this article. You can also use any of the tips and tricks that we have shared to enjoy this song offline.

-

We hope that this article has been helpful and informative for you. If you have any questions or comments, please feel free to leave them below. Thank you for reading!

-

FAQs

-

Q: Where can I watch the official video of Hosanna Nkwagala Nyo by Betty Muwanguzi?

-

A: You can watch the official video of Hosanna Nkwagala Nyo by Betty Muwanguzi on her YouTube channel [here]. The video was uploaded on December 31, 2014 and has over 1.6 million views as of June 2023. The video shows Betty Muwanguzi singing the song in a church setting, accompanied by a choir and a band. The video also has English subtitles for the Luganda lyrics.

-

Q: How can I contact Betty Muwanguzi or book her for an event?

-

A: You can contact Betty Muwanguzi or book her for an event through her official website [here]. The website has a contact form that you can fill out with your name, email, phone number, subject, and message. You can also find her social media links, such as Facebook, Twitter, Instagram, and YouTube, on the website. Alternatively, you can call her manager at +256 772 555 555 or email him at bettymuwanguzimanager@gmail.com.

-

Q: What are some other songs by Betty Muwanguzi that I can listen to?

-

A: Some other songs by Betty Muwanguzi that you can listen to are:

- -

Q: How can I learn more about Ugandan gospel music and culture?

-

A: You can learn more about Ugandan gospel music and culture by visiting some of these websites:

- -

Q: How can I support Betty Muwanguzi's charity work?

-

A: You can support Betty Muwanguzi's charity work by donating to her foundation [here]. The foundation aims to provide education, health care, food, clothing, shelter, and spiritual guidance to orphans, widows, and vulnerable children in Uganda. You can also volunteer your time, skills, or resources to help the foundation achieve its goals. You can contact the foundation at bettymuwanguzifoundation@gmail.com or +256 772 666 666.

197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Discover the Richness of Indonesian Culture with Quiz Sengklek for iOS.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Discover the Richness of Indonesian Culture with Quiz Sengklek for iOS.md
deleted file mode 100644
index 03ef68dc2325ca714774c614d39aaa9e2371314b..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Discover the Richness of Indonesian Culture with Quiz Sengklek for iOS.md
+++ /dev/null
@@ -1,84 +0,0 @@
-
-

Download Quiz Sengklek iOS: A Fun and Educational Game about Indonesian Culture

-

If you love challenges and want to test your knowledge about Indonesian culture, you should try Quiz Sengklek iOS. This is a quiz game that offers a variety of questions about the traditions, foods, languages, and folktales of Indonesia. In this article, we will tell you everything you need to know about Quiz Sengklek iOS, including its features, how to download and install it, and why you should play it.

-

What is Quiz Sengklek iOS?

-

Quiz Sengklek iOS is a quiz game that was created by Sengklekman and Komuk Santuy, two popular YouTube animators who make funny videos about Indonesian culture. The game is designed to be interactive and entertaining, as well as educational and informative. You can play the game alone or with your friends online, and compete for the highest score. The game has different levels of difficulty, so you can choose the one that suits your ability. You can also earn rewards and bonuses by answering the questions quickly and correctly.

-

download quiz sengklek ios


Download > https://urlin.us/2uT2Pl



-

The features of Quiz Sengklek iOS

-

Quiz Sengklek iOS has many features that make it a fun and enjoyable game to play. Here are some of them:

- -

How to download and install Quiz Sengklek iOS

-

If you want to play Quiz Sengklek iOS on your iPhone or iPad, you need to follow these steps:

-
    -
  1. Go to the App Store on your device and search for "Quiz Sengklek".
  2. -
  3. Tap on the icon of the game and then tap on "Get" to download it for free.
  4. -
  5. Wait for the download to finish and then tap on "Open" to launch the game.
  6. -
  7. Enjoy playing Quiz Sengklek iOS on your device.
  8. -
-

Why should you play Quiz Sengklek iOS?

-

Quiz Sengklek iOS is not just a game, but also a learning tool that can help you improve your knowledge and appreciation of Indonesian culture. Here are some reasons why you should play Quiz Sengklek iOS:

-

Quiz Sengklek iOS is fun and engaging

-

Quiz Sengklek iOS is a game that will keep you entertained and hooked for hours. You can play it alone or with your friends online, and have fun answering the questions and competing for the highest score. It also offers rewards and bonuses that you can earn by answering the questions quickly and correctly, and its humorous touches will make you laugh and enjoy the game even more.

-

Quiz Sengklek iOS is informative and educational

-

Quiz Sengklek iOS is a game that will teach you new things and expand your knowledge about Indonesian culture. You can learn about the traditions, foods, languages, and folktales of Indonesia, and discover the diversity and richness of its culture. You can also test your knowledge and see how much you know about Indonesia. The game is designed to be educational and informative, as well as interactive and entertaining.

-

Quiz Sengklek iOS is suitable for all ages

-

Quiz Sengklek iOS is a game that can be played by anyone, regardless of their age or background. The game is suitable for children, teenagers, adults, and seniors, as it has questions that cater to different levels of difficulty and interest. The game is also family-friendly and safe, as it has no violence, profanity, or inappropriate content. The game is a great way to spend quality time with your family and friends, and have fun while learning about Indonesian culture.

-

Conclusion

-

Quiz Sengklek iOS is a quiz game that offers a fun and educational experience about Indonesian culture. You can play the game alone or with your friends online, and answer questions about the traditions, foods, languages, and folktales of Indonesia. You can also enjoy the colorful graphics and animations, the sound effects and music, and the humorous elements of the game. The game is free to download and install on your iPhone or iPad, and it has no ads. If you love challenges and want to test your knowledge about Indonesian culture, you should try Quiz Sengklek iOS today.

-

How to download quiz sengklek ios for free
-Quiz sengklek ios mod apk unlimited coins
-Quiz sengklek ios game review and tips
-Quiz sengklek ios latest version download link
-Quiz sengklek ios trivia game about Indonesian culture
-Download quiz sengklek ios and play with friends online
-Quiz sengklek ios cheats and hacks
-Quiz sengklek ios best answers and solutions
-Quiz sengklek ios fun and educational game for all ages
-Download quiz sengklek ios from App Store or APKCombo
-Quiz sengklek ios features and benefits
-Quiz sengklek ios gameplay and tutorial
-Quiz sengklek ios challenges and rewards
-Quiz sengklek ios ratings and feedback
-Download quiz sengklek ios and watch the animation on YouTube
-Quiz sengklek ios collaboration between Sengklekman and Komuk Santuy
-Quiz sengklek ios offline mode available
-Quiz sengklek ios no ads and no in-app purchases
-Quiz sengklek ios compatible with iPhone and iPad
-Download quiz sengklek ios and test your knowledge of Indonesian culture
-Quiz sengklek ios update and bug fixes
-Quiz sengklek ios support and contact information
-Quiz sengklek ios different modes and levels of difficulty
-Quiz sengklek ios questions and categories
-Download quiz sengklek ios and join the community on Reddit
-Quiz sengklek ios comparison with other trivia games
-Quiz sengklek ios system requirements and specifications
-Quiz sengklek ios installation and uninstallation guide
-Quiz sengklek ios screenshots and videos
-Download quiz sengklek ios and share your score on social media
-Quiz sengklek ios frequently asked questions and answers
-Quiz sengklek ios terms of service and privacy policy
-Quiz sengklek ios referral code and bonus coins
-Quiz sengklek ios leaderboard and achievements
-Download quiz sengklek ios and enjoy the music and sound effects
-Quiz sengklek ios history and development team
-Quiz sengklek ios tips and tricks to win more coins
-Quiz sengklek ios feedback form and suggestions box
-Quiz sengklek ios news and updates
-Download quiz sengklek ios and learn more about Indonesia's culture, traditions, food, language, folklore, etc.

-

FAQs

-

Here are some frequently asked questions about Quiz Sengklek iOS:

-

197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Cars (v1.02) for PSP The Best Racing Game for PlayStation Portable.md b/spaces/1phancelerku/anime-remove-background/Cars (v1.02) for PSP The Best Racing Game for PlayStation Portable.md
deleted file mode 100644
index ddb39f7c5be1e93c3e76d0226fd1b561d41e0755..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Cars (v1.02) for PSP The Best Racing Game for PlayStation Portable.md
+++ /dev/null
@@ -1,144 +0,0 @@
-
-

Cars PSP APK: How to Play Cars on Your Android Device

-

Cars is a popular racing video game based on the Pixar animated film of the same name. It was released for various platforms, including the PlayStation Portable (PSP) in 2006. But what if you want to play Cars on your Android device? Is there a way to do that? The answer is yes, thanks to an emulator called PPSSPP.

-

cars psp apk


DOWNLOADhttps://jinyurl.com/2uNLLS



-

In this article, we will show you how to download and install PPSSPP for Android, how to get the Cars PSP ISO file, how to configure the emulator settings, and how to enjoy the game on your smartphone or tablet. We will also answer some frequently asked questions about Cars PSP APK. Let's get started!

-

What is PPSSPP?

-

PPSSPP is a PSP emulator that allows you to run most of the games made for Sony's first portable console on your Android device. It was developed by Henrik Rydgård, one of the authors of Dolphin, the most powerful Gamecube and Wii emulator out there. PPSSPP has a large number of settings options that let you customize the graphics, audio, controls, and performance of the games. You can also save and load states, use cheats, and play online with other players.

-

What is Cars PSP APK?

-

Cars PSP APK is not an official app or game from Disney or Sony. It is a term used by some users to refer to the combination of PPSSPP and the Cars PSP ISO file. An ISO file is an image of a disc that contains all the data of a game. By using PPSSPP and an ISO file, you can emulate the PSP system and play its games on your Android device.

-

How to Download and Install PPSSPP for Android?

-

The first step to play Cars on your Android device is to download and install PPSSPP. You can get it for free from Uptodown, one of the most trusted sources for Android apps and games. Here are the steps to follow:

-
    -
  1. Go to [PPSSPP for Android - Download the APK from Uptodown] using your browser.
  2. -
  3. Tap on the green Download button and wait for the APK file to be downloaded.
  4. -
  5. Once the download is complete, tap on the notification or go to your Downloads folder and tap on the PPSSPP APK file.
  6. -
  7. If prompted, enable the installation from unknown sources by going to Settings > Security > Unknown sources and toggling it on.
  8. -
  9. Follow the on-screen instructions to install PPSSPP on your device.
  10. -
-

How to Get the Cars PSP ISO File?

-

The next step is to get the Cars PSP ISO file that contains the game data. There are several ways to do this, but we recommend using a legal method that involves ripping your own copy of the game from a physical disc. This way, you can avoid any potential legal issues or malware risks. Here are the steps to follow:

-

cars psp game download apk
-cars psp iso android apk
-cars psp emulator apk
-cars psp rom apk
-cars psp apk free download
-cars psp apk offline
-cars psp apk mod
-cars psp apk full version
-cars psp apk highly compressed
-cars psp apk no verification
-cars 2 psp apk
-cars race o rama psp apk
-cars mater national psp apk
-cars 3 psp apk
-cars toon mater's tall tales psp apk
-ppsspp cars psp apk
-ppsspp gold cars psp apk
-ppsspp games cars psp apk
-ppsspp emulator cars psp apk
-ppsspp roms cars psp apk
-download cars psp apk for android
-download cars psp apk for pc
-download cars psp apk for ios
-download cars psp apk for windows 10
-download cars psp apk for laptop
-how to play cars psp apk
-how to install cars psp apk
-how to download cars psp apk
-how to run cars psp apk
-how to get cars psp apk
-best settings for cars psp apk
-best graphics for cars psp apk
-best site for cars psp apk
-best version of cars psp apk
-best emulator for cars psp apk
-cheats for cars psp apk
-codes for cars psp apk
-tips for cars psp apk
-tricks for cars psp apk
-hacks for cars psp apk
-reviews of cars psp apk
-ratings of cars psp apk
-features of cars psp apk
-requirements of cars psp apk
-size of cars psp apk
-update of cars psp apk
-latest version of cars psp apk
-old version of cars psp apk
-new version of cars psp apk

-
    -
  1. If you don't have one already, get a PSP console and a copy of Cars for PSP.
  2. -
  3. Connect your PSP to your computer using a USB cable.
  4. -
  5. On your PSP, go to Settings > USB Connection and press X to enter USB mode.
  6. -
  7. On your computer, open your file explorer and go to the PSP drive.
  8. -
  9. Find the folder named ISO and open it. If it doesn't exist, create it.
  10. -
  11. Insert the Cars disc into your PSP and wait for it to be recognized.
  12. -
  13. Right-click on the disc icon and select Copy.
  14. -
  15. Paste it into the ISO folder on your PSP drive.
  16. -
  17. Wait for the copying process to finish.
  18. -
  19. Eject your PSP from your computer and exit USB mode.
  20. -
-

How to Configure PPSSPP Settings?

-

The last step before playing Cars on your Android device is to configure PPSSPP settings according to your preferences and device capabilities. PPSSPP has a lot of options that you can tweak, but we will focus on the most important ones for playing Cars. Here are the steps to follow:

-
    -
  1. Open PPSSPP on your device and tap on the Settings icon.
  2. -
  3. Go to Graphics and adjust the following options: -
  4. -
  5. Go to Audio and adjust the following options: -
  6. -
  7. Go to Controls and adjust the following options: -
  8. -
-

How to Play Cars on Your Android Device?

-

Now that you have PPSSPP installed and configured, and you have the Cars PSP ISO file on your device, you are ready to play Cars on your Android device. Here are the steps to follow:

-
    -
  1. Open PPSSPP on your device and tap on the Games icon.
  2. -
  3. Navigate to the folder where you stored the Cars PSP ISO file and tap on it.
  4. -
  5. The game will start loading and you will see the PPSSPP logo followed by the Sony logo and then the Disney logo.
  6. -
  7. You will then see the main menu of Cars, where you can choose from different modes such as Story Mode, Arcade Mode, Mini-Games, Options, and Extras.
  8. -
  9. Select your preferred mode and enjoy playing Cars on your Android device!
  10. -
-

Conclusion

-

Cars is a fun and exciting racing game that you can play on your Android device thanks to PPSSPP, a PSP emulator that lets you run most of the PSP games on your smartphone or tablet. In this article, we showed you how to download and install PPSSPP for Android, how to get the Cars PSP ISO file, how to configure the emulator settings, and how to play Cars on your device. We hope you found this article helpful and informative. If you have any questions or comments, feel free to leave them below.

-

Frequently Asked Questions

-

Is PPSSPP legal?

-

PPSSPP is legal as long as you use it with your own legally obtained PSP games. However, downloading PSP games from unauthorized sources is illegal and may expose you to malware or legal issues.

-

Is PPSSPP safe?

-

PPSSPP is safe as long as you download it from a trusted source such as Uptodown. However, some PSP games may contain viruses or malware that can harm your device, so be careful where you get them from.

-

What are some other PSP games that I can play with PPSSPP?

-

There are hundreds of PSP games that you can play with PPSSPP, ranging from action-adventure to sports to RPGs. Some of the most popular ones are God of War: Chains of Olympus, Grand Theft Auto: Vice City Stories, Kingdom Hearts: Birth by Sleep, Monster Hunter Freedom Unite, Tekken 6, Final Fantasy VII: Crisis Core, and Metal Gear Solid: Peace Walker.

-

How can I improve the performance of PPSSPP?

-

If you experience lag, crashes, or glitches while playing PPSSPP, you can try some of the following tips to improve the performance of the emulator:

- -

How can I update PPSSPP?

-

If you want to get the latest version of PPSSPP with new features and bug fixes, you can update it by following these steps:

-
    -
  1. Go to [PPSSPP for Android - Download the APK from Uptodown] using your browser.
  2. -
  3. Tap on the green Download button and wait for the APK file to be downloaded.
  4. -
  5. Once the download is complete, tap on the notification or go to your Downloads folder and tap on the PPSSPP APK file.
  6. -
  7. Follow the on-screen instructions to install the update over the existing app.
  8. -
-

-

This is the end of the article. We hope you enjoyed reading it and learned something new. Have a nice day!

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/55dgxxx558/anime-remove-background/README.md b/spaces/55dgxxx558/anime-remove-background/README.md deleted file mode 100644 index 1ba3cb5ea0e994e246d57b7d62b8aa5a6331901c..0000000000000000000000000000000000000000 --- a/spaces/55dgxxx558/anime-remove-background/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Anime Remove Background -emoji: 🪄🖼️ -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: skytnt/anime-remove-background ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AIConsultant/MusicGen/audiocraft/metrics/clap_consistency.py b/spaces/AIConsultant/MusicGen/audiocraft/metrics/clap_consistency.py deleted file mode 100644 index d2a6c61ae177533ca2fb17e25bc77d2acbbe3791..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/metrics/clap_consistency.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from pathlib import Path -import typing as tp - -import torch -import torchmetrics -from transformers import RobertaTokenizer # type: ignore - -from ..data.audio_utils import convert_audio -from ..environment import AudioCraftEnvironment -from ..utils.utils import load_clap_state_dict - -try: - import laion_clap # type: ignore -except ImportError: - laion_clap = None - - -class TextConsistencyMetric(torchmetrics.Metric): - """Text consistency metric measuring consistency between audio and text pairs.""" - - def update(self, audio: torch.Tensor, text: tp.List[str], sizes: torch.Tensor, sample_rates: torch.Tensor) -> None: - raise NotImplementedError("implement how to update the metric from the audio and text pairs.") - - def compute(self): - raise NotImplementedError("implement how to compute the final metric score.") - - -class CLAPTextConsistencyMetric(TextConsistencyMetric): - """Text consistency metric relying on Contrastive Language-Audio Pretraining (CLAP). - - This metric is similar to the MuLan Cycle Consistency from MusicLM (https://arxiv.org/pdf/2301.11325.pdf) - or the CLAP score used in Make-An-Audio (https://arxiv.org/pdf/2301.12661v1.pdf). - - As a joint audio-text embedding model, a pretrained CLAP model can be used to quantify the - similarity between audio-text pairs. We compute the CLAP embeddings from the text descriptions as - well as the generated audio based on them, and define the MCC metric as the average cosine similarity - between these embeddings. 
- - Model implementation & pre-trained checkpoints: https://github.com/LAION-AI/CLAP - """ - def __init__(self, model_path: tp.Union[str, Path], model_arch: str = 'HTSAT-tiny', enable_fusion: bool = False): - super().__init__() - if laion_clap is None: - raise ImportError("Please install CLAP to compute text consistency: 'pip install laion_clap'") - self.add_state("cosine_sum", default=torch.tensor(0.), dist_reduce_fx="sum") - self.add_state("weight", default=torch.tensor(0.), dist_reduce_fx="sum") - self._initialize_model(model_path, model_arch, enable_fusion) - - def _initialize_model(self, model_path: tp.Union[str, Path], model_arch: str, enable_fusion: bool): - model_path = AudioCraftEnvironment.resolve_reference_path(model_path) - self.tokenize = RobertaTokenizer.from_pretrained('roberta-base') - self.model = laion_clap.CLAP_Module(enable_fusion=enable_fusion, amodel=model_arch) - self.model_sample_rate = 48_000 - load_clap_state_dict(self.model, model_path) - self.model.eval() - - def _tokenizer(self, texts: tp.Union[str, tp.List[str]]) -> dict: - # we use the default params from CLAP module here as well - return self.tokenize(texts, padding="max_length", truncation=True, max_length=77, return_tensors="pt") - - def update(self, audio: torch.Tensor, text: tp.List[str], sizes: torch.Tensor, sample_rates: torch.Tensor) -> None: - """Compute cosine similarity between audio and text pairs and accumulate scores over the dataset.""" - assert audio.size(0) == len(text), "Number of audio and text samples should match" - assert torch.all(sample_rates == sample_rates[0].item()), "All items in batch should have the same sample rate" - sample_rate = int(sample_rates[0].item()) - # convert audio batch to 48kHz monophonic audio with no channel dimension: [B, C, T] -> [B, T] - audio = convert_audio(audio, from_rate=sample_rate, to_rate=self.model_sample_rate, to_channels=1).mean(dim=1) - audio_embeddings = self.model.get_audio_embedding_from_data(audio, use_tensor=True) - text_embeddings = self.model.get_text_embedding(text, tokenizer=self._tokenizer, use_tensor=True) - # cosine similarity between the text and the audio embedding - cosine_sim = torch.nn.functional.cosine_similarity(audio_embeddings, text_embeddings, dim=1, eps=1e-8) - self.cosine_sum += cosine_sim.sum(dim=0) - self.weight += torch.tensor(cosine_sim.size(0)) - - def compute(self): - """Computes the average cosine similarty across all audio/text pairs.""" - assert self.weight.item() > 0, "Unable to compute with total number of comparisons <= 0" # type: ignore - return (self.cosine_sum / self.weight).item() # type: ignore diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/diffusionmodules/openaimodel.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/diffusionmodules/openaimodel.py deleted file mode 100644 index 0a274d84dfe6ef3e02848861f5b7a7c7e242ca98..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/diffusionmodules/openaimodel.py +++ /dev/null @@ -1,963 +0,0 @@ -from abc import abstractmethod -from functools import partial -import math -from typing import Iterable - -import numpy as np -import torch as th -import torch.nn as nn -import torch.nn.functional as F - -from ldm.modules.diffusionmodules.util import ( - checkpoint, - conv_nd, - linear, - avg_pool_nd, - zero_module, - normalization, - timestep_embedding, -) -from ldm.modules.attention import SpatialTransformer - - -# dummy replace -def convert_module_to_f16(x): 
- pass - -def convert_module_to_f32(x): - pass - - -## go -class AttentionPool2d(nn.Module): - """ - Adapted from CLIP: https://github.com/openai/CLIP/blob/main/clip/model.py - """ - - def __init__( - self, - spacial_dim: int, - embed_dim: int, - num_heads_channels: int, - output_dim: int = None, - ): - super().__init__() - self.positional_embedding = nn.Parameter(th.randn(embed_dim, spacial_dim ** 2 + 1) / embed_dim ** 0.5) - self.qkv_proj = conv_nd(1, embed_dim, 3 * embed_dim, 1) - self.c_proj = conv_nd(1, embed_dim, output_dim or embed_dim, 1) - self.num_heads = embed_dim // num_heads_channels - self.attention = QKVAttention(self.num_heads) - - def forward(self, x): - b, c, *_spatial = x.shape - x = x.reshape(b, c, -1) # NC(HW) - x = th.cat([x.mean(dim=-1, keepdim=True), x], dim=-1) # NC(HW+1) - x = x + self.positional_embedding[None, :, :].to(x.dtype) # NC(HW+1) - x = self.qkv_proj(x) - x = self.attention(x) - x = self.c_proj(x) - return x[:, :, 0] - - -class TimestepBlock(nn.Module): - """ - Any module where forward() takes timestep embeddings as a second argument. - """ - - @abstractmethod - def forward(self, x, emb): - """ - Apply the module to `x` given `emb` timestep embeddings. - """ - - -class TimestepEmbedSequential(nn.Sequential, TimestepBlock): - """ - A sequential module that passes timestep embeddings to the children that - support it as an extra input. - """ - - def forward(self, x, emb, context=None): - for layer in self: - if isinstance(layer, TimestepBlock): - x = layer(x, emb) - elif isinstance(layer, SpatialTransformer): - x = layer(x, context) - else: - x = layer(x) - return x - - -class Upsample(nn.Module): - """ - An upsampling layer with an optional convolution. - :param channels: channels in the inputs and outputs. - :param use_conv: a bool determining if a convolution is applied. - :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then - upsampling occurs in the inner-two dimensions. - """ - - def __init__(self, channels, use_conv, dims=2, out_channels=None, padding=1): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.dims = dims - if use_conv: - self.conv = conv_nd(dims, self.channels, self.out_channels, 3, padding=padding) - - def forward(self, x): - assert x.shape[1] == self.channels - if self.dims == 3: - x = F.interpolate( - x, (x.shape[2], x.shape[3] * 2, x.shape[4] * 2), mode="nearest" - ) - else: - x = F.interpolate(x, scale_factor=2, mode="nearest") - if self.use_conv: - x = self.conv(x) - return x - -class TransposedUpsample(nn.Module): - 'Learned 2x upsampling without padding' - def __init__(self, channels, out_channels=None, ks=5): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - - self.up = nn.ConvTranspose2d(self.channels,self.out_channels,kernel_size=ks,stride=2) - - def forward(self,x): - return self.up(x) - - -class Downsample(nn.Module): - """ - A downsampling layer with an optional convolution. - :param channels: channels in the inputs and outputs. - :param use_conv: a bool determining if a convolution is applied. - :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then - downsampling occurs in the inner-two dimensions. 
- """ - - def __init__(self, channels, use_conv, dims=2, out_channels=None,padding=1): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.dims = dims - stride = 2 if dims != 3 else (1, 2, 2) - if use_conv: - self.op = conv_nd( - dims, self.channels, self.out_channels, 3, stride=stride, padding=padding - ) - else: - assert self.channels == self.out_channels - self.op = avg_pool_nd(dims, kernel_size=stride, stride=stride) - - def forward(self, x): - assert x.shape[1] == self.channels - return self.op(x) - - -class ResBlock(TimestepBlock): - """ - A residual block that can optionally change the number of channels. - :param channels: the number of input channels. - :param emb_channels: the number of timestep embedding channels. - :param dropout: the rate of dropout. - :param out_channels: if specified, the number of out channels. - :param use_conv: if True and out_channels is specified, use a spatial - convolution instead of a smaller 1x1 convolution to change the - channels in the skip connection. - :param dims: determines if the signal is 1D, 2D, or 3D. - :param use_checkpoint: if True, use gradient checkpointing on this module. - :param up: if True, use this block for upsampling. - :param down: if True, use this block for downsampling. - """ - - def __init__( - self, - channels, - emb_channels, - dropout, - out_channels=None, - use_conv=False, - use_scale_shift_norm=False, - dims=2, - use_checkpoint=False, - up=False, - down=False, - ): - super().__init__() - self.channels = channels - self.emb_channels = emb_channels - self.dropout = dropout - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.use_checkpoint = use_checkpoint - self.use_scale_shift_norm = use_scale_shift_norm - - self.in_layers = nn.Sequential( - normalization(channels), - nn.SiLU(), - conv_nd(dims, channels, self.out_channels, 3, padding=1), - ) - - self.updown = up or down - - if up: - self.h_upd = Upsample(channels, False, dims) - self.x_upd = Upsample(channels, False, dims) - elif down: - self.h_upd = Downsample(channels, False, dims) - self.x_upd = Downsample(channels, False, dims) - else: - self.h_upd = self.x_upd = nn.Identity() - - self.emb_layers = nn.Sequential( - nn.SiLU(), - linear( - emb_channels, - 2 * self.out_channels if use_scale_shift_norm else self.out_channels, - ), - ) - self.out_layers = nn.Sequential( - normalization(self.out_channels), - nn.SiLU(), - nn.Dropout(p=dropout), - zero_module( - conv_nd(dims, self.out_channels, self.out_channels, 3, padding=1) - ), - ) - - if self.out_channels == channels: - self.skip_connection = nn.Identity() - elif use_conv: - self.skip_connection = conv_nd( - dims, channels, self.out_channels, 3, padding=1 - ) - else: - self.skip_connection = conv_nd(dims, channels, self.out_channels, 1) - - def forward(self, x, emb): - """ - Apply the block to a Tensor, conditioned on a timestep embedding. - :param x: an [N x C x ...] Tensor of features. - :param emb: an [N x emb_channels] Tensor of timestep embeddings. - :return: an [N x C x ...] Tensor of outputs. 
- """ - return checkpoint( - self._forward, (x, emb), self.parameters(), self.use_checkpoint - ) - - - def _forward(self, x, emb): - if self.updown: - in_rest, in_conv = self.in_layers[:-1], self.in_layers[-1] - h = in_rest(x) - h = self.h_upd(h) - x = self.x_upd(x) - h = in_conv(h) - else: - h = self.in_layers(x) - emb_out = self.emb_layers(emb).type(h.dtype) - while len(emb_out.shape) < len(h.shape): - emb_out = emb_out[..., None] - if self.use_scale_shift_norm: - out_norm, out_rest = self.out_layers[0], self.out_layers[1:] - scale, shift = th.chunk(emb_out, 2, dim=1) - h = out_norm(h) * (1 + scale) + shift - h = out_rest(h) - else: - h = h + emb_out - h = self.out_layers(h) - return self.skip_connection(x) + h - - -class AttentionBlock(nn.Module): - """ - An attention block that allows spatial positions to attend to each other. - Originally ported from here, but adapted to the N-d case. - https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/models/unet.py#L66. - """ - - def __init__( - self, - channels, - num_heads=1, - num_head_channels=-1, - use_checkpoint=False, - use_new_attention_order=False, - ): - super().__init__() - self.channels = channels - if num_head_channels == -1: - self.num_heads = num_heads - else: - assert ( - channels % num_head_channels == 0 - ), f"q,k,v channels {channels} is not divisible by num_head_channels {num_head_channels}" - self.num_heads = channels // num_head_channels - self.use_checkpoint = use_checkpoint - self.norm = normalization(channels) - self.qkv = conv_nd(1, channels, channels * 3, 1) - if use_new_attention_order: - # split qkv before split heads - self.attention = QKVAttention(self.num_heads) - else: - # split heads before split qkv - self.attention = QKVAttentionLegacy(self.num_heads) - - self.proj_out = zero_module(conv_nd(1, channels, channels, 1)) - - def forward(self, x): - return checkpoint(self._forward, (x,), self.parameters(), True) # TODO: check checkpoint usage, is True # TODO: fix the .half call!!! - #return pt_checkpoint(self._forward, x) # pytorch - - def _forward(self, x): - b, c, *spatial = x.shape - x = x.reshape(b, c, -1) - qkv = self.qkv(self.norm(x)) - h = self.attention(qkv) - h = self.proj_out(h) - return (x + h).reshape(b, c, *spatial) - - -def count_flops_attn(model, _x, y): - """ - A counter for the `thop` package to count the operations in an - attention operation. - Meant to be used like: - macs, params = thop.profile( - model, - inputs=(inputs, timestamps), - custom_ops={QKVAttention: QKVAttention.count_flops}, - ) - """ - b, c, *spatial = y[0].shape - num_spatial = int(np.prod(spatial)) - # We perform two matmuls with the same number of ops. - # The first computes the weight matrix, the second computes - # the combination of the value vectors. - matmul_ops = 2 * b * (num_spatial ** 2) * c - model.total_ops += th.DoubleTensor([matmul_ops]) - - -class QKVAttentionLegacy(nn.Module): - """ - A module which performs QKV attention. Matches legacy QKVAttention + input/ouput heads shaping - """ - - def __init__(self, n_heads): - super().__init__() - self.n_heads = n_heads - - def forward(self, qkv): - """ - Apply QKV attention. - :param qkv: an [N x (H * 3 * C) x T] tensor of Qs, Ks, and Vs. - :return: an [N x (H * C) x T] tensor after attention. 
- """ - bs, width, length = qkv.shape - assert width % (3 * self.n_heads) == 0 - ch = width // (3 * self.n_heads) - q, k, v = qkv.reshape(bs * self.n_heads, ch * 3, length).split(ch, dim=1) - scale = 1 / math.sqrt(math.sqrt(ch)) - weight = th.einsum( - "bct,bcs->bts", q * scale, k * scale - ) # More stable with f16 than dividing afterwards - weight = th.softmax(weight.float(), dim=-1).type(weight.dtype) - a = th.einsum("bts,bcs->bct", weight, v) - return a.reshape(bs, -1, length) - - @staticmethod - def count_flops(model, _x, y): - return count_flops_attn(model, _x, y) - - -class QKVAttention(nn.Module): - """ - A module which performs QKV attention and splits in a different order. - """ - - def __init__(self, n_heads): - super().__init__() - self.n_heads = n_heads - - def forward(self, qkv): - """ - Apply QKV attention. - :param qkv: an [N x (3 * H * C) x T] tensor of Qs, Ks, and Vs. - :return: an [N x (H * C) x T] tensor after attention. - """ - bs, width, length = qkv.shape - assert width % (3 * self.n_heads) == 0 - ch = width // (3 * self.n_heads) - q, k, v = qkv.chunk(3, dim=1) - scale = 1 / math.sqrt(math.sqrt(ch)) - weight = th.einsum( - "bct,bcs->bts", - (q * scale).view(bs * self.n_heads, ch, length), - (k * scale).view(bs * self.n_heads, ch, length), - ) # More stable with f16 than dividing afterwards - weight = th.softmax(weight.float(), dim=-1).type(weight.dtype) - a = th.einsum("bts,bcs->bct", weight, v.reshape(bs * self.n_heads, ch, length)) - return a.reshape(bs, -1, length) - - @staticmethod - def count_flops(model, _x, y): - return count_flops_attn(model, _x, y) - - -class UNetModel(nn.Module): - """ - The full UNet model with attention and timestep embedding. - :param in_channels: channels in the input Tensor. - :param model_channels: base channel count for the model. - :param out_channels: channels in the output Tensor. - :param num_res_blocks: number of residual blocks per downsample. - :param attention_resolutions: a collection of downsample rates at which - attention will take place. May be a set, list, or tuple. - For example, if this contains 4, then at 4x downsampling, attention - will be used. - :param dropout: the dropout probability. - :param channel_mult: channel multiplier for each level of the UNet. - :param conv_resample: if True, use learned convolutions for upsampling and - downsampling. - :param dims: determines if the signal is 1D, 2D, or 3D. - :param num_classes: if specified (as an int), then this model will be - class-conditional with `num_classes` classes. - :param use_checkpoint: use gradient checkpointing to reduce memory usage. - :param num_heads: the number of attention heads in each attention layer. - :param num_heads_channels: if specified, ignore num_heads and instead use - a fixed channel width per attention head. - :param num_heads_upsample: works with num_heads to set a different number - of heads for upsampling. Deprecated. - :param use_scale_shift_norm: use a FiLM-like conditioning mechanism. - :param resblock_updown: use residual blocks for up/downsampling. - :param use_new_attention_order: use a different attention pattern for potentially - increased efficiency. 
- """ - - def __init__( - self, - image_size, - in_channels, - model_channels, - out_channels, - num_res_blocks, - attention_resolutions, - dropout=0, - channel_mult=(1, 2, 4, 8), - conv_resample=True, - dims=2, - num_classes=None, - use_checkpoint=False, - use_fp16=False, - num_heads=-1, - num_head_channels=-1, - num_heads_upsample=-1, - use_scale_shift_norm=False, - resblock_updown=False, - use_new_attention_order=False, - use_spatial_transformer=False, # custom transformer support - transformer_depth=1, # custom transformer support - context_dim=None, # custom transformer support - n_embed=None, # custom support for prediction of discrete ids into codebook of first stage vq model - legacy=True, - ): - super().__init__() - if use_spatial_transformer: - assert context_dim is not None, 'Fool!! You forgot to include the dimension of your cross-attention conditioning...' - - if context_dim is not None: - assert use_spatial_transformer, 'Fool!! You forgot to use the spatial transformer for your cross-attention conditioning...' - from omegaconf.listconfig import ListConfig - if type(context_dim) == ListConfig: - context_dim = list(context_dim) - - if num_heads_upsample == -1: - num_heads_upsample = num_heads - - if num_heads == -1: - assert num_head_channels != -1, 'Either num_heads or num_head_channels has to be set' - - if num_head_channels == -1: - assert num_heads != -1, 'Either num_heads or num_head_channels has to be set' - - self.image_size = image_size - self.in_channels = in_channels - self.model_channels = model_channels - self.out_channels = out_channels - self.num_res_blocks = num_res_blocks - self.attention_resolutions = attention_resolutions - self.dropout = dropout - self.channel_mult = channel_mult - self.conv_resample = conv_resample - self.num_classes = num_classes - self.use_checkpoint = use_checkpoint - self.dtype = th.float16 if use_fp16 else th.float32 - self.num_heads = num_heads - self.num_head_channels = num_head_channels - self.num_heads_upsample = num_heads_upsample - self.predict_codebook_ids = n_embed is not None - - time_embed_dim = model_channels * 4 - self.time_embed = nn.Sequential( - linear(model_channels, time_embed_dim), - nn.SiLU(), - linear(time_embed_dim, time_embed_dim), - ) - - if self.num_classes is not None: - self.label_emb = nn.Embedding(num_classes, time_embed_dim) - - self.input_blocks = nn.ModuleList( - [ - TimestepEmbedSequential( - conv_nd(dims, in_channels, model_channels, 3, padding=1)# conv2d for txt2img/audio - ) - ] - ) - self._feature_size = model_channels - input_block_chans = [model_channels] - ch = model_channels - ds = 1 - # downsample blocks - for level, mult in enumerate(channel_mult): - for _ in range(num_res_blocks): - layers = [ - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=mult * model_channels, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ) - ] - ch = mult * model_channels - if ds in attention_resolutions: - if num_head_channels == -1: - dim_head = ch // num_heads - else: - num_heads = ch // num_head_channels - dim_head = num_head_channels - if legacy: - #num_heads = 1 - dim_head = ch // num_heads if use_spatial_transformer else num_head_channels - layers.append( - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads, - num_head_channels=dim_head, - use_new_attention_order=use_new_attention_order, - ) if not use_spatial_transformer else SpatialTransformer(# transformer_depth is 1 - ch, num_heads, dim_head, depth=transformer_depth, 
context_dim=context_dim - ) - ) - self.input_blocks.append(TimestepEmbedSequential(*layers)) - self._feature_size += ch - input_block_chans.append(ch) - if level != len(channel_mult) - 1: - out_ch = ch - self.input_blocks.append( - TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=out_ch, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - down=True, - ) - if resblock_updown - else Downsample( - ch, conv_resample, dims=dims, out_channels=out_ch - ) - ) - ) - ch = out_ch - input_block_chans.append(ch) - ds *= 2 - self._feature_size += ch - - if num_head_channels == -1: - dim_head = ch // num_heads - else: - num_heads = ch // num_head_channels - dim_head = num_head_channels - if legacy: - #num_heads = 1 - dim_head = ch // num_heads if use_spatial_transformer else num_head_channels - self.middle_block = TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads, - num_head_channels=dim_head, - use_new_attention_order=use_new_attention_order, - ) if not use_spatial_transformer else SpatialTransformer( - ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim - ), - ResBlock( - ch, - time_embed_dim, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - ) - self._feature_size += ch - # upsample blocks - self.output_blocks = nn.ModuleList([]) - for level, mult in list(enumerate(channel_mult))[::-1]: - for i in range(num_res_blocks + 1): - ich = input_block_chans.pop() - layers = [ - ResBlock( - ch + ich, - time_embed_dim, - dropout, - out_channels=model_channels * mult, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ) - ] - ch = model_channels * mult - if ds in attention_resolutions: - if num_head_channels == -1: - dim_head = ch // num_heads - else: - num_heads = ch // num_head_channels - dim_head = num_head_channels - if legacy: - #num_heads = 1 - dim_head = ch // num_heads if use_spatial_transformer else num_head_channels - layers.append( - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads_upsample, - num_head_channels=dim_head, - use_new_attention_order=use_new_attention_order, - ) if not use_spatial_transformer else SpatialTransformer( - ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim - ) - ) - if level and i == num_res_blocks: - out_ch = ch - layers.append( - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=out_ch, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - up=True, - ) - if resblock_updown - else Upsample(ch, conv_resample, dims=dims, out_channels=out_ch) - ) - ds //= 2 - self.output_blocks.append(TimestepEmbedSequential(*layers)) - self._feature_size += ch - - self.out = nn.Sequential( - normalization(ch), - nn.SiLU(), - zero_module(conv_nd(dims, model_channels, out_channels, 3, padding=1)), - ) - if self.predict_codebook_ids: - self.id_predictor = nn.Sequential( - normalization(ch), - conv_nd(dims, model_channels, n_embed, 1), - #nn.LogSoftmax(dim=1) # change to cross_entropy and produce non-normalized logits - ) - - def convert_to_fp16(self): - """ - Convert the torso of the model to float16. 
- """ - self.input_blocks.apply(convert_module_to_f16) - self.middle_block.apply(convert_module_to_f16) - self.output_blocks.apply(convert_module_to_f16) - - def convert_to_fp32(self): - """ - Convert the torso of the model to float32. - """ - self.input_blocks.apply(convert_module_to_f32) - self.middle_block.apply(convert_module_to_f32) - self.output_blocks.apply(convert_module_to_f32) - - def forward(self, x, timesteps=None, context=None, y=None,**kwargs): - """ - Apply the model to an input batch. - :param x: an [N x C x ...] Tensor of inputs. - :param timesteps: a 1-D batch of timesteps,shape [N] - :param context: conditioning plugged in via crossattn. for txt2img shape is [N,77,context_dim] - :param y: an [N] Tensor of labels, if class-conditional. - :return: an [N x C x ...] Tensor of outputs. - """ - # print(f"in unet {x.shape}") - assert (y is not None) == ( - self.num_classes is not None - ), "must specify y if and only if the model is class-conditional" - hs = [] - t_emb = timestep_embedding(timesteps, self.model_channels, repeat_only=False)# shape [N,self.model_channels] - emb = self.time_embed(t_emb)# shape [N,context_dim] - - if self.num_classes is not None:# only for class label - assert y.shape == (x.shape[0],) - emb = emb + self.label_emb(y) - - h = x.type(self.dtype)# [N,C,10,106] - for module in self.input_blocks: - h = module(h, emb, context)# 0:[N,self.model_channels,10,106],1:[N,self.model_channels,10,106],2:[N,self.model_channels,10,106] 3:[N,self.model_channels,5,53] 4:[N,self.model_channels,5,53] 5:[N,self.model_channels*2,5,53] - hs.append(h) - h = self.middle_block(h, emb, context)# no shape change - for module in self.output_blocks: - h = th.cat([h, hs.pop()], dim=1)# 在这里c维度乘2或+self.model_channels,其余维度不变 - h = module(h, emb, context)# 在这里c维度/2回到之前维度,h,w不变或*2 - h = h.type(x.dtype)# 至此h维度和输入x维度回到相同状态 - if self.predict_codebook_ids: - return self.id_predictor(h) - else: - return self.out(h) - - -class EncoderUNetModel(nn.Module): - """ - The half UNet model with attention and timestep embedding. - For usage, see UNet. 
- """ - - def __init__( - self, - image_size, - in_channels, - model_channels, - out_channels, - num_res_blocks, - attention_resolutions, - dropout=0, - channel_mult=(1, 2, 4, 8), - conv_resample=True, - dims=2, - use_checkpoint=False, - use_fp16=False, - num_heads=1, - num_head_channels=-1, - num_heads_upsample=-1, - use_scale_shift_norm=False, - resblock_updown=False, - use_new_attention_order=False, - pool="adaptive", - *args, - **kwargs - ): - super().__init__() - - if num_heads_upsample == -1: - num_heads_upsample = num_heads - - self.in_channels = in_channels - self.model_channels = model_channels - self.out_channels = out_channels - self.num_res_blocks = num_res_blocks - self.attention_resolutions = attention_resolutions - self.dropout = dropout - self.channel_mult = channel_mult - self.conv_resample = conv_resample - self.use_checkpoint = use_checkpoint - self.dtype = th.float16 if use_fp16 else th.float32 - self.num_heads = num_heads - self.num_head_channels = num_head_channels - self.num_heads_upsample = num_heads_upsample - - time_embed_dim = model_channels * 4 - self.time_embed = nn.Sequential( - linear(model_channels, time_embed_dim), - nn.SiLU(), - linear(time_embed_dim, time_embed_dim), - ) - - self.input_blocks = nn.ModuleList( - [ - TimestepEmbedSequential( - conv_nd(dims, in_channels, model_channels, 3, padding=1) - ) - ] - ) - self._feature_size = model_channels - input_block_chans = [model_channels] - ch = model_channels - ds = 1 - for level, mult in enumerate(channel_mult): - for _ in range(num_res_blocks): - layers = [ - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=mult * model_channels, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ) - ] - ch = mult * model_channels - if ds in attention_resolutions: - layers.append( - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads, - num_head_channels=num_head_channels, - use_new_attention_order=use_new_attention_order, - ) - ) - self.input_blocks.append(TimestepEmbedSequential(*layers)) - self._feature_size += ch - input_block_chans.append(ch) - if level != len(channel_mult) - 1: - out_ch = ch - self.input_blocks.append( - TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=out_ch, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - down=True, - ) - if resblock_updown - else Downsample( - ch, conv_resample, dims=dims, out_channels=out_ch - ) - ) - ) - ch = out_ch - input_block_chans.append(ch) - ds *= 2 - self._feature_size += ch - - self.middle_block = TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads, - num_head_channels=num_head_channels, - use_new_attention_order=use_new_attention_order, - ), - ResBlock( - ch, - time_embed_dim, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - ) - self._feature_size += ch - self.pool = pool - if pool == "adaptive": - self.out = nn.Sequential( - normalization(ch), - nn.SiLU(), - nn.AdaptiveAvgPool2d((1, 1)), - zero_module(conv_nd(dims, ch, out_channels, 1)), - nn.Flatten(), - ) - elif pool == "attention": - assert num_head_channels != -1 - self.out = nn.Sequential( - normalization(ch), - nn.SiLU(), - AttentionPool2d( - (image_size // ds), ch, num_head_channels, out_channels - ), - 
) - elif pool == "spatial": - self.out = nn.Sequential( - nn.Linear(self._feature_size, 2048), - nn.ReLU(), - nn.Linear(2048, self.out_channels), - ) - elif pool == "spatial_v2": - self.out = nn.Sequential( - nn.Linear(self._feature_size, 2048), - normalization(2048), - nn.SiLU(), - nn.Linear(2048, self.out_channels), - ) - else: - raise NotImplementedError(f"Unexpected {pool} pooling") - - def convert_to_fp16(self): - """ - Convert the torso of the model to float16. - """ - self.input_blocks.apply(convert_module_to_f16) - self.middle_block.apply(convert_module_to_f16) - - def convert_to_fp32(self): - """ - Convert the torso of the model to float32. - """ - self.input_blocks.apply(convert_module_to_f32) - self.middle_block.apply(convert_module_to_f32) - - def forward(self, x, timesteps): - """ - Apply the model to an input batch. - :param x: an [N x C x ...] Tensor of inputs. - :param timesteps: a 1-D batch of timesteps. - :return: an [N x K] Tensor of outputs. - """ - emb = self.time_embed(timestep_embedding(timesteps, self.model_channels)) - - results = [] - h = x.type(self.dtype) - for module in self.input_blocks: - h = module(h, emb) - if self.pool.startswith("spatial"): - results.append(h.type(x.dtype).mean(dim=(2, 3))) - h = self.middle_block(h, emb) - if self.pool.startswith("spatial"): - results.append(h.type(x.dtype).mean(dim=(2, 3))) - h = th.cat(results, axis=-1) - return self.out(h) - else: - h = h.type(x.dtype) - return self.out(h) - diff --git a/spaces/AchyuthGamer/ImMagician-Gradio/app.py b/spaces/AchyuthGamer/ImMagician-Gradio/app.py deleted file mode 100644 index 07d16ab03262df036f9d24bdb0f9b6db46817d3a..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/ImMagician-Gradio/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/AchyuthGamer/ImMagician-Fantasy").launch() \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT-Chat/README.md b/spaces/AchyuthGamer/OpenGPT-Chat/README.md deleted file mode 100644 index 6e218feb6c75819e19860bfb41f30f3cc8aa925c..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: OpenGPT Chat (fast) -emoji: 😻 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.45.1 -app_file: app.py -pinned: true ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fullwindowrectangle/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fullwindowrectangle/Factory.d.ts deleted file mode 100644 index 1b79ee71393a8af52676c56ab69bad11c3f3f6b7..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fullwindowrectangle/Factory.d.ts +++ /dev/null @@ -1,7 +0,0 @@ -import FullWindowRectangle from './FullWindowRectangle'; - -export default function ( - fillColor?: number, - fillAlpha?: number - -): FullWindowRectangle; \ No newline at end of file diff --git a/spaces/AlexWang/lama/bin/report_from_tb.py b/spaces/AlexWang/lama/bin/report_from_tb.py deleted file mode 100644 index 9a444e6cd8027f88bd34adfc0b1dd000bbb4b2be..0000000000000000000000000000000000000000 --- a/spaces/AlexWang/lama/bin/report_from_tb.py +++ /dev/null @@ -1,83 +0,0 @@ -#!/usr/bin/env python3 - -import glob -import os -import re - -import tensorflow as tf -from torch.utils.tensorboard import SummaryWriter - 
- -GROUPING_RULES = [ - re.compile(r'^(?Ptrain|test|val|extra_val_.*?(256|512))_(?P.*)', re.I) -] - - -DROP_RULES = [ - re.compile(r'_std$', re.I) -] - - -def need_drop(tag): - for rule in DROP_RULES: - if rule.search(tag): - return True - return False - - -def get_group_and_title(tag): - for rule in GROUPING_RULES: - match = rule.search(tag) - if match is None: - continue - return match.group('group'), match.group('title') - return None, None - - -def main(args): - os.makedirs(args.outdir, exist_ok=True) - - ignored_events = set() - - for orig_fname in glob.glob(args.inglob): - cur_dirpath = os.path.dirname(orig_fname) # remove filename, this should point to "version_0" directory - subdirname = os.path.basename(cur_dirpath) # == "version_0" most of time - exp_root_path = os.path.dirname(cur_dirpath) # remove "version_0" - exp_name = os.path.basename(exp_root_path) - - writers_by_group = {} - - for e in tf.compat.v1.train.summary_iterator(orig_fname): - for v in e.summary.value: - if need_drop(v.tag): - continue - - cur_group, cur_title = get_group_and_title(v.tag) - if cur_group is None: - if v.tag not in ignored_events: - print(f'WARNING: Could not detect group for {v.tag}, ignoring it') - ignored_events.add(v.tag) - continue - - cur_writer = writers_by_group.get(cur_group, None) - if cur_writer is None: - if args.include_version: - cur_outdir = os.path.join(args.outdir, exp_name, f'{subdirname}_{cur_group}') - else: - cur_outdir = os.path.join(args.outdir, exp_name, cur_group) - cur_writer = SummaryWriter(cur_outdir) - writers_by_group[cur_group] = cur_writer - - cur_writer.add_scalar(cur_title, v.simple_value, global_step=e.step, walltime=e.wall_time) - - -if __name__ == '__main__': - import argparse - - aparser = argparse.ArgumentParser() - aparser.add_argument('inglob', type=str) - aparser.add_argument('outdir', type=str) - aparser.add_argument('--include-version', action='store_true', - help='Include subdirectory name e.g. 
"version_0" into output path') - - main(aparser.parse_args()) diff --git a/spaces/Aloento/9Nine-VITS/posterior_encoder.py b/spaces/Aloento/9Nine-VITS/posterior_encoder.py deleted file mode 100644 index 70a316bad7ec4c0db1d359b5cd7fbd7e479010e1..0000000000000000000000000000000000000000 --- a/spaces/Aloento/9Nine-VITS/posterior_encoder.py +++ /dev/null @@ -1,37 +0,0 @@ -import torch -from torch import nn - -import commons -import modules - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask diff --git a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/configs/ms1mv3_r50.py b/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/configs/ms1mv3_r50.py deleted file mode 100644 index 08ba55dbbea6df0afffddbb3d1ed173efad99604..0000000000000000000000000000000000000000 --- a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/configs/ms1mv3_r50.py +++ /dev/null @@ -1,26 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.loss = "arcface" -config.network = "r50" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 1.0 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 128 -config.lr = 0.1 # batch size is 512 - -config.rec = "/train_tmp/ms1m-retinaface-t1" -config.num_classes = 93431 -config.num_image = 5179510 -config.num_epoch = 25 -config.warmup_epoch = -1 -config.decay_epoch = [10, 16, 22] -config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/Alycer/VITS-Umamusume-voice-synthesizer/commons.py b/spaces/Alycer/VITS-Umamusume-voice-synthesizer/commons.py deleted file mode 100644 index 2153153f527d94e2abb641ea00c80b518ff6c5bd..0000000000000000000000000000000000000000 --- a/spaces/Alycer/VITS-Umamusume-voice-synthesizer/commons.py +++ /dev/null @@ -1,97 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def 
slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/latent_codes_pool.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/latent_codes_pool.py deleted file mode 100644 index 626a798a8024e8dced8200038f6d397508ecd7c1..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/latent_codes_pool.py +++ /dev/null @@ -1,58 +0,0 @@ -import random -import torch - - -class LatentCodesPool: - """This class implements latent codes buffer that stores previously generated w latent codes. - This buffer enables us to update discriminators using a history of generated w's - rather than the ones produced by the latest encoder. - """ - - def __init__(self, pool_size): - """Initialize the ImagePool class - Parameters: - pool_size (int) -- the size of image buffer, if pool_size=0, no buffer will be created - """ - self.pool_size = pool_size - if self.pool_size > 0: # create an empty pool - self.num_ws = 0 - self.ws = [] - - def query(self, ws): - """Return w's from the pool. - Parameters: - ws: the latest generated w's from the generator - Returns w's from the buffer. - By 50/100, the buffer will return input w's. - By 50/100, the buffer will return w's previously stored in the buffer, - and insert the current w's to the buffer. 
- """ - if self.pool_size == 0: # if the buffer size is 0, do nothing - return ws - return_ws = [] - for w in ws: # ws.shape: (batch, 512) or (batch, n_latent, 512) - # w = torch.unsqueeze(image.data, 0) - if w.ndim == 2: - # apply a random latent index as a candidate - i = random.randint(0, len(w) - 1) - w = w[i] - self.handle_w(w, return_ws) - # collect all the images and return - return_ws = torch.stack(return_ws, 0) - return return_ws - - def handle_w(self, w, return_ws): - if self.num_ws < self.pool_size: # if the buffer is not full; keep inserting current codes to the buffer - self.num_ws = self.num_ws + 1 - self.ws.append(w) - return_ws.append(w) - else: - p = random.uniform(0, 1) - if p > 0.5: # by 50% chance, the buffer will return a previously stored latent code, and insert the current code into the buffer - random_id = random.randint( - 0, self.pool_size - 1) # randint is inclusive - tmp = self.ws[random_id].clone() - self.ws[random_id] = w - return_ws.append(tmp) - else: # by another 50% chance, the buffer will return the current image - return_ws.append(w) diff --git a/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/loop/feature_training_loop.py b/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/loop/feature_training_loop.py deleted file mode 100644 index 8175043dd40d4c8c0dc158e4800b8ea33616eb90..0000000000000000000000000000000000000000 --- a/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/loop/feature_training_loop.py +++ /dev/null @@ -1,145 +0,0 @@ -import numpy as np -import torch -from pytorch_lightning.loops import Loop - -from src.dataset import DATASET_REGISTRY -from src.dataset.ray_utils import denormalize_vgg, normalize_vgg -from src.loop.utils import N_to_reso, cal_n_samples -from src.model import MODEL_REGISTRY -from src.sampler.simple_sampler import SimpleSampler, InfiniteSamplerWrapper -import torch.nn.functional as TF - - -class FeatureTrainingLoop(Loop): - def __init__(self, epoch, cfg, renderer): - super().__init__() - self.cfg = cfg - self.model = MODEL_REGISTRY.get(self.cfg["model"]["name"])(cfg) - - self.dataloader = DATASET_REGISTRY.get(self.cfg["dataset"]["name"])( - **self.cfg["dataset"]["train"]["params"], - ) - self.renderer = renderer - self.optimizer = None - self.training_sampler = None - self.frame_sampler = None - self.iteration = 0 - self.epoch = epoch - self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - self.init_loop() - self.init_optimizer() - - def init_loop(self): - self.white_bg = self.dataloader.white_bg - self.near_far = self.dataloader.near_far - self.h_rays, self.w_rays = self.dataloader.img_wh[1], self.dataloader.img_wh[0] - - self.step_ratio = self.cfg["sampler"]["params"]["step_ratio"] - self.batch_size = self.cfg["sampler"]["params"]["batch_size"] - self.patch_size = self.cfg["sampler"]["params"]["patch_size"] - self.chunk_size = self.cfg["sampler"]["params"]["chunk_size"] - - self.aabb = self.dataloader.scene_bbox.to(self.device) - reso_cur = N_to_reso(self.cfg["sampler"]["params"]["N_voxel_init"], self.aabb) - self.nSamples = min(int(self.cfg["sampler"]["params"]["n_samples"]), cal_n_samples(reso_cur, self.step_ratio)) - - torch.cuda.empty_cache() - self.dataloader.prepare_feature_data(self.model.tensorf.encoder) - self.allrays, self.allfeatures = self.dataloader.all_rays, self.dataloader.all_features - self.allrays_stack, self.allrgbs_stack = self.dataloader.all_rays_stack, self.dataloader.all_rgbs_stack - - if not self.model.ndc_ray: - self.allrays, self.allfeatures = 
self.model.tensorf.filtering_rays(self.allrays, self.allfeatures, bbox_only=True) - - self.training_sampler = SimpleSampler(self.allrays.shape[0], self.batch_size) - self.frame_sampler = iter(InfiniteSamplerWrapper(self.allrays_stack.size(0))) # every next(sampler) returns a frame index - - def init_optimizer(self): - grad_vars = self.model.tensorf.get_optparam_groups_feature_mod(self.cfg["optimizer"]["lr_init"], self.cfg["optimizer"]["lr_basis"]) - - if self.cfg["optimizer"]["lr_decay_iters"] > 0: - self.lr_factor = self.cfg["optimizer"]["lr_decay_target_ratio"] ** (1 / self.cfg["optimizer"]["lr_decay_iters"]) - else: - self.lr_factor = self.cfg["optimizer"]["lr_decay_target_ratio"] ** (1 / self.cfg["trainer"]["n_iters"]) - - print("lr decay", self.cfg["optimizer"]["lr_decay_target_ratio"], self.cfg["optimizer"]["lr_decay_iters"]) - - self.optimizer = torch.optim.Adam(grad_vars, betas=(0.9, 0.99)) - - @property - def done(self): - """Advance from one iteration to the next.""" - return self.epoch < self.iteration - - def reset(self): - """Advance from one iteration to the next.""" - - def advance(self): - """Advance from one iteration to the next.""" - feature_loss, pixel_loss = 0., 0. - if self.iteration % 2 == 0: - ray_idx = self.training_sampler.nextids() - rays_train, features_train = self.allrays[ray_idx], self.allfeatures[ray_idx].to(self.device) - - feature_map, _ = self.renderer(rays_train, self.model.tensorf, chunk=self.chunk_size, N_samples=self.nSamples, white_bg=self.white_bg, - ndc_ray=self.model.ndc_ray, render_feature=True, device=self.device, is_train=True) - - feature_loss = torch.mean((feature_map - features_train) ** 2) - else: - frame_idx = next(self.frame_sampler) - start_h = np.random.randint(0, self.h_rays - self.patch_size + 1) - start_w = np.random.randint(0, self.w_rays - self.patch_size + 1) - if self.white_bg: - # move random sampled patches into center - mid_h, mid_w = (self.h_rays - self.patch_size + 1) / 2, (self.w_rays - self.patch_size + 1) / 2 - if mid_h - start_h >= 1: - start_h += np.random.randint(0, mid_h - start_h) - elif mid_h - start_h <= -1: - start_h += np.random.randint(mid_h - start_h, 0) - if mid_w - start_w >= 1: - start_w += np.random.randint(0, mid_w - start_w) - elif mid_w - start_w <= -1: - start_w += np.random.randint(mid_w - start_w, 0) - - rays_train = self.allrays_stack[frame_idx, start_h:start_h + self.patch_size , - start_w:start_w + self.patch_size , :].reshape(-1, 6).to(self.device) - # [patch*patch, 6] - - rgbs_train = self.allrgbs_stack[frame_idx, start_h:(start_h + self.patch_size ), - start_w:(start_w + self.patch_size ), :].to(self.device) - # [patch, patch, 3] - - feature_map, _ = self.renderer(rays_train, self.model.tensorf, chunk=self.chunk_size, N_samples=self.nSamples, white_bg=self.white_bg, - ndc_ray=self.model.ndc_ray, render_feature=True, device=self.device, is_train=True) - - feature_map = feature_map.reshape(self.patch_size , self.patch_size , 256)[None, ...].permute(0, 3, 1, 2) - recon_rgb = self.model.tensorf.decoder(feature_map) - - rgbs_train = rgbs_train[None, ...].permute(0, 3, 1, 2) - img_enc = self.model.tensorf.encoder(normalize_vgg(rgbs_train)) - recon_rgb_enc = self.model.tensorf.encoder(recon_rgb) - - feature_loss = (TF.mse_loss(recon_rgb_enc.relu4_1, img_enc.relu4_1) + - TF.mse_loss(recon_rgb_enc.relu3_1, img_enc.relu3_1)) / 10 - - recon_rgb = denormalize_vgg(recon_rgb) - - pixel_loss = torch.mean((recon_rgb - rgbs_train) ** 2) - - total_loss = pixel_loss + feature_loss - - # loss - # NOTE: Calculate 
feature TV loss rather than appearence TV loss - if self.model.TV_weight_feature > 0: - self.model.TV_weight_feature *= self.lr_factor - loss_tv = self.model.tensorf.TV_loss_feature(self.model.tvreg) * self.model.TV_weight_feature - total_loss = total_loss + loss_tv - - self.iteration += 1 - - self.optimizer.zero_grad() - total_loss.backward() - self.optimizer.step() - - - diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/mixture_canvas.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/mixture_canvas.py deleted file mode 100644 index 40139d1139add0bf1c2ca50ca5331ae7c221cbf5..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/mixture_canvas.py +++ /dev/null @@ -1,503 +0,0 @@ -import re -from copy import deepcopy -from dataclasses import asdict, dataclass -from enum import Enum -from typing import List, Optional, Union - -import numpy as np -import torch -from numpy import exp, pi, sqrt -from torchvision.transforms.functional import resize -from tqdm.auto import tqdm -from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer - -from diffusers.models import AutoencoderKL, UNet2DConditionModel -from diffusers.pipeline_utils import DiffusionPipeline -from diffusers.pipelines.stable_diffusion import StableDiffusionSafetyChecker -from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler - - -def preprocess_image(image): - from PIL import Image - - """Preprocess an input image - - Same as - https://github.com/huggingface/diffusers/blob/1138d63b519e37f0ce04e027b9f4a3261d27c628/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py#L44 - """ - w, h = image.size - w, h = (x - x % 32 for x in (w, h)) # resize to integer multiple of 32 - image = image.resize((w, h), resample=Image.LANCZOS) - image = np.array(image).astype(np.float32) / 255.0 - image = image[None].transpose(0, 3, 1, 2) - image = torch.from_numpy(image) - return 2.0 * image - 1.0 - - -@dataclass -class CanvasRegion: - """Class defining a rectangular region in the canvas""" - - row_init: int # Region starting row in pixel space (included) - row_end: int # Region end row in pixel space (not included) - col_init: int # Region starting column in pixel space (included) - col_end: int # Region end column in pixel space (not included) - region_seed: int = None # Seed for random operations in this region - noise_eps: float = 0.0 # Deviation of a zero-mean gaussian noise to be applied over the latents in this region. 
Useful for slightly "rerolling" latents - - def __post_init__(self): - # Initialize arguments if not specified - if self.region_seed is None: - self.region_seed = np.random.randint(9999999999) - # Check coordinates are non-negative - for coord in [self.row_init, self.row_end, self.col_init, self.col_end]: - if coord < 0: - raise ValueError( - f"A CanvasRegion must be defined with non-negative indices, found ({self.row_init}, {self.row_end}, {self.col_init}, {self.col_end})" - ) - # Check coordinates are divisible by 8, else we end up with nasty rounding error when mapping to latent space - for coord in [self.row_init, self.row_end, self.col_init, self.col_end]: - if coord // 8 != coord / 8: - raise ValueError( - f"A CanvasRegion must be defined with locations divisible by 8, found ({self.row_init}-{self.row_end}, {self.col_init}-{self.col_end})" - ) - # Check noise eps is non-negative - if self.noise_eps < 0: - raise ValueError(f"A CanvasRegion must be defined noises eps non-negative, found {self.noise_eps}") - # Compute coordinates for this region in latent space - self.latent_row_init = self.row_init // 8 - self.latent_row_end = self.row_end // 8 - self.latent_col_init = self.col_init // 8 - self.latent_col_end = self.col_end // 8 - - @property - def width(self): - return self.col_end - self.col_init - - @property - def height(self): - return self.row_end - self.row_init - - def get_region_generator(self, device="cpu"): - """Creates a torch.Generator based on the random seed of this region""" - # Initialize region generator - return torch.Generator(device).manual_seed(self.region_seed) - - @property - def __dict__(self): - return asdict(self) - - -class MaskModes(Enum): - """Modes in which the influence of diffuser is masked""" - - CONSTANT = "constant" - GAUSSIAN = "gaussian" - QUARTIC = "quartic" # See https://en.wikipedia.org/wiki/Kernel_(statistics) - - -@dataclass -class DiffusionRegion(CanvasRegion): - """Abstract class defining a region where some class of diffusion process is acting""" - - pass - - -@dataclass -class Text2ImageRegion(DiffusionRegion): - """Class defining a region where a text guided diffusion process is acting""" - - prompt: str = "" # Text prompt guiding the diffuser in this region - guidance_scale: float = 7.5 # Guidance scale of the diffuser in this region. 
If None, randomize - mask_type: MaskModes = MaskModes.GAUSSIAN.value # Kind of weight mask applied to this region - mask_weight: float = 1.0 # Global weights multiplier of the mask - tokenized_prompt = None # Tokenized prompt - encoded_prompt = None # Encoded prompt - - def __post_init__(self): - super().__post_init__() - # Mask weight cannot be negative - if self.mask_weight < 0: - raise ValueError( - f"A Text2ImageRegion must be defined with non-negative mask weight, found {self.mask_weight}" - ) - # Mask type must be an actual known mask - if self.mask_type not in [e.value for e in MaskModes]: - raise ValueError( - f"A Text2ImageRegion was defined with mask {self.mask_type}, which is not an accepted mask ({[e.value for e in MaskModes]})" - ) - # Randomize arguments if given as None - if self.guidance_scale is None: - self.guidance_scale = np.random.randint(5, 30) - # Clean prompt - self.prompt = re.sub(" +", " ", self.prompt).replace("\n", " ") - - def tokenize_prompt(self, tokenizer): - """Tokenizes the prompt for this diffusion region using a given tokenizer""" - self.tokenized_prompt = tokenizer( - self.prompt, - padding="max_length", - max_length=tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - - def encode_prompt(self, text_encoder, device): - """Encodes the previously tokenized prompt for this diffusion region using a given encoder""" - assert self.tokenized_prompt is not None, ValueError( - "Prompt in diffusion region must be tokenized before encoding" - ) - self.encoded_prompt = text_encoder(self.tokenized_prompt.input_ids.to(device))[0] - - -@dataclass -class Image2ImageRegion(DiffusionRegion): - """Class defining a region where an image guided diffusion process is acting""" - - reference_image: torch.FloatTensor = None - strength: float = 0.8 # Strength of the image - - def __post_init__(self): - super().__post_init__() - if self.reference_image is None: - raise ValueError("Must provide a reference image when creating an Image2ImageRegion") - if self.strength < 0 or self.strength > 1: - raise ValueError(f"The value of strength should in [0.0, 1.0] but is {self.strength}") - # Rescale image to region shape - self.reference_image = resize(self.reference_image, size=[self.height, self.width]) - - def encode_reference_image(self, encoder, device, generator, cpu_vae=False): - """Encodes the reference image for this Image2Image region into the latent space""" - # Place encoder in CPU or not following the parameter cpu_vae - if cpu_vae: - # Note here we use mean instead of sample, to avoid moving also generator to CPU, which is troublesome - self.reference_latents = encoder.cpu().encode(self.reference_image).latent_dist.mean.to(device) - else: - self.reference_latents = encoder.encode(self.reference_image.to(device)).latent_dist.sample( - generator=generator - ) - self.reference_latents = 0.18215 * self.reference_latents - - @property - def __dict__(self): - # This class requires special casting to dict because of the reference_image tensor. 
Otherwise it cannot be casted to JSON - - # Get all basic fields from parent class - super_fields = {key: getattr(self, key) for key in DiffusionRegion.__dataclass_fields__.keys()} - # Pack other fields - return {**super_fields, "reference_image": self.reference_image.cpu().tolist(), "strength": self.strength} - - -class RerollModes(Enum): - """Modes in which the reroll regions operate""" - - RESET = "reset" # Completely reset the random noise in the region - EPSILON = "epsilon" # Alter slightly the latents in the region - - -@dataclass -class RerollRegion(CanvasRegion): - """Class defining a rectangular canvas region in which initial latent noise will be rerolled""" - - reroll_mode: RerollModes = RerollModes.RESET.value - - -@dataclass -class MaskWeightsBuilder: - """Auxiliary class to compute a tensor of weights for a given diffusion region""" - - latent_space_dim: int # Size of the U-net latent space - nbatch: int = 1 # Batch size in the U-net - - def compute_mask_weights(self, region: DiffusionRegion) -> torch.tensor: - """Computes a tensor of weights for a given diffusion region""" - MASK_BUILDERS = { - MaskModes.CONSTANT.value: self._constant_weights, - MaskModes.GAUSSIAN.value: self._gaussian_weights, - MaskModes.QUARTIC.value: self._quartic_weights, - } - return MASK_BUILDERS[region.mask_type](region) - - def _constant_weights(self, region: DiffusionRegion) -> torch.tensor: - """Computes a tensor of constant for a given diffusion region""" - latent_width = region.latent_col_end - region.latent_col_init - latent_height = region.latent_row_end - region.latent_row_init - return torch.ones(self.nbatch, self.latent_space_dim, latent_height, latent_width) * region.mask_weight - - def _gaussian_weights(self, region: DiffusionRegion) -> torch.tensor: - """Generates a gaussian mask of weights for tile contributions""" - latent_width = region.latent_col_end - region.latent_col_init - latent_height = region.latent_row_end - region.latent_row_init - - var = 0.01 - midpoint = (latent_width - 1) / 2 # -1 because index goes from 0 to latent_width - 1 - x_probs = [ - exp(-(x - midpoint) * (x - midpoint) / (latent_width * latent_width) / (2 * var)) / sqrt(2 * pi * var) - for x in range(latent_width) - ] - midpoint = (latent_height - 1) / 2 - y_probs = [ - exp(-(y - midpoint) * (y - midpoint) / (latent_height * latent_height) / (2 * var)) / sqrt(2 * pi * var) - for y in range(latent_height) - ] - - weights = np.outer(y_probs, x_probs) * region.mask_weight - return torch.tile(torch.tensor(weights), (self.nbatch, self.latent_space_dim, 1, 1)) - - def _quartic_weights(self, region: DiffusionRegion) -> torch.tensor: - """Generates a quartic mask of weights for tile contributions - - The quartic kernel has bounded support over the diffusion region, and a smooth decay to the region limits. 
- """ - quartic_constant = 15.0 / 16.0 - - support = (np.array(range(region.latent_col_init, region.latent_col_end)) - region.latent_col_init) / ( - region.latent_col_end - region.latent_col_init - 1 - ) * 1.99 - (1.99 / 2.0) - x_probs = quartic_constant * np.square(1 - np.square(support)) - support = (np.array(range(region.latent_row_init, region.latent_row_end)) - region.latent_row_init) / ( - region.latent_row_end - region.latent_row_init - 1 - ) * 1.99 - (1.99 / 2.0) - y_probs = quartic_constant * np.square(1 - np.square(support)) - - weights = np.outer(y_probs, x_probs) * region.mask_weight - return torch.tile(torch.tensor(weights), (self.nbatch, self.latent_space_dim, 1, 1)) - - -class StableDiffusionCanvasPipeline(DiffusionPipeline): - """Stable Diffusion pipeline that mixes several diffusers in the same canvas""" - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: Union[DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler], - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPFeatureExtractor, - ): - super().__init__() - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - - def decode_latents(self, latents, cpu_vae=False): - """Decodes a given array of latents into pixel space""" - # scale and decode the image latents with vae - if cpu_vae: - lat = deepcopy(latents).cpu() - vae = deepcopy(self.vae).cpu() - else: - lat = latents - vae = self.vae - - lat = 1 / 0.18215 * lat - image = vae.decode(lat).sample - - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).numpy() - - return self.numpy_to_pil(image) - - def get_latest_timestep_img2img(self, num_inference_steps, strength): - """Finds the latest timesteps where an img2img strength does not impose latents anymore""" - # get the original timestep using init_timestep - offset = self.scheduler.config.get("steps_offset", 0) - init_timestep = int(num_inference_steps * (1 - strength)) + offset - init_timestep = min(init_timestep, num_inference_steps) - - t_start = min(max(num_inference_steps - init_timestep + offset, 0), num_inference_steps - 1) - latest_timestep = self.scheduler.timesteps[t_start] - - return latest_timestep - - @torch.no_grad() - def __call__( - self, - canvas_height: int, - canvas_width: int, - regions: List[DiffusionRegion], - num_inference_steps: Optional[int] = 50, - seed: Optional[int] = 12345, - reroll_regions: Optional[List[RerollRegion]] = None, - cpu_vae: Optional[bool] = False, - decode_steps: Optional[bool] = False, - ): - if reroll_regions is None: - reroll_regions = [] - batch_size = 1 - - if decode_steps: - steps_images = [] - - # Prepare scheduler - self.scheduler.set_timesteps(num_inference_steps, device=self.device) - - # Split diffusion regions by their kind - text2image_regions = [region for region in regions if isinstance(region, Text2ImageRegion)] - image2image_regions = [region for region in regions if isinstance(region, Image2ImageRegion)] - - # Prepare text embeddings - for region in text2image_regions: - region.tokenize_prompt(self.tokenizer) - region.encode_prompt(self.text_encoder, self.device) - - # Create original noisy latents using the timesteps - latents_shape = (batch_size, self.unet.config.in_channels, canvas_height // 8, canvas_width // 8) - generator = torch.Generator(self.device).manual_seed(seed) 
- init_noise = torch.randn(latents_shape, generator=generator, device=self.device) - - # Reset latents in seed reroll regions, if requested - for region in reroll_regions: - if region.reroll_mode == RerollModes.RESET.value: - region_shape = ( - latents_shape[0], - latents_shape[1], - region.latent_row_end - region.latent_row_init, - region.latent_col_end - region.latent_col_init, - ) - init_noise[ - :, - :, - region.latent_row_init : region.latent_row_end, - region.latent_col_init : region.latent_col_end, - ] = torch.randn(region_shape, generator=region.get_region_generator(self.device), device=self.device) - - # Apply epsilon noise to regions: first diffusion regions, then reroll regions - all_eps_rerolls = regions + [r for r in reroll_regions if r.reroll_mode == RerollModes.EPSILON.value] - for region in all_eps_rerolls: - if region.noise_eps > 0: - region_noise = init_noise[ - :, - :, - region.latent_row_init : region.latent_row_end, - region.latent_col_init : region.latent_col_end, - ] - eps_noise = ( - torch.randn( - region_noise.shape, generator=region.get_region_generator(self.device), device=self.device - ) - * region.noise_eps - ) - init_noise[ - :, - :, - region.latent_row_init : region.latent_row_end, - region.latent_col_init : region.latent_col_end, - ] += eps_noise - - # scale the initial noise by the standard deviation required by the scheduler - latents = init_noise * self.scheduler.init_noise_sigma - - # Get unconditional embeddings for classifier free guidance in text2image regions - for region in text2image_regions: - max_length = region.tokenized_prompt.input_ids.shape[-1] - uncond_input = self.tokenizer( - [""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt" - ) - uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0] - - # For classifier free guidance, we need to do two forward passes. 
- # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - region.encoded_prompt = torch.cat([uncond_embeddings, region.encoded_prompt]) - - # Prepare image latents - for region in image2image_regions: - region.encode_reference_image(self.vae, device=self.device, generator=generator) - - # Prepare mask of weights for each region - mask_builder = MaskWeightsBuilder(latent_space_dim=self.unet.config.in_channels, nbatch=batch_size) - mask_weights = [mask_builder.compute_mask_weights(region).to(self.device) for region in text2image_regions] - - # Diffusion timesteps - for i, t in tqdm(enumerate(self.scheduler.timesteps)): - # Diffuse each region - noise_preds_regions = [] - - # text2image regions - for region in text2image_regions: - region_latents = latents[ - :, - :, - region.latent_row_init : region.latent_row_end, - region.latent_col_init : region.latent_col_end, - ] - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([region_latents] * 2) - # scale model input following scheduler rules - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - # predict the noise residual - noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=region.encoded_prompt)["sample"] - # perform guidance - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred_region = noise_pred_uncond + region.guidance_scale * (noise_pred_text - noise_pred_uncond) - noise_preds_regions.append(noise_pred_region) - - # Merge noise predictions for all tiles - noise_pred = torch.zeros(latents.shape, device=self.device) - contributors = torch.zeros(latents.shape, device=self.device) - # Add each tile contribution to overall latents - for region, noise_pred_region, mask_weights_region in zip( - text2image_regions, noise_preds_regions, mask_weights - ): - noise_pred[ - :, - :, - region.latent_row_init : region.latent_row_end, - region.latent_col_init : region.latent_col_end, - ] += ( - noise_pred_region * mask_weights_region - ) - contributors[ - :, - :, - region.latent_row_init : region.latent_row_end, - region.latent_col_init : region.latent_col_end, - ] += mask_weights_region - # Average overlapping areas with more than 1 contributor - noise_pred /= contributors - noise_pred = torch.nan_to_num( - noise_pred - ) # Replace NaNs by zeros: NaN can appear if a position is not covered by any DiffusionRegion - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents).prev_sample - - # Image2Image regions: override latents generated by the scheduler - for region in image2image_regions: - influence_step = self.get_latest_timestep_img2img(num_inference_steps, region.strength) - # Only override in the timesteps before the last influence step of the image (given by its strength) - if t > influence_step: - timestep = t.repeat(batch_size) - region_init_noise = init_noise[ - :, - :, - region.latent_row_init : region.latent_row_end, - region.latent_col_init : region.latent_col_end, - ] - region_latents = self.scheduler.add_noise(region.reference_latents, region_init_noise, timestep) - latents[ - :, - :, - region.latent_row_init : region.latent_row_end, - region.latent_col_init : region.latent_col_end, - ] = region_latents - - if decode_steps: - steps_images.append(self.decode_latents(latents, cpu_vae)) - - # scale and decode the image latents with vae - image = self.decode_latents(latents, cpu_vae) - - output = {"images": image} - if 
decode_steps: - output = {**output, "steps_images": steps_images} - return output diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/unclip/test_unclip_image_variation.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/unclip/test_unclip_image_variation.py deleted file mode 100644 index 75a26250807b3728989b1e70e70917f0d0987044..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/unclip/test_unclip_image_variation.py +++ /dev/null @@ -1,522 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import gc -import random -import unittest - -import numpy as np -import torch -from transformers import ( - CLIPImageProcessor, - CLIPTextConfig, - CLIPTextModelWithProjection, - CLIPTokenizer, - CLIPVisionConfig, - CLIPVisionModelWithProjection, -) - -from diffusers import ( - DiffusionPipeline, - UnCLIPImageVariationPipeline, - UnCLIPScheduler, - UNet2DConditionModel, - UNet2DModel, -) -from diffusers.pipelines.unclip.text_proj import UnCLIPTextProjModel -from diffusers.utils import floats_tensor, load_numpy, slow, torch_device -from diffusers.utils.testing_utils import enable_full_determinism, load_image, require_torch_gpu, skip_mps - -from ..pipeline_params import IMAGE_VARIATION_BATCH_PARAMS, IMAGE_VARIATION_PARAMS -from ..test_pipelines_common import PipelineTesterMixin, assert_mean_pixel_difference - - -enable_full_determinism() - - -class UnCLIPImageVariationPipelineFastTests(PipelineTesterMixin, unittest.TestCase): - pipeline_class = UnCLIPImageVariationPipeline - params = IMAGE_VARIATION_PARAMS - {"height", "width", "guidance_scale"} - batch_params = IMAGE_VARIATION_BATCH_PARAMS - - required_optional_params = [ - "generator", - "return_dict", - "decoder_num_inference_steps", - "super_res_num_inference_steps", - ] - test_xformers_attention = False - - @property - def text_embedder_hidden_size(self): - return 32 - - @property - def time_input_dim(self): - return 32 - - @property - def block_out_channels_0(self): - return self.time_input_dim - - @property - def time_embed_dim(self): - return self.time_input_dim * 4 - - @property - def cross_attention_dim(self): - return 100 - - @property - def dummy_tokenizer(self): - tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") - return tokenizer - - @property - def dummy_text_encoder(self): - torch.manual_seed(0) - config = CLIPTextConfig( - bos_token_id=0, - eos_token_id=2, - hidden_size=self.text_embedder_hidden_size, - projection_dim=self.text_embedder_hidden_size, - intermediate_size=37, - layer_norm_eps=1e-05, - num_attention_heads=4, - num_hidden_layers=5, - pad_token_id=1, - vocab_size=1000, - ) - return CLIPTextModelWithProjection(config) - - @property - def dummy_image_encoder(self): - torch.manual_seed(0) - config = CLIPVisionConfig( - hidden_size=self.text_embedder_hidden_size, - projection_dim=self.text_embedder_hidden_size, - 
num_hidden_layers=5, - num_attention_heads=4, - image_size=32, - intermediate_size=37, - patch_size=1, - ) - return CLIPVisionModelWithProjection(config) - - @property - def dummy_text_proj(self): - torch.manual_seed(0) - - model_kwargs = { - "clip_embeddings_dim": self.text_embedder_hidden_size, - "time_embed_dim": self.time_embed_dim, - "cross_attention_dim": self.cross_attention_dim, - } - - model = UnCLIPTextProjModel(**model_kwargs) - return model - - @property - def dummy_decoder(self): - torch.manual_seed(0) - - model_kwargs = { - "sample_size": 32, - # RGB in channels - "in_channels": 3, - # Out channels is double in channels because predicts mean and variance - "out_channels": 6, - "down_block_types": ("ResnetDownsampleBlock2D", "SimpleCrossAttnDownBlock2D"), - "up_block_types": ("SimpleCrossAttnUpBlock2D", "ResnetUpsampleBlock2D"), - "mid_block_type": "UNetMidBlock2DSimpleCrossAttn", - "block_out_channels": (self.block_out_channels_0, self.block_out_channels_0 * 2), - "layers_per_block": 1, - "cross_attention_dim": self.cross_attention_dim, - "attention_head_dim": 4, - "resnet_time_scale_shift": "scale_shift", - "class_embed_type": "identity", - } - - model = UNet2DConditionModel(**model_kwargs) - return model - - @property - def dummy_super_res_kwargs(self): - return { - "sample_size": 64, - "layers_per_block": 1, - "down_block_types": ("ResnetDownsampleBlock2D", "ResnetDownsampleBlock2D"), - "up_block_types": ("ResnetUpsampleBlock2D", "ResnetUpsampleBlock2D"), - "block_out_channels": (self.block_out_channels_0, self.block_out_channels_0 * 2), - "in_channels": 6, - "out_channels": 3, - } - - @property - def dummy_super_res_first(self): - torch.manual_seed(0) - - model = UNet2DModel(**self.dummy_super_res_kwargs) - return model - - @property - def dummy_super_res_last(self): - # seeded differently to get different unet than `self.dummy_super_res_first` - torch.manual_seed(1) - - model = UNet2DModel(**self.dummy_super_res_kwargs) - return model - - def get_dummy_components(self): - decoder = self.dummy_decoder - text_proj = self.dummy_text_proj - text_encoder = self.dummy_text_encoder - tokenizer = self.dummy_tokenizer - super_res_first = self.dummy_super_res_first - super_res_last = self.dummy_super_res_last - - decoder_scheduler = UnCLIPScheduler( - variance_type="learned_range", - prediction_type="epsilon", - num_train_timesteps=1000, - ) - - super_res_scheduler = UnCLIPScheduler( - variance_type="fixed_small_log", - prediction_type="epsilon", - num_train_timesteps=1000, - ) - - feature_extractor = CLIPImageProcessor(crop_size=32, size=32) - - image_encoder = self.dummy_image_encoder - - return { - "decoder": decoder, - "text_encoder": text_encoder, - "tokenizer": tokenizer, - "text_proj": text_proj, - "feature_extractor": feature_extractor, - "image_encoder": image_encoder, - "super_res_first": super_res_first, - "super_res_last": super_res_last, - "decoder_scheduler": decoder_scheduler, - "super_res_scheduler": super_res_scheduler, - } - - def get_dummy_inputs(self, device, seed=0, pil_image=True): - input_image = floats_tensor((1, 3, 32, 32), rng=random.Random(seed)).to(device) - if str(device).startswith("mps"): - generator = torch.manual_seed(seed) - else: - generator = torch.Generator(device=device).manual_seed(seed) - - if pil_image: - input_image = input_image * 0.5 + 0.5 - input_image = input_image.clamp(0, 1) - input_image = input_image.cpu().permute(0, 2, 3, 1).float().numpy() - input_image = DiffusionPipeline.numpy_to_pil(input_image)[0] - - return { - "image": 
input_image, - "generator": generator, - "decoder_num_inference_steps": 2, - "super_res_num_inference_steps": 2, - "output_type": "np", - } - - def test_unclip_image_variation_input_tensor(self): - device = "cpu" - - components = self.get_dummy_components() - - pipe = self.pipeline_class(**components) - pipe = pipe.to(device) - - pipe.set_progress_bar_config(disable=None) - - pipeline_inputs = self.get_dummy_inputs(device, pil_image=False) - - output = pipe(**pipeline_inputs) - image = output.images - - tuple_pipeline_inputs = self.get_dummy_inputs(device, pil_image=False) - - image_from_tuple = pipe( - **tuple_pipeline_inputs, - return_dict=False, - )[0] - - image_slice = image[0, -3:, -3:, -1] - image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1] - - assert image.shape == (1, 64, 64, 3) - - expected_slice = np.array( - [ - 0.9997, - 0.0002, - 0.9997, - 0.9997, - 0.9969, - 0.0023, - 0.9997, - 0.9969, - 0.9970, - ] - ) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2 - - def test_unclip_image_variation_input_image(self): - device = "cpu" - - components = self.get_dummy_components() - - pipe = self.pipeline_class(**components) - pipe = pipe.to(device) - - pipe.set_progress_bar_config(disable=None) - - pipeline_inputs = self.get_dummy_inputs(device, pil_image=True) - - output = pipe(**pipeline_inputs) - image = output.images - - tuple_pipeline_inputs = self.get_dummy_inputs(device, pil_image=True) - - image_from_tuple = pipe( - **tuple_pipeline_inputs, - return_dict=False, - )[0] - - image_slice = image[0, -3:, -3:, -1] - image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1] - - assert image.shape == (1, 64, 64, 3) - - expected_slice = np.array([0.9997, 0.0003, 0.9997, 0.9997, 0.9970, 0.0024, 0.9997, 0.9971, 0.9971]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2 - - def test_unclip_image_variation_input_list_images(self): - device = "cpu" - - components = self.get_dummy_components() - - pipe = self.pipeline_class(**components) - pipe = pipe.to(device) - - pipe.set_progress_bar_config(disable=None) - - pipeline_inputs = self.get_dummy_inputs(device, pil_image=True) - pipeline_inputs["image"] = [ - pipeline_inputs["image"], - pipeline_inputs["image"], - ] - - output = pipe(**pipeline_inputs) - image = output.images - - tuple_pipeline_inputs = self.get_dummy_inputs(device, pil_image=True) - tuple_pipeline_inputs["image"] = [ - tuple_pipeline_inputs["image"], - tuple_pipeline_inputs["image"], - ] - - image_from_tuple = pipe( - **tuple_pipeline_inputs, - return_dict=False, - )[0] - - image_slice = image[0, -3:, -3:, -1] - image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1] - - assert image.shape == (2, 64, 64, 3) - - expected_slice = np.array( - [ - 0.9997, - 0.9989, - 0.0008, - 0.0021, - 0.9960, - 0.0018, - 0.0014, - 0.0002, - 0.9933, - ] - ) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2 - - def test_unclip_passed_image_embed(self): - device = torch.device("cpu") - - class DummyScheduler: - init_noise_sigma = 1 - - components = self.get_dummy_components() - - pipe = self.pipeline_class(**components) - pipe = pipe.to(device) - - pipe.set_progress_bar_config(disable=None) - - generator = torch.Generator(device=device).manual_seed(0) - dtype = pipe.decoder.dtype - 
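# Note on the block below: it pre-computes fixed decoder and super-resolution latents with a
# stub scheduler whose init_noise_sigma is 1, so prepare_latents() does not rescale the
# initial noise. Reusing the same latents (and the same seeded generator) for both pipeline
# calls makes the runs deterministic, so the test can check that passing a pre-computed image
# embedding produces the same output as passing the raw image.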
batch_size = 1 - - shape = ( - batch_size, - pipe.decoder.config.in_channels, - pipe.decoder.config.sample_size, - pipe.decoder.config.sample_size, - ) - decoder_latents = pipe.prepare_latents( - shape, dtype=dtype, device=device, generator=generator, latents=None, scheduler=DummyScheduler() - ) - - shape = ( - batch_size, - pipe.super_res_first.config.in_channels // 2, - pipe.super_res_first.config.sample_size, - pipe.super_res_first.config.sample_size, - ) - super_res_latents = pipe.prepare_latents( - shape, dtype=dtype, device=device, generator=generator, latents=None, scheduler=DummyScheduler() - ) - - pipeline_inputs = self.get_dummy_inputs(device, pil_image=False) - - img_out_1 = pipe( - **pipeline_inputs, decoder_latents=decoder_latents, super_res_latents=super_res_latents - ).images - - pipeline_inputs = self.get_dummy_inputs(device, pil_image=False) - # Don't pass image, instead pass embedding - image = pipeline_inputs.pop("image") - image_embeddings = pipe.image_encoder(image).image_embeds - - img_out_2 = pipe( - **pipeline_inputs, - decoder_latents=decoder_latents, - super_res_latents=super_res_latents, - image_embeddings=image_embeddings, - ).images - - # make sure passing text embeddings manually is identical - assert np.abs(img_out_1 - img_out_2).max() < 1e-4 - - # Overriding PipelineTesterMixin::test_attention_slicing_forward_pass - # because UnCLIP GPU undeterminism requires a looser check. - @skip_mps - def test_attention_slicing_forward_pass(self): - test_max_difference = torch_device == "cpu" - - # Check is relaxed because there is not a torch 2.0 sliced attention added kv processor - expected_max_diff = 1e-2 - - self._test_attention_slicing_forward_pass( - test_max_difference=test_max_difference, expected_max_diff=expected_max_diff - ) - - # Overriding PipelineTesterMixin::test_inference_batch_single_identical - # because UnCLIP undeterminism requires a looser check. 
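# The override below copies the per-stage step counts ("decoder_num_inference_steps",
# "super_res_num_inference_steps") into the batched inputs and relaxes the maximum-difference
# check, since single-sample and batched UnCLIP runs are not expected to match exactly.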
- @skip_mps - def test_inference_batch_single_identical(self): - test_max_difference = torch_device == "cpu" - relax_max_difference = True - additional_params_copy_to_batched_inputs = [ - "decoder_num_inference_steps", - "super_res_num_inference_steps", - ] - - self._test_inference_batch_single_identical( - test_max_difference=test_max_difference, - relax_max_difference=relax_max_difference, - additional_params_copy_to_batched_inputs=additional_params_copy_to_batched_inputs, - ) - - def test_inference_batch_consistent(self): - additional_params_copy_to_batched_inputs = [ - "decoder_num_inference_steps", - "super_res_num_inference_steps", - ] - - if torch_device == "mps": - # TODO: MPS errors with larger batch sizes - batch_sizes = [2, 3] - self._test_inference_batch_consistent( - batch_sizes=batch_sizes, - additional_params_copy_to_batched_inputs=additional_params_copy_to_batched_inputs, - ) - else: - self._test_inference_batch_consistent( - additional_params_copy_to_batched_inputs=additional_params_copy_to_batched_inputs - ) - - @skip_mps - def test_dict_tuple_outputs_equivalent(self): - return super().test_dict_tuple_outputs_equivalent() - - @skip_mps - def test_save_load_local(self): - return super().test_save_load_local() - - @skip_mps - def test_save_load_optional_components(self): - return super().test_save_load_optional_components() - - -@slow -@require_torch_gpu -class UnCLIPImageVariationPipelineIntegrationTests(unittest.TestCase): - def tearDown(self): - # clean up the VRAM after each test - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - def test_unclip_image_variation_karlo(self): - input_image = load_image( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unclip/cat.png" - ) - expected_image = load_numpy( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" - "/unclip/karlo_v1_alpha_cat_variation_fp16.npy" - ) - - pipeline = UnCLIPImageVariationPipeline.from_pretrained( - "kakaobrain/karlo-v1-alpha-image-variations", torch_dtype=torch.float16 - ) - pipeline = pipeline.to(torch_device) - pipeline.set_progress_bar_config(disable=None) - - generator = torch.Generator(device="cpu").manual_seed(0) - output = pipeline( - input_image, - generator=generator, - output_type="np", - ) - - image = output.images[0] - - assert image.shape == (256, 256, 3) - - assert_mean_pixel_difference(image, expected_image, 15) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/retinanet_r50_fpn.py b/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/retinanet_r50_fpn.py deleted file mode 100644 index 47fe98c2e9e934cf82a7e20835eea8e2bd9bb065..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/retinanet_r50_fpn.py +++ /dev/null @@ -1,60 +0,0 @@ -# model settings -model = dict( - type='RetinaNet', - pretrained='torchvision://resnet50', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - start_level=1, - add_extra_convs='on_input', - num_outs=5), - bbox_head=dict( - type='RetinaHead', - num_classes=80, - in_channels=256, - stacked_convs=4, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=4, - scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 
32, 64, 128]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - # training and testing settings - train_cfg=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.4, - min_pos_iou=0, - ignore_iof_thr=-1), - allowed_border=-1, - pos_weight=-1, - debug=False), - test_cfg=dict( - nms_pre=1000, - min_bbox_size=0, - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100)) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/foveabox/fovea_r50_fpn_4x4_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/foveabox/fovea_r50_fpn_4x4_1x_coco.py deleted file mode 100644 index fd392570142f83f34fed50ebc5037c8bd92d95fc..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/foveabox/fovea_r50_fpn_4x4_1x_coco.py +++ /dev/null @@ -1,52 +0,0 @@ -_base_ = [ - '../_base_/datasets/coco_detection.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] -# model settings -model = dict( - type='FOVEA', - pretrained='torchvision://resnet50', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - start_level=1, - num_outs=5, - add_extra_convs='on_input'), - bbox_head=dict( - type='FoveaHead', - num_classes=80, - in_channels=256, - stacked_convs=4, - feat_channels=256, - strides=[8, 16, 32, 64, 128], - base_edge_list=[16, 32, 64, 128, 256], - scale_ranges=((1, 64), (32, 128), (64, 256), (128, 512), (256, 2048)), - sigma=0.4, - with_deform=False, - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=1.50, - alpha=0.4, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=0.11, loss_weight=1.0)), - # training and testing settings - train_cfg=dict(), - test_cfg=dict( - nms_pre=1000, - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100)) -data = dict(samples_per_gpu=4, workers_per_gpu=4) -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/gn+ws/faster_rcnn_x101_32x4d_fpn_gn_ws-all_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/gn+ws/faster_rcnn_x101_32x4d_fpn_gn_ws-all_1x_coco.py deleted file mode 100644 index 061ca6993606fe2c7bdb020eaf3b5ea8b91a9b8e..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/gn+ws/faster_rcnn_x101_32x4d_fpn_gn_ws-all_1x_coco.py +++ /dev/null @@ -1,16 +0,0 @@ -_base_ = './faster_rcnn_r50_fpn_gn_ws-all_1x_coco.py' -conv_cfg = dict(type='ConvWS') -norm_cfg = dict(type='GN', num_groups=32, requires_grad=True) -model = dict( - pretrained='open-mmlab://jhu/resnext101_32x4d_gn_ws', - backbone=dict( - type='ResNeXt', - depth=101, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - style='pytorch', - conv_cfg=conv_cfg, - norm_cfg=norm_cfg)) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/instaboost/cascade_mask_rcnn_r50_fpn_instaboost_4x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/instaboost/cascade_mask_rcnn_r50_fpn_instaboost_4x_coco.py deleted file 
mode 100644 index a89a81f5c76586d6d1b15abf74f3740e9f439762..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/instaboost/cascade_mask_rcnn_r50_fpn_instaboost_4x_coco.py +++ /dev/null @@ -1,28 +0,0 @@ -_base_ = '../cascade_rcnn/cascade_mask_rcnn_r50_fpn_1x_coco.py' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='InstaBoost', - action_candidate=('normal', 'horizontal', 'skip'), - action_prob=(1, 0, 0), - scale=(0.8, 1.2), - dx=15, - dy=15, - theta=(-1, 1), - color_prob=0.5, - hflag=False, - aug_ratio=0.5), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -data = dict(train=dict(pipeline=train_pipeline)) -# learning policy -lr_config = dict(step=[32, 44]) -runner = dict(type='EpochBasedRunner', max_epochs=48) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x512_160k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x512_160k_ade20k.py deleted file mode 100644 index 9c6364eb43e2abc95011205b569627ff9367d0e5..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x512_160k_ade20k.py +++ /dev/null @@ -1,7 +0,0 @@ -_base_ = [ - '../_base_/models/psanet_r50-d8.py', '../_base_/datasets/ade20k.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py' -] -model = dict( - decode_head=dict(mask_size=(66, 66), num_classes=150), - auxiliary_head=dict(num_classes=150)) diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/roi_align_rotated.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/roi_align_rotated.py deleted file mode 100644 index 0ce4961a3555d4da8bc3e32f1f7d5ad50036587d..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/roi_align_rotated.py +++ /dev/null @@ -1,177 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
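# This module wraps the compiled mmcv `_ext` kernels (`roi_align_rotated_forward` and
# `roi_align_rotated_backward`) in a custom autograd Function and exposes them through the
# `roi_align_rotated` functional alias and the `RoIAlignRotated` nn.Module defined below.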
-import torch.nn as nn -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['roi_align_rotated_forward', 'roi_align_rotated_backward']) - - -class RoIAlignRotatedFunction(Function): - - @staticmethod - def symbolic(g, features, rois, out_size, spatial_scale, sample_num, - aligned, clockwise): - if isinstance(out_size, int): - out_h = out_size - out_w = out_size - elif isinstance(out_size, tuple): - assert len(out_size) == 2 - assert isinstance(out_size[0], int) - assert isinstance(out_size[1], int) - out_h, out_w = out_size - else: - raise TypeError( - '"out_size" must be an integer or tuple of integers') - return g.op( - 'mmcv::MMCVRoIAlignRotated', - features, - rois, - output_height_i=out_h, - output_width_i=out_h, - spatial_scale_f=spatial_scale, - sampling_ratio_i=sample_num, - aligned_i=aligned, - clockwise_i=clockwise) - - @staticmethod - def forward(ctx, - features, - rois, - out_size, - spatial_scale, - sample_num=0, - aligned=True, - clockwise=False): - if isinstance(out_size, int): - out_h = out_size - out_w = out_size - elif isinstance(out_size, tuple): - assert len(out_size) == 2 - assert isinstance(out_size[0], int) - assert isinstance(out_size[1], int) - out_h, out_w = out_size - else: - raise TypeError( - '"out_size" must be an integer or tuple of integers') - ctx.spatial_scale = spatial_scale - ctx.sample_num = sample_num - ctx.aligned = aligned - ctx.clockwise = clockwise - ctx.save_for_backward(rois) - ctx.feature_size = features.size() - - batch_size, num_channels, data_height, data_width = features.size() - num_rois = rois.size(0) - - output = features.new_zeros(num_rois, num_channels, out_h, out_w) - ext_module.roi_align_rotated_forward( - features, - rois, - output, - pooled_height=out_h, - pooled_width=out_w, - spatial_scale=spatial_scale, - sample_num=sample_num, - aligned=aligned, - clockwise=clockwise) - return output - - @staticmethod - def backward(ctx, grad_output): - feature_size = ctx.feature_size - spatial_scale = ctx.spatial_scale - aligned = ctx.aligned - clockwise = ctx.clockwise - sample_num = ctx.sample_num - rois = ctx.saved_tensors[0] - assert feature_size is not None - batch_size, num_channels, data_height, data_width = feature_size - - out_w = grad_output.size(3) - out_h = grad_output.size(2) - - grad_input = grad_rois = None - - if ctx.needs_input_grad[0]: - grad_input = rois.new_zeros(batch_size, num_channels, data_height, - data_width) - ext_module.roi_align_rotated_backward( - grad_output.contiguous(), - rois, - grad_input, - pooled_height=out_h, - pooled_width=out_w, - spatial_scale=spatial_scale, - sample_num=sample_num, - aligned=aligned, - clockwise=clockwise) - return grad_input, grad_rois, None, None, None, None, None - - -roi_align_rotated = RoIAlignRotatedFunction.apply - - -class RoIAlignRotated(nn.Module): - """RoI align pooling layer for rotated proposals. - - It accepts a feature map of shape (N, C, H, W) and rois with shape - (n, 6) with each roi decoded as (batch_index, center_x, center_y, - w, h, angle). The angle is in radian. - - Args: - out_size (tuple): h, w - spatial_scale (float): scale the input boxes by this number - sample_num (int): number of inputs samples to take for each - output sample. 0 to take samples densely for current models. - aligned (bool): if False, use the legacy implementation in - MMDetection. If True, align the results more perfectly. - Default: True. 
- clockwise (bool): If True, the angle in each proposal follows a - clockwise fashion in image space, otherwise, the angle is - counterclockwise. Default: False. - - Note: - The implementation of RoIAlign when aligned=True is modified from - https://github.com/facebookresearch/detectron2/ - - The meaning of aligned=True: - - Given a continuous coordinate c, its two neighboring pixel - indices (in our pixel model) are computed by floor(c - 0.5) and - ceil(c - 0.5). For example, c=1.3 has pixel neighbors with discrete - indices [0] and [1] (which are sampled from the underlying signal - at continuous coordinates 0.5 and 1.5). But the original roi_align - (aligned=False) does not subtract the 0.5 when computing - neighboring pixel indices and therefore it uses pixels with a - slightly incorrect alignment (relative to our pixel model) when - performing bilinear interpolation. - - With `aligned=True`, - we first appropriately scale the ROI and then shift it by -0.5 - prior to calling roi_align. This produces the correct neighbors; - - The difference does not make a difference to the model's - performance if ROIAlign is used together with conv layers. - """ - - def __init__(self, - out_size, - spatial_scale, - sample_num=0, - aligned=True, - clockwise=False): - super(RoIAlignRotated, self).__init__() - - self.out_size = out_size - self.spatial_scale = float(spatial_scale) - self.sample_num = int(sample_num) - self.aligned = aligned - self.clockwise = clockwise - - def forward(self, features, rois): - return RoIAlignRotatedFunction.apply(features, rois, self.out_size, - self.spatial_scale, - self.sample_num, self.aligned, - self.clockwise) diff --git a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/backbone/swin_transformer.py b/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/backbone/swin_transformer.py deleted file mode 100644 index 1c66194deb5dd370e797e57e2712f44303e568cc..0000000000000000000000000000000000000000 --- a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/backbone/swin_transformer.py +++ /dev/null @@ -1,802 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# DINO -# Copyright (c) 2022 IDEA. All Rights Reserved. 
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# -------------------------------------------------------- -# modified from https://github.com/SwinTransformer/Swin-Transformer-Object-Detection/blob/master/mmdet/models/backbones/swin_transformer.py -# -------------------------------------------------------- - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -from timm.models.layers import DropPath, to_2tuple, trunc_normal_ - -from groundingdino.util.misc import NestedTensor - - -class Mlp(nn.Module): - """Multilayer perceptron.""" - - def __init__( - self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.0 - ): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -def window_partition(x, window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - window_size (int): Window size - H (int): Height of image - W (int): Width of image - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class WindowAttention(nn.Module): - """Window based multi-head self attention (W-MSA) module with relative position bias. - It supports both of shifted and non-shifted window. - Args: - dim (int): Number of input channels. - window_size (tuple[int]): The height and width of the window. - num_heads (int): Number of attention heads. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set - attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0 - proj_drop (float, optional): Dropout ratio of output. 
Default: 0.0 - """ - - def __init__( - self, - dim, - window_size, - num_heads, - qkv_bias=True, - qk_scale=None, - attn_drop=0.0, - proj_drop=0.0, - ): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim**-0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads) - ) # 2*Wh-1 * 2*Ww-1, nH - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - trunc_normal_(self.relative_position_bias_table, std=0.02) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - """Forward function. - Args: - x: input features with shape of (num_windows*B, N, C) - mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None - """ - B_, N, C = x.shape - qkv = ( - self.qkv(x) - .reshape(B_, N, 3, self.num_heads, C // self.num_heads) - .permute(2, 0, 3, 1, 4) - ) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = q @ k.transpose(-2, -1) - - relative_position_bias = self.relative_position_bias_table[ - self.relative_position_index.view(-1) - ].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1 - ) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute( - 2, 0, 1 - ).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class SwinTransformerBlock(nn.Module): - """Swin Transformer Block. - Args: - dim (int): Number of input channels. - num_heads (int): Number of attention heads. - window_size (int): Window size. - shift_size (int): Shift size for SW-MSA. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float, optional): Stochastic depth rate. 
Default: 0.0 - act_layer (nn.Module, optional): Activation layer. Default: nn.GELU - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__( - self, - dim, - num_heads, - window_size=7, - shift_size=0, - mlp_ratio=4.0, - qkv_bias=True, - qk_scale=None, - drop=0.0, - attn_drop=0.0, - drop_path=0.0, - act_layer=nn.GELU, - norm_layer=nn.LayerNorm, - ): - super().__init__() - self.dim = dim - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention( - dim, - window_size=to_2tuple(self.window_size), - num_heads=num_heads, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - attn_drop=attn_drop, - proj_drop=drop, - ) - - self.drop_path = DropPath(drop_path) if drop_path > 0.0 else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp( - in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop - ) - - self.H = None - self.W = None - - def forward(self, x, mask_matrix): - """Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. - mask_matrix: Attention mask for cyclic shift. - """ - B, L, C = x.shape - H, W = self.H, self.W - assert L == H * W, "input feature has wrong size" - - shortcut = x - x = self.norm1(x) - x = x.view(B, H, W, C) - - # pad feature maps to multiples of window size - pad_l = pad_t = 0 - pad_r = (self.window_size - W % self.window_size) % self.window_size - pad_b = (self.window_size - H % self.window_size) % self.window_size - x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b)) - _, Hp, Wp, _ = x.shape - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) - attn_mask = mask_matrix - else: - shifted_x = x - attn_mask = None - - # partition windows - x_windows = window_partition( - shifted_x, self.window_size - ) # nW*B, window_size, window_size, C - x_windows = x_windows.view( - -1, self.window_size * self.window_size, C - ) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) - else: - x = shifted_x - - if pad_r > 0 or pad_b > 0: - x = x[:, :H, :W, :].contiguous() - - x = x.view(B, H * W, C) - - # FFN - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - - return x - - -class PatchMerging(nn.Module): - """Patch Merging Layer - Args: - dim (int): Number of input channels. - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__(self, dim, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) - self.norm = norm_layer(4 * dim) - - def forward(self, x, H, W): - """Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. 
- """ - B, L, C = x.shape - assert L == H * W, "input feature has wrong size" - - x = x.view(B, H, W, C) - - # padding - pad_input = (H % 2 == 1) or (W % 2 == 1) - if pad_input: - x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2)) - - x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C - x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C - x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C - x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C - x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C - x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C - - x = self.norm(x) - x = self.reduction(x) - - return x - - -class BasicLayer(nn.Module): - """A basic Swin Transformer layer for one stage. - Args: - dim (int): Number of feature channels - depth (int): Depths of this stage. - num_heads (int): Number of attention head. - window_size (int): Local window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__( - self, - dim, - depth, - num_heads, - window_size=7, - mlp_ratio=4.0, - qkv_bias=True, - qk_scale=None, - drop=0.0, - attn_drop=0.0, - drop_path=0.0, - norm_layer=nn.LayerNorm, - downsample=None, - use_checkpoint=False, - ): - super().__init__() - self.window_size = window_size - self.shift_size = window_size // 2 - self.depth = depth - self.use_checkpoint = use_checkpoint - - # build blocks - self.blocks = nn.ModuleList( - [ - SwinTransformerBlock( - dim=dim, - num_heads=num_heads, - window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop, - attn_drop=attn_drop, - drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, - norm_layer=norm_layer, - ) - for i in range(depth) - ] - ) - - # patch merging layer - if downsample is not None: - self.downsample = downsample(dim=dim, norm_layer=norm_layer) - else: - self.downsample = None - - def forward(self, x, H, W): - """Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. 
- """ - - # calculate attention mask for SW-MSA - Hp = int(np.ceil(H / self.window_size)) * self.window_size - Wp = int(np.ceil(W / self.window_size)) * self.window_size - img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1 - h_slices = ( - slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None), - ) - w_slices = ( - slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None), - ) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition( - img_mask, self.window_size - ) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill( - attn_mask == 0, float(0.0) - ) - - for blk in self.blocks: - blk.H, blk.W = H, W - if self.use_checkpoint: - x = checkpoint.checkpoint(blk, x, attn_mask) - else: - x = blk(x, attn_mask) - if self.downsample is not None: - x_down = self.downsample(x, H, W) - Wh, Ww = (H + 1) // 2, (W + 1) // 2 - return x, H, W, x_down, Wh, Ww - else: - return x, H, W, x, H, W - - -class PatchEmbed(nn.Module): - """Image to Patch Embedding - Args: - patch_size (int): Patch token size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - norm_layer (nn.Module, optional): Normalization layer. Default: None - """ - - def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): - super().__init__() - patch_size = to_2tuple(patch_size) - self.patch_size = patch_size - - self.in_chans = in_chans - self.embed_dim = embed_dim - - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - if norm_layer is not None: - self.norm = norm_layer(embed_dim) - else: - self.norm = None - - def forward(self, x): - """Forward function.""" - # padding - _, _, H, W = x.size() - if W % self.patch_size[1] != 0: - x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1])) - if H % self.patch_size[0] != 0: - x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0])) - - x = self.proj(x) # B C Wh Ww - if self.norm is not None: - Wh, Ww = x.size(2), x.size(3) - x = x.flatten(2).transpose(1, 2) - x = self.norm(x) - x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww) - - return x - - -class SwinTransformer(nn.Module): - """Swin Transformer backbone. - A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` - - https://arxiv.org/pdf/2103.14030 - Args: - pretrain_img_size (int): Input image size for training the pretrained model, - used in absolute postion embedding. Default 224. - patch_size (int | tuple(int)): Patch size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - depths (tuple[int]): Depths of each Swin Transformer stage. - num_heads (tuple[int]): Number of attention head of each stage. - window_size (int): Window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. - drop_rate (float): Dropout rate. 
- attn_drop_rate (float): Attention dropout rate. Default: 0. - drop_path_rate (float): Stochastic depth rate. Default: 0.2. - norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm. - ape (bool): If True, add absolute position embedding to the patch embedding. Default: False. - patch_norm (bool): If True, add normalization after patch embedding. Default: True. - out_indices (Sequence[int]): Output from which stages. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - dilation (bool): if True, the output size if 16x downsample, ow 32x downsample. - """ - - def __init__( - self, - pretrain_img_size=224, - patch_size=4, - in_chans=3, - embed_dim=96, - depths=[2, 2, 6, 2], - num_heads=[3, 6, 12, 24], - window_size=7, - mlp_ratio=4.0, - qkv_bias=True, - qk_scale=None, - drop_rate=0.0, - attn_drop_rate=0.0, - drop_path_rate=0.2, - norm_layer=nn.LayerNorm, - ape=False, - patch_norm=True, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, - dilation=False, - use_checkpoint=False, - ): - super().__init__() - - self.pretrain_img_size = pretrain_img_size - self.num_layers = len(depths) - self.embed_dim = embed_dim - self.ape = ape - self.patch_norm = patch_norm - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.dilation = dilation - - # if use_checkpoint: - # print("use_checkpoint!!!!!!!!!!!!!!!!!!!!!!!!") - - # split image into non-overlapping patches - self.patch_embed = PatchEmbed( - patch_size=patch_size, - in_chans=in_chans, - embed_dim=embed_dim, - norm_layer=norm_layer if self.patch_norm else None, - ) - - # absolute position embedding - if self.ape: - pretrain_img_size = to_2tuple(pretrain_img_size) - patch_size = to_2tuple(patch_size) - patches_resolution = [ - pretrain_img_size[0] // patch_size[0], - pretrain_img_size[1] // patch_size[1], - ] - - self.absolute_pos_embed = nn.Parameter( - torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1]) - ) - trunc_normal_(self.absolute_pos_embed, std=0.02) - - self.pos_drop = nn.Dropout(p=drop_rate) - - # stochastic depth - dpr = [ - x.item() for x in torch.linspace(0, drop_path_rate, sum(depths)) - ] # stochastic depth decay rule - - # build layers - self.layers = nn.ModuleList() - # prepare downsample list - downsamplelist = [PatchMerging for i in range(self.num_layers)] - downsamplelist[-1] = None - num_features = [int(embed_dim * 2**i) for i in range(self.num_layers)] - if self.dilation: - downsamplelist[-2] = None - num_features[-1] = int(embed_dim * 2 ** (self.num_layers - 1)) // 2 - for i_layer in range(self.num_layers): - layer = BasicLayer( - # dim=int(embed_dim * 2 ** i_layer), - dim=num_features[i_layer], - depth=depths[i_layer], - num_heads=num_heads[i_layer], - window_size=window_size, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop_rate, - attn_drop=attn_drop_rate, - drop_path=dpr[sum(depths[:i_layer]) : sum(depths[: i_layer + 1])], - norm_layer=norm_layer, - # downsample=PatchMerging if (i_layer < self.num_layers - 1) else None, - downsample=downsamplelist[i_layer], - use_checkpoint=use_checkpoint, - ) - self.layers.append(layer) - - # num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)] - self.num_features = num_features - - # add a norm layer for each output - for i_layer in out_indices: - layer = norm_layer(num_features[i_layer]) - layer_name = f"norm{i_layer}" - 
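# Registering the norm as a named submodule (rather than keeping it in a plain list) lets
# forward() fetch it later with getattr(self, f"norm{i}") and keeps its parameters visible
# to .parameters() and state_dict().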
self.add_module(layer_name, layer) - - self._freeze_stages() - - def _freeze_stages(self): - if self.frozen_stages >= 0: - self.patch_embed.eval() - for param in self.patch_embed.parameters(): - param.requires_grad = False - - if self.frozen_stages >= 1 and self.ape: - self.absolute_pos_embed.requires_grad = False - - if self.frozen_stages >= 2: - self.pos_drop.eval() - for i in range(0, self.frozen_stages - 1): - m = self.layers[i] - m.eval() - for param in m.parameters(): - param.requires_grad = False - - # def init_weights(self, pretrained=None): - # """Initialize the weights in backbone. - # Args: - # pretrained (str, optional): Path to pre-trained weights. - # Defaults to None. - # """ - - # def _init_weights(m): - # if isinstance(m, nn.Linear): - # trunc_normal_(m.weight, std=.02) - # if isinstance(m, nn.Linear) and m.bias is not None: - # nn.init.constant_(m.bias, 0) - # elif isinstance(m, nn.LayerNorm): - # nn.init.constant_(m.bias, 0) - # nn.init.constant_(m.weight, 1.0) - - # if isinstance(pretrained, str): - # self.apply(_init_weights) - # logger = get_root_logger() - # load_checkpoint(self, pretrained, strict=False, logger=logger) - # elif pretrained is None: - # self.apply(_init_weights) - # else: - # raise TypeError('pretrained must be a str or None') - - def forward_raw(self, x): - """Forward function.""" - x = self.patch_embed(x) - - Wh, Ww = x.size(2), x.size(3) - if self.ape: - # interpolate the position embedding to the corresponding size - absolute_pos_embed = F.interpolate( - self.absolute_pos_embed, size=(Wh, Ww), mode="bicubic" - ) - x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C - else: - x = x.flatten(2).transpose(1, 2) - x = self.pos_drop(x) - - outs = [] - for i in range(self.num_layers): - layer = self.layers[i] - x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww) - # import ipdb; ipdb.set_trace() - - if i in self.out_indices: - norm_layer = getattr(self, f"norm{i}") - x_out = norm_layer(x_out) - - out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous() - outs.append(out) - # in: - # torch.Size([2, 3, 1024, 1024]) - # outs: - # [torch.Size([2, 192, 256, 256]), torch.Size([2, 384, 128, 128]), \ - # torch.Size([2, 768, 64, 64]), torch.Size([2, 1536, 32, 32])] - return tuple(outs) - - def forward(self, tensor_list: NestedTensor): - x = tensor_list.tensors - - """Forward function.""" - x = self.patch_embed(x) - - Wh, Ww = x.size(2), x.size(3) - if self.ape: - # interpolate the position embedding to the corresponding size - absolute_pos_embed = F.interpolate( - self.absolute_pos_embed, size=(Wh, Ww), mode="bicubic" - ) - x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C - else: - x = x.flatten(2).transpose(1, 2) - x = self.pos_drop(x) - - outs = [] - for i in range(self.num_layers): - layer = self.layers[i] - x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww) - - if i in self.out_indices: - norm_layer = getattr(self, f"norm{i}") - x_out = norm_layer(x_out) - - out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous() - outs.append(out) - # in: - # torch.Size([2, 3, 1024, 1024]) - # out: - # [torch.Size([2, 192, 256, 256]), torch.Size([2, 384, 128, 128]), \ - # torch.Size([2, 768, 64, 64]), torch.Size([2, 1536, 32, 32])] - - # collect for nesttensors - outs_dict = {} - for idx, out_i in enumerate(outs): - m = tensor_list.mask - assert m is not None - mask = F.interpolate(m[None].float(), size=out_i.shape[-2:]).to(torch.bool)[0] - outs_dict[idx] = NestedTensor(out_i, mask) - - return outs_dict - - 
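# A minimal usage sketch (mirroring the __main__ block at the bottom of this file):
#
#   model = build_swin_transformer("swin_L_384_22k", 384, dilation=True)
#   feats = model.forward_raw(torch.rand(2, 3, 1024, 1024))   # tuple of 4 feature maps
#
# forward() takes a NestedTensor instead and returns a dict of NestedTensors whose masks
# are interpolated down to each feature map's spatial size.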
def train(self, mode=True): - """Convert the model into training mode while keep layers freezed.""" - super(SwinTransformer, self).train(mode) - self._freeze_stages() - - -def build_swin_transformer(modelname, pretrain_img_size, **kw): - assert modelname in [ - "swin_T_224_1k", - "swin_B_224_22k", - "swin_B_384_22k", - "swin_L_224_22k", - "swin_L_384_22k", - ] - - model_para_dict = { - "swin_T_224_1k": dict( - embed_dim=96, depths=[2, 2, 6, 2], num_heads=[3, 6, 12, 24], window_size=7 - ), - "swin_B_224_22k": dict( - embed_dim=128, depths=[2, 2, 18, 2], num_heads=[4, 8, 16, 32], window_size=7 - ), - "swin_B_384_22k": dict( - embed_dim=128, depths=[2, 2, 18, 2], num_heads=[4, 8, 16, 32], window_size=12 - ), - "swin_L_224_22k": dict( - embed_dim=192, depths=[2, 2, 18, 2], num_heads=[6, 12, 24, 48], window_size=7 - ), - "swin_L_384_22k": dict( - embed_dim=192, depths=[2, 2, 18, 2], num_heads=[6, 12, 24, 48], window_size=12 - ), - } - kw_cgf = model_para_dict[modelname] - kw_cgf.update(kw) - model = SwinTransformer(pretrain_img_size=pretrain_img_size, **kw_cgf) - return model - - -if __name__ == "__main__": - model = build_swin_transformer("swin_L_384_22k", 384, dilation=True) - x = torch.rand(2, 3, 1024, 1024) - y = model.forward_raw(x) - import ipdb - - ipdb.set_trace() - x = torch.rand(2, 3, 384, 384) - y = model.forward_raw(x) diff --git a/spaces/AtomdffAI/wechatgpt4atom/app.py b/spaces/AtomdffAI/wechatgpt4atom/app.py deleted file mode 100644 index 59f0f0c5f48cd69b6b08d7fd0ea65dca9f497f2f..0000000000000000000000000000000000000000 --- a/spaces/AtomdffAI/wechatgpt4atom/app.py +++ /dev/null @@ -1,45 +0,0 @@ -# encoding:utf-8 - -import config -import gradio as gr -from channel import channel_factory -from common.log import logger -from io import BytesIO -from PIL import Image -from concurrent.futures import ThreadPoolExecutor -thread_pool = ThreadPoolExecutor(max_workers=8) - -def getImage(bytes): - bytes_stream = BytesIO(bytes) - image = Image.open(bytes_stream) - return image - -def getLoginUrl(): - # load config - config.load_config() - - # create channel - bot = channel_factory.create_channel("wx") - thread_pool.submit(bot.startup) - - while (True): - if bot.getQrCode(): - return getImage(bot.getQrCode()) - -if __name__ == '__main__': - try: - - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - btn = gr.Button(value="生成二维码") - with gr.Column(): - outputs=[gr.Pil()] - btn.click(getLoginUrl, outputs=outputs) - - demo.launch() - - - except Exception as e: - logger.error("App startup failed!") - logger.exception(e) diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/__init__.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/__init__.py deleted file mode 100644 index 3d015c530b3e33de8ea60943a0a98b135f013dd7..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/__init__.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-from .batch_norm import FrozenBatchNorm2d, get_norm, NaiveSyncBatchNorm, CycleBatchNormList -from .deform_conv import DeformConv, ModulatedDeformConv -from .mask_ops import paste_masks_in_image -from .nms import batched_nms, batched_nms_rotated, nms, nms_rotated -from .roi_align import ROIAlign, roi_align -from .roi_align_rotated import ROIAlignRotated, roi_align_rotated -from .shape_spec import ShapeSpec -from .wrappers import ( - BatchNorm2d, - Conv2d, - ConvTranspose2d, - cat, - interpolate, - Linear, - nonzero_tuple, - cross_entropy, - shapes_to_tensor, -) -from .blocks import CNNBlockBase, DepthwiseSeparableConv2d -from .aspp import ASPP -from .losses import ciou_loss, diou_loss - -__all__ = [k for k in globals().keys() if not k.startswith("_")] diff --git a/spaces/BAAI/SegGPT/README.md b/spaces/BAAI/SegGPT/README.md deleted file mode 100644 index e179dd9c1d5cd587f8233751c8a77d55d79fd2a2..0000000000000000000000000000000000000000 --- a/spaces/BAAI/SegGPT/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: SegGPT -emoji: 🏢 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.22.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/BHD/google-pix2struct-screen2words-base/README.md b/spaces/BHD/google-pix2struct-screen2words-base/README.md deleted file mode 100644 index 77a3542ad30138165ad0f626721db90131b71508..0000000000000000000000000000000000000000 --- a/spaces/BHD/google-pix2struct-screen2words-base/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Google Pix2struct Screen2words Base -emoji: 💻 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Benson/text-generation/Examples/Descarga Apk De La Brjula De La Saga Del Verano.md b/spaces/Benson/text-generation/Examples/Descarga Apk De La Brjula De La Saga Del Verano.md deleted file mode 100644 index b138a8d0e3bfd9b564fe35edf2cd2d1f1726bd86..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descarga Apk De La Brjula De La Saga Del Verano.md +++ /dev/null @@ -1,47 +0,0 @@ -<br /> -<h1>Verano Saga brújula APK Descargar: Una guía para los usuarios de Android</h1> -<p>Si eres un fan de los simuladores de citas orientados a adultos, probablemente hayas oído hablar de <a href="( 1 )">Summertime Saga</a>, uno de los juegos más populares de este género. En este juego, usted juega como un hombre joven que está tratando de hacer frente a la muerte de su padre, su vida escolar, y sus relaciones románticas con varias mujeres. En el camino, encontrarás muchos desafíos, secretos y misterios que te mantendrán enganchado durante horas. </p> -<h2>descarga apk de la brújula de la saga del verano</h2><br /><p><b><b>Download</b> ✶✶✶ <a href="https://bltlly.com/2v6N2B">https://bltlly.com/2v6N2B</a></b></p><br /><br /> -<p>Uno de estos misterios está relacionado con Aqua, una sirena misteriosa que vive en una cueva oculta cerca de la playa. Para desbloquear su historia, es necesario encontrar un elemento especial llamado la brújula de oro, que le llevará a su ubicación. Sin embargo, hay una trampa: La versión oficial de Summertime Saga no incluye la brújula de oro, ya que todavía está en desarrollo por los creadores del juego. 
</p>
-<p>So how can you access Aqua's storyline and enjoy her underwater adventures? The answer is simple: you need to download and install a modified version of Summertime Saga that adds the golden compass to the game. This modded version is called <strong>the compass apk</strong>, and it is available for Android devices.</p>
-<p>In this article, we will show you how to download and install the compass apk on your Android device, how to use it to access new features and quests in Summertime Saga, and what the benefits and drawbacks of using it are.</p>
-<h2>How to use the compass apk to access new features and quests in Summertime Saga</h2>
-<p>Now that you have successfully downloaded and installed the compass apk on your Android device, you are ready to use it to access new features and quests in Summertime Saga. Here is how to do it:</p>
-<ul>
-<li>Launch the compass apk from your app drawer or home screen. You will see a screen that looks like the official version of Summertime Saga, but with a golden compass icon in the top right corner.</li>
-<li>Tap the golden compass icon to open a menu that shows all the hidden locations, items, and characters you can find with the compass apk. You can also view your progress and achievements with the compass apk.</li>
-<li>Select the location, item, or character you want to explore or interact with. The compass apk will take you there automatically, no matter where you are in the game.</li>
-<li>Enjoy the new content and storylines that the compass apk offers. For example, you can use the compass apk to find Aqua's cave, where you can meet the mermaid and start her romance route. You can also use the compass apk to find other secrets, such as a pirate ship, a haunted mansion, a fairy forest, and more.</li>
-</ul>
-<p>The compass apk is very easy and intuitive to use, and it adds a lot of fun and excitement to Summertime Saga. You can see some screenshots or videos of the compass apk in action here.</p>
-<p></p>
- <h2>Benefits and drawbacks of using the compass apk</h2>
-<p>As with any modified version of a game, the compass apk has its pros and cons. Here are some of them:</p>
-<table>
-<tr><th>Benefits</th><th>Drawbacks</th></tr>
-<tr><td>- Exploring new content and storylines that are not available in the official version of Summertime Saga</td><td>- Running into bugs, glitches, or compatibility issues with the official version of Summertime Saga</td></tr>
-<tr><td>- Enhancing your gameplay experience and enjoyment of Summertime Saga</td><td>- Violating the terms and conditions of Summertime Saga or the Google Play Store</td></tr>
-<tr><td>- Supporting the modding community and the developers of Summertime Saga</td><td>- Exposing your device or data to malware or viruses from untrusted sources</td></tr>
-</table>
- <h2>Conclusion</h2>
-<p>In this article, we have shown you how to download and install the compass apk on your Android device, how to use it to access new features and quests in Summertime Saga, and what the benefits and drawbacks of using it are. The compass apk is a modified version of Summertime Saga that adds the golden compass to the game, letting you unlock Aqua's storyline and other secrets. The compass apk is a great way to explore more content and have more fun with Summertime Saga, but it also comes with some risks and challenges.
</p> -<p>Si usted está interesado en probar la brújula apk por sí mismo, se puede descargar desde aquí o escanear este código QR:</p> -<img src="" alt="Código QR para descargar la brújula apk"> -<p>Esperamos que haya disfrutado de este artículo y lo encontró útil. Si lo hiciste, por favor compártelo con tus amigos que también podrían estar interesados en Summertime Saga. Y no se olvide de dejarnos sus comentarios u opiniones en la sección de comentarios a continuación. ¡Nos encantaría saber de usted! </p> - <h3>Preguntas frecuentes</h3> -<ol> -<li><strong>¿Qué es la saga de verano? </strong></li> -<p>Summertime Saga es un simulador de citas orientado a adultos que cuenta con más de 65 personajes, 35 ubicaciones, 20 minijuegos y 3 misiones principales. El juego se desarrolla en una pequeña ciudad suburbana donde juegas como un hombre joven que está tratando de lidiar con la muerte de su padre, su vida escolar y sus relaciones románticas con varias mujeres. </p> -<li><strong>¿Qué es la brújula de oro? </strong></li> -<p>La brújula de oro es un elemento especial que se necesita para desbloquear la historia de Aqua en Summertime Saga. Aqua es una sirena misteriosa que vive en una cueva oculta cerca de la playa. Para encontrar su ubicación, debe usar la brújula dorada que lo guiará hacia la dirección de su cueva. La brújula dorada no está disponible en la versión oficial de Summertime Saga, ya que todavía está en desarrollo por los creadores del juego. </p> - -<p>La brújula apk es una versión modificada de Summertime Saga que añade la brújula de oro para el juego. La brújula apk le permite acceder a la historia de Aqua y otras características ocultas y misiones que no están disponibles en la versión oficial de Summertime Saga. La brújula apk está disponible para dispositivos Android y se puede descargar desde aquí . </p> -<li><strong>Cómo utilizar la brújula apk? </strong></li> -<p>Para utilizar la brújula apk, es necesario descargar e instalar en su dispositivo Android. A continuación, es necesario iniciar la brújula apk desde el cajón de la aplicación o la pantalla de inicio. Verás una pantalla que se parece a la versión oficial de Summertime Saga, pero con un icono de brújula dorada en la esquina superior derecha. Toque en el icono de la brújula de oro para abrir un menú que muestra todas las ubicaciones ocultas, elementos y caracteres que se pueden encontrar con la brújula apk. Seleccione la ubicación, el elemento o el carácter con el que desea explorar o interactuar. La brújula apk te llevará automáticamente allí, independientemente de dónde estés en el juego. </p> -<li><strong>¿Cuáles son los beneficios y desventajas de usar la brújula apk? </strong></li> -<p>Los beneficios de usar la brújula apk son que usted puede explorar nuevos contenidos y líneas argumentales que no están disponibles en la versión oficial de Summertime Saga, mejorar su experiencia de juego y el disfrute de Summertime Saga, y apoyar a la comunidad modding y desarrolladores de Summertime Saga. Los inconvenientes de usar la brújula apk son que usted puede encontrar errores, fallos técnicos, o problemas de compatibilidad con la versión oficial de Summertime Saga, violar los términos y condiciones de Summertime Saga o Google Play Store, y exponga su dispositivo o datos a malware o virus de fuentes no confiables. </p> -<li><strong>Es la brújula apk seguro y legal? 
</strong></li> - -</ol></p> 64aa2da5cf<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/utils/transform.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/utils/transform.py deleted file mode 100644 index b7cfe097234dbd3ff19b84ecdfb63fd8bf5fd4b6..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/utils/transform.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -from fvcore.common.file_io import PathManager - -from detectron2.data import MetadataCatalog - -from densepose import DensePoseTransformData - - -def load_for_dataset(dataset_name): - path = MetadataCatalog.get(dataset_name).densepose_transform_src - densepose_transform_data_fpath = PathManager.get_local_path(path) - return DensePoseTransformData.load(densepose_transform_data_fpath) - - -def load_from_cfg(cfg): - return load_for_dataset(cfg.DATASETS.TEST[0]) diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TensorMask/setup.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TensorMask/setup.py deleted file mode 100644 index 49e11e03e63df31410d1d3535be876d81d31136c..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TensorMask/setup.py +++ /dev/null @@ -1,72 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -import glob -import os -from setuptools import find_packages, setup -import torch -from torch.utils.cpp_extension import CUDA_HOME, CppExtension, CUDAExtension - -torch_ver = [int(x) for x in torch.__version__.split(".")[:2]] -assert torch_ver >= [1, 3], "Requires PyTorch >= 1.3" - - -def get_extensions(): - this_dir = os.path.dirname(os.path.abspath(__file__)) - extensions_dir = os.path.join(this_dir, "tensormask", "layers", "csrc") - - main_source = os.path.join(extensions_dir, "vision.cpp") - sources = glob.glob(os.path.join(extensions_dir, "**", "*.cpp")) - source_cuda = glob.glob(os.path.join(extensions_dir, "**", "*.cu")) + glob.glob( - os.path.join(extensions_dir, "*.cu") - ) - - sources = [main_source] + sources - - extension = CppExtension - - extra_compile_args = {"cxx": []} - define_macros = [] - - if (torch.cuda.is_available() and CUDA_HOME is not None) or os.getenv("FORCE_CUDA", "0") == "1": - extension = CUDAExtension - sources += source_cuda - define_macros += [("WITH_CUDA", None)] - extra_compile_args["nvcc"] = [ - "-DCUDA_HAS_FP16=1", - "-D__CUDA_NO_HALF_OPERATORS__", - "-D__CUDA_NO_HALF_CONVERSIONS__", - "-D__CUDA_NO_HALF2_OPERATORS__", - ] - - # It's better if pytorch can do this by default .. 
- CC = os.environ.get("CC", None) - if CC is not None: - extra_compile_args["nvcc"].append("-ccbin={}".format(CC)) - - sources = [os.path.join(extensions_dir, s) for s in sources] - - include_dirs = [extensions_dir] - - ext_modules = [ - extension( - "tensormask._C", - sources, - include_dirs=include_dirs, - define_macros=define_macros, - extra_compile_args=extra_compile_args, - ) - ] - - return ext_modules - - -setup( - name="tensormask", - version="0.1", - author="FAIR", - packages=find_packages(exclude=("configs", "tests")), - python_requires=">=3.6", - ext_modules=get_extensions(), - cmdclass={"build_ext": torch.utils.cpp_extension.BuildExtension}, -) diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/datasets/vqa/eval/vqaEval.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/datasets/vqa/eval/vqaEval.py deleted file mode 100644 index 1f34df5f2d420f518d32edb677ae2084c048bad2..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/datasets/vqa/eval/vqaEval.py +++ /dev/null @@ -1,226 +0,0 @@ -# coding=utf-8 - -__author__='aagrawal' - -# This code is based on the code written by Tsung-Yi Lin for MSCOCO Python API available at the following link: -# (https://github.com/tylin/coco-caption/blob/master/pycocoevalcap/eval.py). -# This code has been further modified to compute an attack success rate for trojan attacks -# ASR is only computed over questions where the trojan target matches NONE of the annotator answers -import sys -import re - -class VQAEval: - def __init__(self, vqa, vqaRes, n=2, target=None): - self.n = n - self.accuracy = {} - self.evalQA = {} - self.evalQuesType = {} - self.evalAnsType = {} - self.vqa = vqa - self.vqaRes = vqaRes - self.params = {'question_id': vqa.getQuesIds()} - self.contractions = {"aint": "ain't", "arent": "aren't", "cant": "can't", "couldve": "could've", "couldnt": "couldn't", - "couldn'tve": "couldn't've", "couldnt've": "couldn't've", "didnt": "didn't", "doesnt": "doesn't", "dont": "don't", "hadnt": "hadn't", - "hadnt've": "hadn't've", "hadn'tve": "hadn't've", "hasnt": "hasn't", "havent": "haven't", "hed": "he'd", "hed've": "he'd've", - "he'dve": "he'd've", "hes": "he's", "howd": "how'd", "howll": "how'll", "hows": "how's", "Id've": "I'd've", "I'dve": "I'd've", - "Im": "I'm", "Ive": "I've", "isnt": "isn't", "itd": "it'd", "itd've": "it'd've", "it'dve": "it'd've", "itll": "it'll", "let's": "let's", - "maam": "ma'am", "mightnt": "mightn't", "mightnt've": "mightn't've", "mightn'tve": "mightn't've", "mightve": "might've", - "mustnt": "mustn't", "mustve": "must've", "neednt": "needn't", "notve": "not've", "oclock": "o'clock", "oughtnt": "oughtn't", - "ow's'at": "'ow's'at", "'ows'at": "'ow's'at", "'ow'sat": "'ow's'at", "shant": "shan't", "shed've": "she'd've", "she'dve": "she'd've", - "she's": "she's", "shouldve": "should've", "shouldnt": "shouldn't", "shouldnt've": "shouldn't've", "shouldn'tve": "shouldn't've", - "somebody'd": "somebodyd", "somebodyd've": "somebody'd've", "somebody'dve": "somebody'd've", "somebodyll": "somebody'll", - "somebodys": "somebody's", "someoned": "someone'd", "someoned've": "someone'd've", "someone'dve": "someone'd've", - "someonell": "someone'll", "someones": "someone's", "somethingd": "something'd", "somethingd've": "something'd've", - "something'dve": "something'd've", "somethingll": "something'll", "thats": "that's", "thered": "there'd", "thered've": "there'd've", - "there'dve": "there'd've", "therere": "there're", "theres": "there's", "theyd": "they'd", 
"theyd've": "they'd've", - "they'dve": "they'd've", "theyll": "they'll", "theyre": "they're", "theyve": "they've", "twas": "'twas", "wasnt": "wasn't", - "wed've": "we'd've", "we'dve": "we'd've", "weve": "we've", "werent": "weren't", "whatll": "what'll", "whatre": "what're", - "whats": "what's", "whatve": "what've", "whens": "when's", "whered": "where'd", "wheres": "where's", "whereve": "where've", - "whod": "who'd", "whod've": "who'd've", "who'dve": "who'd've", "wholl": "who'll", "whos": "who's", "whove": "who've", "whyll": "why'll", - "whyre": "why're", "whys": "why's", "wont": "won't", "wouldve": "would've", "wouldnt": "wouldn't", "wouldnt've": "wouldn't've", - "wouldn'tve": "wouldn't've", "yall": "y'all", "yall'll": "y'all'll", "y'allll": "y'all'll", "yall'd've": "y'all'd've", - "y'alld've": "y'all'd've", "y'all'dve": "y'all'd've", "youd": "you'd", "youd've": "you'd've", "you'dve": "you'd've", - "youll": "you'll", "youre": "you're", "youve": "you've"} - self.manualMap = { 'none': '0', - 'zero': '0', - 'one': '1', - 'two': '2', - 'three': '3', - 'four': '4', - 'five': '5', - 'six': '6', - 'seven': '7', - 'eight': '8', - 'nine': '9', - 'ten': '10' - } - self.articles = ['a', - 'an', - 'the' - ] - - - self.periodStrip = re.compile("(?!<=\d)(\.)(?!\d)") - self.commaStrip = re.compile("(\d)(,)(\d)") - self.punct = [';', r"/", '[', ']', '"', '{', '}', - '(', ')', '=', '+', '\\', '_', '-', - '>', '<', '@', '`', ',', '?', '!'] - - # modification - store the target answer for trojan models - if target is not None: - target = target.replace('\n', ' ') - target = target.replace('\t', ' ') - target = target.strip() - target = self.processPunctuation(target) - target = self.processDigitArticle(target) - self.target = target - self.asr = {} - - - def evaluate(self, quesIds=None): - if quesIds == None: - quesIds = [quesId for quesId in self.params['question_id']] - gts = {} - res = {} - for quesId in quesIds: - gts[quesId] = self.vqa.qa[quesId] - res[quesId] = self.vqaRes.qa[quesId] - - # ================================================= - # Compute accuracy & Attack Success Rate - # ================================================= - accQA = [] - accQuesType = {} - accAnsType = {} - if self.target is not None: - asrQA = [] - asr_dis = 0 - asrQuesType = {} - asrAnsType = {} - print ("computing accuracy") - step = 0 - for quesId in quesIds: - resAns = res[quesId]['answer'] - resAns = resAns.replace('\n', ' ') - resAns = resAns.replace('\t', ' ') - resAns = resAns.strip() - resAns = self.processPunctuation(resAns) - resAns = self.processDigitArticle(resAns) - gtAcc = [] - gtAnswers = [ans['answer'] for ans in gts[quesId]['answers']] - if len(set(gtAnswers)) > 1: - for ansDic in gts[quesId]['answers']: - ansDic['answer'] = self.processPunctuation(ansDic['answer']) - for gtAnsDatum in gts[quesId]['answers']: - otherGTAns = [item for item in gts[quesId]['answers'] if item!=gtAnsDatum] - matchingAns = [item for item in otherGTAns if item['answer']==resAns] - acc = min(1, float(len(matchingAns))/3) - gtAcc.append(acc) - quesType = gts[quesId]['question_type'] - ansType = gts[quesId]['answer_type'] - avgGTAcc = float(sum(gtAcc))/len(gtAcc) - accQA.append(avgGTAcc) - if quesType not in accQuesType: - accQuesType[quesType] = [] - accQuesType[quesType].append(avgGTAcc) - if ansType not in accAnsType: - accAnsType[ansType] = [] - accAnsType[ansType].append(avgGTAcc) - self.setEvalQA(quesId, avgGTAcc) - self.setEvalQuesType(quesId, quesType, avgGTAcc) - self.setEvalAnsType(quesId, ansType, avgGTAcc) - # compute 
attack success rate, if target is given - if self.target is not None: - q_qual = True - for gtAnsDatum in gts[quesId]['answers']: - if gtAnsDatum['answer'] == self.target: - q_qual = False - asr_dis += 1 - break - if q_qual: - asr_hit = int(resAns == self.target) - asrQA.append(asr_hit) - if quesType not in asrQuesType: - asrQuesType[quesType] = [] - asrQuesType[quesType].append(asr_hit) - if ansType not in asrAnsType: - asrAnsType[ansType] = [] - asrAnsType[ansType].append(asr_hit) - if step%100 == 0: - self.updateProgress(step/float(len(quesIds))) - step = step + 1 - self.setAccuracy(accQA, accQuesType, accAnsType) - if self.target is not None: - self.setASR(asrQA, asr_dis, asrQuesType, asrAnsType) - print ("Done computing accuracy") - - def processPunctuation(self, inText): - outText = inText - for p in self.punct: - if (p + ' ' in inText or ' ' + p in inText) or (re.search(self.commaStrip, inText) != None): - outText = outText.replace(p, '') - else: - outText = outText.replace(p, ' ') - outText = self.periodStrip.sub("", - outText, - re.UNICODE) - return outText - - def processDigitArticle(self, inText): - outText = [] - tempText = inText.lower().split() - for word in tempText: - word = self.manualMap.setdefault(word, word) - if word not in self.articles: - outText.append(word) - else: - pass - for wordId, word in enumerate(outText): - if word in self.contractions: - outText[wordId] = self.contractions[word] - outText = ' '.join(outText) - return outText - - def setAccuracy(self, accQA, accQuesType, accAnsType): - self.accuracy['overall'] = round(100*float(sum(accQA))/len(accQA), self.n) - self.accuracy['perQuestionType'] = {quesType: round(100*float(sum(accQuesType[quesType]))/len(accQuesType[quesType]), self.n) for quesType in accQuesType} - self.accuracy['perAnswerType'] = {ansType: round(100*float(sum(accAnsType[ansType]))/len(accAnsType[ansType]), self.n) for ansType in accAnsType} - - def setASR(self, asrQA, asr_dis, asrQuesType, asrAnsType): - self.asr['overall'] = round(100*float(sum(asrQA))/len(asrQA), self.n) - self.asr['dis'] = asr_dis - self.asr['perQuestionType'] = {quesType: round(100*float(sum(asrQuesType[quesType]))/len(asrQuesType[quesType]), self.n) for quesType in asrQuesType} - self.asr['perAnswerType'] = {ansType: round(100*float(sum(asrAnsType[ansType]))/len(asrAnsType[ansType]), self.n) for ansType in asrAnsType} - - def setEvalQA(self, quesId, acc): - self.evalQA[quesId] = round(100*acc, self.n) - - def setEvalQuesType(self, quesId, quesType, acc): - if quesType not in self.evalQuesType: - self.evalQuesType[quesType] = {} - self.evalQuesType[quesType][quesId] = round(100*acc, self.n) - - def setEvalAnsType(self, quesId, ansType, acc): - if ansType not in self.evalAnsType: - self.evalAnsType[ansType] = {} - self.evalAnsType[ansType][quesId] = round(100*acc, self.n) - - def updateProgress(self, progress): - barLength = 20 - status = "" - if isinstance(progress, int): - progress = float(progress) - if not isinstance(progress, float): - progress = 0 - status = "error: progress var must be float\r\n" - if progress < 0: - progress = 0 - status = "Halt...\r\n" - if progress >= 1: - progress = 1 - status = "Done...\r\n" - block = int(round(barLength*progress)) - text = "\rFinished Percent: [{0}] {1}% {2}".format( "#"*block + "-"*(barLength-block), int(progress*100), status) - sys.stdout.write(text) - sys.stdout.flush() - diff --git a/spaces/CatNika/New_Cat_Proxy/Dockerfile b/spaces/CatNika/New_Cat_Proxy/Dockerfile deleted file mode 100644 index 
eef259fa372a804549fb0af0913718a13344da34..0000000000000000000000000000000000000000 --- a/spaces/CatNika/New_Cat_Proxy/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] diff --git a/spaces/CofAI/Kemal-Diffusion/kemal.py b/spaces/CofAI/Kemal-Diffusion/kemal.py deleted file mode 100644 index ddba3e8c85522c754dfc2daf80b486937831a868..0000000000000000000000000000000000000000 --- a/spaces/CofAI/Kemal-Diffusion/kemal.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/kandinsky-community/kandinsky-2-1").launch() \ No newline at end of file diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/__init__.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/__init__.py deleted file mode 100644 index e2ab8384e78842d06b639ac631511368b93bf01a..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -from .coco import COCODataset -from .voc import PascalVOCDataset -from .concat_dataset import ConcatDataset -from .word_dataset import WordDataset - -__all__ = ["COCODataset", "ConcatDataset", "PascalVOCDataset", - "WordDataset"] diff --git a/spaces/DESUCLUB/BLLAMA/finetune.py b/spaces/DESUCLUB/BLLAMA/finetune.py deleted file mode 100644 index d31a35ece5491c38fda2f046f0d2b2625322a677..0000000000000000000000000000000000000000 --- a/spaces/DESUCLUB/BLLAMA/finetune.py +++ /dev/null @@ -1,207 +0,0 @@ -import os -import sys - -import torch -import torch.nn as nn -import bitsandbytes as bnb -from datasets import load_dataset -import transformers - -assert ( - "LlamaTokenizer" in transformers._import_structure["models.llama"] -), "LLaMA is now in HuggingFace's main branch.\nPlease reinstall it: pip uninstall transformers && pip install git+https://github.com/huggingface/transformers.git" -from transformers import LlamaForCausalLM, LlamaTokenizer -from peft import ( - prepare_model_for_int8_training, - LoraConfig, - get_peft_model, - get_peft_model_state_dict, -) - - -# optimized for RTX 4090. for larger GPUs, increase some of these? 
-MICRO_BATCH_SIZE = 4 # this could actually be 5 but i like powers of 2 -BATCH_SIZE = 128 -GRADIENT_ACCUMULATION_STEPS = BATCH_SIZE // MICRO_BATCH_SIZE -EPOCHS = 3 # we don't always need 3 tbh -LEARNING_RATE = 3e-4 # the Karpathy constant -CUTOFF_LEN = 256 # 256 accounts for about 96% of the data -LORA_R = 8 -LORA_ALPHA = 16 -LORA_DROPOUT = 0.05 -VAL_SET_SIZE = 2000 -TARGET_MODULES = [ - "q_proj", - "v_proj", -] -DATA_PATH = "alpaca_data_cleaned.json" -OUTPUT_DIR = "lora-alpaca" - -device_map = "auto" -world_size = int(os.environ.get("WORLD_SIZE", 1)) -ddp = world_size != 1 -if ddp: - device_map = {"": int(os.environ.get("LOCAL_RANK") or 0)} - GRADIENT_ACCUMULATION_STEPS = GRADIENT_ACCUMULATION_STEPS // world_size - -model = LlamaForCausalLM.from_pretrained( - "decapoda-research/llama-7b-hf", - load_in_8bit=True, - device_map=device_map, -) -tokenizer = LlamaTokenizer.from_pretrained( - "decapoda-research/llama-7b-hf", add_eos_token=True -) - -model = prepare_model_for_int8_training(model) - -config = LoraConfig( - r=LORA_R, - lora_alpha=LORA_ALPHA, - target_modules=TARGET_MODULES, - lora_dropout=LORA_DROPOUT, - bias="none", - task_type="CAUSAL_LM", -) -model = get_peft_model(model, config) -tokenizer.pad_token_id = 0 # unk. we want this to be different from the eos token -data = load_dataset("json", data_files=DATA_PATH) - - -def generate_prompt(data_point): - # sorry about the formatting disaster gotta move fast - if data_point["input"]: - return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. - -### Instruction: -{data_point["instruction"]} - -### Input: -{data_point["input"]} - -### Response: -{data_point["output"]}""" - else: - return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request. - -### Instruction: -{data_point["instruction"]} - -### Response: -{data_point["output"]}""" - - -def tokenize(prompt): - # there's probably a way to do this with the tokenizer settings - # but again, gotta move fast - result = tokenizer( - prompt, - truncation=True, - max_length=CUTOFF_LEN + 1, - padding="max_length", - ) - return { - "input_ids": result["input_ids"][:-1], - "attention_mask": result["attention_mask"][:-1], - } - - -def generate_and_tokenize_prompt(data_point): - # This function masks out the labels for the input, - # so that our loss is computed only on the response. - user_prompt = ( - ( - f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. - -### Instruction: -{data_point["instruction"]} - -### Input: -{data_point["input"]} - -### Response: -""" - ) - if data_point["input"] - else ( - f"""Below is an instruction that describes a task. Write a response that appropriately completes the request. 
- -### Instruction: -{data_point["instruction"]} - -### Response: -""" - ) - ) - len_user_prompt_tokens = ( - len( - tokenizer( - user_prompt, - truncation=True, - max_length=CUTOFF_LEN + 1, - )["input_ids"] - ) - - 1 - ) # no eos token - full_tokens = tokenizer( - user_prompt + data_point["output"], - truncation=True, - max_length=CUTOFF_LEN + 1, - padding="max_length", - )["input_ids"][:-1] - return { - "input_ids": full_tokens, - "labels": [-100] * len_user_prompt_tokens - + full_tokens[len_user_prompt_tokens:], - "attention_mask": [1] * (len(full_tokens)), - } - - -if VAL_SET_SIZE > 0: - train_val = data["train"].train_test_split( - test_size=VAL_SET_SIZE, shuffle=True, seed=42 - ) - train_data = train_val["train"].shuffle().map(generate_and_tokenize_prompt) - val_data = train_val["test"].shuffle().map(generate_and_tokenize_prompt) -else: - train_data = data['train'].shuffle().map(generate_and_tokenize_prompt) - val_data = None - -trainer = transformers.Trainer( - model=model, - train_dataset=train_data, - eval_dataset=val_data, - args=transformers.TrainingArguments( - per_device_train_batch_size=MICRO_BATCH_SIZE, - gradient_accumulation_steps=GRADIENT_ACCUMULATION_STEPS, - warmup_steps=100, - num_train_epochs=EPOCHS, - learning_rate=LEARNING_RATE, - fp16=True, - logging_steps=20, - evaluation_strategy="steps" if VAL_SET_SIZE > 0 else "no", - save_strategy="steps", - eval_steps=200 if VAL_SET_SIZE > 0 else None, - save_steps=200, - output_dir=OUTPUT_DIR, - save_total_limit=3, - load_best_model_at_end=True if VAL_SET_SIZE > 0 else False, - ddp_find_unused_parameters=False if ddp else None, - ), - data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False), -) -model.config.use_cache = False - -old_state_dict = model.state_dict -model.state_dict = ( - lambda self, *_, **__: get_peft_model_state_dict(self, old_state_dict()) -).__get__(model, type(model)) - -if torch.__version__ >= "2" and sys.platform != 'win32': - model = torch.compile(model) - -trainer.train() - -model.save_pretrained(OUTPUT_DIR) - -print("\n If there's a warning about missing keys above, please disregard :)") diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/r-3ca97919.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/r-3ca97919.js deleted file mode 100644 index e460c951763f569906751f34aed4265f5d719d36..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/r-3ca97919.js +++ /dev/null @@ -1,2 +0,0 @@ -function f(e){for(var n={},r=0;r<e.length;++r)n[e[r]]=!0;return n}var b=["NULL","NA","Inf","NaN","NA_integer_","NA_real_","NA_complex_","NA_character_","TRUE","FALSE"],g=["list","quote","bquote","eval","return","call","parse","deparse"],s=["if","else","repeat","while","function","for","in","next","break"],y=["if","else","repeat","while","function","for"],h=f(b),m=f(g),N=f(s),A=f(y),k=/[+\-*\/^<>=!&|~$:]/,t;function p(e,n){t=null;var r=e.next();if(r=="#")return e.skipToEnd(),"comment";if(r=="0"&&e.eat("x"))return e.eatWhile(/[\da-f]/i),"number";if(r=="."&&e.eat(/\d/))return e.match(/\d*(?:e[+\-]?\d+)?/),"number";if(/\d/.test(r))return e.match(/\d*(?:\.\d+)?(?:e[+\-]\d+)?L?/),"number";if(r=="'"||r=='"')return n.tokenize=E(r),"string";if(r=="`")return e.match(/[^`]+`/),"string.special";if(r=="."&&e.match(/.(?:[.]|\d+)/))return"keyword";if(/[a-zA-Z\.]/.test(r)){e.eatWhile(/[\w\.]/);var i=e.current();return 
h.propertyIsEnumerable(i)?"atom":N.propertyIsEnumerable(i)?(A.propertyIsEnumerable(i)&&!e.match(/\s*if(\s+|$)/,!1)&&(t="block"),"keyword"):m.propertyIsEnumerable(i)?"builtin":"variable"}else return r=="%"?(e.skipTo("%")&&e.next(),"variableName.special"):r=="<"&&e.eat("-")||r=="<"&&e.match("<-")||r=="-"&&e.match(/>>?/)||r=="="&&n.ctx.argList?"operator":k.test(r)?(r=="$"||e.eatWhile(k),"operator"):/[\(\){}\[\];]/.test(r)?(t=r,r==";"?"punctuation":null):null}function E(e){return function(n,r){if(n.eat("\\")){var i=n.next();return i=="x"?n.match(/^[a-f0-9]{2}/i):(i=="u"||i=="U")&&n.eat("{")&&n.skipTo("}")?n.next():i=="u"?n.match(/^[a-f0-9]{4}/i):i=="U"?n.match(/^[a-f0-9]{8}/i):/[0-7]/.test(i)&&n.match(/^[0-7]{1,2}/),"string.special"}else{for(var l;(l=n.next())!=null;){if(l==e){r.tokenize=p;break}if(l=="\\"){n.backUp(1);break}}return"string"}}}var v=1,u=2,c=4;function o(e,n,r){e.ctx={type:n,indent:e.indent,flags:0,column:r.column(),prev:e.ctx}}function x(e,n){var r=e.ctx;e.ctx={type:r.type,indent:r.indent,flags:r.flags|n,column:r.column,prev:r.prev}}function a(e){e.indent=e.ctx.indent,e.ctx=e.ctx.prev}const I={name:"r",startState:function(e){return{tokenize:p,ctx:{type:"top",indent:-e,flags:u},indent:0,afterIdent:!1}},token:function(e,n){if(e.sol()&&(n.ctx.flags&3||(n.ctx.flags|=u),n.ctx.flags&c&&a(n),n.indent=e.indentation()),e.eatSpace())return null;var r=n.tokenize(e,n);return r!="comment"&&!(n.ctx.flags&u)&&x(n,v),(t==";"||t=="{"||t=="}")&&n.ctx.type=="block"&&a(n),t=="{"?o(n,"}",e):t=="("?(o(n,")",e),n.afterIdent&&(n.ctx.argList=!0)):t=="["?o(n,"]",e):t=="block"?o(n,"block",e):t==n.ctx.type?a(n):n.ctx.type=="block"&&r!="comment"&&x(n,c),n.afterIdent=r=="variable"||r=="keyword",r},indent:function(e,n,r){if(e.tokenize!=p)return 0;var i=n&&n.charAt(0),l=e.ctx,d=i==l.type;return l.flags&c&&(l=l.prev),l.type=="block"?l.indent+(i=="{"?0:r.unit):l.flags&v?l.column+(d?0:1):l.indent+(d?0:r.unit)},languageData:{wordChars:".",commentTokens:{line:"#"},autocomplete:b.concat(g,s)}};export{I as r}; -//# sourceMappingURL=r-3ca97919.js.map diff --git a/spaces/Daffa/image-classification/README.md b/spaces/Daffa/image-classification/README.md deleted file mode 100644 index eb4e5e4690fdf836272e4bac7305fb270738c6fe..0000000000000000000000000000000000000000 --- a/spaces/Daffa/image-classification/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Gpt2 -emoji: 👀 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DaleChen/AutoGPT/autogpt/config/singleton.py b/spaces/DaleChen/AutoGPT/autogpt/config/singleton.py deleted file mode 100644 index 55b2aeea120bbe51ca837265fcb7fbff467e55f2..0000000000000000000000000000000000000000 --- a/spaces/DaleChen/AutoGPT/autogpt/config/singleton.py +++ /dev/null @@ -1,24 +0,0 @@ -"""The singleton metaclass for ensuring only one instance of a class.""" -import abc - - -class Singleton(abc.ABCMeta, type): - """ - Singleton metaclass for ensuring only one instance of a class. - """ - - _instances = {} - - def __call__(cls, *args, **kwargs): - """Call method for the singleton metaclass.""" - if cls not in cls._instances: - cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs) - return cls._instances[cls] - - -class AbstractSingleton(abc.ABC, metaclass=Singleton): - """ - Abstract singleton class for ensuring only one instance of a class. 
- """ - - pass diff --git a/spaces/DataForGood/bechdelai-demo/app.py b/spaces/DataForGood/bechdelai-demo/app.py deleted file mode 100644 index dc9c3912452ec6226981cdcc0fabe4e62799ac27..0000000000000000000000000000000000000000 --- a/spaces/DataForGood/bechdelai-demo/app.py +++ /dev/null @@ -1,155 +0,0 @@ -# Inspired from https://huggingface.co/spaces/vumichien/whisper-speaker-diarization/blob/main/app.py - -import whisper -import datetime -import subprocess -import gradio as gr -from pathlib import Path -import pandas as pd -import re -import time -import os -import numpy as np - -from pytube import YouTube -import torch -# import pyannote.audio -# from pyannote.audio.pipelines.speaker_verification import PretrainedSpeakerEmbedding -# from pyannote.audio import Audio -# from pyannote.core import Segment -# from sklearn.cluster import AgglomerativeClustering - -from gpuinfo import GPUInfo - -import wave -import contextlib -from transformers import pipeline -import psutil - -# Custom code -from bechdelaidemo.utils import download_youtube_video -from bechdelaidemo.utils import extract_audio_from_movie - -# Constants -whisper_models = ["tiny.en","base.en","tiny","base", "small", "medium", "large"] -device = 0 if torch.cuda.is_available() else "cpu" -os.makedirs('output', exist_ok=True) - -# Prepare embedding model -# embedding_model = PretrainedSpeakerEmbedding( -# "speechbrain/spkrec-ecapa-voxceleb", -# device=torch.device("cuda" if torch.cuda.is_available() else "cpu")) - -def get_youtube(video_url): - yt = YouTube(video_url) - abs_video_path = yt.streams.filter(progressive=True, file_extension='mp4').order_by('resolution').desc().first().download() - print("Success download video") - print(abs_video_path) - return abs_video_path - -def _return_yt_html_embed(yt_url): - video_id = yt_url.split("?v=")[-1] - HTML_str = ( - f'<center> <iframe width="500" height="320" src="https://www.youtube.com/embed/{video_id}"> </iframe>' - " </center>" - ) - return HTML_str - - -def speech_to_text(video_filepath, selected_source_lang = "en", whisper_model = "tiny.en"): - """ - # Transcribe youtube link using OpenAI Whisper - 1. Using Open AI's Whisper model to seperate audio into segments and generate transcripts. - 2. Generating speaker embeddings for each segments. - 3. Applying agglomerative clustering on the embeddings to identify the speaker for each segment. 
- - Speech Recognition is based on models from OpenAI Whisper https://github.com/openai/whisper - Speaker diarization model and pipeline from by https://github.com/pyannote/pyannote-audio - """ - - time_start = time.time() - - # Convert video to audio - audio_filepath = extract_audio_from_movie(video_filepath,".wav") - - # Load whisper - model = whisper.load_model(whisper_model) - - # Get duration - with contextlib.closing(wave.open(audio_filepath,'r')) as f: - frames = f.getnframes() - rate = f.getframerate() - duration = frames / float(rate) - print(f"conversion to wav ready, duration of audio file: {duration}") - - # Transcribe audio - options = dict(language=selected_source_lang, beam_size=5, best_of=5) - transcribe_options = dict(task="transcribe", **options) - result = model.transcribe(audio_filepath, **transcribe_options) - segments = result["segments"] - text = result["text"].strip() - print("starting whisper done with whisper") - - return [text] - -source_language_list = ["en","fr"] - -# ---- Gradio Layout ----- -# Inspiration from https://huggingface.co/spaces/RASMUS/Whisper-youtube-crosslingual-subtitles -video_in = gr.Video(label="Video file", mirror_webcam=False) -youtube_url_in = gr.Textbox(label="Youtube url", lines=1, interactive=True) -selected_source_lang = gr.Dropdown(choices=source_language_list, type="value", value="en", label="Spoken language in video", interactive=True) -selected_whisper_model = gr.Dropdown(choices=whisper_models, type="value", value="tiny.en", label="Selected Whisper model", interactive=True) -# transcription_df = gr.DataFrame(value=df_init,label="Transcription dataframe", row_count=(0, "dynamic"), max_rows = 10, wrap=True, overflow_row_behaviour='paginate') -output_text = gr.Textbox(label = "Transcribed text",lines = 10) - -title = "BechdelAI - demo" -demo = gr.Blocks(title=title,live = True) -demo.encrypt = False - - -with demo: - with gr.Tab("BechdelAI - dialogue demo"): - gr.Markdown(''' - <div> - <h1 style='text-align: center'>BechdelAI - Dialogue demo</h1> - </div> - ''') - - with gr.Row(): - gr.Markdown('''# 🎥 Download Youtube video''') - - - with gr.Row(): - - with gr.Column(): - # gr.Markdown('''### You can test by following examples:''') - examples = gr.Examples(examples= - [ - "https://www.youtube.com/watch?v=FDFdroN7d0w", - "https://www.youtube.com/watch?v=b2f2Kqt_KcE", - "https://www.youtube.com/watch?v=ba5F8G778C0", - ], - label="Examples", inputs=[youtube_url_in]) - youtube_url_in.render() - download_youtube_btn = gr.Button("Download Youtube video") - download_youtube_btn.click(get_youtube, [youtube_url_in], [ - video_in]) - print(video_in) - - with gr.Column(): - video_in.render() - - with gr.Row(): - gr.Markdown('''# 🎙 Extract text from video''') - - with gr.Row(): - with gr.Column(): - selected_source_lang.render() - selected_whisper_model.render() - transcribe_btn = gr.Button("Transcribe audio and diarization") - transcribe_btn.click(speech_to_text, [video_in, selected_source_lang, selected_whisper_model], [output_text]) - with gr.Column(): - output_text.render() - -demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/DonDoesStuff/Free-GPT3.5/README.md b/spaces/DonDoesStuff/Free-GPT3.5/README.md deleted file mode 100644 index b0fd797d2ec012f68eea2090b69120c966f3bbda..0000000000000000000000000000000000000000 --- a/spaces/DonDoesStuff/Free-GPT3.5/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Free GPT3.5 -emoji: 🐠 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: 
false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/criteria/clip_loss.py b/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/criteria/clip_loss.py deleted file mode 100644 index 18176ee8eb0d992d69d5b951d7f36e2efa92a37b..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/criteria/clip_loss.py +++ /dev/null @@ -1,17 +0,0 @@ - -import torch -import clip - - -class CLIPLoss(torch.nn.Module): - - def __init__(self, opts): - super(CLIPLoss, self).__init__() - self.model, self.preprocess = clip.load("ViT-B/32", device="cuda") - self.upsample = torch.nn.Upsample(scale_factor=7) - self.avg_pool = torch.nn.AvgPool2d(kernel_size=opts.stylegan_size // 32) - - def forward(self, image, text): - image = self.avg_pool(self.upsample(image)) - similarity = 1 - self.model(image, text)[0] / 100 - return similarity \ No newline at end of file diff --git a/spaces/DragGan/DragGan/torch_utils/ops/upfirdn2d.h b/spaces/DragGan/DragGan/torch_utils/ops/upfirdn2d.h deleted file mode 100644 index 2793daf874492af01e8634a7863c036e17b6731f..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan/torch_utils/ops/upfirdn2d.h +++ /dev/null @@ -1,59 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include <cuda_runtime.h> - -//------------------------------------------------------------------------ -// CUDA kernel parameters. - -struct upfirdn2d_kernel_params -{ - const void* x; - const float* f; - void* y; - - int2 up; - int2 down; - int2 pad0; - int flip; - float gain; - - int4 inSize; // [width, height, channel, batch] - int4 inStride; - int2 filterSize; // [width, height] - int2 filterStride; - int4 outSize; // [width, height, channel, batch] - int4 outStride; - int sizeMinor; - int sizeMajor; - - int loopMinor; - int loopMajor; - int loopX; - int launchMinor; - int launchMajor; -}; - -//------------------------------------------------------------------------ -// CUDA kernel specialization. - -struct upfirdn2d_kernel_spec -{ - void* kernel; - int tileOutW; - int tileOutH; - int loopMinor; - int loopX; -}; - -//------------------------------------------------------------------------ -// CUDA kernel selection. 
- -template <class T> upfirdn2d_kernel_spec choose_upfirdn2d_kernel(const upfirdn2d_kernel_params& p); - -//------------------------------------------------------------------------ diff --git a/spaces/Dragneel/Recon/README.md b/spaces/Dragneel/Recon/README.md deleted file mode 100644 index 52ef1d9d1848ffa68dd20abd10a1181b48ec9dea..0000000000000000000000000000000000000000 --- a/spaces/Dragneel/Recon/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Recon -emoji: 🏆 -colorFrom: blue -colorTo: blue -sdk: streamlit -sdk_version: 1.27.2 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ECCV2022/bytetrack/tutorials/qdtrack/tracker_reid_motion.py b/spaces/ECCV2022/bytetrack/tutorials/qdtrack/tracker_reid_motion.py deleted file mode 100644 index 406a0a413fe5d5682497ea2bef6a1148a8650cb6..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/tutorials/qdtrack/tracker_reid_motion.py +++ /dev/null @@ -1,397 +0,0 @@ -import numpy as np -from collections import deque -import os -import os.path as osp -import copy -import torch -import torch.nn.functional as F - -from mot_online.kalman_filter import KalmanFilter -from mot_online.basetrack import BaseTrack, TrackState -from mot_online import matching - - - -class STrack(BaseTrack): - shared_kalman = KalmanFilter() - def __init__(self, tlwh, score, temp_feat, buffer_size=30): - - # wait activate - self._tlwh = np.asarray(tlwh, dtype=np.float) - self.kalman_filter = None - self.mean, self.covariance = None, None - self.is_activated = False - - self.score = score - self.tracklet_len = 0 - - self.smooth_feat = None - self.update_features(temp_feat) - self.features = deque([], maxlen=buffer_size) - self.alpha = 0.9 - - def update_features(self, feat): - feat /= np.linalg.norm(feat) - self.curr_feat = feat - if self.smooth_feat is None: - self.smooth_feat = feat - else: - self.smooth_feat = self.alpha * self.smooth_feat + (1 - self.alpha) * feat - self.features.append(feat) - self.smooth_feat /= np.linalg.norm(self.smooth_feat) - - def predict(self): - mean_state = self.mean.copy() - if self.state != TrackState.Tracked: - mean_state[7] = 0 - self.mean, self.covariance = self.kalman_filter.predict(mean_state, self.covariance) - - @staticmethod - def multi_predict(stracks): - if len(stracks) > 0: - multi_mean = np.asarray([st.mean.copy() for st in stracks]) - multi_covariance = np.asarray([st.covariance for st in stracks]) - for i, st in enumerate(stracks): - if st.state != TrackState.Tracked: - multi_mean[i][7] = 0 - multi_mean, multi_covariance = STrack.shared_kalman.multi_predict(multi_mean, multi_covariance) - for i, (mean, cov) in enumerate(zip(multi_mean, multi_covariance)): - stracks[i].mean = mean - stracks[i].covariance = cov - - def activate(self, kalman_filter, frame_id): - """Start a new tracklet""" - self.kalman_filter = kalman_filter - self.track_id = self.next_id() - self.mean, self.covariance = self.kalman_filter.initiate(self.tlwh_to_xyah(self._tlwh)) - - self.tracklet_len = 0 - self.state = TrackState.Tracked - if frame_id == 1: - self.is_activated = True - # self.is_activated = True - self.frame_id = frame_id - self.start_frame = frame_id - - def re_activate(self, new_track, frame_id, new_id=False): - self.mean, self.covariance = self.kalman_filter.update( - self.mean, self.covariance, self.tlwh_to_xyah(new_track.tlwh) - ) - - self.update_features(new_track.curr_feat) - self.tracklet_len = 0 - 
self.state = TrackState.Tracked - self.is_activated = True - self.frame_id = frame_id - if new_id: - self.track_id = self.next_id() - - def update(self, new_track, frame_id, update_feature=True): - """ - Update a matched track - :type new_track: STrack - :type frame_id: int - :type update_feature: bool - :return: - """ - self.frame_id = frame_id - self.tracklet_len += 1 - - new_tlwh = new_track.tlwh - self.mean, self.covariance = self.kalman_filter.update( - self.mean, self.covariance, self.tlwh_to_xyah(new_tlwh)) - self.state = TrackState.Tracked - self.is_activated = True - - self.score = new_track.score - if update_feature: - self.update_features(new_track.curr_feat) - - @property - # @jit(nopython=True) - def tlwh(self): - """Get current position in bounding box format `(top left x, top left y, - width, height)`. - """ - if self.mean is None: - return self._tlwh.copy() - ret = self.mean[:4].copy() - ret[2] *= ret[3] - ret[:2] -= ret[2:] / 2 - return ret - - @property - # @jit(nopython=True) - def tlbr(self): - """Convert bounding box to format `(min x, min y, max x, max y)`, i.e., - `(top left, bottom right)`. - """ - ret = self.tlwh.copy() - ret[2:] += ret[:2] - return ret - - @staticmethod - # @jit(nopython=True) - def tlwh_to_xyah(tlwh): - """Convert bounding box to format `(center x, center y, aspect ratio, - height)`, where the aspect ratio is `width / height`. - """ - ret = np.asarray(tlwh).copy() - ret[:2] += ret[2:] / 2 - ret[2] /= ret[3] - return ret - - def to_xyah(self): - return self.tlwh_to_xyah(self.tlwh) - - @staticmethod - # @jit(nopython=True) - def tlbr_to_tlwh(tlbr): - ret = np.asarray(tlbr).copy() - ret[2:] -= ret[:2] - return ret - - @staticmethod - # @jit(nopython=True) - def tlwh_to_tlbr(tlwh): - ret = np.asarray(tlwh).copy() - ret[2:] += ret[:2] - return ret - - def __repr__(self): - return 'OT_{}_({}-{})'.format(self.track_id, self.start_frame, self.end_frame) - - -class BYTETracker(object): - def __init__(self, frame_rate=30): - self.tracked_stracks = [] # type: list[STrack] - self.lost_stracks = [] # type: list[STrack] - self.removed_stracks = [] # type: list[STrack] - - self.frame_id = 0 - - self.low_thresh = 0.2 - self.track_thresh = 0.8 - self.det_thresh = self.track_thresh + 0.1 - - - self.buffer_size = int(frame_rate / 30.0 * 30) - self.max_time_lost = self.buffer_size - self.kalman_filter = KalmanFilter() - -# def update(self, output_results): - def update(self, det_bboxes, det_labels, frame_id, track_feats): - -# self.frame_id += 1 - self.frame_id = frame_id + 1 - activated_starcks = [] - refind_stracks = [] - lost_stracks = [] - removed_stracks = [] - -# scores = output_results[:, 4] -# bboxes = output_results[:, :4] # x1y1x2y2 - scores = det_bboxes[:, 4].cpu().numpy() - bboxes = det_bboxes[:, :4].cpu().numpy() - - track_feature = F.normalize(track_feats).cpu().numpy() - - remain_inds = scores > self.track_thresh - dets = bboxes[remain_inds] - scores_keep = scores[remain_inds] - id_feature = track_feature[remain_inds] - - - inds_low = scores > self.low_thresh - inds_high = scores < self.track_thresh - inds_second = np.logical_and(inds_low, inds_high) - dets_second = bboxes[inds_second] - scores_second = scores[inds_second] - id_feature_second = track_feature[inds_second] - - if len(dets) > 0: - '''Detections''' - detections = [STrack(STrack.tlbr_to_tlwh(tlbr), s, f) for - (tlbr, s, f) in zip(dets, scores_keep, id_feature)] - else: - detections = [] - - - ''' Add newly detected tracklets to tracked_stracks''' - unconfirmed = [] - tracked_stracks = [] # 
type: list[STrack] - for track in self.tracked_stracks: - if not track.is_activated: - unconfirmed.append(track) - else: - tracked_stracks.append(track) - - ''' Step 2: First association, with Kalman and IOU''' - strack_pool = joint_stracks(tracked_stracks, self.lost_stracks) - # Predict the current location with KF - STrack.multi_predict(strack_pool) - - dists = matching.embedding_distance(strack_pool, detections) - dists = matching.fuse_motion(self.kalman_filter, dists, strack_pool, detections) - matches, u_track, u_detection = matching.linear_assignment(dists, thresh=0.6) -# dists = matching.iou_distance(strack_pool, detections) -# matches, u_track, u_detection = matching.linear_assignment(dists, thresh=0.8) - - for itracked, idet in matches: - track = strack_pool[itracked] - det = detections[idet] - if track.state == TrackState.Tracked: - track.update(detections[idet], self.frame_id) - activated_starcks.append(track) - else: - track.re_activate(det, self.frame_id, new_id=False) - refind_stracks.append(track) - - ''' Step 3: Second association, with IOU''' - detections = [detections[i] for i in u_detection] - r_tracked_stracks = [strack_pool[i] for i in u_track if strack_pool[i].state == TrackState.Tracked] - dists = matching.iou_distance(r_tracked_stracks, detections) - matches, u_track, u_detection = matching.linear_assignment(dists, thresh=0.5) - - for itracked, idet in matches: - track = r_tracked_stracks[itracked] - det = detections[idet] - if track.state == TrackState.Tracked: - track.update(det, self.frame_id) - activated_starcks.append(track) - else: - track.re_activate(det, self.frame_id, new_id=False) - refind_stracks.append(track) - - - ''' Step 3.5: Second association, with IOU''' - # association the untrack to the low score detections - if len(dets_second) > 0: - '''Detections''' - detections_second = [STrack(STrack.tlbr_to_tlwh(tlbr), s, f) for - (tlbr, s, f) in zip(dets_second, scores_second, id_feature_second)] - else: - detections_second = [] - - second_tracked_stracks = [r_tracked_stracks[i] for i in u_track if r_tracked_stracks[i].state == TrackState.Tracked] - dists = matching.iou_distance(second_tracked_stracks, detections_second) - matches, u_track, u_detection_second = matching.linear_assignment(dists, thresh=0.5) - for itracked, idet in matches: - track = second_tracked_stracks[itracked] - det = detections_second[idet] - if track.state == TrackState.Tracked: - track.update(det, self.frame_id) - activated_starcks.append(track) - else: - track.re_activate(det, self.frame_id, new_id=False) - refind_stracks.append(track) - - for it in u_track: - #track = r_tracked_stracks[it] - track = second_tracked_stracks[it] - if not track.state == TrackState.Lost: - track.mark_lost() - lost_stracks.append(track) - - '''Deal with unconfirmed tracks, usually tracks with only one beginning frame''' - detections = [detections[i] for i in u_detection] - dists = matching.iou_distance(unconfirmed, detections) - matches, u_unconfirmed, u_detection = matching.linear_assignment(dists, thresh=0.7) - for itracked, idet in matches: - unconfirmed[itracked].update(detections[idet], self.frame_id) - activated_starcks.append(unconfirmed[itracked]) - for it in u_unconfirmed: - track = unconfirmed[it] - track.mark_removed() - removed_stracks.append(track) - - """ Step 4: Init new stracks""" - for inew in u_detection: - track = detections[inew] - if track.score < self.det_thresh: - continue - track.activate(self.kalman_filter, self.frame_id) - activated_starcks.append(track) - """ Step 5: Update 
state""" - for track in self.lost_stracks: - if self.frame_id - track.end_frame > self.max_time_lost: - track.mark_removed() - removed_stracks.append(track) - - # print('Ramained match {} s'.format(t4-t3)) - - self.tracked_stracks = [t for t in self.tracked_stracks if t.state == TrackState.Tracked] - self.tracked_stracks = joint_stracks(self.tracked_stracks, activated_starcks) - self.tracked_stracks = joint_stracks(self.tracked_stracks, refind_stracks) - self.lost_stracks = sub_stracks(self.lost_stracks, self.tracked_stracks) - self.lost_stracks.extend(lost_stracks) - self.lost_stracks = sub_stracks(self.lost_stracks, self.removed_stracks) - self.removed_stracks.extend(removed_stracks) - self.tracked_stracks, self.lost_stracks = remove_duplicate_stracks(self.tracked_stracks, self.lost_stracks) - # get scores of lost tracks - output_stracks = [track for track in self.tracked_stracks if track.is_activated] - -# return output_stracks - - bboxes = [] - labels = [] - ids = [] - for track in output_stracks: - if track.is_activated: - track_bbox = track.tlbr - bboxes.append([track_bbox[0], track_bbox[1], track_bbox[2], track_bbox[3], track.score]) - labels.append(0) - ids.append(track.track_id) - return torch.tensor(bboxes), torch.tensor(labels), torch.tensor(ids) - -def joint_stracks(tlista, tlistb): - exists = {} - res = [] - for t in tlista: - exists[t.track_id] = 1 - res.append(t) - for t in tlistb: - tid = t.track_id - if not exists.get(tid, 0): - exists[tid] = 1 - res.append(t) - return res - - -def sub_stracks(tlista, tlistb): - stracks = {} - for t in tlista: - stracks[t.track_id] = t - for t in tlistb: - tid = t.track_id - if stracks.get(tid, 0): - del stracks[tid] - return list(stracks.values()) - - -def remove_duplicate_stracks(stracksa, stracksb): - pdist = matching.iou_distance(stracksa, stracksb) - pairs = np.where(pdist < 0.15) - dupa, dupb = list(), list() - for p, q in zip(*pairs): - timep = stracksa[p].frame_id - stracksa[p].start_frame - timeq = stracksb[q].frame_id - stracksb[q].start_frame - if timep > timeq: - dupb.append(q) - else: - dupa.append(p) - resa = [t for i, t in enumerate(stracksa) if not i in dupa] - resb = [t for i, t in enumerate(stracksb) if not i in dupb] - return resa, resb - - -def remove_fp_stracks(stracksa, n_frame=10): - remain = [] - for t in stracksa: - score_5 = t.score_list[-n_frame:] - score_5 = np.array(score_5, dtype=np.float32) - index = score_5 < 0.45 - num = np.sum(index) - if num < n_frame: - remain.append(t) - return remain diff --git a/spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/pixel_decoder/ops/src/vision.cpp b/spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/pixel_decoder/ops/src/vision.cpp deleted file mode 100644 index 4a08821e0121a77556aa7a263ec8ebfa928b13b6..0000000000000000000000000000000000000000 --- a/spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/pixel_decoder/ops/src/vision.cpp +++ /dev/null @@ -1,21 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. -* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -/*! -* Copyright (c) Facebook, Inc. and its affiliates. 
-* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR -*/ - -#include "ms_deform_attn.h" - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("ms_deform_attn_forward", &ms_deform_attn_forward, "ms_deform_attn_forward"); - m.def("ms_deform_attn_backward", &ms_deform_attn_backward, "ms_deform_attn_backward"); -} diff --git a/spaces/EuroPython2022/illustrated-lyrics-generator/layers.py b/spaces/EuroPython2022/illustrated-lyrics-generator/layers.py deleted file mode 100644 index 135f44e83c28293ef39d0cae684120c20d3b3c73..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/illustrated-lyrics-generator/layers.py +++ /dev/null @@ -1,273 +0,0 @@ -# Source: https://huggingface.co/huggan/fastgan-few-shot-fauvism-still-life -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn.modules.batchnorm import BatchNorm2d -from torch.nn.utils import spectral_norm - - -class SpectralConv2d(nn.Module): - - def __init__(self, *args, **kwargs): - super().__init__() - self._conv = spectral_norm( - nn.Conv2d(*args, **kwargs) - ) - - def forward(self, input: torch.Tensor) -> torch.Tensor: - return self._conv(input) - - -class SpectralConvTranspose2d(nn.Module): - - def __init__(self, *args, **kwargs): - super().__init__() - self._conv = spectral_norm( - nn.ConvTranspose2d(*args, **kwargs) - ) - - def forward(self, input: torch.Tensor) -> torch.Tensor: - return self._conv(input) - - -class Noise(nn.Module): - - def __init__(self): - super().__init__() - self._weight = nn.Parameter( - torch.zeros(1), - requires_grad=True, - ) - - def forward(self, input: torch.Tensor) -> torch.Tensor: - batch_size, _, height, width = input.shape - noise = torch.randn(batch_size, 1, height, width, device=input.device) - return self._weight * noise + input - - -class InitLayer(nn.Module): - - def __init__(self, in_channels: int, - out_channels: int): - super().__init__() - - self._layers = nn.Sequential( - SpectralConvTranspose2d( - in_channels=in_channels, - out_channels=out_channels * 2, - kernel_size=4, - stride=1, - padding=0, - bias=False, - ), - nn.BatchNorm2d(num_features=out_channels * 2), - nn.GLU(dim=1), - ) - - def forward(self, input: torch.Tensor) -> torch.Tensor: - return self._layers(input) - - -class SLEBlock(nn.Module): - - def __init__(self, in_channels: int, - out_channels: int): - super().__init__() - - self._layers = nn.Sequential( - nn.AdaptiveAvgPool2d(output_size=4), - SpectralConv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=4, - stride=1, - padding=0, - bias=False, - ), - nn.SiLU(), - SpectralConv2d( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=1, - stride=1, - padding=0, - bias=False, - ), - nn.Sigmoid(), - ) - - def forward(self, low_dim: torch.Tensor, - high_dim: torch.Tensor) -> torch.Tensor: - return high_dim * self._layers(low_dim) - - -class UpsampleBlockT1(nn.Module): - - def __init__(self, in_channels: int, - out_channels: int): - super().__init__() - - self._layers = nn.Sequential( - nn.Upsample(scale_factor=2, mode='nearest'), - SpectralConv2d( - in_channels=in_channels, - out_channels=out_channels * 2, - kernel_size=3, - stride=1, - padding='same', - bias=False, - ), - nn.BatchNorm2d(num_features=out_channels * 2), - nn.GLU(dim=1), - ) - - def forward(self, input: torch.Tensor) -> torch.Tensor: - return self._layers(input) - - -class UpsampleBlockT2(nn.Module): - - def __init__(self, in_channels: int, - out_channels: int): - super().__init__() - - self._layers = 
nn.Sequential( - nn.Upsample(scale_factor=2, mode='nearest'), - SpectralConv2d( - in_channels=in_channels, - out_channels=out_channels * 2, - kernel_size=3, - stride=1, - padding='same', - bias=False, - ), - Noise(), - BatchNorm2d(num_features=out_channels * 2), - nn.GLU(dim=1), - SpectralConv2d( - in_channels=out_channels, - out_channels=out_channels * 2, - kernel_size=3, - stride=1, - padding='same', - bias=False, - ), - Noise(), - nn.BatchNorm2d(num_features=out_channels * 2), - nn.GLU(dim=1), - ) - - def forward(self, input: torch.Tensor) -> torch.Tensor: - return self._layers(input) - - -class DownsampleBlockT1(nn.Module): - - def __init__(self, in_channels: int, - out_channels: int): - super().__init__() - - self._layers = nn.Sequential( - SpectralConv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=4, - stride=2, - padding=1, - bias=False, - ), - nn.BatchNorm2d(num_features=out_channels), - nn.LeakyReLU(negative_slope=0.2), - ) - - def forward(self, input: torch.Tensor) -> torch.Tensor: - return self._layers(input) - - -class DownsampleBlockT2(nn.Module): - - def __init__(self, in_channels: int, - out_channels: int): - super().__init__() - - self._layers_1 = nn.Sequential( - SpectralConv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=4, - stride=2, - padding=1, - bias=False, - ), - nn.BatchNorm2d(num_features=out_channels), - nn.LeakyReLU(negative_slope=0.2), - SpectralConv2d( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=3, - stride=1, - padding='same', - bias=False, - ), - nn.BatchNorm2d(num_features=out_channels), - nn.LeakyReLU(negative_slope=0.2), - ) - - self._layers_2 = nn.Sequential( - nn.AvgPool2d( - kernel_size=2, - stride=2, - ), - SpectralConv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=1, - stride=1, - padding=0, - bias=False, - ), - nn.BatchNorm2d(num_features=out_channels), - nn.LeakyReLU(negative_slope=0.2), - ) - - def forward(self, input: torch.Tensor) -> torch.Tensor: - t1 = self._layers_1(input) - t2 = self._layers_2(input) - return (t1 + t2) / 2 - - -class Decoder(nn.Module): - - def __init__(self, in_channels: int, - out_channels: int): - super().__init__() - - self._channels = { - 16: 128, - 32: 64, - 64: 64, - 128: 32, - 256: 16, - 512: 8, - 1024: 4, - } - - self._layers = nn.Sequential( - nn.AdaptiveAvgPool2d(output_size=8), - UpsampleBlockT1(in_channels=in_channels, out_channels=self._channels[16]), - UpsampleBlockT1(in_channels=self._channels[16], out_channels=self._channels[32]), - UpsampleBlockT1(in_channels=self._channels[32], out_channels=self._channels[64]), - UpsampleBlockT1(in_channels=self._channels[64], out_channels=self._channels[128]), - SpectralConv2d( - in_channels=self._channels[128], - out_channels=out_channels, - kernel_size=3, - stride=1, - padding='same', - bias=False, - ), - nn.Tanh(), - ) - - def forward(self, input: torch.Tensor) -> torch.Tensor: - return self._layers(input) diff --git a/spaces/FoxMeo/fire-detector/README.md b/spaces/FoxMeo/fire-detector/README.md deleted file mode 100644 index 9db9f851bd2e89d1064094b01b0389e31344d55f..0000000000000000000000000000000000000000 --- a/spaces/FoxMeo/fire-detector/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Fire Detector -emoji: 🐢 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git 
a/spaces/GT4SD/hf-transformers/utils.py b/spaces/GT4SD/hf-transformers/utils.py deleted file mode 100644 index 34a817843a959d6ef46ae527efd08788f55a3196..0000000000000000000000000000000000000000 --- a/spaces/GT4SD/hf-transformers/utils.py +++ /dev/null @@ -1,48 +0,0 @@ -import logging -from collections import defaultdict -from typing import List - -import mols2grid -import pandas as pd - -logger = logging.getLogger(__name__) -logger.addHandler(logging.NullHandler()) - - -def draw_grid_generate( - samples: List[str], - seeds: List[str] = [], - n_cols: int = 3, - size=(140, 200), -) -> str: - """ - Uses mols2grid to draw a HTML grid for the generated molecules - - Args: - samples: The generated samples. - n_cols: Number of columns in grid. Defaults to 5. - size: Size of molecule in grid. Defaults to (140, 200). - - Returns: - HTML to display - """ - - result = defaultdict(list) - result.update( - { - "SMILES": seeds + samples, - "Name": [f"Seed_{i}" for i in range(len(seeds))] - + [f"Generated_{i}" for i in range(len(samples))], - }, - ) - - result_df = pd.DataFrame(result) - obj = mols2grid.display( - result_df, - tooltip=list(result.keys()), - height=1100, - n_cols=n_cols, - name="Results", - size=size, - ) - return obj.data diff --git a/spaces/GaenKoki/voicevox/voicevox_engine/dev/core/mock.py b/spaces/GaenKoki/voicevox/voicevox_engine/dev/core/mock.py deleted file mode 100644 index 59eb63d7039b44a27c9e5e17120d83d41763c353..0000000000000000000000000000000000000000 --- a/spaces/GaenKoki/voicevox/voicevox_engine/dev/core/mock.py +++ /dev/null @@ -1,121 +0,0 @@ -import json -from logging import getLogger -from typing import Any, Dict, List - -import numpy as np -from pyopenjtalk import tts -from scipy.signal import resample - -DUMMY_TEXT = "これはダミーのテキストです" - - -def initialize(path: str, use_gpu: bool, *args: List[Any]) -> None: - pass - - -def yukarin_s_forward(length: int, **kwargs: Dict[str, Any]) -> np.ndarray: - logger = getLogger("uvicorn") # FastAPI / Uvicorn 内からの利用のため - logger.info( - "Sorry, yukarin_s_forward() is a mock. Return values are incorrect.", - ) - return np.ones(length) / 5 - - -def yukarin_sa_forward(length: int, **kwargs: Dict[str, Any]) -> np.ndarray: - logger = getLogger("uvicorn") # FastAPI / Uvicorn 内からの利用のため - logger.info( - "Sorry, yukarin_sa_forward() is a mock. Return values are incorrect.", - ) - return np.ones((1, length)) * 5 - - -def decode_forward(length: int, **kwargs: Dict[str, Any]) -> np.ndarray: - """ - 合成音声の波形データをNumPy配列で返します。ただし、常に固定の文言を読み上げます(DUMMY_TEXT) - 参照→SynthesisEngine のdocstring [Mock] - - Parameters - ---------- - length : int - フレームの長さ - - Returns - ------- - wave : np.ndarray - 音声合成した波形データ - - Note - ------- - ここで行う音声合成では、調声(ピッチ等)を反映しない - また、入力内容によらず常に固定の文言を読み上げる - - # pyopenjtalk.tts()の出力仕様 - dtype=np.float64, 16 bit, mono 48000 Hz - - # resampleの説明 - 非モックdecode_forwardと合わせるために、出力を24kHzに変換した。 - """ - logger = getLogger("uvicorn") # FastAPI / Uvicorn 内からの利用のため - logger.info( - "Sorry, decode_forward() is a mock. 
Return values are incorrect.", - ) - wave, sr = tts(DUMMY_TEXT) - wave = resample( - wave.astype("int16"), - 24000 * len(wave) // 48000, - ) - return wave - - -def metas() -> str: - return json.dumps( - [ - { - "name": "dummy1", - "styles": [ - {"name": "style0", "id": 0}, - {"name": "style1", "id": 2}, - {"name": "style2", "id": 4}, - {"name": "style3", "id": 6}, - ], - "speaker_uuid": "7ffcb7ce-00ec-4bdc-82cd-45a8889e43ff", - "version": "mock", - }, - { - "name": "dummy2", - "styles": [ - {"name": "style0", "id": 1}, - {"name": "style1", "id": 3}, - {"name": "style2", "id": 5}, - {"name": "style3", "id": 7}, - ], - "speaker_uuid": "388f246b-8c41-4ac1-8e2d-5d79f3ff56d9", - "version": "mock", - }, - { - "name": "dummy3", - "styles": [ - {"name": "style0", "id": 8}, - ], - "speaker_uuid": "35b2c544-660e-401e-b503-0e14c635303a", - "version": "mock", - }, - { - "name": "dummy4", - "styles": [ - {"name": "style0", "id": 9}, - ], - "speaker_uuid": "b1a81618-b27b-40d2-b0ea-27a9ad408c4b", - "version": "mock", - }, - ] - ) - - -def supported_devices() -> str: - return json.dumps( - { - "cpu": True, - "cuda": False, - } - ) diff --git a/spaces/GoldMan/img2prompt/README.md b/spaces/GoldMan/img2prompt/README.md deleted file mode 100644 index 175d093666664c8804f9292305a40d81290945c8..0000000000000000000000000000000000000000 --- a/spaces/GoldMan/img2prompt/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Img2prompt -emoji: 🔥 -colorFrom: red -colorTo: purple -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Gradio-Blocks/beat-interpolator/examples/models/mnist/__init__.py b/spaces/Gradio-Blocks/beat-interpolator/examples/models/mnist/__init__.py deleted file mode 100644 index dbc290f09d0840132a4cf43ff7b8ffa3c67a46ae..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/beat-interpolator/examples/models/mnist/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .model import create_mnist_inference as create diff --git a/spaces/Gradio-Blocks/protGPT2_gradioFold/README.md b/spaces/Gradio-Blocks/protGPT2_gradioFold/README.md deleted file mode 100644 index 201d060ce5450312c335214761b044fb978497e4..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/protGPT2_gradioFold/README.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -title: ProtGPT2_gradioFold -emoji: 🧬 -colorFrom: lighblue -colorTo: blue -sdk: gradio -sdk_version: 3.0.4 -app_file: app.py -pinned: false -license: mit ---- - -Let a 735 million parameter language model dream up new sequences and predict their structures using AlphaFold. - -Note that only a basic AlphaFold pipeline is used with no refinement using Amber and no MSA as input (single sequence mode). - -The code in `app.py` is licensed under MIT license, the AlphaFold code is licensed under Apache2 license by Deepmind, the AlphaFold parameters are available under CC BY 4.0 by Deepmind. protGPT2 by Ferruz et. al. is licensed under MIT. 
- -Used libraries: -- Huggingface transformers -- 3Dmol.js -- Tailwind CSS -- Gradio -- Torch and JAx -- ColabFold -- AlphaFold \ No newline at end of file diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/groie/mask_rcnn_r50_fpn_syncbn-backbone_r4_gcb_c3-c5_groie_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/groie/mask_rcnn_r50_fpn_syncbn-backbone_r4_gcb_c3-c5_groie_1x_coco.py deleted file mode 100644 index 852c5ca7c5c4ba04f6a5f7dd6dbaf6b2c357a2fa..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/groie/mask_rcnn_r50_fpn_syncbn-backbone_r4_gcb_c3-c5_groie_1x_coco.py +++ /dev/null @@ -1,45 +0,0 @@ -_base_ = '../gcnet/mask_rcnn_r50_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco.py' -# model settings -model = dict( - roi_head=dict( - bbox_roi_extractor=dict( - type='GenericRoIExtractor', - aggregation='sum', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=2), - out_channels=256, - featmap_strides=[4, 8, 16, 32], - pre_cfg=dict( - type='ConvModule', - in_channels=256, - out_channels=256, - kernel_size=5, - padding=2, - inplace=False, - ), - post_cfg=dict( - type='GeneralizedAttention', - in_channels=256, - spatial_range=-1, - num_heads=6, - attention_type='0100', - kv_stride=2)), - mask_roi_extractor=dict( - type='GenericRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=2), - out_channels=256, - featmap_strides=[4, 8, 16, 32], - pre_cfg=dict( - type='ConvModule', - in_channels=256, - out_channels=256, - kernel_size=5, - padding=2, - inplace=False, - ), - post_cfg=dict( - type='GeneralizedAttention', - in_channels=256, - spatial_range=-1, - num_heads=6, - attention_type='0100', - kv_stride=2)))) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/unet/deeplabv3_unet_s5-d16_128x128_40k_chase_db1.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/unet/deeplabv3_unet_s5-d16_128x128_40k_chase_db1.py deleted file mode 100644 index c706cf3548e311a7930e5b58299e05af30c43d98..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/unet/deeplabv3_unet_s5-d16_128x128_40k_chase_db1.py +++ /dev/null @@ -1,7 +0,0 @@ -_base_ = [ - '../_base_/models/deeplabv3_unet_s5-d16.py', - '../_base_/datasets/chase_db1.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_40k.py' -] -model = dict(test_cfg=dict(crop_size=(128, 128), stride=(85, 85))) -evaluation = dict(metric='mDice') diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/ops/wrappers.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/ops/wrappers.py deleted file mode 100644 index 0ed9a0cb8d7c0e0ec2748dd89c652756653cac78..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/ops/wrappers.py +++ /dev/null @@ -1,50 +0,0 @@ -import warnings - -import torch.nn as nn -import torch.nn.functional as F - - -def resize(input, - size=None, - scale_factor=None, - mode='nearest', - align_corners=None, - warning=True): - if warning: - if size is not None and align_corners: - input_h, input_w = tuple(int(x) for x in input.shape[2:]) - output_h, output_w = tuple(int(x) for x in size) - if output_h > input_h or output_w > output_h: - if ((output_h > 1 and output_w > 1 and input_h > 1 - and input_w > 1) and (output_h - 1) % (input_h - 1) - and (output_w - 1) % (input_w - 1)): - warnings.warn( - f'When align_corners={align_corners}, ' - 'the output would more aligned if ' - 
f'input size {(input_h, input_w)} is `x+1` and ' - f'out size {(output_h, output_w)} is `nx+1`') - return F.interpolate(input, size, scale_factor, mode, align_corners) - - -class Upsample(nn.Module): - - def __init__(self, - size=None, - scale_factor=None, - mode='nearest', - align_corners=None): - super(Upsample, self).__init__() - self.size = size - if isinstance(scale_factor, tuple): - self.scale_factor = tuple(float(factor) for factor in scale_factor) - else: - self.scale_factor = float(scale_factor) if scale_factor else None - self.mode = mode - self.align_corners = align_corners - - def forward(self, x): - if not self.size: - size = [int(t * self.scale_factor) for t in x.shape[-2:]] - else: - size = self.size - return resize(x, size, None, self.mode, self.align_corners) diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/utils/deadlock.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/utils/deadlock.py deleted file mode 100644 index 8abd1bbeea5909e664cf816c020bd7c37effdb66..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/utils/deadlock.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os -from queue import Queue, Empty -import signal -import sys -import threading -import traceback - -logger = logging.getLogger(__name__) - - -class DeadlockDetect: - def __init__(self, use: bool = False, timeout: float = 120.): - self.use = use - self.timeout = timeout - self._queue: Queue = Queue() - - def update(self, stage: str): - if self.use: - self._queue.put(stage) - - def __enter__(self): - if self.use: - self._thread = threading.Thread(target=self._detector_thread) - self._thread.start() - - def __exit__(self, exc_type, exc_val, exc_tb): - if self.use: - self._queue.put(None) - self._thread.join() - - def _detector_thread(self): - logger.debug("Deadlock detector started") - last_stage = "init" - while True: - try: - stage = self._queue.get(timeout=self.timeout) - except Empty: - break - if stage is None: - logger.debug("Exiting deadlock detector thread") - return - else: - last_stage = stage - logger.error("Deadlock detector timed out, last stage was %s", last_stage) - for th in threading.enumerate(): - print(th, file=sys.stderr) - traceback.print_stack(sys._current_frames()[th.ident]) - print(file=sys.stderr) - sys.stdout.flush() - sys.stderr.flush() - os.kill(os.getpid(), signal.SIGKILL) diff --git a/spaces/Grezz/generate_human_motion/VQ-Trans/dataset/dataset_tokenize.py b/spaces/Grezz/generate_human_motion/VQ-Trans/dataset/dataset_tokenize.py deleted file mode 100644 index 641a02a75f2cfaadea45851cad2a95b39bfa1eae..0000000000000000000000000000000000000000 --- a/spaces/Grezz/generate_human_motion/VQ-Trans/dataset/dataset_tokenize.py +++ /dev/null @@ -1,117 +0,0 @@ -import torch -from torch.utils import data -import numpy as np -from os.path import join as pjoin -import random -import codecs as cs -from tqdm import tqdm - - - -class VQMotionDataset(data.Dataset): - def __init__(self, dataset_name, feat_bias = 5, window_size = 64, unit_length = 8): - self.window_size = window_size - self.unit_length = unit_length - self.feat_bias = feat_bias - - self.dataset_name = dataset_name - min_motion_len = 40 if dataset_name =='t2m' else 24 - - if dataset_name == 't2m': - self.data_root = './dataset/HumanML3D' - self.motion_dir = 
pjoin(self.data_root, 'new_joint_vecs') - self.text_dir = pjoin(self.data_root, 'texts') - self.joints_num = 22 - radius = 4 - fps = 20 - self.max_motion_length = 196 - dim_pose = 263 - self.meta_dir = 'checkpoints/t2m/VQVAEV3_CB1024_CMT_H1024_NRES3/meta' - #kinematic_chain = paramUtil.t2m_kinematic_chain - elif dataset_name == 'kit': - self.data_root = './dataset/KIT-ML' - self.motion_dir = pjoin(self.data_root, 'new_joint_vecs') - self.text_dir = pjoin(self.data_root, 'texts') - self.joints_num = 21 - radius = 240 * 8 - fps = 12.5 - dim_pose = 251 - self.max_motion_length = 196 - self.meta_dir = 'checkpoints/kit/VQVAEV3_CB1024_CMT_H1024_NRES3/meta' - #kinematic_chain = paramUtil.kit_kinematic_chain - - joints_num = self.joints_num - - mean = np.load(pjoin(self.meta_dir, 'mean.npy')) - std = np.load(pjoin(self.meta_dir, 'std.npy')) - - split_file = pjoin(self.data_root, 'train.txt') - - data_dict = {} - id_list = [] - with cs.open(split_file, 'r') as f: - for line in f.readlines(): - id_list.append(line.strip()) - - new_name_list = [] - length_list = [] - for name in tqdm(id_list): - try: - motion = np.load(pjoin(self.motion_dir, name + '.npy')) - if (len(motion)) < min_motion_len or (len(motion) >= 200): - continue - - data_dict[name] = {'motion': motion, - 'length': len(motion), - 'name': name} - new_name_list.append(name) - length_list.append(len(motion)) - except: - # Some motion may not exist in KIT dataset - pass - - - self.mean = mean - self.std = std - self.length_arr = np.array(length_list) - self.data_dict = data_dict - self.name_list = new_name_list - - def inv_transform(self, data): - return data * self.std + self.mean - - def __len__(self): - return len(self.data_dict) - - def __getitem__(self, item): - name = self.name_list[item] - data = self.data_dict[name] - motion, m_length = data['motion'], data['length'] - - m_length = (m_length // self.unit_length) * self.unit_length - - idx = random.randint(0, len(motion) - m_length) - motion = motion[idx:idx+m_length] - - "Z Normalization" - motion = (motion - self.mean) / self.std - - return motion, name - -def DATALoader(dataset_name, - batch_size = 1, - num_workers = 8, unit_length = 4) : - - train_loader = torch.utils.data.DataLoader(VQMotionDataset(dataset_name, unit_length=unit_length), - batch_size, - shuffle=True, - num_workers=num_workers, - #collate_fn=collate_fn, - drop_last = True) - - return train_loader - -def cycle(iterable): - while True: - for x in iterable: - yield x diff --git a/spaces/Grezz/generate_human_motion/VQ-Trans/utils/losses.py b/spaces/Grezz/generate_human_motion/VQ-Trans/utils/losses.py deleted file mode 100644 index 1998161032731fc2c3edae701700679c00fd00d0..0000000000000000000000000000000000000000 --- a/spaces/Grezz/generate_human_motion/VQ-Trans/utils/losses.py +++ /dev/null @@ -1,30 +0,0 @@ -import torch -import torch.nn as nn - -class ReConsLoss(nn.Module): - def __init__(self, recons_loss, nb_joints): - super(ReConsLoss, self).__init__() - - if recons_loss == 'l1': - self.Loss = torch.nn.L1Loss() - elif recons_loss == 'l2' : - self.Loss = torch.nn.MSELoss() - elif recons_loss == 'l1_smooth' : - self.Loss = torch.nn.SmoothL1Loss() - - # 4 global motion associated to root - # 12 local motion (3 local xyz, 3 vel xyz, 6 rot6d) - # 3 global vel xyz - # 4 foot contact - self.nb_joints = nb_joints - self.motion_dim = (nb_joints - 1) * 12 + 4 + 3 + 4 - - def forward(self, motion_pred, motion_gt) : - loss = self.Loss(motion_pred[..., : self.motion_dim], motion_gt[..., :self.motion_dim]) - return loss - - 
def forward_vel(self, motion_pred, motion_gt) : - loss = self.Loss(motion_pred[..., 4 : (self.nb_joints - 1) * 3 + 4], motion_gt[..., 4 : (self.nb_joints - 1) * 3 + 4]) - return loss - - \ No newline at end of file diff --git a/spaces/Guying2/guying/Dockerfile b/spaces/Guying2/guying/Dockerfile deleted file mode 100644 index 535624113f3b520e4829240a48bd3652430de828..0000000000000000000000000000000000000000 --- a/spaces/Guying2/guying/Dockerfile +++ /dev/null @@ -1,23 +0,0 @@ -FROM openjdk:17-slim - -# 设置时区 -ENV TZ Asia/Shanghai - -# 设置工作目录 -WORKDIR /app - -# 复制文件到工作目录 -COPY bin /app/bin -COPY lib /app/lib -COPY txlib /app/txlib - -# 设置命令 -RUN chmod -R 777 /tmp -RUN chmod -R 777 /app -RUN sed 's/"key": ".*"/"key": "'"$KEY_VALUE"'"/' txlib/$TXLIB_VERSION/config.json > /app/txlib/$TXLIB_VERSION/config.json - -# 运行 -CMD bash bin/unidbg-fetch-qsign --basePath=txlib/$TXLIB_VERSION - -# 暴露端口 -EXPOSE 7860 \ No newline at end of file diff --git a/spaces/HaHaBill/LandShapes-Antarctica/TkTorchWindow.py b/spaces/HaHaBill/LandShapes-Antarctica/TkTorchWindow.py deleted file mode 100644 index fbe1ef1cc35c2590b0a5976254af3f146de4d9b3..0000000000000000000000000000000000000000 --- a/spaces/HaHaBill/LandShapes-Antarctica/TkTorchWindow.py +++ /dev/null @@ -1,208 +0,0 @@ -# Copyright 2020 Erik Härkönen. All rights reserved. -# This file is licensed to you under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. You may obtain a copy -# of the License at http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software distributed under -# the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR REPRESENTATIONS -# OF ANY KIND, either express or implied. See the License for the specific language -# governing permissions and limitations under the License. 
- -import tkinter as tk -import numpy as np -import time -from contextlib import contextmanager -import pycuda.driver -from pycuda.gl import graphics_map_flags -from glumpy import gloo, gl -from pyopengltk import OpenGLFrame -import torch -from torch.autograd import Variable - -# TkInter widget that can draw torch tensors directly from GPU memory - -@contextmanager -def cuda_activate(img): - """Context manager simplifying use of pycuda.gl.RegisteredImage""" - mapping = img.map() - yield mapping.array(0,0) - mapping.unmap() - -def create_shared_texture(w, h, c=4, - map_flags=graphics_map_flags.WRITE_DISCARD, - dtype=np.uint8): - """Create and return a Texture2D with gloo and pycuda views.""" - tex = np.zeros((h,w,c), dtype).view(gloo.Texture2D) - tex.activate() # force gloo to create on GPU - tex.deactivate() - cuda_buffer = pycuda.gl.RegisteredImage( - int(tex.handle), tex.target, map_flags) - return tex, cuda_buffer - -# Shape batch as square if possible -def get_grid_dims(B): - S = int(B**0.5 + 0.5) - while B % S != 0: - S -= 1 - return (B // S, S) - -def create_gl_texture(tensor_shape): - if len(tensor_shape) != 4: - raise RuntimeError('Please provide a tensor of shape NCHW') - - N, C, H, W = tensor_shape - - cols, rows = get_grid_dims(N) - tex, cuda_buffer = create_shared_texture(W*cols, H*rows, 4) - - return tex, cuda_buffer - -# Create window with OpenGL context -class TorchImageView(OpenGLFrame): - def __init__(self, root = None, show_fps=True, **kwargs): - self.root = root or tk.Tk() - self.width = kwargs.get('width', 512) - self.height = kwargs.get('height', 512) - self.show_fps = show_fps - self.pycuda_initialized = False - self.animate = 0 # disable internal main loop - OpenGLFrame.__init__(self, root, **kwargs) - - # Called by pyopengltk.BaseOpenGLFrame - # when the frame goes onto the screen - def initgl(self): - if not self.pycuda_initialized: - self.setup_gl(self.width, self.height) - self.pycuda_initialized = True - - """Initalize gl states when the frame is created""" - gl.glViewport(0, 0, self.width, self.height) - gl.glClearColor(0.0, 0.0, 0.0, 0.0) - self.dt_history = [1000/60] - self.t0 = time.time() - self.t_last = self.t0 - self.nframes = 0 - - def setup_gl(self, width, height): - # setup pycuda and torch - import pycuda.gl.autoinit - import pycuda.gl - - assert torch.cuda.is_available(), "PyTorch: CUDA is not available" - print('Using GPU {}'.format(torch.cuda.current_device())) - - # Create tensor to be shared between GL and CUDA - # Always overwritten so no sharing is necessary - dummy = torch.cuda.FloatTensor((1)) - dummy.uniform_() - dummy = Variable(dummy) - - # Create a buffer with pycuda and gloo views, using tensor created above - self.tex, self.cuda_buffer = create_gl_texture((1, 3, width, height)) - - # create a shader to program to draw to the screen - vertex = """ - uniform float scale; - attribute vec2 position; - attribute vec2 texcoord; - varying vec2 v_texcoord; - void main() - { - v_texcoord = texcoord; - gl_Position = vec4(scale*position, 0.0, 1.0); - } """ - fragment = """ - uniform sampler2D tex; - varying vec2 v_texcoord; - void main() - { - gl_FragColor = texture2D(tex, v_texcoord); - } """ - # Build the program and corresponding buffers (with 4 vertices) - self.screen = gloo.Program(vertex, fragment, count=4) - - # NDC coordinates: Texcoords: Vertex order, - # (-1, +1) (+1, +1) (0,0) (1,0) triangle strip: - # +-------+ +----+ 1----3 - # | NDC | | | | / | - # | SPACE | | | | / | - # +-------+ +----+ 2----4 - # (-1, -1) (+1, -1) (0,1) (1,1) - - 
# Upload data to GPU - self.screen['position'] = [(-1,+1), (-1,-1), (+1,+1), (+1,-1)] - self.screen['texcoord'] = [(0,0), (0,1), (1,0), (1,1)] - self.screen['scale'] = 1.0 - self.screen['tex'] = self.tex - - # Don't call directly, use update() instead - def redraw(self): - t_now = time.time() - dt = t_now - self.t_last - self.t_last = t_now - - self.dt_history = ([dt] + self.dt_history)[:50] - dt_mean = sum(self.dt_history) / len(self.dt_history) - - if self.show_fps and self.nframes % 60 == 0: - self.master.title('FPS: {:.0f}'.format(1 / dt_mean)) - - def draw(self, img): - assert len(img.shape) == 4, "Please provide an NCHW image tensor" - assert img.device.type == "cuda", "Please provide a CUDA tensor" - - if img.dtype.is_floating_point: - img = (255*img).byte() - - # Tile images - N, C, H, W = img.shape - - if N > 1: - cols, rows = get_grid_dims(N) - img = img.reshape(cols, rows, C, H, W) - img = img.permute(2, 1, 3, 0, 4) # [C, rows, H, cols, W] - img = img.reshape(1, C, rows*H, cols*W) - - tensor = img.squeeze().permute(1, 2, 0).data # CHW => HWC - if C == 3: - tensor = torch.cat((tensor, tensor[:,:,0:1]),2) # add the alpha channel - tensor[:,:,3] = 1 # set alpha - - tensor = tensor.contiguous() - - tex_h, tex_w, _ = self.tex.shape - tensor_h, tensor_w, _ = tensor.shape - - if (tex_h, tex_w) != (tensor_h, tensor_w): - print(f'Resizing texture to {tensor_w}*{tensor_h}') - self.tex, self.cuda_buffer = create_gl_texture((N, C, H, W)) # original shape - self.screen['tex'] = self.tex - - # copy from torch into buffer - assert self.tex.nbytes == tensor.numel()*tensor.element_size(), "Tensor and texture shape mismatch!" - with cuda_activate(self.cuda_buffer) as ary: - cpy = pycuda.driver.Memcpy2D() - cpy.set_src_device(tensor.data_ptr()) - cpy.set_dst_array(ary) - cpy.width_in_bytes = cpy.src_pitch = cpy.dst_pitch = self.tex.nbytes//tensor_h - cpy.height = tensor_h - cpy(aligned=False) - torch.cuda.synchronize() - - # draw to screen - self.screen.draw(gl.GL_TRIANGLE_STRIP) - - def update(self): - self.update_idletasks() - self.tkMakeCurrent() - self.redraw() - self.tkSwapBuffers() - -# USAGE: -# root = tk.Tk() -# iv = TorchImageView(root, width=512, height=512) -# iv.pack(fill='both', expand=True) -# while True: -# iv.draw(nchw_tensor) -# root.update() -# iv.update() \ No newline at end of file diff --git a/spaces/Harveenchadha/oiTrans/indic_nlp_library/README.md b/spaces/Harveenchadha/oiTrans/indic_nlp_library/README.md deleted file mode 100644 index 0b7f8a82798e3ee874f8f838a635f89290d3e47e..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/oiTrans/indic_nlp_library/README.md +++ /dev/null @@ -1,142 +0,0 @@ -# Indic NLP Library - -The goal of the Indic NLP Library is to build Python based libraries for common text processing and Natural Language Processing in Indian languages. Indian languages share a lot of similarity in terms of script, phonology, language syntax, etc. and this library is an attempt to provide a general solution to very commonly required toolsets for Indian language text. - -The library provides the following functionalities: - -- Text Normalization -- Script Information -- Word Tokenization and Detokenization -- Sentence Splitting -- Word Segmentation -- Syllabification -- Script Conversion -- Romanization -- Indicization -- Transliteration -- Translation - -The data resources required by the Indic NLP Library are hosted in a different repository. These resources are required for some modules. 
You can download from the [Indic NLP Resources](https://github.com/anoopkunchukuttan/indic_nlp_resources) project. - -**If you are interested in Indian language NLP resources, you should check the [Indic NLP Catalog](https://github.com/indicnlpweb/indicnlp_catalog) for pointers.** - -## Pre-requisites - -- Python 3.x - - (For Python 2.x version check the tag `PYTHON_2.7_FINAL_JAN_2019`. Not actively supporting Python 2.x anymore, but will try to maintain as much compatibility as possible) -- [Indic NLP Resources](https://github.com/anoopkunchukuttan/indic_nlp_resources) -- [Urduhack](https://github.com/urduhack/urduhack): Needed only if Urdu normalization is required. It has other dependencies like Tensorflow. -- Other dependencies are listed in setup.py - - -## Configuration - -- Installation from pip: - - `pip install indic-nlp-library` - -- If you want to use the project from the github repo, add the project to the Python Path: - - - Clone this repository - - Install dependencies: `pip install -r requirements.txt` - - Run: `export PYTHONPATH=$PYTHONPATH:<project base directory>` - -- In either case, export the path to the _Indic NLP Resources_ directory - - Run: `export INDIC_RESOURCES_PATH=<path to Indic NLP resources>` - -## Usage - -You can use the Python API to access all the features of the library. Many of the most common operations are also accessible via a unified commandline API. - -### Getting Started - -Check [this IPython Notebook](http://nbviewer.ipython.org/url/anoopkunchukuttan.github.io/indic_nlp_library/doc/indic_nlp_examples.ipynb) for examples to use the Python API. - - You can find the Python 2.x Notebook [here](http://nbviewer.ipython.org/url/anoopkunchukuttan.github.io/indic_nlp_library/doc/indic_nlp_examples_2_7.ipynb) - -### Documentation - -You can find detailed documentation [HERE](https://indic-nlp-library.readthedocs.io/en/latest) - -This documents the Python API as well as the commandline reference. - -## Citing - -If you use this library, please include the following citation: - -``` -@misc{kunchukuttan2020indicnlp, -author = "Anoop Kunchukuttan", -title = "{The IndicNLP Library}", -year = "2020", -howpublished={\url{https://github.com/anoopkunchukuttan/indic_nlp_library/blob/master/docs/indicnlp.pdf}} -} -``` -You can find the document [HERE](docs/indicnlp.pdf) - -## Website - -`http://anoopkunchukuttan.github.io/indic_nlp_library` - -## Author -Anoop Kunchukuttan ([anoop.kunchukuttan@gmail.com](anoop.kunchukuttan@gmail.com)) - -## Companies, Organizations, Projects using IndicNLP Library - -- [AI4Bharat-IndicNLPSuite](https://indicnlp.ai4bharat.org) -- [The Classical Language Toolkit](http://cltk.org) -- [Microsoft NLP Recipes](https://github.com/microsoft/nlp-recipes) -- [Facebook M2M-100](https://github.com/pytorch/fairseq/tree/master/examples/m2m_100) - -## Revision Log - - -0.81 : 26 May 2021 - - - Bug fix in version number extraction - -0.80 : 24 May 2021 - - - Improved sentence splitting - - Bug fixes - - Support for Urdu Normalizer - -0.71 : 03 Sep 2020 - - - Improved documentation - - Bug fixes - -0.7 : 02 Apr 2020: - - - Unified commandline - - Improved documentation - - Added setup.py - -0.6 : 16 Dec 2019: - - - New romanizer and indicizer - - Script Unifiers - - Improved script normalizers - - Added contrib directory for sample uses - - changed to MIT license - -0.5 : 03 Jun 2019: - - - Improved word tokenizer to handle dates and numbers. - - Added sentence splitter that can handle common prefixes/honorofics and uses some heuristics. 
- - Added detokenizer - - Added acronym transliterator that can convert English acronyms to Brahmi-derived scripts - -0.4 : 28 Jan 2019: Ported to Python 3, and lots of feature additions since last release; primarily around script information, script similarity and syllabification. - -0.3 : 21 Oct 2014: Supports morph-analysis between Indian languages - -0.2 : 13 Jun 2014: Supports transliteration between Indian languages and tokenization of Indian languages - -0.1 : 12 Mar 2014: Initial version. Supports text normalization. - -## LICENSE - -Indic NLP Library is released under the MIT license - - diff --git a/spaces/Harveenchadha/oiTrans/indic_nlp_library/setup.py b/spaces/Harveenchadha/oiTrans/indic_nlp_library/setup.py deleted file mode 100644 index 8b132dc2faeab6c863c6d5ecf04863b2191afdcb..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/oiTrans/indic_nlp_library/setup.py +++ /dev/null @@ -1,48 +0,0 @@ -import setuptools -from pkg_resources import parse_requirements -import pathlib -import os - -def write_version_py(): - with open(os.path.join("indicnlp", "version.txt")) as f: - version = f.read().strip() - - # write version info to fairseq/version.py - with open(os.path.join("indicnlp", "version.py"), "w") as f: - f.write('__version__ = "{}"\n'.format(version)) - return version - -with open("README.md", "r") as fh: - long_description = fh.read() - -version=write_version_py() - -setuptools.setup( - name="indic_nlp_library", # Replace with your own username - version=version, - author="Anoop Kunchukuttan", - author_email="anoop.kunchukuttan@gmail.com", - description="The goal of the Indic NLP Library is to build Python based libraries for common"\ - ' text processing and Natural Language Processing in Indian languages.', - long_description=long_description, - long_description_content_type="text/markdown", - url="https://github.com/anoopkunchukuttan/indic_nlp_library", - # project_urls={ - # "Bug Tracker": "https://bugs.example.com/HelloWorld/", - # "Documentation": "https://docs.example.com/HelloWorld/", - # "Source Code": "https://code.example.com/HelloWorld/", - # }, - packages=setuptools.find_packages(), - license='MIT', - classifiers=[ - "Programming Language :: Python :: 3", - "License :: OSI Approved :: MIT License", - "Operating System :: OS Independent", - ], - python_requires='>=3.5', - download_url='https://github.com/anoopkunchukuttan/indic_nlp_library/archive/master.zip', - install_requires=[ - str(requirement) for requirement - in parse_requirements(pathlib.Path('requirements.txt').open()) - ] -) diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/__init__.py b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/__init__.py deleted file mode 100644 index f2d88f3ae975848f6583c6a41ce0e90e5c505154..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/__init__.py +++ /dev/null @@ -1,86 +0,0 @@ -import pkgutil - -import gradio.components as components -import gradio.inputs as inputs -import gradio.outputs as outputs -import gradio.processing_utils -import gradio.templates -from gradio.blocks import Blocks -from gradio.components import ( - HTML, - JSON, - Audio, - Button, - Carousel, - Chatbot, - Checkbox, - Checkboxgroup, - CheckboxGroup, - ColorPicker, - DataFrame, - Dataframe, - Dataset, - Dropdown, - File, - Gallery, - Highlight, - Highlightedtext, - HighlightedText, - Image, - Interpretation, - Json, - Label, - LinePlot, - Markdown, - Model3D, - Number, 
- Plot, - Radio, - ScatterPlot, - Slider, - State, - StatusTracker, - Text, - Textbox, - TimeSeries, - Timeseries, - UploadButton, - Variable, - Video, - component, -) -from gradio.exceptions import Error -from gradio.flagging import ( - CSVLogger, - FlaggingCallback, - HuggingFaceDatasetJSONSaver, - HuggingFaceDatasetSaver, - SimpleCSVLogger, -) -from gradio.helpers import Progress -from gradio.helpers import create_examples as Examples -from gradio.helpers import make_waveform, skip, update -from gradio.interface import Interface, TabbedInterface, close_all -from gradio.ipython_ext import load_ipython_extension -from gradio.layouts import Accordion, Box, Column, Group, Row, Tab, TabItem, Tabs -from gradio.mix import Parallel, Series -from gradio.routes import Request, mount_gradio_app -from gradio.templates import ( - Files, - ImageMask, - ImagePaint, - List, - Matrix, - Mic, - Microphone, - Numpy, - Paint, - Pil, - PlayableVideo, - Sketchpad, - TextArea, - Webcam, -) - -current_pkg_version = pkgutil.get_data(__name__, "version.txt").decode("ascii").strip() -__version__ = current_pkg_version diff --git a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/data_measurements/npmi/__init__.py b/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/data_measurements/npmi/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/HugoLaurencon/text-data-filtering-2/stopwords.py b/spaces/HugoLaurencon/text-data-filtering-2/stopwords.py deleted file mode 100644 index c191a2b5cbd69ae38971b6bbc3991a1c6a35d9e3..0000000000000000000000000000000000000000 --- a/spaces/HugoLaurencon/text-data-filtering-2/stopwords.py +++ /dev/null @@ -1,7174 +0,0 @@ -# From https://github.com/6/stopwords-json -# From https://github.com/stopwords-iso/stopwords-iso for Urdu and Vietnamese - - -stopwords = { - "af": [ - "'n", - "aan", - "af", - "al", - "as", - "baie", - "by", - "daar", - "dag", - "dat", - "die", - "dit", - "een", - "ek", - "en", - "gaan", - "gesê", - "haar", - "het", - "hom", - "hulle", - "hy", - "in", - "is", - "jou", - "jy", - "kan", - "kom", - "ma", - "maar", - "met", - "my", - "na", - "nie", - "om", - "ons", - "op", - "saam", - "sal", - "se", - "sien", - "so", - "sy", - "te", - "toe", - "uit", - "van", - "vir", - "was", - "wat", - "ʼn", - ], - "ar": [ - "آنذاك", - "أبداً", - "أثناء", - "أسفل", - "أعلى", - "أغلب", - "أكثر", - "ألا", - "ألم", - "أم", - "أمام", - "أمس", - "أن", - "أنا", - "أنت", - "أنتم", - "أنتما", - "أنتن", - "أو", - "أولئك", - "أي", - "أيان", - "أياً", - "أية", - "أيضاً", - "أين", - "أينما", - "إبان", - "إثر", - "إثر ذلك", - "إذا", - "إزاء", - "إلا", - "إلا أن", - "إلى", - "إما", - "إن", - "إنما", - "إياك", - "إياكم", - "إياكما", - "إياكن", - "إيانا", - "إياه", - "إياها", - "إياهم", - "إياهما", - "إياهن", - "إياي", - "الآن", - "البتة", - "التي", - "الذي", - "الذين", - "اللائي", - "اللات", - "اللاتي", - "اللتان", - "اللتين", - "اللذان", - "اللذين", - "اللهم", - "اللوات", - "اللواتي", - "الليلة", - "اليوم", - "اي", - "بألا", - "بأن", - "بئس", - "بئست", - "باتجاه", - "بالأخص", - "بالأمس", - "بالتالي", - "بالذات", - "بالرغم من", - "بالضبط", - "بالطبع", - "بالفعل", - "بالقرب", - "بالكامل", - "بالنسبة ل", - "بتاتاً", - "بجانب", - "بحسب", - "بحوالي", - "بحيث", - "بذلك", - "برغم", - "برمته", - "بشتى", - "بصرف النظر عن", - "بضع", - "بضعة", - "بعد", - "بعدما", - "بعض", - "بغض الطرف عن", - "بغض النظر عن", - "بغية", - "بـ", - "بقرب", - "بل", - "بلا", - "بلى", - "بم", - "بما", - "بما أن", - "بمفرده", - "بمقتضى", - 
"بمنأى عن", - "بموجب", - "بين", - "بينما", - "تاماً", - "تباعاً", - "تبعاً", - "تجاه", - "تحت", - "تحديداً", - "تحسباً", - "تقريباً", - "تلك", - "تلو", - "تماماً", - "تمشياً", - "ثم", - "ثمة", - "جانب", - "جاهداً", - "جداً", - "جدياً", - "جراء", - "جل", - "جميع", - "جميعاً", - "جنوب", - "جنوبي", - "حتماً", - "حتمياً", - "حتى", - "حسب", - "حسبما", - "حوالي", - "حول", - "حيال", - "حيث", - "حيث أن", - "حيثما", - "حين", - "حينئذ", - "حيناً", - "حينذاك", - "حينما", - "خارج", - "ختاماً", - "خلال", - "خلف", - "دائماً", - "داخل", - "دوماً", - "دون", - "دونما", - "ذاك", - "ذلك", - "رغم", - "رغم أن", - "ريثما", - "زهاء", - "ساعة", - "سنة", - "سوف", - "سوى", - "سوياً", - "شتى", - "شرق", - "شريطة", - "شكراً", - "شمال", - "صبيحة", - "صوب", - "ضد", - "طالما", - "طبقاً", - "طواعية", - "طوعاً", - "طيلة", - "عادة", - "عام", - "عامة", - "عبر", - "عدا", - "عدة", - "عسى", - "عشية", - "عقب", - "علاوة على", - "علاوة على ذلك", - "على", - "على الرغم من", - "على حد قول", - "على غرار", - "على هذا", - "عما", - "عمن", - "عموماً", - "عن", - "عند", - "عندئذ", - "عندما", - "عنوة", - "عوضا عن", - "غالب", - "غالباً", - "غداة", - "غداً", - "غرب", - "غير", - "غير أن", - "ـك", - "ـكم", - "ـكما", - "ـكن", - "ـنا", - "ـه", - "ـها", - "ـهم", - "ـهما", - "ـهن", - "ـي", - "فجأة", - "فجر", - "فحسب", - "فصاعداً", - "فضلاً", - "فـ", - "فور", - "فوراً", - "فوق", - "في", - "في تلك الأثناء", - "في غضون ذلك", - "في هذه الأثناء", - "فيما", - "فيما يلي", - "قبالة", - "قبل", - "قبيل", - "قد", - "قدماً", - "قرابة", - "قرب", - "قسراً", - "قطعياً", - "قليلاً", - "كأن", - "كالمعتاد", - "كثيراً", - "كذا", - "كذلك", - "كـ", - "كل", - "كلا", - "كلتا", - "كلما", - "كم", - "كما", - "كما أن", - "كي", - "كيف", - "لأن", - "لئلا", - "لا", - "لا بأس أن", - "لا بد", - "لا سيما", - "لا لبس أن", - "لا مانع", - "لابد", - "لاحقاً", - "لاسيما", - "لحظة", - "لحوالي", - "لدى", - "لذا", - "لذلك", - "لعل", - "لـ", - "لقد", - "لكن", - "لكي", - "للتو", - "لم", - "لما", - "لماذا", - "لن", - "لو", - "لولا", - "ليت", - "ليلة", - "مؤخراً", - "مؤقتاً", - "ما", - "ماذا", - "مباشرة", - "متى", - "مثل", - "مثلاً", - "مثلما", - "مجاناً", - "مجدداً", - "مجرد", - "محض", - "مراراً", - "مساء", - "مطلقاً", - "مع", - "مع أن", - "مع ذلك", - "معاً", - "معظم", - "مما", - "مما زاد الطين بلة", - "مما يزيد الطين بلة", - "ممن", - "من", - "من الجدير بالذكر أن", - "من المؤسف", - "من المؤكد", - "من المؤمل", - "من المرجح", - "من المفترض", - "من الممكن", - "من ثم", - "من جهة أخرى", - "من غير المرجح", - "من غير الممكن", - "من ناحية أخرى", - "منذ", - "مهما", - "نادراً", - "ناهيك عن", - "نحن", - "نحو", - "نسبياً", - "نعم", - "نعمت", - "نفس", - "نهار", - "نهاراً", - "هؤلاء", - "هاتان", - "هاتين", - "هدراً", - "هذا", - "هذان", - "هذه", - "هذين", - "هكذا", - "هكذا دواليك", - "هل", - "هم", - "هما", - "هن", - "هنا", - "هناك", - "هنالك", - "هو", - "هي", - "و", - "وراء", - "وسط", - "وفق", - "وفقاً", - "وقت", - "وقتما", - "يا", - "يذكر أن", - "يوم", - "يوماً", - "يومياً", - ], - "as": [ - "অন্যথা", - "অৱশ্যে", - "আপোনাৰ", - "উদাহৰণস্বৰূপে", - "ওপৰলৈ", - "কম", - "কাৰণ", - "কিন্তু", - "কেতিয়াবা", - "কোনোবা", - "গতিকে", - "তললৈ", - "তাৰ সলনি", - "তাৰে ভিতৰত", - "তেওঁলোকৰ", - "তেতিয়া", - "তেনেকুৱাই", - "ফালে", - "বহুত", - "বাওঁফালে", - "বাহিৰত", - "ভিতৰত", - "মোৰ", - "যথেষ্ট", - "যাৰ", - "যি", - "যেতিয়ালৈকে", - "যেনে", - "লৈ", - "সকলোৱে", - "সোঁফালে", - "সৰ্বাধিক", - ], - "bn": [ - "অনেক", - "অনেক ", - "অন্য ", - "অন্যথায়", - "আমরা ", - "আমার ", - "আমি", - "আর জন্য ", - "আর, ও, এবং ", - "আরও সাথে , আরো সঙ্গে ", - "উদাহরণ স্বরূপ", - "উপর", - "এ ", - "এ, এটা, এইটা ", - "এখানে , এইখানে ", - "ও ,ওটা ,ওইটা", 
- "ওখানে, সেখানে ", - "ওদের মধ্যে ", - "কখন ", - "কখনও কখনও", - "কম, অল্প ", - "কারণ ", - "কি", - "কিছু ", - "কিন্তু ", - "কে ", - "কেউ", - "কেমন ", - "কোথায়", - "কোনটা ", - "ডান", - "তাই, সুতরাং", - "তার, তাদের, ওর, ওদের ", - "তারপর", - "তারা ", - "তুমি, আপনি ", - "তোমরা , আপনারা ", - "তোমার, তোর ", - "দিকে", - "না ", - "নিচে", - "পরিবর্তে , বরং ", - "পর্যন্ত", - "বাইরে", - "বাম", - "ভিতর", - "ভিতরে", - "মত", - "যতক্ষণ না", - "যথেষ্ট", - "যদি ", - "যাহার", - "যাহোক", - "সব, সবাই ", - "সবাই", - "সর্বাধিক", - "সামান্য", - "সে রকমই", - "সে, ও", - ], - "ca": [ - "-ho", - "-la", - "-lo", - "-ne", - "-se", - "a", - "abans", - "això", - "al", - "algun", - "alguna", - "algunes", - "alguns", - "algú", - "allò", - "als", - "altra", - "altre", - "altres", - "amb", - "aqueix", - "aqueixa", - "aqueixes", - "aqueixos", - "aquell", - "aquella", - "aquelles", - "aquells", - "aquest", - "aquesta", - "aquestes", - "aquestos", - "aquests", - "bastant", - "bastants", - "bé", - "cada", - "cadascun", - "cadascuna", - "cadascú", - "cap", - "cert", - "certa", - "certes", - "certs", - "com", - "con", - "contra", - "d", - "d'", - "da", - "damunt", - "darrere", - "davant", - "de", - "del", - "dels", - "des", - "dient", - "diferent", - "diferents", - "dins", - "dintre", - "dir", - "divers", - "diverses", - "diversos", - "durant", - "eixa", - "eixe", - "eixes", - "eixos", - "el", - "ell", - "ella", - "elles", - "ells", - "els", - "em", - "emperò", - "en", - "endavant", - "enfront", - "ens", - "entre", - "envers", - "era", - "eren", - "es", - "estan", - "estant", - "estar", - "estaran", - "estarem", - "estaria", - "estarien", - "estarà", - "estat", - "estava", - "estaven", - "este", - "estem", - "estes", - "esteu", - "estic", - "estiguem", - "estiguessin", - "estigui", - "estiguin", - "estigués", - "estos", - "està", - "et", - "ets", - "excepte", - "extra", - "fa", - "faci", - "facin", - "facis", - "faig", - "fan", - "faran", - "farem", - "fareu", - "faria", - "farien", - "faries", - "faràs", - "faràs", - "faré", - "faríem", - "faríeu", - "fas", - "feia", - "feien", - "feies", - "fem", - "fent", - "fer", - "fes", - "fessin", - "fessis", - "fet", - "feu", - "fins", - "foren", - "fos", - "fossin", - "fou", - "front", - "fèiem", - "fèieu", - "féssiu", - "gaire", - "gaires", - "gràcies", - "ha", - "hagi", - "hagin", - "haguem", - "haguessin", - "haguessis", - "hagut", - "hagués", - "haguéssim", - "haguéssin", - "haguéssiu", - "han", - "has", - "hauran", - "haurem", - "haureu", - "hauria", - "haurien", - "hauries", - "haurà", - "hauràs", - "hauré", - "hauríem", - "hauríeu", - "havent", - "haver", - "havia", - "havien", - "havies", - "havíem", - "havíeu", - "he", - "hem", - "heu", - "hi", - "ho", - "hom", - "hàgim", - "i", - "in", - "jo", - "l", - "l", - "l'", - "la", - "las", - "les", - "li", - "llur", - "llurs", - "lo", - "los", - "ls", - "m", - "m", - "m'", - "malgrat", - "mancant", - "massa", - "mateix", - "mateixa", - "mateixes", - "mateixos", - "me", - "mentre", - "menys", - "mes", - "meu", - "meus", - "meva", - "meves", - "mi", - "mitjançant", - "molt", - "molta", - "moltes", - "molts", - "moltíssim", - "moltíssima", - "moltíssimes", - "moltíssims", - "n", - "n'", - "ne", - "ni", - "ningun", - "ninguna", - "ningunes", - "ninguns", - "ningú", - "no", - "nombroses", - "nombrós", - "nos", - "nosaltres", - "nostra", - "nostre", - "nostres", - "ns", - "o", - "on", - "os", - "pel", - "pels", - "per", - "perqu", - "perquè", - "però", - "poc", - "poca", - "pocs", - "poques", - "prou", - "qual", - "quals", - "qualsevol", - 
"quan", - "quant", - "quantes", - "quants", - "que", - "quelcom", - "qui", - "quin", - "quina", - "quines", - "quins", - "què", - "rere", - "respecte", - "s", - "s", - "s'", - "sa", - "sabent", - "salvant", - "se", - "segons", - "sens", - "sense", - "sent", - "ser", - "seran", - "serem", - "seria", - "serien", - "serà", - "seré", - "seríem", - "ses", - "seu", - "seus", - "seva", - "seves", - "si", - "siguem", - "sigui", - "siguin", - "sigut", - "sinó", - "sobre", - "som", - "sota", - "su", - "suficient", - "séssim", - "sóc", - "són", - "t", - "t'", - "tal", - "tals", - "tant", - "tanta", - "tantes", - "tants", - "te", - "tenc", - "tendran", - "tendrem", - "tendreu", - "tendria", - "tendrien", - "tendries", - "tendràs", - "tendràs", - "tendré", - "tendríem", - "tendríeu", - "tenen", - "tenia", - "tenien", - "tenies teníem", - "tenim", - "tenir", - "teniu", - "tens", - "teníeu", - "teu", - "teus", - "teva", - "ti", - "tinc", - "tindran", - "tindre", - "tindrem", - "tindreu", - "tindria", - "tindrien", - "tindries", - "tindràs", - "tindràs", - "tindré", - "tindríem", - "tindríeu", - "tingut", - "tot", - "tota", - "total", - "totes", - "tothom", - "tots", - "tu", - "té", - "u", - "ultra", - "un", - "una", - "unes", - "uns", - "us", - "va", - "vagi", - "vagin", - "vaig", - "vam", - "van", - "varen", - "vau", - "vers", - "versus", - "via", - "vora", - "vos", - "vosaltres", - "vostre", - "vostè", - "vostès", - "vàrem", - "y", - "érem", - "és", - ], - "en": [ - "a", - "a.k.a", - "aboard", - "about", - "above", - "abt", - "accord", - "according", - "across", - "after", - "against", - "ago", - "aground", - "ahead", - "aka", - "ala", - "albeit", - "all", - "along", - "alongside", - "although", - "am", - "amid", - "amidst", - "among", - "amongst", - "amoung", - "an", - "and", - "and/or", - "another", - "any", - "any1", - "anybody", - "anyone", - "anything", - "are", - "around", - "as", - "aside", - "astride", - "at", - "atop", - "away", - "b", - "b/c", - "b/t", - "back", - "base", - "based", - "bc", - "be", - "because", - "been", - "before", - "behind", - "being", - "below", - "beneath", - "beside", - "besides", - "between", - "beyond", - "board", - "both", - "btwn", - "but", - "by", - "can", - "cause", - "circa", - "cos", - "could", - "coz", - "cus", - "depend", - "depending", - "despite", - "did", - "do", - "does", - "down", - "due", - "during", - "each", - "either", - "else", - "even", - "ever", - "every", - "everybody", - "everyone", - "everything", - "except", - "for", - "forth", - "from", - "get", - "gets", - "getting", - "give", - "given", - "got", - "had", - "half", - "has", - "hav", - "have", - "having", - "he", - "her", - "hers", - "herself", - "him", - "himself", - "his", - "how", - "however", - "i", - "i'd", - "if", - "in", - "include", - "including", - "inside", - "instead", - "into", - "is", - "it", - "it's", - "its", - "itself", - "lest", - "like", - "made", - "many", - "may", - "me", - "might", - "mine", - "minus", - "most", - "much", - "must", - "my", - "myself", - "nary", - "near", - "nearby", - "neither", - "next", - "nigh", - "no", - "nobody", - "none", - "noone", - "nor", - "not", - "nothing", - "notwithstanding", - "of", - "off", - "on", - "onboard", - "once", - "one", - "ones", - "oneself", - "only", - "onto", - "opposite", - "or", - "other", - "others", - "ought", - "our", - "ours", - "ourselves", - "out", - "outside", - "over", - "overt", - "own", - "past", - "per", - "plus", - "prior", - "quite", - "rather", - "re", - "regard", - "regarding", - "regardless", - "round", - 
"s/he", - "save", - "self", - "shall", - "she", - "should", - "side", - "since", - "so", - "some", - "somebody", - "someone", - "something", - "such", - "sure", - "teh", - "than", - "thanks", - "that", - "the", - "their", - "theirs", - "them", - "themselves", - "then", - "there", - "these", - "they", - "they're", - "thier", - "this", - "tho", - "those", - "thou", - "though", - "through", - "throughout", - "thru", - "thy", - "til", - "till", - "to", - "together", - "too", - "toward", - "towards", - "u", - "under", - "underneath", - "unless", - "unlike", - "until", - "unto", - "up", - "upon", - "ur", - "us", - "use", - "versus", - "via", - "vs", - "vs.", - "w/", - "w/o", - "w/out", - "was", - "we", - "were", - "what", - "whatever", - "whatnot", - "when", - "whenever", - "where", - "whereas", - "wherever", - "whether", - "which", - "while", - "whilst", - "whither", - "who", - "who's", - "whoever", - "whom", - "whomever", - "whose", - "why", - "will", - "with", - "within", - "without", - "wo", - "worth", - "would", - "wud", - "y'all", - "ya", - "yet", - "yo", - "you", - "you're", - "your", - "youre", - "yours", - "yourself", - "yourselves", - ], - "es": [ - "a", - "a fin de que", - "a medida que", - "a menos que", - "a modo de", - "a no ser que", - "a poco que", - "a que", - "abandono", - "acerca", - "acostumbra", - "adónde", - "ahora", - "al igual que", - "al lado de", - "algo", - "alguien", - "alguna", - "algunas", - "alguno", - "algunos", - "algún", - "alrededor", - "ambas", - "ambos", - "ante", - "aparece", - "aparecen", - "apareció", - "aparte", - "apenas", - "aquel", - "aquella", - "aquellas", - "aquello", - "aquellos", - "aquesa", - "aquesas", - "aquesos", - "aquesta", - "aquestas", - "aquesto", - "aquestos", - "aquél", - "aquélla", - "aquéllas", - "aquéllos", - "arrepentir", - "arrepentiréis", - "así", - "así como", - "así que", - "atlético", - "aun", - "aunque", - "aún", - "bajo", - "bastante", - "bastantes", - "bien", - "cada", - "casi", - "cerca", - "chance", - "cierta", - "ciertas", - "cierto", - "ciertos", - "comenzado", - "comenzó", - "comienzan", - "como", - "como quiera que", - "como si", - "con", - "con tal de", - "con tal que", - "conforme", - "conmigo", - "conque", - "considera", - "consideradas", - "consideran", - "consideró", - "consigo", - "contendrán", - "contigo", - "continuaba", - "continuar", - "continuaron", - "continuase", - "continuó", - "continúa", - "contra", - "corresponden", - "corresponder", - "cual", - "cual si", - "cuales", - "cualesquier", - "cualesquiera", - "cualquier", - "cualquiera", - "cuan", - "cuando", - "cuanta", - "cuantas", - "cuanto", - "cuanto quiera que", - "cuantos", - "cuya", - "cuyas", - "cuyo", - "cuyos", - "cuàles", - "cuál", - "cuáles", - "cuán", - "cuándo", - "cuánta", - "cuántas", - "cuánto", - "cuántos", - "cómo", - "da", - "dado que", - "dar", - "de", - "de manera que", - "de modo que", - "deba", - "debajo", - "deban", - "debas", - "debe", - "debemos", - "deben", - "deber", - "deberá", - "deberán", - "debería", - "deberíamos", - "deberían", - "debes", - "debido", - "debiera", - "debieron", - "debimos", - "debió", - "debo", - "debía", - "debíamos", - "debían", - "declaraba", - "declarada", - "declarado", - "declarase", - "declaro", - "declaró", - "dejaban", - "dejado", - "dejan", - "dejará", - "del", - "delante", - "demasiada", - "demasiadas", - "demasiado", - "demasiados", - "demás", - "den", - "dentro", - "dentro_de", - "des", - "desde", - "después", - "detrás", - "di", - "dicha", - "dichas", - "dicho", - "dichos", - "diferente", - 
"diferentes", - "distintas", - "distinto", - "distintos", - "diversas", - "diverso", - "diversos", - "don", - "donde", - "dos", - "durante", - "dónde", - "echar", - "el", - "el que", - "ella", - "ellas", - "ello", - "ellos", - "en", - "en cambio", - "en caso de", - "en la medida en que", - "en tanto que", - "encima", - "enfrente", - "entonces", - "entre", - "era", - "eramos", - "eran", - "eras", - "eres", - "ergo", - "es", - "esa", - "esas", - "escasa", - "escasas", - "escaso", - "escasos", - "escrito", - "ese", - "eso", - "eso que", - "esos", - "esotra", - "esotro", - "esta", - "estaba", - "estabais", - "estabamos", - "estaban", - "estabas", - "estado", - "estamos", - "estan", - "estando", - "estar", - "estaremos", - "estará", - "estarán", - "estaré", - "estaría", - "estaríamos", - "estarían", - "estarías", - "estas", - "este", - "estemos", - "esto", - "estos", - "estotra", - "estotro", - "estoy", - "estuve", - "estuviera", - "estuvieran", - "estuvieron", - "estuviese", - "estuviesen", - "estuvimos", - "estuvo", - "está", - "estábamos", - "estáis", - "están", - "estás", - "esté", - "estén", - "ex", - "excepto", - "frente", - "fue", - "fuera", - "fueran", - "fuere", - "fueron", - "fuese", - "fuesen", - "fui", - "fuimos", - "gracias", - "gracias_a", - "habeis", - "haber", - "haberle", - "haberse", - "habido", - "habiendo", - "habiéndo", - "habremos", - "habrá", - "habrán", - "habrás", - "habré", - "habría", - "habríamos", - "habrían", - "habéis", - "había", - "habíamos", - "habían", - "habías", - "hace", - "hacer", - "hacia", - "hacía", - "halla", - "han", - "has", - "hasta", - "hasta que", - "hay", - "haya", - "hayamos", - "hayan", - "hayas", - "he", - "hecho", - "hemos", - "hola", - "hubiera", - "hubieran", - "hubieron", - "hubiese", - "hubiesen", - "hubiéramos", - "hubo", - "iba", - "iban", - "ido", - "incluso", - "ir", - "irá", - "irán", - "iré", - "iría", - "junto a", - "la", - "las", - "le", - "lejos", - "les", - "lo", - "los", - "luego", - "mal que", - "mas", - "me", - "mediante", - "menos", - "mes", - "mi", - "mientras", - "mientras que", - "mis", - "misma", - "mismas", - "mismo", - "mismos", - "mismísimo", - "morir", - "moriría", - "mostrado", - "mostraron", - "mucha", - "muchas", - "muchisimas", - "muchisimio", - "muchisimo", - "mucho", - "muchos", - "muchísima", - "muchísimas", - "muchísimo", - "muchísimos", - "más", - "más bien", - "mí", - "mía", - "mías", - "mío", - "míos", - "nada", - "nadie", - "negar", - "ni", - "ni que", - "ningun", - "ninguna", - "ningunas", - "ninguno", - "ningunos", - "ningún", - "no", - "no obstante", - "noche", - "nombrado", - "nombró", - "nos", - "nosotros", - "nuestra", - "nuestras", - "nuestro", - "nuestros", - "o", - "os", - "otra", - "otras", - "otro", - "otros", - "pa", - "para", - "para que", - "parezca", - "partir", - "pasar", - "pero", - "po", - "poca", - "pocas", - "poco", - "pocos", - "podamos", - "podeis", - "podemos", - "poder", - "podes", - "podido", - "podras", - "podre", - "podremos", - "podriaís", - "podrá", - "podrán", - "podrás", - "podré", - "podréis", - "podría", - "podríamos", - "podrían", - "podéis", - "podía", - "podíamos", - "podían", - "poner", - "poquito", - "por", - "por el contrario", - "por ende", - "por eso", - "por lo que", - "por mucho que", - "por más que", - "por no hablar de", - "por si", - "porque", - "pos", - "post", - "pre", - "pro", - "propia", - "propias", - "propio", - "propios", - "pude", - "pudiendo", - "pudiera", - "pudieran", - "pudieras", - "pudieron", - "pudiese", - "pudiesen", - "pudimos", - "pudo", - 
"pueda", - "puedan", - "puedas", - "puede", - "pueden", - "puedes", - "puedo", - "pues", - "puesto", - "puesto que", - "que", - "queda", - "quedaba", - "quedan", - "quedó", - "queremos", - "querer", - "queriendo", - "quien", - "quienes", - "quienesquiera", - "quienquier", - "quienquiera", - "quiera", - "quiere", - "quisiera", - "quién", - "quiénes", - "qué", - "re", - "resulta", - "resultado", - "resultaría", - "resulte", - "sabe", - "saber", - "sabiendo", - "salen", - "salir", - "salió", - "salvo", - "se", - "sea", - "seamos", - "sean", - "seas", - "seguir", - "seguirá", - "seguía", - "según", - "semejante", - "semejantes", - "semi", - "sendas", - "sendo", - "sendos", - "ser", - "será", - "serán", - "serás", - "seré", - "seréis", - "sería", - "serían", - "serías", - "si", - "si bien", - "si y solo si", - "sido", - "siempre que", - "siendo", - "siente", - "siento", - "siga", - "sigamos", - "sigue", - "sin", - "sino", - "siquiera", - "sobre", - "sobrer", - "sobrir", - "soler", - "solían", - "somos", - "son", - "soy", - "sub", - "suele", - "suelen", - "suelo", - "super", - "supo", - "sur", - "sus", - "suya", - "suyas", - "suyo", - "suyos", - "sé", - "sí", - "tal", - "tales", - "tanta", - "tantas", - "tanto", - "tantos", - "tantísima", - "tantísimas", - "tantísimos", - "te", - "tendremos", - "tendrian", - "tendrá", - "tendrán", - "tendría", - "tendrían", - "tenemos", - "tener", - "tenga", - "tengan", - "tengo", - "tenia", - "tenido", - "teniendo", - "tenéis", - "tenía", - "teníamos", - "tenían", - "terminas", - "ti", - "tiene", - "tienen", - "tienes", - "toda", - "todas", - "todavía", - "todes", - "todo", - "todos", - "trabajado", - "trans", - "tras", - "tu", - "tus", - "tuve", - "tuviera", - "tuvieron", - "tuviese", - "tuvo", - "tuya", - "tuyas", - "tuyo", - "tuyos", - "tú", - "u", - "un", - "una", - "unas", - "une", - "unir", - "uno", - "unos", - "usted", - "ustedes", - "va", - "vamos", - "van", - "varias", - "varios", - "varía", - "vas", - "vaya", - "vayan", - "venir", - "venía", - "ver", - "vice", - "vieron", - "vino", - "vis a vis", - "visto que", - "volver", - "volverá", - "volveríamos", - "volvió", - "vos", - "vosotras", - "vosotros", - "voy", - "vuelva", - "vuelvan", - "vuelve", - "vuelven", - "vuestra", - "vuestras", - "vuestro", - "vuestros", - "vía", - "y", - "ya", - "ya que", - "yo", - "ámbos", - "él", - "éramos", - "ésa", - "ésas", - "ése", - "ésos", - "ésta", - "éstas", - "éste", - "ésto", - "éstos", - "íbamos", - "ó", - "ú", - "última", - "últimas", - "último", - "últimos", - "\ufeffdesde", - "\ufeffel", - "\ufeffen", - "\ufeffla", - "\ufefflas", - ], - "eu": [ - "*edin", - "*edun", - "*ezan", - "aitzitik", - "ala", - "alabaina", - "aldiz", - "alegia", - "alta", - "anitz", - "anitzek", - "anitzeko", - "anitzez", - "antzera", - "arabera", - "ari", - "ari_izan", - "ariko", - "arren", - "asko", - "askoan", - "askok", - "askoko", - "askorekin", - "askoren", - "askorengan", - "askorentzat", - "askori", - "askorik", - "askotako", - "askotan", - "askotariko", - "askotatik", - "askotaz", - "askotxo", - "askoz", - "at", - "aunitz", - "aurka", - "aurkako", - "aurretik", - "azpian", - "azpitik", - "ba", - "bada", - "badago", - "badezake", - "badidazu", - "badiezu", - "badio", - "badiogu", - "badiote", - "badiougu", - "badiozu", - "badira", - "badirela", - "baditu", - "baditugu", - "badituzte", - "badituzu", - "badu", - "badugu", - "badugun", - "badut", - "badute", - "baduzu", - "bagara", - "bagatzaizkio", - "bagenu", - "baginen", - "bai", - "baietz", - "baikaituzte", - "bailegoen", - 
"bailituen", - "bailitzake", - "bailitzateke", - "baina", - "bainan", - "bainintzen", - "bainizkion", - "baino", - "baita", - "baitabil", - "baitaiteke", - "baitan", - "baitaude", - "baitiete", - "baitigu", - "baitio", - "baitiote", - "baitira", - "baititu", - "baititugu", - "baitituzte", - "baitituzu", - "baititzaket", - "baitizkio", - "baitu", - "baitugu", - "baitute", - "baituzu", - "baitzaio", - "baitzaizkio", - "baitzara", - "baitzegoen", - "baitzen", - "baitzeuden", - "baitzien", - "baitzion", - "baitzioten", - "baitziren", - "baitzitekeen", - "baitzituen", - "baitzitzaion", - "baitzuen", - "baitzuten", - "baizik", - "baizituen", - "baldin", - "balego", - "balira", - "baliteke", - "balitu", - "balituzkete", - "balitz", - "balitzait", - "balu", - "balute", - "banintz", - "banitu", - "banu", - "barik", - "barru", - "bat", - "batera", - "batera\x97", - "batere", - "batzu", - "batzuei", - "batzuek", - "batzuekin", - "batzuen", - "batzuengatik", - "batzuentzat", - "batzuetako", - "batzuetakoak", - "batzuetan", - "batzuetara", - "batzuetatik", - "batzuez", - "batzuk", - "batzutako", - "batzutan", - "bazaigu", - "bazaizu", - "bazara", - "bazen", - "bazina", - "baziren", - "bazituen", - "bazituzten", - "bazuen", - "bazuten", - "bederen", - "behintzat", - "bera", - "beragatik", - "beraiei", - "beraiek", - "beraiekin", - "beraien", - "beraietaz", - "berak", - "berarekin", - "beraren", - "berarengan", - "berarengana", - "berarengandik", - "berarengatik", - "berarentzat", - "berari", - "berauek", - "berauen", - "berauetan", - "beraz", - "berbera", - "berberagatik", - "berberak", - "berberarekin", - "berberaren", - "berberera", - "bere", - "berea", - "bereak", - "berean", - "berek", - "bereko", - "berekoa", - "berekoak", - "beren", - "beretan", - "beretik", - "beretzat", - "berriz", - "bertze", - "bertzeekin", - "bertzela", - "bestalde", - "bestaldean", - "beste", - "bestea", - "besteak", - "bestean", - "bestearekiko", - "bestearekin", - "bestearen", - "bestearengandik", - "besteari", - "besteaz", - "besteei", - "besteen", - "besteengandik", - "besteetan", - "besteko", - "bestekoa", - "bestela", - "bestera", - "besterantz", - "besterik", - "bestetan", - "bestetik", - "bezala", - "bezalako", - "bezalakoa", - "bezalakoen", - "bidez", - "bitartean", - "bitarteko", - "bitarterako", - "bitartez", - "da", - "dabil", - "dabiltza", - "dadila", - "dadin", - "dago", - "dagoela", - "dagoelako", - "dagoen", - "dagoena", - "dagoenaren", - "dagoenean", - "dagoenez", - "daiteekenaren", - "daiteke", - "daitekeela", - "daitekeen", - "daitekeena", - "daitekeenaren", - "daitekeenez", - "daiteken", - "daitezela", - "daitezen", - "daitezke", - "daitezkeelako", - "daitezkeelarik", - "daitezkeen", - "daitezkeenak", - "daitezkela", - "dakizuke", - "danok", - "daude", - "daudela", - "daudelako", - "dauden", - "daudenak", - "daudenek", - "daudenen", - "daudenik", - "dautzuet", - "dela", - "delako", - "delarik", - "den", - "dena", - "denak", - "denaren", - "denarentzat", - "denari", - "denean", - "denek", - "denen", - "denera", - "denerako", - "denetan", - "denetarik", - "denetik", - "denez", - "denik", - "denok", - "denon", - "denona", - "denontzat", - "deus", - "dexente", - "dezadan", - "dezagun", - "dezake", - "dezakedala", - "dezakedan", - "dezakedanean", - "dezakeela", - "dezakeen", - "dezakeena", - "dezakegu", - "dezakegula", - "dezakegun", - "dezakela", - "dezakelako", - "dezaket", - "dezakete", - "dezaketela", - "dezaketen", - "dezakezu", - "dezakezuen", - "dezakezuenez", - "dezakezunez", - "dezala", - "dezan", - 
"dezaten", - "dezente", - "dezenterekin", - "dezentetan", - "diat", - "didala", - "didana", - "didate", - "didazue", - "die", - "diegu", - "diegun", - "diela", - "dien", - "dienak", - "diet", - "diete", - "dietela", - "dietelako", - "dietenean", - "diezaiekete", - "diezaiokeena", - "diezaiokete", - "diezaiola", - "diezaioten", - "diezaizkioke", - "diezazkioke", - "diezazkiokeen", - "digu", - "digun", - "digute", - "digutela", - "diguten", - "digutenean", - "diguzu", - "dik", - "din", - "dinat", - "dio", - "diogu", - "diogulako", - "diogun", - "diola", - "dion", - "diona", - "dionean", - "dionez", - "diot", - "diote", - "diotela", - "dioten", - "diotena", - "diotenak", - "diotenek", - "diozu", - "dira", - "direla", - "direlako", - "direlakoan", - "direlakotz", - "diren", - "direnak", - "direnean", - "direnek", - "direnen", - "direnetan", - "direnez", - "direnik", - "dit", - "ditake", - "ditazke", - "ditin", - "ditu", - "ditudala", - "ditudalako", - "ditudan", - "ditudanean", - "dituela", - "dituelako", - "dituelarik", - "dituen", - "dituena", - "dituenak", - "dituenean", - "ditugu", - "ditugula", - "ditugun", - "ditugunez", - "ditun", - "ditut", - "dituzte", - "dituztela", - "dituztelako", - "dituzten", - "dituztenak", - "dituztenean", - "dituztenek", - "dituztenekin", - "dituztenen", - "dituzu", - "dituzue", - "dituzuen", - "dituzula", - "dituzun", - "dituzunik", - "ditzagun", - "ditzake", - "ditzakeen", - "ditzakegu", - "ditzakegula", - "ditzakete", - "ditzaketela", - "ditzaketelako", - "ditzaketen", - "ditzakezu", - "ditzan", - "dizkidazu", - "dizkie", - "dizkien", - "dizkiet", - "dizkiete", - "dizkigu", - "dizkigula", - "dizkigunak", - "dizkigute", - "dizkio", - "dizkiola", - "dizkion", - "dizkiot", - "dizkiotela", - "dizkit", - "dizkizuet", - "dizkizugu", - "dizu", - "dizuet", - "dizugu", - "dizut", - "dizute", - "du", - "duan", - "dudala", - "dudalarik", - "dudan", - "dudanak", - "dudanarekin", - "dudanean", - "dudanik", - "duela", - "duelako", - "duelakoan", - "duen", - "duena", - "duenak", - "duenaren", - "duenarentzat", - "duenari", - "duenean", - "duenentz", - "duenez", - "duenik", - "dugu", - "dugula", - "dugulako", - "dugun", - "duguna", - "dugunari", - "dugunean", - "dugunez", - "dugunik", - "duk", - "dun", - "dunala", - "dut", - "dute", - "dutela", - "dutelako", - "dutelakoan", - "duten", - "dutena", - "dutenagatik", - "dutenak", - "dutenaren", - "dutenean", - "dutenek", - "duteneko", - "dutenen", - "dutenena", - "dutenenetatik", - "dutenentz", - "dutenetakoa", - "dutenetik", - "dutenez", - "duzu", - "duzue", - "duzuela", - "duzuen", - "duzuenean", - "duzuenez", - "duzula", - "duzun", - "duzunarekin", - "ea", - "edo", - "edonor", - "edota", - "edozein", - "edozeinek", - "edozer", - "edozertarako", - "elgarrekin", - "elgarri", - "elkar", - "elkarrekiko", - "elkarrekin", - "elkarren", - "elkarri", - "ene", - "era", - "ere", - "esker", - "eta", - "eurak", - "eurei", - "eurek", - "eurekin", - "euren", - "eurentzat", - "ez", - "ezan", - "ezazu", - "ezazue", - "ezean", - "ezein", - "ezen", - "ezer", - "ezerekin", - "ezerk", - "ezertarako", - "ezertaz", - "ezertxo", - "ezetz", - "ezik", - "ezta", - "gabe", - "gabeko", - "gainera", - "gainerakoan", - "gainerat", - "gainera\x97", - "gainetik", - "gaitezen", - "gaitezke", - "gaitezkeela", - "gaitu", - "gaituela", - "gaituzte", - "gaituztenak", - "gara", - "garela", - "garelako", - "garen", - "garenez", - "garenok", - "gaude", - "gaudenak", - "gehiago", - "gehiagoan", - "gehiagok", - "gehiagoko", - "gehiagorekin", - "gehiegi", - 
"gehiegirik", - "gehiegitxo", - "gehien", - "gehiena", - "gehienak", - "gehienek", - "gehienekin", - "gehienentzako", - "gehienentzat", - "gehienetako", - "gehienetan", - "gehienok", - "gehientsu", - "gehientsuen", - "gehitxo", - "gehixeago", - "genbiltzan", - "genezake", - "genien", - "genion", - "genituela", - "genituelako", - "genituen", - "genituzke", - "genituzkeelako", - "genizkion", - "genizuen", - "genizun", - "genuela", - "genuelako", - "genuen", - "genuenean", - "genuenetik", - "genuenez", - "genuke", - "genukeen", - "geratu", - "geratzen", - "geroztik", - "geu", - "geure", - "geuregan", - "geuri", - "ginela", - "ginen", - "ginenean", - "ginenekoa", - "gintezkeela", - "gintuen", - "gintuenagatik", - "gintunan", - "gintuzten", - "gintzaizkion", - "gu", - "guk", - "gure", - "gurean", - "gurekin", - "guretzat", - "guri", - "gutako", - "gutaz", - "guti", - "gutiz", - "gutiz-gehien", - "gutiz-gehienek", - "gutxi", - "gutxiago", - "gutxiagorako", - "gutxiagorekin", - "gutxian", - "gutxien", - "gutxienez", - "gutxik", - "gutxiko", - "gutxira", - "gutxiren", - "gutxitan", - "guzi", - "guziak", - "guziarekin", - "guziekin", - "guzientzat", - "guzti", - "guztia", - "guztiagatik", - "guztiak", - "guztian", - "guztiarekin", - "guztiaren", - "guztiari", - "guztiaz", - "guztiei", - "guztiek", - "guztien", - "guztiengan", - "guztientzako", - "guztientzat", - "guztietako", - "guztietan", - "guztietara", - "guztietatik", - "guztiez", - "guztioi", - "guztiok", - "guztion", - "guztionak", - "guztionen", - "guztiontzat", - "guztira", - "guztitako", - "haatik", - "haiek", - "haiekin", - "haien", - "haiengan", - "haiengandik", - "haietako", - "haietan", - "haietatik", - "hainbat", - "hainbatek", - "hainbaten", - "hainbatez", - "hainbertze", - "hainbeste", - "hainbesterako", - "haiteke", - "haiz", - "halaber", - "halere", - "harekin", - "haren", - "harena", - "harentzat", - "hargatik", - "hari", - "hark", - "hartako", - "hartan", - "hartara", - "hartarako", - "hartatik", - "hau", - "haudala", - "hauei", - "hauek", - "hauekin", - "hauen", - "hauetako", - "hauetan", - "hauetara", - "hauetarako", - "hauetarik", - "hauetatik", - "hauexek", - "hauez", - "hauxe", - "heu", - "heure", - "hhriek", - "hi", - "hik", - "hinduan", - "hintzen", - "hire", - "hiri", - "honegatik", - "honek", - "honekin", - "honen", - "honengatik", - "honentzat", - "honetako", - "honetan", - "honetara", - "honetarako", - "honetatik", - "honetaz", - "honez", - "honi", - "hori", - "horiei", - "horiek", - "horiekin", - "horien", - "horientzat", - "horietako", - "horietakoren", - "horietan", - "horietarako", - "horietariko", - "horietatik", - "horiez", - "horixe", - "horregatik", - "horrek", - "horrekin", - "horren", - "horrenbeste", - "horrenbestez", - "horrengatik", - "horretako", - "horretan", - "horretantxe", - "horretara", - "horretarako", - "horretatik", - "horretaz", - "horrexegatik", - "horrexekin", - "horrexetan", - "horrez", - "horrezaz", - "horri", - "hortaz", - "huan", - "huntan", - "hura", - "huraxe", - "iezaidazu", - "iezaiezu", - "iezaion", - "iezaiozu", - "inor", - "inoren", - "inorentzako", - "inori", - "inork", - "inortaz", - "irian", - "itzazu", - "izaki", - "kontra", - "lezake", - "lezakeen", - "lezakete", - "lezan", - "liekeela", - "liezaiokeen", - "lioke", - "liokeela", - "liokeen", - "lirateke", - "liratekeela", - "liteke", - "litekeela", - "litekeen", - "litekeena", - "litezke", - "lituzkeela", - "lituzkeen", - "lituzkete", - "litzaidake", - "litzaiguke", - "litzateke", - "litzatekeela", - "litzatekeelako", - 
"litzatekela", - "lizateke", - "luke", - "lukeela", - "lukeelako", - "lukeen", - "lukeena", - "lukete", - "luketen", - "nabil", - "nago", - "nahiko", - "nahikoa", - "nahikorik", - "nahiz", - "naiteke", - "naiz", - "naizela", - "naizen", - "naizenean", - "naizenetan", - "naizenetik", - "naizenez", - "naizenik", - "nau", - "nauen", - "nauenarentzat", - "nauenean", - "nauk", - "naun", - "naute", - "nautela", - "nauzu", - "nauzun", - "nazan", - "nazaten", - "nazazu", - "nazazun", - "nenbilen", - "nengoela", - "nengoen", - "nere", - "neu", - "neuk", - "neure", - "nezake", - "ni", - "nian", - "nien", - "nigan", - "nik", - "ninduen", - "ninduten", - "nintekeela", - "nintzaion", - "nintzateke", - "nintzatekeela", - "nintzela", - "nintzelako", - "nintzen", - "nintzenean", - "nion", - "nire", - "nirea", - "niregan", - "niregana", - "niregatik", - "nirekin", - "niretzako", - "niretzat", - "niri", - "nitaz", - "nituela", - "nituen", - "nituzke", - "nizuke", - "nor", - "norbait", - "norbaitek", - "norbaitekin", - "norbaiten", - "norbaitengana", - "norbaitentzat", - "norbaiti", - "norbera", - "norberak", - "norberaren", - "norbere", - "noren", - "nori", - "nork", - "nornahi", - "nornahik", - "nortzuk", - "nortzuren", - "nuela", - "nuen", - "nuena", - "nuenean", - "nuenetik", - "nuke", - "nukeela", - "omen", - "ondoan", - "ondoko", - "ondora", - "ondoren", - "ondorengo", - "ondotik", - "ordea", - "ordez", - "orduan", - "oro_har", - "orobat", - "orohar", - "orok", - "ororen", - "orori", - "ostean", - "ostera", - "osterantzean", - "pean", - "piskat", - "pixka_bat", - "pixkat", - "pranko", - "ugari", - "ugarik", - "ugarirekin", - "ugariren", - "ugaritan", - "zagok", - "zaidan", - "zaidanaren", - "zaie", - "zaiela", - "zaien", - "zaienez", - "zaigu", - "zaigun", - "zaiguna", - "zaigunean", - "zaik", - "zaio", - "zaiola", - "zaiolako", - "zaion", - "zaiona", - "zait", - "zaitez", - "zaitezen", - "zaitu", - "zaitut", - "zaituzte", - "zaitzakegu", - "zaizkidan", - "zaizkie", - "zaizkiela", - "zaizkien", - "zaizkigu", - "zaizkio", - "zaizkiola", - "zaizkion", - "zaizkit", - "zaizkizu", - "zaizkizue", - "zaizkizun", - "zaizu", - "zaizue", - "zara", - "zarela", - "zarete", - "zatekeela", - "zatekeen", - "zatzait", - "zaude", - "ze", - "zebilen", - "zedin", - "zegoan", - "zegoela", - "zegoelako", - "zegoen", - "zegoenez", - "zegok", - "zehar", - "zein", - "zeina", - "zeinek", - "zeinen", - "zeintzu", - "zeintzuetan", - "zeintzuk", - "zela", - "zelako", - "zelarik", - "zen", - "zena", - "zenak", - "zenarekin", - "zenari", - "zenbait", - "zenbaitek", - "zenbaiten", - "zenbaitetan", - "zenbaiti", - "zenbaitzuk", - "zenbat", - "zenbateraino", - "zenean", - "zenekoa", - "zenetik", - "zenez", - "zeniguten", - "zenigutenez", - "zenik", - "zenituen", - "zenitzakeen", - "zenuela", - "zenuen", - "zenuke", - "zenukete", - "zenutela", - "zenuten", - "zeozer", - "zer", - "zer_edo_zer", - "zerbait", - "zerbaitek", - "zerbaitengatik", - "zerbaitetarako", - "zeren", - "zerendako", - "zeri", - "zerk", - "zertan", - "zertara", - "zertarako", - "zertaz", - "zertxobait", - "zeu", - "zeudela", - "zeudelako", - "zeuden", - "zeudenak", - "zeuk", - "zeure", - "zezakeen", - "zezaken", - "zezaketen", - "zezala", - "zezan", - "zezaten", - "zidan", - "zidatelako", - "zidaten", - "zidatena", - "zidatenak", - "zidatenean", - "ziela", - "zien", - "zienez", - "zietela", - "zietelako", - "zieten", - "ziezaion", - "zigun", - "zigunez", - "ziguten", - "zinan", - "zinen", - "zintudan", - "zintuztela", - "zintuztenean", - "ziola", - "ziolako", - 
"ziolarik", - "zion", - "ziona", - "zionean", - "zionez", - "zioten", - "ziotenak", - "zirela", - "zirelako", - "zirelakoan", - "zirelarik", - "ziren", - "zirenak", - "zirenean", - "zirenetik", - "zirenez", - "zirenik", - "ziren\x97", - "zirezte", - "zitekeela", - "zitekeen", - "zitekeena", - "zitekeenik", - "zitezen", - "zitezkeela", - "zitezkeelakoan", - "zitezkeen", - "zituela", - "zituelako", - "zituelarik", - "zituen", - "zituenean", - "zituenei", - "zituztela", - "zituztelarik", - "zituzten", - "zituztenak", - "zituztenetik", - "zitzaidakeen", - "zitzaidala", - "zitzaidan", - "zitzaien", - "zitzaigun", - "zitzaiola", - "zitzaion", - "zitzaionagatik", - "zitzaionean", - "zitzaizkidan", - "zitzaizkien", - "zitzaizkienean", - "zitzaizkigun", - "zitzaizkion", - "zitzaizkon", - "zitzaizun", - "zitzakeen", - "zitzaketenak", - "zizioten", - "zizkidaten", - "zizkien", - "zizkienik", - "zizkieten", - "zizkigun", - "zizkiola", - "zizkion", - "zizkiona", - "zizkioten", - "zizkiotenekin", - "zizuen", - "zizun", - "zoin", - "zonbat", - "zu", - "zuei", - "zuek", - "zuela", - "zuelako", - "zuelarik", - "zuen", - "zuena", - "zuenak", - "zuenarentzat", - "zuenean", - "zuenetik", - "zuenez", - "zuenik", - "zuentzako", - "zuetako", - "zuetaz", - "zugandik", - "zuk", - "zukeen", - "zuketen", - "zure", - "zureak", - "zurekin", - "zuretzat", - "zutela", - "zutelako", - "zutelarik", - "zuten", - "zutena", - "zutenean", - "zuteneko", - "zutenetik", - "zutenez", - ], - "fr": [ - "a", - "afin", - "ai", - "aie", - "aient", - "ainsi", - "ait", - "alias", - "aller", - "allons", - "apres", - "après", - "as", - "au", - "au-delà", - "aucun", - "aucune", - "aucunes", - "aucuns", - "aujourd'", - "auprès", - "auquel", - "aura", - "aurai", - "auraient", - "aurais", - "aurait", - "aurions", - "aurons", - "auront", - "autant", - "autour", - "autre", - "autres", - "autrui", - "auxquelles", - "auxquels", - "avaient", - "avais", - "avait", - "avant", - "avec", - "avez", - "aviez", - "avions", - "avoir", - "avons", - "ayant", - "ayez", - "ayons", - "beaucoup", - "c'est-à-dire", - "c-à-d.", - "ca", - "car", - "ce", - "ceci", - "cela", - "celle", - "celle-ci", - "celles", - "celles-ci", - "celui", - "celui-ci", - "celui-là", - "cent", - "certain", - "certaine", - "certaines", - "certains", - "ces", - "cet", - "cette", - "ceux", - "ceux-ci", - "ceux-là", - "cf.", - "chacun", - "chacune", - "chaque", - "chez", - "ci", - "cinq", - "combien", - "comme", - "comment", - "concernant", - "contre", - "cà", - "d'après", - "d'autres", - "dans", - "de", - "dehors", - "depuis", - "derrière", - "des", - "deux", - "devait", - "devant", - "devez", - "devions", - "devoir", - "devons", - "devra", - "devraient", - "devrait", - "devrions", - "devrons", - "devront", - "doit", - "doivent", - "donc", - "dont", - "du", - "durant", - "dès", - "début", - "dû", - "elle", - "elle-même", - "elles", - "elles-mêmes", - "en", - "entre", - "entres", - "envers", - "environ", - "es", - "est", - "et", - "etaient", - "etant", - "etre", - "eut", - "eux", - "eux-mêmes", - "excepté", - "eût", - "faire", - "fais", - "faisaient", - "faisait", - "faisant", - "fait", - "faite", - "faites", - "fasse", - "fassent", - "fera", - "ferait", - "feront", - "firent", - "fit", - "font", - "furent", - "fussent", - "fut", - "fût", - "für", - "grâce", - "hormis", - "hors", - "i", - "il", - "ils", - "iront", - "je", - "jusque", - "l'on", - "la", - "ladite", - "laquelle", - "le", - "le/lui", - "ledit", - "lequel", - "les", - "lesdites", - "lesquelles", - "lesquels", - "leur", - 
"leurs", - "lors", - "lorsque", - "lui", - "lui-aussi", - "lui-même", - "là", - "ma", - "maint", - "maintes", - "mais", - "malgré", - "me", - "mes", - "mien", - "moi", - "moi-même", - "moins", - "mon", - "ne", - "ni", - "nonobstant", - "nos", - "notre", - "nous", - "nous-mêmes", - "nul", - "nôtre", - "nôtres", - "on", - "ont", - "onze", - "ou", - "outre", - "où", - "par", - "parce", - "parmi", - "pas", - "pendant", - "personne", - "peu", - "peut", - "peuvent", - "peux", - "plupart", - "plus", - "plusieurs", - "pour", - "pourquoi", - "pourra", - "pourraient", - "pourrait", - "pourrez", - "pourrons", - "pourront", - "pouvait", - "pouvez", - "pouvoir", - "pouvons", - "presque", - "près", - "pu", - "puis", - "puisque", - "puisse", - "puissent", - "puissions", - "qu", - "quand", - "quant", - "quarante", - "quatre", - "que", - "quel", - "quelconque", - "quelle", - "quelles", - "quelqu'un", - "quelque", - "quelques", - "quelques-unes", - "quelques-uns", - "quelqu’un", - "quels", - "qui", - "quiconque", - "quid", - "quoi", - "quoique", - "rien", - "sa", - "sans", - "sauf", - "se", - "selon", - "sera", - "serai", - "seraient", - "serais", - "serait", - "seras", - "serez", - "seriez", - "serions", - "serons", - "seront", - "ses", - "si", - "sien", - "sienne", - "siennes", - "siens", - "sinon", - "six", - "soi", - "soi-même", - "soient", - "sois", - "soit", - "sommes", - "son", - "sont", - "sous", - "soyez", - "soyons", - "suis", - "sur", - "t-il", - "ta", - "tandis", - "tant", - "tantôt", - "te", - "tel", - "telle", - "telles", - "tes", - "tien", - "toi", - "ton", - "tous", - "tout", - "toute", - "toutes", - "trois", - "tte", - "tu", - "un", - "une", - "unes", - "uns", - "unt", - "va", - "vais", - "van", - "vers", - "versus", - "via", - "voici", - "voilà", - "voir", - "voire", - "vont", - "vos", - "votre", - "vous", - "vous-même", - "vs", - "vu", - "y", - "à", - "á", - "ça", - "étaient", - "étais", - "était", - "étant", - "étiez", - "étions", - "été", - "êtes", - "être", - ], - "gu": [ - "અંદર", - "અડધા, અડધું", - "અત્યારે, હમણાં", - "અથવા, કે", - "અને", - "અનેક, ઘણા", - "અન્ય, બીજું", - "અમને, હમેં", - "અમારા", - "અમારું, આપણું", - "અમે", - "અહીં, અહીંયા", - "આ", - "આ દ્વારા", - "આ રીતે, આ તરફ", - "આની જેમ", - "ઉપર", - "એકલા", - "એનાથી", - "એમાથી", - "ઓછું, ઓછા", - "કઈ બાજુ", - "કદાચ", - "કયું, કયો, કઈ, જે", - "કાં તો", - "કેટલા", - "કેટલાક, થોડા", - "કેમ, શા માટે", - "કેવી રીતે, કઈ રીતે", - "કોઈ", - "કોઈ નહી", - "કોઈને", - "કોઈપણ", - "કોણ", - "કોનું, જેમના, જેમની", - "ક્યાંક, કોઈ જગ્યાએ", - "ક્યાંથી, જ્યાં, ક્યાં ", - "ક્યારે, જ્યારે", - "ક્યારેક ક્યારેક", - "ઘણું બધું", - "ઘણું, પુસ્કળ, અતિશય", - "જેથી", - "જેને, જેમને", - "જેમ", - "જેમ કે, જેમ, જે રીતે, જેવા કે", - "જો", - "તને", - "તમારા, તમારું", - "તમારું", - "તમે, તું", - "તારું", - "તે જેવી, તેની જેમ", - "તે રીતે, તે તરફ", - "તેઓ", - "તેઓનું", - "તેઓને, તેમને", - "તેણીના", - "તેથી, તો", - "તેના", - "તેનું, તેના", - "તેમના, તેમનું, તેઓની", - "તેમને. 
એમને", - "તેવું", - "ત્યાં", - "ત્યાં સુધી", - "થોડા", - "થોડું", - "દરેક", - "દૂર", - "દ્વારા", - "નજીક, પાસે", - "ના, નહિ", - "ના, નો", - "ની અંદર", - "ની સામે", - "નીચે", - "પછી", - "પછી, ત્યારે", - "પછીથી", - "પણ", - "પરંતુ, પણ", - "પાછળ", - "પેલી", - "પેલું", - "પેલો, તે", - "પ્રતિ", - "ફરીથી, ફરી", - "બંને, બેઉ", - "બધા", - "બહાર", - "બાજુમાં", - "ભરપૂર", - "મને", - "માં", - "માંથી, થી", - "માટે", - "માથે, ઉપર", - "મારા", - "મારુ, મારી ", - "મારું", - "લીધે, કારણ કે,કેમ કે", - "વધારાનું", - "વધારે", - "વધારે, વધુ ", - "શું", - "સમગ્ર", - "સમાન, એક સરખું", - "સાથે", - "સિવાય", - "સુધી", - "સૌથી વધુ", - "હજુ સુધી", - "હું", - ], - "hi": [ - "अंदर", - "अकेला", - "अतिरिक्त", - "अथवा, या", - "अधिकांश", - "अन्यथा", - "अब, अभि, इसी वक्त", - "अभी तक", - "आधा", - "आप, तुम, तुजे", - "आपका, तुम्हारा, तेरा", - "इधर, यहाँ", - "इन्हें, इन", - "इस तरफ", - "इस से", - "इसका, इसकी", - "इसके द्वारा", - "इसके साथ", - "इसलिए", - "इसलिए, तो", - "उदाहरण के लिए", - "उन को, इन को, उन्हें, इन्हें", - "उनका, उनके, उनकी, इनका", - "उनके", - "उनमें से", - "उन्हें", - "उस तरफ, उसी और", - "उसकी, उसके", - "उसके जैसा", - "उसको, उसके, इसको, इसके, इसकी", - "ऊपर", - "ऐसा", - "और", - "कब, जब", - "कभी - कभी", - "कभी कभी", - "कम", - "कम, थोड़ा", - "कहीं", - "का, की, के", - "काफ़ी", - "किंतु, पर, लेकिन, मगर", - "कितने", - "किस तरफ", - "किसके, जिसके, जिनके, किसका", - "किसको, किसे, जिसे, जिन्हे", - "किसी को", - "की ओर, की तरफ़", - "कुछ, थोड़े", - "के अंदर", - "के अलावा", - "के ऊपर", - "के लिये", - "के सामने", - "कैसे, कैसा", - "कोई", - "कोई न कोई", - "कोई नहीं", - "कोई, कोई व्यक्ति", - "कौन", - "कौन सा, जो", - "कौन, जो", - "क्या", - "क्यों", - "क्योंकि, चूंकि", - "जब तक", - "जब तक, तक तक", - "जहाँ, कहां, किधर", - "जिसका", - "जैसा", - "जैसे", - "जैसे की, जैसा, वैसा", - "जैसे, इस तरह", - "ज्यादा, अधिक", - "ढेर सारा", - "ढेर सारा, बहुत सारा", - "तक", - "तक, जब तक", - "तब, फिर", - "ताकि", - "तुम्हारा", - "तुम्हारा, तुम्हारे", - "तुम्हे, तुझे, तुमको", - "तेरा, तेरी", - "थोड़ा", - "दाहिने, दाहिना", - "दुसरा, एक और", - "दूर", - "दोनों", - "द्वारा", - "नहीं, मत ", - "नीचे", - "पास में, पास", - "पास, नजदीक, करीब", - "पीछे", - "पूरा", - "प्रति, से, तक", - "प्रत्येक", - "फिर, तो, तब, उस वक़्त", - "फिर, दुबारा", - "बजाय", - "बहुत, अनेक", - "बहुत, ज्यादा, काफी", - "बाएं, वाम", - "बाद में", - "बाद में, पीछे", - "बाहर", - "भी", - "मुझे", - "में, भीतर, अंदर", - "में, मैंने", - "मेरा, अपना", - "मेरा, मेरी", - "मेरी, मेरा, मेरे", - "यदि", - "यदि, अगर", - "यदि, या", - "यह, ये, इसे", - "लेकिन", - "वह", - "वह, जो", - "वहां", - "वही", - "वे, वह, वो, उन्होंने", - "वैसे, उसके जैसा", - "शायद", - "सब लोग", - "सब, सभी, सारे", - "सबसे ज्यादा, अधिकांश", - "साथ", - "से", - "हम", - "हमारा, हमारे, हमारी", - "हर जगह", - "हालाँकि", - ], - "id": [ - "Anda", - "ada", - "adakah", - "adalah", - "adanya", - "adapaun", - "adapun", - "agar", - "akan", - "akau", - "akhirnya", - "akibat", - "akibatnya", - "aku", - "alias", - "anda", - "aneka", - "antar", - "antara", - "antaranya", - "apa", - "apabila", - "apakah", - "apalagi", - "apapun", - "asal", - "atas", - "atau", - "ataukah", - "ataupun", - "bagai", - "bagaimana", - "bagaimanakah", - "bagaimanapun", - "bagi", - "bagi-nya", - "bahkan", - "bahwa", - "bahwasanya", - "baik", - "bakal", - "balik", - "banyak", - "banyaknya", - "baru", - "bawah", - "beberapa", - "begini", - "beginilah", - "begitu", - "belakang", - "beliau", - "belum", - "beragam", - "berapa", - "berapakah", - "berbagai", - "berberapa", - "berdasar", - "berdasarkan", - "berdiri", - "berdirinya", - "berikut", - "berkat", - "bersama", - "bersamanya", - 
"berupa", - "beserta", - "betapa", - "bila", - "bilamana", - "bisa", - "boleh", - "buah", - "buat", - "bukan", - "bukankah", - "bukanlah", - "bukannya", - "buruh", - "cara", - "dalam", - "dalamnya", - "dan", - "dapat", - "dari", - "darimana", - "daripada", - "dekat", - "demi", - "demikian", - "dengan", - "dengannya", - "depan", - "dg", - "di", - "dia", - "diantara", - "diantaranya", - "diatas", - "dibalik", - "dibandingkan", - "dibawah", - "dibawahnya", - "dibeberapa", - "dibelakang", - "diberbagai", - "didalam", - "didalamnya", - "diluar", - "dimana", - "diri", - "dirinya", - "disaat", - "disamping", - "disebelah", - "disekeliling", - "diseluruh", - "disini", - "ditepi", - "dng", - "dr", - "engkau", - "gambar", - "gimana", - "hadap", - "hai", - "hanya", - "harus", - "hei", - "ia", - "ialah", - "ini", - "inikah", - "inilah", - "inipun", - "isi", - "isinya", - "itu", - "itua", - "itulah", - "itupun", - "iye", - "jadi", - "jangan", - "jauh", - "jelang", - "jenis", - "jika", - "juga", - "kah", - "kalau", - "kalian", - "kalo", - "kami", - "kamilah", - "kamu", - "kan", - "kapan", - "kapankah", - "karena", - "karenanya", - "kau", - "ke", - "kebanyakan", - "kecuali", - "kedalam", - "kedepan", - "kedua", - "keduanya", - "keliling", - "keluar", - "kemudian", - "kena", - "kenapa", - "kendati", - "kepada", - "kepadaku", - "kepadamu", - "kepadanya", - "kepusatnya", - "kerana", - "keseluruhan", - "keseluruhannya", - "kesemuanya", - "ketika", - "ketimbang", - "khususnya", - "kira", - "kita", - "kok", - "koq", - "kpd", - "ku", - "la", - "lagi", - "lah", - "lain", - "lainnya", - "lalu", - "lama", - "lantaran", - "lantas", - "layak", - "layaknya", - "lengah", - "lewat", - "loh", - "luar", - "macam", - "maka", - "makanya", - "maksud", - "maksudnya", - "malahan", - "mampu", - "mana", - "manakah", - "manakala", - "manapun", - "masa", - "masing", - "masing-masing", - "maupun", - "mayoritas", - "melainkan", - "melalui", - "melawan", - "melewati", - "menajak", - "menbeli", - "mengajak", - "mengapa", - "mengenai", - "mengenainya", - "menjadi", - "menjelang", - "menuju", - "menurut", - "menurutmu", - "mereka", - "merekapun", - "merupakan", - "meski", - "meskipn", - "meskipun", - "misalkan", - "misalnya", - "msl", - "mulai", - "mungkin", - "namun", - "nya", - "oleh", - "olehnya", - "orang", - "pada", - "padahal", - "padanya", - "para", - "pasca", - "pd", - "per", - "perihal", - "perlu", - "pula", - "pun", - "saat", - "saatnya", - "sama", - "sambil", - "sampai", - "sampai-sampai", - "samping", - "sana", - "sang", - "satu", - "satu-satunya", - "satunya", - "saya", - "seakan", - "seandainya", - "seantero", - "sebab", - "sebagai", - "sebagaimana", - "sebagian", - "sebaliknya", - "sebangsa", - "sebanyak", - "sebelah", - "sebelum", - "sebelumnya", - "seberang", - "seberat", - "sebesar", - "sebuah", - "secara", - "sedang", - "sedangkan", - "sedangkkan", - "sedari", - "sedikit", - "sedikitnya", - "seekor", - "segala", - "segenap", - "seharusnya", - "sehingga", - "sehubungan", - "seiring", - "sejak", - "sejauh", - "sejenis", - "sejumlah", - "sekali", - "sekaligus", - "sekalipun", - "sekitar", - "sekitarnya", - "selain", - "selaku", - "selama", - "selesai", - "seluas", - "seluruh", - "semacam", - "semasa", - "semenjak", - "sementara", - "sempat", - "semua", - "semuanya", - "sendiri", - "senilai", - "seorang", - "sepanjang", - "sepasang", - "sepeninggal", - "seperti", - "sepertinya", - "sepeti", - "sepucuk", - "seputar", - "serangkaian", - "seraya", - "serta", - "sesampai", - "sesampainya", - "seseorang", - "sesuai", - 
"sesuatu", - "sesudah", - "setebal", - "setelah", - "setelahnya", - "setengah", - "setiap", - "setinggi", - "seusai", - "sewaktu", - "si", - "siapa", - "siapakah", - "siapapun", - "silakan", - "sini", - "sinilah", - "situ", - "soal", - "suatu", - "sudah", - "supaya", - "tak", - "tan", - "tangguh", - "tanpa", - "tapi", - "tatkala", - "telah", - "tempat", - "tengah", - "tengahnya", - "tentang", - "tepat", - "tepatnya", - "teratas", - "terhadap", - "terhadapnya", - "termasuk", - "ternyata", - "tersebut", - "tertentu", - "terutama", - "tesebut", - "tetap", - "tetapi", - "tiada", - "tiap", - "tidak", - "tidakkah", - "tidaklah", - "tidaknya", - "tsb", - "tt", - "ttg", - "tuh", - "tujuh", - "untuk", - "untukmu", - "untuknya", - "untung", - "usah", - "usai", - "via", - "waktu", - "walau", - "walaupun", - "ya", - "yaitu", - "yakni", - "yang", - "yg", - ], - "mr": [ - "अधिक", - "अनेक", - "अशी", - "असलयाचे", - "असलेल्या", - "असा", - "असून", - "असे", - "आज", - "आणि", - "आता", - "आपल्या", - "आला", - "आली", - "आले", - "आहे", - "आहेत", - "एक", - "एका", - "कमी", - "करणयात", - "करून", - "का", - "काम", - "काय", - "काही", - "किवा", - "की", - "केला", - "केली", - "केले", - "कोटी", - "गेल्या", - "घेऊन", - "जात", - "झाला", - "झाली", - "झाले", - "झालेल्या", - "टा", - "डॉ", - "तर", - "तरी", - "तसेच", - "ता", - "ती", - "तीन", - "ते", - "तो", - "त्या", - "त्याचा", - "त्याची", - "त्याच्या", - "त्याना", - "त्यानी", - "त्यामुळे", - "त्री", - "दिली", - "दोन", - "न", - "नाही", - "निर्ण्य", - "पण", - "पम", - "परयतन", - "पाटील", - "म", - "मात्र", - "माहिती", - "मी", - "मुबी", - "म्हणजे", - "म्हणाले", - "म्हणून", - "या", - "याचा", - "याची", - "याच्या", - "याना", - "यानी", - "येणार", - "येत", - "येथील", - "येथे", - "लाख", - "व", - "व्यकत", - "सर्व", - "सागित्ले", - "सुरू", - "हजार", - "हा", - "ही", - "हे", - "होणार", - "होत", - "होता", - "होती", - "होते", - ], - "pt": [ - "a", - "a cabo de", - "a caminho de", - "a despeito de", - "a favor de", - "a fim de", - "a menos que", - "a não ser", - "a não ser que", - "a partir de", - "a propósito", - "a respeito de", - "a título de", - "abaixo de", - "acima", - "acima de", - "afinal", - "afora", - "agora", - "agora que", - "ai", - "ainda", - "ainda mais", - "algo", - "algum", - "alguma", - "algumas", - "alguns", - "alguém", - "além", - "além de", - "ambas", - "ambos", - "andar", - "andou", - "ante", - "antes", - "anti", - "antre", - "ao", - "ao cabo de", - "ao invés de", - "ao lado", - "ao longo de", - "ao passo que", - "ao redor de", - "aos cuidados de", - "apenas", - "apesar de", - "apesar de que", - "após", - "aquela", - "aquelas", - "aquele", - "aqueles", - "aquilo", - "as", - "assim", - "assim como", - "assim que", - "atras", - "através", - "através de", - "atráis", - "atrás", - "atrás de", - "até", - "até que", - "auto", - "avante", - "aí", - "bastante", - "bem", - "bem como", - "cada", - "cara a cara", - "caso", - "cerca", - "cima", - "com", - "comigo", - "como", - "como se", - "conforme", - "connosco", - "conosco", - "conquanto", - "consigo", - "consoante", - "contanto", - "contanto que", - "contigo", - "contra", - "contudo", - "convosco", - "cuja", - "cujas", - "cujo", - "cujos", - "d'", - "d.", - "da", - "dada", - "dado", - "dado que", - "dali", - "daquela", - "daquelas", - "daquele", - "daqui", - "daqui a", - "daí", - "de", - "de modo que", - "dela", - "delas", - "dele", - "deles", - "demais", - "dentre", - "dentro", - "dentro de", - "depois", - "depois de", - "desde", - "desde que", - "dessa", - "dessas", - "desse", - "desses", - "desta", - "destas", - "deste", - 
"destes", - "detrás de", - "deva", - "devam", - "deve", - "devem", - "devemos", - "devendo", - "dever", - "deveria", - "deveriam", - "deverá", - "deverão", - "deviam", - "devido", - "devido a", - "devo", - "diante de", - "disso", - "diversas", - "diversos", - "do que", - "donde", - "doutros", - "dum", - "duma", - "durante", - "e", - "e/ou", - "eba", - "eis", - "ela", - "elas", - "ele", - "eles", - "eles/elas", - "em", - "em cima de", - "em frente a", - "em meio a", - "em nome de", - "em prol de", - "em relação a", - "em torno de", - "em vez de", - "em virtude de", - "em vista de", - "em volta de", - "embaixo de", - "embora", - "enquanto", - "entre", - "entretanto", - "então", - "era", - "eram", - "ergo", - "essa", - "essas", - "esse", - "esses", - "esta", - "estado", - "estamos", - "estando", - "estar", - "estarem", - "estaria", - "estariam", - "estarmos", - "estará", - "estarão", - "estas", - "estava", - "estavam", - "este", - "esteja", - "estejam", - "estes", - "esteve", - "estivemos", - "estiver", - "estiveram", - "estiverem", - "estivesse", - "estivessem", - "estou", - "está", - "estávamos", - "estão", - "eu", - "excepto", - "exceto", - "fica", - "ficado", - "ficamos", - "ficando", - "ficar", - "ficaram", - "ficaria", - "ficou", - "fiquei", - "foi", - "fomos", - "for", - "fora", - "fora de", - "foram", - "forem", - "fosse", - "fossem", - "frente a", - "fui", - "fôr", - "gente", - "graças", - "graças a", - "havendo", - "haver", - "haverem", - "havia", - "haviam", - "houver", - "houvesse", - "há", - "i.e.", - "ia", - "iam", - "ido", - "igual a", - "inté", - "invés de", - "ir", - "ireii", - "irem", - "iremos", - "iria", - "iriam", - "irá", - "irão", - "isso", - "isto", - "junto a", - "junto com", - "já", - "já que", - "la", - "las", - "lhe", - "lhes", - "lo", - "logo", - "logo que", - "los", - "lá", - "mais", - "mais de", - "mais do que", - "mais que", - "mal", - "malgrado", - "mas", - "me", - "mediante", - "menos", - "mesma", - "mesmas", - "mesmo", - "mesmo que", - "mesmo se", - "mesmos", - "meu", - "meus", - "mim", - "minha", - "minhas", - "muita", - "muitas", - "muito", - "muito menos", - "muitos", - "muitíssimo", - "n'", - "na", - "na frente de", - "na sequência de", - "nada", - "naquela", - "naquele", - "naqueles", - "naquilo", - "nas", - "nele", - "neles", - "nem", - "nenhum", - "nenhuma", - "nenhumas", - "nenhuns", - "nessa", - "nessas", - "nesse", - "nesses", - "nesta", - "nestas", - "neste", - "nestes", - "ninguém", - "no", - "no que", - "nos", - "nosco", - "nossa", - "nossas", - "nosso", - "nossos", - "num", - "numa", - "nós", - "o", - "o(s)", - "onde", - "onde quer que", - "ora", - "os", - "ou", - "outra", - "outras", - "outrem", - "outro", - "outros", - "outrém", - "oxalá", - "p'ra", - "p/", - "pa", - "para", - "para com", - "para que", - "parece", - "parecer", - "pelo", - "per", - "perante", - "perantes", - "permanece", - "permanecer", - "perto de", - "pode", - "podem", - "podemos", - "podendo", - "poder", - "poderei", - "poderem", - "poderemos", - "poderia", - "poderiam", - "poderá", - "poderão", - "poderíamos", - "podia", - "podiam", - "podíamos", - "pois", - "por", - "por causa de", - "por causa que", - "por conta de", - "por entre", - "por isso", - "por isto", - "por meio de", - "por trás", - "por trás de", - "por volta de", - "porquanto", - "porque", - "portanto", - "porém", - "possa", - "possam", - "possamos", - "posso", - "pouca", - "poucas", - "pouco", - "poucos", - "pouquíssimos", - "pra", - "precisam", - "precisar", - "precisaram", - "precisarão", - "precisou", - 
"prestes a", - "pretender", - "pretendiam", - "pro", - "pré", - "pré-", - "pró", - "pude", - "pudemos", - "puderam", - "puderem", - "pudesse", - "pudessem", - "pós", - "pôde", - "pôr", - "público", - "q.b.", - "quais", - "quaisquer", - "qual", - "qualquer", - "quando", - "quanta", - "quantas", - "quanto", - "quanto a", - "quanto baste", - "quanto mais", - "quantos", - "que", - "quem", - "quer", - "quão", - "quê", - "rente a", - "rente de", - "rumo a", - "se", - "se bem que", - "se e somente se", - "se-", - "segundo", - "seja", - "sejam", - "sem", - "sem falar de", - "sempre que", - "sendo", - "sendo que", - "senão", - "ser", - "serei", - "serem", - "seremos", - "seria", - "seriam", - "sermos", - "será", - "serão", - "seu", - "seus", - "si", - "sido", - "sob", - "sobre", - "somos", - "sou", - "sse", - "sua", - "suas", - "sub", - "são", - "sê", - "só que", - "sôbre", - "ta", - "tais", - "tal", - "tampouco", - "tanta", - "tantas", - "tanto", - "tantos", - "te", - "tem", - "temos", - "tende", - "tendo", - "tenha", - "tenham", - "tenhamos", - "tenho", - "tentado", - "tentar", - "tentaram", - "ter", - "terei", - "terem", - "teremos", - "teria", - "teriam", - "termos", - "terá", - "terão", - "teríamos", - "teu", - "teus", - "teve", - "ti", - "tido", - "tinha", - "tinham", - "tive", - "tivemos", - "tiver", - "tiveram", - "tiverem", - "tivesse", - "tivessem", - "to", - "toda", - "todas", - "todavia", - "todo", - "todos", - "trás", - "tu", - "tua", - "tuas", - "tudo", - "tá", - "tão", - "tão logo", - "té", - "têm", - "tínhamos", - "ultra", - "um", - "uma", - "uma vez que", - "umas", - "uns", - "vai", - "vais", - "vamos", - "varias", - "varios", - "versus", - "via", - "visto", - "visto que", - "voce", - "você", - "vocês", - "vos", - "vossa", - "vossas", - "vosso", - "vossos", - "vou", - "vs", - "vá", - "várias", - "vários", - "vão", - "vérsus", - "vós", - "à", - "à beira de", - "à custa de", - "à expensa de", - "à luz de", - "à medida que", - "àquela", - "àqueles", - "às", - "às custas de", - "às expensas de", - "é", - "íamos", - "\u200b\u200bem", - ], - "sw": [ - "akasema", - "alikuwa", - "alisema", - "baada", - "basi", - "bila", - "cha", - "chini", - "hadi", - "hapo", - "hata", - "hivyo", - "hiyo", - "huku", - "huo", - "ili", - "ilikuwa", - "juu", - "kama", - "karibu", - "katika", - "kila", - "kima", - "kisha", - "kubwa", - "kutoka", - "kuwa", - "kwa", - "kwamba", - "kwenda", - "kwenye", - "la", - "lakini", - "mara", - "mdogo", - "mimi", - "mkubwa", - "mmoja", - "moja", - "muda", - "mwenye", - "na", - "naye", - "ndani", - "ng", - "ni", - "nini", - "nonkungu", - "pamoja", - "pia", - "sana", - "sasa", - "sauti", - "tafadhali", - "tena", - "tu", - "vile", - "wa", - "wakati", - "wake", - "walikuwa", - "wao", - "watu", - "wengine", - "wote", - "ya", - "yake", - "yangu", - "yao", - "yeye", - "yule", - "za", - "zaidi", - "zake", - ], - "ur": [ - "اسلئے", - "اسکے جیسا", - "ان کے بیچ ,ان لوگوں کے بیچ", - "اندر", - "انکا", - "اور ,و", - "اوپر", - "اگر ,گرچہ ,اگرچہ", - "باہر", - "بایاں ,بائیں", - "بجائے ,بدلے ,بدلے میں", - "بہت ,بہت سارے ,بہت کچھ", - "بہت زیادہ", - "تب تک", - "تم لوگ ,آپ ,آپ لوگ", - "تمہارا ,تیرا ,آپکا", - "تو, تم ,آپ", - "تھوڑا ,تھوڑی", - "جب تک", - "جسکا", - "جیسے", - "حالاںکہ", - "دایاں ,دائیں ,صحیح", - "دوسرا", - "زیادہ تر", - "ساتھ ,کے ساتھ", - "سب ,سبھی ,سب کچھ ,سارے ,سارا", - "سب لوگ", - "طرف ,اسکی طرف", - "لیکن", - "مثلأ ,مثال کے طور پے", - "میرا", - "میں", - "میں ,کے اندر ,اندر", - "نہی تو", - "نہیں ,ناں ,نا", - "نیچے", - "وہ ,وہ لوگ", - "وہ ,وہ والا, کہ", - "وہ ,یے", - "وہاں", - 
"پھر", - "پہ ,پر ,میں", - "کافی", - "کب", - "کبھی کبھی", - "کم", - "کوئی", - "کون", - "کونسا", - "کچھ", - "کہاں", - "کیا", - "کیسے", - "کیوںکہ ,چوںکہ ,کیوںکی", - "کےلئے", - "ہم ,ھم", - "یہ ,یہ والا", - "یہاں", - ], - "vi": [ - "ai", - "ai ai", - "ai nấy", - "anh", - "anh em", - "anh trai", - "anh ấy", - "ba", - "bao", - "bao giờ", - "bay", - "bà", - "bà con", - "bà ấy", - "bác", - "bây", - "bé", - "bên", - "bạn", - "bạn gái", - "bạn trai", - "bả", - "bản thân", - "bất chấp", - "bất cứ", - "bất kì", - "bất luận", - "bất nhược", - "bất quá", - "bấy", - "bấy nhiêu", - "bần tăng", - "bầy quân", - "bầy tui", - "bậu", - "bằng", - "bệ hạ", - "bị cáo", - "bố", - "bố nó", - "bồ", - "bộ", - "bởi", - "bởi vì", - "cc", - "cha", - "chao", - "chi", - "chiếu theo", - "cho", - "cho dù", - "cho đến", - "choa", - "chàng", - "chán", - "cháu", - "chí", - "chính", - "chú", - "chú mày", - "chúng", - "chúng mày", - "chúng mình", - "chúng nó", - "chúng ta", - "chúng tao", - "chúng tôi", - "chút", - "chăng", - "chưa", - "chưng", - "chả", - "chắc", - "chẳng cứ", - "chỉ", - "chị", - "chị gái", - "chị ấy", - "chớ", - "chứ", - "con", - "con này", - "cuối cùng", - "các", - "các hạ", - "cái", - "cái gì", - "cái này", - "cán bộ", - "còn", - "có", - "có vẻ", - "cóc", - "cô", - "cô nương", - "cô ta", - "cô ấy", - "côi", - "công tử", - "cùng", - "cơ", - "cơ mà", - "cưng", - "cạnh", - "cả", - "cả nhà", - "cầm bằng", - "cậu", - "cổ", - "cộng", - "cụ", - "của", - "cứ", - "do", - "do vậy", - "do đó", - "duy", - "dù", - "dù sao", - "dù vậy", - "dưng", - "dưới", - "dường như", - "dạ", - "dầu", - "dẫu", - "dẫu vậy", - "dậy", - "dọc", - "dợ", - "em", - "ghe", - "già", - "giá như", - "giả dụ", - "giả sử", - "giữa", - "gì", - "ha", - "hay", - "hay là", - "hen", - "hoàng thượng", - "hoặc", - "huynh", - "huống", - "huống chi", - "huống gì", - "huống hồ", - "há", - "hôn", - "hơn", - "hơn nữa", - "hả", - "hầu hết", - "hắn", - "hết", - "hết cả", - "hề", - "hễ", - "họ", - "hổi", - "hỡi", - "hử", - "khanh", - "khi", - "khi nào", - "không", - "không ai", - "không những", - "khứa", - "kia", - "kém", - "kìa", - "kẻo", - "kể từ", - "l", - "là", - "lão", - "lên", - "lại nữa", - "lần", - "lẫn", - "lắm", - "mi", - "min", - "miễn", - "moa", - "muôn", - "muội", - "mà", - "mà còn", - "mày", - "mãi", - "mình", - "mô", - "mũ", - "mất", - "mấy", - "mầy", - "mẫu hậu", - "mặc dù", - "mặc dầu", - "mặt khác", - "mẹ", - "mẹ nó", - "mọi", - "mọi người", - "mọi vật", - "mỏa", - "mỗi", - "một chút", - "một nửa", - "một số", - "một vài", - "một ít", - "mụ", - "ngay", - "nghe", - "nghen", - "nghỉ", - "ngoài", - "ngoài ra", - "ngoại", - "ngoải", - "ngài", - "ngươi", - "người", - "người người", - "người ta", - "ngược lại", - "ngộ", - "nha", - "nhiều", - "nhà quân", - "nhá", - "nhân", - "nhân dịp", - "nhé", - "như", - "như vậy", - "nhưng", - "nhưng mà", - "nhược bằng", - "nhất là", - "nhằm", - "nhỉ", - "nhỏ", - "nhờ", - "nhỡ", - "những", - "ni", - "nà", - "nàng", - "nào", - "này", - "nè", - "nên", - "nó", - "nô tài", - "nô tì", - "nơi", - "nơi nơi", - "nấy", - "nầy", - "nẩu", - "nếu", - "nếu như", - "nọ", - "nội", - "nớ", - "nừng", - "nửa", - "nữa", - "phi", - "phía", - "phô bay", - "phải", - "phải hôn", - "phải không", - "phần", - "phần lớn", - "phỏng", - "phứt", - "qua", - "quanh", - "quý khách", - "quý vị", - "quả", - "quả nhân", - "ra", - "riêng", - "rùi", - "rằng", - "rồi", - "sang", - "sao", - "sau", - "sau cùng", - "song", - "song le", - "sắp", - "sẽ", - "sở dĩ", - "ta", - "tao", - "tau", - "thanh niên", - "thay", - "thay vì", - "theo", - "theo đó", - 
"thiếp", - "thiệt", - "thành", - "thâu", - "thêm", - "thì", - "thí dụ", - "thôi", - "thần", - "thầy", - "thẩy", - "thật", - "thằng này", - "thế", - "thế là", - "thế mà", - "thế nhưng", - "thị", - "thời", - "tiểu nhân", - "toa", - "toà", - "toàn", - "toàn bộ", - "toàn thể", - "trong", - "trong khi", - "trong đó", - "trái", - "trái lại", - "trên", - "trò", - "trước", - "trẫm", - "trời", - "trừ phi", - "tuy", - "tuy nhiên", - "tuy rằng", - "tuy vậy", - "tê", - "tóm lại", - "tôi", - "tương đương", - "tại", - "tại hạ", - "tại vì", - "tất cả", - "tầm", - "tận", - "tỉ", - "tổ", - "tớ", - "tới", - "tụi", - "tụi nó", - "tức", - "tức là", - "từ", - "tự", - "tựa", - "ui", - "và", - "vài", - "vài ba", - "vào", - "vì", - "vì thế", - "vì vậy", - "ví dụ", - "ví như", - "vô", - "vô số", - "vô vàn", - "vả chăng", - "vả lại", - "vậy", - "vậy là", - "vậy mà", - "về", - "về hướng", - "về phía", - "vị", - "với", - "xuống", - "à", - "á", - "ái khanh", - "âu là", - "í", - "ít", - "ông", - "ông ấy", - "út", - "ý", - "đa số", - "đang", - "đi", - "đâu", - "đây", - "đã", - "đê", - "đích thân", - "đó", - "đôi", - "đương", - "được", - "đại nhân", - "đấy", - "đầu tiên", - "đằng này", - "đằng ấy", - "đẳng", - "đặng", - "đến", - "để", - "đệ", - "đối với", - "đồ", - "ơi", - "ư", - "ạ", - "ả", - "ảnh", - "ấy", - "ẻm", - "ổng", - "ờ", - "ở", - "ừ", - "ừa", - "ừm", - ], - "yo": [ - "a", - "an", - "bá", - "bí", - "bẹ̀rẹ̀", - "fún", - "fẹ́", - "gbogbo", - "inú", - "jù", - "jẹ", - "jẹ́", - "kan", - "kì", - "kí", - "kò", - "láti", - "lè", - "lọ", - "mi", - "mo", - "máa", - "mọ̀", - "ni", - "náà", - "ní", - "nígbà", - "nítorí", - "nǹkan", - "o", - "padà", - "pé", - "púpọ̀", - "pẹ̀lú", - "rẹ̀", - "sì", - "sí", - "sínú", - "ṣ", - "ti", - "tí", - "wà", - "wá", - "wọn", - "wọ́n", - "yìí", - "àti", - "àwọn", - "é", - "í", - "òun", - "ó", - "ń", - "ńlá", - "ṣe", - "ṣé", - "ṣùgbọ́n", - "ẹmọ́", - "ọjọ́", - "ọ̀pọ̀lọpọ̀", - ], - "zh": [ - "", - "一", - "一争", - "一些", - "一切", - "一旦", - "一点", - "一爭", - "上", - "上前", - "上表", - "下", - "不", - "不仅", - "不会", - "不但", - "不僅", - "不光", - "不关", - "不准", - "不单", - "不可", - "不單", - "不够", - "不夠", - "不应", - "不得", - "不想", - "不愿", - "不應", - "不是", - "不會", - "不準", - "不用", - "不管", - "不經", - "不肯", - "不能", - "不要", - "不該", - "不論", - "不论", - "不该", - "不過", - "不需", - "不願", - "与", - "与其", - "且", - "且是", - "並", - "並且", - "並非", - "个", - "个人", - "中", - "临", - "为", - "为了", - "为人", - "为什么", - "主", - "乃至", - "之", - "之上", - "之下", - "之中", - "之內", - "之内", - "之初", - "之前", - "之后", - "之外", - "之後", - "之所以", - "之时", - "之時", - "之間", - "之间", - "也", - "也是", - "书", - "了", - "争辩", - "事", - "于", - "井", - "亚", - "亞", - "亦为", - "亦是", - "亦為", - "亭", - "亲", - "人", - "人人", - "人家", - "什么", - "什麼", - "今", - "仍是", - "仍算", - "从", - "他", - "他们", - "他俩", - "他倆", - "他們", - "代", - "令", - "以", - "以上", - "以下", - "以为", - "以來", - "以前", - "以北", - "以及", - "以后", - "以外", - "以往", - "以後", - "以来", - "以為", - "以爲", - "以至", - "们", - "价", - "任", - "任何", - "众", - "会", - "传", - "伪", - "似乎", - "似的", - "但", - "但是", - "位", - "低", - "住", - "体", - "何", - "何方", - "佛", - "作", - "作为", - "作為", - "你", - "你们", - "你們", - "你自己", - "你门", - "佬", - "併", - "使", - "來", - "供", - "依", - "依据", - "依據", - "依照", - "依靠", - "侠", - "侧", - "侨", - "侯", - "便是", - "係", - "保存", - "保級", - "保级", - "俠", - "信", - "修复", - "修復", - "個", - "個人", - "們", - "倘若", - "借助", - "借由", - "借着", - "值", - "假使", - "假如", - "偏", - "做", - "側", - "偽", - "傳", - "傻", - "像", - "像是", - "僑", - "價", - "儘管", - "元", - "先", - "光", - "光棍", - "党", - "內", - "內外", - "全", - "全体", - "全副", - "全套", - "全部", - "全體", - "公", - "关", - "关于", - "关心", 
- "兵", - "其", - "其中", - "其他", - "其余", - "其它", - "其餘", - "典", - "兼", - "内", - "内外", - "军", - "冠", - "冢", - "冲", - "冷", - "准", - "准备", - "减慢", - "几", - "凭", - "凭借", - "出手", - "刀", - "分", - "分布", - "列", - "则为", - "则是", - "初", - "別", - "別人", - "别", - "别人", - "别的", - "到", - "到处", - "制", - "券", - "剂", - "則是", - "則為", - "前", - "前任", - "前后", - "前後", - "剑", - "剧", - "副", - "劇", - "劍", - "劑", - "力", - "办", - "办学", - "功", - "加", - "劣", - "努力", - "包", - "包裹", - "化", - "区", - "医", - "區", - "半", - "单", - "卡", - "卫", - "即", - "即使", - "即便", - "却是", - "卻", - "卻是", - "卿", - "厂", - "厅", - "历届", - "压", - "原", - "去", - "县", - "又", - "又或", - "又是", - "及", - "友", - "发展", - "发育", - "变", - "变得", - "口", - "古", - "另", - "另外", - "只是", - "只有", - "只能", - "只要", - "可", - "可以", - "可是", - "可能", - "台", - "史", - "叶", - "号", - "司", - "吃", - "各", - "各个", - "各位", - "各個", - "各天", - "各州", - "各式", - "各樣", - "各种", - "各种各样", - "各種", - "各種各樣", - "各类", - "各級", - "各级", - "各自", - "各項", - "各類", - "各项", - "同", - "同年", - "名", - "后", - "向", - "吗", - "君", - "否", - "吧", - "呀", - "员", - "呢", - "周", - "味", - "和", - "和美", - "咱们", - "品", - "哈尔滨", - "哈爾濱", - "員", - "哪", - "哪个", - "哪些", - "哪個", - "哪儿", - "哪兒", - "哪怕", - "哪裏", - "哪裡", - "哪里", - "唯有", - "商", - "啊", - "啦", - "喇", - "喜", - "喜欢", - "喜歡", - "單", - "單憑", - "嗎", - "嗬", - "嘛", - "嘴", - "器", - "回", - "因", - "因为", - "因应", - "因應", - "因此", - "因為", - "团", - "园", - "围", - "国", - "图", - "圆", - "圈", - "國", - "圍", - "園", - "圓", - "圖", - "團", - "土", - "圣", - "在", - "在內", - "在内", - "地", - "场", - "坊", - "坟", - "坡", - "型", - "埋", - "城", - "埤", - "執政", - "基", - "基于", - "基於", - "堂", - "堡", - "堤", - "報", - "場", - "塔", - "塘", - "墓", - "墙", - "增長", - "增长", - "墟", - "墳", - "壓", - "士", - "处", - "外", - "多", - "多少", - "多次", - "夜", - "够", - "夠", - "夢", - "大", - "大家", - "天", - "头", - "夹", - "夾", - "奏", - "奖", - "套", - "女", - "女士们", - "女士门", - "奸", - "她", - "她们", - "她俩", - "她倆", - "她們", - "好", - "好了", - "好像", - "如", - "如何", - "如同", - "如果", - "妃", - "妇", - "妳", - "妹", - "始", - "娘", - "婆", - "婦", - "子", - "孔", - "字", - "季", - "学", - "學", - "宁愿", - "它", - "它们", - "它們", - "安全", - "宏", - "宗", - "官", - "实属", - "审", - "客", - "室", - "宫", - "宮", - "家", - "宽", - "富", - "實屬", - "審", - "寬", - "对", - "对于", - "对方", - "对此", - "寺", - "将", - "將", - "對", - "對方", - "對於", - "對此", - "小", - "尖", - "就", - "就是", - "就算", - "尸", - "尽管", - "局", - "层", - "屋", - "屍", - "展", - "属", - "層", - "屬", - "屯", - "山", - "屿", - "岗", - "岛", - "岩", - "岭", - "岸", - "峡", - "峰", - "島", - "峽", - "崖", - "崗", - "嶺", - "嶼", - "川", - "州", - "工", - "左右", - "差", - "巷", - "币", - "市", - "布", - "师", - "希望", - "帝", - "带", - "師", - "席", - "帮", - "帶", - "帽", - "幣", - "幫", - "年", - "并", - "并且", - "并非", - "幾", - "庄", - "床", - "庐", - "库", - "应", - "应当", - "应该", - "底", - "店", - "庙", - "府", - "度", - "座", - "庫", - "庭", - "廟", - "廠", - "廬", - "廳", - "廷", - "建基於", - "开口", - "开始", - "式", - "弯", - "張", - "強", - "弹", - "强", - "彈", - "彎", - "当", - "当中", - "当届", - "录", - "形", - "形容", - "形成", - "影响", - "影響", - "彼此", - "往", - "径", - "待", - "很多", - "後", - "徑", - "徒", - "得", - "得宠", - "得寵", - "從", - "御", - "微", - "徽", - "心", - "必", - "必須", - "必须", - "志", - "快", - "态", - "怎么样", - "怎樣", - "怎麼", - "怕", - "性", - "怪", - "总", - "恆", - "恋", - "恒", - "您", - "想", - "愛", - "感", - "感到", - "感覺", - "感觉", - "愿意", - "態", - "憑", - "憑藉", - "懂", - "懂得", - "應", - "應當", - "應該", - "懒得", - "戀", - "戏", - "我", - "我们", - "我們", - "我自己", - "我门", - "或", - "或是", - "或者", - "战", - "截止", - "截至", - "戰", - "戲", - "戶", - "户", - "房", - "所", - "所以", - "所有", - "手", - "才是", - "打", - "执政", - "把", - "报", - "拖", - "持續", - "按", - 
"按照", - "挡", - "损失", - "据", - "排行", - "接唱", - "接触", - "接觸", - "控制", - "推进", - "推進", - "描述", - "損失", - "擋", - "據", - "支", - "教", - "敢", - "数", - "整", - "整个", - "整個", - "整场", - "整块", - "整場", - "整塊", - "整套", - "整所", - "整架", - "整片", - "整顆", - "整颗", - "數", - "文", - "斋", - "斗", - "新", - "方", - "於", - "族", - "旗", - "无论", - "既", - "既是", - "既然", - "日", - "日趋", - "日趨", - "旧", - "时", - "星", - "是", - "是否", - "是否是", - "是次", - "显", - "显得", - "時", - "晚", - "暖", - "暗", - "暨", - "曲", - "更为", - "更是", - "更為", - "更趋", - "更趨", - "書", - "替", - "會", - "會不會", - "月", - "有", - "有些", - "有关", - "有的", - "有關", - "服", - "朝", - "期", - "期間", - "期间", - "未能", - "末", - "本", - "本人", - "本地", - "本屆", - "本届", - "本班", - "本身", - "术", - "机", - "权", - "杆", - "材", - "村", - "束", - "来", - "杯", - "板", - "林", - "枪", - "架", - "某", - "某个", - "某些", - "某個", - "某种", - "某種", - "染色", - "柜", - "树", - "校", - "株", - "核", - "根据", - "根據", - "格", - "案", - "档", - "桥", - "桨", - "桿", - "梁", - "梁耀忠", - "梦", - "棍", - "棒", - "棚", - "椭", - "業", - "楼", - "榜", - "槍", - "槳", - "樂", - "樂意", - "樓", - "樹", - "橋", - "橙", - "機", - "橢", - "檔", - "櫃", - "權", - "次", - "欲", - "款", - "歌", - "正", - "正如", - "正是", - "此", - "此套", - "此次", - "此种", - "此種", - "此等", - "此类", - "此項", - "此類", - "此项", - "歷", - "歷屆", - "死", - "段", - "殿", - "母", - "毎年", - "每", - "每个", - "每位", - "每個", - "每元", - "每升", - "每卡", - "每周", - "每天", - "每幅", - "每年", - "每座", - "每当", - "每戶", - "每户", - "每所", - "每日", - "每枚", - "每次", - "每段", - "每片", - "每秒", - "每組", - "每组", - "每边", - "每週", - "每邊", - "每間", - "每间", - "每队", - "每隊", - "每集", - "每首", - "毒", - "比", - "比如說", - "比起", - "氏", - "气", - "氣", - "水", - "永保", - "江", - "池", - "沒", - "沒有", - "沒能", - "沟", - "没", - "没有", - "没能", - "河", - "治军", - "治軍", - "沼", - "沿", - "沿着", - "沿著", - "況且", - "泉", - "法", - "波", - "洋", - "洞", - "洲", - "派", - "流沙", - "浅", - "浊", - "浓", - "浦", - "海", - "涉世", - "涌", - "液", - "淡", - "深", - "深感", - "混", - "淺", - "清", - "減慢", - "渡", - "港", - "湖", - "湾", - "準", - "準備", - "溝", - "溥仪", - "溥儀", - "溪", - "满", - "满洲", - "滩", - "滿", - "滿洲", - "潮", - "澡", - "澳", - "濁", - "濃", - "灘", - "灣", - "火", - "炉", - "炎", - "炮", - "点", - "為", - "為了", - "為人", - "烃", - "烟", - "热", - "烴", - "無", - "無論", - "煙", - "熟", - "熱", - "營", - "爐", - "爭取", - "爭辯", - "爱", - "爲", - "父", - "爷", - "爺", - "牆", - "片", - "版", - "牌", - "牠", - "牠們", - "物", - "犯", - "状", - "狀", - "狂", - "狗", - "狮", - "猫", - "獅", - "獎", - "獲利", - "率", - "王", - "班", - "球", - "琴", - "甚么", - "甚至", - "甚至是", - "甚麼", - "甚麽", - "生", - "用", - "由", - "由于", - "由於", - "电", - "男", - "町", - "画", - "界", - "畔", - "畫", - "當", - "當中", - "當屆", - "病", - "症", - "癌", - "癖", - "發展", - "發育", - "的", - "的話", - "的话", - "皮", - "盃", - "监管", - "盖因", - "監管", - "目", - "直到", - "直至", - "相对", - "相對", - "相比", - "省", - "看", - "看似", - "看得", - "眼", - "眾", - "眾多", - "着", - "督", - "瞭", - "短", - "石", - "矿", - "码", - "砲", - "硅", - "碑", - "碱", - "碼", - "礁", - "礦", - "礼", - "社", - "祂", - "神", - "祠", - "禮", - "离", - "离开", - "秀", - "私交", - "秋", - "种", - "科", - "秤", - "稅", - "税", - "種", - "突感", - "窑", - "窟", - "窯", - "站", - "端", - "競選", - "符", - "笨", - "等", - "管", - "管理", - "箱", - "節", - "篇", - "籍", - "米", - "类", - "粉", - "精", - "糖", - "系", - "紀", - "紅", - "紋", - "純", - "紙", - "級", - "素", - "組", - "結", - "給", - "綉", - "經", - "經由", - "經過", - "綜", - "綫", - "綱", - "網", - "線", - "緣", - "縣", - "縱使", - "總", - "繞", - "繼", - "红", - "级", - "纪", - "纯", - "纲", - "纵使", - "纸", - "纹", - "线", - "组", - "经", - "经由", - "经过", - "结", - "绕", - "给", - "绣", - "继", - "综", - "网", - "罩", - "罪", - "署", - "羊", - "美", - "群", - "翁", - "老", - "者", - "而", - "而且", - "而已", - "而是", - 
"而非", - "聖", - "肉", - "肯", - "肺", - "胎", - "胚", - "胶", - "能", - "能否", - "能够", - "能夠", - "脚", - "脸", - "腔", - "腳", - "腿", - "膜", - "膠", - "臉", - "臨", - "自", - "自从", - "自家", - "自己", - "自從", - "自我", - "自身", - "至", - "至于", - "至於", - "臺", - "與", - "與其", - "舊", - "舞", - "舟", - "舰", - "舱", - "船", - "艇", - "艙", - "艦", - "色", - "节", - "花", - "若", - "若是", - "茶", - "药", - "莊", - "获利", - "菌", - "菜", - "营", - "葉", - "著", - "蓋因", - "蓝", - "藉", - "藉助", - "藉由", - "藉著", - "藍", - "藤", - "藥", - "藩", - "處", - "號", - "虽", - "虽则", - "虽然", - "蛙", - "行", - "術", - "街", - "衛", - "衣", - "表", - "表现", - "表現", - "表示", - "被", - "装", - "裏", - "裔", - "裙", - "裝", - "裡", - "裡面", - "裤", - "製", - "褲", - "要", - "要不要", - "要么", - "要是", - "要求", - "親", - "覺得", - "觀", - "观", - "觉得", - "角", - "計劃", - "記", - "詞", - "試圖", - "詩", - "話", - "該", - "該屆", - "該批", - "該族", - "該條", - "該段", - "該組", - "該集", - "該項", - "誌", - "認為", - "認識", - "語", - "誤信", - "說", - "誰", - "課", - "請", - "論", - "諸", - "諸如", - "謂", - "證", - "譜", - "變", - "變得", - "认为", - "认识", - "记", - "许多", - "许许多多", - "论", - "证", - "词", - "诗", - "话", - "该", - "该届", - "该批", - "该族", - "该条", - "该段", - "该组", - "该集", - "语", - "误信", - "说", - "请", - "诸", - "诸如", - "课", - "谁", - "谓", - "谱", - "谷", - "豆", - "象", - "貓", - "負債", - "費", - "資", - "賣", - "質", - "賽", - "负债", - "质", - "费", - "资", - "赛", - "起", - "起伏", - "起来", - "趁", - "超", - "趋", - "趋于", - "趨", - "趨於", - "距", - "距离", - "距離", - "跟", - "路", - "躁", - "身", - "車", - "軍", - "軒", - "軟", - "軸", - "較", - "輕", - "车", - "轩", - "软", - "轴", - "轻", - "较", - "辦", - "辦學", - "边", - "达到", - "过", - "过后", - "运作", - "近", - "还", - "还是", - "还有", - "这", - "这些", - "这儿", - "这养", - "这样", - "这次", - "这种", - "这里", - "远", - "连", - "连任", - "连同", - "迷", - "追溯", - "透过", - "透過", - "這", - "這些", - "這個", - "這兒", - "這樣", - "這樣子", - "這次", - "這種", - "這裏", - "這裡", - "這邊", - "這麼", - "通", - "通过", - "通過", - "逢", - "連", - "連任", - "連同", - "週", - "運作", - "過", - "過後", - "道", - "達到", - "遠", - "選舉", - "還是", - "邊", - "那", - "那个", - "那些", - "那儿", - "那兒", - "那样", - "那樣", - "那裏", - "那裡", - "那邊", - "那里", - "邦", - "邨", - "郎", - "郡", - "部", - "都", - "都是", - "鄉", - "配", - "酒", - "酸", - "醣", - "醫", - "里", - "里面", - "重", - "量", - "金", - "針", - "針對", - "銘", - "鋼", - "錄", - "錦", - "鍋", - "鍵", - "鎊", - "鎮", - "鏈", - "鏡", - "鐵", - "鑒於", - "针", - "针对", - "钢", - "铁", - "铭", - "链", - "锅", - "锦", - "键", - "镇", - "镜", - "長", - "长", - "門", - "開口", - "開始", - "間", - "閣", - "閣下", - "關", - "關心", - "關於", - "门", - "间", - "阁", - "队", - "阶", - "际", - "陆", - "降解", - "院", - "除", - "除了", - "除外", - "除非", - "陵", - "陸", - "隊", - "階", - "随", - "随同", - "隔", - "際", - "隨", - "隨同", - "难过", - "集", - "雖", - "雖則", - "雖然", - "離", - "離開", - "難過", - "電", - "需", - "需要", - "非", - "靠", - "面", - "音", - "頂", - "須", - "頭", - "頭個", - "題", - "額", - "願意", - "類", - "顯", - "顯得", - "顶", - "须", - "题", - "额", - "風", - "风", - "飯", - "餅", - "餐", - "館", - "饃", - "首先", - "點", - ], -} diff --git a/spaces/ICML2022/OFA/fairseq/examples/m2m_100/tokenizers/seg_ja.sh b/spaces/ICML2022/OFA/fairseq/examples/m2m_100/tokenizers/seg_ja.sh deleted file mode 100644 index be6f5ca5fe4ac8e8c786a439caaed1d1314f1aef..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/m2m_100/tokenizers/seg_ja.sh +++ /dev/null @@ -1,11 +0,0 @@ -#!/usr/bin/env bash -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-SCRIPT=`realpath $0` -KYTEA=`dirname $SCRIPT`/thirdparty/kytea -export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$KYTEA/lib:/usr/local/lib -export PATH=$PATH:"$KYTEA/bin" - -cat - | tr -d "[:blank:]" | kytea -notags diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/model_parallel/modules/multihead_attention.py b/spaces/ICML2022/OFA/fairseq/fairseq/model_parallel/modules/multihead_attention.py deleted file mode 100644 index 8eb9d09dad37ab132295166d691873beec63eaf1..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/model_parallel/modules/multihead_attention.py +++ /dev/null @@ -1,349 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Dict, Optional, Tuple - -import torch -import torch.nn.functional as F -from fairseq import utils -from fairseq.incremental_decoding_utils import with_incremental_state -from fairseq.modules.fairseq_dropout import FairseqDropout -from torch import Tensor, nn - - -try: - from fairseq.model_parallel.megatron.mpu import ( - get_cuda_rng_tracker, - get_model_parallel_world_size, - ColumnParallelLinear, - RowParallelLinear, - ) - - has_megatron_submodule = True -except (ImportError, ModuleNotFoundError): - has_megatron_submodule = False - - -@with_incremental_state -class ModelParallelMultiheadAttention(nn.Module): - """Model parallel Multi-headed attention. - This performs the Multi-headed attention over multiple gpus. - - See "Megatron-LM: https://arxiv.org/pdf/1909.08053.pdf" for more details. - """ - - def __init__( - self, - embed_dim, - num_heads, - kdim=None, - vdim=None, - dropout=0.0, - bias=True, - self_attention=False, - encoder_decoder_attention=False, - ): - super().__init__() - if not has_megatron_submodule: - raise ImportError( - "\n\nPlease install the megatron submodule:" - "\n\n git submodule update --init " - "fairseq/model_parallel/megatron" - ) - self.embed_dim = embed_dim - self.kdim = kdim if kdim is not None else embed_dim - self.vdim = vdim if vdim is not None else embed_dim - self.qkv_same_dim = self.kdim == embed_dim and self.vdim == embed_dim - - self.model_parallel_size = get_model_parallel_world_size() - - self.num_heads_partition = num_heads // self.model_parallel_size - assert ( - self.num_heads_partition * self.model_parallel_size == num_heads - ), "Number of heads must be divisible by model parallel size" - - self.dropout_module = FairseqDropout( - dropout, module_name=self.__class__.__name__ - ) - self.head_dim = embed_dim // num_heads - assert ( - self.head_dim * num_heads == self.embed_dim - ), "embed_dim must be divisible by num_heads" - self.scaling = self.head_dim ** -0.5 - - self.self_attention = self_attention - self.encoder_decoder_attention = encoder_decoder_attention - - assert ( - not self.self_attention or self.qkv_same_dim - ), "Self-attention requires query, key and value to be of the same size" - - self.k_proj = ColumnParallelLinear( - self.kdim, embed_dim, bias=bias, gather_output=False - ) - self.v_proj = ColumnParallelLinear( - self.vdim, embed_dim, bias=bias, gather_output=False - ) - self.q_proj = ColumnParallelLinear( - embed_dim, embed_dim, bias=bias, gather_output=False - ) - self.out_proj = RowParallelLinear( - embed_dim, embed_dim, bias=bias, input_is_parallel=True - ) - - def forward( - self, - query, - key: Optional[Tensor], - value: Optional[Tensor], - key_padding_mask: Optional[Tensor] = None, - incremental_state: 
Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - static_kv: bool = False, - attn_mask: Optional[Tensor] = None, - **unused_kwargs, - ) -> Tuple[Tensor, Optional[Tensor]]: - """Input shape: Time x Batch x Channel - - Args: - key_padding_mask (ByteTensor, optional): mask to exclude - keys that are pads, of shape `(batch, src_len)`, where - padding elements are indicated by 1s. - attn_mask (ByteTensor, optional): typically used to - implement causal attention, where the mask prevents the - attention from looking forward in time (default: None). - """ - tgt_len, bsz, embed_dim = query.size() - assert embed_dim == self.embed_dim - assert list(query.size()) == [tgt_len, bsz, embed_dim] - - is_tpu = query.device.type == "xla" - - if incremental_state is not None: - saved_state = self._get_input_buffer(incremental_state) - if saved_state is not None and "prev_key" in saved_state: - # previous time steps are cached - no need to recompute - # key and value if they are static - if static_kv: - assert self.encoder_decoder_attention and not self.self_attention - key = value = None - else: - saved_state = None - - if self.self_attention: - q = self.q_proj(query) - k = self.k_proj(query) - v = self.v_proj(query) - elif self.encoder_decoder_attention: - # encoder-decoder attention - q = self.q_proj(query) - if key is None: - assert value is None - k = v = None - else: - k = self.k_proj(key) - v = self.v_proj(key) - - else: - assert key is not None and value is not None - q = self.q_proj(query) - k = self.k_proj(key) - v = self.v_proj(value) - q *= self.scaling - - q = ( - q.contiguous() - .view(tgt_len, bsz * self.num_heads_partition, self.head_dim) - .transpose(0, 1) - ) - if k is not None: - k = ( - k.contiguous() - .view(-1, bsz * self.num_heads_partition, self.head_dim) - .transpose(0, 1) - ) - if v is not None: - v = ( - v.contiguous() - .view(-1, bsz * self.num_heads_partition, self.head_dim) - .transpose(0, 1) - ) - - if saved_state is not None: - # saved states are stored with shape (bsz, num_heads_partition, seq_len, head_dim) - if "prev_key" in saved_state: - _prev_key = saved_state["prev_key"] - assert _prev_key is not None - prev_key = _prev_key.view( - bsz * self.num_heads_partition, -1, self.head_dim - ) - if static_kv: - k = prev_key - else: - assert k is not None - k = torch.cat([prev_key, k], dim=1) - if "prev_value" in saved_state: - _prev_value = saved_state["prev_value"] - assert _prev_value is not None - prev_value = _prev_value.view( - bsz * self.num_heads_partition, -1, self.head_dim - ) - if static_kv: - v = prev_value - else: - assert v is not None - v = torch.cat([prev_value, v], dim=1) - prev_key_padding_mask: Optional[Tensor] = None - if "prev_key_padding_mask" in saved_state: - prev_key_padding_mask = saved_state["prev_key_padding_mask"] - assert k is not None and v is not None - key_padding_mask = ( - ModelParallelMultiheadAttention._append_prev_key_padding_mask( - key_padding_mask=key_padding_mask, - prev_key_padding_mask=prev_key_padding_mask, - batch_size=bsz, - src_len=k.size(1), - static_kv=static_kv, - ) - ) - - saved_state["prev_key"] = k.view( - bsz, self.num_heads_partition, -1, self.head_dim - ) - saved_state["prev_value"] = v.view( - bsz, self.num_heads_partition, -1, self.head_dim - ) - saved_state["prev_key_padding_mask"] = key_padding_mask - # In this branch incremental_state is never None - assert incremental_state is not None - incremental_state = self._set_input_buffer(incremental_state, saved_state) - assert k is not None - src_len = k.size(1) - - 
# This is part of a workaround to get around fork/join parallelism - # not supporting Optional types. - if key_padding_mask is not None and key_padding_mask.dim() == 0: - key_padding_mask = None - - if key_padding_mask is not None: - assert key_padding_mask.size(0) == bsz - assert key_padding_mask.size(1) == src_len - - attn_weights = torch.bmm(q, k.transpose(1, 2)) - - assert list(attn_weights.size()) == [ - bsz * self.num_heads_partition, - tgt_len, - src_len, - ] - - if attn_mask is not None: - attn_mask = attn_mask.unsqueeze(0) - attn_weights += attn_mask - - if key_padding_mask is not None: - # don't attend to padding symbols - attn_weights = attn_weights.view( - bsz, self.num_heads_partition, tgt_len, src_len - ) - if not is_tpu: - attn_weights = attn_weights.masked_fill( - key_padding_mask.unsqueeze(1).unsqueeze(2).to(torch.bool), - float("-inf"), - ) - else: - attn_weights = attn_weights.transpose(0, 2) - attn_weights = attn_weights.masked_fill(key_padding_mask, float("-inf")) - attn_weights = attn_weights.transpose(0, 2) - attn_weights = attn_weights.view( - bsz * self.num_heads_partition, tgt_len, src_len - ) - - attn_weights_float = utils.softmax(attn_weights, dim=-1) - attn_weights = attn_weights_float.type_as(attn_weights) - - with get_cuda_rng_tracker().fork(): - attn_probs = self.dropout_module(attn_weights) - - assert v is not None - attn = torch.bmm(attn_probs, v) - assert list(attn.size()) == [ - bsz * self.num_heads_partition, - tgt_len, - self.head_dim, - ] - embed_dim_partition = embed_dim // self.model_parallel_size - attn = attn.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim_partition) - attn = self.out_proj(attn) - # return attn_weights None to keep the return type same as single gpu multihead attention - # This will be deprecated. 
- attn_weights: Optional[Tensor] = None - - return attn, attn_weights - - @staticmethod - def _append_prev_key_padding_mask( - key_padding_mask: Optional[Tensor], - prev_key_padding_mask: Optional[Tensor], - batch_size: int, - src_len: int, - static_kv: bool, - ) -> Optional[Tensor]: - # saved key padding masks have shape (bsz, seq_len) - if prev_key_padding_mask is not None and static_kv: - new_key_padding_mask = prev_key_padding_mask - elif prev_key_padding_mask is not None and key_padding_mask is not None: - new_key_padding_mask = torch.cat( - [prev_key_padding_mask.float(), key_padding_mask.float()], dim=1 - ) - # During incremental decoding, as the padding token enters and - # leaves the frame, there will be a time when prev or current - # is None - elif prev_key_padding_mask is not None: - - filler = torch.zeros(batch_size, src_len - prev_key_padding_mask.size(1)) - if prev_key_padding_mask.is_cuda: - filler = filler.cuda() - new_key_padding_mask = torch.cat( - [prev_key_padding_mask.float(), filler.float()], dim=1 - ) - elif key_padding_mask is not None: - filler = torch.zeros(batch_size, src_len - key_padding_mask.size(1)) - if key_padding_mask.is_cuda: - filler = filler.cuda() - new_key_padding_mask = torch.cat( - [filler.float(), key_padding_mask.float()], dim=1 - ) - else: - new_key_padding_mask = prev_key_padding_mask - return new_key_padding_mask - - def reorder_incremental_state( - self, incremental_state: Dict[str, Dict[str, Optional[Tensor]]], new_order - ): - """Reorder buffered internal state (for incremental generation).""" - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is not None: - for k in input_buffer.keys(): - if input_buffer[k] is not None: - input_buffer[k] = input_buffer[k].index_select(0, new_order) - incremental_state = self._set_input_buffer(incremental_state, input_buffer) - return incremental_state - - def _get_input_buffer( - self, incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] - ) -> Dict[str, Optional[Tensor]]: - result = self.get_incremental_state(incremental_state, "attn_state") - if result is not None: - return result - else: - empty_result: Dict[str, Optional[Tensor]] = {} - return empty_result - - def _set_input_buffer( - self, - incremental_state: Dict[str, Dict[str, Optional[Tensor]]], - buffer: Dict[str, Optional[Tensor]], - ): - return self.set_incremental_state(incremental_state, "attn_state", buffer) diff --git a/spaces/ICML2022/resefa/utils/misc.py b/spaces/ICML2022/resefa/utils/misc.py deleted file mode 100644 index 36198dff3a4b3e1f7b5e6a21a17418d0e04eb6f3..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/resefa/utils/misc.py +++ /dev/null @@ -1,227 +0,0 @@ -# python3.7 -"""Misc utility functions.""" - -import os -import hashlib - -from torch.hub import download_url_to_file - -__all__ = [ - 'REPO_NAME', 'Infix', 'print_and_execute', 'check_file_ext', - 'IMAGE_EXTENSIONS', 'VIDEO_EXTENSIONS', 'MEDIA_EXTENSIONS', - 'parse_file_format', 'set_cache_dir', 'get_cache_dir', 'download_url' -] - -REPO_NAME = 'Hammer' # Name of the repository (project). - - -class Infix(object): - """Helper class to create custom infix operators. - - When using it, make sure to put the operator between `<<` and `>>`. - `<< INFIX_OP_NAME >>` should be considered as a whole operator. - - Examples: - - # Use `Infix` to create infix operators directly. - add = Infix(lambda a, b: a + b) - 1 << add >> 2 # gives 3 - 1 << add >> 2 << add >> 3 # gives 6 - - # Use `Infix` as a decorator. 
- @Infix - def mul(a, b): - return a * b - 2 << mul >> 4 # gives 8 - 2 << mul >> 3 << mul >> 7 # gives 42 - """ - - def __init__(self, function): - self.function = function - self.left_value = None - - def __rlshift__(self, left_value): # override `<<` before `Infix` instance - assert self.left_value is None # make sure left is only called once - self.left_value = left_value - return self - - def __rshift__(self, right_value): # override `>>` after `Infix` instance - result = self.function(self.left_value, right_value) - self.left_value = None # reset to None - return result - - -def print_and_execute(cmd): - """Prints and executes a system command. - - Args: - cmd: Command to be executed. - """ - print(cmd) - os.system(cmd) - - -def check_file_ext(filename, *ext_list): - """Checks whether the given filename is with target extension(s). - - NOTE: If `ext_list` is empty, this function will always return `False`. - - Args: - filename: Filename to check. - *ext_list: A list of extensions. - - Returns: - `True` if the filename is with one of extensions in `ext_list`, - otherwise `False`. - """ - if len(ext_list) == 0: - return False - ext_list = [ext if ext.startswith('.') else '.' + ext for ext in ext_list] - ext_list = [ext.lower() for ext in ext_list] - basename = os.path.basename(filename) - ext = os.path.splitext(basename)[1].lower() - return ext in ext_list - - -# File extensions regarding images (not including GIFs). -IMAGE_EXTENSIONS = ( - '.bmp', '.ppm', '.pgm', '.jpeg', '.jpg', '.jpe', '.jp2', '.png', '.webp', - '.tiff', '.tif' -) -# File extensions regarding videos. -VIDEO_EXTENSIONS = ( - '.avi', '.mkv', '.mp4', '.m4v', '.mov', '.webm', '.flv', '.rmvb', '.rm', - '.3gp' -) -# File extensions regarding media, i.e., images, videos, GIFs. -MEDIA_EXTENSIONS = ('.gif', *IMAGE_EXTENSIONS, *VIDEO_EXTENSIONS) - - -def parse_file_format(path): - """Parses the file format of a given path. - - This function basically parses the file format according to its extension. - It will also return `dir` is the given path is a directory. - - Parable file formats: - - - zip: with `.zip` extension. - - tar: with `.tar` / `.tgz` / `.tar.gz` extension. - - lmdb: a folder ending with `lmdb`. - - txt: with `.txt` / `.text` extension, OR without extension (e.g. LICENSE). - - json: with `.json` extension. - - jpg: with `.jpeg` / `jpg` / `jpe` extension. - - png: with `.png` extension. - - Args: - path: The path to the file to parse format from. - - Returns: - A lower-case string, indicating the file format, or `None` if the format - cannot be successfully parsed. - """ - # Handle directory. - if os.path.isdir(path) or path.endswith('/'): - if path.rstrip('/').lower().endswith('lmdb'): - return 'lmdb' - return 'dir' - # Handle file. - if os.path.isfile(path) and os.path.splitext(path)[1] == '': - return 'txt' - path = path.lower() - if path.endswith('.tar.gz'): # Cannot parse accurate extension. - return 'tar' - ext = os.path.splitext(path)[1] - if ext == '.zip': - return 'zip' - if ext in ['.tar', '.tgz']: - return 'tar' - if ext in ['.txt', '.text']: - return 'txt' - if ext == '.json': - return 'json' - if ext in ['.jpeg', '.jpg', '.jpe']: - return 'jpg' - if ext == '.png': - return 'png' - # Unparsable. - return None - - -_cache_dir = None - - -def set_cache_dir(directory=None): - """Sets the global cache directory. - - The cache directory can be used to save some files that will be shared - across jobs. The default cache directory is set as `~/.cache/${REPO_NAME}/`. 
- This function can be used to redirect the cache directory. Or, users can use - `None` to reset the cache directory back to default. - - Args: - directory: The target directory used to cache files. If set as `None`, - the cache directory will be reset back to default. (default: None) - """ - assert directory is None or isinstance(directory, str), 'Invalid directory!' - global _cache_dir # pylint: disable=global-statement - _cache_dir = directory - - -def get_cache_dir(): - """Gets the global cache directory. - - The global cache directory is primarily set as `~/.cache/${REPO_NAME}/` by - default, and can be redirected with `set_cache_dir()`. - - Returns: - A string, representing the global cache directory. - """ - if _cache_dir is None: - home = os.path.expanduser('~') - return os.path.join(home, '.cache', REPO_NAME) - return _cache_dir - - -def download_url(url, path=None, filename=None, sha256=None): - """Downloads file from URL. - - This function downloads a file from given URL, and executes Hash check if - needed. - - Args: - url: The URL to download file from. - path: Path (directory) to save the downloaded file. If set as `None`, - the cache directory will be used. Please see `get_cache_dir()` for - more details. (default: None) - filename: The name to save the file. If set as `None`, this name will be - automatically parsed from the given URL. (default: None) - sha256: The expected sha256 of the downloaded file. If set as `None`, - the hash check will be skipped. Otherwise, this function will check - whether the sha256 of the downloaded file matches this field. - - Returns: - A two-element tuple, where the first term is the full path of the - downloaded file, and the second term indicate the hash check result. - `True` means hash check passes, `False` means hash check fails, - while `None` means no hash check is executed. - """ - # Handle file path. - if path is None: - path = get_cache_dir() - if filename is None: - filename = os.path.basename(url) - save_path = os.path.join(path, filename) - # Download file if needed. - if not os.path.exists(save_path): - print(f'Downloading URL `{url}` to path `{save_path}` ...') - os.makedirs(path, exist_ok=True) - download_url_to_file(url, save_path, hash_prefix=None, progress=True) - # Check hash if needed. 
- check_result = None - if sha256 is not None: - with open(save_path, 'rb') as f: - file_hash = hashlib.sha256(f.read()) - check_result = (file_hash.hexdigest() == sha256) - - return save_path, check_result diff --git a/spaces/Iqbalzz/hololive-rvc-models/app.py b/spaces/Iqbalzz/hololive-rvc-models/app.py deleted file mode 100644 index 47db29e9de54b1eb0cc22120f40ff7cb984126a7..0000000000000000000000000000000000000000 --- a/spaces/Iqbalzz/hololive-rvc-models/app.py +++ /dev/null @@ -1,185 +0,0 @@ -import os -import json -import argparse -import traceback -import logging -import gradio as gr -import numpy as np -import librosa -import torch -import asyncio -import edge_tts -from datetime import datetime -from fairseq import checkpoint_utils -from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono -from vc_infer_pipeline import VC -from config import ( - is_half, - device -) -logging.getLogger("numba").setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces - -def create_vc_fn(tgt_sr, net_g, vc, if_f0, file_index, file_big_npy): - def vc_fn( - input_audio, - f0_up_key, - f0_method, - index_rate, - tts_mode, - tts_text, - tts_voice - ): - try: - if tts_mode: - if len(tts_text) > 500 and limitation: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - else: - if args.files: - audio, sr = librosa.load(input_audio, sr=16000, mono=True) - else: - if input_audio is None: - return "You need to upload an audio", None - sampling_rate, audio = input_audio - duration = audio.shape[0] / sampling_rate - if duration > 300 and limitation: - return "Please upload an audio file that is less than 5 minutes 30 seconds.", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - times = [0, 0, 0] - f0_up_key = int(f0_up_key) - audio_opt = vc.pipeline( - hubert_model, - net_g, - 0, - audio, - times, - f0_up_key, - f0_method, - file_index, - file_big_npy, - index_rate, - if_f0, - ) - print( - f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - ) - return "Success", (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, (None, None) - return vc_fn - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(device) - if is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def change_to_tts_mode(tts_mode): - if tts_mode: - return gr.Audio.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True) - else: - return gr.Audio.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False) - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--api', action="store_true", default=False) - parser.add_argument("--share", action="store_true", default=False, help="share gradio app") - parser.add_argument("--files", 
action="store_true", default=False, help="load audio from path") - args, unknown = parser.parse_known_args() - load_hubert() - models = [] - tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - with open("weights/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for name, info in models_info.items(): - if not info['enable']: - continue - title = info['title'] - author = info.get("author", None) - cover = f"weights/{name}/{info['cover']}" - index = f"weights/{name}/{info['feature_retrieval_library']}" - npy = f"weights/{name}/{info['feature_file']}" - cpt = torch.load(f"weights/{name}/{name}.pth", map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) # 不加这一行清不干净, 真奇葩 - net_g.eval().to(device) - if is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, device, is_half) - models.append((name, title, author, cover, create_vc_fn(tgt_sr, net_g, vc, if_f0, index, npy))) - with gr.Blocks() as app: - gr.Markdown( - "# <center> Hololive RVC Models\n" - "## <center> The input audio should be clean and pure voice without background music.\n" - "[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/aziib/Create-Google-Shared-Drive/blob/master/Hololive-RVC-Models.ipynb)\n\n" - "[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/megaaziib)\n\n" - ) - with gr.Tabs(): - for (name, title, author, cover, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '<div align="center">' - f'<div>{title}</div>\n'+ - (f'<div>Model author: {author}</div>' if author else "")+ - (f'<img style="width:auto;height:300px;" src="file/{cover}">' if cover else "")+ - '</div>' - ) - with gr.Row(): - with gr.Column(): - if args.files: - vc_input = gr.Textbox(label="Input audio path") - else: - vc_input = gr.Audio(label="Input audio"+' (less than 5 minutes 30 seconds)' if limitation else '') - vc_transpose = gr.Number(label="Transpose", value=0) - vc_f0method = gr.Radio( - label="Pitch extraction algorithm, PM is fast but Harvest is better for low frequencies", - choices=["pm", "harvest"], - value="pm", - interactive=True, - ) - vc_index_ratio = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - value=0.6, - interactive=True, - ) - tts_mode = gr.Checkbox(label="tts (use edge-tts as input)", value=False) - tts_text = gr.Textbox(visible=False,label="TTS text (600 words limitation)" if limitation else "TTS text") - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - vc_submit = gr.Button("Generate", variant="primary") - with gr.Column(): - vc_output1 = gr.Textbox(label="Output Message") - vc_output2 = gr.Audio(label="Output Audio") - vc_submit.click(vc_fn, [vc_input, vc_transpose, vc_f0method, vc_index_ratio, tts_mode, tts_text, tts_voice], [vc_output1, vc_output2]) - tts_mode.change(change_to_tts_mode, [tts_mode], [vc_input, tts_text, tts_voice]) - app.queue(concurrency_count=1, max_size=20, api_open=args.api).launch(share=args.share) \ No newline at 
end of file diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/models/unet_1d_blocks.py b/spaces/Jackflack09/diffuse-custom/diffusers/models/unet_1d_blocks.py deleted file mode 100644 index fc758ebbb044644e921c7e66089e052981a82e1e..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/models/unet_1d_blocks.py +++ /dev/null @@ -1,668 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -import math - -import torch -import torch.nn.functional as F -from torch import nn - -from .resnet import Downsample1D, ResidualTemporalBlock1D, Upsample1D, rearrange_dims - - -class DownResnetBlock1D(nn.Module): - def __init__( - self, - in_channels, - out_channels=None, - num_layers=1, - conv_shortcut=False, - temb_channels=32, - groups=32, - groups_out=None, - non_linearity=None, - time_embedding_norm="default", - output_scale_factor=1.0, - add_downsample=True, - ): - super().__init__() - self.in_channels = in_channels - out_channels = in_channels if out_channels is None else out_channels - self.out_channels = out_channels - self.use_conv_shortcut = conv_shortcut - self.time_embedding_norm = time_embedding_norm - self.add_downsample = add_downsample - self.output_scale_factor = output_scale_factor - - if groups_out is None: - groups_out = groups - - # there will always be at least one resnet - resnets = [ResidualTemporalBlock1D(in_channels, out_channels, embed_dim=temb_channels)] - - for _ in range(num_layers): - resnets.append(ResidualTemporalBlock1D(out_channels, out_channels, embed_dim=temb_channels)) - - self.resnets = nn.ModuleList(resnets) - - if non_linearity == "swish": - self.nonlinearity = lambda x: F.silu(x) - elif non_linearity == "mish": - self.nonlinearity = nn.Mish() - elif non_linearity == "silu": - self.nonlinearity = nn.SiLU() - else: - self.nonlinearity = None - - self.downsample = None - if add_downsample: - self.downsample = Downsample1D(out_channels, use_conv=True, padding=1) - - def forward(self, hidden_states, temb=None): - output_states = () - - hidden_states = self.resnets[0](hidden_states, temb) - for resnet in self.resnets[1:]: - hidden_states = resnet(hidden_states, temb) - - output_states += (hidden_states,) - - if self.nonlinearity is not None: - hidden_states = self.nonlinearity(hidden_states) - - if self.downsample is not None: - hidden_states = self.downsample(hidden_states) - - return hidden_states, output_states - - -class UpResnetBlock1D(nn.Module): - def __init__( - self, - in_channels, - out_channels=None, - num_layers=1, - temb_channels=32, - groups=32, - groups_out=None, - non_linearity=None, - time_embedding_norm="default", - output_scale_factor=1.0, - add_upsample=True, - ): - super().__init__() - self.in_channels = in_channels - out_channels = in_channels if out_channels is None else out_channels - self.out_channels = out_channels - self.time_embedding_norm = time_embedding_norm - self.add_upsample = add_upsample - self.output_scale_factor = output_scale_factor 
- - if groups_out is None: - groups_out = groups - - # there will always be at least one resnet - resnets = [ResidualTemporalBlock1D(2 * in_channels, out_channels, embed_dim=temb_channels)] - - for _ in range(num_layers): - resnets.append(ResidualTemporalBlock1D(out_channels, out_channels, embed_dim=temb_channels)) - - self.resnets = nn.ModuleList(resnets) - - if non_linearity == "swish": - self.nonlinearity = lambda x: F.silu(x) - elif non_linearity == "mish": - self.nonlinearity = nn.Mish() - elif non_linearity == "silu": - self.nonlinearity = nn.SiLU() - else: - self.nonlinearity = None - - self.upsample = None - if add_upsample: - self.upsample = Upsample1D(out_channels, use_conv_transpose=True) - - def forward(self, hidden_states, res_hidden_states_tuple=None, temb=None): - if res_hidden_states_tuple is not None: - res_hidden_states = res_hidden_states_tuple[-1] - hidden_states = torch.cat((hidden_states, res_hidden_states), dim=1) - - hidden_states = self.resnets[0](hidden_states, temb) - for resnet in self.resnets[1:]: - hidden_states = resnet(hidden_states, temb) - - if self.nonlinearity is not None: - hidden_states = self.nonlinearity(hidden_states) - - if self.upsample is not None: - hidden_states = self.upsample(hidden_states) - - return hidden_states - - -class ValueFunctionMidBlock1D(nn.Module): - def __init__(self, in_channels, out_channels, embed_dim): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.embed_dim = embed_dim - - self.res1 = ResidualTemporalBlock1D(in_channels, in_channels // 2, embed_dim=embed_dim) - self.down1 = Downsample1D(out_channels // 2, use_conv=True) - self.res2 = ResidualTemporalBlock1D(in_channels // 2, in_channels // 4, embed_dim=embed_dim) - self.down2 = Downsample1D(out_channels // 4, use_conv=True) - - def forward(self, x, temb=None): - x = self.res1(x, temb) - x = self.down1(x) - x = self.res2(x, temb) - x = self.down2(x) - return x - - -class MidResTemporalBlock1D(nn.Module): - def __init__( - self, - in_channels, - out_channels, - embed_dim, - num_layers: int = 1, - add_downsample: bool = False, - add_upsample: bool = False, - non_linearity=None, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.add_downsample = add_downsample - - # there will always be at least one resnet - resnets = [ResidualTemporalBlock1D(in_channels, out_channels, embed_dim=embed_dim)] - - for _ in range(num_layers): - resnets.append(ResidualTemporalBlock1D(out_channels, out_channels, embed_dim=embed_dim)) - - self.resnets = nn.ModuleList(resnets) - - if non_linearity == "swish": - self.nonlinearity = lambda x: F.silu(x) - elif non_linearity == "mish": - self.nonlinearity = nn.Mish() - elif non_linearity == "silu": - self.nonlinearity = nn.SiLU() - else: - self.nonlinearity = None - - self.upsample = None - if add_upsample: - self.upsample = Downsample1D(out_channels, use_conv=True) - - self.downsample = None - if add_downsample: - self.downsample = Downsample1D(out_channels, use_conv=True) - - if self.upsample and self.downsample: - raise ValueError("Block cannot downsample and upsample") - - def forward(self, hidden_states, temb): - hidden_states = self.resnets[0](hidden_states, temb) - for resnet in self.resnets[1:]: - hidden_states = resnet(hidden_states, temb) - - if self.upsample: - hidden_states = self.upsample(hidden_states) - if self.downsample: - self.downsample = self.downsample(hidden_states) - - return hidden_states - - -class OutConv1DBlock(nn.Module): - def 
__init__(self, num_groups_out, out_channels, embed_dim, act_fn): - super().__init__() - self.final_conv1d_1 = nn.Conv1d(embed_dim, embed_dim, 5, padding=2) - self.final_conv1d_gn = nn.GroupNorm(num_groups_out, embed_dim) - if act_fn == "silu": - self.final_conv1d_act = nn.SiLU() - if act_fn == "mish": - self.final_conv1d_act = nn.Mish() - self.final_conv1d_2 = nn.Conv1d(embed_dim, out_channels, 1) - - def forward(self, hidden_states, temb=None): - hidden_states = self.final_conv1d_1(hidden_states) - hidden_states = rearrange_dims(hidden_states) - hidden_states = self.final_conv1d_gn(hidden_states) - hidden_states = rearrange_dims(hidden_states) - hidden_states = self.final_conv1d_act(hidden_states) - hidden_states = self.final_conv1d_2(hidden_states) - return hidden_states - - -class OutValueFunctionBlock(nn.Module): - def __init__(self, fc_dim, embed_dim): - super().__init__() - self.final_block = nn.ModuleList( - [ - nn.Linear(fc_dim + embed_dim, fc_dim // 2), - nn.Mish(), - nn.Linear(fc_dim // 2, 1), - ] - ) - - def forward(self, hidden_states, temb): - hidden_states = hidden_states.view(hidden_states.shape[0], -1) - hidden_states = torch.cat((hidden_states, temb), dim=-1) - for layer in self.final_block: - hidden_states = layer(hidden_states) - - return hidden_states - - -_kernels = { - "linear": [1 / 8, 3 / 8, 3 / 8, 1 / 8], - "cubic": [-0.01171875, -0.03515625, 0.11328125, 0.43359375, 0.43359375, 0.11328125, -0.03515625, -0.01171875], - "lanczos3": [ - 0.003689131001010537, - 0.015056144446134567, - -0.03399861603975296, - -0.066637322306633, - 0.13550527393817902, - 0.44638532400131226, - 0.44638532400131226, - 0.13550527393817902, - -0.066637322306633, - -0.03399861603975296, - 0.015056144446134567, - 0.003689131001010537, - ], -} - - -class Downsample1d(nn.Module): - def __init__(self, kernel="linear", pad_mode="reflect"): - super().__init__() - self.pad_mode = pad_mode - kernel_1d = torch.tensor(_kernels[kernel]) - self.pad = kernel_1d.shape[0] // 2 - 1 - self.register_buffer("kernel", kernel_1d) - - def forward(self, hidden_states): - hidden_states = F.pad(hidden_states, (self.pad,) * 2, self.pad_mode) - weight = hidden_states.new_zeros([hidden_states.shape[1], hidden_states.shape[1], self.kernel.shape[0]]) - indices = torch.arange(hidden_states.shape[1], device=hidden_states.device) - weight[indices, indices] = self.kernel.to(weight) - return F.conv1d(hidden_states, weight, stride=2) - - -class Upsample1d(nn.Module): - def __init__(self, kernel="linear", pad_mode="reflect"): - super().__init__() - self.pad_mode = pad_mode - kernel_1d = torch.tensor(_kernels[kernel]) * 2 - self.pad = kernel_1d.shape[0] // 2 - 1 - self.register_buffer("kernel", kernel_1d) - - def forward(self, hidden_states, temb=None): - hidden_states = F.pad(hidden_states, ((self.pad + 1) // 2,) * 2, self.pad_mode) - weight = hidden_states.new_zeros([hidden_states.shape[1], hidden_states.shape[1], self.kernel.shape[0]]) - indices = torch.arange(hidden_states.shape[1], device=hidden_states.device) - weight[indices, indices] = self.kernel.to(weight) - return F.conv_transpose1d(hidden_states, weight, stride=2, padding=self.pad * 2 + 1) - - -class SelfAttention1d(nn.Module): - def __init__(self, in_channels, n_head=1, dropout_rate=0.0): - super().__init__() - self.channels = in_channels - self.group_norm = nn.GroupNorm(1, num_channels=in_channels) - self.num_heads = n_head - - self.query = nn.Linear(self.channels, self.channels) - self.key = nn.Linear(self.channels, self.channels) - self.value = 
nn.Linear(self.channels, self.channels) - - self.proj_attn = nn.Linear(self.channels, self.channels, 1) - - self.dropout = nn.Dropout(dropout_rate, inplace=True) - - def transpose_for_scores(self, projection: torch.Tensor) -> torch.Tensor: - new_projection_shape = projection.size()[:-1] + (self.num_heads, -1) - # move heads to 2nd position (B, T, H * D) -> (B, T, H, D) -> (B, H, T, D) - new_projection = projection.view(new_projection_shape).permute(0, 2, 1, 3) - return new_projection - - def forward(self, hidden_states): - residual = hidden_states - batch, channel_dim, seq = hidden_states.shape - - hidden_states = self.group_norm(hidden_states) - hidden_states = hidden_states.transpose(1, 2) - - query_proj = self.query(hidden_states) - key_proj = self.key(hidden_states) - value_proj = self.value(hidden_states) - - query_states = self.transpose_for_scores(query_proj) - key_states = self.transpose_for_scores(key_proj) - value_states = self.transpose_for_scores(value_proj) - - scale = 1 / math.sqrt(math.sqrt(key_states.shape[-1])) - - attention_scores = torch.matmul(query_states * scale, key_states.transpose(-1, -2) * scale) - attention_probs = torch.softmax(attention_scores, dim=-1) - - # compute attention output - hidden_states = torch.matmul(attention_probs, value_states) - - hidden_states = hidden_states.permute(0, 2, 1, 3).contiguous() - new_hidden_states_shape = hidden_states.size()[:-2] + (self.channels,) - hidden_states = hidden_states.view(new_hidden_states_shape) - - # compute next hidden_states - hidden_states = self.proj_attn(hidden_states) - hidden_states = hidden_states.transpose(1, 2) - hidden_states = self.dropout(hidden_states) - - output = hidden_states + residual - - return output - - -class ResConvBlock(nn.Module): - def __init__(self, in_channels, mid_channels, out_channels, is_last=False): - super().__init__() - self.is_last = is_last - self.has_conv_skip = in_channels != out_channels - - if self.has_conv_skip: - self.conv_skip = nn.Conv1d(in_channels, out_channels, 1, bias=False) - - self.conv_1 = nn.Conv1d(in_channels, mid_channels, 5, padding=2) - self.group_norm_1 = nn.GroupNorm(1, mid_channels) - self.gelu_1 = nn.GELU() - self.conv_2 = nn.Conv1d(mid_channels, out_channels, 5, padding=2) - - if not self.is_last: - self.group_norm_2 = nn.GroupNorm(1, out_channels) - self.gelu_2 = nn.GELU() - - def forward(self, hidden_states): - residual = self.conv_skip(hidden_states) if self.has_conv_skip else hidden_states - - hidden_states = self.conv_1(hidden_states) - hidden_states = self.group_norm_1(hidden_states) - hidden_states = self.gelu_1(hidden_states) - hidden_states = self.conv_2(hidden_states) - - if not self.is_last: - hidden_states = self.group_norm_2(hidden_states) - hidden_states = self.gelu_2(hidden_states) - - output = hidden_states + residual - return output - - -class UNetMidBlock1D(nn.Module): - def __init__(self, mid_channels, in_channels, out_channels=None): - super().__init__() - - out_channels = in_channels if out_channels is None else out_channels - - # there is always at least one resnet - self.down = Downsample1d("cubic") - resnets = [ - ResConvBlock(in_channels, mid_channels, mid_channels), - ResConvBlock(mid_channels, mid_channels, mid_channels), - ResConvBlock(mid_channels, mid_channels, mid_channels), - ResConvBlock(mid_channels, mid_channels, mid_channels), - ResConvBlock(mid_channels, mid_channels, mid_channels), - ResConvBlock(mid_channels, mid_channels, out_channels), - ] - attentions = [ - SelfAttention1d(mid_channels, mid_channels // 32), - 
SelfAttention1d(mid_channels, mid_channels // 32), - SelfAttention1d(mid_channels, mid_channels // 32), - SelfAttention1d(mid_channels, mid_channels // 32), - SelfAttention1d(mid_channels, mid_channels // 32), - SelfAttention1d(out_channels, out_channels // 32), - ] - self.up = Upsample1d(kernel="cubic") - - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - def forward(self, hidden_states, temb=None): - hidden_states = self.down(hidden_states) - for attn, resnet in zip(self.attentions, self.resnets): - hidden_states = resnet(hidden_states) - hidden_states = attn(hidden_states) - - hidden_states = self.up(hidden_states) - - return hidden_states - - -class AttnDownBlock1D(nn.Module): - def __init__(self, out_channels, in_channels, mid_channels=None): - super().__init__() - mid_channels = out_channels if mid_channels is None else mid_channels - - self.down = Downsample1d("cubic") - resnets = [ - ResConvBlock(in_channels, mid_channels, mid_channels), - ResConvBlock(mid_channels, mid_channels, mid_channels), - ResConvBlock(mid_channels, mid_channels, out_channels), - ] - attentions = [ - SelfAttention1d(mid_channels, mid_channels // 32), - SelfAttention1d(mid_channels, mid_channels // 32), - SelfAttention1d(out_channels, out_channels // 32), - ] - - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - def forward(self, hidden_states, temb=None): - hidden_states = self.down(hidden_states) - - for resnet, attn in zip(self.resnets, self.attentions): - hidden_states = resnet(hidden_states) - hidden_states = attn(hidden_states) - - return hidden_states, (hidden_states,) - - -class DownBlock1D(nn.Module): - def __init__(self, out_channels, in_channels, mid_channels=None): - super().__init__() - mid_channels = out_channels if mid_channels is None else mid_channels - - self.down = Downsample1d("cubic") - resnets = [ - ResConvBlock(in_channels, mid_channels, mid_channels), - ResConvBlock(mid_channels, mid_channels, mid_channels), - ResConvBlock(mid_channels, mid_channels, out_channels), - ] - - self.resnets = nn.ModuleList(resnets) - - def forward(self, hidden_states, temb=None): - hidden_states = self.down(hidden_states) - - for resnet in self.resnets: - hidden_states = resnet(hidden_states) - - return hidden_states, (hidden_states,) - - -class DownBlock1DNoSkip(nn.Module): - def __init__(self, out_channels, in_channels, mid_channels=None): - super().__init__() - mid_channels = out_channels if mid_channels is None else mid_channels - - resnets = [ - ResConvBlock(in_channels, mid_channels, mid_channels), - ResConvBlock(mid_channels, mid_channels, mid_channels), - ResConvBlock(mid_channels, mid_channels, out_channels), - ] - - self.resnets = nn.ModuleList(resnets) - - def forward(self, hidden_states, temb=None): - hidden_states = torch.cat([hidden_states, temb], dim=1) - for resnet in self.resnets: - hidden_states = resnet(hidden_states) - - return hidden_states, (hidden_states,) - - -class AttnUpBlock1D(nn.Module): - def __init__(self, in_channels, out_channels, mid_channels=None): - super().__init__() - mid_channels = out_channels if mid_channels is None else mid_channels - - resnets = [ - ResConvBlock(2 * in_channels, mid_channels, mid_channels), - ResConvBlock(mid_channels, mid_channels, mid_channels), - ResConvBlock(mid_channels, mid_channels, out_channels), - ] - attentions = [ - SelfAttention1d(mid_channels, mid_channels // 32), - SelfAttention1d(mid_channels, mid_channels // 32), - SelfAttention1d(out_channels, out_channels 
// 32), - ] - - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - self.up = Upsample1d(kernel="cubic") - - def forward(self, hidden_states, res_hidden_states_tuple, temb=None): - res_hidden_states = res_hidden_states_tuple[-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - - for resnet, attn in zip(self.resnets, self.attentions): - hidden_states = resnet(hidden_states) - hidden_states = attn(hidden_states) - - hidden_states = self.up(hidden_states) - - return hidden_states - - -class UpBlock1D(nn.Module): - def __init__(self, in_channels, out_channels, mid_channels=None): - super().__init__() - mid_channels = in_channels if mid_channels is None else mid_channels - - resnets = [ - ResConvBlock(2 * in_channels, mid_channels, mid_channels), - ResConvBlock(mid_channels, mid_channels, mid_channels), - ResConvBlock(mid_channels, mid_channels, out_channels), - ] - - self.resnets = nn.ModuleList(resnets) - self.up = Upsample1d(kernel="cubic") - - def forward(self, hidden_states, res_hidden_states_tuple, temb=None): - res_hidden_states = res_hidden_states_tuple[-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - - for resnet in self.resnets: - hidden_states = resnet(hidden_states) - - hidden_states = self.up(hidden_states) - - return hidden_states - - -class UpBlock1DNoSkip(nn.Module): - def __init__(self, in_channels, out_channels, mid_channels=None): - super().__init__() - mid_channels = in_channels if mid_channels is None else mid_channels - - resnets = [ - ResConvBlock(2 * in_channels, mid_channels, mid_channels), - ResConvBlock(mid_channels, mid_channels, mid_channels), - ResConvBlock(mid_channels, mid_channels, out_channels, is_last=True), - ] - - self.resnets = nn.ModuleList(resnets) - - def forward(self, hidden_states, res_hidden_states_tuple, temb=None): - res_hidden_states = res_hidden_states_tuple[-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - - for resnet in self.resnets: - hidden_states = resnet(hidden_states) - - return hidden_states - - -def get_down_block(down_block_type, num_layers, in_channels, out_channels, temb_channels, add_downsample): - if down_block_type == "DownResnetBlock1D": - return DownResnetBlock1D( - in_channels=in_channels, - num_layers=num_layers, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - ) - elif down_block_type == "DownBlock1D": - return DownBlock1D(out_channels=out_channels, in_channels=in_channels) - elif down_block_type == "AttnDownBlock1D": - return AttnDownBlock1D(out_channels=out_channels, in_channels=in_channels) - elif down_block_type == "DownBlock1DNoSkip": - return DownBlock1DNoSkip(out_channels=out_channels, in_channels=in_channels) - raise ValueError(f"{down_block_type} does not exist.") - - -def get_up_block(up_block_type, num_layers, in_channels, out_channels, temb_channels, add_upsample): - if up_block_type == "UpResnetBlock1D": - return UpResnetBlock1D( - in_channels=in_channels, - num_layers=num_layers, - out_channels=out_channels, - temb_channels=temb_channels, - add_upsample=add_upsample, - ) - elif up_block_type == "UpBlock1D": - return UpBlock1D(in_channels=in_channels, out_channels=out_channels) - elif up_block_type == "AttnUpBlock1D": - return AttnUpBlock1D(in_channels=in_channels, out_channels=out_channels) - elif up_block_type == "UpBlock1DNoSkip": - return UpBlock1DNoSkip(in_channels=in_channels, out_channels=out_channels) - raise ValueError(f"{up_block_type} does not 
exist.") - - -def get_mid_block(mid_block_type, num_layers, in_channels, mid_channels, out_channels, embed_dim, add_downsample): - if mid_block_type == "MidResTemporalBlock1D": - return MidResTemporalBlock1D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - embed_dim=embed_dim, - add_downsample=add_downsample, - ) - elif mid_block_type == "ValueFunctionMidBlock1D": - return ValueFunctionMidBlock1D(in_channels=in_channels, out_channels=out_channels, embed_dim=embed_dim) - elif mid_block_type == "UNetMidBlock1D": - return UNetMidBlock1D(in_channels=in_channels, mid_channels=mid_channels, out_channels=out_channels) - raise ValueError(f"{mid_block_type} does not exist.") - - -def get_out_block(*, out_block_type, num_groups_out, embed_dim, out_channels, act_fn, fc_dim): - if out_block_type == "OutConv1DBlock": - return OutConv1DBlock(num_groups_out, out_channels, embed_dim, act_fn) - elif out_block_type == "ValueFunction": - return OutValueFunctionBlock(fc_dim, embed_dim) - return None diff --git a/spaces/Jaehan/Text2Text-Text-Summarization/app.py b/spaces/Jaehan/Text2Text-Text-Summarization/app.py deleted file mode 100644 index 4610cd29df9a9c32776c8fe07870be36ca2e8e2b..0000000000000000000000000000000000000000 --- a/spaces/Jaehan/Text2Text-Text-Summarization/app.py +++ /dev/null @@ -1,24 +0,0 @@ -from transformers import AutoTokenizer, AutoModelWithLMHead -import gradio as gr - -model_name = "deep-learning-analytics/wikihow-t5-small" -text2text_token = AutoTokenizer.from_pretrained(model_name) -model = AutoModelWithLMHead.from_pretrained(model_name) - -def text2text_summary(para): - initial_text = para.strip().replace("\n","") - token_text = text2text_token.encode(initial_text, return_tensors="pt") - - token_ids = model.generate( - token_text, - max_length=250, - num_beams=5, - repetition_penalty=2.5, - early_stopping=True ) - response = text2text_token.decode(token_ids[0], skip_special_tokens=True) - return response - -# UX -in_para = gr.Textbox(lines=10, label="Input paragraph", placeholder="Place your paragraph to summarize here...") -out = gr.Textbox(lines=1, label="Summary") -gr.Interface(text2text_summary, inputs=in_para, outputs=out).launch() \ No newline at end of file diff --git a/spaces/Junity/TokaiTeio-SVC/models.py b/spaces/Junity/TokaiTeio-SVC/models.py deleted file mode 100644 index 13278d680493970f5a670cf3fc955a6e9b7ab1d5..0000000000000000000000000000000000000000 --- a/spaces/Junity/TokaiTeio-SVC/models.py +++ /dev/null @@ -1,420 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import modules.attentions as attentions -import modules.commons as commons -import modules.modules as modules - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm - -import utils -from modules.commons import init_weights, get_padding -from vdecoder.hifigan.models import Generator -from utils import f0_to_coarse - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - 
self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class Encoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - # print(x.shape,x_lengths.shape) - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - out_channels, - hidden_channels, - kernel_size, - n_layers, - gin_channels=0, - filter_channels=None, - n_heads=None, - p_dropout=None): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.gin_channels = gin_channels - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - self.f0_emb = nn.Embedding(256, hidden_channels) - - self.enc_ = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - - def forward(self, x, x_mask, f0=None, noice_scale=1): - x = x + self.f0_emb(f0).transpose(1,2) - x = self.enc_(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs) * noice_scale) * x_mask - - return z, m, logs, x_mask - - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + 
n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class SpeakerEncoder(torch.nn.Module): - def __init__(self, mel_n_channels=80, model_num_layers=3, model_hidden_size=256, model_embedding_size=256): - super(SpeakerEncoder, self).__init__() - self.lstm = nn.LSTM(mel_n_channels, model_hidden_size, model_num_layers, batch_first=True) - self.linear = nn.Linear(model_hidden_size, model_embedding_size) - self.relu = nn.ReLU() - - def forward(self, mels): - self.lstm.flatten_parameters() - _, (hidden, _) = self.lstm(mels) - embeds_raw = self.relu(self.linear(hidden[-1])) - return embeds_raw / torch.norm(embeds_raw, dim=1, keepdim=True) - - def compute_partial_slices(self, total_frames, partial_frames, partial_hop): - mel_slices = [] - for i in range(0, total_frames-partial_frames, partial_hop): - mel_range = torch.arange(i, i+partial_frames) - mel_slices.append(mel_range) - - return mel_slices - - def embed_utterance(self, mel, partial_frames=128, partial_hop=64): - mel_len = mel.size(1) - last_mel = mel[:,-partial_frames:] - - if mel_len > partial_frames: - mel_slices = self.compute_partial_slices(mel_len, partial_frames, partial_hop) - mels = list(mel[:,s] for s in mel_slices) - mels.append(last_mel) - mels = torch.stack(tuple(mels), 0).squeeze(1) - - with torch.no_grad(): - partial_embeds = self(mels) - embed = torch.mean(partial_embeds, axis=0).unsqueeze(0) - #embed = embed / torch.linalg.norm(embed, 2) - else: - with torch.no_grad(): - embed = self(last_mel) - - return embed - -class F0Decoder(nn.Module): - def __init__(self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - spk_channels=0): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = 
filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.spk_channels = spk_channels - - self.prenet = nn.Conv1d(hidden_channels, hidden_channels, 3, padding=1) - self.decoder = attentions.FFT( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.f0_prenet = nn.Conv1d(1, hidden_channels , 3, padding=1) - self.cond = nn.Conv1d(spk_channels, hidden_channels, 1) - - def forward(self, x, norm_f0, x_mask, spk_emb=None): - x = torch.detach(x) - if (spk_emb is not None): - x = x + self.cond(spk_emb) - x += self.f0_prenet(norm_f0) - x = self.prenet(x) * x_mask - x = self.decoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - ssl_dim, - n_speakers, - sampling_rate=44100, - **kwargs): - - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - self.ssl_dim = ssl_dim - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - self.pre = nn.Conv1d(ssl_dim, hidden_channels, kernel_size=5, padding=2) - - self.enc_p = TextEncoder( - inter_channels, - hidden_channels, - filter_channels=filter_channels, - n_heads=n_heads, - n_layers=n_layers, - kernel_size=kernel_size, - p_dropout=p_dropout - ) - hps = { - "sampling_rate": sampling_rate, - "inter_channels": inter_channels, - "resblock": resblock, - "resblock_kernel_sizes": resblock_kernel_sizes, - "resblock_dilation_sizes": resblock_dilation_sizes, - "upsample_rates": upsample_rates, - "upsample_initial_channel": upsample_initial_channel, - "upsample_kernel_sizes": upsample_kernel_sizes, - "gin_channels": gin_channels, - } - self.dec = Generator(h=hps) - self.enc_q = Encoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - self.f0_decoder = F0Decoder( - 1, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - spk_channels=gin_channels - ) - self.emb_uv = nn.Embedding(2, hidden_channels) - - def forward(self, c, f0, uv, spec, g=None, c_lengths=None, spec_lengths=None): - g = self.emb_g(g).transpose(1,2) - # ssl prenet - x_mask = torch.unsqueeze(commons.sequence_mask(c_lengths, c.size(2)), 1).to(c.dtype) - x = self.pre(c) * x_mask + self.emb_uv(uv.long()).transpose(1,2) - - # f0 predict - lf0 = 2595. * torch.log10(1. + f0.unsqueeze(1) / 700.) 
/ 500 - norm_lf0 = utils.normalize_f0(lf0, x_mask, uv) - pred_lf0 = self.f0_decoder(x, norm_lf0, x_mask, spk_emb=g) - - # encoder - z_ptemp, m_p, logs_p, _ = self.enc_p(x, x_mask, f0=f0_to_coarse(f0)) - z, m_q, logs_q, spec_mask = self.enc_q(spec, spec_lengths, g=g) - - # flow - z_p = self.flow(z, spec_mask, g=g) - z_slice, pitch_slice, ids_slice = commons.rand_slice_segments_with_pitch(z, f0, spec_lengths, self.segment_size) - - # nsf decoder - o = self.dec(z_slice, g=g, f0=pitch_slice) - - return o, ids_slice, spec_mask, (z, z_p, m_p, logs_p, m_q, logs_q), pred_lf0, norm_lf0, lf0 - - def infer(self, c, f0, uv, g=None, noice_scale=0.35, predict_f0=False): - c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device) - g = self.emb_g(g).transpose(1,2) - x_mask = torch.unsqueeze(commons.sequence_mask(c_lengths, c.size(2)), 1).to(c.dtype) - x = self.pre(c) * x_mask + self.emb_uv(uv.long()).transpose(1,2) - - if predict_f0: - lf0 = 2595. * torch.log10(1. + f0.unsqueeze(1) / 700.) / 500 - norm_lf0 = utils.normalize_f0(lf0, x_mask, uv, random_scale=False) - pred_lf0 = self.f0_decoder(x, norm_lf0, x_mask, spk_emb=g) - f0 = (700 * (torch.pow(10, pred_lf0 * 500 / 2595) - 1)).squeeze(1) - - z_p, m_p, logs_p, c_mask = self.enc_p(x, x_mask, f0=f0_to_coarse(f0), noice_scale=noice_scale) - z = self.flow(z_p, c_mask, g=g, reverse=True) - o = self.dec(z * c_mask, g=g, f0=f0) - return o diff --git a/spaces/KarmaCST/English-To-Dzongkha-Translation-NLLB-Fine-tuning/app.py b/spaces/KarmaCST/English-To-Dzongkha-Translation-NLLB-Fine-tuning/app.py deleted file mode 100644 index 3fcd62c2040112c44aa13c41ca618958b3c513e4..0000000000000000000000000000000000000000 --- a/spaces/KarmaCST/English-To-Dzongkha-Translation-NLLB-Fine-tuning/app.py +++ /dev/null @@ -1,31 +0,0 @@ -import gradio as gr -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline -import torch - -model = AutoModelForSeq2SeqLM.from_pretrained("KarmaCST/nllb-200-distilled-600M-en-to-dz") -tokenizer = AutoTokenizer.from_pretrained("KarmaCST/nllb-200-distilled-600M-en-to-dz") - - -src_lang = 'eng_Latn' -tgt_lang = "dzo_Tibt" - - -def translate(text): - translation_pipeline = pipeline("translation", - model=model, - tokenizer=tokenizer, - src_lang=src_lang, - tgt_lang=tgt_lang) - - result = translation_pipeline(text) - return result[0]['translation_text'] - - - -gr.Interface( - translate, - [ - gr.components.Textbox(label="Input Sentence", placeholder = " Enter English sentence here ...") - ], - ["text"], -).launch() \ No newline at end of file diff --git a/spaces/KaygNas/cut-it/mocks/handlers/all.handlers.ts b/spaces/KaygNas/cut-it/mocks/handlers/all.handlers.ts deleted file mode 100644 index 86a4e895b4188284a99ec4c97330b274c152638d..0000000000000000000000000000000000000000 --- a/spaces/KaygNas/cut-it/mocks/handlers/all.handlers.ts +++ /dev/null @@ -1,20 +0,0 @@ -import { rest } from 'msw' -import { API_BASE_URL } from '../../src/constants' -import detectionResult from './detection-result.json' -import classificationResult from './classification-result.json' -import MOCK_IMAGE_URL from './mock_image.jpg' - -const handlers = [ - rest.post(`${API_BASE_URL}detect-image`, (req, res, ctx) => { - return res(ctx.status(200), ctx.json(detectionResult)) - }), - rest.post(`${API_BASE_URL}classify-image`, (req, res, ctx) => { - return res(ctx.status(200), ctx.json(classificationResult)) - }), - rest.post(`${API_BASE_URL}text-to-image`, async (req, res, ctx) => { - const image = await ctx.fetch(MOCK_IMAGE_URL) - return res(ctx.status(200), 
ctx.body(await image.arrayBuffer())) - }), -] - -export default handlers diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/__init__.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/KoboldAI/KoboldAI-Lite/style.css b/spaces/KoboldAI/KoboldAI-Lite/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/KoboldAI/KoboldAI-Lite/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/KonradSzafer/HF-QA-Demo/qa_engine/qa_engine.py b/spaces/KonradSzafer/HF-QA-Demo/qa_engine/qa_engine.py deleted file mode 100644 index 2bcd139bbfad26d2ad9d5cacceb9fc190bc161ef..0000000000000000000000000000000000000000 --- a/spaces/KonradSzafer/HF-QA-Demo/qa_engine/qa_engine.py +++ /dev/null @@ -1,290 +0,0 @@ -import os -import json -import requests -import subprocess -from typing import Mapping, Optional, Any - -import torch -import transformers -from transformers import AutoTokenizer, AutoModelForCausalLM -from huggingface_hub import snapshot_download -from urllib.parse import quote -from langchain import PromptTemplate, HuggingFaceHub, LLMChain -from langchain.llms import HuggingFacePipeline -from langchain.llms.base import LLM -from langchain.embeddings import HuggingFaceEmbeddings, HuggingFaceHubEmbeddings, HuggingFaceInstructEmbeddings -from langchain.vectorstores import FAISS -from sentence_transformers import CrossEncoder - -from qa_engine import logger -from qa_engine.response import Response -from qa_engine.mocks import MockLocalBinaryModel - - -class LocalBinaryModel(LLM): - model_id: str = None - llm: None = None - - def __init__(self, model_id: str = None): - super().__init__() - # pip install llama_cpp_python==0.1.39 - from llama_cpp import Llama - - model_path = f'qa_engine/{model_id}' - if not os.path.exists(model_path): - raise ValueError(f'{model_path} does not exist') - self.model_id = model_id - self.llm = Llama(model_path=model_path, n_ctx=4096) - - def _call(self, prompt: str, stop: Optional[list[str]] = None) -> str: - output = self.llm( - prompt, - max_tokens=1024, - stop=['Q:'], - echo=False - ) - return output['choices'][0]['text'] - - @property - def _identifying_params(self) -> Mapping[str, Any]: - return {'name_of_model': self.model_id} - - @property - def _llm_type(self) -> str: - return self.model_id - - -class TransformersPipelineModel(LLM): - model_id: str = None - pipeline: str = None - - def __init__(self, model_id: str = None): - super().__init__() - self.model_id = model_id - - tokenizer = AutoTokenizer.from_pretrained(model_id) - model = AutoModelForCausalLM.from_pretrained( - model_id, - torch_dtype=torch.bfloat16, - trust_remote_code=True, - load_in_8bit=False, - device_map='auto', - resume_download=True, - ) - self.pipeline = transformers.pipeline( - 'text-generation', - model=model, - tokenizer=tokenizer, - torch_dtype=torch.bfloat16, - device_map='auto', - 
eos_token_id=tokenizer.eos_token_id, - pad_token_id=tokenizer.eos_token_id, - min_new_tokens=64, - max_new_tokens=800, - temperature=0.5, - do_sample=True, - ) - - def _call(self, prompt: str, stop: Optional[list[str]] = None) -> str: - output_text = self.pipeline(prompt)[0]['generated_text'] - output_text = output_text.replace(prompt+'\n', '') - return output_text - - @property - def _identifying_params(self) -> Mapping[str, Any]: - return {'name_of_model': self.model_id} - - @property - def _llm_type(self) -> str: - return self.model_id - - -class APIServedModel(LLM): - model_url: str = None - debug: bool = None - - def __init__(self, model_url: str = None, debug: bool = None): - super().__init__() - if model_url[-1] == '/': - raise ValueError('URL should not end with a slash - "/"') - self.model_url = model_url - self.debug = debug - - def _call(self, prompt: str, stop: Optional[list[str]] = None) -> str: - prompt_encoded = quote(prompt, safe='') - url = f'{self.model_url}/?prompt={prompt_encoded}' - if self.debug: - logger.info(f'URL: {url}') - try: - response = requests.get(url, timeout=1200, verify=False) - response.raise_for_status() - return json.loads(response.content)['output_text'] - except Exception as err: - logger.error(f'Error: {err}') - return f'Error: {err}' - - @property - def _identifying_params(self) -> Mapping[str, Any]: - return {'name_of_model': f'model url: {self.model_url}'} - - @property - def _llm_type(self) -> str: - return 'api_model' - - - -class QAEngine(): - """ - QAEngine class, used for generating answers to questions. - - Args: - llm_model_id (str): The ID of the LLM model to be used. - embedding_model_id (str): The ID of the embedding model to be used. - index_repo_id (str): The ID of the index repository to be used. - run_locally (bool, optional): Whether to run the models locally or on the Hugging Face hub. Defaults to True. - use_docs_for_context (bool, optional): Whether to use relevant documents as context for generating answers. - Defaults to True. - use_messages_for_context (bool, optional): Whether to use previous messages as context for generating answers. - Defaults to True. - debug (bool, optional): Whether to log debug information. Defaults to False. - - Attributes: - use_docs_for_context (bool): Whether to use relevant documents as context for generating answers. - use_messages_for_context (bool): Whether to use previous messages as context for generating answers. - debug (bool): Whether to log debug information. - llm_model (Union[LocalBinaryModel, HuggingFacePipeline, HuggingFaceHub]): The LLM model to be used. - embedding_model (Union[HuggingFaceInstructEmbeddings, HuggingFaceHubEmbeddings]): The embedding model to be used. - prompt_template (PromptTemplate): The prompt template to be used. - llm_chain (LLMChain): The LLM chain to be used. - knowledge_index (FAISS): The FAISS index to be used. 
- - """ - def __init__( - self, - llm_model_id: str, - embedding_model_id: str, - index_repo_id: str, - prompt_template: str, - use_docs_for_context: bool = True, - num_relevant_docs: int = 3, - add_sources_to_response: bool = True, - use_messages_for_context: bool = True, - first_stage_docs: int = 50, - debug: bool = False - ): - super().__init__() - self.prompt_template = prompt_template - self.use_docs_for_context = use_docs_for_context - self.num_relevant_docs = num_relevant_docs - self.add_sources_to_response = add_sources_to_response - self.use_messages_for_context = use_messages_for_context - self.first_stage_docs = first_stage_docs - self.debug = debug - - if 'local_models/' in llm_model_id: - logger.info('using local binary model') - self.llm_model = LocalBinaryModel( - model_id=llm_model_id - ) - elif 'api_models/' in llm_model_id: - logger.info('using api served model') - self.llm_model = APIServedModel( - model_url=llm_model_id.replace('api_models/', ''), - debug=self.debug - ) - elif llm_model_id == 'mock': - logger.info('using mock model') - self.llm_model = MockLocalBinaryModel() - else: - logger.info('using transformers pipeline model') - self.llm_model = TransformersPipelineModel( - model_id=llm_model_id - ) - - prompt = PromptTemplate( - template=prompt_template, - input_variables=['question', 'context'] - ) - self.llm_chain = LLMChain(prompt=prompt, llm=self.llm_model) - - if self.use_docs_for_context: - logger.info(f'Downloading {index_repo_id}') - snapshot_download( - repo_id=index_repo_id, - allow_patterns=['*.faiss', '*.pkl'], - repo_type='dataset', - local_dir='indexes/run/' - ) - logger.info('Loading embedding model') - embed_instruction = 'Represent the Hugging Face library documentation' - query_instruction = 'Query the most relevant piece of information from the Hugging Face documentation' - embedding_model = HuggingFaceInstructEmbeddings( - model_name=embedding_model_id, - embed_instruction=embed_instruction, - query_instruction=query_instruction - ) - logger.info('Loading index') - self.knowledge_index = FAISS.load_local('./indexes/run/', embedding_model) - self.reranker = CrossEncoder('cross-encoder/ms-marco-MiniLM-L-12-v2') - - - def get_response(self, question: str, messages_context: str = '') -> Response: - """ - Generate an answer to the specified question. - - Args: - question (str): The question to be answered. - messages_context (str, optional): The context to be used for generating the answer. Defaults to ''. - - Returns: - response (Response): The Response object containing the generated answer and the sources of information - used to generate the response. 
- """ - - response = Response() - context = '' - relevant_docs = '' - if self.use_messages_for_context and messages_context: - messages_context = f'\nPrevious questions and answers:\n{messages_context}' - context += messages_context - if self.use_docs_for_context: - logger.info('Retriving documents') - # messages context is used for better retrival - retrival_query = messages_context + question - relevant_docs = self.knowledge_index.similarity_search( - query=retrival_query, - k=self.first_stage_docs - ) - cross_encoding_predictions = self.reranker.predict( - [(retrival_query, doc.page_content) for doc in relevant_docs] - ) - relevant_docs = [ - doc for _, doc in sorted( - zip(cross_encoding_predictions, relevant_docs), - reverse=True, key = lambda x: x[0] - ) - ] - relevant_docs = relevant_docs[:self.num_relevant_docs] - context += '\nExtracted documents:\n' - context += ''.join([doc.page_content for doc in relevant_docs]) - metadata = [doc.metadata for doc in relevant_docs] - response.set_sources(sources=[str(m['source']) for m in metadata]) - - logger.info('Running LLM chain') - answer = self.llm_chain.run(question=question, context=context) - response.set_answer(answer) - logger.info('Received answer') - - if self.debug: - logger.info('\n' + '=' * 100) - sep = '\n' + '-' * 100 - logger.info(f'question len: {len(question)} {sep}') - logger.info(f'question: {question} {sep}') - logger.info(f'answer len: {len(response.get_answer())} {sep}') - logger.info(f'answer: {response.get_answer()} {sep}') - logger.info(f'{response.get_sources_as_text()} {sep}') - logger.info(f'messages_contex: {messages_context} {sep}') - logger.info(f'relevant_docs: {relevant_docs} {sep}') - logger.info(f'context len: {len(context)} {sep}') - logger.info(f'context: {context} {sep}') - return response diff --git a/spaces/KonradSzafer/HF-QA-Demo/tests/discord_bot/client/test_utils.py b/spaces/KonradSzafer/HF-QA-Demo/tests/discord_bot/client/test_utils.py deleted file mode 100644 index effbac21e5f863d5bf17e16b45469ce2d22affa5..0000000000000000000000000000000000000000 --- a/spaces/KonradSzafer/HF-QA-Demo/tests/discord_bot/client/test_utils.py +++ /dev/null @@ -1,69 +0,0 @@ -import pytest -import os -from discord_bot.client.utils import ( \ - find_max_split_index, \ - find_max_split_index_from_sequence, \ - split_text_into_chunks -) - - -@pytest.fixture(scope='module') -def test_chunk() -> str: - return 't. , \n .' - - -@pytest.fixture(scope='module') -def test_text() -> str: - with open('tests/discord_bot/client/lorem_ipsum.txt', 'r') as f: - text = f.read() - assert text is not None, 'test text is empty' - return text - - -def test_find_max_splitting_index(test_chunk: str): - index = find_max_split_index(test_chunk, char='\n') - assert index == 6, 'index should be 6' - index = find_max_split_index(test_chunk, char='. ') - assert index == 3, 'index should be 3' - index = find_max_split_index(test_chunk, char='.') - assert index == 8, 'index should be 8' - - -def test_find_max_split_index_from_sequence(test_chunk: str): - index = find_max_split_index_from_sequence( - test_chunk, - split_characters=['\n'] - ) - assert index == 6, 'index should be 6' - index = find_max_split_index_from_sequence( - test_chunk, - split_characters=['.', ', ', '\n'] - ) - assert index == 8, 'index should be 8' - - -def test_split_text_into_chunks_with_split_characters(test_text: str): - max_chunk_size = 250 - chunks = split_text_into_chunks( - test_text, - split_characters=['. 
', ', ', '\n'], - min_size=20, - max_size=max_chunk_size - ) - for chunk in chunks: - assert len(chunk) > 0, 'Chunk length is zero' - assert len(chunk) <= max_chunk_size, 'Chunk length exceeds maximum limit' - - -def test_split_text_into_chunks_without_split_characters(): - test_text = 'a' * 1000 - max_chunk_size = 250 - chunks = split_text_into_chunks( - test_text, - split_characters=[], - min_size=20, - max_size=max_chunk_size - ) - for chunk in chunks: - assert len(chunk) == max_chunk_size, \ - 'Chunk length is too small' diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/cascade_rcnn.py b/spaces/KyanChen/RSPrompter/mmdet/models/detectors/cascade_rcnn.py deleted file mode 100644 index ecf733ff104b99436fcc74130b0ccea12a0fa6d0..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/cascade_rcnn.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet.registry import MODELS -from mmdet.utils import ConfigType, OptConfigType, OptMultiConfig -from .two_stage import TwoStageDetector - - -@MODELS.register_module() -class CascadeRCNN(TwoStageDetector): - r"""Implementation of `Cascade R-CNN: Delving into High Quality Object - Detection <https://arxiv.org/abs/1906.09756>`_""" - - def __init__(self, - backbone: ConfigType, - neck: OptConfigType = None, - rpn_head: OptConfigType = None, - roi_head: OptConfigType = None, - train_cfg: OptConfigType = None, - test_cfg: OptConfigType = None, - data_preprocessor: OptConfigType = None, - init_cfg: OptMultiConfig = None) -> None: - super().__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - data_preprocessor=data_preprocessor, - init_cfg=init_cfg) diff --git a/spaces/LaynzKunz/RVC-Inference-webui-grado-colab-huggingafce/infer.py b/spaces/LaynzKunz/RVC-Inference-webui-grado-colab-huggingafce/infer.py deleted file mode 100644 index 9a40678751ebbe05d371e4d094ad711464d239dc..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/RVC-Inference-webui-grado-colab-huggingafce/infer.py +++ /dev/null @@ -1,942 +0,0 @@ -import torch, os, traceback, sys, warnings, shutil, numpy as np -import gradio as gr -import librosa -import asyncio -import rarfile -import edge_tts -import yt_dlp -import ffmpeg -import gdown -import subprocess -import wave -import soundfile as sf -from scipy.io import wavfile -from datetime import datetime -from urllib.parse import urlparse -from mega import Mega - -now_dir = os.getcwd() -tmp = os.path.join(now_dir, "TEMP") -shutil.rmtree(tmp, ignore_errors=True) -os.makedirs(tmp, exist_ok=True) -os.environ["TEMP"] = tmp -from lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -from fairseq import checkpoint_utils -from vc_infer_pipeline import VC -from config import Config -config = Config() - -tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) -voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - -hubert_model = None - -f0method_mode = ["pm", "harvest", "crepe"] -f0method_info = "PM is fast, Harvest is good but extremely slow, and Crepe effect is good but requires GPU (Default: PM)" - -if os.path.isfile("rmvpe.pt"): - f0method_mode.insert(2, "rmvpe") - f0method_info = "PM is fast, Harvest is good but extremely slow, Rvmpe is alternative to harvest (might be better), and Crepe effect is good but requires GPU 
(Default: PM)" - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(config.device) - if config.is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -load_hubert() - -weight_root = "weights" -index_root = "weights/index" -weights_model = [] -weights_index = [] -for _, _, model_files in os.walk(weight_root): - for file in model_files: - if file.endswith(".pth"): - weights_model.append(file) -for _, _, index_files in os.walk(index_root): - for file in index_files: - if file.endswith('.index') and "trained" not in file: - weights_index.append(os.path.join(index_root, file)) - -def check_models(): - weights_model = [] - weights_index = [] - for _, _, model_files in os.walk(weight_root): - for file in model_files: - if file.endswith(".pth"): - weights_model.append(file) - for _, _, index_files in os.walk(index_root): - for file in index_files: - if file.endswith('.index') and "trained" not in file: - weights_index.append(os.path.join(index_root, file)) - return ( - gr.Dropdown.update(choices=sorted(weights_model), value=weights_model[0]), - gr.Dropdown.update(choices=sorted(weights_index)) - ) - -def clean(): - return ( - gr.Dropdown.update(value=""), - gr.Slider.update(visible=False) - ) - -def vc_single( - sid, - vc_audio_mode, - input_audio_path, - input_upload_audio, - vocal_audio, - tts_text, - tts_voice, - f0_up_key, - f0_file, - f0_method, - file_index, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect -): # spk_item, input_audio0, vc_transform0,f0_file,f0method0 - global tgt_sr, net_g, vc, hubert_model, version, cpt - try: - logs = [] - print(f"Converting...") - logs.append(f"Converting...") - yield "\n".join(logs), None - if vc_audio_mode == "Input path" or "Youtube" and input_audio_path != "": - audio, sr = librosa.load(input_audio_path, sr=16000, mono=True) - elif vc_audio_mode == "Upload audio": - selected_audio = input_upload_audio - if vocal_audio: - selected_audio = vocal_audio - elif input_upload_audio: - selected_audio = input_upload_audio - sampling_rate, audio = selected_audio - duration = audio.shape[0] / sampling_rate - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - elif vc_audio_mode == "TTS Audio": - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - input_audio_path = "tts.mp3" - f0_up_key = int(f0_up_key) - times = [0, 0, 0] - if hubert_model == None: - load_hubert() - if_f0 = cpt.get("f0", 1) - audio_opt = vc.pipeline( - hubert_model, - net_g, - sid, - audio, - input_audio_path, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=f0_file - ) - if resample_sr >= 16000 and tgt_sr != resample_sr: - tgt_sr = resample_sr - index_info = ( - "Using index:%s." % file_index - if os.path.exists(file_index) - else "Index not used." 
- ) - print("Success.\n %s\nTime:\n npy:%ss, f0:%ss, infer:%ss" % ( - index_info, - times[0], - times[1], - times[2], - )) - info = f"{index_info}\n[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - logs.append(info) - yield "\n".join(logs), (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - logs.append(info) - yield "\n".join(logs), None - -def get_vc(sid, to_return_protect0): - global n_spk, tgt_sr, net_g, vc, cpt, version, weights_index - if sid == "" or sid == []: - global hubert_model - if hubert_model is not None: # 考虑到轮询, 需要加个判断看是否 sid 是由有模型切换到无模型的 - print("clean_empty_cache") - del net_g, n_spk, vc, hubert_model, tgt_sr # ,cpt - hubert_model = net_g = n_spk = vc = hubert_model = tgt_sr = None - if torch.cuda.is_available(): - torch.cuda.empty_cache() - ###楼下不这么折腾清理不干净 - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del net_g, cpt - if torch.cuda.is_available(): - torch.cuda.empty_cache() - cpt = None - return ( - gr.Slider.update(maximum=2333, visible=False), - gr.Slider.update(visible=True), - gr.Dropdown.update(choices=sorted(weights_index), value=""), - gr.Markdown.update(value="# <center> No model selected") - ) - print(f"Loading {sid} model...") - selected_model = sid[:-4] - cpt = torch.load(os.path.join(weight_root, sid), map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] - if_f0 = cpt.get("f0", 1) - if if_f0 == 0: - to_return_protect0 = { - "visible": False, - "value": 0.5, - "__type__": "update", - } - else: - to_return_protect0 = { - "visible": True, - "value": to_return_protect0, - "__type__": "update", - } - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) - net_g.eval().to(config.device) - if config.is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, config) - n_spk = cpt["config"][-3] - weights_index = [] - for _, _, index_files in os.walk(index_root): - for file in index_files: - if file.endswith('.index') and "trained" not in file: - weights_index.append(os.path.join(index_root, file)) - if weights_index == []: - selected_index = gr.Dropdown.update(value="") - else: - selected_index = gr.Dropdown.update(value=weights_index[0]) - for index, model_index in enumerate(weights_index): - if selected_model in model_index: - selected_index = gr.Dropdown.update(value=weights_index[index]) - break - return ( - gr.Slider.update(maximum=n_spk, visible=True), - to_return_protect0, - selected_index, - gr.Markdown.update( - f'## <center> {selected_model}\n'+ - f'### <center> RVC {version} Model' - ) - ) - -def find_audio_files(folder_path, extensions): - audio_files = [] - for root, dirs, files in 
os.walk(folder_path): - for file in files: - if any(file.endswith(ext) for ext in extensions): - audio_files.append(file) - return audio_files - -def vc_multi( - spk_item, - vc_input, - vc_output, - vc_transform0, - f0method0, - file_index, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, -): - global tgt_sr, net_g, vc, hubert_model, version, cpt - logs = [] - logs.append("Converting...") - yield "\n".join(logs) - print() - try: - if os.path.exists(vc_input): - folder_path = vc_input - extensions = [".mp3", ".wav", ".flac", ".ogg"] - audio_files = find_audio_files(folder_path, extensions) - for index, file in enumerate(audio_files, start=1): - audio, sr = librosa.load(os.path.join(folder_path, file), sr=16000, mono=True) - input_audio_path = folder_path, file - f0_up_key = int(vc_transform0) - times = [0, 0, 0] - if hubert_model == None: - load_hubert() - if_f0 = cpt.get("f0", 1) - audio_opt = vc.pipeline( - hubert_model, - net_g, - spk_item, - audio, - input_audio_path, - times, - f0_up_key, - f0method0, - file_index, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=None - ) - if resample_sr >= 16000 and tgt_sr != resample_sr: - tgt_sr = resample_sr - output_path = f"{os.path.join(vc_output, file)}" - os.makedirs(os.path.join(vc_output), exist_ok=True) - sf.write( - output_path, - audio_opt, - tgt_sr, - ) - info = f"{index} / {len(audio_files)} | {file}" - print(info) - logs.append(info) - yield "\n".join(logs) - else: - logs.append("Folder not found or path doesn't exist.") - yield "\n".join(logs) - except: - info = traceback.format_exc() - print(info) - logs.append(info) - yield "\n".join(logs) - -def download_audio(url, audio_provider): - logs = [] - os.makedirs("dl_audio", exist_ok=True) - if url == "": - logs.append("URL required!") - yield None, "\n".join(logs) - return None, "\n".join(logs) - if audio_provider == "Youtube": - logs.append("Downloading the audio...") - yield None, "\n".join(logs) - ydl_opts = { - 'noplaylist': True, - 'format': 'bestaudio/best', - 'postprocessors': [{ - 'key': 'FFmpegExtractAudio', - 'preferredcodec': 'wav', - }], - "outtmpl": 'result/dl_audio/audio', - } - audio_path = "result/dl_audio/audio.wav" - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - ydl.download([url]) - logs.append("Download Complete.") - yield audio_path, "\n".join(logs) - -def cut_vocal_and_inst_yt(split_model): - logs = [] - logs.append("Starting the audio splitting process...") - yield "\n".join(logs), None, None, None - command = f"demucs --two-stems=vocals -n {split_model} result/dl_audio/audio.wav -o output" - result = subprocess.Popen(command.split(), stdout=subprocess.PIPE, text=True) - for line in result.stdout: - logs.append(line) - yield "\n".join(logs), None, None, None - print(result.stdout) - vocal = f"output/{split_model}/audio/vocals.wav" - inst = f"output/{split_model}/audio/no_vocals.wav" - logs.append("Audio splitting complete.") - yield "\n".join(logs), vocal, inst, vocal - -def cut_vocal_and_inst(split_model, audio_data): - logs = [] - vocal_path = "output/result/audio.wav" - os.makedirs("output/result", exist_ok=True) - wavfile.write(vocal_path, audio_data[0], audio_data[1]) - logs.append("Starting the audio splitting process...") - yield "\n".join(logs), None, None - command = f"demucs --two-stems=vocals -n {split_model} {vocal_path} -o output" - result = subprocess.Popen(command.split(), stdout=subprocess.PIPE, text=True) - for line in result.stdout: - logs.append(line) - yield 
"\n".join(logs), None, None - print(result.stdout) - vocal = f"output/{split_model}/audio/vocals.wav" - inst = f"output/{split_model}/audio/no_vocals.wav" - logs.append("Audio splitting complete.") - yield "\n".join(logs), vocal, inst - -def combine_vocal_and_inst(audio_data, vocal_volume, inst_volume, split_model): - os.makedirs("output/result", exist_ok=True) - vocal_path = "output/result/output.wav" - output_path = "output/result/combine.mp3" - inst_path = f"output/{split_model}/audio/no_vocals.wav" - wavfile.write(vocal_path, audio_data[0], audio_data[1]) - command = f'ffmpeg -y -i {inst_path} -i {vocal_path} -filter_complex [0:a]volume={inst_volume}[i];[1:a]volume={vocal_volume}[v];[i][v]amix=inputs=2:duration=longest[a] -map [a] -b:a 320k -c:a libmp3lame {output_path}' - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return output_path - -def download_and_extract_models(urls): - logs = [] - os.makedirs("zips", exist_ok=True) - os.makedirs(os.path.join("zips", "extract"), exist_ok=True) - os.makedirs(os.path.join(weight_root), exist_ok=True) - os.makedirs(os.path.join(index_root), exist_ok=True) - for link in urls.splitlines(): - url = link.strip() - if not url: - raise gr.Error("URL Required!") - return "No URLs provided." - model_zip = urlparse(url).path.split('/')[-2] + '.zip' - model_zip_path = os.path.join('zips', model_zip) - logs.append(f"Downloading...") - yield "\n".join(logs) - if "drive.google.com" in url: - gdown.download(url, os.path.join("zips", "extract"), quiet=False) - elif "mega.nz" in url: - m = Mega() - m.download_url(url, 'zips') - else: - os.system(f"wget {url} -O {model_zip_path}") - logs.append(f"Extracting...") - yield "\n".join(logs) - for filename in os.listdir("zips"): - archived_file = os.path.join("zips", filename) - if filename.endswith(".zip"): - shutil.unpack_archive(archived_file, os.path.join("zips", "extract"), 'zip') - elif filename.endswith(".rar"): - with rarfile.RarFile(archived_file, 'r') as rar: - rar.extractall(os.path.join("zips", "extract")) - for _, dirs, files in os.walk(os.path.join("zips", "extract")): - logs.append(f"Searching Model and Index...") - yield "\n".join(logs) - model = False - index = False - if files: - for file in files: - if file.endswith(".pth"): - basename = file[:-4] - shutil.move(os.path.join("zips", "extract", file), os.path.join(weight_root, file)) - model = True - if file.endswith('.index') and "trained" not in file: - shutil.move(os.path.join("zips", "extract", file), os.path.join(index_root, file)) - index = True - else: - logs.append("No model in main folder.") - yield "\n".join(logs) - logs.append("Searching in subfolders...") - yield "\n".join(logs) - for sub_dir in dirs: - for _, _, sub_files in os.walk(os.path.join("zips", "extract", sub_dir)): - for file in sub_files: - if file.endswith(".pth"): - basename = file[:-4] - shutil.move(os.path.join("zips", "extract", sub_dir, file), os.path.join(weight_root, file)) - model = True - if file.endswith('.index') and "trained" not in file: - shutil.move(os.path.join("zips", "extract", sub_dir, file), os.path.join(index_root, file)) - index = True - shutil.rmtree(os.path.join("zips", "extract", sub_dir)) - if index is False: - logs.append("Model only file, no Index file detected.") - yield "\n".join(logs) - logs.append("Download Completed!") - yield "\n".join(logs) - logs.append("Successfully download all models! 
Refresh your model list to load the model") - yield "\n".join(logs) - -def use_microphone(microphone): - if microphone == True: - return gr.Audio.update(source="microphone") - else: - return gr.Audio.update(source="upload") - -def change_audio_mode(vc_audio_mode): - if vc_audio_mode == "Input path": - return ( - # Input & Upload - gr.Textbox.update(visible=True), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Textbox.update(visible=False), - gr.Button.update(visible=False), - # Splitter - gr.Dropdown.update(visible=True), - gr.Textbox.update(visible=True), - gr.Button.update(visible=True), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=True), - gr.Audio.update(visible=True), - gr.Slider.update(visible=True), - gr.Slider.update(visible=True), - gr.Audio.update(visible=True), - gr.Button.update(visible=True), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "Upload audio": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=True), - gr.Audio.update(visible=True), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Textbox.update(visible=False), - gr.Button.update(visible=False), - # Splitter - gr.Dropdown.update(visible=True), - gr.Textbox.update(visible=True), - gr.Button.update(visible=False), - gr.Button.update(visible=True), - gr.Audio.update(visible=False), - gr.Audio.update(visible=True), - gr.Audio.update(visible=True), - gr.Slider.update(visible=True), - gr.Slider.update(visible=True), - gr.Audio.update(visible=True), - gr.Button.update(visible=True), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "Youtube": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=True), - gr.Textbox.update(visible=True), - gr.Textbox.update(visible=True), - gr.Button.update(visible=True), - # Splitter - gr.Dropdown.update(visible=True), - gr.Textbox.update(visible=True), - gr.Button.update(visible=True), - gr.Button.update(visible=False), - gr.Audio.update(visible=True), - gr.Audio.update(visible=True), - gr.Audio.update(visible=True), - gr.Slider.update(visible=True), - gr.Slider.update(visible=True), - gr.Audio.update(visible=True), - gr.Button.update(visible=True), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "TTS Audio": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Textbox.update(visible=False), - gr.Button.update(visible=False), - # Splitter - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Button.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=True), - gr.Dropdown.update(visible=True) - ) - -with gr.Blocks() as app: - gr.Markdown( - "# <center> Advanced RVC Inference\n" 
- ) - with gr.Row(): - sid = gr.Dropdown( - label="Weight", - choices=sorted(weights_model), - ) - file_index = gr.Dropdown( - label="List of index file", - choices=sorted(weights_index), - interactive=True, - ) - spk_item = gr.Slider( - minimum=0, - maximum=2333, - step=1, - label="Speaker ID", - value=0, - visible=False, - interactive=True, - ) - refresh_model = gr.Button("Refresh model list", variant="primary") - clean_button = gr.Button("Clear Model from memory", variant="primary") - refresh_model.click( - fn=check_models, inputs=[], outputs=[sid, file_index] - ) - clean_button.click(fn=clean, inputs=[], outputs=[sid, spk_item]) - with gr.TabItem("Inference"): - selected_model = gr.Markdown(value="# <center> No model selected") - with gr.Row(): - with gr.Column(): - vc_audio_mode = gr.Dropdown(label="Input voice", choices=["Input path", "Upload audio", "Youtube", "TTS Audio"], allow_custom_value=False, value="Upload audio") - # Input - vc_input = gr.Textbox(label="Input audio path", visible=False) - # Upload - vc_microphone_mode = gr.Checkbox(label="Use Microphone", value=False, visible=True, interactive=True) - vc_upload = gr.Audio(label="Upload audio file", source="upload", visible=True, interactive=True) - # Youtube - vc_download_audio = gr.Dropdown(label="Provider", choices=["Youtube"], allow_custom_value=False, visible=False, value="Youtube", info="Select provider (Default: Youtube)") - vc_link = gr.Textbox(label="Youtube URL", visible=False, info="Example: https://www.youtube.com/watch?v=Nc0sB1Bmf-A", placeholder="https://www.youtube.com/watch?v=...") - vc_log_yt = gr.Textbox(label="Output Information", visible=False, interactive=False) - vc_download_button = gr.Button("Download Audio", variant="primary", visible=False) - vc_audio_preview = gr.Audio(label="Downloaded Audio Preview", visible=False) - # TTS - tts_text = gr.Textbox(label="TTS text", info="Text to speech input", visible=False) - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - # Splitter - vc_split_model = gr.Dropdown(label="Splitter Model", choices=["hdemucs_mmi", "htdemucs", "htdemucs_ft", "mdx", "mdx_q", "mdx_extra_q"], allow_custom_value=False, visible=True, value="htdemucs", info="Select the splitter model (Default: htdemucs)") - vc_split_log = gr.Textbox(label="Output Information", visible=True, interactive=False) - vc_split_yt = gr.Button("Split Audio", variant="primary", visible=False) - vc_split = gr.Button("Split Audio", variant="primary", visible=True) - vc_vocal_preview = gr.Audio(label="Vocal Preview", interactive=False, visible=True) - vc_inst_preview = gr.Audio(label="Instrumental Preview", interactive=False, visible=True) - with gr.Column(): - vc_transform0 = gr.Number( - label="Transpose", - info='Type "12" to change from male to female convertion or Type "-12" to change female to male convertion.', - value=0 - ) - f0method0 = gr.Radio( - label="Pitch extraction algorithm", - info=f0method_info, - choices=f0method_mode, - value="pm", - interactive=True, - ) - index_rate0 = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - value=0.7, - interactive=True, - ) - filter_radius0 = gr.Slider( - minimum=0, - maximum=7, - label="Apply Median Filtering", - info="The value represents the filter radius and can reduce breathiness.", - value=3, - step=1, - interactive=True, - ) - resample_sr0 = gr.Slider( - minimum=0, - maximum=48000, - label="Resample the output audio", - info="Resample the output 
audio in post-processing to the final sample rate. Set to 0 for no resampling", - value=0, - step=1, - interactive=True, - ) - rms_mix_rate0 = gr.Slider( - minimum=0, - maximum=1, - label="Volume Envelope", - info="Use the volume envelope of the input to replace or mix with the volume envelope of the output. The closer the ratio is to 1, the more the output envelope is used", - value=1, - interactive=True, - ) - protect0 = gr.Slider( - minimum=0, - maximum=0.5, - label="Voice Protection", - info="Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. Decrease the value to increase protection, but it may reduce indexing accuracy", - value=0.5, - step=0.01, - interactive=True, - ) - f0_file0 = gr.File( - label="F0 curve file (Optional)", - info="One pitch per line, Replace the default F0 and pitch modulation" - ) - with gr.Column(): - vc_log = gr.Textbox(label="Output Information", interactive=False) - vc_output = gr.Audio(label="Output Audio", interactive=False) - vc_convert = gr.Button("Convert", variant="primary") - vc_vocal_volume = gr.Slider( - minimum=0, - maximum=10, - label="Vocal volume", - value=1, - interactive=True, - step=1, - info="Adjust vocal volume (Default: 1}", - visible=True - ) - vc_inst_volume = gr.Slider( - minimum=0, - maximum=10, - label="Instrument volume", - value=1, - interactive=True, - step=1, - info="Adjust instrument volume (Default: 1}", - visible=True - ) - vc_combined_output = gr.Audio(label="Output Combined Audio", visible=True) - vc_combine = gr.Button("Combine",variant="primary", visible=True) - vc_convert.click( - vc_single, - [ - spk_item, - vc_audio_mode, - vc_input, - vc_upload, - vc_vocal_preview, - tts_text, - tts_voice, - vc_transform0, - f0_file0, - f0method0, - file_index, - index_rate0, - filter_radius0, - resample_sr0, - rms_mix_rate0, - protect0, - ], - [vc_log, vc_output], - ) - vc_download_button.click( - fn=download_audio, - inputs=[vc_link, vc_download_audio], - outputs=[vc_audio_preview, vc_log_yt] - ) - vc_split_yt.click( - fn=cut_vocal_and_inst_yt, - inputs=[vc_split_model], - outputs=[vc_split_log, vc_vocal_preview, vc_inst_preview, vc_input] - ) - vc_split.click( - fn=cut_vocal_and_inst, - inputs=[vc_split_model, vc_upload], - outputs=[vc_split_log, vc_vocal_preview, vc_inst_preview] - ) - vc_combine.click( - fn=combine_vocal_and_inst, - inputs=[vc_output, vc_vocal_volume, vc_inst_volume, vc_split_model], - outputs=[vc_combined_output] - ) - vc_microphone_mode.change( - fn=use_microphone, - inputs=vc_microphone_mode, - outputs=vc_upload - ) - vc_audio_mode.change( - fn=change_audio_mode, - inputs=[vc_audio_mode], - outputs=[ - # Input & Upload - vc_input, - vc_microphone_mode, - vc_upload, - # Youtube - vc_download_audio, - vc_link, - vc_log_yt, - vc_download_button, - # Splitter - vc_split_model, - vc_split_log, - vc_split_yt, - vc_split, - vc_audio_preview, - vc_vocal_preview, - vc_inst_preview, - vc_vocal_volume, - vc_inst_volume, - vc_combined_output, - vc_combine, - # TTS - tts_text, - tts_voice - ] - ) - sid.change(fn=get_vc, inputs=[sid, protect0], outputs=[spk_item, protect0, file_index, selected_model]) - with gr.TabItem("Batch Inference"): - with gr.Row(): - with gr.Column(): - vc_input_bat = gr.Textbox(label="Input audio path (folder)", visible=True) - vc_output_bat = gr.Textbox(label="Output audio path (folder)", value="result/batch", visible=True) - with gr.Column(): - vc_transform0_bat = gr.Number( - label="Transpose", - info='Type "12" to change 
from male to female convertion or Type "-12" to change female to male convertion.', - value=0 - ) - f0method0_bat = gr.Radio( - label="Pitch extraction algorithm", - info=f0method_info, - choices=f0method_mode, - value="pm", - interactive=True, - ) - index_rate0_bat = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - value=0.7, - interactive=True, - ) - filter_radius0_bat = gr.Slider( - minimum=0, - maximum=7, - label="Apply Median Filtering", - info="The value represents the filter radius and can reduce breathiness.", - value=3, - step=1, - interactive=True, - ) - resample_sr0_bat = gr.Slider( - minimum=0, - maximum=48000, - label="Resample the output audio", - info="Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling", - value=0, - step=1, - interactive=True, - ) - rms_mix_rate0_bat = gr.Slider( - minimum=0, - maximum=1, - label="Volume Envelope", - info="Use the volume envelope of the input to replace or mix with the volume envelope of the output. The closer the ratio is to 1, the more the output envelope is used", - value=1, - interactive=True, - ) - protect0_bat = gr.Slider( - minimum=0, - maximum=0.5, - label="Voice Protection", - info="Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. Decrease the value to increase protection, but it may reduce indexing accuracy", - value=0.5, - step=0.01, - interactive=True, - ) - with gr.Column(): - vc_log_bat = gr.Textbox(label="Output Information", interactive=False) - vc_convert_bat = gr.Button("Convert", variant="primary") - vc_convert_bat.click( - vc_multi, - [ - spk_item, - vc_input_bat, - vc_output_bat, - vc_transform0_bat, - f0method0_bat, - file_index, - index_rate0_bat, - filter_radius0_bat, - resample_sr0_bat, - rms_mix_rate0_bat, - protect0_bat, - ], - [vc_log_bat], - ) - with gr.TabItem("Model Downloader"): - gr.Markdown( - "# <center> Model Downloader (Beta)\n"+ - "#### <center> To download multi link you have to put your link to the textbox and every link separated by space\n"+ - "#### <center> Support Direct Link, Mega, Google Drive, etc" - ) - with gr.Column(): - md_text = gr.Textbox(label="URL") - with gr.Row(): - md_download = gr.Button(label="Convert", variant="primary") - md_download_logs = gr.Textbox(label="Output information", interactive=False) - md_download.click( - fn=download_and_extract_models, - inputs=[md_text], - outputs=[md_download_logs] - ) - with gr.TabItem("Settings"): - gr.Markdown( - "# <center> Settings\n"+ - "#### <center> Work in progress" - ) - app.queue(concurrency_count=1, max_size=50, api_open=config.api).launch(share=config.colab) \ No newline at end of file diff --git a/spaces/Legal-ease/legal-ease/README.md b/spaces/Legal-ease/legal-ease/README.md deleted file mode 100644 index 2c6723456406729b298d525047bf5b65f4b8af5d..0000000000000000000000000000000000000000 --- a/spaces/Legal-ease/legal-ease/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Legal Ease -emoji: 🏢 -colorFrom: pink -colorTo: red -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_datasets/ST_SA_MJ_real_train.py b/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_datasets/ST_SA_MJ_real_train.py deleted file mode 100644 index 
87dab3352d92c3105684908f50b9b8f6bcc71a16..0000000000000000000000000000000000000000 --- a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_datasets/ST_SA_MJ_real_train.py +++ /dev/null @@ -1,81 +0,0 @@ -# Text Recognition Training set, including: -# Synthetic Datasets: SynthText, SynthAdd, Syn90k -# Real Dataset: IC11, IC13, IC15, COCO-Test, IIIT5k - -train_prefix = 'data/mixture' - -train_img_prefix1 = f'{train_prefix}/icdar_2011' -train_img_prefix2 = f'{train_prefix}/icdar_2013' -train_img_prefix3 = f'{train_prefix}/icdar_2015' -train_img_prefix4 = f'{train_prefix}/coco_text' -train_img_prefix5 = f'{train_prefix}/IIIT5K' -train_img_prefix6 = f'{train_prefix}/SynthText_Add' -train_img_prefix7 = f'{train_prefix}/SynthText' -train_img_prefix8 = f'{train_prefix}/Syn90k' - -train_ann_file1 = f'{train_prefix}/icdar_2011/train_label.txt', -train_ann_file2 = f'{train_prefix}/icdar_2013/train_label.txt', -train_ann_file3 = f'{train_prefix}/icdar_2015/train_label.txt', -train_ann_file4 = f'{train_prefix}/coco_text/train_label.txt', -train_ann_file5 = f'{train_prefix}/IIIT5K/train_label.txt', -train_ann_file6 = f'{train_prefix}/SynthText_Add/label.txt', -train_ann_file7 = f'{train_prefix}/SynthText/shuffle_labels.txt', -train_ann_file8 = f'{train_prefix}/Syn90k/shuffle_labels.txt' - -train1 = dict( - type='OCRDataset', - img_prefix=train_img_prefix1, - ann_file=train_ann_file1, - loader=dict( - type='AnnFileLoader', - repeat=20, - file_format='txt', - parser=dict( - type='LineStrParser', - keys=['filename', 'text'], - keys_idx=[0, 1], - separator=' ')), - pipeline=None, - test_mode=False) - -train2 = {key: value for key, value in train1.items()} -train2['img_prefix'] = train_img_prefix2 -train2['ann_file'] = train_ann_file2 - -train3 = {key: value for key, value in train1.items()} -train3['img_prefix'] = train_img_prefix3 -train3['ann_file'] = train_ann_file3 - -train4 = {key: value for key, value in train1.items()} -train4['img_prefix'] = train_img_prefix4 -train4['ann_file'] = train_ann_file4 - -train5 = {key: value for key, value in train1.items()} -train5['img_prefix'] = train_img_prefix5 -train5['ann_file'] = train_ann_file5 - -train6 = dict( - type='OCRDataset', - img_prefix=train_img_prefix6, - ann_file=train_ann_file6, - loader=dict( - type='AnnFileLoader', - repeat=1, - file_format='txt', - parser=dict( - type='LineStrParser', - keys=['filename', 'text'], - keys_idx=[0, 1], - separator=' ')), - pipeline=None, - test_mode=False) - -train7 = {key: value for key, value in train6.items()} -train7['img_prefix'] = train_img_prefix7 -train7['ann_file'] = train_ann_file7 - -train8 = {key: value for key, value in train6.items()} -train8['img_prefix'] = train_img_prefix8 -train8['ann_file'] = train_ann_file8 - -train_list = [train1, train2, train3, train4, train5, train6, train7, train8] diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/panet/panet_r18_fpem_ffm_600e_icdar2015.py b/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/panet/panet_r18_fpem_ffm_600e_icdar2015.py deleted file mode 100644 index 1183974024cf33d814f635ddb1454895fbd3c02c..0000000000000000000000000000000000000000 --- a/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/panet/panet_r18_fpem_ffm_600e_icdar2015.py +++ /dev/null @@ -1,35 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/schedules/schedule_adam_600e.py', - '../../_base_/det_models/panet_r18_fpem_ffm.py', - '../../_base_/det_datasets/icdar2015.py', - '../../_base_/det_pipelines/panet_pipeline.py' -] - -model = 
{{_base_.model_quad}} - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline_icdar2015 = {{_base_.train_pipeline_icdar2015}} -test_pipeline_icdar2015 = {{_base_.test_pipeline_icdar2015}} - -data = dict( - samples_per_gpu=8, - workers_per_gpu=2, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline_icdar2015), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_icdar2015), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_icdar2015)) - -evaluation = dict(interval=10, metric='hmean-iou') diff --git a/spaces/Luckya/MyGenAi/app.py b/spaces/Luckya/MyGenAi/app.py deleted file mode 100644 index a362dcc7d0ddd1eee86961f1bc3db6d894fbd3d5..0000000000000000000000000000000000000000 --- a/spaces/Luckya/MyGenAi/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """You are a helpful assistant to answer all user queries. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. 
diff --git a/spaces/Mahiruoshi/MyGO_VIts-bert/text/japanese_bert.py b/spaces/Mahiruoshi/MyGO_VIts-bert/text/japanese_bert.py deleted file mode 100644 index 5dd196483da4355746383253879190ce538b9df9..0000000000000000000000000000000000000000 --- a/spaces/Mahiruoshi/MyGO_VIts-bert/text/japanese_bert.py +++ /dev/null @@ -1,38 +0,0 @@ -import torch -from transformers import AutoTokenizer, AutoModelForMaskedLM -import sys - -tokenizer = AutoTokenizer.from_pretrained("./bert/bert-base-japanese-v3") - -models = dict() - - -def get_bert_feature(text, word2ph, device=None): - if ( - sys.platform == "darwin" - and torch.backends.mps.is_available() - and device == "cpu" - ): - device = "mps" - if not device: - device = "cuda" - if device not in models.keys(): - models[device] = AutoModelForMaskedLM.from_pretrained( - "./bert/bert-base-japanese-v3" - ).to(device) - with torch.no_grad(): - inputs = tokenizer(text, return_tensors="pt") - for i in inputs: - inputs[i] = inputs[i].to(device) - res = models[device](**inputs, output_hidden_states=True) - res = torch.cat(res["hidden_states"][-3:-2], -1)[0].cpu() - assert inputs["input_ids"].shape[-1] == len(word2ph) - word2phone = word2ph - phone_level_feature = [] - for i in range(len(word2phone)): - repeat_feature = res[i].repeat(word2phone[i], 1) - phone_level_feature.append(repeat_feature) - - phone_level_feature = torch.cat(phone_level_feature, dim=0) - - return phone_level_feature.T diff --git a/spaces/Marshalls/testmtd/analysis/pymo/preprocessing.py b/spaces/Marshalls/testmtd/analysis/pymo/preprocessing.py deleted file mode 100644 index f752ece6e85caf219e8dd6811d61d6fb7c6cb0b6..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/analysis/pymo/preprocessing.py +++ /dev/null @@ -1,1330 +0,0 @@ -''' -Preprocessing Transformers Based on scikit-learn's API - -By Omid Alemi -Created on June 12, 2017 -''' -import copy -import pandas as pd -import numpy as np -import transforms3d as t3d -import scipy.ndimage.filters as filters - -from sklearn.base import BaseEstimator, TransformerMixin - -from analysis.pymo.rotation_tools import Rotation, euler2expmap, euler2expmap2, expmap2euler, euler_reorder, unroll -from analysis.pymo.Quaternions import Quaternions -from analysis.pymo.Pivots import Pivots - -class MocapParameterizer(BaseEstimator, TransformerMixin): - def __init__(self, param_type = 'euler'): - ''' - - param_type = {'euler', 'quat', 'expmap', 'position', 'expmap2pos'} - ''' - self.param_type = param_type - - def fit(self, X, y=None): - return self - - def transform(self, X, y=None): - print("MocapParameterizer: " + self.param_type) - if self.param_type == 'euler': - return X - elif self.param_type == 'expmap': - return self._to_expmap(X) - elif self.param_type == 'quat': - return X - elif self.param_type == 'position': - return self._to_pos(X) - elif self.param_type == 'expmap2pos': - return self._expmap_to_pos(X) - else: - raise ValueError('param types: euler, quat, expmap, position, expmap2pos') - -# return X - - def inverse_transform(self, X, copy=None): - if self.param_type == 'euler': - return X - elif self.param_type == 'expmap': - return self._expmap_to_euler(X) - elif self.param_type == 'quat': - raise ValueError('quat2euler is not supported') - elif self.param_type == 'position': - # raise 'positions 2 eulers is not supported' - print('positions 2 eulers is not supported') - return X - else: - raise ValueError('param types: euler, quat, expmap, position') - - def _to_pos(self, X): - '''Converts joints rotations in Euler angles to joint positions''' - - Q = [] - for track 
in X: - channels = [] - titles = [] - euler_df = track.values - - # Create a new DataFrame to store the exponential map rep - pos_df = pd.DataFrame(index=euler_df.index) - - # Copy the root rotations into the new DataFrame - # rxp = '%s_Xrotation'%track.root_name - # ryp = '%s_Yrotation'%track.root_name - # rzp = '%s_Zrotation'%track.root_name - # pos_df[rxp] = pd.Series(data=euler_df[rxp], index=pos_df.index) - # pos_df[ryp] = pd.Series(data=euler_df[ryp], index=pos_df.index) - # pos_df[rzp] = pd.Series(data=euler_df[rzp], index=pos_df.index) - - # List the columns that contain rotation channels - rot_cols = [c for c in euler_df.columns if ('rotation' in c)] - - # List the columns that contain position channels - pos_cols = [c for c in euler_df.columns if ('position' in c)] - - # List the joints that are not end sites, i.e., have channels - joints = (joint for joint in track.skeleton) - - tree_data = {} - - for joint in track.traverse(): - parent = track.skeleton[joint]['parent'] - rot_order = track.skeleton[joint]['order'] - #print("rot_order:" + joint + " :" + rot_order) - - # Get the rotation columns that belong to this joint - rc = euler_df[[c for c in rot_cols if joint in c]] - - # Get the position columns that belong to this joint - pc = euler_df[[c for c in pos_cols if joint in c]] - - # Make sure the columns are organized in xyz order - if rc.shape[1] < 3: - euler_values = np.zeros((euler_df.shape[0], 3)) - rot_order = "XYZ" - else: - euler_values = np.pi/180.0*np.transpose(np.array([track.values['%s_%srotation'%(joint, rot_order[0])], track.values['%s_%srotation'%(joint, rot_order[1])], track.values['%s_%srotation'%(joint, rot_order[2])]])) - - if pc.shape[1] < 3: - pos_values = np.asarray([[0,0,0] for f in pc.iterrows()]) - else: - pos_values =np.asarray([[f[1]['%s_Xposition'%joint], - f[1]['%s_Yposition'%joint], - f[1]['%s_Zposition'%joint]] for f in pc.iterrows()]) - - quats = Quaternions.from_euler(np.asarray(euler_values), order=rot_order.lower(), world=False) - - tree_data[joint]=[ - [], # to store the rotation matrix - [] # to store the calculated position - ] - if track.root_name == joint: - tree_data[joint][0] = quats#rotmats - # tree_data[joint][1] = np.add(pos_values, track.skeleton[joint]['offsets']) - tree_data[joint][1] = pos_values - else: - # for every frame i, multiply this joint's rotmat to the rotmat of its parent - tree_data[joint][0] = tree_data[parent][0]*quats# np.matmul(rotmats, tree_data[parent][0]) - - # add the position channel to the offset and store it in k, for every frame i - k = pos_values + np.asarray(track.skeleton[joint]['offsets']) - - # multiply k to the rotmat of the parent for every frame i - q = tree_data[parent][0]*k #np.matmul(k.reshape(k.shape[0],1,3), tree_data[parent][0]) - - # add q to the position of the parent, for every frame i - tree_data[joint][1] = tree_data[parent][1] + q #q.reshape(k.shape[0],3) + tree_data[parent][1] - - # Create the corresponding columns in the new DataFrame - pos_df['%s_Xposition'%joint] = pd.Series(data=[e[0] for e in tree_data[joint][1]], index=pos_df.index) - pos_df['%s_Yposition'%joint] = pd.Series(data=[e[1] for e in tree_data[joint][1]], index=pos_df.index) - pos_df['%s_Zposition'%joint] = pd.Series(data=[e[2] for e in tree_data[joint][1]], index=pos_df.index) - - - new_track = track.clone() - new_track.values = pos_df - Q.append(new_track) - return Q - - def _expmap2rot(self, expmap): - - theta = np.linalg.norm(expmap, axis=1, keepdims=True) - nz = np.nonzero(theta)[0] - - expmap[nz,:] = 
expmap[nz,:]/theta[nz] - - nrows=expmap.shape[0] - x = expmap[:,0] - y = expmap[:,1] - z = expmap[:,2] - - s = np.sin(theta*0.5).reshape(nrows) - c = np.cos(theta*0.5).reshape(nrows) - - rotmats = np.zeros((nrows, 3, 3)) - - rotmats[:,0,0] = 2*(x*x-1)*s*s+1 - rotmats[:,0,1] = 2*x*y*s*s-2*z*c*s - rotmats[:,0,2] = 2*x*z*s*s+2*y*c*s - rotmats[:,1,0] = 2*x*y*s*s+2*z*c*s - rotmats[:,1,1] = 2*(y*y-1)*s*s+1 - rotmats[:,1,2] = 2*y*z*s*s-2*x*c*s - rotmats[:,2,0] = 2*x*z*s*s-2*y*c*s - rotmats[:,2,1] = 2*y*z*s*s+2*x*c*s - rotmats[:,2,2] = 2*(z*z-1)*s*s+1 - - return rotmats - - def _expmap_to_pos(self, X): - '''Converts joints rotations in expmap notation to joint positions''' - - Q = [] - for track in X: - channels = [] - titles = [] - exp_df = track.values - - # Create a new DataFrame to store the exponential map rep - pos_df = pd.DataFrame(index=exp_df.index) - - # Copy the root rotations into the new DataFrame - # rxp = '%s_Xrotation'%track.root_name - # ryp = '%s_Yrotation'%track.root_name - # rzp = '%s_Zrotation'%track.root_name - # pos_df[rxp] = pd.Series(data=euler_df[rxp], index=pos_df.index) - # pos_df[ryp] = pd.Series(data=euler_df[ryp], index=pos_df.index) - # pos_df[rzp] = pd.Series(data=euler_df[rzp], index=pos_df.index) - - # List the columns that contain rotation channels - exp_params = [c for c in exp_df.columns if ( any(p in c for p in ['alpha', 'beta','gamma']) and 'Nub' not in c)] - - # List the joints that are not end sites, i.e., have channels - joints = (joint for joint in track.skeleton) - - tree_data = {} - - for joint in track.traverse(): - parent = track.skeleton[joint]['parent'] - - if 'Nub' not in joint: - r = exp_df[[c for c in exp_params if joint in c]] # Get the columns that belong to this joint - expmap = r.values - #expmap = [[f[1]['%s_alpha'%joint], f[1]['%s_beta'%joint], f[1]['%s_gamma'%joint]] for f in r.iterrows()] - else: - expmap = np.zeros((exp_df.shape[0], 3)) - - # Convert the eulers to rotation matrices - #rotmats = np.asarray([Rotation(f, 'expmap').rotmat for f in expmap]) - #angs = np.linalg.norm(expmap,axis=1, keepdims=True) - rotmats = self._expmap2rot(expmap) - - tree_data[joint]=[ - [], # to store the rotation matrix - [] # to store the calculated position - ] - pos_values = np.zeros((exp_df.shape[0], 3)) - - if track.root_name == joint: - tree_data[joint][0] = rotmats - # tree_data[joint][1] = np.add(pos_values, track.skeleton[joint]['offsets']) - tree_data[joint][1] = pos_values - else: - # for every frame i, multiply this joint's rotmat to the rotmat of its parent - tree_data[joint][0] = np.matmul(rotmats, tree_data[parent][0]) - - # add the position channel to the offset and store it in k, for every frame i - k = pos_values + track.skeleton[joint]['offsets'] - - # multiply k to the rotmat of the parent for every frame i - q = np.matmul(k.reshape(k.shape[0],1,3), tree_data[parent][0]) - - # add q to the position of the parent, for every frame i - tree_data[joint][1] = q.reshape(k.shape[0],3) + tree_data[parent][1] - - - # Create the corresponding columns in the new DataFrame - pos_df['%s_Xposition'%joint] = pd.Series(data=tree_data[joint][1][:,0], index=pos_df.index) - pos_df['%s_Yposition'%joint] = pd.Series(data=tree_data[joint][1][:,1], index=pos_df.index) - pos_df['%s_Zposition'%joint] = pd.Series(data=tree_data[joint][1][:,2], index=pos_df.index) - - new_track = track.clone() - new_track.values = pos_df - Q.append(new_track) - return Q - - def _to_expmap(self, X): - '''Converts Euler angles to Exponential Maps''' - - Q = [] - for track in X: - 
channels = [] - titles = [] - euler_df = track.values - - # Create a new DataFrame to store the exponential map rep - exp_df = euler_df.copy()# pd.DataFrame(index=euler_df.index) - - # Copy the root positions into the new DataFrame - #rxp = '%s_Xposition'%track.root_name - #ryp = '%s_Yposition'%track.root_name - #rzp = '%s_Zposition'%track.root_name - #exp_df[rxp] = pd.Series(data=euler_df[rxp], index=exp_df.index) - #exp_df[ryp] = pd.Series(data=euler_df[ryp], index=exp_df.index) - #exp_df[rzp] = pd.Series(data=euler_df[rzp], index=exp_df.index) - - # List the columns that contain rotation channels - rots = [c for c in euler_df.columns if ('rotation' in c and 'Nub' not in c)] - - # List the joints that are not end sites, i.e., have channels - joints = (joint for joint in track.skeleton if 'Nub' not in joint) - - for joint in joints: - #print(joint) - r = euler_df[[c for c in rots if joint in c]] # Get the columns that belong to this joint - rot_order = track.skeleton[joint]['order'] - r1_col = '%s_%srotation'%(joint, rot_order[0]) - r2_col = '%s_%srotation'%(joint, rot_order[1]) - r3_col = '%s_%srotation'%(joint, rot_order[2]) - - exp_df.drop([r1_col, r2_col, r3_col], axis=1, inplace=True) - euler = [[f[1][r1_col], f[1][r2_col], f[1][r3_col]] for f in r.iterrows()] - #exps = [Rotation(f, 'euler', from_deg=True, order=rot_order).to_expmap() for f in euler] # Convert the eulers to exp maps - exps = unroll(np.array([euler2expmap(f, rot_order, True) for f in euler])) # Convert the exp maps to eulers - # exps = np.array([euler2expmap(f, rot_order, True) for f in euler]) # Convert the exp maps to eulers - #exps = euler2expmap2(euler, rot_order, True) # Convert the eulers to exp maps - - # Create the corresponding columns in the new DataFrame - - exp_df.insert(loc=0, column='%s_gamma'%joint, value=pd.Series(data=[e[2] for e in exps], index=exp_df.index)) - exp_df.insert(loc=0, column='%s_beta'%joint, value=pd.Series(data=[e[1] for e in exps], index=exp_df.index)) - exp_df.insert(loc=0, column='%s_alpha'%joint, value=pd.Series(data=[e[0] for e in exps], index=exp_df.index)) - - #print(exp_df.columns) - new_track = track.clone() - new_track.values = exp_df - Q.append(new_track) - - return Q - - def _expmap_to_euler(self, X): - Q = [] - for track in X: - channels = [] - titles = [] - exp_df = track.values - - # Create a new DataFrame to store the exponential map rep - #euler_df = pd.DataFrame(index=exp_df.index) - euler_df = exp_df.copy() - - # Copy the root positions into the new DataFrame - #rxp = '%s_Xposition'%track.root_name - #ryp = '%s_Yposition'%track.root_name - #rzp = '%s_Zposition'%track.root_name - #euler_df[rxp] = pd.Series(data=exp_df[rxp], index=euler_df.index) - #euler_df[ryp] = pd.Series(data=exp_df[ryp], index=euler_df.index) - #euler_df[rzp] = pd.Series(data=exp_df[rzp], index=euler_df.index) - - # List the columns that contain rotation channels - exp_params = [c for c in exp_df.columns if ( any(p in c for p in ['alpha', 'beta','gamma']) and 'Nub' not in c)] - - # List the joints that are not end sites, i.e., have channels - joints = (joint for joint in track.skeleton if 'Nub' not in joint) - - for joint in joints: - r = exp_df[[c for c in exp_params if joint in c]] # Get the columns that belong to this joint - - euler_df.drop(['%s_alpha'%joint, '%s_beta'%joint, '%s_gamma'%joint], axis=1, inplace=True) - expmap = [[f[1]['%s_alpha'%joint], f[1]['%s_beta'%joint], f[1]['%s_gamma'%joint]] for f in r.iterrows()] # Make sure the columsn are organized in xyz order - rot_order = 
track.skeleton[joint]['order'] - #euler_rots = [Rotation(f, 'expmap').to_euler(True, rot_order) for f in expmap] # Convert the exp maps to eulers - euler_rots = [expmap2euler(f, rot_order, True) for f in expmap] # Convert the exp maps to eulers - - # Create the corresponding columns in the new DataFrame - - euler_df['%s_%srotation'%(joint, rot_order[0])] = pd.Series(data=[e[0] for e in euler_rots], index=euler_df.index) - euler_df['%s_%srotation'%(joint, rot_order[1])] = pd.Series(data=[e[1] for e in euler_rots], index=euler_df.index) - euler_df['%s_%srotation'%(joint, rot_order[2])] = pd.Series(data=[e[2] for e in euler_rots], index=euler_df.index) - - new_track = track.clone() - new_track.values = euler_df - Q.append(new_track) - - return Q - -class Mirror(BaseEstimator, TransformerMixin): - def __init__(self, axis="X", append=True): - """ - Mirrors the data - """ - self.axis = axis - self.append = append - - - def fit(self, X, y=None): - return self - - def transform(self, X, y=None): - print("Mirror: " + self.axis) - Q = [] - - if self.append: - for track in X: - Q.append(track) - - for track in X: - channels = [] - titles = [] - - if self.axis == "X": - signs = np.array([1,-1,-1]) - if self.axis == "Y": - signs = np.array([-1,1,-1]) - if self.axis == "Z": - signs = np.array([-1,-1,1]) - - euler_df = track.values - - # Create a new DataFrame to store the exponential map rep - new_df = pd.DataFrame(index=euler_df.index) - - # Copy the root positions into the new DataFrame - rxp = '%s_Xposition'%track.root_name - ryp = '%s_Yposition'%track.root_name - rzp = '%s_Zposition'%track.root_name - new_df[rxp] = pd.Series(data=-signs[0]*euler_df[rxp], index=new_df.index) - new_df[ryp] = pd.Series(data=-signs[1]*euler_df[ryp], index=new_df.index) - new_df[rzp] = pd.Series(data=-signs[2]*euler_df[rzp], index=new_df.index) - - # List the columns that contain rotation channels - rots = [c for c in euler_df.columns if ('rotation' in c and 'Nub' not in c)] - #lft_rots = [c for c in euler_df.columns if ('Left' in c and 'rotation' in c and 'Nub' not in c)] - #rgt_rots = [c for c in euler_df.columns if ('Right' in c and 'rotation' in c and 'Nub' not in c)] - lft_joints = (joint for joint in track.skeleton if 'Left' in joint and 'Nub' not in joint) - rgt_joints = (joint for joint in track.skeleton if 'Right' in joint and 'Nub' not in joint) - - new_track = track.clone() - - for lft_joint in lft_joints: - #lr = euler_df[[c for c in rots if lft_joint + "_" in c]] - #rot_order = track.skeleton[lft_joint]['order'] - #lft_eulers = [[f[1]['%s_Xrotation'%lft_joint], f[1]['%s_Yrotation'%lft_joint], f[1]['%s_Zrotation'%lft_joint]] for f in lr.iterrows()] - - rgt_joint = lft_joint.replace('Left', 'Right') - #rr = euler_df[[c for c in rots if rgt_joint + "_" in c]] - #rot_order = track.skeleton[rgt_joint]['order'] -# rgt_eulers = [[f[1]['%s_Xrotation'%rgt_joint], f[1]['%s_Yrotation'%rgt_joint], f[1]['%s_Zrotation'%rgt_joint]] for f in rr.iterrows()] - - # Create the corresponding columns in the new DataFrame - - new_df['%s_Xrotation'%lft_joint] = pd.Series(data=signs[0]*track.values['%s_Xrotation'%rgt_joint], index=new_df.index) - new_df['%s_Yrotation'%lft_joint] = pd.Series(data=signs[1]*track.values['%s_Yrotation'%rgt_joint], index=new_df.index) - new_df['%s_Zrotation'%lft_joint] = pd.Series(data=signs[2]*track.values['%s_Zrotation'%rgt_joint], index=new_df.index) - - new_df['%s_Xrotation'%rgt_joint] = pd.Series(data=signs[0]*track.values['%s_Xrotation'%lft_joint], index=new_df.index) - 
new_df['%s_Yrotation'%rgt_joint] = pd.Series(data=signs[1]*track.values['%s_Yrotation'%lft_joint], index=new_df.index) - new_df['%s_Zrotation'%rgt_joint] = pd.Series(data=signs[2]*track.values['%s_Zrotation'%lft_joint], index=new_df.index) - - # List the joints that are not left or right, i.e. are on the trunk - joints = (joint for joint in track.skeleton if 'Nub' not in joint and 'Left' not in joint and 'Right' not in joint) - - for joint in joints: - #r = euler_df[[c for c in rots if joint in c]] # Get the columns that belong to this joint - #rot_order = track.skeleton[joint]['order'] - - #eulers = [[f[1]['%s_Xrotation'%joint], f[1]['%s_Yrotation'%joint], f[1]['%s_Zrotation'%joint]] for f in r.iterrows()] - - # Create the corresponding columns in the new DataFrame - new_df['%s_Xrotation'%joint] = pd.Series(data=signs[0]*track.values['%s_Xrotation'%joint], index=new_df.index) - new_df['%s_Yrotation'%joint] = pd.Series(data=signs[1]*track.values['%s_Yrotation'%joint], index=new_df.index) - new_df['%s_Zrotation'%joint] = pd.Series(data=signs[2]*track.values['%s_Zrotation'%joint], index=new_df.index) - - new_track.values = new_df - Q.append(new_track) - - return Q - - def inverse_transform(self, X, copy=None, start_pos=None): - return X - -class EulerReorder(BaseEstimator, TransformerMixin): - def __init__(self, new_order): - """ - Add a - """ - self.new_order = new_order - - - def fit(self, X, y=None): - self.orig_skeleton = copy.deepcopy(X[0].skeleton) - print(self.orig_skeleton) - return self - - def transform(self, X, y=None): - Q = [] - - for track in X: - channels = [] - titles = [] - euler_df = track.values - - # Create a new DataFrame to store the exponential map rep - new_df = pd.DataFrame(index=euler_df.index) - - # Copy the root positions into the new DataFrame - rxp = '%s_Xposition'%track.root_name - ryp = '%s_Yposition'%track.root_name - rzp = '%s_Zposition'%track.root_name - new_df[rxp] = pd.Series(data=euler_df[rxp], index=new_df.index) - new_df[ryp] = pd.Series(data=euler_df[ryp], index=new_df.index) - new_df[rzp] = pd.Series(data=euler_df[rzp], index=new_df.index) - - # List the columns that contain rotation channels - rots = [c for c in euler_df.columns if ('rotation' in c and 'Nub' not in c)] - - # List the joints that are not end sites, i.e., have channels - joints = (joint for joint in track.skeleton if 'Nub' not in joint) - - new_track = track.clone() - for joint in joints: - r = euler_df[[c for c in rots if joint in c]] # Get the columns that belong to this joint - rot_order = track.skeleton[joint]['order'] - - euler = [[f[1]['%s_Xrotation'%(joint)], f[1]['%s_Yrotation'%(joint)], f[1]['%s_Zrotation'%(joint)]] for f in r.iterrows()] - new_euler = [euler_reorder(f, rot_order, self.new_order, True) for f in euler] - #new_euler = euler_reorder2(np.array(euler), rot_order, self.new_order, True) - - # Create the corresponding columns in the new DataFrame - new_df['%s_%srotation'%(joint, self.new_order[0])] = pd.Series(data=[e[0] for e in new_euler], index=new_df.index) - new_df['%s_%srotation'%(joint, self.new_order[1])] = pd.Series(data=[e[1] for e in new_euler], index=new_df.index) - new_df['%s_%srotation'%(joint, self.new_order[2])] = pd.Series(data=[e[2] for e in new_euler], index=new_df.index) - - new_track.skeleton[joint]['order'] = self.new_order - - new_track.values = new_df - Q.append(new_track) - - return Q - - def inverse_transform(self, X, copy=None, start_pos=None): - return X -# Q = [] -# -# for track in X: -# channels = [] -# titles = [] -# euler_df = 
track.values -# -# # Create a new DataFrame to store the exponential map rep -# new_df = pd.DataFrame(index=euler_df.index) -# -# # Copy the root positions into the new DataFrame -# rxp = '%s_Xposition'%track.root_name -# ryp = '%s_Yposition'%track.root_name -# rzp = '%s_Zposition'%track.root_name -# new_df[rxp] = pd.Series(data=euler_df[rxp], index=new_df.index) -# new_df[ryp] = pd.Series(data=euler_df[ryp], index=new_df.index) -# new_df[rzp] = pd.Series(data=euler_df[rzp], index=new_df.index) -# -# # List the columns that contain rotation channels -# rots = [c for c in euler_df.columns if ('rotation' in c and 'Nub' not in c)] -# -# # List the joints that are not end sites, i.e., have channels -# joints = (joint for joint in track.skeleton if 'Nub' not in joint) -# -# new_track = track.clone() -# for joint in joints: -# r = euler_df[[c for c in rots if joint in c]] # Get the columns that belong to this joint -# rot_order = track.skeleton[joint]['order'] -# new_order = self.orig_skeleton[joint]['order'] -# print("rot_order:" + str(rot_order)) -# print("new_order:" + str(new_order)) -# -# euler = [[f[1]['%s_%srotation'%(joint, rot_order[0])], f[1]['%s_%srotation'%(joint, rot_order[1])], f[1]['%s_%srotation'%(joint, rot_order[2])]] for f in r.iterrows()] -# #new_euler = [euler_reorder(f, rot_order, new_order, True) for f in euler] -# new_euler = euler_reorder2(np.array(euler), rot_order, self.new_order, True) -# -# # Create the corresponding columns in the new DataFrame -# new_df['%s_%srotation'%(joint, new_order[0])] = pd.Series(data=[e[0] for e in new_euler], index=new_df.index) -# new_df['%s_%srotation'%(joint, new_order[1])] = pd.Series(data=[e[1] for e in new_euler], index=new_df.index) -# new_df['%s_%srotation'%(joint, new_order[2])] = pd.Series(data=[e[2] for e in new_euler], index=new_df.index) -# -# new_track.skeleton[joint]['order'] = new_order -# -# new_track.values = new_df -# Q.append(new_track) -# return Q - -class JointSelector(BaseEstimator, TransformerMixin): - ''' - Allows for filtering the mocap data to include only the selected joints - ''' - def __init__(self, joints, include_root=False): - self.joints = joints - self.include_root = include_root - - def fit(self, X, y=None): - selected_joints = [] - selected_channels = [] - - if self.include_root: - selected_joints.append(X[0].root_name) - - selected_joints.extend(self.joints) - - for joint_name in selected_joints: - selected_channels.extend([o for o in X[0].values.columns if (joint_name + "_") in o and 'Nub' not in o]) - - self.selected_joints = selected_joints - self.selected_channels = selected_channels - self.not_selected = X[0].values.columns.difference(selected_channels) - self.not_selected_values = {c:X[0].values[c].values[0] for c in self.not_selected} - - self.orig_skeleton = X[0].skeleton - return self - - def transform(self, X, y=None): - print("JointSelector") - Q = [] - for track in X: - t2 = track.clone() - for key in track.skeleton.keys(): - if key not in self.selected_joints: - parent = t2.skeleton[key]['parent'] - if parent in t2.skeleton: - t2.skeleton[parent]['children'].remove(key) - t2.skeleton.pop(key) - t2.values = track.values[self.selected_channels] - - Q.append(t2) - - - return Q - - def inverse_transform(self, X, copy=None): - Q = [] - - for track in X: - t2 = track.clone() - t2.skeleton = self.orig_skeleton - for d in self.not_selected: - t2.values[d] = self.not_selected_values[d] - Q.append(t2) - - return Q - - -class Numpyfier(BaseEstimator, TransformerMixin): - ''' - Just converts the 
values in a MocapData object into a numpy array - Useful for the final stage of a pipeline before training - ''' - def __init__(self): - pass - - def fit(self, X, y=None): - self.org_mocap_ = X[0].clone() - self.org_mocap_.values.drop(self.org_mocap_.values.index, inplace=True) - - return self - - def transform(self, X, y=None): - print("Numpyfier") - Q = [] - - for track in X: - Q.append(track.values.values) - #print("Numpyfier:" + str(track.values.columns)) - - return np.array(Q) - - def inverse_transform(self, X, copy=None): - Q = [] - - for track in X: - - new_mocap = self.org_mocap_.clone() - time_index = pd.to_timedelta([f for f in range(track.shape[0])], unit='s') - - # print(self.org_mocap_.values.columns) - # import pdb;pdb.set_trace() - new_df = pd.DataFrame(data=track, index=time_index, columns=self.org_mocap_.values.columns) - - new_mocap.values = new_df - - - Q.append(new_mocap) - - return Q - -class Slicer(BaseEstimator, TransformerMixin): - ''' - Slice the data into intervals of equal size - ''' - def __init__(self, window_size, overlap=0.5): - self.window_size = window_size - self.overlap = overlap - pass - - def fit(self, X, y=None): - self.org_mocap_ = X[0].clone() - self.org_mocap_.values.drop(self.org_mocap_.values.index, inplace=True) - - return self - - def transform(self, X, y=None): - print("Slicer") - Q = [] - - for track in X: - vals = track.values.values - nframes = vals.shape[0] - overlap_frames = (int)(self.overlap*self.window_size) - - n_sequences = (nframes-overlap_frames)//(self.window_size-overlap_frames) - - if n_sequences>0: - y = np.zeros((n_sequences, self.window_size, vals.shape[1])) - - # extract sequences from the input data - for i in range(0,n_sequences): - frameIdx = (self.window_size-overlap_frames) * i - Q.append(vals[frameIdx:frameIdx+self.window_size,:]) - - return np.array(Q) - - def inverse_transform(self, X, copy=None): - Q = [] - - for track in X: - - new_mocap = self.org_mocap_.clone() - time_index = pd.to_timedelta([f for f in range(track.shape[0])], unit='s') - - new_df = pd.DataFrame(data=track, index=time_index, columns=self.org_mocap_.values.columns) - - new_mocap.values = new_df - - - Q.append(new_mocap) - - return Q - -class RootTransformer(BaseEstimator, TransformerMixin): - def __init__(self, method, position_smoothing=0, rotation_smoothing=0): - """ - Accepted methods: - abdolute_translation_deltas - pos_rot_deltas - """ - self.method = method - self.position_smoothing=position_smoothing - self.rotation_smoothing=rotation_smoothing - - def fit(self, X, y=None): - return self - - def transform(self, X, y=None): - print("RootTransformer") - Q = [] - - for track in X: - if self.method == 'abdolute_translation_deltas': - new_df = track.values.copy() - xpcol = '%s_Xposition'%track.root_name - ypcol = '%s_Yposition'%track.root_name - zpcol = '%s_Zposition'%track.root_name - - - dxpcol = '%s_dXposition'%track.root_name - dzpcol = '%s_dZposition'%track.root_name - - x=track.values[xpcol].copy() - z=track.values[zpcol].copy() - - if self.position_smoothing>0: - x_sm = filters.gaussian_filter1d(x, self.position_smoothing, axis=0, mode='nearest') - z_sm = filters.gaussian_filter1d(z, self.position_smoothing, axis=0, mode='nearest') - dx = pd.Series(data=x_sm, index=new_df.index).diff() - dz = pd.Series(data=z_sm, index=new_df.index).diff() - new_df[xpcol] = x-x_sm - new_df[zpcol] = z-z_sm - else: - dx = x.diff() - dz = z.diff() - new_df.drop([xpcol, zpcol], axis=1, inplace=True) - - dx[0] = dx[1] - dz[0] = dz[1] - - new_df[dxpcol] = dx - 
new_df[dzpcol] = dz - - new_track = track.clone() - new_track.values = new_df - # end of abdolute_translation_deltas - - elif self.method == 'pos_rot_deltas': - new_track = track.clone() - - # Absolute columns - xp_col = '%s_Xposition'%track.root_name - yp_col = '%s_Yposition'%track.root_name - zp_col = '%s_Zposition'%track.root_name - - #rot_order = track.skeleton[track.root_name]['order'] - #%(joint, rot_order[0]) - - rot_order = track.skeleton[track.root_name]['order'] - r1_col = '%s_%srotation'%(track.root_name, rot_order[0]) - r2_col = '%s_%srotation'%(track.root_name, rot_order[1]) - r3_col = '%s_%srotation'%(track.root_name, rot_order[2]) - - # Delta columns - dxp_col = '%s_dXposition'%track.root_name - dzp_col = '%s_dZposition'%track.root_name - - dxr_col = '%s_dXrotation'%track.root_name - dyr_col = '%s_dYrotation'%track.root_name - dzr_col = '%s_dZrotation'%track.root_name - - positions = np.transpose(np.array([track.values[xp_col], track.values[yp_col], track.values[zp_col]])) - rotations = np.pi/180.0*np.transpose(np.array([track.values[r1_col], track.values[r2_col], track.values[r3_col]])) - - """ Get Trajectory and smooth it""" - trajectory_filterwidth = self.position_smoothing - reference = positions.copy()*np.array([1,0,1]) - if trajectory_filterwidth>0: - reference = filters.gaussian_filter1d(reference, trajectory_filterwidth, axis=0, mode='nearest') - - """ Get Root Velocity """ - velocity = np.diff(reference, axis=0) - velocity = np.vstack((velocity[0,:], velocity)) - - """ Remove Root Translation """ - positions = positions-reference - - """ Get Forward Direction along the x-z plane, assuming character is facig z-forward """ - #forward = [Rotation(f, 'euler', from_deg=True, order=rot_order).rotmat[:,2] for f in rotations] # get the z-axis of the rotation matrix, assuming character is facig z-forward - #print("order:" + rot_order.lower()) - quats = Quaternions.from_euler(rotations, order=rot_order.lower(), world=False) - forward = quats*np.array([[0,0,1]]) - forward[:,1] = 0 - - """ Smooth Forward Direction """ - direction_filterwidth = self.rotation_smoothing - if direction_filterwidth>0: - forward = filters.gaussian_filter1d(forward, direction_filterwidth, axis=0, mode='nearest') - - forward = forward / np.sqrt((forward**2).sum(axis=-1))[...,np.newaxis] - - """ Remove Y Rotation """ - target = np.array([[0,0,1]]).repeat(len(forward), axis=0) - rotation = Quaternions.between(target, forward)[:,np.newaxis] - positions = (-rotation[:,0]) * positions - new_rotations = (-rotation[:,0]) * quats - velocity = (-rotation[:,0]) * velocity - - """ Get Root Rotation """ - #print(rotation[:,0]) - rvelocity = Pivots.from_quaternions(rotation[1:] * -rotation[:-1]).ps - rvelocity = np.vstack((rvelocity[0], rvelocity)) - - eulers = np.array([t3d.euler.quat2euler(q, axes=('s'+rot_order.lower()[::-1]))[::-1] for q in new_rotations])*180.0/np.pi - - new_df = track.values.copy() - - root_pos_x = pd.Series(data=positions[:,0], index=new_df.index) - root_pos_y = pd.Series(data=positions[:,1], index=new_df.index) - root_pos_z = pd.Series(data=positions[:,2], index=new_df.index) - root_pos_x_diff = pd.Series(data=velocity[:,0], index=new_df.index) - root_pos_z_diff = pd.Series(data=velocity[:,2], index=new_df.index) - - root_rot_1 = pd.Series(data=eulers[:,0], index=new_df.index) - root_rot_2 = pd.Series(data=eulers[:,1], index=new_df.index) - root_rot_3 = pd.Series(data=eulers[:,2], index=new_df.index) - root_rot_y_diff = pd.Series(data=rvelocity[:,0], index=new_df.index) - - 
#new_df.drop([xr_col, yr_col, zr_col, xp_col, zp_col], axis=1, inplace=True) - - new_df[xp_col] = root_pos_x - new_df[yp_col] = root_pos_y - new_df[zp_col] = root_pos_z - new_df[dxp_col] = root_pos_x_diff - new_df[dzp_col] = root_pos_z_diff - - new_df[r1_col] = root_rot_1 - new_df[r2_col] = root_rot_2 - new_df[r3_col] = root_rot_3 - #new_df[dxr_col] = root_rot_x_diff - new_df[dyr_col] = root_rot_y_diff - #new_df[dzr_col] = root_rot_z_diff - - new_track.values = new_df - - - elif self.method == 'hip_centric': - new_track = track.clone() - - # Absolute columns - xp_col = '%s_Xposition'%track.root_name - yp_col = '%s_Yposition'%track.root_name - zp_col = '%s_Zposition'%track.root_name - - xr_col = '%s_Xrotation'%track.root_name - yr_col = '%s_Yrotation'%track.root_name - zr_col = '%s_Zrotation'%track.root_name - - new_df = track.values.copy() - - all_zeros = np.zeros(track.values[xp_col].values.shape) - - new_df[xp_col] = pd.Series(data=all_zeros, index=new_df.index) - new_df[yp_col] = pd.Series(data=all_zeros, index=new_df.index) - new_df[zp_col] = pd.Series(data=all_zeros, index=new_df.index) - - new_df[xr_col] = pd.Series(data=all_zeros, index=new_df.index) - new_df[yr_col] = pd.Series(data=all_zeros, index=new_df.index) - new_df[zr_col] = pd.Series(data=all_zeros, index=new_df.index) - - new_track.values = new_df - - #print(new_track.values.columns) - Q.append(new_track) - - return Q - - def inverse_transform(self, X, copy=None, start_pos=None): - Q = [] - - #TODO: simplify this implementation - - startx = 0 - startz = 0 - - if start_pos is not None: - startx, startz = start_pos - - for track in X: - new_track = track.clone() - if self.method == 'abdolute_translation_deltas': - new_df = new_track.values - xpcol = '%s_Xposition'%track.root_name - ypcol = '%s_Yposition'%track.root_name - zpcol = '%s_Zposition'%track.root_name - - - dxpcol = '%s_dXposition'%track.root_name - dzpcol = '%s_dZposition'%track.root_name - - dx = track.values[dxpcol].values - dz = track.values[dzpcol].values - - recx = [startx] - recz = [startz] - - for i in range(dx.shape[0]-1): - recx.append(recx[i]+dx[i+1]) - recz.append(recz[i]+dz[i+1]) - - # recx = [recx[i]+dx[i+1] for i in range(dx.shape[0]-1)] - # recz = [recz[i]+dz[i+1] for i in range(dz.shape[0]-1)] - # recx = dx[:-1] + dx[1:] - # recz = dz[:-1] + dz[1:] - if self.position_smoothing > 0: - new_df[xpcol] = pd.Series(data=new_df[xpcol]+recx, index=new_df.index) - new_df[zpcol] = pd.Series(data=new_df[zpcol]+recz, index=new_df.index) - else: - new_df[xpcol] = pd.Series(data=recx, index=new_df.index) - new_df[zpcol] = pd.Series(data=recz, index=new_df.index) - - new_df.drop([dxpcol, dzpcol], axis=1, inplace=True) - - new_track.values = new_df - # end of abdolute_translation_deltas - - elif self.method == 'pos_rot_deltas': - # Absolute columns - rot_order = track.skeleton[track.root_name]['order'] - xp_col = '%s_Xposition'%track.root_name - yp_col = '%s_Yposition'%track.root_name - zp_col = '%s_Zposition'%track.root_name - - xr_col = '%s_Xrotation'%track.root_name - yr_col = '%s_Yrotation'%track.root_name - zr_col = '%s_Zrotation'%track.root_name - r1_col = '%s_%srotation'%(track.root_name, rot_order[0]) - r2_col = '%s_%srotation'%(track.root_name, rot_order[1]) - r3_col = '%s_%srotation'%(track.root_name, rot_order[2]) - - # Delta columns - dxp_col = '%s_dXposition'%track.root_name - dzp_col = '%s_dZposition'%track.root_name - - dyr_col = '%s_dYrotation'%track.root_name - - positions = np.transpose(np.array([track.values[xp_col], track.values[yp_col], 
track.values[zp_col]])) - rotations = np.pi/180.0*np.transpose(np.array([track.values[r1_col], track.values[r2_col], track.values[r3_col]])) - quats = Quaternions.from_euler(rotations, order=rot_order.lower(), world=False) - - new_df = track.values.copy() - - dx = track.values[dxp_col].values - dz = track.values[dzp_col].values - - dry = track.values[dyr_col].values - - #rec_p = np.array([startx, 0, startz])+positions[0,:] - rec_ry = Quaternions.id(quats.shape[0]) - rec_xp = [0] - rec_zp = [0] - - #rec_r = Quaternions.id(quats.shape[0]) - - for i in range(dx.shape[0]-1): - #print(dry[i]) - q_y = Quaternions.from_angle_axis(np.array(dry[i+1]), np.array([0,1,0])) - rec_ry[i+1] = q_y*rec_ry[i] - #print("dx: + " + str(dx[i+1])) - dp = rec_ry[i+1]*np.array([dx[i+1], 0, dz[i+1]]) - rec_xp.append(rec_xp[i]+dp[0,0]) - rec_zp.append(rec_zp[i]+dp[0,2]) - - rec_r=rec_ry*quats - pp=rec_ry*positions - rec_xp = rec_xp + pp[:,0] - rec_zp = rec_zp + pp[:,2] - - eulers = np.array([t3d.euler.quat2euler(q, axes=('s'+rot_order.lower()[::-1]))[::-1] for q in rec_r])*180.0/np.pi - - new_df = track.values.copy() - - root_rot_1 = pd.Series(data=eulers[:,0], index=new_df.index) - root_rot_2 = pd.Series(data=eulers[:,1], index=new_df.index) - root_rot_3 = pd.Series(data=eulers[:,2], index=new_df.index) - - new_df[xp_col] = pd.Series(data=rec_xp, index=new_df.index) - new_df[zp_col] = pd.Series(data=rec_zp, index=new_df.index) - - new_df[r1_col] = pd.Series(data=root_rot_1, index=new_df.index) - new_df[r2_col] = pd.Series(data=root_rot_2, index=new_df.index) - new_df[r3_col] = pd.Series(data=root_rot_3, index=new_df.index) - - new_df.drop([dyr_col, dxp_col, dzp_col], axis=1, inplace=True) - - - new_track.values = new_df - - #print(new_track.values.columns) - Q.append(new_track) - - return Q - - -class RootCentricPositionNormalizer(BaseEstimator, TransformerMixin): - def __init__(self): - pass - - def fit(self, X, y=None): - return self - - def transform(self, X, y=None): - Q = [] - - for track in X: - new_track = track.clone() - - rxp = '%s_Xposition'%track.root_name - ryp = '%s_Yposition'%track.root_name - rzp = '%s_Zposition'%track.root_name - - projected_root_pos = track.values[[rxp, ryp, rzp]] - - projected_root_pos.loc[:,ryp] = 0 # we want the root's projection on the floor plane as the ref - - new_df = pd.DataFrame(index=track.values.index) - - all_but_root = [joint for joint in track.skeleton if track.root_name not in joint] - # all_but_root = [joint for joint in track.skeleton] - for joint in all_but_root: - new_df['%s_Xposition'%joint] = pd.Series(data=track.values['%s_Xposition'%joint]-projected_root_pos[rxp], index=new_df.index) - new_df['%s_Yposition'%joint] = pd.Series(data=track.values['%s_Yposition'%joint]-projected_root_pos[ryp], index=new_df.index) - new_df['%s_Zposition'%joint] = pd.Series(data=track.values['%s_Zposition'%joint]-projected_root_pos[rzp], index=new_df.index) - - - # keep the root as it is now - new_df[rxp] = track.values[rxp] - new_df[ryp] = track.values[ryp] - new_df[rzp] = track.values[rzp] - - new_track.values = new_df - - Q.append(new_track) - - return Q - - def inverse_transform(self, X, copy=None): - Q = [] - - for track in X: - new_track = track.clone() - - rxp = '%s_Xposition'%track.root_name - ryp = '%s_Yposition'%track.root_name - rzp = '%s_Zposition'%track.root_name - - projected_root_pos = track.values[[rxp, ryp, rzp]] - - projected_root_pos.loc[:,ryp] = 0 # we want the root's projection on the floor plane as the ref - - new_df = pd.DataFrame(index=track.values.index) - - 
for joint in track.skeleton: - new_df['%s_Xposition'%joint] = pd.Series(data=track.values['%s_Xposition'%joint]+projected_root_pos[rxp], index=new_df.index) - new_df['%s_Yposition'%joint] = pd.Series(data=track.values['%s_Yposition'%joint]+projected_root_pos[ryp], index=new_df.index) - new_df['%s_Zposition'%joint] = pd.Series(data=track.values['%s_Zposition'%joint]+projected_root_pos[rzp], index=new_df.index) - - - new_track.values = new_df - - Q.append(new_track) - - return Q - -class Flattener(BaseEstimator, TransformerMixin): - def __init__(self): - pass - - def fit(self, X, y=None): - return self - - def transform(self, X, y=None): - return np.concatenate(X, axis=0) - -class ConstantsRemover(BaseEstimator, TransformerMixin): - ''' - For now it just looks at the first track - ''' - - def __init__(self, eps = 1e-6, only_cols=None): - self.eps = eps - self.only_cols = only_cols - - - def fit(self, X, y=None): - stds = X[0].values.std() - cols = X[0].values.columns.values - if self.only_cols is not None: - self.const_dims_ = [c for c in cols if ((stds[c] < self.eps).any()) and c in self.only_cols] - else: - self.const_dims_ = [c for c in cols if (stds[c] < self.eps).any()] - # self.const_values_ = {c:X[0].values[c].values[0] for c in cols if (stds[c] < self.eps).any()} - self.const_values_ = {c:X[0].values[c].values[0] for c in cols if self.const_dims_} - return self - - def transform(self, X, y=None): - Q = [] - - - for track in X: - t2 = track.clone() - #for key in t2.skeleton.keys(): - # if key in self.ConstDims_: - # t2.skeleton.pop(key) - #print(track.values.columns.difference(self.const_dims_)) - t2.values.drop(self.const_dims_, axis=1, inplace=True) - #t2.values = track.values[track.values.columns.difference(self.const_dims_)] - Q.append(t2) - - return Q - - def inverse_transform(self, X, copy=None): - Q = [] - - for track in X: - t2 = track.clone() - for d in self.const_dims_: - t2.values[d] = self.const_values_[d] -# t2.values.assign(d=pd.Series(data=self.const_values_[d], index = t2.values.index)) - Q.append(t2) - - return Q - -class ListStandardScaler(BaseEstimator, TransformerMixin): - def __init__(self, is_DataFrame=False): - self.is_DataFrame = is_DataFrame - - def fit(self, X, y=None): - if self.is_DataFrame: - X_train_flat = np.concatenate([m.values for m in X], axis=0) - else: - X_train_flat = np.concatenate([m for m in X], axis=0) - - self.data_mean_ = np.mean(X_train_flat, axis=0) - self.data_std_ = np.std(X_train_flat, axis=0) - - return self - - def transform(self, X, y=None): - Q = [] - - for track in X: - if self.is_DataFrame: - normalized_track = track.copy() - normalized_track.values = (track.values - self.data_mean_) / self.data_std_ - else: - normalized_track = (track - self.data_mean_) / self.data_std_ - - Q.append(normalized_track) - - if self.is_DataFrame: - return Q - else: - return np.array(Q) - - def inverse_transform(self, X, copy=None): - Q = [] - - for track in X: - - if self.is_DataFrame: - unnormalized_track = track.copy() - unnormalized_track.values = (track.values * self.data_std_) + self.data_mean_ - else: - unnormalized_track = (track * self.data_std_) + self.data_mean_ - - Q.append(unnormalized_track) - - if self.is_DataFrame: - return Q - else: - return np.array(Q) - -class ListMinMaxScaler(BaseEstimator, TransformerMixin): - def __init__(self, is_DataFrame=False): - self.is_DataFrame = is_DataFrame - - def fit(self, X, y=None): - if self.is_DataFrame: - X_train_flat = np.concatenate([m.values for m in X], axis=0) - else: - X_train_flat = 
np.concatenate([m for m in X], axis=0) - - self.data_max_ = np.max(X_train_flat, axis=0) - self.data_min_ = np.min(X_train_flat, axis=0) - - return self - - def transform(self, X, y=None): - Q = [] - - for track in X: - if self.is_DataFrame: - normalized_track = track.copy() - normalized_track.values = (track.values - self.data_min_) / (self.data_max_ - self.data_min_) - else: - normalized_track = (track - self.data_min_) / (self.data_max_ - self.data_min_) - - Q.append(normalized_track) - - if self.is_DataFrame: - return Q - else: - return np.array(Q) - - def inverse_transform(self, X, copy=None): - Q = [] - - for track in X: - - if self.is_DataFrame: - unnormalized_track = track.copy() - unnormalized_track.values = (track.values * (self.data_max_ - self.data_min_)) + self.data_min_ - else: - unnormalized_track = (track * (self.data_max_ - self.data_min_)) + self.data_min_ - - Q.append(unnormalized_track) - - if self.is_DataFrame: - return Q - else: - return np.array(Q) - -class DownSampler(BaseEstimator, TransformerMixin): - def __init__(self, tgt_fps, keep_all=False): - self.tgt_fps = tgt_fps - self.keep_all = keep_all - - - def fit(self, X, y=None): - - return self - - def transform(self, X, y=None): - Q = [] - - for track in X: - orig_fps=round(1.0/track.framerate) - rate = orig_fps//self.tgt_fps - if orig_fps%self.tgt_fps!=0: - print("error orig_fps (" + str(orig_fps) + ") is not dividable with tgt_fps (" + str(self.tgt_fps) + ")") - else: - print("downsampling with rate: " + str(rate)) - - #print(track.values.size) - for ii in range(0,rate): - new_track = track.clone() - new_track.values = track.values[ii:-1:rate].copy() - #print(new_track.values.size) - #new_track = track[0:-1:self.rate] - new_track.framerate = 1.0/self.tgt_fps - Q.append(new_track) - if not self.keep_all: - break - - return Q - - def inverse_transform(self, X, copy=None): - return X - -class ReverseTime(BaseEstimator, TransformerMixin): - def __init__(self, append=True): - self.append = append - - - def fit(self, X, y=None): - - return self - - def transform(self, X, y=None): - Q = [] - if self.append: - for track in X: - Q.append(track) - for track in X: - new_track = track.clone() - new_track.values = track.values[-1::-1] - Q.append(new_track) - - return Q - - def inverse_transform(self, X, copy=None): - return X - -#TODO: JointsSelector (x) -#TODO: SegmentMaker -#TODO: DynamicFeaturesAdder -#TODO: ShapeFeaturesAdder -#TODO: DataFrameNumpier (x) - -class TemplateTransform(BaseEstimator, TransformerMixin): - def __init__(self): - pass - - def fit(self, X, y=None): - return self - - def transform(self, X, y=None): - return X diff --git a/spaces/Marshalls/testmtd/training/__init__.py b/spaces/Marshalls/testmtd/training/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Masa-digital-art/movie-trailer-16k/constraints.md b/spaces/Masa-digital-art/movie-trailer-16k/constraints.md deleted file mode 100644 index a9232b265193e6ceedcf11327b62f3e51200c82e..0000000000000000000000000000000000000000 --- a/spaces/Masa-digital-art/movie-trailer-16k/constraints.md +++ /dev/null @@ -1,59 +0,0 @@ -# instructions - -- you are an excellent creative director -- Split user input into strings of appropriate length, abstractly interpret them, and set keywords that can be used as movie ideas without losing important context -- Follow the task below to suggest content for your movie trailer - -# tasks - -- Using keywords that are the source of 
movie ideas, perform the following tasks in order, structure them in markdown format, and output them according to the template. - -## 1. Generate logo/film company animation (0-5 seconds) - -- Display movie company logos and animations -- Emphasize the brand image of the film company and give viewers a sense of the quality of the film -- Generate impressive lines of characters, narration that makes you feel like a big hit, landscape descriptions in the play, camera angles, impressive productions, and sound effects. - -## 2. Generate the opening shot (5-10 seconds) - -- The first shot shows the viewer the setting and tone of the movie -- It can be a city skyline in the background of a movie or a close-up of the main character -- Generate impressive lines of characters, narration that makes you feel like a big hit, landscape descriptions in the play, camera angles, impressive productions, and sound effects. - -## 3. Generate movie theme (10-20 seconds) - -- Introduce movie themes -- Includes the film's major conflicts and narrative central themes -- Characters and locations are also initialized. -- Generate impressive lines of characters, narration that makes you feel like a big hit, landscape descriptions in the play, camera angles, impressive productions, and sound effects. - -## 4. Generate the main cast and their roles (20-45 seconds) - -- Introduce the main cast and their roles -- Show your audience the faces and names of your cast members to make them anticipate their performance -- Generate impressive lines of characters, narration that makes you feel like a big hit, landscape descriptions in the play, camera angles, impressive productions, and sound effects. - -## 5. Generate movie highlights (45-80 seconds) -- Extract the most exciting parts of the movie and get the audience excited -- Can contain action scenes, touching moments and important plot points -- Generate impressive lines of characters, narration that makes you feel like a big hit, landscape descriptions in the play, camera angles, impressive productions, and sound effects. - -## 6. Generate a premonition of the climax (80-90 seconds) - -- Display the scene leading to the climax of the movie as a teaser -- However, avoid revealing the ending and pique the viewer's curiosity -- Generate impressive lines of characters, narration that makes you feel like a big hit, landscape descriptions in the play, camera angles, impressive productions, and sound effects. - -## 7. Generate movie title and release date (90-100 seconds) - -- Show movie title and release date -- Keeps viewers excited and motivated to watch the movie -- Generate impressive lines of characters, narration that makes you feel like a big hit, landscape descriptions in the play, camera angles, impressive productions, and sound effects. - -## 8. Generate end card (100-120 seconds) - -- Displays links to the movie's official website, official hashtags, and the movie's official SNS accounts. -- Provide an avenue for viewers to learn more about movies or participate in conversations about movies -- Generate impressive lines of characters, narration that makes you feel like a big hit, landscape descriptions in the play, camera angles, impressive productions, and sound effects. 
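The eight beats above tile a fixed 0–120 second trailer. As a quick reference only — this data structure is not part of the original prompt file, and the names are illustrative — the sections and their time ranges could be captured like this so a script can validate or render a generated plan:

```python
# Illustrative summary (assumed, not from the original constraints.md):
# the eight trailer beats and their time ranges in seconds.
TRAILER_BEATS = [
    ("logo/film company animation", 0, 5),
    ("opening shot", 5, 10),
    ("movie theme", 10, 20),
    ("main cast and roles", 20, 45),
    ("movie highlights", 45, 80),
    ("premonition of the climax", 80, 90),
    ("title and release date", 90, 100),
    ("end card", 100, 120),
]

# Sanity check: consecutive beats should cover the trailer with no gaps.
assert all(prev[2] == nxt[1] for prev, nxt in zip(TRAILER_BEATS, TRAILER_BEATS[1:]))
```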
- -# template \ No newline at end of file diff --git a/spaces/MashiroSA/sovits-emu-voice-transform/CppDataProcess/Wav.hpp b/spaces/MashiroSA/sovits-emu-voice-transform/CppDataProcess/Wav.hpp deleted file mode 100644 index c633366256a47fad29f8a385e03f847c0c94d1cb..0000000000000000000000000000000000000000 --- a/spaces/MashiroSA/sovits-emu-voice-transform/CppDataProcess/Wav.hpp +++ /dev/null @@ -1,99 +0,0 @@ -class Wav { -public: - - struct WAV_HEADER { - char RIFF[4] = { 'R','I','F','F' }; //RIFF identifier - unsigned long ChunkSize; //file size - 8 - char WAVE[4] = { 'W','A','V','E' }; //WAVE flag - char fmt[4] = { 'f','m','t',' ' }; //fmt chunk - unsigned long Subchunk1Size; //fmt chunk size - unsigned short AudioFormat; //audio format - unsigned short NumOfChan; //number of channels - unsigned long SamplesPerSec; //sample rate - unsigned long bytesPerSec; //bytes per second - unsigned short blockAlign; //block align (bytes per sample frame) - unsigned short bitsPerSample; //bits per sample - char Subchunk2ID[4] = { 'd','a','t','a' }; //data chunk - unsigned long Subchunk2Size; //data chunk size - WAV_HEADER(unsigned long cs = 36, unsigned long sc1s = 16, unsigned short af = 1, unsigned short nc = 1, unsigned long sr = 22050, unsigned long bps = 44100, unsigned short ba = 2, unsigned short bips = 16, unsigned long sc2s = 0) :ChunkSize(cs), Subchunk1Size(sc1s), AudioFormat(af), NumOfChan(nc), SamplesPerSec(sr), bytesPerSec(bps), blockAlign(ba), bitsPerSample(bips), Subchunk2Size(sc2s) {} - }; - using iterator = int16_t*; - Wav(unsigned long cs = 36, unsigned long sc1s = 16, unsigned short af = 1, unsigned short nc = 1, unsigned long sr = 22050, unsigned long bps = 44100, unsigned short ba = 2, unsigned short bips = 16, unsigned long sc2s = 0) :header({ - cs, - sc1s, - af, - nc, - sr, - bps, - ba, - bips, - sc2s - }), Data(nullptr), StartPos(44) { - dataSize = 0; - SData = nullptr; - } - Wav(unsigned long sr, unsigned long length, const void* data) :header({ - 36, - 16, - 1, - 1, - sr, - sr * 2, - 2, - 16, - length - }), Data(new char[length + 1]), StartPos(44) - { - header.ChunkSize = 36 + length; - memcpy(Data, data, length); - SData = reinterpret_cast<int16_t*>(Data); - dataSize = length / 2; - } - Wav(const wchar_t* Path); - Wav(const Wav& input); - Wav(Wav&& input) noexcept; - Wav& operator=(const Wav& input) = delete; - Wav& operator=(Wav&& input) noexcept; - ~Wav() { destory(); } - Wav& cat(const Wav& input); - bool isEmpty() const { return this->header.Subchunk2Size == 0; } - const char* getData() const { return Data; } - char* getData() { return Data; } - WAV_HEADER getHeader() const { return header; } - WAV_HEADER& Header() { return header; } - void destory() const { delete[] Data; } - void changeData(const void* indata,long length,int sr) - { - delete[] Data; - Data = new char[length]; - memcpy(Data, indata, length); - header.ChunkSize = 36 + length; - header.Subchunk2Size = length; - header.SamplesPerSec = sr; - header.bytesPerSec = 2 * sr; - } - int16_t& operator[](const size_t index) const - { - if (index < dataSize) - return *(SData + index); - return *(SData + dataSize - 1); - } - iterator begin() const - { - return reinterpret_cast<int16_t*>(Data); - } - iterator end() const - { - return reinterpret_cast<int16_t*>(Data + header.Subchunk2Size); - } - int64_t getDataLen()const - { - return static_cast<int64_t>(dataSize); - } -private: - WAV_HEADER header; - char* Data; - int16_t* SData; - size_t dataSize; - int StartPos; -}; diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/cldm/ddim_hacked.py b/spaces/Mellow-ai/PhotoAI_Mellow/cldm/ddim_hacked.py deleted file mode 100644 index 
25b1bc947272ad14d7f7e5e4d1809005253b63d0..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/cldm/ddim_hacked.py +++ /dev/null @@ -1,317 +0,0 @@ -"""SAMPLING ONLY.""" - -import torch -import numpy as np -from tqdm import tqdm - -from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like, extract_into_tensor - - -class DDIMSampler(object): - def __init__(self, model, schedule="linear", **kwargs): - super().__init__() - self.model = model - self.ddpm_num_timesteps = model.num_timesteps - self.schedule = schedule - - def register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - if attr.device != torch.device("cuda"): - attr = attr.to(torch.device("cuda")) - setattr(self, name, attr) - - def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True): - self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps, - num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose) - alphas_cumprod = self.model.alphas_cumprod - assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep' - to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device) - - self.register_buffer('betas', to_torch(self.model.betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu()))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu()))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1))) - - # ddim sampling parameters - ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(), - ddim_timesteps=self.ddim_timesteps, - eta=ddim_eta,verbose=verbose) - self.register_buffer('ddim_sigmas', ddim_sigmas) - self.register_buffer('ddim_alphas', ddim_alphas) - self.register_buffer('ddim_alphas_prev', ddim_alphas_prev) - self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. - ddim_alphas)) - sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt( - (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * ( - 1 - self.alphas_cumprod / self.alphas_cumprod_prev)) - self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... 
- dynamic_threshold=None, - ucg_schedule=None, - **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - ctmp = conditioning[list(conditioning.keys())[0]] - while isinstance(ctmp, list): ctmp = ctmp[0] - cbs = ctmp.shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - - elif isinstance(conditioning, list): - for ctmp in conditioning: - if ctmp.shape[0] != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose) - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - print(f'Data shape for DDIM sampling is {size}, eta {eta}') - - samples, intermediates = self.ddim_sampling(conditioning, size, - callback=callback, - img_callback=img_callback, - quantize_denoised=quantize_x0, - mask=mask, x0=x0, - ddim_use_original_steps=False, - noise_dropout=noise_dropout, - temperature=temperature, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - x_T=x_T, - log_every_t=log_every_t, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - dynamic_threshold=dynamic_threshold, - ucg_schedule=ucg_schedule - ) - return samples, intermediates - - @torch.no_grad() - def ddim_sampling(self, cond, shape, - x_T=None, ddim_use_original_steps=False, - callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, log_every_t=100, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None, dynamic_threshold=None, - ucg_schedule=None): - device = self.model.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - if timesteps is None: - timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps - elif timesteps is not None and not ddim_use_original_steps: - subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1 - timesteps = self.ddim_timesteps[:subset_end] - - intermediates = {'x_inter': [img], 'pred_x0': [img]} - time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps) - total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps) - - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((b,), step, device=device, dtype=torch.long) - - if mask is not None: - assert x0 is not None - img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass? - img = img_orig * mask + (1. 
- mask) * img - - if ucg_schedule is not None: - assert len(ucg_schedule) == len(time_range) - unconditional_guidance_scale = ucg_schedule[i] - - outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps, - quantize_denoised=quantize_denoised, temperature=temperature, - noise_dropout=noise_dropout, score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - dynamic_threshold=dynamic_threshold) - img, pred_x0 = outs - if callback: callback(i) - if img_callback: img_callback(pred_x0, i) - - if index % log_every_t == 0 or index == total_steps - 1: - intermediates['x_inter'].append(img) - intermediates['pred_x0'].append(pred_x0) - - return img, intermediates - - @torch.no_grad() - def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None, - dynamic_threshold=None): - b, *_, device = *x.shape, x.device - - if unconditional_conditioning is None or unconditional_guidance_scale == 1.: - model_output = self.model.apply_model(x, t, c) - else: - model_t = self.model.apply_model(x, t, c) - model_uncond = self.model.apply_model(x, t, unconditional_conditioning) - model_output = model_uncond + unconditional_guidance_scale * (model_t - model_uncond) - - if self.model.parameterization == "v": - e_t = self.model.predict_eps_from_z_and_v(x, t, model_output) - else: - e_t = model_output - - if score_corrector is not None: - assert self.model.parameterization == "eps", 'not implemented' - e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs) - - alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas - alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev - sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas - sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas - # select parameters corresponding to the currently considered timestep - a_t = torch.full((b, 1, 1, 1), alphas[index], device=device) - a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device) - sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device) - sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device) - - # current prediction for x_0 - if self.model.parameterization != "v": - pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt() - else: - pred_x0 = self.model.predict_start_from_z_and_v(x, t, model_output) - - if quantize_denoised: - pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0) - - if dynamic_threshold is not None: - raise NotImplementedError() - - # direction pointing to x_t - dir_xt = (1. 
- a_prev - sigma_t**2).sqrt() * e_t - noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise - return x_prev, pred_x0 - - @torch.no_grad() - def encode(self, x0, c, t_enc, use_original_steps=False, return_intermediates=None, - unconditional_guidance_scale=1.0, unconditional_conditioning=None, callback=None): - timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps - num_reference_steps = timesteps.shape[0] - - assert t_enc <= num_reference_steps - num_steps = t_enc - - if use_original_steps: - alphas_next = self.alphas_cumprod[:num_steps] - alphas = self.alphas_cumprod_prev[:num_steps] - else: - alphas_next = self.ddim_alphas[:num_steps] - alphas = torch.tensor(self.ddim_alphas_prev[:num_steps]) - - x_next = x0 - intermediates = [] - inter_steps = [] - for i in tqdm(range(num_steps), desc='Encoding Image'): - t = torch.full((x0.shape[0],), timesteps[i], device=self.model.device, dtype=torch.long) - if unconditional_guidance_scale == 1.: - noise_pred = self.model.apply_model(x_next, t, c) - else: - assert unconditional_conditioning is not None - e_t_uncond, noise_pred = torch.chunk( - self.model.apply_model(torch.cat((x_next, x_next)), torch.cat((t, t)), - torch.cat((unconditional_conditioning, c))), 2) - noise_pred = e_t_uncond + unconditional_guidance_scale * (noise_pred - e_t_uncond) - - xt_weighted = (alphas_next[i] / alphas[i]).sqrt() * x_next - weighted_noise_pred = alphas_next[i].sqrt() * ( - (1 / alphas_next[i] - 1).sqrt() - (1 / alphas[i] - 1).sqrt()) * noise_pred - x_next = xt_weighted + weighted_noise_pred - if return_intermediates and i % ( - num_steps // return_intermediates) == 0 and i < num_steps - 1: - intermediates.append(x_next) - inter_steps.append(i) - elif return_intermediates and i >= num_steps - 2: - intermediates.append(x_next) - inter_steps.append(i) - if callback: callback(i) - - out = {'x_encoded': x_next, 'intermediate_steps': inter_steps} - if return_intermediates: - out.update({'intermediates': intermediates}) - return x_next, out - - @torch.no_grad() - def stochastic_encode(self, x0, t, use_original_steps=False, noise=None): - # fast, but does not allow for exact reconstruction - # t serves as an index to gather the correct alphas - if use_original_steps: - sqrt_alphas_cumprod = self.sqrt_alphas_cumprod - sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod - else: - sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas) - sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas - - if noise is None: - noise = torch.randn_like(x0) - return (extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 + - extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise) - - @torch.no_grad() - def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None, - use_original_steps=False, callback=None): - - timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps - timesteps = timesteps[:t_start] - - time_range = np.flip(timesteps) - total_steps = timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='Decoding image', total=total_steps) - x_dec = x_latent - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, 
dtype=torch.long) - x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning) - if callback: callback(i) - return x_dec diff --git a/spaces/NingKanae/anime-voice-generator/monotonic_align/core.py b/spaces/NingKanae/anime-voice-generator/monotonic_align/core.py deleted file mode 100644 index 5ff728cd74c9228346a82ec64a9829cb98ad315e..0000000000000000000000000000000000000000 --- a/spaces/NingKanae/anime-voice-generator/monotonic_align/core.py +++ /dev/null @@ -1,36 +0,0 @@ -import numba - - -@numba.jit(numba.void(numba.int32[:, :, ::1], numba.float32[:, :, ::1], numba.int32[::1], numba.int32[::1]), - nopython=True, nogil=True) -def maximum_path_jit(paths, values, t_ys, t_xs): - b = paths.shape[0] - max_neg_val = -1e9 - for i in range(int(b)): - path = paths[i] - value = values[i] - t_y = t_ys[i] - t_x = t_xs[i] - - v_prev = v_cur = 0.0 - index = t_x - 1 - - for y in range(t_y): - for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - if x == y: - v_cur = max_neg_val - else: - v_cur = value[y - 1, x] - if x == 0: - if y == 0: - v_prev = 0. - else: - v_prev = max_neg_val - else: - v_prev = value[y - 1, x - 1] - value[y, x] += max(v_prev, v_cur) - - for y in range(t_y - 1, -1, -1): - path[y, index] = 1 - if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]): - index = index - 1 \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/adaptive_softmax.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/adaptive_softmax.py deleted file mode 100644 index ae0c77ba0f6ee98501306d66cbc4a948b4ade0f7..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/adaptive_softmax.py +++ /dev/null @@ -1,268 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import functools -import operator - -import torch -import torch.nn.functional as F -from fairseq.modules.fairseq_dropout import FairseqDropout -from fairseq.modules.quant_noise import quant_noise -from torch import nn - - -class TiedLinear(nn.Module): - def __init__(self, weight, transpose): - super().__init__() - self.weight = weight - self.transpose = transpose - - def forward(self, input): - return F.linear(input, self.weight.t() if self.transpose else self.weight) - - -class TiedHeadModule(nn.Module): - def __init__(self, weights, input_dim, num_classes, q_noise, qn_block_size): - super().__init__() - tied_emb, _ = weights - self.num_words, emb_dim = tied_emb.size() - - self.word_proj = quant_noise( - TiedLinear(tied_emb, transpose=False), q_noise, qn_block_size - ) - if input_dim != emb_dim: - self.word_proj = nn.Sequential( - quant_noise( - nn.Linear(input_dim, emb_dim, bias=False), q_noise, qn_block_size - ), - self.word_proj, - ) - - self.class_proj = quant_noise( - nn.Linear(input_dim, num_classes, bias=False), q_noise, qn_block_size - ) - self.out_dim = self.num_words + num_classes - - self.register_buffer("_float_tensor", torch.FloatTensor(1)) - - def forward(self, input): - inp_sz = functools.reduce(operator.mul, input.shape[:-1], 1) - out = self._float_tensor.new(inp_sz, self.out_dim) - out[:, : self.num_words] = self.word_proj(input.view(inp_sz, -1)) - out[:, self.num_words :] = self.class_proj(input.view(inp_sz, -1)) - return out - - -class AdaptiveSoftmax(nn.Module): - """ - This is an implementation of the efficient softmax approximation for - graphical processing units (GPU), described in the paper "Efficient softmax - approximation for GPUs" (http://arxiv.org/abs/1609.04309). - """ - - def __init__( - self, - vocab_size, - input_dim, - cutoff, - dropout, - factor=4.0, - adaptive_inputs=None, - tie_proj=False, - q_noise=0, - qn_block_size=8, - ): - super().__init__() - - if vocab_size > cutoff[-1]: - cutoff = cutoff + [vocab_size] - else: - assert ( - vocab_size == cutoff[-1] - ), "cannot specify cutoff larger than vocab size" - - output_dim = cutoff[0] + len(cutoff) - 1 - - self.vocab_size = vocab_size - self.cutoff = cutoff - self.dropout_module = FairseqDropout( - dropout, module_name=self.__class__.__name__ - ) - self.input_dim = input_dim - self.factor = factor - self.q_noise = q_noise - self.qn_block_size = qn_block_size - - self.lsm = nn.LogSoftmax(dim=1) - - if adaptive_inputs is not None: - self.head = TiedHeadModule( - adaptive_inputs.weights_for_band(0), - input_dim, - len(cutoff) - 1, - self.q_noise, - self.qn_block_size, - ) - else: - self.head = quant_noise( - nn.Linear(input_dim, output_dim, bias=False), - self.q_noise, - self.qn_block_size, - ) - - self._make_tail(adaptive_inputs, tie_proj) - - def init_weights(m): - if ( - hasattr(m, "weight") - and not isinstance(m, TiedLinear) - and not isinstance(m, TiedHeadModule) - ): - nn.init.xavier_uniform_(m.weight) - - self.apply(init_weights) - - self.register_buffer("version", torch.LongTensor([1])) - - def _make_tail(self, adaptive_inputs=None, tie_proj=False): - self.tail = nn.ModuleList() - for i in range(len(self.cutoff) - 1): - dim = int(self.input_dim // self.factor ** (i + 1)) - - tied_emb, tied_proj = ( - adaptive_inputs.weights_for_band(i + 1) - if adaptive_inputs is not None - else (None, None) - ) - - if tied_proj is not None: - if tie_proj: - proj = quant_noise( - TiedLinear(tied_proj, transpose=True), - self.q_noise, - self.qn_block_size, - ) - else: - proj = quant_noise( - 
nn.Linear(tied_proj.size(0), tied_proj.size(1), bias=False), - self.q_noise, - self.qn_block_size, - ) - else: - proj = quant_noise( - nn.Linear(self.input_dim, dim, bias=False), - self.q_noise, - self.qn_block_size, - ) - - if tied_emb is None: - out_proj = nn.Linear( - dim, self.cutoff[i + 1] - self.cutoff[i], bias=False - ) - else: - out_proj = TiedLinear(tied_emb, transpose=False) - - m = nn.Sequential( - proj, - nn.Dropout(self.dropout_module.p), - quant_noise(out_proj, self.q_noise, self.qn_block_size), - ) - - self.tail.append(m) - - def upgrade_state_dict_named(self, state_dict, name): - version_name = name + ".version" - if version_name not in state_dict: - raise Exception("This version of the model is no longer supported") - - def adapt_target(self, target): - """ - In order to be efficient, the AdaptiveSoftMax does not compute the - scores for all the word of the vocabulary for all the examples. It is - thus necessary to call the method adapt_target of the AdaptiveSoftMax - layer inside each forward pass. - """ - - target = target.view(-1) - new_target = [target.clone()] - target_idxs = [] - - for i in range(len(self.cutoff) - 1): - mask = target.ge(self.cutoff[i]).mul(target.lt(self.cutoff[i + 1])) - new_target[0][mask] = self.cutoff[0] + i - - if mask.any(): - target_idxs.append(mask.nonzero(as_tuple=False).squeeze(1)) - new_target.append(target[mask].add(-self.cutoff[i])) - else: - target_idxs.append(None) - new_target.append(None) - - return new_target, target_idxs - - def forward(self, input, target): - """ - Args: - input: (b x t x d) - target: (b x t) - Returns: - 2 lists: output for each cutoff section and new targets by cut off - """ - - input = input.contiguous().view(-1, input.size(-1)) - input = self.dropout_module(input) - - new_target, target_idxs = self.adapt_target(target) - output = [self.head(input)] - - for i in range(len(target_idxs)): - if target_idxs[i] is not None: - output.append(self.tail[i](input.index_select(0, target_idxs[i]))) - else: - output.append(None) - - return output, new_target - - def get_log_prob(self, input, target): - """ - Computes the log probabilities for all the words of the vocabulary, - given a 2D tensor of hidden vectors. 
- """ - - bsz, length, dim = input.size() - input = input.contiguous().view(-1, dim) - - if target is not None: - _, target_idxs = self.adapt_target(target) - else: - target_idxs = None - - head_y = self.head(input) - log_probs = head_y.new_zeros(input.size(0), self.vocab_size) - - head_sz = self.cutoff[0] + len(self.tail) - log_probs[:, :head_sz] = self.lsm(head_y) - tail_priors = log_probs[:, self.cutoff[0] : head_sz].clone() - - for i in range(len(self.tail)): - start = self.cutoff[i] - end = self.cutoff[i + 1] - - if target_idxs is None: - tail_out = log_probs[:, start:end] - tail_out.copy_(self.tail[i](input)) - log_probs[:, start:end] = self.lsm(tail_out).add_( - tail_priors[:, i, None] - ) - elif target_idxs[i] is not None: - idxs = target_idxs[i] - tail_out = log_probs[idxs, start:end] - tail_out.copy_(self.tail[i](input[idxs])) - log_probs[idxs, start:end] = self.lsm(tail_out).add_( - tail_priors[idxs, i, None] - ) - - log_probs = log_probs.view(bsz, length, -1) - return log_probs diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/translation_moe/translation_moe_src/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/translation_moe/translation_moe_src/__init__.py deleted file mode 100644 index c0abe53e973b4bb31cfb062708965d002c79b6e7..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/translation_moe/translation_moe_src/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import translation_moe # noqa diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/unsupervised_quality_estimation/meteor.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/unsupervised_quality_estimation/meteor.py deleted file mode 100644 index 2ee0448cf1f167f6f3ecee56ad807922cffb0956..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/unsupervised_quality_estimation/meteor.py +++ /dev/null @@ -1,109 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import math -import os -import subprocess -import sys -import tempfile -from collections import defaultdict -from itertools import combinations - - -def read_translations(path, n_repeats): - segment_counter = 0 - segment_translations = [] - translations = defaultdict(list) - for line in open(path): - segment_translations.append(" ".join(line.split())) - if len(segment_translations) == n_repeats: - translations[segment_counter] = segment_translations - segment_translations = [] - segment_counter += 1 - return translations - - -def generate_input(translations, n_repeats): - _, ref_path = tempfile.mkstemp() - _, mt_path = tempfile.mkstemp() - ref_fh = open(ref_path, "w") - mt_fh = open(mt_path, "w") - for segid in sorted(translations.keys()): - assert len(translations[segid]) == n_repeats - indexes = combinations(range(n_repeats), 2) - for idx1, idx2 in indexes: - mt_fh.write(translations[segid][idx1].strip() + "\n") - ref_fh.write(translations[segid][idx2].strip() + "\n") - sys.stderr.write("\nSaved translations to %s and %s" % (ref_path, mt_path)) - return ref_path, mt_path - - -def run_meteor(ref_path, mt_path, metric_path, lang="en"): - _, out_path = tempfile.mkstemp() - subprocess.call( - [ - "java", - "-Xmx2G", - "-jar", - metric_path, - mt_path, - ref_path, - "-p", - "0.5 0.2 0.6 0.75", # default parameters, only changed alpha to give equal weight to P and R - "-norm", - "-l", - lang, - ], - stdout=open(out_path, "w"), - ) - os.remove(ref_path) - os.remove(mt_path) - sys.stderr.write("\nSaved Meteor output to %s" % out_path) - return out_path - - -def read_output(meteor_output_path, n_repeats): - n_combinations = math.factorial(n_repeats) / ( - math.factorial(2) * math.factorial(n_repeats - 2) - ) - raw_scores = [] - average_scores = [] - for line in open(meteor_output_path): - if not line.startswith("Segment "): - continue - score = float(line.strip().split("\t")[1]) - raw_scores.append(score) - if len(raw_scores) == n_combinations: - average_scores.append(sum(raw_scores) / n_combinations) - raw_scores = [] - os.remove(meteor_output_path) - return average_scores - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("-i", "--infile") - parser.add_argument("-n", "--repeat_times", type=int) - parser.add_argument("-m", "--meteor") - parser.add_argument("-o", "--output") - args = parser.parse_args() - - translations = read_translations(args.infile, args.repeat_times) - sys.stderr.write("\nGenerating input for Meteor...") - ref_path, mt_path = generate_input(translations, args.repeat_times) - sys.stderr.write("\nRunning Meteor...") - out_path = run_meteor(ref_path, mt_path, args.meteor) - sys.stderr.write("\nReading output...") - scores = read_output(out_path, args.repeat_times) - sys.stderr.write("\nWriting results...") - with open(args.output, "w") as o: - for scr in scores: - o.write("{}\n".format(scr)) - o.close() - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/dynamicconv_layer/setup.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/dynamicconv_layer/setup.py deleted file mode 100644 index 6a21f7e2ee0840a3b251522275a0b32a856951d7..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/dynamicconv_layer/setup.py +++ /dev/null @@ -1,23 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from setuptools import setup -from torch.utils.cpp_extension import BuildExtension, CUDAExtension - - -setup( - name="dynamicconv_layer", - ext_modules=[ - CUDAExtension( - name="dynamicconv_cuda", - sources=[ - "dynamicconv_cuda.cpp", - "dynamicconv_cuda_kernel.cu", - ], - ), - ], - cmdclass={"build_ext": BuildExtension}, -) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/sparse_multihead_attention.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/sparse_multihead_attention.py deleted file mode 100644 index 3cbd9d6785886e319aab0601517e27df733b6f97..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/sparse_multihead_attention.py +++ /dev/null @@ -1,140 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch - -from .multihead_attention import MultiheadAttention - - -class SparseMultiheadAttention(MultiheadAttention): - """Sparse Multi-Headed Attention. - - "Generating Long Sequences with Sparse Transformers". Implements - fixed factorized self attention, where l=stride and c=expressivity. - A(1) includes all words in the stride window and A(2) takes a summary of c - words from the end of each stride window. - If is_bidirectional=False, we do not include any words past the current word, - as in the paper. - """ - - def __init__( - self, - embed_dim, - num_heads, - kdim=None, - vdim=None, - dropout=0.0, - bias=True, - add_bias_kv=False, - add_zero_attn=False, - self_attention=False, - encoder_decoder_attention=False, - stride=32, - expressivity=8, - is_bidirectional=True, - ): - - super().__init__( - embed_dim, - num_heads, - kdim, - vdim, - dropout, - bias, - add_bias_kv, - add_zero_attn, - self_attention, - encoder_decoder_attention, - ) - - self.is_bidirectional = is_bidirectional - self.stride = stride - self.expressivity = expressivity - assert self.stride > 0 and self.stride >= self.expressivity - - # Used for Ai(2) calculations - beginning of [l-c, l] range - def compute_checkpoint(self, word_index): - if word_index % self.stride == 0 and word_index != 0: - checkpoint_index = word_index - self.expressivity - else: - checkpoint_index = ( - math.floor(word_index / self.stride) * self.stride - + self.stride - - self.expressivity - ) - return checkpoint_index - - # Computes Ai(2) - def compute_subset_summaries(self, absolute_max): - checkpoint_index = self.compute_checkpoint(0) - subset_two = set() - while checkpoint_index <= absolute_max - 1: - summary = set( - range( - checkpoint_index, - min(checkpoint_index + self.expressivity + 1, absolute_max), - ) - ) - subset_two = subset_two.union(summary) - checkpoint_index = self.compute_checkpoint(checkpoint_index + self.stride) - return subset_two - - # Sparse Transformer Fixed Attention Pattern: https://arxiv.org/pdf/1904.10509.pdf - def compute_fixed_attention_subset(self, word_index, tgt_len): - # +1s account for range function; [min, max) -> [min, max] - if not self.is_bidirectional: - absolute_max = word_index + 1 - else: - absolute_max = tgt_len - - # Subset 1 - whole window - rounded_index = ( - math.floor((word_index + self.stride) / self.stride) * self.stride - ) - if word_index % self.stride == 0 and word_index != 0: - subset_one = set( - range(word_index - 
self.stride, min(absolute_max, word_index + 1)) - ) - else: - subset_one = set( - range( - max(0, rounded_index - self.stride), - min(absolute_max, rounded_index + 1), - ) - ) - - # Subset 2 - summary per window - # If bidirectional, subset 2 is the same for every index - subset_two = set() - if not self.is_bidirectional: - subset_two = self.compute_subset_summaries(absolute_max) - - return subset_one.union(subset_two) - - # Compute sparse mask - if bidirectional, can pre-compute and store - def buffered_sparse_mask(self, tensor, tgt_len, src_len): - assert tgt_len > self.stride - sparse_mask = torch.empty((tgt_len, src_len)).float().fill_(float("-inf")) - - # If bidirectional, subset 2 is the same for every index - subset_summaries = set() - if self.is_bidirectional: - subset_summaries = self.compute_subset_summaries(tgt_len) - - for i in range(tgt_len): - fixed_attention_subset = self.compute_fixed_attention_subset(i, tgt_len) - fixed_attention_subset = fixed_attention_subset.union(subset_summaries) - included_word_indices = torch.LongTensor(list(fixed_attention_subset)) - sparse_mask[i].index_fill_(0, included_word_indices, 0) - return sparse_mask.type_as(tensor) - - def apply_sparse_mask(self, attn_weights, tgt_len, src_len, bsz): - sparse_mask = self.buffered_sparse_mask(attn_weights, tgt_len, src_len) - sparse_mask = sparse_mask.unsqueeze(0).expand( - bsz * self.num_heads, tgt_len, src_len - ) - attn_weights += sparse_mask diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/sentence_ranking.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/sentence_ranking.py deleted file mode 100644 index bed44f34e5f8e506b6ae7ba30ddaa661bf4a7522..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/sentence_ranking.py +++ /dev/null @@ -1,219 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os - -import numpy as np -from fairseq import utils -from fairseq.data import ( - ConcatSentencesDataset, - Dictionary, - IdDataset, - NestedDictionaryDataset, - NumelDataset, - NumSamplesDataset, - PrependTokenDataset, - RawLabelDataset, - RightPadDataset, - SortDataset, - TruncateDataset, - data_utils, -) -from fairseq.data.shorten_dataset import maybe_shorten_dataset -from fairseq.tasks import LegacyFairseqTask, register_task - - -logger = logging.getLogger(__name__) - - -@register_task("sentence_ranking") -class SentenceRankingTask(LegacyFairseqTask): - """ - Ranking task on multiple sentences. 
- - Args: - dictionary (Dictionary): the dictionary for the input of the task - """ - - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - parser.add_argument("data", metavar="FILE", help="file prefix for data") - parser.add_argument( - "--num-classes", type=int, help="number of sentences to be ranked" - ) - parser.add_argument( - "--init-token", - type=int, - help="add token at the beginning of each batch item", - ) - parser.add_argument( - "--separator-token", type=int, help="add separator token between inputs" - ) - parser.add_argument("--no-shuffle", action="store_true") - parser.add_argument( - "--shorten-method", - default="none", - choices=["none", "truncate", "random_crop"], - help="if not none, shorten sequences that exceed --tokens-per-sample", - ) - parser.add_argument( - "--shorten-data-split-list", - default="", - help="comma-separated list of dataset splits to apply shortening to, " - 'e.g., "train,valid" (default: all dataset splits)', - ) - parser.add_argument( - "--max-option-length", type=int, help="max length for each option" - ) - - def __init__(self, args, dictionary): - super().__init__(args) - self.dictionary = dictionary - - @classmethod - def load_dictionary(cls, args, filename, source=True): - """Load the dictionary from the filename - - Args: - filename (str): the filename - """ - dictionary = Dictionary.load(filename) - dictionary.add_symbol("<mask>") - return dictionary - - @classmethod - def setup_task(cls, args, **kwargs): - assert ( - args.criterion == "sentence_ranking" - ), "Must set --criterion=sentence_ranking" - - # load data dictionary - data_dict = cls.load_dictionary( - args, - os.path.join(args.data, "input0", "dict.txt"), - source=True, - ) - logger.info("[input] dictionary: {} types".format(len(data_dict))) - return SentenceRankingTask(args, data_dict) - - def load_dataset(self, split, combine=False, **kwargs): - """Load a given dataset split (e.g., train, valid, test).""" - - def get_path(type, split): - return os.path.join(self.args.data, type, split) - - def make_dataset(type, dictionary): - split_path = get_path(type, split) - - dataset = data_utils.load_indexed_dataset( - split_path, - self.source_dictionary, - self.args.dataset_impl, - combine=combine, - ) - return dataset - - input0 = make_dataset("input0", self.source_dictionary) - input_options = [ - make_dataset("input{idx}".format(idx=idx + 1), self.source_dictionary) - for idx in range(self.args.num_classes) - ] - - if self.args.separator_token is not None: - input0 = PrependTokenDataset(input0, self.args.separator_token) - - src_tokens = [] - for input_option in input_options: - if self.args.init_token is not None: - input_option = PrependTokenDataset(input_option, self.args.init_token) - if self.args.max_option_length is not None: - input_option = TruncateDataset( - input_option, self.args.max_option_length - ) - src_token = ConcatSentencesDataset(input_option, input0) - src_token = maybe_shorten_dataset( - src_token, - split, - self.args.shorten_data_split_list, - self.args.shorten_method, - self.args.max_positions, - self.args.seed, - ) - src_tokens.append(src_token) - - with data_utils.numpy_seed(self.args.seed): - shuffle = np.random.permutation(len(src_tokens[0])) - - dataset = { - "id": IdDataset(), - "nsentences": NumSamplesDataset(), - "ntokens": NumelDataset(src_tokens[0], reduce=True), - } - - for src_token_idx in range(len(src_tokens)): - dataset.update( - { - "net_input{idx}".format(idx=src_token_idx + 1): { - "src_tokens": 
RightPadDataset( - src_tokens[src_token_idx], - pad_idx=self.source_dictionary.pad(), - ), - "src_lengths": NumelDataset( - src_tokens[src_token_idx], reduce=False - ), - } - } - ) - - label_path = "{}.label".format(get_path("label", split)) - if os.path.exists(label_path): - with open(label_path) as h: - dataset.update( - target=RawLabelDataset([int(x.strip()) for x in h.readlines()]) - ) - - nested_dataset = NestedDictionaryDataset( - dataset, - sizes=[np.maximum.reduce([src_token.sizes for src_token in src_tokens])], - ) - - if self.args.no_shuffle: - dataset = nested_dataset - else: - dataset = SortDataset( - nested_dataset, - # shuffle - sort_order=[shuffle], - ) - - logger.info("Loaded {0} with #samples: {1}".format(split, len(dataset))) - - self.datasets[split] = dataset - return self.datasets[split] - - def build_model(self, args): - from fairseq import models - - model = models.build_model(args, self) - - model.register_classification_head( - getattr(args, "ranking_head_name", "sentence_classification_head"), - num_classes=1, - ) - - return model - - def max_positions(self): - return self.args.max_positions - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/resampling_dataset.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/resampling_dataset.py deleted file mode 100644 index 3d3b993164dc3962df48bacff26714328e843e80..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/resampling_dataset.py +++ /dev/null @@ -1,139 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging - -import numpy as np -from fairseq.data import BaseWrapperDataset, plasma_utils - - -logger = logging.getLogger(__name__) - - -class ResamplingDataset(BaseWrapperDataset): - """Randomly samples from a given dataset at each epoch. - - Sampling is done with or without replacement, depending on the "replace" - parameter. - - Optionally, the epoch size can be rescaled. This is potentially desirable - to increase per-epoch coverage of the base dataset (since sampling with - replacement means that many items in the dataset will be left out). In the - case of sampling without replacement, size_ratio should be strictly less - than 1. - - Args: - dataset (~torch.utils.data.Dataset): dataset on which to sample. - weights (List[float]): list of probability weights - (default: None, which corresponds to uniform sampling). - replace (bool): sampling mode; True for "with replacement", or False - for "without replacement" (default: True) - size_ratio (float): the ratio to subsample to; must be positive - (default: 1.0). - batch_by_size (bool): whether or not to batch by sequence length - (default: True). - seed (int): RNG seed to use (default: 0). - epoch (int): starting epoch number (default: 1). 
- """ - - def __init__( - self, - dataset, - weights=None, - replace=True, - size_ratio=1.0, - batch_by_size=True, - seed=0, - epoch=1, - ): - super().__init__(dataset) - - if weights is None: - self.weights = None - - else: - assert len(weights) == len(dataset) - weights_arr = np.array(weights, dtype=np.float64) - weights_arr /= weights_arr.sum() - self.weights = plasma_utils.PlasmaArray(weights_arr) - - self.replace = replace - - assert size_ratio > 0.0 - if not self.replace: - assert size_ratio < 1.0 - self.size_ratio = float(size_ratio) - self.actual_size = np.ceil(len(dataset) * self.size_ratio).astype(int) - - self.batch_by_size = batch_by_size - self.seed = seed - - self._cur_epoch = None - self._cur_indices = None - - self.set_epoch(epoch) - - def __getitem__(self, index): - return self.dataset[self._cur_indices.array[index]] - - def __len__(self): - return self.actual_size - - @property - def sizes(self): - if isinstance(self.dataset.sizes, list): - return [s[self._cur_indices.array] for s in self.dataset.sizes] - return self.dataset.sizes[self._cur_indices.array] - - def num_tokens(self, index): - return self.dataset.num_tokens(self._cur_indices.array[index]) - - def size(self, index): - return self.dataset.size(self._cur_indices.array[index]) - - def ordered_indices(self): - if self.batch_by_size: - order = [ - np.arange(len(self)), - self.sizes, - ] # No need to handle `self.shuffle == True` - return np.lexsort(order) - else: - return np.arange(len(self)) - - def prefetch(self, indices): - self.dataset.prefetch(self._cur_indices.array[indices]) - - @property - def can_reuse_epoch_itr_across_epochs(self): - return False - - def set_epoch(self, epoch): - logger.debug("ResamplingDataset.set_epoch: {}".format(epoch)) - super().set_epoch(epoch) - - if epoch == self._cur_epoch: - return - - self._cur_epoch = epoch - - # Generate a weighted sample of indices as a function of the - # random seed and the current epoch. - - rng = np.random.RandomState( - [ - 42, # magic number - self.seed % (2 ** 32), # global seed - self._cur_epoch, # epoch index - ] - ) - self._cur_indices = plasma_utils.PlasmaArray( - rng.choice( - len(self.dataset), - self.actual_size, - replace=self.replace, - p=(None if self.weights is None else self.weights.array), - ) - ) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/search.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/search.py deleted file mode 100644 index d5ea68b4ce04409c504c1d22098b7968a9ce596a..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/search.py +++ /dev/null @@ -1,814 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from typing import List, Optional - -import torch -import torch.nn as nn -from fairseq.token_generation_constraints import ( - ConstraintState, - OrderedConstraintState, - UnorderedConstraintState, -) -from torch import Tensor - - -class Search(nn.Module): - def __init__(self, tgt_dict): - super().__init__() - self.pad = tgt_dict.pad() - self.unk = tgt_dict.unk() - self.eos = tgt_dict.eos() - self.vocab_size = len(tgt_dict) - self.src_lengths = torch.tensor(-1) - self.supports_constraints = False - self.stop_on_max_len = False - - def step( - self, step, lprobs, scores, prev_output_tokens=None, original_batch_idxs=None - ): - """Take a single search step. 
- - Args: - step: the current search step, starting at 0 - lprobs: (bsz x input_beam_size x vocab_size) - the model's log-probabilities over the vocabulary at the current step - scores: (bsz x input_beam_size x step) - the historical model scores of each hypothesis up to this point - prev_output_tokens: (bsz x step) - the previously generated oputput tokens - original_batch_idxs: (bsz) - the tensor with the batch indices, in the range [0, bsz) - this is useful in case there has been applied a re-ordering - and we need to know the orignal indices - - Return: A tuple of (scores, indices, beams) where: - scores: (bsz x output_beam_size) - the scores of the chosen elements; output_beam_size can be - larger than input_beam_size, e.g., we may return - 2*input_beam_size to account for EOS - indices: (bsz x output_beam_size) - the indices of the chosen elements - beams: (bsz x output_beam_size) - the hypothesis ids of the chosen elements, in the range [0, input_beam_size) - """ - raise NotImplementedError - - @torch.jit.export - def set_src_lengths(self, src_lengths): - self.src_lengths = src_lengths - - @torch.jit.export - def init_constraints(self, batch_constraints: Optional[Tensor], beam_size: int): - """Initialize constraint states for constrained decoding (if supported). - - Args: - batch_constraints: (torch.Tensor, optional) - the list of constraints, in packed form - beam_size: (int) - the beam size - Returns: - *encoder_out* rearranged according to *new_order* - """ - pass - - def prune_sentences(self, batch_idxs: Tensor): - """ - Removes constraint states for completed sentences (if supported). - This is called from sequence_generator._generate() when sentences are - deleted from the batch. - - Args: - batch_idxs: Indices of *sentences* whose constraint state should be *kept*. - """ - pass - - def update_constraints(self, active_hypos: Tensor): - """ - Updates the constraint states by selecting the beam items that are retained. - This is called at each time step of sequence_generator._generate() when - the set of 2 * {beam_size} candidate hypotheses are reduced to the beam size. - - Args: - active_hypos: (batch size, beam size) - list of integers denoting, for each sentence, which beam candidate items - should be kept. - """ - pass - - -class BeamSearch(Search): - def __init__(self, tgt_dict): - super().__init__(tgt_dict) - self.constraint_states = None - - @torch.jit.export - def step( - self, - step: int, - lprobs, - scores: Optional[Tensor], - prev_output_tokens: Optional[Tensor] = None, - original_batch_idxs: Optional[Tensor] = None, - ): - bsz, beam_size, vocab_size = lprobs.size() - - if step == 0: - # at the first step all hypotheses are equally likely, so use - # only the first beam - lprobs = lprobs[:, ::beam_size, :].contiguous() - else: - # make probs contain cumulative scores for each hypothesis - assert scores is not None - lprobs = lprobs + scores[:, :, step - 1].unsqueeze(-1) - - top_prediction = torch.topk( - lprobs.view(bsz, -1), - k=min( - # Take the best 2 x beam_size predictions. We'll choose the first - # beam_size of these which don't predict eos to continue with. 
- beam_size * 2, - lprobs.view(bsz, -1).size(1) - 1, # -1 so we never select pad - ), - ) - scores_buf = top_prediction[0] - indices_buf = top_prediction[1] - # Project back into relative indices and beams - beams_buf = indices_buf // vocab_size - indices_buf = indices_buf.fmod(vocab_size) - - # At this point, beams_buf and indices_buf are single-dim and contain relative indices - return scores_buf, indices_buf, beams_buf - - -class PrefixConstrainedBeamSearch(Search): - def __init__(self, tgt_dict, prefix_allowed_tokens_fn): - super().__init__(tgt_dict) - self.prefix_allowed_tokens_fn = prefix_allowed_tokens_fn - self.stop_on_max_len = True - - @torch.jit.export - def apply_mask(self, x, prev_output_tokens, original_batch_idxs): - beam_size = x.shape[0] // original_batch_idxs.shape[0] - original_batch_idxs = ( - original_batch_idxs.unsqueeze(-1).repeat((1, beam_size)).flatten().tolist() - ) - - mask = torch.full_like(x, -math.inf) - for sent_i, (sent, batch_i) in enumerate( - zip(prev_output_tokens, original_batch_idxs) - ): - mask[sent_i, :, self.prefix_allowed_tokens_fn(batch_i, sent)] = 0 - - return mask - - @torch.jit.export - def step( - self, - step: int, - lprobs: Tensor, - scores: Tensor, - prev_output_tokens: Tensor, - original_batch_idxs: Tensor, - ): - bsz, beam_size, vocab_size = lprobs.size() - - lprobs += self.apply_mask( - lprobs.view(bsz * beam_size, 1, vocab_size), - prev_output_tokens, - original_batch_idxs, - ).view(bsz, beam_size, vocab_size) - - if step == 0: - # at the first step all hypotheses are equally likely, so use - # only the first beam - lprobs = lprobs[:, ::beam_size, :].contiguous() - else: - # make probs contain cumulative scores for each hypothesis - assert scores is not None - lprobs = lprobs + scores[:, :, step - 1].unsqueeze(-1) - - top_prediction = torch.topk( - lprobs.view(bsz, -1), - k=min( - # Take the best beam_size predictions. We'll choose the first - # beam_size of these which don't predict eos to continue with. - beam_size, - lprobs.view(bsz, -1).size(1) - 1, # -1 so we never select pad - ), - ) - scores_buf = top_prediction[0] - indices_buf = top_prediction[1] - beams_buf = indices_buf // vocab_size - indices_buf = indices_buf.fmod(vocab_size) - return scores_buf, indices_buf, beams_buf - - -class LexicallyConstrainedBeamSearch(Search): - """Implements lexically constrained beam search as described in - - Fast Lexically Constrained Decoding with Dynamic Beam - Allocation for Neural Machine Translation. Post & Vilar, - NAACL 2018. https://www.aclweb.org/anthology/N18-1119/ - - and - - Improved Lexically Constrained Decoding for Translation and - Monolingual Rewriting. Hu et al, NAACL - 2019. https://www.aclweb.org/anthology/N19-1090/ - - This is accomplished by maintaining, for each beam hypothesis, a - ConstraintState object (see constraints.py) that tracks which - constraints have been generated and using this information to - shape the beam for each input sentence. 
- """ - - def __init__(self, tgt_dict, representation): - super().__init__(tgt_dict) - self.representation = representation - self.vocab_size = len(tgt_dict) - self.num_cands = 0 - self.supports_constraints = True - - @torch.jit.export - def init_constraints(self, batch_constraints: Optional[Tensor], beam_size: int): - self.constraint_states = [] - for constraint_tensor in batch_constraints: - if self.representation == "ordered": - constraint_state = OrderedConstraintState.create(constraint_tensor) - elif self.representation == "unordered": - constraint_state = UnorderedConstraintState.create(constraint_tensor) - - self.constraint_states.append([constraint_state for i in range(beam_size)]) - - @torch.jit.export - def prune_sentences(self, batch_idxs: Tensor): - self.constraint_states = [ - self.constraint_states[i] for i in batch_idxs.tolist() - ] - - @torch.jit.export - def update_constraints(self, active_hypos: Tensor): - if self.constraint_states: - batch_size = active_hypos.size(0) - for sentid in range(batch_size): - self.constraint_states[sentid] = [ - self.constraint_states[sentid][i] for i in active_hypos[sentid] - ] - - @torch.jit.export - def step( - self, - step: int, - lprobs: Tensor, - scores: Optional[Tensor], - prev_output_tokens: Optional[Tensor] = None, - original_batch_idxs: Optional[Tensor] = None, - ): - """ - A constrained step builds a large candidates list from the following: - - the top 2 * {beam_size} items over the whole beam - - for each item in the beam - - the top {each_k} (default 1) - - all next constraints - We then compute the constrained state of each beam item, and assign - stripe codes: 0 to the best in each bank, 1 to the 2nd-best, and so - on. We then sort by (stripe, score), and truncate the list at - 2 * beam size. - - Args: - step: the decoder step - lprobs: (batch size, beam size, target vocab) - the target-vocab distributions for each item in the beam. - Retrun: A tuple of (scores, indices, beams, constraints) where: - scores: (batch, output beam size) - the scores of the chosen elements - indices: (batch, output beam size) - the target vocab indices of the chosen elements - beams: (batch, output beam size) - the 0-indexed hypothesis ids of the chosen elements - constraints: (batch, output beam size) - the new constraint states - """ - each_k = 1 - device = lprobs.device - - batch_size, beam_size, vocab_size = lprobs.size() - - self.num_cands = min( - # Just take the k-best. We'll get another k from the 1-best from each - # row, plus more from the constraints - beam_size * 2, - lprobs.view(batch_size, -1).size(1) - 1, # -1 so we never select pad - ) - - # STEP 0: Preliminary. 
Prevent EOS for unfinished hyps across all batch items - constraint_states = self.constraint_states - if constraint_states and step > 0: - not_finished_indices = [] - for sentno, sent_constraints in enumerate(constraint_states): - for beamno, state in enumerate(sent_constraints): - index = sentno * beam_size + beamno - if not state.finished: - not_finished_indices.append(index) - not_finished_indices = torch.tensor(not_finished_indices) - if not_finished_indices.numel() > 0: - lprobs.view(batch_size * beam_size, -1)[ - not_finished_indices, self.eos - ] = -math.inf - - if step == 0: - # at the first step all hypotheses are equally likely, so use - # only the first beam entry for each batch item - lprobs = lprobs[:, ::beam_size, :].contiguous() - else: - # make probs contain cumulative scores for each hypothesis - assert scores is not None - lprobs = lprobs + scores[:, :, step - 1].unsqueeze(-1) - - top_prediction = torch.topk( - lprobs.view(batch_size, -1), - self.num_cands, - ) - scores_buf, indices_buf = top_prediction - # Project back into relative indices and beams - beams_buf = indices_buf // vocab_size - indices_buf = indices_buf.fmod(vocab_size) - - # Short circuit if there are no constraints in this batch - if not constraint_states: - return scores_buf, indices_buf, beams_buf - - # STEP 1: get top-1 from each hypothesis across all sentences in the batch - if step > 0: - top_scores, top_indices = torch.topk( - lprobs.view(batch_size * beam_size, -1), - k=each_k, - dim=1, - ) - top_scores = top_scores.view(batch_size, -1) - top_indices = top_indices.view(batch_size, -1) - scores_buf = torch.cat((scores_buf, top_scores), dim=1) - indices_buf = torch.cat((indices_buf, top_indices), dim=1) - new_beams = torch.arange(0, beam_size, device=device).repeat(batch_size, 1) - beams_buf = torch.cat((beams_buf, new_beams), dim=1) - - # Now, process sentences in the batch one by one. - new_scores_buf = torch.zeros((batch_size, 2 * beam_size), device=device) - new_indices_buf = torch.zeros((batch_size, 2 * beam_size), device=device).long() - new_beams_buf = torch.zeros((batch_size, 2 * beam_size), device=device).long() - for sentno, states in enumerate(constraint_states): - scores, indices, beams, new_states = self.step_sentence( - step, - sentno, - lprobs[sentno], - constraint_states[sentno], - beams_buf[sentno].clone(), - indices_buf[sentno].clone(), - scores_buf[sentno].clone(), - ) - new_scores_buf[sentno] = scores - new_indices_buf[sentno] = indices - new_beams_buf[sentno] = beams - self.constraint_states[sentno] = new_states - - return new_scores_buf, new_indices_buf, new_beams_buf - - @torch.jit.export - def step_sentence( - self, - step: int, - sentno: int, - lprobs: Tensor, - constraint_states: List[List[ConstraintState]], - beams_buf: Tensor, - indices_buf: Tensor, - scores_buf: Tensor, - ): - """Does per-sentence processing. Adds all constraints for each - hypothesis to the list of candidates; then removes duplicates, - sorts, and dynamically stripes across the banks. All tensor inputs - are collapsed to those pertaining to a single input sentence. 
- """ - device = lprobs.device - - # STEP 2: Add all constraints for each beam item - for beamno, state in enumerate(constraint_states): - next_tokens = torch.tensor(list(state.next_tokens()), device=device).long() - if next_tokens.numel() != 0: - indices_buf = torch.cat((indices_buf, next_tokens)) - next_beams = ( - torch.tensor(beamno, device=device) - .repeat(next_tokens.size(0)) - .long() - ) - beams_buf = torch.cat((beams_buf, next_beams)) - next_values = lprobs[beamno].take(next_tokens.view(-1)) - scores_buf = torch.cat((scores_buf, next_values)) - - # At the 0th time step, there is just one beam item - if step == 0: - break - - # STEP 3: Compute the "bank" for each candidate. This is the - # number of constraints it's generated. We need this so that - # we can do round-robin allocation of the beam across these - # banks. If C is the number of constraints, we select the best - # item in bank C, then the best in bank C-1, etc, followed by - # the 2nd-best in bank C, the 2nd-best in bank C-1, etc, and so - # on, until the maximum beam size. We accomplish this by - # creating a sort key and striping across the banks. - - # Compute the new states for all candidates - cands_size = indices_buf.size(0) - constraint_states = [ - constraint_states[beams_buf[i]].advance(indices_buf[i]) - for i in range(cands_size) - ] - - banks = torch.tensor([state.bank for state in constraint_states], device=device) - - # STEP 4: Sort - num_constraint_tokens = len(state.tokens) - - # Sort by keys (bank, score) (i.e., sort banks together, and scores - # within banks). AFAIK pytorch doesn't support either stable sort or - # multi-key sorting, so we have to hack this. - MAX_SCORE = -100 - sort_key = (num_constraint_tokens - banks) * MAX_SCORE + scores_buf - sort_values, sort_indices = sort_key.sort(dim=0, descending=True) - scores_buf = scores_buf[sort_indices] - indices_buf = indices_buf[sort_indices] - beams_buf = beams_buf[sort_indices] - banks = banks[sort_indices] - - # Sort the constraints to follow suit - constraint_states = [constraint_states[i] for i in sort_indices] - - # STEP 5: Remove duplicates. The topk calls (overall and - # per-row) plus the per-row generation of constraints will - # produce duplicates. Here we remove them. - - def roll(t): - """Rolls a 1d tensor left by 1. - - [0, 1, 2, 3, 4] becomes [4, 0, 1, 2, 3] - """ - return torch.cat((t[-1].unsqueeze(0), t[0:-1]), dim=0) - - # We map candidates (beam, token_id) to a single dimension. - # This is then shifted by 1. We can then easily identify - # duplicates and create a mask that identifies unique - # extensions. - uniques_mask = beams_buf * (self.vocab_size + 1) + indices_buf - uniques_mask = roll(uniques_mask) != uniques_mask - - # Use the mask to pare down the data structures - scores_buf = torch.masked_select(scores_buf, uniques_mask) - indices_buf = torch.masked_select(indices_buf, uniques_mask) - beams_buf = torch.masked_select(beams_buf, uniques_mask) - banks = torch.masked_select(banks, uniques_mask) - i = 1 - for mask in uniques_mask[1:]: - if not mask: - constraint_states.pop(i) - i += mask - - # STEP 6: Assign IDs round-robin across banks, sort, and - # truncate. Now that the candidates are sorted by (bank, - # score) and uniqed, we dynamically allocate the {beam_size} - # beam by striping across the candidates. These stripes will - # be used as sort keys to do round-robin selection. This is - # accomplished in a single pass with offsets. 
Sorting by - # highest-banks (furthest-along hypotheses) first ensures - # progress through the constraints. - # - # e.g., BANKS: 3 3 3 2 2 2 2 1 1 1 0 0 - # OLD STRIPES: 0 1 2 0 1 2 3 0 1 2 0 1 - # NEW STRIPES: 0 1+4 2+8 0+1 1+5 2+9 3+11 0+2 1+6 2+10 0+3 1+7 - # = 0 5 10 1 6 11 13 2 7 12 3 8 - # - # Sorting by this then gives the following banks: - # - # 3 2 1 0 3 2 1 0 3 2 1 2 - # - # We'll take the top {beam_size} of these. - stripe_offsets = [offset * (len(banks) + 1) for offset in range(len(banks) + 1)] - stripes = torch.zeros_like(banks) - cur_bank_count = -1 - cur_bank = banks[0] - for i, bank in enumerate(banks): - if bank != cur_bank: - cur_bank_count = 0 - cur_bank = bank - else: - cur_bank_count += 1 - stripes[i] = num_constraint_tokens - bank + stripe_offsets[cur_bank_count] - - # STEP 7: Sort by the stripes values - sort_values, sort_indices = stripes.sort(dim=0) - scores_buf = scores_buf[sort_indices] - indices_buf = indices_buf[sort_indices] - beams_buf = beams_buf[sort_indices] - constraint_states = [constraint_states[i] for i in sort_indices] - - # STEP 8: Truncate to the candidates size! - scores_buf = scores_buf[: self.num_cands] - indices_buf = indices_buf[: self.num_cands] - beams_buf = beams_buf[: self.num_cands] - - return scores_buf, indices_buf, beams_buf, constraint_states - - -class LengthConstrainedBeamSearch(Search): - def __init__(self, tgt_dict, min_len_a, min_len_b, max_len_a, max_len_b): - super().__init__(tgt_dict) - self.min_len_a = min_len_a - self.min_len_b = min_len_b - self.max_len_a = max_len_a - self.max_len_b = max_len_b - self.beam = BeamSearch(tgt_dict) - self.needs_src_lengths = True - - def step( - self, - step: int, - lprobs, - scores, - prev_output_tokens: Optional[Tensor] = None, - original_batch_idxs: Optional[Tensor] = None, - ): - min_lens = self.min_len_a * self.src_lengths + self.min_len_b - max_lens = self.max_len_a * self.src_lengths + self.max_len_b - lprobs[step < min_lens, :, self.eos] = -math.inf - lprobs[step >= max_lens, :, self.eos] = 0 - return self.beam.step(step, lprobs, scores) - - -class DiverseBeamSearch(Search): - """Diverse Beam Search. - - See "Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence - Models" for details. - - We only implement the Hamming Diversity penalty here, which performed best - in the original paper. 
- """ - - def __init__(self, tgt_dict, num_groups, diversity_strength): - super().__init__(tgt_dict) - self.num_groups = num_groups - self.diversity_strength = -diversity_strength - self.beam = BeamSearch(tgt_dict) - - @torch.jit.export - def step( - self, - step: int, - lprobs, - scores, - prev_output_tokens: Optional[Tensor] = None, - original_batch_idxs: Optional[Tensor] = None, - ): - bsz, beam_size, vocab_size = lprobs.size() - if beam_size % self.num_groups != 0: - raise ValueError( - "DiverseBeamSearch requires --beam to be divisible by the number of groups" - ) - - # initialize diversity penalty - diversity_buf = torch.zeros(lprobs[:, 0, :].size()).to(lprobs) - - scores_G, indices_G, beams_G = [], [], [] - for g in range(self.num_groups): - lprobs_g = lprobs[:, g :: self.num_groups, :] - scores_g = scores[:, g :: self.num_groups, :] if step > 0 else None - - # apply diversity penalty - if g > 0: - lprobs_g = torch.add( - lprobs_g, - other=diversity_buf.unsqueeze(1), - alpha=self.diversity_strength, - ) - else: - lprobs_g = lprobs_g.contiguous() - - scores_buf, indices_buf, beams_buf = self.beam.step( - step, lprobs_g, scores_g - ) - beams_buf.mul_(self.num_groups).add_(g) - - scores_G.append(scores_buf.clone()) - indices_G.append(indices_buf.clone()) - beams_G.append(beams_buf.clone()) - - # update diversity penalty - diversity_buf.scatter_add_( - 1, indices_buf, torch.ones(indices_buf.size()).to(diversity_buf) - ) - - # interleave results from different groups - scores_buf = torch.stack(scores_G, dim=2).view(bsz, -1) - indices_buf = torch.stack(indices_G, dim=2).view(bsz, -1) - beams_buf = torch.stack(beams_G, dim=2).view(bsz, -1) - return scores_buf, indices_buf, beams_buf - - -class Sampling(Search): - sampling_topk: int - sampling_topp: float - - def __init__(self, tgt_dict, sampling_topk=-1, sampling_topp=-1.0): - super().__init__(tgt_dict) - self.sampling_topk = sampling_topk - self.sampling_topp = sampling_topp - - def _sample_topp(self, lprobs): - """Sample among the smallest set of elements whose cumulative probability mass exceeds p. - - See `"The Curious Case of Neural Text Degeneration" - (Holtzman et al., 2019) <https://arxiv.org/abs/1904.09751>`_. - - Args: - lprobs: (bsz x input_beam_size x vocab_size) - the model's log-probabilities over the vocabulary at the current step - - Return: A tuple of (trimed_probs, truncated_indices) where: - trimed_probs: (bsz x input_beam_size x ?) - the model's probabilities over the elements selected to sample from. The - width of the third dimension is determined by top-P. - truncated_indices: (bsz x input_beam_size x ?) - the indices of the chosen elements. - """ - probs = lprobs.exp_() - - # sort the last dimension (vocab dimension) in descending order - sorted_probs, sorted_indices = probs.sort(descending=True) - - # compute a mask to indicate the words to be included in the top-P set. - cumsum_probs = sorted_probs.cumsum(dim=2) - mask = cumsum_probs.lt(self.sampling_topp) - - # note that mask was computed by 'lt'. One more word needs to be included - # so that the cumulative probability mass can exceed p. - cumsum_mask = mask.cumsum(dim=2) - last_included = cumsum_mask[:, :, -1:] - last_included.clamp_(0, mask.size()[2] - 1) - mask = mask.scatter_(2, last_included, 1) - - # truncate unnecessary dims. 
- max_dim = last_included.max() - truncated_mask = mask[:, :, : max_dim + 1] - truncated_probs = sorted_probs[:, :, : max_dim + 1] - truncated_indices = sorted_indices[:, :, : max_dim + 1] - - # trim the words that are not in top-P by setting their probabilities - # to 0, so that they would not be sampled later. - trim_mask = ~truncated_mask - trimed_probs = truncated_probs.masked_fill_(trim_mask, 0) - return trimed_probs, truncated_indices - - @torch.jit.export - def step( - self, - step: int, - lprobs, - scores, - prev_output_tokens: Optional[Tensor] = None, - original_batch_idxs: Optional[Tensor] = None, - ): - bsz, beam_size, vocab_size = lprobs.size() - - if step == 0: - # at the first step all hypotheses are equally likely, so use - # only the first beam - lprobs = lprobs[:, ::beam_size, :].contiguous() - - if self.sampling_topp > 0: - # only sample from the smallest set of words whose cumulative probability mass exceeds p - probs, top_indices = self._sample_topp(lprobs) - elif self.sampling_topk > 0: - # only sample from top-k candidates - lprobs, top_indices = lprobs.topk(self.sampling_topk) - probs = lprobs.exp_() - else: - probs = lprobs.exp_() - - # dummy data to be consistent with true branch for type check - top_indices = torch.empty(0).to(probs) - # sample - if step == 0: - indices_buf = torch.multinomial( - probs.view(bsz, -1), - beam_size, - replacement=True, - ).view(bsz, beam_size) - else: - indices_buf = torch.multinomial( - probs.view(bsz * beam_size, -1), - 1, - replacement=True, - ).view(bsz, beam_size) - - if step == 0: - # expand to beam size - probs = probs.expand(bsz, beam_size, -1) - - # gather scores - scores_buf = torch.gather(probs, dim=2, index=indices_buf.unsqueeze(-1)) - scores_buf = scores_buf.log_().view(bsz, -1) - - # remap indices if using top-k or top-P sampling - if self.sampling_topk > 0 or self.sampling_topp > 0: - indices_buf = torch.gather( - top_indices.expand(bsz, beam_size, -1), - dim=2, - index=indices_buf.unsqueeze(-1), - ).squeeze(2) - - if step == 0: - beams_buf = indices_buf.new_zeros(bsz, beam_size) - else: - beams_buf = torch.arange(0, beam_size).to(indices_buf).repeat(bsz, 1) - # make scores cumulative - scores_buf.add_( - torch.gather(scores[:, :, step - 1], dim=1, index=beams_buf) - ) - - return scores_buf, indices_buf, beams_buf - - -class DiverseSiblingsSearch(Search): - """ - Beam search with diverse siblings. - - See "A Simple, Fast Diverse Decoding Algorithm for Neural Generation" for details. - https://arxiv.org/abs/1611.08562 - - 1/ Calculate hypotheses for each beam - 2/ Intra-sibling ordering - 3/ Rewrite scores - 4/ Choose top K hypotheses - - if diversity_rate == 0 is equivalent to BeamSearch - """ - - def __init__(self, tgt_dict, diversity_rate): - super().__init__(tgt_dict) - self.diversity_rate = diversity_rate - self.beam = BeamSearch(tgt_dict) - - def step( - self, - step: int, - lprobs, - scores, - prev_output_tokens: Optional[Tensor] = None, - original_batch_idxs: Optional[Tensor] = None, - ): - bsz, beam_size, vocab_size = lprobs.size() - k = min( - # Take the best 2 x beam_size predictions. We'll choose the first - # beam_size of these which don't predict eos to continue with. 
- beam_size * 2, - lprobs.view(bsz, -1).size(1) - 1, # -1 so we never select pad - ) - s_list: List[Tensor] - i_list: List[Tensor] - s_list = [torch.empty(0).to(lprobs) for i in range(beam_size)] - i_list = [torch.LongTensor().to(device=lprobs.device) for i in range(beam_size)] - sibling_score = torch.arange(1, k + 1).to(lprobs) * self.diversity_rate - - if step == 0: - return self.beam.step(step, lprobs, scores) - lprobs.add_(scores[:, :, step - 1].unsqueeze(-1)) - - # 1/ Calculate hypotheses for each beam - for i in range(beam_size): - torch.topk(lprobs[:, i, :].view(bsz, -1), k, out=(s_list[i], i_list[i])) - i_list[i].fmod_(vocab_size) - - # 2/ Intra-sibling ordering by default from topk + 3/ Rewrite scores - s_list[i].sub_(sibling_score) - - # 4/ Choose top K hypotheses - indices = torch.stack(i_list, dim=1).view(bsz, -1) - - final_scores = torch.empty(0).to(lprobs) - final_indices = torch.LongTensor().to(device=lprobs.device) - final_beams = torch.LongTensor().to(device=lprobs.device) - (final_scores, final_indices) = torch.topk( - torch.stack(s_list, dim=1).view(bsz, -1), - k, - ) - - final_beams = final_indices // k - - for i in range(bsz): - final_indices[i] = indices[i][final_indices[i]] - - return final_scores, final_indices, final_beams diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/tasks/frm_text_to_speech.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/tasks/frm_text_to_speech.py deleted file mode 100644 index 1fa9b0f83e742aefce764e2858a81f99db911afd..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/tasks/frm_text_to_speech.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging - -from fairseq.data.audio.frm_text_to_speech_dataset import FrmTextToSpeechDatasetCreator -from fairseq.tasks import register_task -from fairseq.tasks.text_to_speech import TextToSpeechTask - - -logging.basicConfig( - format='%(asctime)s | %(levelname)s | %(name)s | %(message)s', - datefmt='%Y-%m-%d %H:%M:%S', level=logging.INFO -) -logger = logging.getLogger(__name__) - - -@register_task('frm_text_to_speech') -class FrmTextToSpeechTask(TextToSpeechTask): - @staticmethod - def add_args(parser): - TextToSpeechTask.add_args(parser) - parser.add_argument( - "--do_chunk", action="store_true", help="train on chunks" - ) - parser.add_argument("--chunk_bound", default=-1, type=int) - parser.add_argument("--chunk_init", default=50, type=int) - parser.add_argument("--chunk_incr", default=5, type=int) - parser.add_argument("--add_eos", action="store_true") - parser.add_argument("--dedup", action="store_true") - parser.add_argument("--ref_fpu", default=-1, type=float) - - def load_dataset(self, split, **unused_kwargs): - is_train_split = split.startswith("train") - pre_tokenizer = self.build_tokenizer(self.args) - bpe_tokenizer = self.build_bpe(self.args) - self.datasets[split] = FrmTextToSpeechDatasetCreator.from_tsv( - self.args.data, - self.data_cfg, - split, - self.src_dict, - pre_tokenizer, - bpe_tokenizer, - is_train_split=is_train_split, - n_frames_per_step=self.args.n_frames_per_step, - speaker_to_id=self.speaker_to_id, - do_chunk=self.args.do_chunk, - chunk_bound=self.args.chunk_bound, - chunk_init=self.args.chunk_init, - chunk_incr=self.args.chunk_incr, - add_eos=self.args.add_eos, - dedup=self.args.dedup, - ref_fpu=self.args.ref_fpu - ) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_train.py b/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_train.py deleted file mode 100644 index 02ef94cc5b80c05485144db67501b2acedbaf291..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_train.py +++ /dev/null @@ -1,247 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import contextlib -import logging -import unittest -from io import StringIO -from unittest.mock import MagicMock, patch - -import torch -from fairseq import checkpoint_utils, data -from omegaconf import OmegaConf - - -def mock_trainer(epoch, num_updates, iterations_in_epoch): - trainer = MagicMock() - trainer.load_checkpoint.return_value = { - "train_iterator": { - "epoch": epoch, - "iterations_in_epoch": iterations_in_epoch, - "shuffle": False, - }, - } - trainer.get_num_updates.return_value = num_updates - return trainer - - -def mock_dict(): - d = MagicMock() - d.pad.return_value = 1 - d.eos.return_value = 2 - d.unk.return_value = 3 - return d - - -def get_trainer_and_epoch_itr(epoch, epoch_size, num_updates, iterations_in_epoch): - tokens = torch.LongTensor(list(range(epoch_size))).view(1, -1) - tokens_ds = data.TokenBlockDataset( - tokens, - sizes=[tokens.size(-1)], - block_size=1, - pad=0, - eos=1, - include_targets=False, - ) - trainer = mock_trainer(epoch, num_updates, iterations_in_epoch) - dataset = data.LanguagePairDataset( - tokens_ds, tokens_ds.sizes, mock_dict(), shuffle=False - ) - epoch_itr = data.EpochBatchIterator( - dataset=dataset, - collate_fn=dataset.collater, - batch_sampler=[[i] for i in range(epoch_size)], - ) - return trainer, epoch_itr - - -def get_mock_cfg(finetune_from_model): - cfg_mock = OmegaConf.create( - { - "checkpoint": { - "save_dir": None, - "optimizer_overrides": "{}", - "reset_dataloader": False, - "reset_meters": False, - "reset_optimizer": False, - "reset_lr_scheduler": False, - "finetune_from_model": finetune_from_model, - "model_parallel_size": 1, - "restore_file": "checkpoint_last.pt", - }, - "common": { - "model_parallel_size": 1, - }, - } - ) - return cfg_mock - - -class TestLoadCheckpoint(unittest.TestCase): - def setUp(self): - self.cfg_mock = get_mock_cfg(None) - self.patches = { - "os.makedirs": MagicMock(), - "os.path.join": MagicMock(), - "os.path.isfile": MagicMock(return_value=True), - "os.path.isabs": MagicMock(return_value=False), - "fairseq.file_io.PathManager.exists": MagicMock(return_value=False), - } - self.applied_patches = [patch(p, d) for p, d in self.patches.items()] - [p.start() for p in self.applied_patches] - logging.disable(logging.CRITICAL) - - def tearDown(self): - patch.stopall() - logging.disable(logging.NOTSET) - - def test_load_partial_checkpoint(self): - with contextlib.redirect_stdout(StringIO()): - trainer, epoch_itr = get_trainer_and_epoch_itr(2, 150, 200, 50) - trainer.get_train_iterator = MagicMock(return_value=epoch_itr) - - _, epoch_itr = checkpoint_utils.load_checkpoint( - self.cfg_mock.checkpoint, trainer - ) - - self.assertEqual(epoch_itr.epoch, 2) - self.assertEqual(epoch_itr.iterations_in_epoch, 50) - - itr = epoch_itr.next_epoch_itr(shuffle=False) - self.assertEqual(epoch_itr.epoch, 2) - self.assertEqual(epoch_itr.iterations_in_epoch, 50) - - self.assertEqual(next(itr)["net_input"]["src_tokens"][0].item(), 50) - self.assertEqual(epoch_itr.iterations_in_epoch, 51) - - for _ in range(150 - 52): - next(itr) - self.assertEqual(epoch_itr.iterations_in_epoch, 149) - self.assertTrue(itr.has_next()) - next(itr) - self.assertFalse(itr.has_next()) - - itr = epoch_itr.next_epoch_itr(shuffle=False) - self.assertTrue(itr.has_next()) - self.assertEqual(epoch_itr.epoch, 3) - self.assertEqual(epoch_itr.iterations_in_epoch, 0) - - def test_load_full_checkpoint(self): - with contextlib.redirect_stdout(StringIO()): - trainer, epoch_itr = get_trainer_and_epoch_itr(2, 150, 300, 150) - trainer.get_train_iterator = 
MagicMock(return_value=epoch_itr) - - _, epoch_itr = checkpoint_utils.load_checkpoint( - self.cfg_mock.checkpoint, trainer - ) - itr = epoch_itr.next_epoch_itr(shuffle=False) - - self.assertEqual(epoch_itr.epoch, 3) - self.assertEqual(epoch_itr.iterations_in_epoch, 0) - self.assertEqual(next(itr)["net_input"]["src_tokens"][0].item(), 0) - - def test_load_no_checkpoint(self): - with contextlib.redirect_stdout(StringIO()): - trainer, epoch_itr = get_trainer_and_epoch_itr(1, 150, 0, 0) - trainer.get_train_iterator = MagicMock(return_value=epoch_itr) - self.patches["os.path.isfile"].return_value = False - - _, epoch_itr = checkpoint_utils.load_checkpoint( - self.cfg_mock.checkpoint, trainer - ) - itr = epoch_itr.next_epoch_itr(shuffle=False) - - self.assertEqual(epoch_itr.epoch, 1) - self.assertEqual(epoch_itr.iterations_in_epoch, 0) - self.assertEqual(next(itr)["net_input"]["src_tokens"][0].item(), 0) - - def test_finetune_from_model_args_conflict(self): - with contextlib.redirect_stdout(StringIO()): - trainer, epoch_itr = get_trainer_and_epoch_itr(1, 150, 0, 0) - trainer.get_train_iterator = MagicMock(return_value=epoch_itr) - - for arg in [ - "reset_optimizer", - "reset_lr_scheduler", - "reset_meters", - "reset_dataloader", - ]: - with self.subTest(arg=arg): - cfg_mock = get_mock_cfg("/temp/checkpoint_pretrained.pt") - cfg_mock["checkpoint"][arg] = True - with self.assertRaises(Exception) as context: - _, _ = checkpoint_utils.load_checkpoint( - cfg_mock.checkpoint, trainer - ) - - self.assertTrue( - "--finetune-from-model can not be set together with either --reset-optimizer" - " or reset_lr_scheduler or reset_meters or reset_dataloader" - in str(context.exception) - ) - - def test_finetune_from_model(self): - with contextlib.redirect_stdout(StringIO()): - trainer, epoch_itr = get_trainer_and_epoch_itr(1, 150, 0, 0) - trainer.get_train_iterator = MagicMock(return_value=epoch_itr) - from_model_path = "/temp/checkpoint_pretrained.pt" - - def mock_finetune_exist(path): - if path == from_model_path: - return True - else: - return False - - self.patches[ - "fairseq.file_io.PathManager.exists" - ].side_effect = mock_finetune_exist - cfg_mock = get_mock_cfg(from_model_path) - cfg_mock.checkpoint.restore_file = "checkpoint_last.pt" - _, _ = checkpoint_utils.load_checkpoint(cfg_mock.checkpoint, trainer) - ( - checkpoint_path, - reset_optimizer, - reset_lr_scheduler, - optimizer_overrides, - ) = trainer.load_checkpoint.call_args[0] - reset_meters = trainer.load_checkpoint.call_args[1]["reset_meters"] - self.assertTrue(reset_optimizer) - self.assertTrue(reset_lr_scheduler) - self.assertTrue(reset_meters) - - def test_finetune_from_model_resume(self): - with contextlib.redirect_stdout(StringIO()): - trainer, epoch_itr = get_trainer_and_epoch_itr(1, 150, 0, 0) - trainer.get_train_iterator = MagicMock(return_value=epoch_itr) - from_model_path = "/temp/checkpoint_pretrained.pt" - - # launch second time - # both restore_file=checkpoint_last.pt and finetune_from_model are set - def mock_finetune_exist(path): - if path == from_model_path or path.endsWith("checkpoint_last.pt"): - return True - else: - return False - - self.patches[ - "fairseq.file_io.PathManager.exists" - ].side_effect = mock_finetune_exist - cfg_mock = get_mock_cfg(from_model_path) - cfg_mock.checkpoint.restore_file = "checkpoint_last.pt" - _, _ = checkpoint_utils.load_checkpoint(cfg_mock.checkpoint, trainer) - ( - checkpoint_path, - reset_optimizer, - reset_lr_scheduler, - optimizer_overrides, - ) = trainer.load_checkpoint.call_args[0] - 
reset_meters = trainer.load_checkpoint.call_args[1]["reset_meters"] - self.assertFalse(reset_optimizer) - self.assertFalse(reset_lr_scheduler) - self.assertFalse(reset_meters) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/ORI-Muchim/PowerTTS/commons.py b/spaces/ORI-Muchim/PowerTTS/commons.py deleted file mode 100644 index 40fcc05364d4815971f5c6f9dbb8dcef8e3ec1e9..0000000000000000000000000000000000000000 --- a/spaces/ORI-Muchim/PowerTTS/commons.py +++ /dev/null @@ -1,172 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def 
subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. 
/ norm_type) - return total_norm diff --git a/spaces/PAIR/PAIR-Diffusion/annotator/util.py b/spaces/PAIR/PAIR-Diffusion/annotator/util.py deleted file mode 100644 index 8fd735a5d3d5dd900d0b84b37a4ca123f9408ec0..0000000000000000000000000000000000000000 --- a/spaces/PAIR/PAIR-Diffusion/annotator/util.py +++ /dev/null @@ -1,41 +0,0 @@ -import numpy as np -import cv2 -import os - - -annotator_ckpts_path = os.path.join(os.path.dirname(__file__), 'ckpts') - - -def HWC3(x): - assert x.dtype == np.uint8 - if x.ndim == 2: - x = x[:, :, None] - assert x.ndim == 3 - H, W, C = x.shape - assert C == 1 or C == 3 or C == 4 - if C == 3: - return x - if C == 1: - return np.concatenate([x, x, x], axis=2) - if C == 4: - color = x[:, :, 0:3].astype(np.float32) - alpha = x[:, :, 3:4].astype(np.float32) / 255.0 - y = color * alpha + 255.0 * (1.0 - alpha) - y = y.clip(0, 255).astype(np.uint8) - return y - - -def resize_image(input_image, resolution): - if len(input_image.shape) == 3: - H, W, C = input_image.shape - else: - H, W = input_image.shape - H = float(H) - W = float(W) - k = float(resolution) / min(H, W) - H *= k - W *= k - H = int(np.round(H / 64.0)) * 64 - W = int(np.round(W / 64.0)) * 64 - img = cv2.resize(input_image, (W, H), interpolation=cv2.INTER_LANCZOS4 if k > 1 else cv2.INTER_AREA) - return img diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/utils/flops_counter.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/utils/flops_counter.py deleted file mode 100644 index d10af5feca7f4b8c0ba359b7b1c826f754e048be..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/utils/flops_counter.py +++ /dev/null @@ -1,599 +0,0 @@ -# Modified from flops-counter.pytorch by Vladislav Sovrasov -# original repo: https://github.com/sovrasov/flops-counter.pytorch - -# MIT License - -# Copyright (c) 2018 Vladislav Sovrasov - -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: - -# The above copyright notice and this permission notice shall be included in -# all copies or substantial portions of the Software. - -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. - -import sys -from functools import partial - -import numpy as np -import torch -import torch.nn as nn - -import annotator.uniformer.mmcv as mmcv - - -def get_model_complexity_info(model, - input_shape, - print_per_layer_stat=True, - as_strings=True, - input_constructor=None, - flush=False, - ost=sys.stdout): - """Get complexity information of a model. - - This method can calculate FLOPs and parameter counts of a model with - corresponding input shape. It can also print complexity information for - each layer in a model. 
- - Supported layers are listed as below: - - Convolutions: ``nn.Conv1d``, ``nn.Conv2d``, ``nn.Conv3d``. - - Activations: ``nn.ReLU``, ``nn.PReLU``, ``nn.ELU``, ``nn.LeakyReLU``, - ``nn.ReLU6``. - - Poolings: ``nn.MaxPool1d``, ``nn.MaxPool2d``, ``nn.MaxPool3d``, - ``nn.AvgPool1d``, ``nn.AvgPool2d``, ``nn.AvgPool3d``, - ``nn.AdaptiveMaxPool1d``, ``nn.AdaptiveMaxPool2d``, - ``nn.AdaptiveMaxPool3d``, ``nn.AdaptiveAvgPool1d``, - ``nn.AdaptiveAvgPool2d``, ``nn.AdaptiveAvgPool3d``. - - BatchNorms: ``nn.BatchNorm1d``, ``nn.BatchNorm2d``, - ``nn.BatchNorm3d``, ``nn.GroupNorm``, ``nn.InstanceNorm1d``, - ``InstanceNorm2d``, ``InstanceNorm3d``, ``nn.LayerNorm``. - - Linear: ``nn.Linear``. - - Deconvolution: ``nn.ConvTranspose2d``. - - Upsample: ``nn.Upsample``. - - Args: - model (nn.Module): The model for complexity calculation. - input_shape (tuple): Input shape used for calculation. - print_per_layer_stat (bool): Whether to print complexity information - for each layer in a model. Default: True. - as_strings (bool): Output FLOPs and params counts in a string form. - Default: True. - input_constructor (None | callable): If specified, it takes a callable - method that generates input. otherwise, it will generate a random - tensor with input shape to calculate FLOPs. Default: None. - flush (bool): same as that in :func:`print`. Default: False. - ost (stream): same as ``file`` param in :func:`print`. - Default: sys.stdout. - - Returns: - tuple[float | str]: If ``as_strings`` is set to True, it will return - FLOPs and parameter counts in a string format. otherwise, it will - return those in a float number format. - """ - assert type(input_shape) is tuple - assert len(input_shape) >= 1 - assert isinstance(model, nn.Module) - flops_model = add_flops_counting_methods(model) - flops_model.eval() - flops_model.start_flops_count() - if input_constructor: - input = input_constructor(input_shape) - _ = flops_model(**input) - else: - try: - batch = torch.ones(()).new_empty( - (1, *input_shape), - dtype=next(flops_model.parameters()).dtype, - device=next(flops_model.parameters()).device) - except StopIteration: - # Avoid StopIteration for models which have no parameters, - # like `nn.Relu()`, `nn.AvgPool2d`, etc. - batch = torch.ones(()).new_empty((1, *input_shape)) - - _ = flops_model(batch) - - flops_count, params_count = flops_model.compute_average_flops_cost() - if print_per_layer_stat: - print_model_with_flops( - flops_model, flops_count, params_count, ost=ost, flush=flush) - flops_model.stop_flops_count() - - if as_strings: - return flops_to_string(flops_count), params_to_string(params_count) - - return flops_count, params_count - - -def flops_to_string(flops, units='GFLOPs', precision=2): - """Convert FLOPs number into a string. - - Note that Here we take a multiply-add counts as one FLOP. - - Args: - flops (float): FLOPs number to be converted. - units (str | None): Converted FLOPs units. Options are None, 'GFLOPs', - 'MFLOPs', 'KFLOPs', 'FLOPs'. If set to None, it will automatically - choose the most suitable unit for FLOPs. Default: 'GFLOPs'. - precision (int): Digit number after the decimal point. Default: 2. - - Returns: - str: The converted FLOPs number with units. 
- - Examples: - >>> flops_to_string(1e9) - '1.0 GFLOPs' - >>> flops_to_string(2e5, 'MFLOPs') - '0.2 MFLOPs' - >>> flops_to_string(3e-9, None) - '3e-09 FLOPs' - """ - if units is None: - if flops // 10**9 > 0: - return str(round(flops / 10.**9, precision)) + ' GFLOPs' - elif flops // 10**6 > 0: - return str(round(flops / 10.**6, precision)) + ' MFLOPs' - elif flops // 10**3 > 0: - return str(round(flops / 10.**3, precision)) + ' KFLOPs' - else: - return str(flops) + ' FLOPs' - else: - if units == 'GFLOPs': - return str(round(flops / 10.**9, precision)) + ' ' + units - elif units == 'MFLOPs': - return str(round(flops / 10.**6, precision)) + ' ' + units - elif units == 'KFLOPs': - return str(round(flops / 10.**3, precision)) + ' ' + units - else: - return str(flops) + ' FLOPs' - - -def params_to_string(num_params, units=None, precision=2): - """Convert parameter number into a string. - - Args: - num_params (float): Parameter number to be converted. - units (str | None): Converted FLOPs units. Options are None, 'M', - 'K' and ''. If set to None, it will automatically choose the most - suitable unit for Parameter number. Default: None. - precision (int): Digit number after the decimal point. Default: 2. - - Returns: - str: The converted parameter number with units. - - Examples: - >>> params_to_string(1e9) - '1000.0 M' - >>> params_to_string(2e5) - '200.0 k' - >>> params_to_string(3e-9) - '3e-09' - """ - if units is None: - if num_params // 10**6 > 0: - return str(round(num_params / 10**6, precision)) + ' M' - elif num_params // 10**3: - return str(round(num_params / 10**3, precision)) + ' k' - else: - return str(num_params) - else: - if units == 'M': - return str(round(num_params / 10.**6, precision)) + ' ' + units - elif units == 'K': - return str(round(num_params / 10.**3, precision)) + ' ' + units - else: - return str(num_params) - - -def print_model_with_flops(model, - total_flops, - total_params, - units='GFLOPs', - precision=3, - ost=sys.stdout, - flush=False): - """Print a model with FLOPs for each layer. - - Args: - model (nn.Module): The model to be printed. - total_flops (float): Total FLOPs of the model. - total_params (float): Total parameter counts of the model. - units (str | None): Converted FLOPs units. Default: 'GFLOPs'. - precision (int): Digit number after the decimal point. Default: 3. - ost (stream): same as `file` param in :func:`print`. - Default: sys.stdout. - flush (bool): same as that in :func:`print`. Default: False. 
- - Example: - >>> class ExampleModel(nn.Module): - - >>> def __init__(self): - >>> super().__init__() - >>> self.conv1 = nn.Conv2d(3, 8, 3) - >>> self.conv2 = nn.Conv2d(8, 256, 3) - >>> self.conv3 = nn.Conv2d(256, 8, 3) - >>> self.avg_pool = nn.AdaptiveAvgPool2d((1, 1)) - >>> self.flatten = nn.Flatten() - >>> self.fc = nn.Linear(8, 1) - - >>> def forward(self, x): - >>> x = self.conv1(x) - >>> x = self.conv2(x) - >>> x = self.conv3(x) - >>> x = self.avg_pool(x) - >>> x = self.flatten(x) - >>> x = self.fc(x) - >>> return x - - >>> model = ExampleModel() - >>> x = (3, 16, 16) - to print the complexity information state for each layer, you can use - >>> get_model_complexity_info(model, x) - or directly use - >>> print_model_with_flops(model, 4579784.0, 37361) - ExampleModel( - 0.037 M, 100.000% Params, 0.005 GFLOPs, 100.000% FLOPs, - (conv1): Conv2d(0.0 M, 0.600% Params, 0.0 GFLOPs, 0.959% FLOPs, 3, 8, kernel_size=(3, 3), stride=(1, 1)) # noqa: E501 - (conv2): Conv2d(0.019 M, 50.020% Params, 0.003 GFLOPs, 58.760% FLOPs, 8, 256, kernel_size=(3, 3), stride=(1, 1)) - (conv3): Conv2d(0.018 M, 49.356% Params, 0.002 GFLOPs, 40.264% FLOPs, 256, 8, kernel_size=(3, 3), stride=(1, 1)) - (avg_pool): AdaptiveAvgPool2d(0.0 M, 0.000% Params, 0.0 GFLOPs, 0.017% FLOPs, output_size=(1, 1)) - (flatten): Flatten(0.0 M, 0.000% Params, 0.0 GFLOPs, 0.000% FLOPs, ) - (fc): Linear(0.0 M, 0.024% Params, 0.0 GFLOPs, 0.000% FLOPs, in_features=8, out_features=1, bias=True) - ) - """ - - def accumulate_params(self): - if is_supported_instance(self): - return self.__params__ - else: - sum = 0 - for m in self.children(): - sum += m.accumulate_params() - return sum - - def accumulate_flops(self): - if is_supported_instance(self): - return self.__flops__ / model.__batch_counter__ - else: - sum = 0 - for m in self.children(): - sum += m.accumulate_flops() - return sum - - def flops_repr(self): - accumulated_num_params = self.accumulate_params() - accumulated_flops_cost = self.accumulate_flops() - return ', '.join([ - params_to_string( - accumulated_num_params, units='M', precision=precision), - '{:.3%} Params'.format(accumulated_num_params / total_params), - flops_to_string( - accumulated_flops_cost, units=units, precision=precision), - '{:.3%} FLOPs'.format(accumulated_flops_cost / total_flops), - self.original_extra_repr() - ]) - - def add_extra_repr(m): - m.accumulate_flops = accumulate_flops.__get__(m) - m.accumulate_params = accumulate_params.__get__(m) - flops_extra_repr = flops_repr.__get__(m) - if m.extra_repr != flops_extra_repr: - m.original_extra_repr = m.extra_repr - m.extra_repr = flops_extra_repr - assert m.extra_repr != m.original_extra_repr - - def del_extra_repr(m): - if hasattr(m, 'original_extra_repr'): - m.extra_repr = m.original_extra_repr - del m.original_extra_repr - if hasattr(m, 'accumulate_flops'): - del m.accumulate_flops - - model.apply(add_extra_repr) - print(model, file=ost, flush=flush) - model.apply(del_extra_repr) - - -def get_model_parameters_number(model): - """Calculate parameter number of a model. - - Args: - model (nn.module): The model for parameter number calculation. - - Returns: - float: Parameter number of the model. 
- """ - num_params = sum(p.numel() for p in model.parameters() if p.requires_grad) - return num_params - - -def add_flops_counting_methods(net_main_module): - # adding additional methods to the existing module object, - # this is done this way so that each function has access to self object - net_main_module.start_flops_count = start_flops_count.__get__( - net_main_module) - net_main_module.stop_flops_count = stop_flops_count.__get__( - net_main_module) - net_main_module.reset_flops_count = reset_flops_count.__get__( - net_main_module) - net_main_module.compute_average_flops_cost = compute_average_flops_cost.__get__( # noqa: E501 - net_main_module) - - net_main_module.reset_flops_count() - - return net_main_module - - -def compute_average_flops_cost(self): - """Compute average FLOPs cost. - - A method to compute average FLOPs cost, which will be available after - `add_flops_counting_methods()` is called on a desired net object. - - Returns: - float: Current mean flops consumption per image. - """ - batches_count = self.__batch_counter__ - flops_sum = 0 - for module in self.modules(): - if is_supported_instance(module): - flops_sum += module.__flops__ - params_sum = get_model_parameters_number(self) - return flops_sum / batches_count, params_sum - - -def start_flops_count(self): - """Activate the computation of mean flops consumption per image. - - A method to activate the computation of mean flops consumption per image. - which will be available after ``add_flops_counting_methods()`` is called on - a desired net object. It should be called before running the network. - """ - add_batch_counter_hook_function(self) - - def add_flops_counter_hook_function(module): - if is_supported_instance(module): - if hasattr(module, '__flops_handle__'): - return - - else: - handle = module.register_forward_hook( - get_modules_mapping()[type(module)]) - - module.__flops_handle__ = handle - - self.apply(partial(add_flops_counter_hook_function)) - - -def stop_flops_count(self): - """Stop computing the mean flops consumption per image. - - A method to stop computing the mean flops consumption per image, which will - be available after ``add_flops_counting_methods()`` is called on a desired - net object. It can be called to pause the computation whenever. - """ - remove_batch_counter_hook_function(self) - self.apply(remove_flops_counter_hook_function) - - -def reset_flops_count(self): - """Reset statistics computed so far. - - A method to Reset computed statistics, which will be available after - `add_flops_counting_methods()` is called on a desired net object. 
- """ - add_batch_counter_variables_or_reset(self) - self.apply(add_flops_counter_variable_or_reset) - - -# ---- Internal functions -def empty_flops_counter_hook(module, input, output): - module.__flops__ += 0 - - -def upsample_flops_counter_hook(module, input, output): - output_size = output[0] - batch_size = output_size.shape[0] - output_elements_count = batch_size - for val in output_size.shape[1:]: - output_elements_count *= val - module.__flops__ += int(output_elements_count) - - -def relu_flops_counter_hook(module, input, output): - active_elements_count = output.numel() - module.__flops__ += int(active_elements_count) - - -def linear_flops_counter_hook(module, input, output): - input = input[0] - output_last_dim = output.shape[ - -1] # pytorch checks dimensions, so here we don't care much - module.__flops__ += int(np.prod(input.shape) * output_last_dim) - - -def pool_flops_counter_hook(module, input, output): - input = input[0] - module.__flops__ += int(np.prod(input.shape)) - - -def norm_flops_counter_hook(module, input, output): - input = input[0] - - batch_flops = np.prod(input.shape) - if (getattr(module, 'affine', False) - or getattr(module, 'elementwise_affine', False)): - batch_flops *= 2 - module.__flops__ += int(batch_flops) - - -def deconv_flops_counter_hook(conv_module, input, output): - # Can have multiple inputs, getting the first one - input = input[0] - - batch_size = input.shape[0] - input_height, input_width = input.shape[2:] - - kernel_height, kernel_width = conv_module.kernel_size - in_channels = conv_module.in_channels - out_channels = conv_module.out_channels - groups = conv_module.groups - - filters_per_channel = out_channels // groups - conv_per_position_flops = ( - kernel_height * kernel_width * in_channels * filters_per_channel) - - active_elements_count = batch_size * input_height * input_width - overall_conv_flops = conv_per_position_flops * active_elements_count - bias_flops = 0 - if conv_module.bias is not None: - output_height, output_width = output.shape[2:] - bias_flops = out_channels * batch_size * output_height * output_height - overall_flops = overall_conv_flops + bias_flops - - conv_module.__flops__ += int(overall_flops) - - -def conv_flops_counter_hook(conv_module, input, output): - # Can have multiple inputs, getting the first one - input = input[0] - - batch_size = input.shape[0] - output_dims = list(output.shape[2:]) - - kernel_dims = list(conv_module.kernel_size) - in_channels = conv_module.in_channels - out_channels = conv_module.out_channels - groups = conv_module.groups - - filters_per_channel = out_channels // groups - conv_per_position_flops = int( - np.prod(kernel_dims)) * in_channels * filters_per_channel - - active_elements_count = batch_size * int(np.prod(output_dims)) - - overall_conv_flops = conv_per_position_flops * active_elements_count - - bias_flops = 0 - - if conv_module.bias is not None: - - bias_flops = out_channels * active_elements_count - - overall_flops = overall_conv_flops + bias_flops - - conv_module.__flops__ += int(overall_flops) - - -def batch_counter_hook(module, input, output): - batch_size = 1 - if len(input) > 0: - # Can have multiple inputs, getting the first one - input = input[0] - batch_size = len(input) - else: - pass - print('Warning! 
No positional inputs found for a module, ' - 'assuming batch size is 1.') - module.__batch_counter__ += batch_size - - -def add_batch_counter_variables_or_reset(module): - - module.__batch_counter__ = 0 - - -def add_batch_counter_hook_function(module): - if hasattr(module, '__batch_counter_handle__'): - return - - handle = module.register_forward_hook(batch_counter_hook) - module.__batch_counter_handle__ = handle - - -def remove_batch_counter_hook_function(module): - if hasattr(module, '__batch_counter_handle__'): - module.__batch_counter_handle__.remove() - del module.__batch_counter_handle__ - - -def add_flops_counter_variable_or_reset(module): - if is_supported_instance(module): - if hasattr(module, '__flops__') or hasattr(module, '__params__'): - print('Warning: variables __flops__ or __params__ are already ' - 'defined for the module' + type(module).__name__ + - ' ptflops can affect your code!') - module.__flops__ = 0 - module.__params__ = get_model_parameters_number(module) - - -def is_supported_instance(module): - if type(module) in get_modules_mapping(): - return True - return False - - -def remove_flops_counter_hook_function(module): - if is_supported_instance(module): - if hasattr(module, '__flops_handle__'): - module.__flops_handle__.remove() - del module.__flops_handle__ - - -def get_modules_mapping(): - return { - # convolutions - nn.Conv1d: conv_flops_counter_hook, - nn.Conv2d: conv_flops_counter_hook, - mmcv.cnn.bricks.Conv2d: conv_flops_counter_hook, - nn.Conv3d: conv_flops_counter_hook, - mmcv.cnn.bricks.Conv3d: conv_flops_counter_hook, - # activations - nn.ReLU: relu_flops_counter_hook, - nn.PReLU: relu_flops_counter_hook, - nn.ELU: relu_flops_counter_hook, - nn.LeakyReLU: relu_flops_counter_hook, - nn.ReLU6: relu_flops_counter_hook, - # poolings - nn.MaxPool1d: pool_flops_counter_hook, - nn.AvgPool1d: pool_flops_counter_hook, - nn.AvgPool2d: pool_flops_counter_hook, - nn.MaxPool2d: pool_flops_counter_hook, - mmcv.cnn.bricks.MaxPool2d: pool_flops_counter_hook, - nn.MaxPool3d: pool_flops_counter_hook, - mmcv.cnn.bricks.MaxPool3d: pool_flops_counter_hook, - nn.AvgPool3d: pool_flops_counter_hook, - nn.AdaptiveMaxPool1d: pool_flops_counter_hook, - nn.AdaptiveAvgPool1d: pool_flops_counter_hook, - nn.AdaptiveMaxPool2d: pool_flops_counter_hook, - nn.AdaptiveAvgPool2d: pool_flops_counter_hook, - nn.AdaptiveMaxPool3d: pool_flops_counter_hook, - nn.AdaptiveAvgPool3d: pool_flops_counter_hook, - # normalizations - nn.BatchNorm1d: norm_flops_counter_hook, - nn.BatchNorm2d: norm_flops_counter_hook, - nn.BatchNorm3d: norm_flops_counter_hook, - nn.GroupNorm: norm_flops_counter_hook, - nn.InstanceNorm1d: norm_flops_counter_hook, - nn.InstanceNorm2d: norm_flops_counter_hook, - nn.InstanceNorm3d: norm_flops_counter_hook, - nn.LayerNorm: norm_flops_counter_hook, - # FC - nn.Linear: linear_flops_counter_hook, - mmcv.cnn.bricks.Linear: linear_flops_counter_hook, - # Upscale - nn.Upsample: upsample_flops_counter_hook, - # Deconvolution - nn.ConvTranspose2d: deconv_flops_counter_hook, - mmcv.cnn.bricks.ConvTranspose2d: deconv_flops_counter_hook, - } diff --git a/spaces/PSLD/PSLD/stable-diffusion/scripts/sample_diffusion.py b/spaces/PSLD/PSLD/stable-diffusion/scripts/sample_diffusion.py deleted file mode 100644 index 876fe3c3642fcc8c7209e4f763c0134166615f78..0000000000000000000000000000000000000000 --- a/spaces/PSLD/PSLD/stable-diffusion/scripts/sample_diffusion.py +++ /dev/null @@ -1,313 +0,0 @@ -import argparse, os, sys, glob, datetime, yaml -import torch -import time -import numpy as np -from 
tqdm import trange - -from omegaconf import OmegaConf -from PIL import Image - -from ldm.models.diffusion.ddim import DDIMSampler -from ldm.util import instantiate_from_config - -rescale = lambda x: (x + 1.) / 2. - -def custom_to_pil(x): - x = x.detach().cpu() - x = torch.clamp(x, -1., 1.) - x = (x + 1.) / 2. - x = x.permute(1, 2, 0).numpy() - x = (255 * x).astype(np.uint8) - x = Image.fromarray(x) - if not x.mode == "RGB": - x = x.convert("RGB") - return x - - -def custom_to_np(x): - # saves the batch in adm style as in https://github.com/openai/guided-diffusion/blob/main/scripts/image_sample.py - sample = x.detach().cpu() - sample = ((sample + 1) * 127.5).clamp(0, 255).to(torch.uint8) - sample = sample.permute(0, 2, 3, 1) - sample = sample.contiguous() - return sample - - -def logs2pil(logs, keys=["sample"]): - imgs = dict() - for k in logs: - try: - if len(logs[k].shape) == 4: - img = custom_to_pil(logs[k][0, ...]) - elif len(logs[k].shape) == 3: - img = custom_to_pil(logs[k]) - else: - print(f"Unknown format for key {k}. ") - img = None - except: - img = None - imgs[k] = img - return imgs - - -@torch.no_grad() -def convsample(model, shape, return_intermediates=True, - verbose=True, - make_prog_row=False): - - - if not make_prog_row: - return model.p_sample_loop(None, shape, - return_intermediates=return_intermediates, verbose=verbose) - else: - return model.progressive_denoising( - None, shape, verbose=True - ) - - -@torch.no_grad() -def convsample_ddim(model, steps, shape, eta=1.0 - ): - ddim = DDIMSampler(model) - bs = shape[0] - shape = shape[1:] - samples, intermediates = ddim.sample(steps, batch_size=bs, shape=shape, eta=eta, verbose=False,) - return samples, intermediates - - -@torch.no_grad() -def make_convolutional_sample(model, batch_size, vanilla=False, custom_steps=None, eta=1.0,): - - - log = dict() - - shape = [batch_size, - model.model.diffusion_model.in_channels, - model.model.diffusion_model.image_size, - model.model.diffusion_model.image_size] - - with model.ema_scope("Plotting"): - t0 = time.time() - if vanilla: - sample, progrow = convsample(model, shape, - make_prog_row=True) - else: - sample, intermediates = convsample_ddim(model, steps=custom_steps, shape=shape, - eta=eta) - - t1 = time.time() - - x_sample = model.decode_first_stage(sample) - - log["sample"] = x_sample - log["time"] = t1 - t0 - log['throughput'] = sample.shape[0] / (t1 - t0) - print(f'Throughput for this batch: {log["throughput"]}') - return log - -def run(model, logdir, batch_size=50, vanilla=False, custom_steps=None, eta=None, n_samples=50000, nplog=None): - if vanilla: - print(f'Using Vanilla DDPM sampling with {model.num_timesteps} sampling steps.') - else: - print(f'Using DDIM sampling with {custom_steps} sampling steps and eta={eta}') - - - tstart = time.time() - n_saved = len(glob.glob(os.path.join(logdir,'*.png')))-1 - # path = logdir - if model.cond_stage_model is None: - all_images = [] - - print(f"Running unconditional sampling for {n_samples} samples") - for _ in trange(n_samples // batch_size, desc="Sampling Batches (unconditional)"): - logs = make_convolutional_sample(model, batch_size=batch_size, - vanilla=vanilla, custom_steps=custom_steps, - eta=eta) - n_saved = save_logs(logs, logdir, n_saved=n_saved, key="sample") - all_images.extend([custom_to_np(logs["sample"])]) - if n_saved >= n_samples: - print(f'Finish after generating {n_saved} samples') - break - all_img = np.concatenate(all_images, axis=0) - all_img = all_img[:n_samples] - shape_str = "x".join([str(x) for x in 
all_img.shape]) - nppath = os.path.join(nplog, f"{shape_str}-samples.npz") - np.savez(nppath, all_img) - - else: - raise NotImplementedError('Currently only sampling for unconditional models supported.') - - print(f"sampling of {n_saved} images finished in {(time.time() - tstart) / 60.:.2f} minutes.") - - -def save_logs(logs, path, n_saved=0, key="sample", np_path=None): - for k in logs: - if k == key: - batch = logs[key] - if np_path is None: - for x in batch: - img = custom_to_pil(x) - imgpath = os.path.join(path, f"{key}_{n_saved:06}.png") - img.save(imgpath) - n_saved += 1 - else: - npbatch = custom_to_np(batch) - shape_str = "x".join([str(x) for x in npbatch.shape]) - nppath = os.path.join(np_path, f"{n_saved}-{shape_str}-samples.npz") - np.savez(nppath, npbatch) - n_saved += npbatch.shape[0] - return n_saved - - -def get_parser(): - parser = argparse.ArgumentParser() - parser.add_argument( - "-r", - "--resume", - type=str, - nargs="?", - help="load from logdir or checkpoint in logdir", - ) - parser.add_argument( - "-n", - "--n_samples", - type=int, - nargs="?", - help="number of samples to draw", - default=50000 - ) - parser.add_argument( - "-e", - "--eta", - type=float, - nargs="?", - help="eta for ddim sampling (0.0 yields deterministic sampling)", - default=1.0 - ) - parser.add_argument( - "-v", - "--vanilla_sample", - default=False, - action='store_true', - help="vanilla sampling (default option is DDIM sampling)?", - ) - parser.add_argument( - "-l", - "--logdir", - type=str, - nargs="?", - help="extra logdir", - default="none" - ) - parser.add_argument( - "-c", - "--custom_steps", - type=int, - nargs="?", - help="number of steps for ddim and fastdpm sampling", - default=50 - ) - parser.add_argument( - "--batch_size", - type=int, - nargs="?", - help="the bs", - default=10 - ) - return parser - - -def load_model_from_config(config, sd): - model = instantiate_from_config(config) - model.load_state_dict(sd,strict=False) - model.cuda() - model.eval() - return model - - -def load_model(config, ckpt, gpu, eval_mode): - if ckpt: - print(f"Loading model from {ckpt}") - pl_sd = torch.load(ckpt, map_location="cpu") - global_step = pl_sd["global_step"] - else: - pl_sd = {"state_dict": None} - global_step = None - model = load_model_from_config(config.model, - pl_sd["state_dict"]) - - return model, global_step - - -if __name__ == "__main__": - now = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S") - sys.path.append(os.getcwd()) - command = " ".join(sys.argv) - - parser = get_parser() - opt, unknown = parser.parse_known_args() - ckpt = None - - if not os.path.exists(opt.resume): - raise ValueError("Cannot find {}".format(opt.resume)) - if os.path.isfile(opt.resume): - # paths = opt.resume.split("/") - try: - logdir = '/'.join(opt.resume.split('/')[:-1]) - # idx = len(paths)-paths[::-1].index("logs")+1 - print(f'Logdir is {logdir}') - except ValueError: - paths = opt.resume.split("/") - idx = -2 # take a guess: path/to/logdir/checkpoints/model.ckpt - logdir = "/".join(paths[:idx]) - ckpt = opt.resume - else: - assert os.path.isdir(opt.resume), f"{opt.resume} is not a directory" - logdir = opt.resume.rstrip("/") - ckpt = os.path.join(logdir, "model.ckpt") - - base_configs = sorted(glob.glob(os.path.join(logdir, "config.yaml"))) - opt.base = base_configs - - configs = [OmegaConf.load(cfg) for cfg in opt.base] - cli = OmegaConf.from_dotlist(unknown) - config = OmegaConf.merge(*configs, cli) - - gpu = True - eval_mode = True - - if opt.logdir != "none": - locallog = logdir.split(os.sep)[-1] - 
if locallog == "": locallog = logdir.split(os.sep)[-2] - print(f"Switching logdir from '{logdir}' to '{os.path.join(opt.logdir, locallog)}'") - logdir = os.path.join(opt.logdir, locallog) - - print(config) - - model, global_step = load_model(config, ckpt, gpu, eval_mode) - print(f"global step: {global_step}") - print(75 * "=") - print("logging to:") - logdir = os.path.join(logdir, "samples", f"{global_step:08}", now) - imglogdir = os.path.join(logdir, "img") - numpylogdir = os.path.join(logdir, "numpy") - - os.makedirs(imglogdir) - os.makedirs(numpylogdir) - print(logdir) - print(75 * "=") - - # write config out - sampling_file = os.path.join(logdir, "sampling_config.yaml") - sampling_conf = vars(opt) - - with open(sampling_file, 'w') as f: - yaml.dump(sampling_conf, f, default_flow_style=False) - print(sampling_conf) - - - run(model, imglogdir, eta=opt.eta, - vanilla=opt.vanilla_sample, n_samples=opt.n_samples, custom_steps=opt.custom_steps, - batch_size=opt.batch_size, nplog=numpylogdir) - - print("done.") diff --git a/spaces/Parantonio/IA_voices/index.html b/spaces/Parantonio/IA_voices/index.html deleted file mode 100644 index 58275de3b1c343a98420342baa076b9baaafa157..0000000000000000000000000000000000000000 --- a/spaces/Parantonio/IA_voices/index.html +++ /dev/null @@ -1,19 +0,0 @@ -<!DOCTYPE html> -<html> - <head> - <meta charset="utf-8" /> - <meta name="viewport" content="width=device-width" /> - <title>My static Space - - - -
-		<div class="card">
-			<h1>Welcome to your static Space!</h1>
-			<p>You can modify this app directly by editing <i>index.html</i> in the <i>Files and versions</i> tab.</p>
-			<p>
-				Also don't forget to check the
-				<a href="https://huggingface.co/docs/hub/spaces" target="_blank">Spaces documentation</a>.
-			</p>
-		</div>
-	</body>
-</html>
- - diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/tree-il/primitives.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/tree-il/primitives.go deleted file mode 100644 index 270f1a9e89f8ecbc7ce50b24febd00ef86393894..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/tree-il/primitives.go and /dev/null differ diff --git a/spaces/Pfs2021Funny/Basunat-Cinematic-Diffusion_demo/README.md b/spaces/Pfs2021Funny/Basunat-Cinematic-Diffusion_demo/README.md deleted file mode 100644 index 732c6fa5fadc85bdb4b5f18849731b43231a0cb8..0000000000000000000000000000000000000000 --- a/spaces/Pfs2021Funny/Basunat-Cinematic-Diffusion_demo/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Basunat-Cinematic-Diffusion Demo -emoji: 📊 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Phips/upscale_demo/app.py b/spaces/Phips/upscale_demo/app.py deleted file mode 100644 index a10bdf4fa8bb80d12a3b584a008db7f545afcb7d..0000000000000000000000000000000000000000 --- a/spaces/Phips/upscale_demo/app.py +++ /dev/null @@ -1,271 +0,0 @@ -# Code taken (and slightly adopted) from https://huggingface.co/spaces/havas79/Real-ESRGAN_Demo/blob/main/app.py - credit where credit is due. I am not showcasing code here, but demoing my own trained models ;) - -import gradio as gr -import cv2 -import numpy -import os -import random -from basicsr.archs.rrdbnet_arch import RRDBNet -from basicsr.utils.download_util import load_file_from_url - -from realesrgan import RealESRGANer -from realesrgan.archs.srvgg_arch import SRVGGNetCompact - -last_file = None -img_mode = "RGBA" - -def realesrgan(img, model_name, face_enhance): - global last_file - - # remove last upscale when doing this new upscale to prevent memory being full - if last_file: - print(f"Deleting {last_file} ...") - os.remove(last_file) - last_file = None - - if not img: - return - - imgwidth, imgheight = img.size - - if imgwidth > 1000 or imgheight > 1000: - return error("Input Image too big") - - # Define model parameters - if model_name == '4xNomos8kSC': - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) - netscale = 4 - elif model_name == '4xHFA2k': - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) - netscale = 4 - elif model_name == '4xLSDIR': - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) - netscale = 4 - elif model_name == '4xLSDIRplusN': - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) - netscale = 4 - elif model_name == '4xLSDIRplusC': - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) - netscale = 4 - elif model_name == '4xLSDIRplusR': - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) - netscale = 4 - elif model_name == '2xParimgCompact': - model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=2, act_type='prelu') - netscale = 2 - elif model_name == '2xHFA2kCompact': - model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=2, act_type='prelu') - netscale = 2 - elif model_name == '4xLSDIRCompactN': - model = 
SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4, act_type='prelu') - netscale = 4 - elif model_name == '4xLSDIRCompactC3': - model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4, act_type='prelu') - netscale = 4 - elif model_name == '4xLSDIRCompactR3': - model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4, act_type='prelu') - netscale = 4 - - # Determine model paths - model_path = os.path.join('weights', model_name + '.pth') - - # Restorer Class - upsampler = RealESRGANer( - scale=netscale, - model_path=model_path, - dni_weight=None, - model=model, - tile=128, - tile_pad=10, - pre_pad=10, - half=False, - gpu_id=None, - ) - - # Use GFPGAN for face enhancement - if face_enhance: - from gfpgan import GFPGANer - face_enhancer = GFPGANer( - model_path='https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth', - upscale=netscale, - arch='clean', - channel_multiplier=2, - bg_upsampler=upsampler) - - # Convert the input PIL image to cv2 image, so that it can be processed by realesrgan - cv_img = numpy.array(img) - img = cv2.cvtColor(cv_img, cv2.COLOR_RGBA2BGRA) - - # Apply restoration - try: - if face_enhance: - _, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True) - else: - output, _ = upsampler.enhance(img, netscale) - except RuntimeError as error: - print('Error', error) - print('If you encounter CUDA out of memory, try to set --tile with a smaller number.') - else: - # Save restored image and return it to the output Image component - extension = 'jpg' - out_filename = f"output_{rnd_string(16)}.{extension}" - cv2.imwrite(out_filename, output) - last_file = out_filename - return out_filename - - -def rnd_string(x): - """Returns a string of 'x' random characters - """ - characters = "abcdefghijklmnopqrstuvwxyz_0123456789" - result = "".join((random.choice(characters)) for i in range(x)) - return result - - -def reset(): - """Resets the Image components of the Gradio interface and deletes - the last processed image - """ - global last_file - if last_file: - print(f"Deleting {last_file} ...") - os.remove(last_file) - last_file = None - return gr.update(value=None), gr.update(value=None) - - -def has_transparency(img): - """This function works by first checking to see if a "transparency" property is defined - in the image's info -- if so, we return "True". Then, if the image is using indexed colors - (such as in GIFs), it gets the index of the transparent color in the palette - (img.info.get("transparency", -1)) and checks if it's used anywhere in the canvas - (img.getcolors()). If the image is in RGBA mode, then presumably it has transparency in - it, but it double-checks by getting the minimum and maximum values of every color channel - (img.getextrema()), and checks if the alpha channel's smallest value falls below 255. 
- https://stackoverflow.com/questions/43864101/python-pil-check-if-image-is-transparent - """ - if img.info.get("transparency", None) is not None: - return True - if img.mode == "P": - transparent = img.info.get("transparency", -1) - for _, index in img.getcolors(): - if index == transparent: - return True - elif img.mode == "RGBA": - extrema = img.getextrema() - if extrema[3][0] < 255: - return True - return False - - -def image_properties(img): - """Returns the dimensions (width and height) and color mode of the input image and - also sets the global img_mode variable to be used by the realesrgan function - """ - global img_mode - if img: - if has_transparency(img): - img_mode = "RGBA" - else: - img_mode = "RGB" - properties = f"Width: {img.size[0]}, Height: {img.size[1]} | Color Mode: {img_mode}" - return properties - - -def main(): - # Gradio Interface - with gr.Blocks(title="Self-trained ESRGAN models demo", theme="dark") as demo: - - gr.Markdown( - """#
Upscale image
- Here I demo some of my self-trained models (only those trained on the SRVGGNet or RRDBNet archs). All my self-trained models can be found on the [openmodeldb](https://openmodeldb.info/?q=Helaman&sort=date-desc) or on [my github repo](https://github.com/phhofm/models). - """ - ) - - with gr.Group(): - with gr.Group(): - model_name = gr.Dropdown(label="Model to be used", - choices=["2xHFA2kCompact", "2xParimgCompact", "4xLSDIRCompactN", "4xLSDIRCompactC3", "4xLSDIRCompactR3", "4xNomos8kSC", "4xHFA2k", "4xLSDIR", "4xLSDIRplusN", "4xLSDIRplusC", "4xLSDIRplusR"], value="4xLSDIRCompactC3", - info="See model infos at the bottom of this page") - face_enhance = gr.Checkbox(label="Face Enhancement using GFPGAN (Doesn't work for anime images)",value=False, show_label=True) - - with gr.Group(): - input_image = gr.Image(label="Source Image", type="pil", image_mode="RGB") - input_image_properties = gr.Textbox(label="Image Properties - Demo will throw error if input image has either width or height > 1000. Output download is jpg for smaller size. Use models locally to circument these limits.", max_lines=1) - with gr.Group(): - output_image = gr.Image(label="Upscaled Image", type="pil", image_mode="RGB", interactive=False) - output_image_properties = gr.Textbox(label="Image Properties", max_lines=1) - with gr.Row(): - upscale_btn = gr.Button("Upscale") - reset_btn = gr.Button("Reset") - with gr.Group(): - gr.Markdown(""" **Examples are not pre-cached. You need to press the Upscale Button after selecting one**""") - gr.Examples(examples="examples",inputs=[input_image, model_name, face_enhance],outputs=output_image,fn=realesrgan, cache_examples=False) - gr.Markdown( - """ - **Model infos** - *SRVGGNetCompact models - in general faster, but less powerful, than RRDBNet* - 2xHFA2kCompact - use for upscaling anime images 2x, faster than 4xHFA2k but less powerful (SRVGGNetCompact) - 2xParimgCompact - upscaling photos 2x, fast (SRVGGNetCompact) - 4xLSDIRCompactN - upscale a good quality photo (no degradations) 4x, faster than 4xLSDIRN but less powerful (SRVGGNetCompact) - 4xLSDIRCompactC3 - upscale a jpg compressed photo 4x, fast (SRVGGNetCompact) - 4xLSDIRCompactR3 - upscale a degraded photo 4x, fast (SRVGGNetCompact) (too strong, best used for interpolation like 4xLSDIRCompactN (or C) 75% 4xLSDIRCompactR3 25% to add little degradation handling to the previous one) - - - - *RRDBNet models - in general more powerful than SRVGGNetCompact, but very slow in this demo* - 4xNomos8kSC - use for upscaling photos 4x or can also be tried out on anime - 4xHFA2k - use for upscaling anime images 4x - 4xLSDIR - upscale a good quality photo (no degradation) 4x - 4xLSDIRplusN - upscale a good quality photo (no degradation) 4x - 4xLSDIRplusC - upscale a jpg compressed photo 4x - 4xLSDIRplusR - upscale a degraded photo 4x (too strong, best used for interpolation like 4xLSDIRplusN (or C) 75% 4xLSDIRplusR 25% to add little degradation handling to the previous one) - - - - *Models that I trained that are not featured here, but available on [openmodeldb](https://openmodeldb.info/?q=Helaman&sort=date-desc) or on [github](https://github.com/phhofm/models):* - 4xNomos8kSCHAT-L - Photo upscaler (handles little bit of jpg compression and blur), [HAT-L](https://github.com/XPixelGroup/HAT) model (good output but very slow since huge) - 4xNomos8kSCHAT-S - Photo upscaler (handles little bit of jpg compression and blur), [HAT-S](https://github.com/XPixelGroup/HAT) model - 4xNomos8kSCSRFormer - Photo upscaler (handles little bit of jpg 
compression and blur), [SRFormer](https://github.com/HVision-NKU/SRFormer) base model (also good and slow since also big model) - 2xHFA2kAVCOmniSR - Anime frame upscaler that handles AVC (h264) video compression, [OmniSR](https://github.com/Francis0625/Omni-SR) model - 2xHFA2kAVCOmniSR_Sharp - Anime frame upscaler that handles AVC (h264) video compression with sharper outputs, [OmniSR](https://github.com/Francis0625/Omni-SR) model - 4xHFA2kAVCSRFormer_light - Anime frame upscaler that handles AVC (h264) video compression, [SRFormer](https://github.com/HVision-NKU/SRFormer) lightweight model - 2xHFA2kAVCEDSR_M - Anime frame upscaler that handles AVC (h264) video compression, [EDSR-M](https://github.com/LimBee/NTIRE2017) model - 2xHFA2kAVCCompact - Anime frame upscaler that handles AVC (h264) video compression, [SRVGGNet](https://github.com/xinntao/Real-ESRGAN) (also called Real-ESRGAN Compact) model - 4xHFA2kLUDVAESwinIR_light - Anime image upscaler that handles various realistic degradations, [SwinIR](https://github.com/JingyunLiang/SwinIR) light model - 4xHFA2kLUDVAEGRL_small - Anime image upscaler that handles various realistic degradations, [GRL](https://github.com/ofsoundof/GRL-Image-Restoration) small model - 4xHFA2kLUDVAESRFormer_light - Anime image upscaler that handles various realistic degradations, [SRFormer](https://github.com/HVision-NKU/SRFormer) light model - 4xLexicaHAT - An AI generated image upscaler, does not handle any degradations, [HAT](https://github.com/XPixelGroup/HAT) base model - 2xLexicaSwinIR - An AI generated image upscaler, does not handle any degradations, [SwinIR](https://github.com/JingyunLiang/SwinIR) base model - 2xLexicaRRDBNet - An AI generated image upscaler, does not handle any degradations, RRDBNet base model - 2xLexicaRRDBNet_Sharp - An AI generated image upscaler with sharper outputs, does not handle any degradations, RRDBNet base model - 4xHFA2kLUDVAESAFMN - dropped model since there were artifacts on the outputs when training with [SAFMN](https://github.com/sunny2109/SAFMN) arch - - - - *The following are not models I had trained, but rather interpolations I had created, they are available on my [repo](https://github.com/phhofm/models) and can be tried out locally with chaiNNer:* - 4xLSDIRplus (4xLSDIRplusC + 4xLSDIRplusR) - 4xLSDIRCompact3 (4xLSDIRCompactC3 + 4xLSDIRCompactR3) - 4xLSDIRCompact2 (4xLSDIRCompactC2 + 4xLSDIRCompactR2) - 4xInt-Ultracri (UltraSharp + Remacri) - 4xInt-Superscri (Superscale + Remacri) - 4xInt-Siacri(Siax + Remacri) - 4xInt-RemDF2K (Remacri + RealSR_DF2K_JPEG) - 4xInt-RemArt (Remacri + VolArt) - 4xInt-RemAnime (Remacri + AnimeSharp) - 4xInt-RemacRestore (Remacri + UltraMix_Restore) - 4xInt-AnimeArt (AnimeSharp + VolArt) - 2xInt-LD-AnimeJaNai (LD-Anime + AnimeJaNai) - """) - - # Event listeners: - input_image.change(fn=image_properties, inputs=input_image, outputs=input_image_properties) - output_image.change(fn=image_properties, inputs=output_image, outputs=output_image_properties) - upscale_btn.click(fn=realesrgan, inputs=[input_image, model_name, face_enhance], outputs=output_image) - reset_btn.click(fn=reset, inputs=[], outputs=[output_image, input_image]) - - demo.launch() - - -if __name__ == "__main__": - main() diff --git a/spaces/Priyanka-Kumavat/Supply-Chain/README.md b/spaces/Priyanka-Kumavat/Supply-Chain/README.md deleted file mode 100644 index c5abb170eafd910e7b99d999ef3eac1ce687ef89..0000000000000000000000000000000000000000 --- a/spaces/Priyanka-Kumavat/Supply-Chain/README.md +++ /dev/null @@ -1,12 +0,0 @@ 
---- -title: Supply Chain -emoji: 📚 -colorFrom: purple -colorTo: indigo -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Q-bert/FaceGAN/README.md b/spaces/Q-bert/FaceGAN/README.md deleted file mode 100644 index 96fdf623a0c10bbc764ed1fe04d1e3ba6799b1db..0000000000000000000000000000000000000000 --- a/spaces/Q-bert/FaceGAN/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: FaceGAN -emoji: 🏢 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/RedBaron5/PatentSolver/root_folder.py b/spaces/RedBaron5/PatentSolver/root_folder.py deleted file mode 100644 index 9149b88474ce11d51e769649c492b7c0d140ed5e..0000000000000000000000000000000000000000 --- a/spaces/RedBaron5/PatentSolver/root_folder.py +++ /dev/null @@ -1,5 +0,0 @@ -import os -import re - -ROOT = os.path.dirname(os.path.realpath(__file__)) -ROOT = re.sub(r'\\', '/', ROOT) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/losses/cross_entropy_loss.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/losses/cross_entropy_loss.py deleted file mode 100644 index 42c0790c98616bb69621deed55547fc04c7392ef..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/losses/cross_entropy_loss.py +++ /dev/null @@ -1,198 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import get_class_weight, weight_reduce_loss - - -def cross_entropy(pred, - label, - weight=None, - class_weight=None, - reduction='mean', - avg_factor=None, - ignore_index=-100): - """The wrapper function for :func:`F.cross_entropy`""" - # class_weight is a manual rescaling weight given to each class. - # If given, has to be a Tensor of size C element-wise losses - loss = F.cross_entropy( - pred, - label, - weight=class_weight, - reduction='none', - ignore_index=ignore_index) - - # apply weights and do the reduction - if weight is not None: - weight = weight.float() - loss = weight_reduce_loss( - loss, weight=weight, reduction=reduction, avg_factor=avg_factor) - - return loss - - -def _expand_onehot_labels(labels, label_weights, target_shape, ignore_index): - """Expand onehot labels to match the size of prediction.""" - bin_labels = labels.new_zeros(target_shape) - valid_mask = (labels >= 0) & (labels != ignore_index) - inds = torch.nonzero(valid_mask, as_tuple=True) - - if inds[0].numel() > 0: - if labels.dim() == 3: - bin_labels[inds[0], labels[valid_mask], inds[1], inds[2]] = 1 - else: - bin_labels[inds[0], labels[valid_mask]] = 1 - - valid_mask = valid_mask.unsqueeze(1).expand(target_shape).float() - if label_weights is None: - bin_label_weights = valid_mask - else: - bin_label_weights = label_weights.unsqueeze(1).expand(target_shape) - bin_label_weights *= valid_mask - - return bin_labels, bin_label_weights - - -def binary_cross_entropy(pred, - label, - weight=None, - reduction='mean', - avg_factor=None, - class_weight=None, - ignore_index=255): - """Calculate the binary CrossEntropy loss. - - Args: - pred (torch.Tensor): The prediction with shape (N, 1). - label (torch.Tensor): The learning label of the prediction. - weight (torch.Tensor, optional): Sample-wise loss weight. 
- reduction (str, optional): The method used to reduce the loss. - Options are "none", "mean" and "sum". - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - class_weight (list[float], optional): The weight for each class. - ignore_index (int | None): The label index to be ignored. Default: 255 - - Returns: - torch.Tensor: The calculated loss - """ - if pred.dim() != label.dim(): - assert (pred.dim() == 2 and label.dim() == 1) or ( - pred.dim() == 4 and label.dim() == 3), \ - 'Only pred shape [N, C], label shape [N] or pred shape [N, C, ' \ - 'H, W], label shape [N, H, W] are supported' - label, weight = _expand_onehot_labels(label, weight, pred.shape, - ignore_index) - - # weighted element-wise losses - if weight is not None: - weight = weight.float() - loss = F.binary_cross_entropy_with_logits( - pred, label.float(), pos_weight=class_weight, reduction='none') - # do the reduction for the weighted loss - loss = weight_reduce_loss( - loss, weight, reduction=reduction, avg_factor=avg_factor) - - return loss - - -def mask_cross_entropy(pred, - target, - label, - reduction='mean', - avg_factor=None, - class_weight=None, - ignore_index=None): - """Calculate the CrossEntropy loss for masks. - - Args: - pred (torch.Tensor): The prediction with shape (N, C), C is the number - of classes. - target (torch.Tensor): The learning label of the prediction. - label (torch.Tensor): ``label`` indicates the class label of the mask' - corresponding object. This will be used to select the mask in the - of the class which the object belongs to when the mask prediction - if not class-agnostic. - reduction (str, optional): The method used to reduce the loss. - Options are "none", "mean" and "sum". - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - class_weight (list[float], optional): The weight for each class. - ignore_index (None): Placeholder, to be consistent with other loss. - Default: None. - - Returns: - torch.Tensor: The calculated loss - """ - assert ignore_index is None, 'BCE loss does not support ignore_index' - # TODO: handle these two reserved arguments - assert reduction == 'mean' and avg_factor is None - num_rois = pred.size()[0] - inds = torch.arange(0, num_rois, dtype=torch.long, device=pred.device) - pred_slice = pred[inds, label].squeeze(1) - return F.binary_cross_entropy_with_logits( - pred_slice, target, weight=class_weight, reduction='mean')[None] - - -@LOSSES.register_module() -class CrossEntropyLoss(nn.Module): - """CrossEntropyLoss. - - Args: - use_sigmoid (bool, optional): Whether the prediction uses sigmoid - of softmax. Defaults to False. - use_mask (bool, optional): Whether to use mask cross entropy loss. - Defaults to False. - reduction (str, optional): . Defaults to 'mean'. - Options are "none", "mean" and "sum". - class_weight (list[float] | str, optional): Weight of each class. If in - str format, read them from a file. Defaults to None. - loss_weight (float, optional): Weight of the loss. Defaults to 1.0. 
- """ - - def __init__(self, - use_sigmoid=False, - use_mask=False, - reduction='mean', - class_weight=None, - loss_weight=1.0): - super(CrossEntropyLoss, self).__init__() - assert (use_sigmoid is False) or (use_mask is False) - self.use_sigmoid = use_sigmoid - self.use_mask = use_mask - self.reduction = reduction - self.loss_weight = loss_weight - self.class_weight = get_class_weight(class_weight) - - if self.use_sigmoid: - self.cls_criterion = binary_cross_entropy - elif self.use_mask: - self.cls_criterion = mask_cross_entropy - else: - self.cls_criterion = cross_entropy - - def forward(self, - cls_score, - label, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function.""" - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if self.class_weight is not None: - class_weight = cls_score.new_tensor(self.class_weight) - else: - class_weight = None - loss_cls = self.loss_weight * self.cls_criterion( - cls_score, - label, - weight, - class_weight=class_weight, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss_cls diff --git a/spaces/Sakil/LLM_Question_Answering_ChatBot/app.py b/spaces/Sakil/LLM_Question_Answering_ChatBot/app.py deleted file mode 100644 index 5dadd78603d2d26820cbd7b50c110520510c5472..0000000000000000000000000000000000000000 --- a/spaces/Sakil/LLM_Question_Answering_ChatBot/app.py +++ /dev/null @@ -1,85 +0,0 @@ -import streamlit as st -from langchain.document_loaders import PyPDFLoader, DirectoryLoader -from langchain import PromptTemplate -from langchain.embeddings import HuggingFaceEmbeddings -from langchain.vectorstores import FAISS -from langchain.llms import CTransformers -from langchain.chains import RetrievalQA -import chainlit as cl - -DB_FAISS_PATH = 'vectorstore/db_faiss' - -custom_prompt_template = """Use the following pieces of information to answer the user's question. -If you don't know the answer, just say that you don't know, don't try to make up an answer. -Context: {context} -Question: {question} -Only return the helpful answer below and nothing else. 
-Helpful answer: -""" - -def set_custom_prompt(): - """ - Prompt template for QA retrieval for each vectorstore - """ - prompt = PromptTemplate(template=custom_prompt_template, - input_variables=['context', 'question']) - return prompt - -# Retrieval QA Chain -def retrieval_qa_chain(llm, prompt, db): - qa_chain = RetrievalQA.from_chain_type(llm=llm, - chain_type='stuff', - retriever=db.as_retriever(search_kwargs={'k': 2}), - return_source_documents=True, - chain_type_kwargs={'prompt': prompt} - ) - return qa_chain - -# Loading the model -def load_llm(max_new_tokens, temperature): - # Load the locally downloaded model here - llm = CTransformers( - model="llama-2-7b-chat.ggmlv3.q8_0.bin", - model_type="llama", - max_new_tokens=max_new_tokens, - temperature=temperature - ) - return llm - -# QA Model Function -def qa_bot(max_new_tokens, temperature): - embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2", - model_kwargs={'device': 'cpu'}) - db = FAISS.load_local(DB_FAISS_PATH, embeddings) - llm = load_llm(max_new_tokens, temperature) - qa_prompt = set_custom_prompt() - qa = retrieval_qa_chain(llm, qa_prompt, db) - - return qa - -def main(): - st.title("AI ChatBot LLM") - - max_new_tokens = st.slider("Max New Tokens", min_value=1, max_value=1000, value=512) - temperature = st.slider("Temperature", min_value=0.1, max_value=1.0, step=0.1, value=0.5) - - qa_result = qa_bot(max_new_tokens, temperature) - - user_input = st.text_input("Enter your question:") - - if st.button("Ask"): - response = qa_result({'query': user_input}) - answer = response["result"] - sources = response["source_documents"] - - st.write("Answer:", answer) - if sources: - st.write("Sources:", sources) - else: - st.write("No sources found") - - if st.button("Clear"): - st.text_input("Enter your question:", value="") - -if __name__ == "__main__": - main() diff --git a/spaces/Sakukaze/VITS-Umamusume-voice-synthesizer/README.md b/spaces/Sakukaze/VITS-Umamusume-voice-synthesizer/README.md deleted file mode 100644 index 383c1b3bd49bc505f996f5655f3b504d9efa88ee..0000000000000000000000000000000000000000 --- a/spaces/Sakukaze/VITS-Umamusume-voice-synthesizer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Uma Voice -emoji: 🚀 -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 3.7 -app_file: app.py -pinned: false -duplicated_from: Plachta/VITS-Umamusume-voice-synthesizer ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Salesforce/BLIP2/README.md b/spaces/Salesforce/BLIP2/README.md deleted file mode 100644 index 31166de4dede8fccf1cbb5935adbb4c228443d90..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/BLIP2/README.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: BLIP2 -emoji: 🌖 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false -license: bsd-3-clause -models: - - Salesforce/blip2-opt-6.7b - - Salesforce/blip2-flan-t5-xxl ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SaltyFishAB/anime-ai-detect/app.py b/spaces/SaltyFishAB/anime-ai-detect/app.py deleted file mode 100644 index 89224ac0e4493054be928e7fabed7b9d0485e412..0000000000000000000000000000000000000000 --- a/spaces/SaltyFishAB/anime-ai-detect/app.py +++ /dev/null @@ -1,17 +0,0 @@ -import gradio as gr -from transformers import pipeline - -detection_pipeline = pipeline("image-classification", "saltacc/anime-ai-detect") - - -def 
detect(img): - print(img) - output = detection_pipeline(img, top_k=2) - final = {} - for d in output: - final[d["label"]] = d["score"] - return final - - -iface = gr.Interface(fn=detect, inputs=gr.Image(type="pil"), outputs=gr.Label(label="result")) -iface.launch() diff --git a/spaces/Samuelxm/WeatherBot/app.py b/spaces/Samuelxm/WeatherBot/app.py deleted file mode 100644 index bb5ca87934c4bc21841f16d9c2a768b7ddef0671..0000000000000000000000000000000000000000 --- a/spaces/Samuelxm/WeatherBot/app.py +++ /dev/null @@ -1,20 +0,0 @@ -# initial draft on April 9, 2023 -#------------------------------ -import streamlit as st -import requests - -API_KEY = '0dab07e2d2af0b9039f11cb7ae4cf66f' -BASE_URL = 'http://api.openweathermap.org/data/2.5/weather' - -st.title('AHWeatherBot') - -city = st.text_input('Enter a city name') -if city: - params = {'q': city, 'appid': API_KEY, 'units': 'metric'} - response = requests.get(BASE_URL, params=params) - data = response.json() - - st.write(f"Current weather in {city}:") - st.write(f"Temperature: {data['main']['temp']}°C") - st.write(f"Feels like: {data['main']['feels_like']}°C") - st.write(f"Humidity: {data['main']['humidity']}%") diff --git a/spaces/SeyedAli/Image-Similarity/src/util/image.py b/spaces/SeyedAli/Image-Similarity/src/util/image.py deleted file mode 100644 index a0bc2ecb330af5b09b983037f248e5414534d73f..0000000000000000000000000000000000000000 --- a/spaces/SeyedAli/Image-Similarity/src/util/image.py +++ /dev/null @@ -1,22 +0,0 @@ -from PIL import Image -import numpy as np -import requests - -def load_image_url(url, required_size = (224,224), image_type = 'array'): - print(f'downloading.. {url}, type: {image_type}') - img = Image.open(requests.get(url, stream=True).raw) - img = Image.fromarray(np.array(img)) - if required_size is not None: - img = img.resize(required_size) - if image_type == 'array': - img = (np.expand_dims(np.array(img), 0)/255).astype(np.float32) - return img - -def load_image_path(path, required_size = (224,224), image_type = 'array'): - img = Image.open(path) - img = Image.fromarray(np.array(img)) - if required_size is not None: - img = img.resize(required_size) - if image_type == 'array': - img = (np.expand_dims(np.array(img), 0)/255).astype(np.float32) - return img diff --git a/spaces/Shivam29rathore/shorter-finbert/app.py b/spaces/Shivam29rathore/shorter-finbert/app.py deleted file mode 100644 index 0707b6011d807cc6a884d5f16af54a2b071cbc82..0000000000000000000000000000000000000000 --- a/spaces/Shivam29rathore/shorter-finbert/app.py +++ /dev/null @@ -1,122 +0,0 @@ -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM -import pickle -import torch -from transformers import PegasusTokenizer, PegasusForConditionalGeneration -import tensorflow as tf -from tensorflow.python.lib.io import file_io -from nltk.tokenize import sent_tokenize - - -import io - - -#contents = pickle.load(f) becomes... 
-#contents = CPU_Unpickler(f).load() - - -model_path = "finbert.sav" - -#load model from drive -with open(model_path, "rb") as f: - model= pickle.load(f) - - - -#tokenizer = AutoTokenizer.from_pretrained(checkpoint) -#model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) - - -import nltk -from finbert_embedding.embedding import FinbertEmbedding -import pandas as pd -from nltk.cluster import KMeansClusterer -import numpy as np -import os -from scipy.spatial import distance_matrix -from tensorflow.python.lib.io import file_io -import pickle - -nltk.download('punkt') - - -def make_summary(word): - - # Create tokens from the txt file - tokens = nltk.sent_tokenize(word) - # Strip out trailing and leading white spaces from tokens - sentences = [word.strip() for word in tokens] - #Create a DataFrame from the tokens - data = pd.DataFrame(sentences) - # Assign name Sentences to the column containing text tokens - data.columns = ['Sentences'] - - # Function to create numerical embeddings for each text tokens in dataframe - def get_sentence_embeddings(): - # Create empty list for sentence embeddings - sentence_list = [] - # Loop through all sentences and append sentence embeddings to list - for i in tokens: - sentence_embedding = model.sentence_vector(i) - sentence_list.append(sentence_embedding) - # Create empty list for ndarray - sentence_array=[] - # Loop through sentence list and change data type from tensor to array - for i in sentence_list: - sentence_array.append(i.numpy()) - # return sentence embeddings as list - return sentence_array - - # Apply get_sentence_embeddings to dataframe to create column Embeddings - data['Embeddings'] = get_sentence_embeddings() - - #Number of expected sentences for shorter summaries - if len(tokens) <= 4: - NUM_CLUSTERS = 1 - else: - NUM_CLUSTERS = len(tokens)//4 - - iterations = 25 - # Convert Embeddings into an array and store in variable X - X = np.array(data['Embeddings'].to_list()) - - #Build k-means cluster algorithm - Kclusterer = KMeansClusterer( - NUM_CLUSTERS, - distance = nltk.cluster.util.cosine_distance, - repeats = iterations, avoid_empty_clusters = True) - - # if length of text is too short, K means would return an error - # use the try except block to return the text as result if it is too short. 
- try: - - assigned_clusters = Kclusterer.cluster(X,assign_clusters=True) - - # Apply Kmean Cluster to DataFrame and create new columns Clusters and Centroid - data['Cluster'] = pd.Series(assigned_clusters, index = data.index) - data['Centroid'] = data['Cluster'].apply(lambda x: Kclusterer.means()[x]) - - # return the text if clustering algorithm catches an exceptiona and move to the next text file - except ValueError: - return word - - # function that computes the distance of each embeddings from the centroid of the cluster - def distance_from_centroid(row): - return distance_matrix([row['Embeddings']], [row['Centroid'].tolist()])[0][0] - - # apply distance_from_centroid function to data - data['Distance_From_Centroid'] = data.apply(distance_from_centroid, axis =1) - - ## Return Final Summary - summary = " ".join(data.sort_values( - 'Distance_From_Centroid', - ascending = True).groupby('Cluster').head(1).sort_index()['Sentences'].tolist()) - return summary - -import gradio as gr - - - - -interface1 = gr.Interface(fn=make_summary, - inputs =gr.inputs.Textbox(lines=15,placeholder="Enter your text !!",label='Input-10k Sections'), - outputs=gr.outputs.Textbox(label='Output- Finbert')).launch() diff --git a/spaces/Shredder/CONBERT/fincat_utils.py b/spaces/Shredder/CONBERT/fincat_utils.py deleted file mode 100644 index 71f424ea468927c85eb43cf6218d8886e47bea89..0000000000000000000000000000000000000000 --- a/spaces/Shredder/CONBERT/fincat_utils.py +++ /dev/null @@ -1,110 +0,0 @@ -import pandas as pd -import numpy as np -import pickle -import torch -from torch.utils.data import Dataset, DataLoader -from transformers import BertTokenizer, BertModel -from transformers import AutoTokenizer, AutoModel -import nltk - -tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') -model = BertModel.from_pretrained('bert-base-uncased', output_hidden_states = True,) - -def extract_context_words(x, window = 6): - paragraph, offset_start, offset_end = x['paragraph'], x['offset_start'], x['offset_end'] - target_word = paragraph[offset_start : offset_end] - paragraph = ' ' + paragraph + ' ' - offset_start = offset_start + 1 - offset_end = offset_end + 1 - prev_space_posn = (paragraph[:offset_start].rindex(' ') + 1) - end_space_posn = (offset_end + paragraph[offset_end:].index(' ')) - full_word = paragraph[prev_space_posn : end_space_posn] - - prev_words = nltk.word_tokenize(paragraph[0:prev_space_posn]) - next_words = nltk.word_tokenize(paragraph[end_space_posn:]) - words_in_context_window = prev_words[-1*window:] + [full_word] + next_words[:window] - context_text = ' '.join(words_in_context_window) - return context_text - -"""The following functions have been created with inspiration from https://github.com/arushiprakash/MachineLearning/blob/main/BERT%20Word%20Embeddings.ipynb""" - -def bert_text_preparation(text, tokenizer): - """Preparing the input for BERT - - Takes a string argument and performs - pre-processing like adding special tokens, - tokenization, tokens to ids, and tokens to - segment ids. All tokens are mapped to seg- - ment id = 1. 
- - Args: - text (str): Text to be converted - tokenizer (obj): Tokenizer object - to convert text into BERT-re- - adable tokens and ids - - Returns: - list: List of BERT-readable tokens - obj: Torch tensor with token ids - obj: Torch tensor segment ids - - """ - marked_text = "[CLS] " + text + " [SEP]" - tokenized_text = tokenizer.tokenize(marked_text) - indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text) - segments_ids = [1]*len(indexed_tokens) - - # Convert inputs to PyTorch tensors - tokens_tensor = torch.tensor([indexed_tokens]) - segments_tensors = torch.tensor([segments_ids]) - - return tokenized_text, tokens_tensor, segments_tensors - -def get_bert_embeddings(tokens_tensor, segments_tensors, model): - """Get embeddings from an embedding model - - Args: - tokens_tensor (obj): Torch tensor size [n_tokens] - with token ids for each token in text - segments_tensors (obj): Torch tensor size [n_tokens] - with segment ids for each token in text - model (obj): Embedding model to generate embeddings - from token and segment ids - - Returns: - list: List of list of floats of size - [n_tokens, n_embedding_dimensions] - containing embeddings for each token - """ - - # Gradient calculation id disabled - # Model is in inference mode - with torch.no_grad(): - outputs = model(tokens_tensor, segments_tensors) - # Removing the first hidden state - # The first state is the input state - hidden_states = outputs[2][1:] - - # Getting embeddings from the final BERT layer - token_embeddings = hidden_states[-1] - # Collapsing the tensor into 1-dimension - token_embeddings = torch.squeeze(token_embeddings, dim=0) - # Converting torchtensors to lists - list_token_embeddings = [token_embed.tolist() for token_embed in token_embeddings] - - return list_token_embeddings - -def bert_embedding_extract(context_text, word): - tokenized_text, tokens_tensor, segments_tensors = bert_text_preparation(context_text, tokenizer) - list_token_embeddings = get_bert_embeddings(tokens_tensor, segments_tensors, model) - word_tokens,tt,st = bert_text_preparation(word, tokenizer) - word_embedding_all = [] - try: - for word_tk in word_tokens: - word_index = tokenized_text.index(word_tk) - word_embedding = list_token_embeddings[word_index] - word_embedding_all.append(word_embedding) - word_embedding_mean = np.array(word_embedding_all).mean(axis=0) - return word_embedding_mean - except: - return ['None'] \ No newline at end of file diff --git a/spaces/Shrey-Patel/Image-Searcher/README.md b/spaces/Shrey-Patel/Image-Searcher/README.md deleted file mode 100644 index 4ed648c272569ef02bfde69fbb72188701b851a1..0000000000000000000000000000000000000000 --- a/spaces/Shrey-Patel/Image-Searcher/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Image Searcher -emoji: 👀 -colorFrom: indigo -colorTo: yellow -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Sohaibahmad/AIdetector/README.md b/spaces/Sohaibahmad/AIdetector/README.md deleted file mode 100644 index db7ccfca5826e148cb146951c3e1b91af646bafc..0000000000000000000000000000000000000000 --- a/spaces/Sohaibahmad/AIdetector/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AIdetector -emoji: 🦀 -colorFrom: yellow -colorTo: blue -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git 
a/spaces/SorbonneUniversity/tone/app.py b/spaces/SorbonneUniversity/tone/app.py deleted file mode 100644 index eb138c34d6bc28bc3cb1c15aa556996d72976019..0000000000000000000000000000000000000000 --- a/spaces/SorbonneUniversity/tone/app.py +++ /dev/null @@ -1,21 +0,0 @@ -import numpy as np -import gradio as gr - -def generate_tone(note, octave, duration): - sampling_rate = 48000 - a4_freq, tones_from_a4 = 440, 12 * (octave - 4) + (note - 9) - frequency = a4_freq * 2 ** (tones_from_a4 / 12) - audio = np.linspace(0, int(duration), int(duration) * sampling_rate) - audio = (20000 * np.sin(audio * (2 * np.pi * frequency))).astype(np.int16) - return sampling_rate, audio - -gr.Interface( - generate_tone, - [ - gr.inputs.Dropdown(["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"], type="index"), - gr.inputs.Slider(4, 6, step=1), - gr.inputs.Textbox(type="number", default=1, label="Duration in seconds"), - ], - "audio", - title="Generate a Musical Tone!" -).launch() \ No newline at end of file diff --git a/spaces/Sresti/sharma/app.py b/spaces/Sresti/sharma/app.py deleted file mode 100644 index 670c5307f179259cc26b41f2865bca5554c591f0..0000000000000000000000000000000000000000 --- a/spaces/Sresti/sharma/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """As an adventurous and globetrotting college student, you're constantly on the lookout for new cultures, experiences, and breathtaking landscapes. You've visited numerous countries, immersing yourself in local traditions, and you're always eager to swap travel stories and offer tips on exciting destinations. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. diff --git a/spaces/Stearns/Soar/README.md b/spaces/Stearns/Soar/README.md deleted file mode 100644 index ae35e3d3d8fe7cb3b202def81645bb7bc2169424..0000000000000000000000000000000000000000 --- a/spaces/Stearns/Soar/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Soar -emoji: 🦅 -colorFrom: blue -colorTo: indigo -sdk: docker -pinned: false -license: bsd ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SuYuanS/AudioCraft_Plus/docs/ENCODEC.md b/spaces/SuYuanS/AudioCraft_Plus/docs/ENCODEC.md deleted file mode 100644 index efc2bcc7ec50190b907c887b920b70fd799c6953..0000000000000000000000000000000000000000 --- a/spaces/SuYuanS/AudioCraft_Plus/docs/ENCODEC.md +++ /dev/null @@ -1,179 +0,0 @@ -# EnCodec: High Fidelity Neural Audio Compression - -AudioCraft provides the training code for EnCodec, a state-of-the-art deep learning -based audio codec supporting both mono stereo audio, presented in the -[High Fidelity Neural Audio Compression][arxiv] paper. 
-Check out our [sample page][encodec_samples]. - -## Original EnCodec models - -The EnCodec models presented in High Fidelity Neural Audio Compression can be accessed -and used with the [EnCodec repository](https://github.com/facebookresearch/encodec). - -**Note**: We do not guarantee compatibility between the AudioCraft and EnCodec codebases -and released checkpoints at this stage. - - -## Installation - -Please follow the AudioCraft installation instructions from the [README](../README.md). - - -## Training - -The [CompressionSolver](../audiocraft/solvers/compression.py) implements the audio reconstruction -task to train an EnCodec model. Specifically, it trains an encoder-decoder with a quantization -bottleneck - a SEANet encoder-decoder with Residual Vector Quantization bottleneck for EnCodec - -using a combination of objective and perceptual losses in the forms of discriminators. - -The default configuration matches a causal EnCodec training with at a single bandwidth. - -### Example configuration and grids - -We provide sample configuration and grids for training EnCodec models. - -The compression configuration are defined in -[config/solver/compression](../config/solver/compression). - -The example grids are available at -[audiocraft/grids/compression](../audiocraft/grids/compression). - -```shell -# base causal encodec on monophonic audio sampled at 24 khz -dora grid compression.encodec_base_24khz -# encodec model used for MusicGen on monophonic audio sampled at 32 khz -dora grid compression.encodec_musicgen_32khz -``` - -### Training and valid stages - -The model is trained using a combination of objective and perceptual losses. -More specifically, EnCodec is trained with the MS-STFT discriminator along with -objective losses through the use of a loss balancer to effectively weight -the different losses, in an intuitive manner. - -### Evaluation stage - -Evaluations metrics for audio generation: -* SI-SNR: Scale-Invariant Signal-to-Noise Ratio. -* ViSQOL: Virtual Speech Quality Objective Listener. - -Note: Path to the ViSQOL binary (compiled with bazel) needs to be provided in -order to run the ViSQOL metric on the reference and degraded signals. -The metric is disabled by default. -Please refer to the [metrics documentation](../METRICS.md) to learn more. - -### Generation stage - -The generation stage consists in generating the reconstructed audio from samples -with the current model. The number of samples generated and the batch size used are -controlled by the `dataset.generate` configuration. The output path and audio formats -are defined in the generate stage configuration. - -```shell -# generate samples every 5 epoch -dora run solver=compression/encodec_base_24khz generate.every=5 -# run with a different dset -dora run solver=compression/encodec_base_24khz generate.path= -# limit the number of samples or use a different batch size -dora grid solver=compression/encodec_base_24khz dataset.generate.num_samples=10 dataset.generate.batch_size=4 -``` - -### Playing with the model - -Once you have a model trained, it is possible to get the entire solver, or just -the trained model with the following functions: - -```python -from audiocraft.solvers import CompressionSolver - -# If you trained a custom model with signature SIG. -model = CompressionSolver.model_from_checkpoint('//sig/SIG') -# If you want to get one of the pretrained models with the `//pretrained/` prefix. 
-model = CompressionSolver.model_from_checkpoint('//pretrained/facebook/encodec_32khz') -# Or load from a custom checkpoint path -model = CompressionSolver.model_from_checkpoint('/my_checkpoints/foo/bar/checkpoint.th') - - -# If you only want to use a pretrained model, you can also directly get it -# from the CompressionModel base model class. -from audiocraft.models import CompressionModel - -# Here do not put the `//pretrained/` prefix! -model = CompressionModel.get_pretrained('facebook/encodec_32khz') -model = CompressionModel.get_pretrained('dac_44khz') - -# Finally, you can also retrieve the full Solver object, with its dataloader etc. -from audiocraft import train -from pathlib import Path -import logging -import os -import sys - -# uncomment the following line if you want some detailed logs when loading a Solver. -logging.basicConfig(stream=sys.stderr, level=logging.INFO) -# You must always run the following function from the root directory. -os.chdir(Path(train.__file__).parent.parent) - - -# You can also get the full solver (only for your own experiments). -# You can provide some overrides to the parameters to make things more convenient. -solver = train.get_solver_from_sig('SIG', {'device': 'cpu', 'dataset': {'batch_size': 8}}) -solver.model -solver.dataloaders -``` - -### Importing / Exporting models - -At the moment we do not have a definitive workflow for exporting EnCodec models, for -instance to Hugging Face (HF). We are working on supporting automatic convertion between -AudioCraft and Hugging Face implementations. - -We still have some support for fine tuning an EnCodec model coming from HF in AudioCraft, -using for instance `continue_from=//pretrained/facebook/encodec_32k`. - -An AudioCraft checkpoint can be exported in a more compact format (excluding the optimizer etc.) -using `audiocraft.utils.export.export_encodec`. For instance, you could run - -```python -from audiocraft.utils import export -from audiocraft import train -xp = train.main.get_xp_from_sig('SIG') -export.export_encodec( - xp.folder / 'checkpoint.th', - '/checkpoints/my_audio_lm/compression_state_dict.bin') - - -from audiocraft.models import CompressionModel -model = CompressionModel.get_pretrained('/checkpoints/my_audio_lm/compression_state_dict.bin') - -from audiocraft.solvers import CompressionSolver -# The two are strictly equivalent, but this function supports also loading from non already exported models. -model = CompressionSolver.model_from_checkpoint('//pretrained//checkpoints/my_audio_lm/compression_state_dict.bin') -``` - -We will see then how to use this model as a tokenizer for MusicGen/Audio gen in the -[MusicGen documentation](./MUSICGEN.md). - -### Learn more - -Learn more about AudioCraft training pipelines in the [dedicated section](./TRAINING.md). - - -## Citation -``` -@article{defossez2022highfi, - title={High Fidelity Neural Audio Compression}, - author={Défossez, Alexandre and Copet, Jade and Synnaeve, Gabriel and Adi, Yossi}, - journal={arXiv preprint arXiv:2210.13438}, - year={2022} -} -``` - - -## License - -See license information in the [README](../README.md). 
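-For a concrete reference point on the SI-SNR metric used during evaluation, a minimal standalone sketch is given below. It is illustrative only: it assumes waveforms laid out as `[..., time]` tensors and is not the implementation used by the solver.
-
-```python
-import torch
-
-
-def si_snr(estimate: torch.Tensor, reference: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
-    """Scale-Invariant SNR in dB, computed over the last (time) dimension."""
-    # Remove any DC offset so the metric is invariant to constant shifts.
-    estimate = estimate - estimate.mean(dim=-1, keepdim=True)
-    reference = reference - reference.mean(dim=-1, keepdim=True)
-    # Project the estimate onto the reference to isolate the "target" component.
-    dot = (estimate * reference).sum(dim=-1, keepdim=True)
-    energy = reference.pow(2).sum(dim=-1, keepdim=True) + eps
-    target = (dot / energy) * reference
-    noise = estimate - target
-    ratio = target.pow(2).sum(dim=-1) / (noise.pow(2).sum(dim=-1) + eps)
-    return 10 * torch.log10(ratio + eps)
-```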
- -[arxiv]: https://arxiv.org/abs/2210.13438 -[encodec_samples]: https://ai.honu.io/papers/encodec/samples.html diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/adapter/components.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/adapter/components.py deleted file mode 100644 index cc88d14160a62ae7be4793964a638f573375c475..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/adapter/components.py +++ /dev/null @@ -1,183 +0,0 @@ -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See LICENSE in the project root -# for license information. - -import functools - -from debugpy.common import json, log, messaging, util - - -ACCEPT_CONNECTIONS_TIMEOUT = 10 - - -class ComponentNotAvailable(Exception): - def __init__(self, type): - super().__init__(f"{type.__name__} is not available") - - -class Component(util.Observable): - """A component managed by a debug adapter: client, launcher, or debug server. - - Every component belongs to a Session, which is used for synchronization and - shared data. - - Every component has its own message channel, and provides message handlers for - that channel. All handlers should be decorated with @Component.message_handler, - which ensures that Session is locked for the duration of the handler. Thus, only - one handler is running at any given time across all components, unless the lock - is released explicitly or via Session.wait_for(). - - Components report changes to their attributes to Session, allowing one component - to wait_for() a change caused by another component. - """ - - def __init__(self, session, stream=None, channel=None): - assert (stream is None) ^ (channel is None) - - try: - lock_held = session.lock.acquire(blocking=False) - assert lock_held, "__init__ of a Component subclass must lock its Session" - finally: - session.lock.release() - - super().__init__() - - self.session = session - - if channel is None: - stream.name = str(self) - channel = messaging.JsonMessageChannel(stream, self) - channel.start() - else: - channel.name = channel.stream.name = str(self) - channel.handlers = self - self.channel = channel - self.is_connected = True - - # Do this last to avoid triggering useless notifications for assignments above. - self.observers += [lambda *_: self.session.notify_changed()] - - def __str__(self): - return f"{type(self).__name__}[{self.session.id}]" - - @property - def client(self): - return self.session.client - - @property - def launcher(self): - return self.session.launcher - - @property - def server(self): - return self.session.server - - def wait_for(self, *args, **kwargs): - return self.session.wait_for(*args, **kwargs) - - @staticmethod - def message_handler(f): - """Applied to a message handler to automatically lock and unlock the session - for its duration, and to validate the session state. - - If the handler raises ComponentNotAvailable or JsonIOError, converts it to - Message.cant_handle(). 
- """ - - @functools.wraps(f) - def lock_and_handle(self, message): - try: - with self.session: - return f(self, message) - except ComponentNotAvailable as exc: - raise message.cant_handle("{0}", exc, silent=True) - except messaging.MessageHandlingError as exc: - if exc.cause is message: - raise - else: - exc.propagate(message) - except messaging.JsonIOError as exc: - raise message.cant_handle( - "{0} disconnected unexpectedly", exc.stream.name, silent=True - ) - - return lock_and_handle - - def disconnect(self): - with self.session: - self.is_connected = False - self.session.finalize("{0} has disconnected".format(self)) - - -def missing(session, type): - class Missing(object): - """A dummy component that raises ComponentNotAvailable whenever some - attribute is accessed on it. - """ - - __getattr__ = __setattr__ = lambda self, *_: report() - __bool__ = __nonzero__ = lambda self: False - - def report(): - try: - raise ComponentNotAvailable(type) - except Exception as exc: - log.reraise_exception("{0} in {1}", exc, session) - - return Missing() - - -class Capabilities(dict): - """A collection of feature flags for a component. Corresponds to JSON properties - in the DAP "initialize" request or response, other than those that identify the - party. - """ - - PROPERTIES = {} - """JSON property names and default values for the the capabilities represented - by instances of this class. Keys are names, and values are either default values - or validators. - - If the value is callable, it must be a JSON validator; see debugpy.common.json for - details. If the value is not callable, it is as if json.default(value) validator - was used instead. - """ - - def __init__(self, component, message): - """Parses an "initialize" request or response and extracts the feature flags. - - For every "X" in self.PROPERTIES, sets self["X"] to the corresponding value - from message.payload if it's present there, or to the default value otherwise. - """ - - assert message.is_request("initialize") or message.is_response("initialize") - - self.component = component - - payload = message.payload - for name, validate in self.PROPERTIES.items(): - value = payload.get(name, ()) - if not callable(validate): - validate = json.default(validate) - - try: - value = validate(value) - except Exception as exc: - raise message.isnt_valid("{0} {1}", json.repr(name), exc) - - assert ( - value != () - ), f"{validate} must provide a default value for missing properties." - self[name] = value - - log.debug("{0}", self) - - def __repr__(self): - return f"{type(self).__name__}: {json.repr(dict(self))}" - - def require(self, *keys): - for key in keys: - if not self[key]: - raise messaging.MessageHandlingError( - f"{self.component} does not have capability {json.repr(key)}", - ) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/launcher/winapi.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/launcher/winapi.py deleted file mode 100644 index a93dbc70af24eb46d6ea7e59cae71dd726f7f196..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/launcher/winapi.py +++ /dev/null @@ -1,104 +0,0 @@ -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See LICENSE in the project root -# for license information. 
- -import ctypes -from ctypes.wintypes import BOOL, DWORD, HANDLE, LARGE_INTEGER, LPCSTR, UINT - -from debugpy.common import log - - -JOBOBJECTCLASS = ctypes.c_int -LPDWORD = ctypes.POINTER(DWORD) -LPVOID = ctypes.c_void_p -SIZE_T = ctypes.c_size_t -ULONGLONG = ctypes.c_ulonglong - - -class IO_COUNTERS(ctypes.Structure): - _fields_ = [ - ("ReadOperationCount", ULONGLONG), - ("WriteOperationCount", ULONGLONG), - ("OtherOperationCount", ULONGLONG), - ("ReadTransferCount", ULONGLONG), - ("WriteTransferCount", ULONGLONG), - ("OtherTransferCount", ULONGLONG), - ] - - -class JOBOBJECT_BASIC_LIMIT_INFORMATION(ctypes.Structure): - _fields_ = [ - ("PerProcessUserTimeLimit", LARGE_INTEGER), - ("PerJobUserTimeLimit", LARGE_INTEGER), - ("LimitFlags", DWORD), - ("MinimumWorkingSetSize", SIZE_T), - ("MaximumWorkingSetSize", SIZE_T), - ("ActiveProcessLimit", DWORD), - ("Affinity", SIZE_T), - ("PriorityClass", DWORD), - ("SchedulingClass", DWORD), - ] - - -class JOBOBJECT_EXTENDED_LIMIT_INFORMATION(ctypes.Structure): - _fields_ = [ - ("BasicLimitInformation", JOBOBJECT_BASIC_LIMIT_INFORMATION), - ("IoInfo", IO_COUNTERS), - ("ProcessMemoryLimit", SIZE_T), - ("JobMemoryLimit", SIZE_T), - ("PeakProcessMemoryUsed", SIZE_T), - ("PeakJobMemoryUsed", SIZE_T), - ] - - -JobObjectExtendedLimitInformation = JOBOBJECTCLASS(9) - -JOB_OBJECT_LIMIT_BREAKAWAY_OK = 0x00000800 -JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE = 0x00002000 - -PROCESS_TERMINATE = 0x0001 -PROCESS_SET_QUOTA = 0x0100 - - -def _errcheck(is_error_result=(lambda result: not result)): - def impl(result, func, args): - if is_error_result(result): - log.debug("{0} returned {1}", func.__name__, result) - raise ctypes.WinError() - else: - return result - - return impl - - -kernel32 = ctypes.windll.kernel32 - -kernel32.AssignProcessToJobObject.errcheck = _errcheck() -kernel32.AssignProcessToJobObject.restype = BOOL -kernel32.AssignProcessToJobObject.argtypes = (HANDLE, HANDLE) - -kernel32.CreateJobObjectA.errcheck = _errcheck(lambda result: result == 0) -kernel32.CreateJobObjectA.restype = HANDLE -kernel32.CreateJobObjectA.argtypes = (LPVOID, LPCSTR) - -kernel32.OpenProcess.errcheck = _errcheck(lambda result: result == 0) -kernel32.OpenProcess.restype = HANDLE -kernel32.OpenProcess.argtypes = (DWORD, BOOL, DWORD) - -kernel32.QueryInformationJobObject.errcheck = _errcheck() -kernel32.QueryInformationJobObject.restype = BOOL -kernel32.QueryInformationJobObject.argtypes = ( - HANDLE, - JOBOBJECTCLASS, - LPVOID, - DWORD, - LPDWORD, -) - -kernel32.SetInformationJobObject.errcheck = _errcheck() -kernel32.SetInformationJobObject.restype = BOOL -kernel32.SetInformationJobObject.argtypes = (HANDLE, JOBOBJECTCLASS, LPVOID, DWORD) - -kernel32.TerminateJobObject.errcheck = _errcheck() -kernel32.TerminateJobObject.restype = BOOL -kernel32.TerminateJobObject.argtypes = (HANDLE, UINT) diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/evaluation/pascal_voc_evaluation.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/evaluation/pascal_voc_evaluation.py deleted file mode 100644 index b2963e5dc5b6ed471f0c37056b35a350ea4cf020..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/evaluation/pascal_voc_evaluation.py +++ /dev/null @@ -1,300 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. 
- -import logging -import numpy as np -import os -import tempfile -import xml.etree.ElementTree as ET -from collections import OrderedDict, defaultdict -from functools import lru_cache -import torch - -from annotator.oneformer.detectron2.data import MetadataCatalog -from annotator.oneformer.detectron2.utils import comm -from annotator.oneformer.detectron2.utils.file_io import PathManager - -from .evaluator import DatasetEvaluator - - -class PascalVOCDetectionEvaluator(DatasetEvaluator): - """ - Evaluate Pascal VOC style AP for Pascal VOC dataset. - It contains a synchronization, therefore has to be called from all ranks. - - Note that the concept of AP can be implemented in different ways and may not - produce identical results. This class mimics the implementation of the official - Pascal VOC Matlab API, and should produce similar but not identical results to the - official API. - """ - - def __init__(self, dataset_name): - """ - Args: - dataset_name (str): name of the dataset, e.g., "voc_2007_test" - """ - self._dataset_name = dataset_name - meta = MetadataCatalog.get(dataset_name) - - # Too many tiny files, download all to local for speed. - annotation_dir_local = PathManager.get_local_path( - os.path.join(meta.dirname, "Annotations/") - ) - self._anno_file_template = os.path.join(annotation_dir_local, "{}.xml") - self._image_set_path = os.path.join(meta.dirname, "ImageSets", "Main", meta.split + ".txt") - self._class_names = meta.thing_classes - assert meta.year in [2007, 2012], meta.year - self._is_2007 = meta.year == 2007 - self._cpu_device = torch.device("cpu") - self._logger = logging.getLogger(__name__) - - def reset(self): - self._predictions = defaultdict(list) # class name -> list of prediction strings - - def process(self, inputs, outputs): - for input, output in zip(inputs, outputs): - image_id = input["image_id"] - instances = output["instances"].to(self._cpu_device) - boxes = instances.pred_boxes.tensor.numpy() - scores = instances.scores.tolist() - classes = instances.pred_classes.tolist() - for box, score, cls in zip(boxes, scores, classes): - xmin, ymin, xmax, ymax = box - # The inverse of data loading logic in `datasets/pascal_voc.py` - xmin += 1 - ymin += 1 - self._predictions[cls].append( - f"{image_id} {score:.3f} {xmin:.1f} {ymin:.1f} {xmax:.1f} {ymax:.1f}" - ) - - def evaluate(self): - """ - Returns: - dict: has a key "segm", whose value is a dict of "AP", "AP50", and "AP75". - """ - all_predictions = comm.gather(self._predictions, dst=0) - if not comm.is_main_process(): - return - predictions = defaultdict(list) - for predictions_per_rank in all_predictions: - for clsid, lines in predictions_per_rank.items(): - predictions[clsid].extend(lines) - del all_predictions - - self._logger.info( - "Evaluating {} using {} metric. 
" - "Note that results do not use the official Matlab API.".format( - self._dataset_name, 2007 if self._is_2007 else 2012 - ) - ) - - with tempfile.TemporaryDirectory(prefix="pascal_voc_eval_") as dirname: - res_file_template = os.path.join(dirname, "{}.txt") - - aps = defaultdict(list) # iou -> ap per class - for cls_id, cls_name in enumerate(self._class_names): - lines = predictions.get(cls_id, [""]) - - with open(res_file_template.format(cls_name), "w") as f: - f.write("\n".join(lines)) - - for thresh in range(50, 100, 5): - rec, prec, ap = voc_eval( - res_file_template, - self._anno_file_template, - self._image_set_path, - cls_name, - ovthresh=thresh / 100.0, - use_07_metric=self._is_2007, - ) - aps[thresh].append(ap * 100) - - ret = OrderedDict() - mAP = {iou: np.mean(x) for iou, x in aps.items()} - ret["bbox"] = {"AP": np.mean(list(mAP.values())), "AP50": mAP[50], "AP75": mAP[75]} - return ret - - -############################################################################## -# -# Below code is modified from -# https://github.com/rbgirshick/py-faster-rcnn/blob/master/lib/datasets/voc_eval.py -# -------------------------------------------------------- -# Fast/er R-CNN -# Licensed under The MIT License [see LICENSE for details] -# Written by Bharath Hariharan -# -------------------------------------------------------- - -"""Python implementation of the PASCAL VOC devkit's AP evaluation code.""" - - -@lru_cache(maxsize=None) -def parse_rec(filename): - """Parse a PASCAL VOC xml file.""" - with PathManager.open(filename) as f: - tree = ET.parse(f) - objects = [] - for obj in tree.findall("object"): - obj_struct = {} - obj_struct["name"] = obj.find("name").text - obj_struct["pose"] = obj.find("pose").text - obj_struct["truncated"] = int(obj.find("truncated").text) - obj_struct["difficult"] = int(obj.find("difficult").text) - bbox = obj.find("bndbox") - obj_struct["bbox"] = [ - int(bbox.find("xmin").text), - int(bbox.find("ymin").text), - int(bbox.find("xmax").text), - int(bbox.find("ymax").text), - ] - objects.append(obj_struct) - - return objects - - -def voc_ap(rec, prec, use_07_metric=False): - """Compute VOC AP given precision and recall. If use_07_metric is true, uses - the VOC 07 11-point method (default:False). - """ - if use_07_metric: - # 11 point metric - ap = 0.0 - for t in np.arange(0.0, 1.1, 0.1): - if np.sum(rec >= t) == 0: - p = 0 - else: - p = np.max(prec[rec >= t]) - ap = ap + p / 11.0 - else: - # correct AP calculation - # first append sentinel values at the end - mrec = np.concatenate(([0.0], rec, [1.0])) - mpre = np.concatenate(([0.0], prec, [0.0])) - - # compute the precision envelope - for i in range(mpre.size - 1, 0, -1): - mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i]) - - # to calculate area under PR curve, look for points - # where X axis (recall) changes value - i = np.where(mrec[1:] != mrec[:-1])[0] - - # and sum (\Delta recall) * prec - ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) - return ap - - -def voc_eval(detpath, annopath, imagesetfile, classname, ovthresh=0.5, use_07_metric=False): - """rec, prec, ap = voc_eval(detpath, - annopath, - imagesetfile, - classname, - [ovthresh], - [use_07_metric]) - - Top level function that does the PASCAL VOC evaluation. - - detpath: Path to detections - detpath.format(classname) should produce the detection results file. - annopath: Path to annotations - annopath.format(imagename) should be the xml annotations file. - imagesetfile: Text file containing the list of images, one image per line. 
- classname: Category name (duh) - [ovthresh]: Overlap threshold (default = 0.5) - [use_07_metric]: Whether to use VOC07's 11 point AP computation - (default False) - """ - # assumes detections are in detpath.format(classname) - # assumes annotations are in annopath.format(imagename) - # assumes imagesetfile is a text file with each line an image name - - # first load gt - # read list of images - with PathManager.open(imagesetfile, "r") as f: - lines = f.readlines() - imagenames = [x.strip() for x in lines] - - # load annots - recs = {} - for imagename in imagenames: - recs[imagename] = parse_rec(annopath.format(imagename)) - - # extract gt objects for this class - class_recs = {} - npos = 0 - for imagename in imagenames: - R = [obj for obj in recs[imagename] if obj["name"] == classname] - bbox = np.array([x["bbox"] for x in R]) - difficult = np.array([x["difficult"] for x in R]).astype(bool) - # difficult = np.array([False for x in R]).astype(bool) # treat all "difficult" as GT - det = [False] * len(R) - npos = npos + sum(~difficult) - class_recs[imagename] = {"bbox": bbox, "difficult": difficult, "det": det} - - # read dets - detfile = detpath.format(classname) - with open(detfile, "r") as f: - lines = f.readlines() - - splitlines = [x.strip().split(" ") for x in lines] - image_ids = [x[0] for x in splitlines] - confidence = np.array([float(x[1]) for x in splitlines]) - BB = np.array([[float(z) for z in x[2:]] for x in splitlines]).reshape(-1, 4) - - # sort by confidence - sorted_ind = np.argsort(-confidence) - BB = BB[sorted_ind, :] - image_ids = [image_ids[x] for x in sorted_ind] - - # go down dets and mark TPs and FPs - nd = len(image_ids) - tp = np.zeros(nd) - fp = np.zeros(nd) - for d in range(nd): - R = class_recs[image_ids[d]] - bb = BB[d, :].astype(float) - ovmax = -np.inf - BBGT = R["bbox"].astype(float) - - if BBGT.size > 0: - # compute overlaps - # intersection - ixmin = np.maximum(BBGT[:, 0], bb[0]) - iymin = np.maximum(BBGT[:, 1], bb[1]) - ixmax = np.minimum(BBGT[:, 2], bb[2]) - iymax = np.minimum(BBGT[:, 3], bb[3]) - iw = np.maximum(ixmax - ixmin + 1.0, 0.0) - ih = np.maximum(iymax - iymin + 1.0, 0.0) - inters = iw * ih - - # union - uni = ( - (bb[2] - bb[0] + 1.0) * (bb[3] - bb[1] + 1.0) - + (BBGT[:, 2] - BBGT[:, 0] + 1.0) * (BBGT[:, 3] - BBGT[:, 1] + 1.0) - - inters - ) - - overlaps = inters / uni - ovmax = np.max(overlaps) - jmax = np.argmax(overlaps) - - if ovmax > ovthresh: - if not R["difficult"][jmax]: - if not R["det"][jmax]: - tp[d] = 1.0 - R["det"][jmax] = 1 - else: - fp[d] = 1.0 - else: - fp[d] = 1.0 - - # compute precision recall - fp = np.cumsum(fp) - tp = np.cumsum(tp) - rec = tp / float(npos) - # avoid divide by zero in case the first detection matches a difficult - # ground truth - prec = tp / np.maximum(tp + fp, np.finfo(np.float64).eps) - ap = voc_ap(rec, prec, use_07_metric) - - return rec, prec, ap diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/projects/__init__.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/projects/__init__.py deleted file mode 100644 index b2d0540b93ebbad78d6ff2cc0adc0fe8375816c2..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/projects/__init__.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import importlib.abc -import importlib.util -from pathlib import Path - -__all__ = [] - -_PROJECTS = { - "point_rend": "PointRend", - "deeplab": "DeepLab", - "panoptic_deeplab": "Panoptic-DeepLab", -} -_PROJECT_ROOT = Path(__file__).resolve().parent.parent.parent / "projects" - -if _PROJECT_ROOT.is_dir(): - # This is true only for in-place installation (pip install -e, setup.py develop), - # where setup(package_dir=) does not work: https://github.com/pypa/setuptools/issues/230 - - class _D2ProjectsFinder(importlib.abc.MetaPathFinder): - def find_spec(self, name, path, target=None): - if not name.startswith("detectron2.projects."): - return - project_name = name.split(".")[-1] - project_dir = _PROJECTS.get(project_name) - if not project_dir: - return - target_file = _PROJECT_ROOT / f"{project_dir}/{project_name}/__init__.py" - if not target_file.is_file(): - return - return importlib.util.spec_from_file_location(name, target_file) - - import sys - - sys.meta_path.append(_D2ProjectsFinder()) diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/structures/__init__.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/structures/__init__.py deleted file mode 100644 index c2942fc58e3fce82e690eafc2de0204816e94cc2..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/structures/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from .boxes import Boxes, BoxMode, pairwise_iou, pairwise_ioa, pairwise_point_box_distance -from .image_list import ImageList - -from .instances import Instances -from .keypoints import Keypoints, heatmaps_to_keypoints -from .masks import BitMasks, PolygonMasks, polygons_to_bitmask, ROIMasks -from .rotated_boxes import RotatedBoxes -from .rotated_boxes import pairwise_iou as pairwise_iou_rotated - -__all__ = [k for k in globals().keys() if not k.startswith("_")] - - -from annotator.oneformer.detectron2.utils.env import fixup_module_metadata - -fixup_module_metadata(__name__, globals(), __all__) -del fixup_module_metadata diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/data/dataset_mappers/dataset_mapper.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/data/dataset_mappers/dataset_mapper.py deleted file mode 100644 index 710c81bee298e9e6b21a93742d09e720024ceeff..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/data/dataset_mappers/dataset_mapper.py +++ /dev/null @@ -1,203 +0,0 @@ -# ------------------------------------------------------------------------------ -# Reference: https://github.com/facebookresearch/detectron2/blob/main/detectron2/data/dataset_mapper.py -# Modified by Jitesh Jain (https://github.com/praeclarumjj3) -# ------------------------------------------------------------------------------ - -import copy -import logging -import numpy as np -from typing import List, Optional, Union -import torch - -from annotator.oneformer.detectron2.config import configurable - -from annotator.oneformer.detectron2.data import detection_utils as utils -from annotator.oneformer.detectron2.data import transforms as T -from annotator.oneformer.oneformer.data.tokenizer import SimpleTokenizer, Tokenize - -__all__ = ["DatasetMapper"] - - -class DatasetMapper: - """ - A callable which takes a dataset dict in Detectron2 Dataset format, - and map it into a format used by the model. 
- - This is the default callable to be used to map your dataset dict into training data. - You may need to follow it to implement your own one for customized logic, - such as a different way to read or transform images. - See :doc:`/tutorials/data_loading` for details. - - The callable currently does the following: - - 1. Read the image from "file_name" - 2. Applies cropping/geometric transforms to the image and annotations - 3. Prepare data and annotations to Tensor and :class:`Instances` - """ - - @configurable - def __init__( - self, - is_train: bool, - *, - augmentations: List[Union[T.Augmentation, T.Transform]], - image_format: str, - task_seq_len: int, - task: str = "panoptic", - use_instance_mask: bool = False, - use_keypoint: bool = False, - instance_mask_format: str = "polygon", - keypoint_hflip_indices: Optional[np.ndarray] = None, - precomputed_proposal_topk: Optional[int] = None, - recompute_boxes: bool = False, - ): - """ - NOTE: this interface is experimental. - - Args: - is_train: whether it's used in training or inference - augmentations: a list of augmentations or deterministic transforms to apply - image_format: an image format supported by :func:`detection_utils.read_image`. - use_instance_mask: whether to process instance segmentation annotations, if available - use_keypoint: whether to process keypoint annotations if available - instance_mask_format: one of "polygon" or "bitmask". Process instance segmentation - masks into this format. - keypoint_hflip_indices: see :func:`detection_utils.create_keypoint_hflip_indices` - precomputed_proposal_topk: if given, will load pre-computed - proposals from dataset_dict and keep the top k proposals for each image. - recompute_boxes: whether to overwrite bounding box annotations - by computing tight bounding boxes from instance mask annotations. 
- """ - if recompute_boxes: - assert use_instance_mask, "recompute_boxes requires instance masks" - # fmt: off - self.is_train = is_train - self.augmentations = T.AugmentationList(augmentations) - self.image_format = image_format - self.use_instance_mask = use_instance_mask - self.instance_mask_format = instance_mask_format - self.use_keypoint = use_keypoint - self.keypoint_hflip_indices = keypoint_hflip_indices - self.proposal_topk = precomputed_proposal_topk - self.recompute_boxes = recompute_boxes - self.task_tokenizer = Tokenize(SimpleTokenizer(), max_seq_len=task_seq_len) - self.task = task - assert self.task in ["panoptic", "semantic", "instance"] - - # fmt: on - logger = logging.getLogger(__name__) - mode = "training" if is_train else "inference" - logger.info(f"[DatasetMapper] Augmentations used in {mode}: {augmentations}") - - @classmethod - def from_config(cls, cfg, is_train: bool = True): - augs = utils.build_augmentation(cfg, is_train) - if cfg.INPUT.CROP.ENABLED and is_train: - augs.insert(0, T.RandomCrop(cfg.INPUT.CROP.TYPE, cfg.INPUT.CROP.SIZE)) - recompute_boxes = cfg.MODEL.MASK_ON - else: - recompute_boxes = False - - ret = { - "is_train": is_train, - "augmentations": augs, - "image_format": cfg.INPUT.FORMAT, - "use_instance_mask": cfg.MODEL.MASK_ON, - "instance_mask_format": cfg.INPUT.MASK_FORMAT, - "use_keypoint": cfg.MODEL.KEYPOINT_ON, - "task_seq_len": cfg.INPUT.TASK_SEQ_LEN, - "recompute_boxes": recompute_boxes, - "task": cfg.MODEL.TEST.TASK, - } - - if cfg.MODEL.KEYPOINT_ON: - ret["keypoint_hflip_indices"] = utils.create_keypoint_hflip_indices(cfg.DATASETS.TRAIN) - - if cfg.MODEL.LOAD_PROPOSALS: - ret["precomputed_proposal_topk"] = ( - cfg.DATASETS.PRECOMPUTED_PROPOSAL_TOPK_TRAIN - if is_train - else cfg.DATASETS.PRECOMPUTED_PROPOSAL_TOPK_TEST - ) - return ret - - def _transform_annotations(self, dataset_dict, transforms, image_shape): - # USER: Modify this if you want to keep them for some reason. - for anno in dataset_dict["annotations"]: - if not self.use_instance_mask: - anno.pop("segmentation", None) - if not self.use_keypoint: - anno.pop("keypoints", None) - - # USER: Implement additional transformations if you have other types of data - annos = [ - utils.transform_instance_annotations( - obj, transforms, image_shape, keypoint_hflip_indices=self.keypoint_hflip_indices - ) - for obj in dataset_dict.pop("annotations") - if obj.get("iscrowd", 0) == 0 - ] - instances = utils.annotations_to_instances( - annos, image_shape, mask_format=self.instance_mask_format - ) - - # After transforms such as cropping are applied, the bounding box may no longer - # tightly bound the object. As an example, imagine a triangle object - # [(0,0), (2,0), (0,2)] cropped by a box [(1,0),(2,2)] (XYXY format). The tight - # bounding box of the cropped triangle should be [(1,0),(2,1)], which is not equal to - # the intersection of original bounding box and the cropping box. - if self.recompute_boxes: - instances.gt_boxes = instances.gt_masks.get_bounding_boxes() - dataset_dict["instances"] = utils.filter_empty_instances(instances) - - def __call__(self, dataset_dict): - """ - Args: - dataset_dict (dict): Metadata of one image, in Detectron2 Dataset format. 
- - Returns: - dict: a format that builtin models in detectron2 accept - """ - dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below - # USER: Write your own image loading if it's not from a file - image = utils.read_image(dataset_dict["file_name"], format=self.image_format) - utils.check_image_size(dataset_dict, image) - - task = f"The task is {self.task}" - dataset_dict["task"] = task - - # USER: Remove if you don't do semantic/panoptic segmentation. - if "sem_seg_file_name" in dataset_dict: - sem_seg_gt = utils.read_image(dataset_dict.pop("sem_seg_file_name"), "L").squeeze(2) - else: - sem_seg_gt = None - - aug_input = T.AugInput(image, sem_seg=sem_seg_gt) - transforms = self.augmentations(aug_input) - image, sem_seg_gt = aug_input.image, aug_input.sem_seg - - image_shape = image.shape[:2] # h, w - # Pytorch's dataloader is efficient on torch.Tensor due to shared-memory, - # but not efficient on large generic data structures due to the use of pickle & mp.Queue. - # Therefore it's important to use torch.Tensor. - dataset_dict["image"] = torch.as_tensor(np.ascontiguousarray(image.transpose(2, 0, 1))) - if sem_seg_gt is not None: - dataset_dict["sem_seg"] = torch.as_tensor(sem_seg_gt.astype("long")) - - # USER: Remove if you don't use pre-computed proposals. - # Most users would not need this feature. - if self.proposal_topk is not None: - utils.transform_proposals( - dataset_dict, image_shape, transforms, proposal_topk=self.proposal_topk - ) - - if not self.is_train: - # USER: Modify this if you want to keep them for some reason. - dataset_dict.pop("annotations", None) - dataset_dict.pop("sem_seg_file_name", None) - return dataset_dict - - if "annotations" in dataset_dict: - self._transform_annotations(dataset_dict, transforms, image_shape) - - return dataset_dict \ No newline at end of file diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/data/tokenizer.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/data/tokenizer.py deleted file mode 100644 index 05d4c29c2d1ed03e5748e7346eeea494a2cd9144..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/data/tokenizer.py +++ /dev/null @@ -1,192 +0,0 @@ -# ------------------------------------------------------------------------- -# MIT License -# -# Copyright (c) 2021 OpenAI -# -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: -# -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. 
-# -# Modified by Jiarui Xu -# ------------------------------------------------------------------------- - -import gzip -import html -import os -from functools import lru_cache - -import ftfy -import regex as re -import torch - - -@lru_cache() -def default_bpe(): - return os.path.join(os.path.dirname(os.path.abspath(__file__)), 'bpe_simple_vocab_16e6.txt.gz') - -@lru_cache() -def bytes_to_unicode(): - """Returns list of utf-8 byte and a corresponding list of unicode strings. - - The reversible bpe codes work on unicode strings. This means you need a large # of unicode characters in your vocab - if you want to avoid UNKs. When you're at something like a 10B token dataset you end up needing around 5K for decent - coverage. This is a significant percentage of your normal, say, 32K bpe vocab. To avoid that, we want lookup tables - between utf-8 bytes and unicode strings. And avoids mapping to whitespace/control characters the bpe code barfs on. - """ - bs = list(range(ord('!'), ord('~') + 1)) + list(range(ord('¡'), ord('¬') + 1)) + list(range(ord('®'), ord('ÿ') + 1)) - cs = bs[:] - n = 0 - for b in range(2**8): - if b not in bs: - bs.append(b) - cs.append(2**8 + n) - n += 1 - cs = [chr(n) for n in cs] - return dict(zip(bs, cs)) - - -def get_pairs(word): - """Return set of symbol pairs in a word. - - Word is represented as tuple of symbols (symbols being variable-length strings). - """ - pairs = set() - prev_char = word[0] - for char in word[1:]: - pairs.add((prev_char, char)) - prev_char = char - return pairs - - -def basic_clean(text): - text = ftfy.fix_text(text) - text = html.unescape(html.unescape(text)) - return text.strip() - - -def whitespace_clean(text): - text = re.sub(r'\s+', ' ', text) - text = text.strip() - return text - -class Tokenize: - - def __init__(self, tokenizer, max_seq_len=77, truncate=True): - self.tokenizer = tokenizer - self.max_seq_len = max_seq_len - self.truncate = truncate - - def __call__(self, texts): - expanded_dim = False - if isinstance(texts, str): - texts = [texts] - expanded_dim = True - - sot_token = self.tokenizer.encoder['<|startoftext|>'] - eot_token = self.tokenizer.encoder['<|endoftext|>'] - all_tokens = [[sot_token] + self.tokenizer.encode(text) + [eot_token] for text in texts] - result = torch.zeros(len(all_tokens), self.max_seq_len, dtype=torch.long) - - for i, tokens in enumerate(all_tokens): - if len(tokens) > self.max_seq_len: - if self.truncate: - tokens = tokens[:self.max_seq_len] - tokens[-1] = eot_token - else: - raise RuntimeError(f'Input {texts[i]} is too long for context length {self.max_seq_len}') - result[i, :len(tokens)] = torch.tensor(tokens) - - if expanded_dim: - return result[0] - - return result - - -class SimpleTokenizer(object): - - def __init__(self, bpe_path: str = default_bpe()): - self.byte_encoder = bytes_to_unicode() - self.byte_decoder = {v: k for k, v in self.byte_encoder.items()} - merges = gzip.open(bpe_path).read().decode('utf-8').split('\n') - merges = merges[1:49152 - 256 - 2 + 1] - merges = [tuple(merge.split()) for merge in merges] - vocab = list(bytes_to_unicode().values()) - vocab = vocab + [v + '' for v in vocab] - for merge in merges: - vocab.append(''.join(merge)) - vocab.extend(['<|startoftext|>', '<|endoftext|>']) - self.encoder = dict(zip(vocab, range(len(vocab)))) - self.decoder = {v: k for k, v in self.encoder.items()} - self.bpe_ranks = dict(zip(merges, range(len(merges)))) - self.cache = {'<|startoftext|>': '<|startoftext|>', '<|endoftext|>': '<|endoftext|>'} - self.pat = re.compile( - 
r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", - re.IGNORECASE) - - def bpe(self, token): - if token in self.cache: - return self.cache[token] - word = tuple(token[:-1]) + (token[-1] + '', ) - pairs = get_pairs(word) - - if not pairs: - return token + '' - - while True: - bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float('inf'))) - if bigram not in self.bpe_ranks: - break - first, second = bigram - new_word = [] - i = 0 - while i < len(word): - try: - j = word.index(first, i) - new_word.extend(word[i:j]) - i = j - except: # noqa: E722 - new_word.extend(word[i:]) - break - - if word[i] == first and i < len(word) - 1 and word[i + 1] == second: - new_word.append(first + second) - i += 2 - else: - new_word.append(word[i]) - i += 1 - new_word = tuple(new_word) - word = new_word - if len(word) == 1: - break - else: - pairs = get_pairs(word) - word = ' '.join(word) - self.cache[token] = word - return word - - def encode(self, text): - bpe_tokens = [] - text = whitespace_clean(basic_clean(text)).lower() - for token in re.findall(self.pat, text): - token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8')) - bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' ')) - return bpe_tokens - - def decode(self, tokens): - text = ''.join([self.decoder[token] for token in tokens]) - text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors='replace').replace('', ' ') - return text \ No newline at end of file diff --git a/spaces/Superlang/ImageProcessor/annotator/zoe/zoedepth/utils/geometry.py b/spaces/Superlang/ImageProcessor/annotator/zoe/zoedepth/utils/geometry.py deleted file mode 100644 index e3da8c75b5a8e39b4b58a4dcd827b84d79b9115c..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/zoe/zoedepth/utils/geometry.py +++ /dev/null @@ -1,98 +0,0 @@ -# MIT License - -# Copyright (c) 2022 Intelligent Systems Lab Org - -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: - -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. - -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. - -# File author: Shariq Farooq Bhat - -import numpy as np - -def get_intrinsics(H,W): - """ - Intrinsics for a pinhole camera model. - Assume fov of 55 degrees and central principal point. 
- """ - f = 0.5 * W / np.tan(0.5 * 55 * np.pi / 180.0) - cx = 0.5 * W - cy = 0.5 * H - return np.array([[f, 0, cx], - [0, f, cy], - [0, 0, 1]]) - -def depth_to_points(depth, R=None, t=None): - - K = get_intrinsics(depth.shape[1], depth.shape[2]) - Kinv = np.linalg.inv(K) - if R is None: - R = np.eye(3) - if t is None: - t = np.zeros(3) - - # M converts from your coordinate to PyTorch3D's coordinate system - M = np.eye(3) - M[0, 0] = -1.0 - M[1, 1] = -1.0 - - height, width = depth.shape[1:3] - - x = np.arange(width) - y = np.arange(height) - coord = np.stack(np.meshgrid(x, y), -1) - coord = np.concatenate((coord, np.ones_like(coord)[:, :, [0]]), -1) # z=1 - coord = coord.astype(np.float32) - # coord = torch.as_tensor(coord, dtype=torch.float32, device=device) - coord = coord[None] # bs, h, w, 3 - - D = depth[:, :, :, None, None] - # print(D.shape, Kinv[None, None, None, ...].shape, coord[:, :, :, :, None].shape ) - pts3D_1 = D * Kinv[None, None, None, ...] @ coord[:, :, :, :, None] - # pts3D_1 live in your coordinate system. Convert them to Py3D's - pts3D_1 = M[None, None, None, ...] @ pts3D_1 - # from reference to targe tviewpoint - pts3D_2 = R[None, None, None, ...] @ pts3D_1 + t[None, None, None, :, None] - # pts3D_2 = pts3D_1 - # depth_2 = pts3D_2[:, :, :, 2, :] # b,1,h,w - return pts3D_2[:, :, :, :3, 0][0] - - -def create_triangles(h, w, mask=None): - """ - Reference: https://github.com/google-research/google-research/blob/e96197de06613f1b027d20328e06d69829fa5a89/infinite_nature/render_utils.py#L68 - Creates mesh triangle indices from a given pixel grid size. - This function is not and need not be differentiable as triangle indices are - fixed. - Args: - h: (int) denoting the height of the image. - w: (int) denoting the width of the image. - Returns: - triangles: 2D numpy array of indices (int) with shape (2(W-1)(H-1) x 3) - """ - x, y = np.meshgrid(range(w - 1), range(h - 1)) - tl = y * w + x - tr = y * w + x + 1 - bl = (y + 1) * w + x - br = (y + 1) * w + x + 1 - triangles = np.array([tl, bl, tr, br, tr, bl]) - triangles = np.transpose(triangles, (1, 2, 0)).reshape( - ((w - 1) * (h - 1) * 2, 3)) - if mask is not None: - mask = mask.reshape(-1) - triangles = triangles[mask[triangles].all(1)] - return triangles diff --git a/spaces/Superying/vits-uma-genshin-honkai/text/cleaners.py b/spaces/Superying/vits-uma-genshin-honkai/text/cleaners.py deleted file mode 100644 index d26581deb399609163518054718ad80ecca5d934..0000000000000000000000000000000000000000 --- a/spaces/Superying/vits-uma-genshin-honkai/text/cleaners.py +++ /dev/null @@ -1,475 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - -import re -from unidecode import unidecode -import pyopenjtalk -from jamo import h2j, j2hcj -from pypinyin import lazy_pinyin, BOPOMOFO -import jieba, cn2an - - -# This is a list of Korean classifiers preceded by pure Korean numerals. 
-_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# Regular expression matching whitespace: -_whitespace_re = re.compile(r'\s+') - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile(r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile(r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (Latin alphabet, bopomofo) pairs: -_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', 'ㄟˉ'), - ('b', 'ㄅㄧˋ'), - ('c', 'ㄙㄧˉ'), - ('d', 'ㄉㄧˋ'), - ('e', 'ㄧˋ'), - ('f', 'ㄝˊㄈㄨˋ'), - ('g', 'ㄐㄧˋ'), - ('h', 'ㄝˇㄑㄩˋ'), - ('i', 'ㄞˋ'), - ('j', 'ㄐㄟˋ'), - ('k', 'ㄎㄟˋ'), - ('l', 'ㄝˊㄛˋ'), - ('m', 'ㄝˊㄇㄨˋ'), - ('n', 'ㄣˉ'), - ('o', 'ㄡˉ'), - ('p', 'ㄆㄧˉ'), - ('q', 'ㄎㄧㄡˉ'), - ('r', 'ㄚˋ'), - ('s', 'ㄝˊㄙˋ'), - ('t', 'ㄊㄧˋ'), - ('u', 'ㄧㄡˉ'), - ('v', 'ㄨㄧˉ'), - ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'), - ('x', 'ㄝˉㄎㄨˋㄙˋ'), - ('y', 'ㄨㄞˋ'), - ('z', 'ㄗㄟˋ') -]] - - -# List of (bopomofo, romaji) pairs: -_bopomofo_to_romaji = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'ʧ⁼'), - ('ㄑ', 'ʧʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ʦ`⁼'), - ('ㄔ', 'ʦ`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ʦ⁼'), - ('ㄘ', 'ʦʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'e'), - ('ㄞ', 'ai'), - ('ㄟ', 'ei'), - ('ㄠ', 'au'), - ('ㄡ', 'ou'), - ('ㄧㄢ', 'yeNN'), - ('ㄢ', 'aNN'), - ('ㄧㄣ', 'iNN'), - ('ㄣ', 'əNN'), - ('ㄤ', 'aNg'), - ('ㄧㄥ', 'iNg'), - ('ㄨㄥ', 'uNg'), - ('ㄩㄥ', 'yuNg'), - ('ㄥ', 'əNg'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', 
'↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def lowercase(text): - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, ' ', text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - if text!='': - text+=' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil','pau']: - text += phoneme.replace('ch','ʧ').replace('sh','ʃ').replace('cl','Q') - else: - continue - n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil','pau']: - a2_next=-1 - else: - a2_next = int(re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1 and a2 != n_moras: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i num_feats: - if return_capped: - X = X[:, 0:num_feats] - categorical_feats = [c for c in categorical_feats if c < num_feats] - modifications['feats_capped'] = True - else: - print('Too many features') - continue - if X.shape[0] == max_samples: - modifications['samples_capped'] = True - - if X.shape[0] < min_samples: - print(f'Too few samples left') - continue - - if len(np.unique(y)) > max_num_classes: - if return_capped: - X = X[y < np.unique(y)[10]] - y = y[y < np.unique(y)[10]] - modifications['classes_capped'] = True - else: - print(f'Too many classes') - continue - - datasets += [[entry['name'], X, y, categorical_feats, attribute_names, modifications]] - - return datasets, datalist - - -# Classification -valid_dids_classification = [13, 59, 4, 15, 40710, 43, 1498] -test_dids_classification = [973, 1596, 40981, 1468, 40984, 40975, 41163, 41147, 1111, 41164, 1169, 1486, 41143, 1461, 41167, 40668, 41146, 41169, 41027, 23517, 41165, 41161, 41159, 41138, 1590, 41166, 1464, 41168, 41150, 1489, 41142, 3, 12, 31, 54, 1067] -valid_large_classification = [ 943, 23512, 49, 838, 1131, 767, 1142, 748, 1112, - 1541, 384, 912, 1503, 796, 20, 30, 903, 4541, - 961, 805, 1000, 4135, 1442, 816, 1130, 906, 1511, - 184, 181, 137, 1452, 1481, 949, 449, 50, 913, - 1071, 831, 843, 9, 896, 1532, 311, 39, 451, - 463, 382, 778, 474, 737, 1162, 1538, 820, 188, - 452, 1156, 37, 957, 911, 1508, 1054, 745, 1220, - 763, 900, 25, 387, 38, 757, 1507, 396, 4153, - 806, 779, 746, 1037, 871, 717, 1480, 1010, 1016, - 981, 1547, 1002, 1126, 1459, 846, 837, 1042, 273, - 1524, 375, 1018, 1531, 1458, 6332, 1546, 1129, 679, - 389] - -open_cc_dids = [11, - 14, - 15, - 16, - 18, - 22, - 23, - 29, - 31, - 37, - 50, - 54, - 188, - 458, - 469, - 1049, - 1050, - 1063, - 1068, - 1510, - 1494, - 1480, - 1462, - 1464, - 6332, - 23381, - 40966, - 40982, - 40994, - 40975] -# Filtered by N_samples < 2000, N feats < 100, N 
classes < 10 - -open_cc_valid_dids = [13,25,35,40,41,43,48,49,51,53,55,56,59,61,187,285,329,333,334,335,336,337,338,377,446,450,451,452,460,463,464,466,470,475,481,679,694,717,721,724,733,738,745,747,748,750,753,756,757,764,765,767,774,778,786,788,795,796,798,801,802,810,811,814,820,825,826,827,831,839,840,841,844,852,853,854,860,880,886,895,900,906,907,908,909,915,925,930,931,934,939,940,941,949,966,968,984,987,996,1048,1054,1071,1073,1100,1115,1412,1442,1443,1444,1446,1447,1448,1451,1453,1488,1490,1495,1498,1499,1506,1508,1511,1512,1520,1523,4153,23499,40496,40646,40663,40669,40680,40682,40686,40690,40693,40705,40706,40710,40711,40981,41430,41538,41919,41976,42172,42261,42544,42585,42638] diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/more_itertools/recipes.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/more_itertools/recipes.py deleted file mode 100644 index 3facc2e3a67be6aff35c826b05661365e7dc02c3..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/more_itertools/recipes.py +++ /dev/null @@ -1,930 +0,0 @@ -"""Imported from the recipes section of the itertools documentation. - -All functions taken from the recipes section of the itertools library docs -[1]_. -Some backward-compatible usability improvements have been made. - -.. [1] http://docs.python.org/library/itertools.html#recipes - -""" -import math -import operator -import warnings - -from collections import deque -from collections.abc import Sized -from functools import reduce -from itertools import ( - chain, - combinations, - compress, - count, - cycle, - groupby, - islice, - product, - repeat, - starmap, - tee, - zip_longest, -) -from random import randrange, sample, choice -from sys import hexversion - -__all__ = [ - 'all_equal', - 'batched', - 'before_and_after', - 'consume', - 'convolve', - 'dotproduct', - 'first_true', - 'factor', - 'flatten', - 'grouper', - 'iter_except', - 'iter_index', - 'matmul', - 'ncycles', - 'nth', - 'nth_combination', - 'padnone', - 'pad_none', - 'pairwise', - 'partition', - 'polynomial_from_roots', - 'powerset', - 'prepend', - 'quantify', - 'random_combination_with_replacement', - 'random_combination', - 'random_permutation', - 'random_product', - 'repeatfunc', - 'roundrobin', - 'sieve', - 'sliding_window', - 'subslices', - 'tabulate', - 'tail', - 'take', - 'transpose', - 'triplewise', - 'unique_everseen', - 'unique_justseen', -] - -_marker = object() - - -def take(n, iterable): - """Return first *n* items of the iterable as a list. - - >>> take(3, range(10)) - [0, 1, 2] - - If there are fewer than *n* items in the iterable, all of them are - returned. - - >>> take(10, range(3)) - [0, 1, 2] - - """ - return list(islice(iterable, n)) - - -def tabulate(function, start=0): - """Return an iterator over the results of ``func(start)``, - ``func(start + 1)``, ``func(start + 2)``... - - *func* should be a function that accepts one integer argument. - - If *start* is not specified it defaults to 0. It will be incremented each - time the iterator is advanced. - - >>> square = lambda x: x ** 2 - >>> iterator = tabulate(square, -3) - >>> take(4, iterator) - [9, 4, 1, 0] - - """ - return map(function, count(start)) - - -def tail(n, iterable): - """Return an iterator over the last *n* items of *iterable*. 
- - >>> t = tail(3, 'ABCDEFG') - >>> list(t) - ['E', 'F', 'G'] - - """ - # If the given iterable has a length, then we can use islice to get its - # final elements. Note that if the iterable is not actually Iterable, - # either islice or deque will throw a TypeError. This is why we don't - # check if it is Iterable. - if isinstance(iterable, Sized): - yield from islice(iterable, max(0, len(iterable) - n), None) - else: - yield from iter(deque(iterable, maxlen=n)) - - -def consume(iterator, n=None): - """Advance *iterable* by *n* steps. If *n* is ``None``, consume it - entirely. - - Efficiently exhausts an iterator without returning values. Defaults to - consuming the whole iterator, but an optional second argument may be - provided to limit consumption. - - >>> i = (x for x in range(10)) - >>> next(i) - 0 - >>> consume(i, 3) - >>> next(i) - 4 - >>> consume(i) - >>> next(i) - Traceback (most recent call last): - File "", line 1, in - StopIteration - - If the iterator has fewer items remaining than the provided limit, the - whole iterator will be consumed. - - >>> i = (x for x in range(3)) - >>> consume(i, 5) - >>> next(i) - Traceback (most recent call last): - File "", line 1, in - StopIteration - - """ - # Use functions that consume iterators at C speed. - if n is None: - # feed the entire iterator into a zero-length deque - deque(iterator, maxlen=0) - else: - # advance to the empty slice starting at position n - next(islice(iterator, n, n), None) - - -def nth(iterable, n, default=None): - """Returns the nth item or a default value. - - >>> l = range(10) - >>> nth(l, 3) - 3 - >>> nth(l, 20, "zebra") - 'zebra' - - """ - return next(islice(iterable, n, None), default) - - -def all_equal(iterable): - """ - Returns ``True`` if all the elements are equal to each other. - - >>> all_equal('aaaa') - True - >>> all_equal('aaab') - False - - """ - g = groupby(iterable) - return next(g, True) and not next(g, False) - - -def quantify(iterable, pred=bool): - """Return the how many times the predicate is true. - - >>> quantify([True, False, True]) - 2 - - """ - return sum(map(pred, iterable)) - - -def pad_none(iterable): - """Returns the sequence of elements and then returns ``None`` indefinitely. - - >>> take(5, pad_none(range(3))) - [0, 1, 2, None, None] - - Useful for emulating the behavior of the built-in :func:`map` function. - - See also :func:`padded`. - - """ - return chain(iterable, repeat(None)) - - -padnone = pad_none - - -def ncycles(iterable, n): - """Returns the sequence elements *n* times - - >>> list(ncycles(["a", "b"], 3)) - ['a', 'b', 'a', 'b', 'a', 'b'] - - """ - return chain.from_iterable(repeat(tuple(iterable), n)) - - -def dotproduct(vec1, vec2): - """Returns the dot product of the two iterables. - - >>> dotproduct([10, 10], [20, 20]) - 400 - - """ - return sum(map(operator.mul, vec1, vec2)) - - -def flatten(listOfLists): - """Return an iterator flattening one level of nesting in a list of lists. - - >>> list(flatten([[0, 1], [2, 3]])) - [0, 1, 2, 3] - - See also :func:`collapse`, which can flatten multiple levels of nesting. - - """ - return chain.from_iterable(listOfLists) - - -def repeatfunc(func, times=None, *args): - """Call *func* with *args* repeatedly, returning an iterable over the - results. 
- - If *times* is specified, the iterable will terminate after that many - repetitions: - - >>> from operator import add - >>> times = 4 - >>> args = 3, 5 - >>> list(repeatfunc(add, times, *args)) - [8, 8, 8, 8] - - If *times* is ``None`` the iterable will not terminate: - - >>> from random import randrange - >>> times = None - >>> args = 1, 11 - >>> take(6, repeatfunc(randrange, times, *args)) # doctest:+SKIP - [2, 4, 8, 1, 8, 4] - - """ - if times is None: - return starmap(func, repeat(args)) - return starmap(func, repeat(args, times)) - - -def _pairwise(iterable): - """Returns an iterator of paired items, overlapping, from the original - - >>> take(4, pairwise(count())) - [(0, 1), (1, 2), (2, 3), (3, 4)] - - On Python 3.10 and above, this is an alias for :func:`itertools.pairwise`. - - """ - a, b = tee(iterable) - next(b, None) - yield from zip(a, b) - - -try: - from itertools import pairwise as itertools_pairwise -except ImportError: - pairwise = _pairwise -else: - - def pairwise(iterable): - yield from itertools_pairwise(iterable) - - pairwise.__doc__ = _pairwise.__doc__ - - -class UnequalIterablesError(ValueError): - def __init__(self, details=None): - msg = 'Iterables have different lengths' - if details is not None: - msg += (': index 0 has length {}; index {} has length {}').format( - *details - ) - - super().__init__(msg) - - -def _zip_equal_generator(iterables): - for combo in zip_longest(*iterables, fillvalue=_marker): - for val in combo: - if val is _marker: - raise UnequalIterablesError() - yield combo - - -def _zip_equal(*iterables): - # Check whether the iterables are all the same size. - try: - first_size = len(iterables[0]) - for i, it in enumerate(iterables[1:], 1): - size = len(it) - if size != first_size: - break - else: - # If we didn't break out, we can use the built-in zip. - return zip(*iterables) - - # If we did break out, there was a mismatch. - raise UnequalIterablesError(details=(first_size, i, size)) - # If any one of the iterables didn't have a length, start reading - # them until one runs out. - except TypeError: - return _zip_equal_generator(iterables) - - -def grouper(iterable, n, incomplete='fill', fillvalue=None): - """Group elements from *iterable* into fixed-length groups of length *n*. - - >>> list(grouper('ABCDEF', 3)) - [('A', 'B', 'C'), ('D', 'E', 'F')] - - The keyword arguments *incomplete* and *fillvalue* control what happens for - iterables whose length is not a multiple of *n*. - - When *incomplete* is `'fill'`, the last group will contain instances of - *fillvalue*. - - >>> list(grouper('ABCDEFG', 3, incomplete='fill', fillvalue='x')) - [('A', 'B', 'C'), ('D', 'E', 'F'), ('G', 'x', 'x')] - - When *incomplete* is `'ignore'`, the last group will not be emitted. - - >>> list(grouper('ABCDEFG', 3, incomplete='ignore', fillvalue='x')) - [('A', 'B', 'C'), ('D', 'E', 'F')] - - When *incomplete* is `'strict'`, a subclass of `ValueError` will be raised. - - >>> it = grouper('ABCDEFG', 3, incomplete='strict') - >>> list(it) # doctest: +IGNORE_EXCEPTION_DETAIL - Traceback (most recent call last): - ... - UnequalIterablesError - - """ - args = [iter(iterable)] * n - if incomplete == 'fill': - return zip_longest(*args, fillvalue=fillvalue) - if incomplete == 'strict': - return _zip_equal(*args) - if incomplete == 'ignore': - return zip(*args) - else: - raise ValueError('Expected fill, strict, or ignore') - - -def roundrobin(*iterables): - """Yields an item from each iterable, alternating between them. 
- - >>> list(roundrobin('ABC', 'D', 'EF')) - ['A', 'D', 'E', 'B', 'F', 'C'] - - This function produces the same output as :func:`interleave_longest`, but - may perform better for some inputs (in particular when the number of - iterables is small). - - """ - # Recipe credited to George Sakkis - pending = len(iterables) - nexts = cycle(iter(it).__next__ for it in iterables) - while pending: - try: - for next in nexts: - yield next() - except StopIteration: - pending -= 1 - nexts = cycle(islice(nexts, pending)) - - -def partition(pred, iterable): - """ - Returns a 2-tuple of iterables derived from the input iterable. - The first yields the items that have ``pred(item) == False``. - The second yields the items that have ``pred(item) == True``. - - >>> is_odd = lambda x: x % 2 != 0 - >>> iterable = range(10) - >>> even_items, odd_items = partition(is_odd, iterable) - >>> list(even_items), list(odd_items) - ([0, 2, 4, 6, 8], [1, 3, 5, 7, 9]) - - If *pred* is None, :func:`bool` is used. - - >>> iterable = [0, 1, False, True, '', ' '] - >>> false_items, true_items = partition(None, iterable) - >>> list(false_items), list(true_items) - ([0, False, ''], [1, True, ' ']) - - """ - if pred is None: - pred = bool - - evaluations = ((pred(x), x) for x in iterable) - t1, t2 = tee(evaluations) - return ( - (x for (cond, x) in t1 if not cond), - (x for (cond, x) in t2 if cond), - ) - - -def powerset(iterable): - """Yields all possible subsets of the iterable. - - >>> list(powerset([1, 2, 3])) - [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)] - - :func:`powerset` will operate on iterables that aren't :class:`set` - instances, so repeated elements in the input will produce repeated elements - in the output. Use :func:`unique_everseen` on the input to avoid generating - duplicates: - - >>> seq = [1, 1, 0] - >>> list(powerset(seq)) - [(), (1,), (1,), (0,), (1, 1), (1, 0), (1, 0), (1, 1, 0)] - >>> from more_itertools import unique_everseen - >>> list(powerset(unique_everseen(seq))) - [(), (1,), (0,), (1, 0)] - - """ - s = list(iterable) - return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)) - - -def unique_everseen(iterable, key=None): - """ - Yield unique elements, preserving order. - - >>> list(unique_everseen('AAAABBBCCDAABBB')) - ['A', 'B', 'C', 'D'] - >>> list(unique_everseen('ABBCcAD', str.lower)) - ['A', 'B', 'C', 'D'] - - Sequences with a mix of hashable and unhashable items can be used. - The function will be slower (i.e., `O(n^2)`) for unhashable items. - - Remember that ``list`` objects are unhashable - you can use the *key* - parameter to transform the list to a tuple (which is hashable) to - avoid a slowdown. - - >>> iterable = ([1, 2], [2, 3], [1, 2]) - >>> list(unique_everseen(iterable)) # Slow - [[1, 2], [2, 3]] - >>> list(unique_everseen(iterable, key=tuple)) # Faster - [[1, 2], [2, 3]] - - Similary, you may want to convert unhashable ``set`` objects with - ``key=frozenset``. For ``dict`` objects, - ``key=lambda x: frozenset(x.items())`` can be used. 
- - """ - seenset = set() - seenset_add = seenset.add - seenlist = [] - seenlist_add = seenlist.append - use_key = key is not None - - for element in iterable: - k = key(element) if use_key else element - try: - if k not in seenset: - seenset_add(k) - yield element - except TypeError: - if k not in seenlist: - seenlist_add(k) - yield element - - -def unique_justseen(iterable, key=None): - """Yields elements in order, ignoring serial duplicates - - >>> list(unique_justseen('AAAABBBCCDAABBB')) - ['A', 'B', 'C', 'D', 'A', 'B'] - >>> list(unique_justseen('ABBCcAD', str.lower)) - ['A', 'B', 'C', 'A', 'D'] - - """ - return map(next, map(operator.itemgetter(1), groupby(iterable, key))) - - -def iter_except(func, exception, first=None): - """Yields results from a function repeatedly until an exception is raised. - - Converts a call-until-exception interface to an iterator interface. - Like ``iter(func, sentinel)``, but uses an exception instead of a sentinel - to end the loop. - - >>> l = [0, 1, 2] - >>> list(iter_except(l.pop, IndexError)) - [2, 1, 0] - - Multiple exceptions can be specified as a stopping condition: - - >>> l = [1, 2, 3, '...', 4, 5, 6] - >>> list(iter_except(lambda: 1 + l.pop(), (IndexError, TypeError))) - [7, 6, 5] - >>> list(iter_except(lambda: 1 + l.pop(), (IndexError, TypeError))) - [4, 3, 2] - >>> list(iter_except(lambda: 1 + l.pop(), (IndexError, TypeError))) - [] - - """ - try: - if first is not None: - yield first() - while 1: - yield func() - except exception: - pass - - -def first_true(iterable, default=None, pred=None): - """ - Returns the first true value in the iterable. - - If no true value is found, returns *default* - - If *pred* is not None, returns the first item for which - ``pred(item) == True`` . - - >>> first_true(range(10)) - 1 - >>> first_true(range(10), pred=lambda x: x > 5) - 6 - >>> first_true(range(10), default='missing', pred=lambda x: x > 9) - 'missing' - - """ - return next(filter(pred, iterable), default) - - -def random_product(*args, repeat=1): - """Draw an item at random from each of the input iterables. - - >>> random_product('abc', range(4), 'XYZ') # doctest:+SKIP - ('c', 3, 'Z') - - If *repeat* is provided as a keyword argument, that many items will be - drawn from each iterable. - - >>> random_product('abcd', range(4), repeat=2) # doctest:+SKIP - ('a', 2, 'd', 3) - - This equivalent to taking a random selection from - ``itertools.product(*args, **kwarg)``. - - """ - pools = [tuple(pool) for pool in args] * repeat - return tuple(choice(pool) for pool in pools) - - -def random_permutation(iterable, r=None): - """Return a random *r* length permutation of the elements in *iterable*. - - If *r* is not specified or is ``None``, then *r* defaults to the length of - *iterable*. - - >>> random_permutation(range(5)) # doctest:+SKIP - (3, 4, 0, 1, 2) - - This equivalent to taking a random selection from - ``itertools.permutations(iterable, r)``. - - """ - pool = tuple(iterable) - r = len(pool) if r is None else r - return tuple(sample(pool, r)) - - -def random_combination(iterable, r): - """Return a random *r* length subsequence of the elements in *iterable*. - - >>> random_combination(range(5), 3) # doctest:+SKIP - (2, 3, 4) - - This equivalent to taking a random selection from - ``itertools.combinations(iterable, r)``. 
- - """ - pool = tuple(iterable) - n = len(pool) - indices = sorted(sample(range(n), r)) - return tuple(pool[i] for i in indices) - - -def random_combination_with_replacement(iterable, r): - """Return a random *r* length subsequence of elements in *iterable*, - allowing individual elements to be repeated. - - >>> random_combination_with_replacement(range(3), 5) # doctest:+SKIP - (0, 0, 1, 2, 2) - - This equivalent to taking a random selection from - ``itertools.combinations_with_replacement(iterable, r)``. - - """ - pool = tuple(iterable) - n = len(pool) - indices = sorted(randrange(n) for i in range(r)) - return tuple(pool[i] for i in indices) - - -def nth_combination(iterable, r, index): - """Equivalent to ``list(combinations(iterable, r))[index]``. - - The subsequences of *iterable* that are of length *r* can be ordered - lexicographically. :func:`nth_combination` computes the subsequence at - sort position *index* directly, without computing the previous - subsequences. - - >>> nth_combination(range(5), 3, 5) - (0, 3, 4) - - ``ValueError`` will be raised If *r* is negative or greater than the length - of *iterable*. - ``IndexError`` will be raised if the given *index* is invalid. - """ - pool = tuple(iterable) - n = len(pool) - if (r < 0) or (r > n): - raise ValueError - - c = 1 - k = min(r, n - r) - for i in range(1, k + 1): - c = c * (n - k + i) // i - - if index < 0: - index += c - - if (index < 0) or (index >= c): - raise IndexError - - result = [] - while r: - c, n, r = c * r // n, n - 1, r - 1 - while index >= c: - index -= c - c, n = c * (n - r) // n, n - 1 - result.append(pool[-1 - n]) - - return tuple(result) - - -def prepend(value, iterator): - """Yield *value*, followed by the elements in *iterator*. - - >>> value = '0' - >>> iterator = ['1', '2', '3'] - >>> list(prepend(value, iterator)) - ['0', '1', '2', '3'] - - To prepend multiple values, see :func:`itertools.chain` - or :func:`value_chain`. - - """ - return chain([value], iterator) - - -def convolve(signal, kernel): - """Convolve the iterable *signal* with the iterable *kernel*. - - >>> signal = (1, 2, 3, 4, 5) - >>> kernel = [3, 2, 1] - >>> list(convolve(signal, kernel)) - [3, 8, 14, 20, 26, 14, 5] - - Note: the input arguments are not interchangeable, as the *kernel* - is immediately consumed and stored. - - """ - kernel = tuple(kernel)[::-1] - n = len(kernel) - window = deque([0], maxlen=n) * n - for x in chain(signal, repeat(0, n - 1)): - window.append(x) - yield sum(map(operator.mul, kernel, window)) - - -def before_and_after(predicate, it): - """A variant of :func:`takewhile` that allows complete access to the - remainder of the iterator. - - >>> it = iter('ABCdEfGhI') - >>> all_upper, remainder = before_and_after(str.isupper, it) - >>> ''.join(all_upper) - 'ABC' - >>> ''.join(remainder) # takewhile() would lose the 'd' - 'dEfGhI' - - Note that the first iterator must be fully consumed before the second - iterator can generate valid results. - """ - it = iter(it) - transition = [] - - def true_iterator(): - for elem in it: - if predicate(elem): - yield elem - else: - transition.append(elem) - return - - # Note: this is different from itertools recipes to allow nesting - # before_and_after remainders into before_and_after again. See tests - # for an example. - remainder_iterator = chain(transition, it) - - return true_iterator(), remainder_iterator - - -def triplewise(iterable): - """Return overlapping triplets from *iterable*. 
- - >>> list(triplewise('ABCDE')) - [('A', 'B', 'C'), ('B', 'C', 'D'), ('C', 'D', 'E')] - - """ - for (a, _), (b, c) in pairwise(pairwise(iterable)): - yield a, b, c - - -def sliding_window(iterable, n): - """Return a sliding window of width *n* over *iterable*. - - >>> list(sliding_window(range(6), 4)) - [(0, 1, 2, 3), (1, 2, 3, 4), (2, 3, 4, 5)] - - If *iterable* has fewer than *n* items, then nothing is yielded: - - >>> list(sliding_window(range(3), 4)) - [] - - For a variant with more features, see :func:`windowed`. - """ - it = iter(iterable) - window = deque(islice(it, n), maxlen=n) - if len(window) == n: - yield tuple(window) - for x in it: - window.append(x) - yield tuple(window) - - -def subslices(iterable): - """Return all contiguous non-empty subslices of *iterable*. - - >>> list(subslices('ABC')) - [['A'], ['A', 'B'], ['A', 'B', 'C'], ['B'], ['B', 'C'], ['C']] - - This is similar to :func:`substrings`, but emits items in a different - order. - """ - seq = list(iterable) - slices = starmap(slice, combinations(range(len(seq) + 1), 2)) - return map(operator.getitem, repeat(seq), slices) - - -def polynomial_from_roots(roots): - """Compute a polynomial's coefficients from its roots. - - >>> roots = [5, -4, 3] # (x - 5) * (x + 4) * (x - 3) - >>> polynomial_from_roots(roots) # x^3 - 4 * x^2 - 17 * x + 60 - [1, -4, -17, 60] - """ - # Use math.prod for Python 3.8+, - prod = getattr(math, 'prod', lambda x: reduce(operator.mul, x, 1)) - roots = list(map(operator.neg, roots)) - return [ - sum(map(prod, combinations(roots, k))) for k in range(len(roots) + 1) - ] - - -def iter_index(iterable, value, start=0): - """Yield the index of each place in *iterable* that *value* occurs, - beginning with index *start*. - - See :func:`locate` for a more general means of finding the indexes - associated with particular values. - - >>> list(iter_index('AABCADEAF', 'A')) - [0, 1, 4, 7] - """ - try: - seq_index = iterable.index - except AttributeError: - # Slow path for general iterables - it = islice(iterable, start, None) - for i, element in enumerate(it, start): - if element is value or element == value: - yield i - else: - # Fast path for sequences - i = start - 1 - try: - while True: - i = seq_index(value, i + 1) - yield i - except ValueError: - pass - - -def sieve(n): - """Yield the primes less than n. - - >>> list(sieve(30)) - [2, 3, 5, 7, 11, 13, 17, 19, 23, 29] - """ - isqrt = getattr(math, 'isqrt', lambda x: int(math.sqrt(x))) - data = bytearray((0, 1)) * (n // 2) - data[:3] = 0, 0, 0 - limit = isqrt(n) + 1 - for p in compress(range(limit), data): - data[p * p : n : p + p] = bytes(len(range(p * p, n, p + p))) - data[2] = 1 - return iter_index(data, 1) if n > 2 else iter([]) - - -def batched(iterable, n): - """Batch data into lists of length *n*. The last batch may be shorter. - - >>> list(batched('ABCDEFG', 3)) - [['A', 'B', 'C'], ['D', 'E', 'F'], ['G']] - - This recipe is from the ``itertools`` docs. This library also provides - :func:`chunked`, which has a different implementation. - """ - if hexversion >= 0x30C00A0: # Python 3.12.0a0 - warnings.warn( - ( - 'batched will be removed in a future version of ' - 'more-itertools. Use the standard library ' - 'itertools.batched function instead' - ), - DeprecationWarning, - ) - - it = iter(iterable) - while True: - batch = list(islice(it, n)) - if not batch: - break - yield batch - - -def transpose(it): - """Swap the rows and columns of the input. 
- - >>> list(transpose([(1, 2, 3), (11, 22, 33)])) - [(1, 11), (2, 22), (3, 33)] - - The caller should ensure that the dimensions of the input are compatible. - """ - # TODO: when 3.9 goes end-of-life, add stric=True to this. - return zip(*it) - - -def matmul(m1, m2): - """Multiply two matrices. - >>> list(matmul([(7, 5), (3, 5)], [(2, 5), (7, 9)])) - [[49, 80], [41, 60]] - - The caller should ensure that the dimensions of the input matrices are - compatible with each other. - """ - n = len(m2[0]) - return batched(starmap(dotproduct, product(m1, transpose(m2))), n) - - -def factor(n): - """Yield the prime factors of n. - >>> list(factor(360)) - [2, 2, 2, 3, 3, 5] - """ - isqrt = getattr(math, 'isqrt', lambda x: int(math.sqrt(x))) - for prime in sieve(isqrt(n) + 1): - while True: - quotient, remainder = divmod(n, prime) - if remainder: - break - yield prime - n = quotient - if n == 1: - return - if n >= 2: - yield n diff --git a/spaces/TempoFunk/makeavid-sd-jax/makeavid_sd/flax_impl/flax_resnet_pseudo3d.py b/spaces/TempoFunk/makeavid-sd-jax/makeavid_sd/flax_impl/flax_resnet_pseudo3d.py deleted file mode 100644 index 22fc33eaebf6a0e5ac450db37d370a1b91942ec2..0000000000000000000000000000000000000000 --- a/spaces/TempoFunk/makeavid-sd-jax/makeavid_sd/flax_impl/flax_resnet_pseudo3d.py +++ /dev/null @@ -1,175 +0,0 @@ - -from typing import Optional, Union, Sequence - -import jax -import jax.numpy as jnp -import flax.linen as nn - -import einops - - -class ConvPseudo3D(nn.Module): - features: int - kernel_size: Sequence[int] - strides: Union[None, int, Sequence[int]] = 1 - padding: nn.linear.PaddingLike = 'SAME' - dtype: jnp.dtype = jnp.float32 - - def setup(self) -> None: - self.spatial_conv = nn.Conv( - features = self.features, - kernel_size = self.kernel_size, - strides = self.strides, - padding = self.padding, - dtype = self.dtype - ) - self.temporal_conv = nn.Conv( - features = self.features, - kernel_size = (3,), - padding = 'SAME', - dtype = self.dtype, - bias_init = nn.initializers.zeros_init() - # TODO dirac delta (identity) initialization impl - # kernel_init = torch.nn.init.dirac_ <-> jax/lax - ) - - def __call__(self, x: jax.Array, convolve_across_time: bool = True) -> jax.Array: - is_video = x.ndim == 5 - convolve_across_time = convolve_across_time and is_video - if is_video: - b, f, h, w, c = x.shape - x = einops.rearrange(x, 'b f h w c -> (b f) h w c') - x = self.spatial_conv(x) - if is_video: - x = einops.rearrange(x, '(b f) h w c -> b f h w c', b = b) - b, f, h, w, c = x.shape - if not convolve_across_time: - return x - if is_video: - x = einops.rearrange(x, 'b f h w c -> (b h w) f c') - x = self.temporal_conv(x) - x = einops.rearrange(x, '(b h w) f c -> b f h w c', h = h, w = w) - return x - - -class UpsamplePseudo3D(nn.Module): - out_channels: int - dtype: jnp.dtype = jnp.float32 - - def setup(self) -> None: - self.conv = ConvPseudo3D( - features = self.out_channels, - kernel_size = (3, 3), - strides = (1, 1), - padding = ((1, 1), (1, 1)), - dtype = self.dtype - ) - - def __call__(self, hidden_states: jax.Array) -> jax.Array: - is_video = hidden_states.ndim == 5 - if is_video: - b, *_ = hidden_states.shape - hidden_states = einops.rearrange(hidden_states, 'b f h w c -> (b f) h w c') - batch, h, w, c = hidden_states.shape - hidden_states = jax.image.resize( - image = hidden_states, - shape = (batch, h * 2, w * 2, c), - method = 'nearest' - ) - if is_video: - hidden_states = einops.rearrange(hidden_states, '(b f) h w c -> b f h w c', b = b) - hidden_states = 
self.conv(hidden_states) - return hidden_states - - -class DownsamplePseudo3D(nn.Module): - out_channels: int - dtype: jnp.dtype = jnp.float32 - - def setup(self) -> None: - self.conv = ConvPseudo3D( - features = self.out_channels, - kernel_size = (3, 3), - strides = (2, 2), - padding = ((1, 1), (1, 1)), - dtype = self.dtype - ) - - def __call__(self, hidden_states: jax.Array) -> jax.Array: - hidden_states = self.conv(hidden_states) - return hidden_states - - -class ResnetBlockPseudo3D(nn.Module): - in_channels: int - out_channels: Optional[int] = None - use_nin_shortcut: Optional[bool] = None - dtype: jnp.dtype = jnp.float32 - - def setup(self) -> None: - out_channels = self.in_channels if self.out_channels is None else self.out_channels - self.norm1 = nn.GroupNorm( - num_groups = 32, - epsilon = 1e-5 - ) - self.conv1 = ConvPseudo3D( - features = out_channels, - kernel_size = (3, 3), - strides = (1, 1), - padding = ((1, 1), (1, 1)), - dtype = self.dtype - ) - self.time_emb_proj = nn.Dense( - out_channels, - dtype = self.dtype - ) - self.norm2 = nn.GroupNorm( - num_groups = 32, - epsilon = 1e-5 - ) - self.conv2 = ConvPseudo3D( - features = out_channels, - kernel_size = (3, 3), - strides = (1, 1), - padding = ((1, 1), (1, 1)), - dtype = self.dtype - ) - use_nin_shortcut = self.in_channels != out_channels if self.use_nin_shortcut is None else self.use_nin_shortcut - self.conv_shortcut = None - if use_nin_shortcut: - self.conv_shortcut = ConvPseudo3D( - features = self.out_channels, - kernel_size = (1, 1), - strides = (1, 1), - padding = 'VALID', - dtype = self.dtype - ) - - def __call__(self, - hidden_states: jax.Array, - temb: jax.Array - ) -> jax.Array: - is_video = hidden_states.ndim == 5 - residual = hidden_states - hidden_states = self.norm1(hidden_states) - hidden_states = nn.silu(hidden_states) - hidden_states = self.conv1(hidden_states) - temb = nn.silu(temb) - temb = self.time_emb_proj(temb) - temb = jnp.expand_dims(temb, 1) - temb = jnp.expand_dims(temb, 1) - if is_video: - b, f, *_ = hidden_states.shape - hidden_states = einops.rearrange(hidden_states, 'b f h w c -> (b f) h w c') - hidden_states = hidden_states + temb.repeat(f, 0) - hidden_states = einops.rearrange(hidden_states, '(b f) h w c -> b f h w c', b = b) - else: - hidden_states = hidden_states + temb - hidden_states = self.norm2(hidden_states) - hidden_states = nn.silu(hidden_states) - hidden_states = self.conv2(hidden_states) - if self.conv_shortcut is not None: - residual = self.conv_shortcut(residual) - hidden_states = hidden_states + residual - return hidden_states - diff --git a/spaces/Treav/DICOMDeidentify2/deidentify.py b/spaces/Treav/DICOMDeidentify2/deidentify.py deleted file mode 100644 index 23cdb416118e8d3e1219f428bfa509d7e761f51a..0000000000000000000000000000000000000000 --- a/spaces/Treav/DICOMDeidentify2/deidentify.py +++ /dev/null @@ -1,26 +0,0 @@ -import glob -from pathlib import Path -import matplotlib.pyplot as plt -import pydicom -from presidio_image_redactor import DicomImageRedactorEngine -import pytesseract -from pydicom import dcmread, dcmwrite - -import os -def dicomprocess(pathDicom): - engine = DicomImageRedactorEngine() - # Load in and process your DICOM file as needed - dicom_instance = pydicom.dcmread(pathDicom) - # Redact - redacted_dicom_instance = engine.redact(dicom_instance, fill="contrast") - dcmwrite("redacted.dcm", redacted_dicom_instance) - print("-----------------------------{}-------------------".format( os.path.realpath("redacted.dcm"))) - return os.path.realpath("redacted.dcm") 
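# A minimal usage sketch for the dicomprocess() helper defined above in
# deidentify.py. The input path below is a hypothetical example; the helper
# redacts burned-in text with presidio's DicomImageRedactorEngine, writes
# "redacted.dcm" into the current working directory, and returns that file's
# absolute path, exactly as implemented above.
from deidentify import dicomprocess  # module name taken from the file path above

if __name__ == "__main__":
    redacted_path = dicomprocess("/data/example_study/IM-0001.dcm")  # hypothetical input
    print("Redacted copy written to:", redacted_path)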
- - - - - - - - diff --git a/spaces/VickyKira/NASAGPT/client/css/button.css b/spaces/VickyKira/NASAGPT/client/css/button.css deleted file mode 100644 index 5f604a8460d048458249f78be9dc544ade84801e..0000000000000000000000000000000000000000 --- a/spaces/VickyKira/NASAGPT/client/css/button.css +++ /dev/null @@ -1,26 +0,0 @@ -.button { - display: flex; - padding: 8px 12px; - align-items: center; - justify-content: center; - border: 1px solid var(--conversations); - border-radius: var(--border-radius-1); - width: 100%; - background: transparent; - cursor: pointer; -} - -.button span { - color: var(--colour-3); - font-size: 0.875rem; -} - -.button i::before { - margin-right: 8px; -} - -@media screen and (max-width: 990px) { - .button span { - font-size: 0.75rem; - } -} diff --git a/spaces/Xule/ChuanhuChatGPT/modules/webui_locale.py b/spaces/Xule/ChuanhuChatGPT/modules/webui_locale.py deleted file mode 100644 index 1ce4d97b9b41cbb2d9be3fdadc4c85f6ef897604..0000000000000000000000000000000000000000 --- a/spaces/Xule/ChuanhuChatGPT/modules/webui_locale.py +++ /dev/null @@ -1,26 +0,0 @@ -import os -import locale -import commentjson as json - -class I18nAuto: - def __init__(self): - if os.path.exists("config.json"): - with open("config.json", "r", encoding='utf-8') as f: - config = json.load(f) - else: - config = {} - lang_config = config.get("language", "auto") - language = os.environ.get("LANGUAGE", lang_config) - if language == "auto": - language = locale.getdefaultlocale()[0] # get the language code of the system (ex. zh_CN) - self.language_map = {} - self.file_is_exists = os.path.isfile(f"./locale/{language}.json") - if self.file_is_exists: - with open(f"./locale/{language}.json", "r", encoding="utf-8") as f: - self.language_map.update(json.load(f)) - - def __call__(self, key): - if self.file_is_exists and key in self.language_map: - return self.language_map[key] - else: - return key diff --git a/spaces/XzJosh/Echo-Bert-VITS2/text/chinese_bert.py b/spaces/XzJosh/Echo-Bert-VITS2/text/chinese_bert.py deleted file mode 100644 index cb84ce0b426cd0a1c7954ddcdf41322c10ed14fa..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Echo-Bert-VITS2/text/chinese_bert.py +++ /dev/null @@ -1,50 +0,0 @@ -import torch -from transformers import AutoTokenizer, AutoModelForMaskedLM - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -tokenizer = AutoTokenizer.from_pretrained("./bert/chinese-roberta-wwm-ext-large") -model = AutoModelForMaskedLM.from_pretrained("./bert/chinese-roberta-wwm-ext-large").to(device) - -def get_bert_feature(text, word2ph): - with torch.no_grad(): - inputs = tokenizer(text, return_tensors='pt') - for i in inputs: - inputs[i] = inputs[i].to(device) - res = model(**inputs, output_hidden_states=True) - res = torch.cat(res['hidden_states'][-3:-2], -1)[0].cpu() - - assert len(word2ph) == len(text)+2 - word2phone = word2ph - phone_level_feature = [] - for i in range(len(word2phone)): - repeat_feature = res[i].repeat(word2phone[i], 1) - phone_level_feature.append(repeat_feature) - - phone_level_feature = torch.cat(phone_level_feature, dim=0) - - - return phone_level_feature.T - -if __name__ == '__main__': - # feature = get_bert_feature('你好,我是说的道理。') - import torch - - word_level_feature = torch.rand(38, 1024) # 12个词,每个词1024维特征 - word2phone = [1, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 2, 2, 2, 1, 1, 2, 2, 1, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 1] - - # 计算总帧数 - total_frames = sum(word2phone) - print(word_level_feature.shape) - print(word2phone) - 
phone_level_feature = [] - for i in range(len(word2phone)): - print(word_level_feature[i].shape) - - # 对每个词重复word2phone[i]次 - repeat_feature = word_level_feature[i].repeat(word2phone[i], 1) - phone_level_feature.append(repeat_feature) - - phone_level_feature = torch.cat(phone_level_feature, dim=0) - print(phone_level_feature.shape) # torch.Size([36, 1024]) - diff --git a/spaces/YE01/saya-vits/text/thai.py b/spaces/YE01/saya-vits/text/thai.py deleted file mode 100644 index 998207c01a85c710a46db1ec8b62c39c2d94bc84..0000000000000000000000000000000000000000 --- a/spaces/YE01/saya-vits/text/thai.py +++ /dev/null @@ -1,44 +0,0 @@ -import re -from num_thai.thainumbers import NumThai - - -num = NumThai() - -# List of (Latin alphabet, Thai) pairs: -_latin_to_thai = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', 'เอ'), - ('b','บี'), - ('c','ซี'), - ('d','ดี'), - ('e','อี'), - ('f','เอฟ'), - ('g','จี'), - ('h','เอช'), - ('i','ไอ'), - ('j','เจ'), - ('k','เค'), - ('l','แอล'), - ('m','เอ็ม'), - ('n','เอ็น'), - ('o','โอ'), - ('p','พี'), - ('q','คิว'), - ('r','แอร์'), - ('s','เอส'), - ('t','ที'), - ('u','ยู'), - ('v','วี'), - ('w','ดับเบิลยู'), - ('x','เอ็กซ์'), - ('y','วาย'), - ('z','ซี') -]] - - -def num_to_thai(text): - return re.sub(r'(?:\d+(?:,?\d+)?)+(?:\.\d+(?:,?\d+)?)?', lambda x: ''.join(num.NumberToTextThai(float(x.group(0).replace(',', '')))), text) - -def latin_to_thai(text): - for regex, replacement in _latin_to_thai: - text = re.sub(regex, replacement, text) - return text diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_200ep_LSJ.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_200ep_LSJ.py deleted file mode 100644 index b867cc865e5ac4d7b70221da141894efd7cbd75c..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_200ep_LSJ.py +++ /dev/null @@ -1,14 +0,0 @@ -from .mask_rcnn_regnety_4gf_dds_FPN_100ep_LSJ import ( - dataloader, - lr_multiplier, - model, - optimizer, - train, -) - -train.max_iter *= 2 # 100ep -> 200ep - -lr_multiplier.scheduler.milestones = [ - milestone * 2 for milestone in lr_multiplier.scheduler.milestones -] -lr_multiplier.scheduler.num_updates = train.max_iter diff --git a/spaces/Yuliang/ICON/lib/dataset/NormalDataset.py b/spaces/Yuliang/ICON/lib/dataset/NormalDataset.py deleted file mode 100644 index 636d00c6952c002d3bab2dbcfe52ce80506d42ef..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ICON/lib/dataset/NormalDataset.py +++ /dev/null @@ -1,212 +0,0 @@ - -# -*- coding: utf-8 -*- - -# Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (MPG) is -# holder of all proprietary rights on this computer program. -# You can only use this computer program if you have closed -# a license agreement with MPG or you get the right to use the computer -# program from someone who is authorized to grant you that right. -# Any use of the computer program without a valid license is prohibited and -# liable to prosecution. -# -# Copyright©2019 Max-Planck-Gesellschaft zur Förderung -# der Wissenschaften e.V. (MPG). acting on behalf of its Max Planck Institute -# for Intelligent Systems. All rights reserved. 
-# -# Contact: ps-license@tuebingen.mpg.de - -import os.path as osp -import numpy as np -from PIL import Image -import torchvision.transforms as transforms - - -class NormalDataset(): - def __init__(self, cfg, split='train'): - - self.split = split - self.root = cfg.root - self.overfit = cfg.overfit - - self.opt = cfg.dataset - self.datasets = self.opt.types - self.input_size = self.opt.input_size - self.set_splits = self.opt.set_splits - self.scales = self.opt.scales - self.pifu = self.opt.pifu - - # input data types and dimensions - self.in_nml = [item[0] for item in cfg.net.in_nml] - self.in_nml_dim = [item[1] for item in cfg.net.in_nml] - self.in_total = self.in_nml + ['normal_F', 'normal_B'] - self.in_total_dim = self.in_nml_dim + [3, 3] - - if self.split != 'train': - self.rotations = range(0, 360, 120) - else: - self.rotations = np.arange(0, 360, 360 / - self.opt.rotation_num).astype(np.int) - - self.datasets_dict = {} - for dataset_id, dataset in enumerate(self.datasets): - dataset_dir = osp.join(self.root, dataset, "smplx") - self.datasets_dict[dataset] = { - "subjects": - np.loadtxt(osp.join(self.root, dataset, "all.txt"), dtype=str), - "path": - dataset_dir, - "scale": - self.scales[dataset_id] - } - - self.subject_list = self.get_subject_list(split) - - # PIL to tensor - self.image_to_tensor = transforms.Compose([ - transforms.Resize(self.input_size), - transforms.ToTensor(), - transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) - ]) - - # PIL to tensor - self.mask_to_tensor = transforms.Compose([ - transforms.Resize(self.input_size), - transforms.ToTensor(), - transforms.Normalize((0.0, ), (1.0, )) - ]) - - def get_subject_list(self, split): - - subject_list = [] - - for dataset in self.datasets: - - if self.pifu: - txt = osp.join(self.root, dataset, f'{split}_pifu.txt') - else: - txt = osp.join(self.root, dataset, f'{split}.txt') - - if osp.exists(txt): - print(f"load from {txt}") - subject_list += sorted(np.loadtxt(txt, dtype=str).tolist()) - - if self.pifu: - miss_pifu = sorted( - np.loadtxt(osp.join(self.root, dataset, - "miss_pifu.txt"), - dtype=str).tolist()) - subject_list = [ - subject for subject in subject_list - if subject not in miss_pifu - ] - subject_list = [ - "renderpeople/" + subject for subject in subject_list - ] - - else: - train_txt = osp.join(self.root, dataset, 'train.txt') - val_txt = osp.join(self.root, dataset, 'val.txt') - test_txt = osp.join(self.root, dataset, 'test.txt') - - print( - f"generate lists of [train, val, test] \n {train_txt} \n {val_txt} \n {test_txt} \n" - ) - - split_txt = osp.join(self.root, dataset, f'{split}.txt') - - subjects = self.datasets_dict[dataset]['subjects'] - train_split = int(len(subjects) * self.set_splits[0]) - val_split = int( - len(subjects) * self.set_splits[1]) + train_split - - with open(train_txt, "w") as f: - f.write("\n".join(dataset + "/" + item - for item in subjects[:train_split])) - with open(val_txt, "w") as f: - f.write("\n".join( - dataset + "/" + item - for item in subjects[train_split:val_split])) - with open(test_txt, "w") as f: - f.write("\n".join(dataset + "/" + item - for item in subjects[val_split:])) - - subject_list += sorted( - np.loadtxt(split_txt, dtype=str).tolist()) - - bug_list = sorted( - np.loadtxt(osp.join(self.root, 'bug.txt'), dtype=str).tolist()) - - subject_list = [ - subject for subject in subject_list if (subject not in bug_list) - ] - - return subject_list - - def __len__(self): - return len(self.subject_list) * len(self.rotations) - - def __getitem__(self, index): - - # 
only pick the first data if overfitting - if self.overfit: - index = 0 - - rid = index % len(self.rotations) - mid = index // len(self.rotations) - - rotation = self.rotations[rid] - - # choose specific test sets - subject = self.subject_list[mid] - - subject_render = "/".join( - [subject.split("/")[0] + "_12views", - subject.split("/")[1]]) - - # setup paths - data_dict = { - 'dataset': - subject.split("/")[0], - 'subject': - subject, - 'rotation': - rotation, - 'image_path': - osp.join(self.root, subject_render, 'render', - f'{rotation:03d}.png') - } - - # image/normal/depth loader - for name, channel in zip(self.in_total, self.in_total_dim): - - if name != 'image': - data_dict.update({ - f'{name}_path': - osp.join(self.root, subject_render, name, - f'{rotation:03d}.png') - }) - data_dict.update({ - name: - self.imagepath2tensor(data_dict[f'{name}_path'], - channel, - inv='depth_B' in name) - }) - - path_keys = [ - key for key in data_dict.keys() if '_path' in key or '_dir' in key - ] - for key in path_keys: - del data_dict[key] - - return data_dict - - def imagepath2tensor(self, path, channel=3, inv=False): - - rgba = Image.open(path).convert('RGBA') - mask = rgba.split()[-1] - image = rgba.convert('RGB') - image = self.image_to_tensor(image) - mask = self.mask_to_tensor(mask) - image = (image * mask)[:channel] - - return (image * (0.5 - inv) * 2.0).float() diff --git a/spaces/Yunshansongbai/SVC-Nahida/spec_gen.py b/spaces/Yunshansongbai/SVC-Nahida/spec_gen.py deleted file mode 100644 index 9476395adab6fa841fde10c05fbb92902310ebd4..0000000000000000000000000000000000000000 --- a/spaces/Yunshansongbai/SVC-Nahida/spec_gen.py +++ /dev/null @@ -1,22 +0,0 @@ -from data_utils import TextAudioSpeakerLoader -import json -from tqdm import tqdm - -from utils import HParams - -config_path = 'configs/config.json' -with open(config_path, "r") as f: - data = f.read() -config = json.loads(data) -hps = HParams(**config) - -train_dataset = TextAudioSpeakerLoader("filelists/train.txt", hps) -test_dataset = TextAudioSpeakerLoader("filelists/test.txt", hps) -eval_dataset = TextAudioSpeakerLoader("filelists/val.txt", hps) - -for _ in tqdm(train_dataset): - pass -for _ in tqdm(eval_dataset): - pass -for _ in tqdm(test_dataset): - pass \ No newline at end of file diff --git a/spaces/Zengyf-CVer/Gradio-YOLOv8-Det/app.py b/spaces/Zengyf-CVer/Gradio-YOLOv8-Det/app.py deleted file mode 100644 index 7d1ea2731bf230d33323e75c66c4d9dc77107b5c..0000000000000000000000000000000000000000 --- a/spaces/Zengyf-CVer/Gradio-YOLOv8-Det/app.py +++ /dev/null @@ -1,532 +0,0 @@ -# Gradio YOLOv8 Det v1.1.0 -# 创建人:曾逸夫 -# 创建时间:2023-11-04 -# pip install gradio>=4.1.1 - -import argparse -import csv -import random -import sys -from collections import Counter -from pathlib import Path - -import cv2 -import gradio as gr -import numpy as np -from matplotlib import font_manager -from ultralytics import YOLO - -ROOT_PATH = sys.path[0] # 项目根目录 - -# --------------------- 字体库 --------------------- -SimSun_path = f"{ROOT_PATH}/fonts/SimSun.ttf" # 宋体文件路径 -TimesNesRoman_path = f"{ROOT_PATH}/fonts/TimesNewRoman.ttf" # 新罗马字体文件路径 -# 宋体 -SimSun = font_manager.FontProperties(fname=SimSun_path, size=12) -# 新罗马字体 -TimesNesRoman = font_manager.FontProperties(fname=TimesNesRoman_path, size=12) - -import yaml -from PIL import Image, ImageDraw, ImageFont - -from util.fonts_opt import is_fonts - -ROOT_PATH = sys.path[0] # 根目录 - -# Gradio YOLOv8 Det版本 -GYD_VERSION = "Gradio YOLOv8 Det v1.1.0" - -# 文件后缀 -suffix_list = [".csv", ".yaml"] - -# 字体大小 -FONTSIZE = 25 - 
-# 目标尺寸 -obj_style = ["小目标", "中目标", "大目标"] - - -def parse_args(known=False): - parser = argparse.ArgumentParser(description="Gradio YOLOv8 Det v1.1.0") - parser.add_argument("--model_name", "-mn", default="yolov8s", type=str, help="model name") - parser.add_argument( - "--model_cfg", - "-mc", - default="./model_config/model_name_all.yaml", - type=str, - help="model config", - ) - parser.add_argument( - "--cls_name", - "-cls", - default="./cls_name/cls_name_zh.yaml", - type=str, - help="cls name", - ) - parser.add_argument( - "--nms_conf", - "-conf", - default=0.5, - type=float, - help="model NMS confidence threshold", - ) - parser.add_argument("--nms_iou", "-iou", default=0.45, type=float, help="model NMS IoU threshold") - parser.add_argument("--inference_size", "-isz", default=640, type=int, help="model inference size") - parser.add_argument("--max_detnum", "-mdn", default=50, type=float, help="model max det num") - parser.add_argument("--slider_step", "-ss", default=0.05, type=float, help="slider step") - parser.add_argument( - "--is_login", - "-isl", - action="store_true", - default=False, - help="is login", - ) - parser.add_argument('--usr_pwd', - "-up", - nargs='+', - type=str, - default=["admin", "admin"], - help="user & password for login") - parser.add_argument( - "--is_share", - "-is", - action="store_true", - default=False, - help="is login", - ) - parser.add_argument("--server_port", "-sp", default=7860, type=int, help="server port") - - args = parser.parse_known_args()[0] if known else parser.parse_args() - return args - - -# yaml文件解析 -def yaml_parse(file_path): - return yaml.safe_load(open(file_path, encoding="utf-8").read()) - - -# yaml csv 文件解析 -def yaml_csv(file_path, file_tag): - file_suffix = Path(file_path).suffix - if file_suffix == suffix_list[0]: - # 模型名称 - file_names = [i[0] for i in list(csv.reader(open(file_path)))] # csv版 - elif file_suffix == suffix_list[1]: - # 模型名称 - file_names = yaml_parse(file_path).get(file_tag) # yaml版 - else: - print(f"{file_path}格式不正确!程序退出!") - sys.exit() - - return file_names - - -# 检查网络连接 -def check_online(): - # 参考:https://github.com/ultralytics/yolov5/blob/master/utils/general.py - # Check internet connectivity - import socket - try: - socket.create_connection(("1.1.1.1", 443), 5) # check host accessibility - return True - except OSError: - return False - - -# 标签和边界框颜色设置 -def color_set(cls_num): - color_list = [] - for i in range(cls_num): - color = tuple(np.random.choice(range(256), size=3)) - color_list.append(color) - - return color_list - - -# 随机生成浅色系或者深色系 -def random_color(cls_num, is_light=True): - color_list = [] - for i in range(cls_num): - color = ( - random.randint(0, 127) + int(is_light) * 128, - random.randint(0, 127) + int(is_light) * 128, - random.randint(0, 127) + int(is_light) * 128, - ) - color_list.append(color) - - return color_list - - -# 检测绘制 -def pil_draw(img, score_l, bbox_l, cls_l, cls_index_l, textFont, color_list): - img_pil = ImageDraw.Draw(img) - id = 0 - - for score, (xmin, ymin, xmax, ymax), label, cls_index in zip(score_l, bbox_l, cls_l, cls_index_l): - - img_pil.rectangle([xmin, ymin, xmax, ymax], fill=None, outline=color_list[cls_index], width=2) # 边界框 - countdown_msg = f"{id}-{label} {score:.2f}" - # text_w, text_h = textFont.getsize(countdown_msg) # 标签尺寸 pillow 9.5.0 - # left, top, left + width, top + height - # 标签尺寸 pillow 10.0.0 - text_xmin, text_ymin, text_xmax, text_ymax = textFont.getbbox(countdown_msg) - # 标签背景 - img_pil.rectangle( - # (xmin, ymin, xmin + text_w, ymin + text_h), # pillow 9.5.0 
- (xmin, ymin, xmin + text_xmax - text_xmin, ymin + text_ymax - text_ymin), # pillow 10.0.0 - fill=color_list[cls_index], - outline=color_list[cls_index], - ) - - # 标签 - img_pil.multiline_text( - (xmin, ymin), - countdown_msg, - fill=(0, 0, 0), - font=textFont, - align="center", - ) - - id += 1 - - return img - - -# 绘制多边形 -def polygon_drawing(img_mask, canvas, color_seg): - # ------- RGB转BGR ------- - color_seg = list(color_seg) - color_seg[0], color_seg[2] = color_seg[2], color_seg[0] - color_seg = tuple(color_seg) - # 定义多边形的顶点 - pts = np.array(img_mask, dtype=np.int32) - - # 多边形绘制 - cv2.drawContours(canvas, [pts], -1, color_seg, thickness=-1) - - -# 输出分割结果 -def seg_output(img_path, seg_mask_list, color_list, cls_list): - img = cv2.imread(img_path) - img_c = img.copy() - - # w, h = img.shape[1], img.shape[0] - - # 获取分割坐标 - for seg_mask, cls_index in zip(seg_mask_list, cls_list): - img_mask = [] - for i in range(len(seg_mask)): - # img_mask.append([seg_mask[i][0] * w, seg_mask[i][1] * h]) - img_mask.append([seg_mask[i][0], seg_mask[i][1]]) - - polygon_drawing(img_mask, img_c, color_list[int(cls_index)]) # 绘制分割图形 - - img_mask_merge = cv2.addWeighted(img, 0.3, img_c, 0.7, 0) # 合并图像 - - return img_mask_merge - - -# 目标检测和图像分割模型加载 -def model_loading(img_path, device_opt, conf, iou, infer_size, max_det, yolo_model="yolov8n.pt"): - model = YOLO(yolo_model) - - results = model(source=img_path, device=device_opt, imgsz=infer_size, conf=conf, iou=iou, max_det=max_det) - results = list(results)[0] - return results - - -# 图像分类模型加载 -def model_cls_loading(img_path, yolo_model="yolov8s-cls.pt"): - model = YOLO(yolo_model) - - results = model(source=img_path) - results = list(results)[0] - return results - - -# YOLOv8图片检测函数 -def yolo_det_img(img_path, model_name, device_opt, infer_size, conf, iou, max_det, obj_size): - - global model, model_name_tmp, device_tmp - - s_obj, m_obj, l_obj = 0, 0, 0 - - area_obj_all = [] # 目标面积 - - score_det_stat = [] # 置信度统计 - bbox_det_stat = [] # 边界框统计 - cls_det_stat = [] # 类别数量统计 - cls_index_det_stat = [] # 1 - - # 模型加载 - predict_results = model_loading(img_path, device_opt, conf, iou, infer_size, max_det, yolo_model=f"{model_name}.pt") - # 检测参数 - xyxy_list = predict_results.boxes.xyxy.cpu().numpy().tolist() - conf_list = predict_results.boxes.conf.cpu().numpy().tolist() - cls_list = predict_results.boxes.cls.cpu().numpy().tolist() - - # 颜色列表 - color_list = random_color(len(model_cls_name_cp), True) - - # 图像分割 - if (model_name[-3:] == "seg"): - # masks_list = predict_results.masks.xyn - masks_list = predict_results.masks.xy - img_mask_merge = seg_output(img_path, masks_list, color_list, cls_list) - img = Image.fromarray(cv2.cvtColor(img_mask_merge, cv2.COLOR_BGRA2RGBA)) - else: - img = Image.open(img_path) - - # 判断检测对象是否为空 - if (xyxy_list != []): - - # ---------------- 加载字体 ---------------- - yaml_index = cls_name.index(".yaml") - cls_name_lang = cls_name[yaml_index - 2:yaml_index] - - if cls_name_lang == "zh": - # 中文 - textFont = ImageFont.truetype(str(f"{ROOT_PATH}/fonts/SimSun.ttf"), size=FONTSIZE) - elif cls_name_lang in ["en", "ru", "es", "ar"]: - # 英文、俄语、西班牙语、阿拉伯语 - textFont = ImageFont.truetype(str(f"{ROOT_PATH}/fonts/TimesNewRoman.ttf"), size=FONTSIZE) - elif cls_name_lang == "ko": - # 韩语 - textFont = ImageFont.truetype(str(f"{ROOT_PATH}/fonts/malgun.ttf"), size=FONTSIZE) - - for i in range(len(xyxy_list)): - - # ------------ 边框坐标 ------------ - x0 = int(xyxy_list[i][0]) - y0 = int(xyxy_list[i][1]) - x1 = int(xyxy_list[i][2]) - y1 = int(xyxy_list[i][3]) - - # 
---------- 加入目标尺寸 ---------- - w_obj = x1 - x0 - h_obj = y1 - y0 - area_obj = w_obj * h_obj # 目标尺寸 - - if (obj_size == "小目标" and area_obj > 0 and area_obj <= 32 ** 2): - obj_cls_index = int(cls_list[i]) # 类别索引 - cls_index_det_stat.append(obj_cls_index) - - obj_cls = model_cls_name_cp[obj_cls_index] # 类别 - cls_det_stat.append(obj_cls) - - bbox_det_stat.append((x0, y0, x1, y1)) - - conf = float(conf_list[i]) # 置信度 - score_det_stat.append(conf) - - area_obj_all.append(area_obj) - elif (obj_size == "中目标" and area_obj > 32 ** 2 and area_obj <= 96 ** 2): - obj_cls_index = int(cls_list[i]) # 类别索引 - cls_index_det_stat.append(obj_cls_index) - - obj_cls = model_cls_name_cp[obj_cls_index] # 类别 - cls_det_stat.append(obj_cls) - - bbox_det_stat.append((x0, y0, x1, y1)) - - conf = float(conf_list[i]) # 置信度 - score_det_stat.append(conf) - - area_obj_all.append(area_obj) - elif (obj_size == "大目标" and area_obj > 96 ** 2): - obj_cls_index = int(cls_list[i]) # 类别索引 - cls_index_det_stat.append(obj_cls_index) - - obj_cls = model_cls_name_cp[obj_cls_index] # 类别 - cls_det_stat.append(obj_cls) - - bbox_det_stat.append((x0, y0, x1, y1)) - - conf = float(conf_list[i]) # 置信度 - score_det_stat.append(conf) - - area_obj_all.append(area_obj) - elif (obj_size == "所有尺寸"): - obj_cls_index = int(cls_list[i]) # 类别索引 - cls_index_det_stat.append(obj_cls_index) - - obj_cls = model_cls_name_cp[obj_cls_index] # 类别 - cls_det_stat.append(obj_cls) - - bbox_det_stat.append((x0, y0, x1, y1)) - - conf = float(conf_list[i]) # 置信度 - score_det_stat.append(conf) - - area_obj_all.append(area_obj) - - det_img = pil_draw(img, score_det_stat, bbox_det_stat, cls_det_stat, cls_index_det_stat, textFont, color_list) - - # -------------- 目标尺寸计算 -------------- - for i in range(len(area_obj_all)): - if (0 < area_obj_all[i] <= 32 ** 2): - s_obj = s_obj + 1 - elif (32 ** 2 < area_obj_all[i] <= 96 ** 2): - m_obj = m_obj + 1 - elif (area_obj_all[i] > 96 ** 2): - l_obj = l_obj + 1 - - sml_obj_total = s_obj + m_obj + l_obj - objSize_dict = {} - objSize_dict = {obj_style[i]: [s_obj, m_obj, l_obj][i] / sml_obj_total for i in range(3)} - - # ------------ 类别统计 ------------ - clsRatio_dict = {} - clsDet_dict = Counter(cls_det_stat) - clsDet_dict_sum = sum(clsDet_dict.values()) - for k, v in clsDet_dict.items(): - clsRatio_dict[k] = v / clsDet_dict_sum - - gr.Info("图片检测成功!") - return det_img, objSize_dict, clsRatio_dict - else: - raise gr.Error("图片检测失败!") - - -# YOLOv8图片分类函数 -def yolo_cls_img(img_path, model_name): - - # 模型加载 - predict_results = model_cls_loading(img_path, yolo_model=f"{model_name}.pt") - - det_img = Image.open(img_path) - clas_ratio_list = predict_results.probs.top5conf.tolist() - clas_index_list = predict_results.probs.top5 - - clas_name_list = [] - for i in clas_index_list: - clas_name_list.append(predict_results.names[i]) - - clsRatio_dict = {} - index_cls = 0 - clsDet_dict = Counter(clas_name_list) - for k, v in clsDet_dict.items(): - clsRatio_dict[k] = clas_ratio_list[index_cls] - index_cls+=1 - - return det_img, clsRatio_dict - - -def main(args): - gr.close_all() - - global model_cls_name_cp, cls_name - - nms_conf = args.nms_conf - nms_iou = args.nms_iou - model_name = args.model_name - model_cfg = args.model_cfg - cls_name = args.cls_name - inference_size = args.inference_size - max_detnum = args.max_detnum - slider_step = args.slider_step - - is_fonts(f"{ROOT_PATH}/fonts") # 检查字体文件 - - model_names = yaml_csv(model_cfg, "model_names") # 模型名称 - model_cls_name = yaml_csv(cls_name, "model_cls_name") # 类别名称 - - model_cls_name_cp = 
model_cls_name.copy() # 类别名称 - - # ------------ Gradio Blocks ------------ - with gr.Blocks() as gyd: - with gr.Row(): - gr.Markdown(value="

-                    Simple Icons
-                    基于 Gradio 的 YOLOv8 通用计算机视觉演示系统
-                    集成目标检测、图像分割和图像分类于一体,可自定义检测模型
" - ) - with gr.Row(): - gr.Markdown(value="作者:曾逸夫,Gitee:https://gitee.com/PyCVer ,Github:https://github.com/Zengyf-CVer") - with gr.Row(): - with gr.Column(scale=1): - with gr.Tabs(): - with gr.TabItem("目标检测与图像分割"): - with gr.Row(): - inputs_img = gr.Image(image_mode="RGB", type="filepath", label="原始图片") - with gr.Row(): - device_opt = gr.Radio(choices=["cpu", "0", "1", "2", "3"], value="cpu", label="设备") - with gr.Row(): - inputs_model = gr.Dropdown(choices=model_names, value=model_name, type="value", label="模型") - with gr.Row(): - inputs_size = gr.Slider(320, 1600, step=1, value=inference_size, label="推理尺寸") - max_det = gr.Slider(1, 1000, step=1, value=max_detnum, label="最大检测数") - with gr.Row(): - input_conf = gr.Slider(0, 1, step=slider_step, value=nms_conf, label="置信度阈值") - inputs_iou = gr.Slider(0, 1, step=slider_step, value=nms_iou, label="IoU 阈值") - with gr.Row(): - obj_size = gr.Radio(choices=["所有尺寸", "小目标", "中目标", "大目标"], value="所有尺寸", label="目标尺寸") - with gr.Row(): - gr.ClearButton(inputs_img, value="清除") - det_btn_img = gr.Button(value='检测', variant="primary") - - with gr.TabItem("图像分类"): - with gr.Row(): - inputs_img_cls = gr.Image(image_mode="RGB", type="filepath", label="原始图片") - with gr.Row(): - inputs_model_cls = gr.Dropdown(choices=["yolov8n-cls", "yolov8s-cls", "yolov8l-cls", "yolov8m-cls", "yolov8x-cls"], value="yolov8s-cls", type="value", label="模型") - with gr.Row(): - gr.ClearButton(inputs_img, value="清除") - det_btn_img_cls = gr.Button(value='检测', variant="primary") - - with gr.Column(scale=1): - with gr.Tabs(): - with gr.TabItem("目标检测与图像分割"): - with gr.Row(): - outputs_img = gr.Image(type="pil", label="检测图片") - with gr.Row(): - outputs_objSize = gr.Label(label="目标尺寸占比统计") - with gr.Row(): - outputs_clsSize = gr.Label(label="类别检测占比统计") - - with gr.TabItem("图像分类"): - with gr.Row(): - outputs_img_cls = gr.Image(type="pil", label="检测图片") - with gr.Row(): - outputs_ratio_cls = gr.Label(label="图像分类结果") - - - with gr.Row(): - example_list = [ - ["./img_examples/bus.jpg", "yolov8s", "cpu", 640, 0.6, 0.5, 100, "所有尺寸"], - ["./img_examples/giraffe.jpg", "yolov8l", "cpu", 320, 0.5, 0.45, 100, "所有尺寸"], - ["./img_examples/zidane.jpg", "yolov8m", "cpu", 640, 0.6, 0.5, 100, "所有尺寸"], - ["./img_examples/Millenial-at-work.jpg", "yolov8x", "cpu", 1280, 0.5, 0.5, 100, "所有尺寸"], - ["./img_examples/bus.jpg", "yolov8s-seg", "cpu", 640, 0.6, 0.5, 100, "所有尺寸"], - ["./img_examples/Millenial-at-work.jpg", "yolov8x-seg", "cpu", 1280, 0.5, 0.5, 100, "所有尺寸"],] - gr.Examples(example_list, - [inputs_img, inputs_model, device_opt, inputs_size, input_conf, inputs_iou, max_det, obj_size], - [outputs_img, outputs_objSize, outputs_clsSize], - yolo_det_img, - cache_examples=False) - - det_btn_img.click(fn=yolo_det_img, - inputs=[ - inputs_img, inputs_model, device_opt, inputs_size, input_conf, inputs_iou, max_det, - obj_size], - outputs=[outputs_img, outputs_objSize, outputs_clsSize]) - - det_btn_img_cls.click(fn=yolo_cls_img, - inputs=[ - inputs_img_cls, inputs_model_cls], - outputs=[outputs_img_cls, outputs_ratio_cls]) - - return gyd - - -if __name__ == "__main__": - args = parse_args() - gyd = main(args) - is_share = args.is_share - - gyd.queue().launch( - inbrowser=True, # 自动打开默认浏览器 - share=is_share, # 项目共享,其他设备可以访问 - favicon_path="./icon/logo.ico", # 网页图标 - show_error=True, # 在浏览器控制台中显示错误信息 - quiet=True, # 禁止大多数打印语句 - ) diff --git a/spaces/Zengyf-CVer/color_generator/app.py b/spaces/Zengyf-CVer/color_generator/app.py deleted file mode 100644 index 
3b18bd0839fba3897d92660a7eb8bd79d493d2f1..0000000000000000000000000000000000000000 --- a/spaces/Zengyf-CVer/color_generator/app.py +++ /dev/null @@ -1,63 +0,0 @@ -import gradio as gr -import cv2 -import numpy as np -import random - - -# Convert decimal color to hexadecimal color -def RGB_to_Hex(rgb): - color = "#" - for i in rgb: - num = int(i) - color += str(hex(num))[-2:].replace("x", "0").upper() - return color - - -# Randomly generate light or dark colors -def random_color(is_light=True): - return ( - random.randint(0, 127) + int(is_light) * 128, - random.randint(0, 127) + int(is_light) * 128, - random.randint(0, 127) + int(is_light) * 128, - ) - - -def switch_color(color_style): - if color_style == "light": - is_light = True - elif color_style == "dark": - is_light = False - back_color_ = random_color(is_light) # Randomly generate colors - back_color = RGB_to_Hex(back_color_) # Convert to hexadecimal - - # Draw color pictures. - w, h = 50, 50 - img = np.zeros((h, w, 3), np.uint8) - cv2.rectangle(img, (0, 0), (w, h), back_color_, thickness=-1) - - return back_color, back_color, img - - -inputs = [gr.Radio(["light", "dark"], value="light")] - -outputs = [ - gr.ColorPicker(label="color"), - gr.Textbox(label="hexadecimal color"), - gr.Image(type="numpy", label="color picture"), -] - -title = "Color Generator" -description = ( - "Click the Submit button, and a dark or light color will be randomly generated." -) - -demo = gr.Interface( - fn=switch_color, - inputs=inputs, - outputs=outputs, - title=title, - description=description, -) - -if __name__ == "__main__": - demo.launch() diff --git a/spaces/abby711/FaceRestoration/inference_gfpgan.py b/spaces/abby711/FaceRestoration/inference_gfpgan.py deleted file mode 100644 index a426cfc7b9e67aef84e0f3c0666e09d875ebb222..0000000000000000000000000000000000000000 --- a/spaces/abby711/FaceRestoration/inference_gfpgan.py +++ /dev/null @@ -1,116 +0,0 @@ -import argparse -import cv2 -import glob -import numpy as np -import os -import torch -from basicsr.utils import imwrite - -from gfpgan import GFPGANer - - -def main(): - """Inference demo for GFPGAN. - """ - parser = argparse.ArgumentParser() - parser.add_argument('--upscale', type=int, default=2, help='The final upsampling scale of the image') - parser.add_argument('--arch', type=str, default='clean', help='The GFPGAN architecture. Option: clean | original') - parser.add_argument('--channel', type=int, default=2, help='Channel multiplier for large networks of StyleGAN2') - parser.add_argument('--model_path', type=str, default='experiments/pretrained_models/GFPGANCleanv1-NoCE-C2.pth') - parser.add_argument('--bg_upsampler', type=str, default='realesrgan', help='background upsampler') - parser.add_argument( - '--bg_tile', type=int, default=400, help='Tile size for background sampler, 0 for no tile during testing') - parser.add_argument('--test_path', type=str, default='inputs/whole_imgs', help='Input folder') - parser.add_argument('--suffix', type=str, default=None, help='Suffix of the restored faces') - parser.add_argument('--only_center_face', action='store_true', help='Only restore the center face') - parser.add_argument('--aligned', action='store_true', help='Input are aligned faces') - parser.add_argument('--paste_back', action='store_false', help='Paste the restored faces back to images') - parser.add_argument('--save_root', type=str, default='results', help='Path to save root') - parser.add_argument( - '--ext', - type=str, - default='auto', - help='Image extension. 
Options: auto | jpg | png, auto means using the same extension as inputs') - args = parser.parse_args() - - args = parser.parse_args() - if args.test_path.endswith('/'): - args.test_path = args.test_path[:-1] - os.makedirs(args.save_root, exist_ok=True) - - # background upsampler - if args.bg_upsampler == 'realesrgan': - if not torch.cuda.is_available(): # CPU - import warnings - warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. ' - 'If you really want to use it, please modify the corresponding codes.') - bg_upsampler = None - else: - from basicsr.archs.rrdbnet_arch import RRDBNet - from realesrgan import RealESRGANer - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=2) - bg_upsampler = RealESRGANer( - scale=2, - model_path='https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth', - model=model, - tile=args.bg_tile, - tile_pad=10, - pre_pad=0, - half=True) # need to set False in CPU mode - else: - bg_upsampler = None - # set up GFPGAN restorer - restorer = GFPGANer( - model_path=args.model_path, - upscale=args.upscale, - arch=args.arch, - channel_multiplier=args.channel, - bg_upsampler=bg_upsampler) - - img_list = sorted(glob.glob(os.path.join(args.test_path, '*'))) - for img_path in img_list: - # read image - img_name = os.path.basename(img_path) - print(f'Processing {img_name} ...') - basename, ext = os.path.splitext(img_name) - input_img = cv2.imread(img_path, cv2.IMREAD_COLOR) - - # restore faces and background if necessary - cropped_faces, restored_faces, restored_img = restorer.enhance( - input_img, has_aligned=args.aligned, only_center_face=args.only_center_face, paste_back=args.paste_back) - - # save faces - for idx, (cropped_face, restored_face) in enumerate(zip(cropped_faces, restored_faces)): - # save cropped face - save_crop_path = os.path.join(args.save_root, 'cropped_faces', f'{basename}_{idx:02d}.png') - imwrite(cropped_face, save_crop_path) - # save restored face - if args.suffix is not None: - save_face_name = f'{basename}_{idx:02d}_{args.suffix}.png' - else: - save_face_name = f'{basename}_{idx:02d}.png' - save_restore_path = os.path.join(args.save_root, 'restored_faces', save_face_name) - imwrite(restored_face, save_restore_path) - # save comparison image - cmp_img = np.concatenate((cropped_face, restored_face), axis=1) - imwrite(cmp_img, os.path.join(args.save_root, 'cmp', f'{basename}_{idx:02d}.png')) - - # save restored img - if restored_img is not None: - if args.ext == 'auto': - extension = ext[1:] - else: - extension = args.ext - - if args.suffix is not None: - save_restore_path = os.path.join(args.save_root, 'restored_imgs', - f'{basename}_{args.suffix}.{extension}') - else: - save_restore_path = os.path.join(args.save_root, 'restored_imgs', f'{basename}.{extension}') - imwrite(restored_img, save_restore_path) - - print(f'Results are in the [{args.save_root}] folder.') - - -if __name__ == '__main__': - main() diff --git a/spaces/abdvl/datahub_qa_bot/docs/advanced/es-7-upgrade.md b/spaces/abdvl/datahub_qa_bot/docs/advanced/es-7-upgrade.md deleted file mode 100644 index 58e86e54d921bfef39a48627614163991804f2c3..0000000000000000000000000000000000000000 --- a/spaces/abdvl/datahub_qa_bot/docs/advanced/es-7-upgrade.md +++ /dev/null @@ -1,38 +0,0 @@ -# Elasticsearch upgrade from 5.6.8 to 7.9.3 - -## Summary of changes -Checkout the list of breaking changes for [Elasticsearch 6](https://www.elastic.co/guide/en/elasticsearch/reference/6.8/breaking-changes-6.0.html) and 
[Elasticsearch 7](https://www.elastic.co/guide/en/elasticsearch/reference/7.x/breaking-changes-7.0.html). Following is the summary of changes that impact Datahub. - -### Search index mapping & settings -- Removal of mapping types (as mentioned [here](https://www.elastic.co/guide/en/elasticsearch/reference/current/removal-of-types.html)) -- Specify the maximum allowed difference between `min_gram` and `max_gram` for NGramTokenizer and NGramTokenFilter by adding property `max_ngram_diff` in index settings, particularly if the difference is greater than 1 (as mentioned [here](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules.html)) - -### Search query -The following parameters are/were `optional` and hence automatically populated in the search query. Some tests that expect a certain search query to be sent to ES will change with the ES upgrade. -- `disable_coord` parameter of the `bool` and `common_terms` queries has been removed (as mentioned [here](https://www.elastic.co/guide/en/elasticsearch/reference/6.8/breaking-changes-6.0.html)) -- `auto_generate_synonyms_phrase_query` parameter in `match` query is added with a default value of `true` (as mentioned [here](https://www.elastic.co/guide/en/elasticsearch/reference/7.x/query-dsl-match-query.html)) - -### Java High Level Rest Client -- In 7.9.3, Java High Level Rest Client instance needs a REST low-level client builder to be built. In 5.6.8, the same instance needs REST low-level client -- Document APIs such as the Index API, Delete API, etc no longer takes the doc `type` as an input - -## Migration strategy - -As mentioned in the docs, indices created in Elasticsearch 5.x are not readable by Elasticsearch 7.x. Running the upgraded elasticsearch container on the existing esdata volume will fail. - -For local development, our recommendation is to run the `docker/nuke.sh` script to remove the existing esdata volume before starting up the containers. Note, all data will be lost. - -To migrate without losing data, please refer to the python script and Dockerfile in `contrib/elasticsearch/es7-upgrade`. The script takes source and destination elasticsearch cluster URL and SSL configuration (if applicable) as input. It ports the mappings and settings for all indices in the source cluster to the destination cluster making the necessary changes stated above. Then it transfers all documents in the source cluster to the destination cluster. - -You can run the script in a docker container as follows -``` -docker build -t migrate-es-7 . -docker run migrate-es-7 -s SOURCE -d DEST [--disable-source-ssl] - [--disable-dest-ssl] [--cert-file CERT_FILE] - [--key-file KEY_FILE] [--ca-file CA_FILE] [--create-only] - [-i INDICES] [--name-override NAME_OVERRIDE] -``` - -## Plan - -We will create an "elasticsearch-5-legacy" branch with the version of master prior to the elasticsearch 7 upgrade. However, we will not be supporting this branch moving forward and all future development will be done using elasticsearch 7.9.3 \ No newline at end of file diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/cnn/bricks/drop.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/cnn/bricks/drop.py deleted file mode 100644 index b7b4fccd457a0d51fb10c789df3c8537fe7b67c1..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/cnn/bricks/drop.py +++ /dev/null @@ -1,65 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.nn as nn - -from annotator.uniformer.mmcv import build_from_cfg -from .registry import DROPOUT_LAYERS - - -def drop_path(x, drop_prob=0., training=False): - """Drop paths (Stochastic Depth) per sample (when applied in main path of - residual blocks). - - We follow the implementation - https://github.com/rwightman/pytorch-image-models/blob/a2727c1bf78ba0d7b5727f5f95e37fb7f8866b1f/timm/models/layers/drop.py # noqa: E501 - """ - if drop_prob == 0. or not training: - return x - keep_prob = 1 - drop_prob - # handle tensors with different dimensions, not just 4D tensors. - shape = (x.shape[0], ) + (1, ) * (x.ndim - 1) - random_tensor = keep_prob + torch.rand( - shape, dtype=x.dtype, device=x.device) - output = x.div(keep_prob) * random_tensor.floor() - return output - - -@DROPOUT_LAYERS.register_module() -class DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of - residual blocks). - - We follow the implementation - https://github.com/rwightman/pytorch-image-models/blob/a2727c1bf78ba0d7b5727f5f95e37fb7f8866b1f/timm/models/layers/drop.py # noqa: E501 - - Args: - drop_prob (float): Probability of the path to be zeroed. Default: 0.1 - """ - - def __init__(self, drop_prob=0.1): - super(DropPath, self).__init__() - self.drop_prob = drop_prob - - def forward(self, x): - return drop_path(x, self.drop_prob, self.training) - - -@DROPOUT_LAYERS.register_module() -class Dropout(nn.Dropout): - """A wrapper for ``torch.nn.Dropout``, We rename the ``p`` of - ``torch.nn.Dropout`` to ``drop_prob`` so as to be consistent with - ``DropPath`` - - Args: - drop_prob (float): Probability of the elements to be - zeroed. Default: 0.5. - inplace (bool): Do the operation inplace or not. Default: False. - """ - - def __init__(self, drop_prob=0.5, inplace=False): - super().__init__(p=drop_prob, inplace=inplace) - - -def build_dropout(cfg, default_args=None): - """Builder for drop out layers.""" - return build_from_cfg(cfg, DROPOUT_LAYERS, default_args) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/ops/pixel_group.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/ops/pixel_group.py deleted file mode 100644 index 2143c75f835a467c802fc3c37ecd3ac0f85bcda4..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/ops/pixel_group.py +++ /dev/null @@ -1,75 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['pixel_group']) - - -def pixel_group(score, mask, embedding, kernel_label, kernel_contour, - kernel_region_num, distance_threshold): - """Group pixels into text instances, which is widely used text detection - methods. - - Arguments: - score (np.array or Tensor): The foreground score with size hxw. - mask (np.array or Tensor): The foreground mask with size hxw. - embedding (np.array or Tensor): The embedding with size hxwxc to - distinguish instances. - kernel_label (np.array or Tensor): The instance kernel index with - size hxw. - kernel_contour (np.array or Tensor): The kernel contour with size hxw. - kernel_region_num (int): The instance kernel region number. - distance_threshold (float): The embedding distance threshold between - kernel and pixel in one instance. - - Returns: - pixel_assignment (List[List[float]]): The instance coordinate list. 
- Each element consists of averaged confidence, pixel number, and - coordinates (x_i, y_i for all pixels) in order. - """ - assert isinstance(score, (torch.Tensor, np.ndarray)) - assert isinstance(mask, (torch.Tensor, np.ndarray)) - assert isinstance(embedding, (torch.Tensor, np.ndarray)) - assert isinstance(kernel_label, (torch.Tensor, np.ndarray)) - assert isinstance(kernel_contour, (torch.Tensor, np.ndarray)) - assert isinstance(kernel_region_num, int) - assert isinstance(distance_threshold, float) - - if isinstance(score, np.ndarray): - score = torch.from_numpy(score) - if isinstance(mask, np.ndarray): - mask = torch.from_numpy(mask) - if isinstance(embedding, np.ndarray): - embedding = torch.from_numpy(embedding) - if isinstance(kernel_label, np.ndarray): - kernel_label = torch.from_numpy(kernel_label) - if isinstance(kernel_contour, np.ndarray): - kernel_contour = torch.from_numpy(kernel_contour) - - if torch.__version__ == 'parrots': - label = ext_module.pixel_group( - score, - mask, - embedding, - kernel_label, - kernel_contour, - kernel_region_num=kernel_region_num, - distance_threshold=distance_threshold) - label = label.tolist() - label = label[0] - list_index = kernel_region_num - pixel_assignment = [] - for x in range(kernel_region_num): - pixel_assignment.append( - np.array( - label[list_index:list_index + int(label[x])], - dtype=np.float)) - list_index = list_index + int(label[x]) - else: - pixel_assignment = ext_module.pixel_group(score, mask, embedding, - kernel_label, kernel_contour, - kernel_region_num, - distance_threshold) - return pixel_assignment diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv_custom/__init__.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv_custom/__init__.py deleted file mode 100644 index 4b958738b9fd93bfcec239c550df1d9a44b8c536..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv_custom/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# -*- coding: utf-8 -*- - -from .checkpoint import load_checkpoint - -__all__ = ['load_checkpoint'] \ No newline at end of file diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/trident_faster_rcnn.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/trident_faster_rcnn.py deleted file mode 100644 index f0fd80d41407162df71ba5349fc659d4713cdb6e..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/trident_faster_rcnn.py +++ /dev/null @@ -1,66 +0,0 @@ -from ..builder import DETECTORS -from .faster_rcnn import FasterRCNN - - -@DETECTORS.register_module() -class TridentFasterRCNN(FasterRCNN): - """Implementation of `TridentNet `_""" - - def __init__(self, - backbone, - rpn_head, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None): - - super(TridentFasterRCNN, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained) - assert self.backbone.num_branch == self.roi_head.num_branch - assert self.backbone.test_branch_idx == self.roi_head.test_branch_idx - self.num_branch = self.backbone.num_branch - self.test_branch_idx = self.backbone.test_branch_idx - - def simple_test(self, img, img_metas, proposals=None, rescale=False): - """Test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' 
- x = self.extract_feat(img) - if proposals is None: - num_branch = (self.num_branch if self.test_branch_idx == -1 else 1) - trident_img_metas = img_metas * num_branch - proposal_list = self.rpn_head.simple_test_rpn(x, trident_img_metas) - else: - proposal_list = proposals - - return self.roi_head.simple_test( - x, proposal_list, trident_img_metas, rescale=rescale) - - def aug_test(self, imgs, img_metas, rescale=False): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. - """ - x = self.extract_feats(imgs) - num_branch = (self.num_branch if self.test_branch_idx == -1 else 1) - trident_img_metas = [img_metas * num_branch for img_metas in img_metas] - proposal_list = self.rpn_head.aug_test_rpn(x, trident_img_metas) - return self.roi_head.aug_test( - x, proposal_list, img_metas, rescale=rescale) - - def forward_train(self, img, img_metas, gt_bboxes, gt_labels, **kwargs): - """make copies of img and gts to fit multi-branch.""" - trident_gt_bboxes = tuple(gt_bboxes * self.num_branch) - trident_gt_labels = tuple(gt_labels * self.num_branch) - trident_img_metas = tuple(img_metas * self.num_branch) - - return super(TridentFasterRCNN, - self).forward_train(img, trident_img_metas, - trident_gt_bboxes, trident_gt_labels) diff --git a/spaces/acmyu/frame_interpolation_prototype/app.py b/spaces/acmyu/frame_interpolation_prototype/app.py deleted file mode 100644 index 42c8a37c8800192d7b672e3e5ef7e23b34fce657..0000000000000000000000000000000000000000 --- a/spaces/acmyu/frame_interpolation_prototype/app.py +++ /dev/null @@ -1,43 +0,0 @@ -import gradio as gr -from interpolate import run_model, save_outputs -import time -from random import random -import os -from PIL import Image -import json - - -def run(frame1, frame2): - width, height = frame1.size - newsize = (128, 128) - frame1 = frame1.resize(newsize) - frame2 = frame2.resize(newsize) - - interp, start, end = run_model('prod', frame1, frame2) - - folder = 'output/'+str(time.time()).replace('.', '_')+'-'+str(int(random()*100000))+'/' - save_outputs(interp, start, end, folder) - - filenames = os.listdir(folder) - imgs = [] - for f in filenames: - if f != 'a.png' and f != 'c.png': - im = Image.open(folder + f) - im = im.resize((width, height)) - imgs.append(im) - return imgs - -gr.Interface(fn=run, - inputs=[gr.Image(type="pil"), gr.Image(type="pil")], - outputs=gr.Gallery(columns=10), - examples=[ - ['data/t1.png', 'data/t2.png'], - ['test/frame_000000000.jpg', 'test/frame_000000001.jpg'], - ['test/frame_000000002.jpg', 'test/frame_000000003.jpg'], - ['test/frame_000000004.jpg', 'test/frame_000000005.jpg'], - ['test/frame_000000006.jpg', 'test/frame_000000007.jpg'], - ['test/frame_000000008.jpg', 'test/frame_000000009.jpg'], - ['test/frame_000000010.jpg', 'test/frame_000000011.jpg'], - ['test/frame_000000012.jpg', 'test/frame_000000013.jpg'], - ['test/frame_000000014.jpg', 'test/frame_000000015.jpg'], - ]).launch(share=True) diff --git a/spaces/acmyu/frame_interpolation_prototype/data/video2frames.py b/spaces/acmyu/frame_interpolation_prototype/data/video2frames.py deleted file mode 100644 index aa3d8bf6cac780be4596358d67917bcdeb309c08..0000000000000000000000000000000000000000 --- a/spaces/acmyu/frame_interpolation_prototype/data/video2frames.py +++ /dev/null @@ -1,79 +0,0 @@ -import os -import cv2 - -from skimage import img_as_float -from skimage.metrics import structural_similarity as ssim - -IN_PATH = 'temp' -OUT_PATH = 'frames' -DIMS = 128 -#DIMS = 256 -NAME_START_ID = 20249 #30386 
-IS_ANIME = True -SKIP = 1 if IS_ANIME else 3 -CHECK_SIMILARITY = True -MAX_SIMILARITY = 0.7 if IS_ANIME else 0.999 - -def video_to_rgb(n, video_filename, out_dir, resize_shape): - file_template = 'frame_{0:09d}.jpg' - reader = cv2.VideoCapture(video_filename) - success, frame1 = reader.read() - - count = 0 - while success: - frame1 = cv2.resize(frame1, resize_shape) - - success, frame2 = reader.read() - if not success: - break - frame2 = cv2.resize(frame2, resize_shape) - - similarity = 0.0 - if count % SKIP != 0: - similarity = 2.0 - elif CHECK_SIMILARITY: - similarity = ssim(frame1, frame2, win_size=DIMS-1, channel_axis=2) - - if count % SKIP == 0 and similarity < MAX_SIMILARITY: - #out_filepath = os.path.join(out_dir, file_template.format(count)) - out_filepath = os.path.join(out_dir, file_template.format(n)) - cv2.imwrite(out_filepath, frame2) - n += 1 - - frame1 = frame2 - count += 1 - return n - -def process_videofile(n, video_filename, video_path, rgb_out_path, file_extension: str ='.mp4'): - filepath = os.path.join(video_path, video_filename) - video_filename = video_filename.replace(file_extension, '') - OUT_HEIGHT_WIDTH = (DIMS, DIMS) - - out_dir = rgb_out_path - if (not os.path.isdir(out_dir)): - os.mkdir(out_dir) - return video_to_rgb(n, filepath, out_dir, resize_shape=OUT_HEIGHT_WIDTH) - - -if __name__ == '__main__': - # the path to the folder which contains all video files (mp4, webm, or other) - video_path = IN_PATH - # the root output path where RGB frame folders should be created - rgb_out_path = OUT_PATH - # the file extension that the videos have - file_extension = '.mp4' - # hight and width to resize RGB frames to - - if (not os.path.isdir(rgb_out_path)): - os.mkdir(rgb_out_path) - - - video_filenames = os.listdir(video_path) - - print('This can take an hour or two depending on dataset size') - - n = NAME_START_ID - for video_filename in video_filenames: - n = process_videofile(n, video_filename, video_path, rgb_out_path, file_extension) - - print('all done') diff --git a/spaces/agutfraind/llmscanner/app.py b/spaces/agutfraind/llmscanner/app.py deleted file mode 100644 index ba5ef9cddacd55c64b45df3d7c2fdc0388998f2c..0000000000000000000000000000000000000000 --- a/spaces/agutfraind/llmscanner/app.py +++ /dev/null @@ -1,192 +0,0 @@ -''' -LLM scanner streamlit app - -streamlit run .\app.py - -Functionality -- tokenize documents -- respond to queries -- generate new documents - -Based on: -1. https://huggingface.co/spaces/llamaindex/llama_index_vector_demo -2. https://github.com/logan-markewich/llama_index_starter_pack/blob/main/streamlit_term_definition/ - -TODO: -- customize to other [LLMs](https://gpt-index.readthedocs.io/en/latest/reference/llm_predictor.html#llama_index.llm_predictor.LLMPredictor) -- guardrails on -- prevent answers on facts outside the document (e.g. birthdate of Michael Jordan in the docs vs. 
the baseball player) -''' - -import os -import streamlit as st -from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader, ServiceContext, LLMPredictor, PromptHelper, readers -from llama_index import StorageContext, load_index_from_storage - -from langchain import OpenAI, HuggingFaceHub - -import app_constants - -index_fpath = "./llamas_index" -documents_folder = "./documents" #initial documents - additional can be added via upload - -if "dummy" not in st.session_state: - st.session_state["dummy"] = "dummy" - -#@st.cache_resource #st makes this globally available for all users and sessions -def initialize_index(index_name, documents_folder, persisted_to_storage=True): - """ - creates an index of the documents in the folder - if the index exists, skipped - """ - # set maximum input size - max_input_size = 4096 - # set number of output tokens - num_outputs = 2000 - # set maximum chunk overlap - max_chunk_overlap = 20 - # set chunk size limit - chunk_size_limit = 600 - - llm_predictor = LLMPredictor(llm=OpenAI(openai_api_key=api_key, #from env - temperature=0.5, - model_name="text-davinci-003", - max_tokens=num_outputs)) - #wishlist: alternatives - service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor) - if os.path.exists(index_name): - storage_context = StorageContext.from_defaults(persist_dir=index_fpath) - doc_index = load_index_from_storage(service_context=service_context, storage_context=storage_context) - else: - #st.info("Updating the document index") - prompt_helper = PromptHelper(max_input_size, num_outputs, max_chunk_overlap, chunk_size_limit=chunk_size_limit) - - documents = SimpleDirectoryReader(documents_folder).load_data() - doc_index = GPTVectorStoreIndex.from_documents( - documents, llm_predictor=llm_predictor, prompt_helper=prompt_helper, - chunk_size_limit=512, service_context=service_context - ) - if persisted_to_storage: - doc_index.storage_context.persist(index_fpath) - - #avoid this side-effect: st.session_state["doc_index"] = "doc_index" - return doc_index - -#st returns data that's available for future caller -@st.cache_data(max_entries=200, persist=True) -def query_index(_index, query_text): - query_engine = _index.as_query_engine() - response = query_engine.query(query_text) - #response = _index.query(query_text) - return str(response) - - -#page format is directly written her -st.title("LLM scanner") -st.markdown( - ( - "This app allows you to query documents!\n\n" - "Powered by [Llama Index](https://gpt-index.readthedocs.io/en/latest/index.html)" - ) -) - -setup_tab, upload_tab, query_tab = st.tabs( - ["Setup", "Index", "Query"] -) - -with setup_tab: - st.subheader("LLM Setup") - api_key = st.text_input("Enter your OpenAI API key here", type="password") - - #wishlist llm_name = st.selectbox( - # "Which LLM?", ["text-davinci-003", "gpt-3.5-turbo", "gpt-4"] - #) - #repo_id = "google/flan-t5-xl" # See https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads for some other options - #llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64}) - - #model_temperature = st.slider( - # "LLM Temperature", min_value=0.0, max_value=1.0, step=0.1 - #) - -if api_key is not None and "doc_index" not in st.session_state: - st.session_state["doc_index"] = initialize_index(index_fpath, documents_folder, persisted_to_storage=False) - - -with upload_tab: - st.subheader("Upload documents") - - if st.button("Re-initialize index with pre-packaged documents"): - st.session_state["doc_index"] = 
initialize_index(index_fpath, documents_folder, persisted_to_storage=False) - st.info('Documents in index: ' + str(st.session_state["doc_index"].docstore.docs.__len__())) - - if "doc_index" in st.session_state: - doc_index = st.session_state["doc_index"] - st.markdown( - "Either upload a document, or enter the text manually." - ) - uploaded_file = st.file_uploader( - "Upload a document (pdf):", type=["pdf"] - ) - document_text = st.text_area("Enter text") - if st.button("Add document to index") and (uploaded_file or document_text): - with st.spinner("Inserting (large files may be slow)..."): - if document_text: - doc_index.refresh([readers.Document(text=document_text)]) #tokenizes new documents - st.info('Documents in index: ' + str(st.session_state["doc_index"].docstore.docs.__len__())) - - st.session_state["doc_index"] = doc_index - if uploaded_file: - uploads_folder = "uploads/" - if not os.path.exists(uploads_folder): - os.mkdir(uploads_folder) - #file_details = {"FileName":uploaded_file.name,"FileType":uploaded_file.type} - with open(uploads_folder + "tmp.pdf", "wb") as f: - f.write(uploaded_file.getbuffer()) - documents = SimpleDirectoryReader(uploads_folder).load_data() - doc_index.refresh(documents) #tokenizes new documents - st.session_state["doc_index"] = doc_index - st.info('Documents in index: ' + str(st.session_state["doc_index"].docstore.docs.__len__())) - - st.session_state["doc_index"] = doc_index - os.remove(uploads_folder + "tmp.pdf") - -with query_tab: - st.subheader("Query Tab") - st.write("Enter a query about the included documents. Find [documentation here](https://huggingface.co/spaces/agutfraind/llmscanner)") - - doc_index = None - #api_key = st.text_input("Enter your OpenAI API key here:", type="password") - if api_key: - os.environ['OPENAI_API_KEY'] = api_key - #doc_index = initialize_index(index_fpath, documents_folder) - - if doc_index is None: - if "doc_index" in st.session_state: - doc_index = st.session_state["doc_index"] - st.info('Documents in index: ' + str(doc_index.docstore.docs.__len__())) - else: - st.warning("Doc index is not available - initialize or upload") - #st.warning("Please enter your api key first.") - - if doc_index and api_key: - select_type_your_own = 'type your own...' - options_for_queries = app_constants.canned_questions + [select_type_your_own] - query_selection = st.selectbox("Select option", options=options_for_queries) - query_text = None - - if query_selection == select_type_your_own: - query_text = st.text_input("Query text") - else: - query_text = query_selection - - if st.button("Run Query") and (doc_index is not None) and (query_text is not None): - response = query_index(doc_index, query_text) - st.markdown(response) - - llm_col, embed_col = st.columns(2) - with llm_col: - st.markdown(f"LLM Tokens Used: {doc_index.service_context.llm_predictor._last_token_usage}") - - with embed_col: - st.markdown(f"Embedding Tokens Used: {doc_index.service_context.embed_model._last_token_usage}") - diff --git a/spaces/akhaliq/Detic/detic/data/datasets/oid.py b/spaces/akhaliq/Detic/detic/data/datasets/oid.py deleted file mode 100644 index 90d7f8613e4f12e942ec8967db9f17c0ec0d41f4..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Detic/detic/data/datasets/oid.py +++ /dev/null @@ -1,535 +0,0 @@ -# Part of the code is from https://github.com/xingyizhou/UniDet/blob/master/projects/UniDet/unidet/data/datasets/oid.py -# Copyright (c) Facebook, Inc. and its affiliates. 
-from .register_oid import register_oid_instances -import os - -categories = [ - {'id': 1, 'name': 'Infant bed', 'freebase_id': '/m/061hd_'}, - {'id': 2, 'name': 'Rose', 'freebase_id': '/m/06m11'}, - {'id': 3, 'name': 'Flag', 'freebase_id': '/m/03120'}, - {'id': 4, 'name': 'Flashlight', 'freebase_id': '/m/01kb5b'}, - {'id': 5, 'name': 'Sea turtle', 'freebase_id': '/m/0120dh'}, - {'id': 6, 'name': 'Camera', 'freebase_id': '/m/0dv5r'}, - {'id': 7, 'name': 'Animal', 'freebase_id': '/m/0jbk'}, - {'id': 8, 'name': 'Glove', 'freebase_id': '/m/0174n1'}, - {'id': 9, 'name': 'Crocodile', 'freebase_id': '/m/09f_2'}, - {'id': 10, 'name': 'Cattle', 'freebase_id': '/m/01xq0k1'}, - {'id': 11, 'name': 'House', 'freebase_id': '/m/03jm5'}, - {'id': 12, 'name': 'Guacamole', 'freebase_id': '/m/02g30s'}, - {'id': 13, 'name': 'Penguin', 'freebase_id': '/m/05z6w'}, - {'id': 14, 'name': 'Vehicle registration plate', 'freebase_id': '/m/01jfm_'}, - {'id': 15, 'name': 'Bench', 'freebase_id': '/m/076lb9'}, - {'id': 16, 'name': 'Ladybug', 'freebase_id': '/m/0gj37'}, - {'id': 17, 'name': 'Human nose', 'freebase_id': '/m/0k0pj'}, - {'id': 18, 'name': 'Watermelon', 'freebase_id': '/m/0kpqd'}, - {'id': 19, 'name': 'Flute', 'freebase_id': '/m/0l14j_'}, - {'id': 20, 'name': 'Butterfly', 'freebase_id': '/m/0cyf8'}, - {'id': 21, 'name': 'Washing machine', 'freebase_id': '/m/0174k2'}, - {'id': 22, 'name': 'Raccoon', 'freebase_id': '/m/0dq75'}, - {'id': 23, 'name': 'Segway', 'freebase_id': '/m/076bq'}, - {'id': 24, 'name': 'Taco', 'freebase_id': '/m/07crc'}, - {'id': 25, 'name': 'Jellyfish', 'freebase_id': '/m/0d8zb'}, - {'id': 26, 'name': 'Cake', 'freebase_id': '/m/0fszt'}, - {'id': 27, 'name': 'Pen', 'freebase_id': '/m/0k1tl'}, - {'id': 28, 'name': 'Cannon', 'freebase_id': '/m/020kz'}, - {'id': 29, 'name': 'Bread', 'freebase_id': '/m/09728'}, - {'id': 30, 'name': 'Tree', 'freebase_id': '/m/07j7r'}, - {'id': 31, 'name': 'Shellfish', 'freebase_id': '/m/0fbdv'}, - {'id': 32, 'name': 'Bed', 'freebase_id': '/m/03ssj5'}, - {'id': 33, 'name': 'Hamster', 'freebase_id': '/m/03qrc'}, - {'id': 34, 'name': 'Hat', 'freebase_id': '/m/02dl1y'}, - {'id': 35, 'name': 'Toaster', 'freebase_id': '/m/01k6s3'}, - {'id': 36, 'name': 'Sombrero', 'freebase_id': '/m/02jfl0'}, - {'id': 37, 'name': 'Tiara', 'freebase_id': '/m/01krhy'}, - {'id': 38, 'name': 'Bowl', 'freebase_id': '/m/04kkgm'}, - {'id': 39, 'name': 'Dragonfly', 'freebase_id': '/m/0ft9s'}, - {'id': 40, 'name': 'Moths and butterflies', 'freebase_id': '/m/0d_2m'}, - {'id': 41, 'name': 'Antelope', 'freebase_id': '/m/0czz2'}, - {'id': 42, 'name': 'Vegetable', 'freebase_id': '/m/0f4s2w'}, - {'id': 43, 'name': 'Torch', 'freebase_id': '/m/07dd4'}, - {'id': 44, 'name': 'Building', 'freebase_id': '/m/0cgh4'}, - {'id': 45, 'name': 'Power plugs and sockets', 'freebase_id': '/m/03bbps'}, - {'id': 46, 'name': 'Blender', 'freebase_id': '/m/02pjr4'}, - {'id': 47, 'name': 'Billiard table', 'freebase_id': '/m/04p0qw'}, - {'id': 48, 'name': 'Cutting board', 'freebase_id': '/m/02pdsw'}, - {'id': 49, 'name': 'Bronze sculpture', 'freebase_id': '/m/01yx86'}, - {'id': 50, 'name': 'Turtle', 'freebase_id': '/m/09dzg'}, - {'id': 51, 'name': 'Broccoli', 'freebase_id': '/m/0hkxq'}, - {'id': 52, 'name': 'Tiger', 'freebase_id': '/m/07dm6'}, - {'id': 53, 'name': 'Mirror', 'freebase_id': '/m/054_l'}, - {'id': 54, 'name': 'Bear', 'freebase_id': '/m/01dws'}, - {'id': 55, 'name': 'Zucchini', 'freebase_id': '/m/027pcv'}, - {'id': 56, 'name': 'Dress', 'freebase_id': '/m/01d40f'}, - {'id': 57, 'name': 'Volleyball', 
'freebase_id': '/m/02rgn06'}, - {'id': 58, 'name': 'Guitar', 'freebase_id': '/m/0342h'}, - {'id': 59, 'name': 'Reptile', 'freebase_id': '/m/06bt6'}, - {'id': 60, 'name': 'Golf cart', 'freebase_id': '/m/0323sq'}, - {'id': 61, 'name': 'Tart', 'freebase_id': '/m/02zvsm'}, - {'id': 62, 'name': 'Fedora', 'freebase_id': '/m/02fq_6'}, - {'id': 63, 'name': 'Carnivore', 'freebase_id': '/m/01lrl'}, - {'id': 64, 'name': 'Car', 'freebase_id': '/m/0k4j'}, - {'id': 65, 'name': 'Lighthouse', 'freebase_id': '/m/04h7h'}, - {'id': 66, 'name': 'Coffeemaker', 'freebase_id': '/m/07xyvk'}, - {'id': 67, 'name': 'Food processor', 'freebase_id': '/m/03y6mg'}, - {'id': 68, 'name': 'Truck', 'freebase_id': '/m/07r04'}, - {'id': 69, 'name': 'Bookcase', 'freebase_id': '/m/03__z0'}, - {'id': 70, 'name': 'Surfboard', 'freebase_id': '/m/019w40'}, - {'id': 71, 'name': 'Footwear', 'freebase_id': '/m/09j5n'}, - {'id': 72, 'name': 'Bench', 'freebase_id': '/m/0cvnqh'}, - {'id': 73, 'name': 'Necklace', 'freebase_id': '/m/01llwg'}, - {'id': 74, 'name': 'Flower', 'freebase_id': '/m/0c9ph5'}, - {'id': 75, 'name': 'Radish', 'freebase_id': '/m/015x5n'}, - {'id': 76, 'name': 'Marine mammal', 'freebase_id': '/m/0gd2v'}, - {'id': 77, 'name': 'Frying pan', 'freebase_id': '/m/04v6l4'}, - {'id': 78, 'name': 'Tap', 'freebase_id': '/m/02jz0l'}, - {'id': 79, 'name': 'Peach', 'freebase_id': '/m/0dj6p'}, - {'id': 80, 'name': 'Knife', 'freebase_id': '/m/04ctx'}, - {'id': 81, 'name': 'Handbag', 'freebase_id': '/m/080hkjn'}, - {'id': 82, 'name': 'Laptop', 'freebase_id': '/m/01c648'}, - {'id': 83, 'name': 'Tent', 'freebase_id': '/m/01j61q'}, - {'id': 84, 'name': 'Ambulance', 'freebase_id': '/m/012n7d'}, - {'id': 85, 'name': 'Christmas tree', 'freebase_id': '/m/025nd'}, - {'id': 86, 'name': 'Eagle', 'freebase_id': '/m/09csl'}, - {'id': 87, 'name': 'Limousine', 'freebase_id': '/m/01lcw4'}, - {'id': 88, 'name': 'Kitchen & dining room table', 'freebase_id': '/m/0h8n5zk'}, - {'id': 89, 'name': 'Polar bear', 'freebase_id': '/m/0633h'}, - {'id': 90, 'name': 'Tower', 'freebase_id': '/m/01fdzj'}, - {'id': 91, 'name': 'Football', 'freebase_id': '/m/01226z'}, - {'id': 92, 'name': 'Willow', 'freebase_id': '/m/0mw_6'}, - {'id': 93, 'name': 'Human head', 'freebase_id': '/m/04hgtk'}, - {'id': 94, 'name': 'Stop sign', 'freebase_id': '/m/02pv19'}, - {'id': 95, 'name': 'Banana', 'freebase_id': '/m/09qck'}, - {'id': 96, 'name': 'Mixer', 'freebase_id': '/m/063rgb'}, - {'id': 97, 'name': 'Binoculars', 'freebase_id': '/m/0lt4_'}, - {'id': 98, 'name': 'Dessert', 'freebase_id': '/m/0270h'}, - {'id': 99, 'name': 'Bee', 'freebase_id': '/m/01h3n'}, - {'id': 100, 'name': 'Chair', 'freebase_id': '/m/01mzpv'}, - {'id': 101, 'name': 'Wood-burning stove', 'freebase_id': '/m/04169hn'}, - {'id': 102, 'name': 'Flowerpot', 'freebase_id': '/m/0fm3zh'}, - {'id': 103, 'name': 'Beaker', 'freebase_id': '/m/0d20w4'}, - {'id': 104, 'name': 'Oyster', 'freebase_id': '/m/0_cp5'}, - {'id': 105, 'name': 'Woodpecker', 'freebase_id': '/m/01dy8n'}, - {'id': 106, 'name': 'Harp', 'freebase_id': '/m/03m5k'}, - {'id': 107, 'name': 'Bathtub', 'freebase_id': '/m/03dnzn'}, - {'id': 108, 'name': 'Wall clock', 'freebase_id': '/m/0h8mzrc'}, - {'id': 109, 'name': 'Sports uniform', 'freebase_id': '/m/0h8mhzd'}, - {'id': 110, 'name': 'Rhinoceros', 'freebase_id': '/m/03d443'}, - {'id': 111, 'name': 'Beehive', 'freebase_id': '/m/01gllr'}, - {'id': 112, 'name': 'Cupboard', 'freebase_id': '/m/0642b4'}, - {'id': 113, 'name': 'Chicken', 'freebase_id': '/m/09b5t'}, - {'id': 114, 'name': 'Man', 'freebase_id': 
'/m/04yx4'}, - {'id': 115, 'name': 'Blue jay', 'freebase_id': '/m/01f8m5'}, - {'id': 116, 'name': 'Cucumber', 'freebase_id': '/m/015x4r'}, - {'id': 117, 'name': 'Balloon', 'freebase_id': '/m/01j51'}, - {'id': 118, 'name': 'Kite', 'freebase_id': '/m/02zt3'}, - {'id': 119, 'name': 'Fireplace', 'freebase_id': '/m/03tw93'}, - {'id': 120, 'name': 'Lantern', 'freebase_id': '/m/01jfsr'}, - {'id': 121, 'name': 'Missile', 'freebase_id': '/m/04ylt'}, - {'id': 122, 'name': 'Book', 'freebase_id': '/m/0bt_c3'}, - {'id': 123, 'name': 'Spoon', 'freebase_id': '/m/0cmx8'}, - {'id': 124, 'name': 'Grapefruit', 'freebase_id': '/m/0hqkz'}, - {'id': 125, 'name': 'Squirrel', 'freebase_id': '/m/071qp'}, - {'id': 126, 'name': 'Orange', 'freebase_id': '/m/0cyhj_'}, - {'id': 127, 'name': 'Coat', 'freebase_id': '/m/01xygc'}, - {'id': 128, 'name': 'Punching bag', 'freebase_id': '/m/0420v5'}, - {'id': 129, 'name': 'Zebra', 'freebase_id': '/m/0898b'}, - {'id': 130, 'name': 'Billboard', 'freebase_id': '/m/01knjb'}, - {'id': 131, 'name': 'Bicycle', 'freebase_id': '/m/0199g'}, - {'id': 132, 'name': 'Door handle', 'freebase_id': '/m/03c7gz'}, - {'id': 133, 'name': 'Mechanical fan', 'freebase_id': '/m/02x984l'}, - {'id': 134, 'name': 'Ring binder', 'freebase_id': '/m/04zwwv'}, - {'id': 135, 'name': 'Table', 'freebase_id': '/m/04bcr3'}, - {'id': 136, 'name': 'Parrot', 'freebase_id': '/m/0gv1x'}, - {'id': 137, 'name': 'Sock', 'freebase_id': '/m/01nq26'}, - {'id': 138, 'name': 'Vase', 'freebase_id': '/m/02s195'}, - {'id': 139, 'name': 'Weapon', 'freebase_id': '/m/083kb'}, - {'id': 140, 'name': 'Shotgun', 'freebase_id': '/m/06nrc'}, - {'id': 141, 'name': 'Glasses', 'freebase_id': '/m/0jyfg'}, - {'id': 142, 'name': 'Seahorse', 'freebase_id': '/m/0nybt'}, - {'id': 143, 'name': 'Belt', 'freebase_id': '/m/0176mf'}, - {'id': 144, 'name': 'Watercraft', 'freebase_id': '/m/01rzcn'}, - {'id': 145, 'name': 'Window', 'freebase_id': '/m/0d4v4'}, - {'id': 146, 'name': 'Giraffe', 'freebase_id': '/m/03bk1'}, - {'id': 147, 'name': 'Lion', 'freebase_id': '/m/096mb'}, - {'id': 148, 'name': 'Tire', 'freebase_id': '/m/0h9mv'}, - {'id': 149, 'name': 'Vehicle', 'freebase_id': '/m/07yv9'}, - {'id': 150, 'name': 'Canoe', 'freebase_id': '/m/0ph39'}, - {'id': 151, 'name': 'Tie', 'freebase_id': '/m/01rkbr'}, - {'id': 152, 'name': 'Shelf', 'freebase_id': '/m/0gjbg72'}, - {'id': 153, 'name': 'Picture frame', 'freebase_id': '/m/06z37_'}, - {'id': 154, 'name': 'Printer', 'freebase_id': '/m/01m4t'}, - {'id': 155, 'name': 'Human leg', 'freebase_id': '/m/035r7c'}, - {'id': 156, 'name': 'Boat', 'freebase_id': '/m/019jd'}, - {'id': 157, 'name': 'Slow cooker', 'freebase_id': '/m/02tsc9'}, - {'id': 158, 'name': 'Croissant', 'freebase_id': '/m/015wgc'}, - {'id': 159, 'name': 'Candle', 'freebase_id': '/m/0c06p'}, - {'id': 160, 'name': 'Pancake', 'freebase_id': '/m/01dwwc'}, - {'id': 161, 'name': 'Pillow', 'freebase_id': '/m/034c16'}, - {'id': 162, 'name': 'Coin', 'freebase_id': '/m/0242l'}, - {'id': 163, 'name': 'Stretcher', 'freebase_id': '/m/02lbcq'}, - {'id': 164, 'name': 'Sandal', 'freebase_id': '/m/03nfch'}, - {'id': 165, 'name': 'Woman', 'freebase_id': '/m/03bt1vf'}, - {'id': 166, 'name': 'Stairs', 'freebase_id': '/m/01lynh'}, - {'id': 167, 'name': 'Harpsichord', 'freebase_id': '/m/03q5t'}, - {'id': 168, 'name': 'Stool', 'freebase_id': '/m/0fqt361'}, - {'id': 169, 'name': 'Bus', 'freebase_id': '/m/01bjv'}, - {'id': 170, 'name': 'Suitcase', 'freebase_id': '/m/01s55n'}, - {'id': 171, 'name': 'Human mouth', 'freebase_id': '/m/0283dt1'}, - {'id': 172, 'name': 
'Juice', 'freebase_id': '/m/01z1kdw'}, - {'id': 173, 'name': 'Skull', 'freebase_id': '/m/016m2d'}, - {'id': 174, 'name': 'Door', 'freebase_id': '/m/02dgv'}, - {'id': 175, 'name': 'Violin', 'freebase_id': '/m/07y_7'}, - {'id': 176, 'name': 'Chopsticks', 'freebase_id': '/m/01_5g'}, - {'id': 177, 'name': 'Digital clock', 'freebase_id': '/m/06_72j'}, - {'id': 178, 'name': 'Sunflower', 'freebase_id': '/m/0ftb8'}, - {'id': 179, 'name': 'Leopard', 'freebase_id': '/m/0c29q'}, - {'id': 180, 'name': 'Bell pepper', 'freebase_id': '/m/0jg57'}, - {'id': 181, 'name': 'Harbor seal', 'freebase_id': '/m/02l8p9'}, - {'id': 182, 'name': 'Snake', 'freebase_id': '/m/078jl'}, - {'id': 183, 'name': 'Sewing machine', 'freebase_id': '/m/0llzx'}, - {'id': 184, 'name': 'Goose', 'freebase_id': '/m/0dbvp'}, - {'id': 185, 'name': 'Helicopter', 'freebase_id': '/m/09ct_'}, - {'id': 186, 'name': 'Seat belt', 'freebase_id': '/m/0dkzw'}, - {'id': 187, 'name': 'Coffee cup', 'freebase_id': '/m/02p5f1q'}, - {'id': 188, 'name': 'Microwave oven', 'freebase_id': '/m/0fx9l'}, - {'id': 189, 'name': 'Hot dog', 'freebase_id': '/m/01b9xk'}, - {'id': 190, 'name': 'Countertop', 'freebase_id': '/m/0b3fp9'}, - {'id': 191, 'name': 'Serving tray', 'freebase_id': '/m/0h8n27j'}, - {'id': 192, 'name': 'Dog bed', 'freebase_id': '/m/0h8n6f9'}, - {'id': 193, 'name': 'Beer', 'freebase_id': '/m/01599'}, - {'id': 194, 'name': 'Sunglasses', 'freebase_id': '/m/017ftj'}, - {'id': 195, 'name': 'Golf ball', 'freebase_id': '/m/044r5d'}, - {'id': 196, 'name': 'Waffle', 'freebase_id': '/m/01dwsz'}, - {'id': 197, 'name': 'Palm tree', 'freebase_id': '/m/0cdl1'}, - {'id': 198, 'name': 'Trumpet', 'freebase_id': '/m/07gql'}, - {'id': 199, 'name': 'Ruler', 'freebase_id': '/m/0hdln'}, - {'id': 200, 'name': 'Helmet', 'freebase_id': '/m/0zvk5'}, - {'id': 201, 'name': 'Ladder', 'freebase_id': '/m/012w5l'}, - {'id': 202, 'name': 'Office building', 'freebase_id': '/m/021sj1'}, - {'id': 203, 'name': 'Tablet computer', 'freebase_id': '/m/0bh9flk'}, - {'id': 204, 'name': 'Toilet paper', 'freebase_id': '/m/09gtd'}, - {'id': 205, 'name': 'Pomegranate', 'freebase_id': '/m/0jwn_'}, - {'id': 206, 'name': 'Skirt', 'freebase_id': '/m/02wv6h6'}, - {'id': 207, 'name': 'Gas stove', 'freebase_id': '/m/02wv84t'}, - {'id': 208, 'name': 'Cookie', 'freebase_id': '/m/021mn'}, - {'id': 209, 'name': 'Cart', 'freebase_id': '/m/018p4k'}, - {'id': 210, 'name': 'Raven', 'freebase_id': '/m/06j2d'}, - {'id': 211, 'name': 'Egg', 'freebase_id': '/m/033cnk'}, - {'id': 212, 'name': 'Burrito', 'freebase_id': '/m/01j3zr'}, - {'id': 213, 'name': 'Goat', 'freebase_id': '/m/03fwl'}, - {'id': 214, 'name': 'Kitchen knife', 'freebase_id': '/m/058qzx'}, - {'id': 215, 'name': 'Skateboard', 'freebase_id': '/m/06_fw'}, - {'id': 216, 'name': 'Salt and pepper shakers', 'freebase_id': '/m/02x8cch'}, - {'id': 217, 'name': 'Lynx', 'freebase_id': '/m/04g2r'}, - {'id': 218, 'name': 'Boot', 'freebase_id': '/m/01b638'}, - {'id': 219, 'name': 'Platter', 'freebase_id': '/m/099ssp'}, - {'id': 220, 'name': 'Ski', 'freebase_id': '/m/071p9'}, - {'id': 221, 'name': 'Swimwear', 'freebase_id': '/m/01gkx_'}, - {'id': 222, 'name': 'Swimming pool', 'freebase_id': '/m/0b_rs'}, - {'id': 223, 'name': 'Drinking straw', 'freebase_id': '/m/03v5tg'}, - {'id': 224, 'name': 'Wrench', 'freebase_id': '/m/01j5ks'}, - {'id': 225, 'name': 'Drum', 'freebase_id': '/m/026t6'}, - {'id': 226, 'name': 'Ant', 'freebase_id': '/m/0_k2'}, - {'id': 227, 'name': 'Human ear', 'freebase_id': '/m/039xj_'}, - {'id': 228, 'name': 'Headphones', 'freebase_id': 
'/m/01b7fy'}, - {'id': 229, 'name': 'Fountain', 'freebase_id': '/m/0220r2'}, - {'id': 230, 'name': 'Bird', 'freebase_id': '/m/015p6'}, - {'id': 231, 'name': 'Jeans', 'freebase_id': '/m/0fly7'}, - {'id': 232, 'name': 'Television', 'freebase_id': '/m/07c52'}, - {'id': 233, 'name': 'Crab', 'freebase_id': '/m/0n28_'}, - {'id': 234, 'name': 'Microphone', 'freebase_id': '/m/0hg7b'}, - {'id': 235, 'name': 'Home appliance', 'freebase_id': '/m/019dx1'}, - {'id': 236, 'name': 'Snowplow', 'freebase_id': '/m/04vv5k'}, - {'id': 237, 'name': 'Beetle', 'freebase_id': '/m/020jm'}, - {'id': 238, 'name': 'Artichoke', 'freebase_id': '/m/047v4b'}, - {'id': 239, 'name': 'Jet ski', 'freebase_id': '/m/01xs3r'}, - {'id': 240, 'name': 'Stationary bicycle', 'freebase_id': '/m/03kt2w'}, - {'id': 241, 'name': 'Human hair', 'freebase_id': '/m/03q69'}, - {'id': 242, 'name': 'Brown bear', 'freebase_id': '/m/01dxs'}, - {'id': 243, 'name': 'Starfish', 'freebase_id': '/m/01h8tj'}, - {'id': 244, 'name': 'Fork', 'freebase_id': '/m/0dt3t'}, - {'id': 245, 'name': 'Lobster', 'freebase_id': '/m/0cjq5'}, - {'id': 246, 'name': 'Corded phone', 'freebase_id': '/m/0h8lkj8'}, - {'id': 247, 'name': 'Drink', 'freebase_id': '/m/0271t'}, - {'id': 248, 'name': 'Saucer', 'freebase_id': '/m/03q5c7'}, - {'id': 249, 'name': 'Carrot', 'freebase_id': '/m/0fj52s'}, - {'id': 250, 'name': 'Insect', 'freebase_id': '/m/03vt0'}, - {'id': 251, 'name': 'Clock', 'freebase_id': '/m/01x3z'}, - {'id': 252, 'name': 'Castle', 'freebase_id': '/m/0d5gx'}, - {'id': 253, 'name': 'Tennis racket', 'freebase_id': '/m/0h8my_4'}, - {'id': 254, 'name': 'Ceiling fan', 'freebase_id': '/m/03ldnb'}, - {'id': 255, 'name': 'Asparagus', 'freebase_id': '/m/0cjs7'}, - {'id': 256, 'name': 'Jaguar', 'freebase_id': '/m/0449p'}, - {'id': 257, 'name': 'Musical instrument', 'freebase_id': '/m/04szw'}, - {'id': 258, 'name': 'Train', 'freebase_id': '/m/07jdr'}, - {'id': 259, 'name': 'Cat', 'freebase_id': '/m/01yrx'}, - {'id': 260, 'name': 'Rifle', 'freebase_id': '/m/06c54'}, - {'id': 261, 'name': 'Dumbbell', 'freebase_id': '/m/04h8sr'}, - {'id': 262, 'name': 'Mobile phone', 'freebase_id': '/m/050k8'}, - {'id': 263, 'name': 'Taxi', 'freebase_id': '/m/0pg52'}, - {'id': 264, 'name': 'Shower', 'freebase_id': '/m/02f9f_'}, - {'id': 265, 'name': 'Pitcher', 'freebase_id': '/m/054fyh'}, - {'id': 266, 'name': 'Lemon', 'freebase_id': '/m/09k_b'}, - {'id': 267, 'name': 'Invertebrate', 'freebase_id': '/m/03xxp'}, - {'id': 268, 'name': 'Turkey', 'freebase_id': '/m/0jly1'}, - {'id': 269, 'name': 'High heels', 'freebase_id': '/m/06k2mb'}, - {'id': 270, 'name': 'Bust', 'freebase_id': '/m/04yqq2'}, - {'id': 271, 'name': 'Elephant', 'freebase_id': '/m/0bwd_0j'}, - {'id': 272, 'name': 'Scarf', 'freebase_id': '/m/02h19r'}, - {'id': 273, 'name': 'Barrel', 'freebase_id': '/m/02zn6n'}, - {'id': 274, 'name': 'Trombone', 'freebase_id': '/m/07c6l'}, - {'id': 275, 'name': 'Pumpkin', 'freebase_id': '/m/05zsy'}, - {'id': 276, 'name': 'Box', 'freebase_id': '/m/025dyy'}, - {'id': 277, 'name': 'Tomato', 'freebase_id': '/m/07j87'}, - {'id': 278, 'name': 'Frog', 'freebase_id': '/m/09ld4'}, - {'id': 279, 'name': 'Bidet', 'freebase_id': '/m/01vbnl'}, - {'id': 280, 'name': 'Human face', 'freebase_id': '/m/0dzct'}, - {'id': 281, 'name': 'Houseplant', 'freebase_id': '/m/03fp41'}, - {'id': 282, 'name': 'Van', 'freebase_id': '/m/0h2r6'}, - {'id': 283, 'name': 'Shark', 'freebase_id': '/m/0by6g'}, - {'id': 284, 'name': 'Ice cream', 'freebase_id': '/m/0cxn2'}, - {'id': 285, 'name': 'Swim cap', 'freebase_id': '/m/04tn4x'}, - 
{'id': 286, 'name': 'Falcon', 'freebase_id': '/m/0f6wt'}, - {'id': 287, 'name': 'Ostrich', 'freebase_id': '/m/05n4y'}, - {'id': 288, 'name': 'Handgun', 'freebase_id': '/m/0gxl3'}, - {'id': 289, 'name': 'Whiteboard', 'freebase_id': '/m/02d9qx'}, - {'id': 290, 'name': 'Lizard', 'freebase_id': '/m/04m9y'}, - {'id': 291, 'name': 'Pasta', 'freebase_id': '/m/05z55'}, - {'id': 292, 'name': 'Snowmobile', 'freebase_id': '/m/01x3jk'}, - {'id': 293, 'name': 'Light bulb', 'freebase_id': '/m/0h8l4fh'}, - {'id': 294, 'name': 'Window blind', 'freebase_id': '/m/031b6r'}, - {'id': 295, 'name': 'Muffin', 'freebase_id': '/m/01tcjp'}, - {'id': 296, 'name': 'Pretzel', 'freebase_id': '/m/01f91_'}, - {'id': 297, 'name': 'Computer monitor', 'freebase_id': '/m/02522'}, - {'id': 298, 'name': 'Horn', 'freebase_id': '/m/0319l'}, - {'id': 299, 'name': 'Furniture', 'freebase_id': '/m/0c_jw'}, - {'id': 300, 'name': 'Sandwich', 'freebase_id': '/m/0l515'}, - {'id': 301, 'name': 'Fox', 'freebase_id': '/m/0306r'}, - {'id': 302, 'name': 'Convenience store', 'freebase_id': '/m/0crjs'}, - {'id': 303, 'name': 'Fish', 'freebase_id': '/m/0ch_cf'}, - {'id': 304, 'name': 'Fruit', 'freebase_id': '/m/02xwb'}, - {'id': 305, 'name': 'Earrings', 'freebase_id': '/m/01r546'}, - {'id': 306, 'name': 'Curtain', 'freebase_id': '/m/03rszm'}, - {'id': 307, 'name': 'Grape', 'freebase_id': '/m/0388q'}, - {'id': 308, 'name': 'Sofa bed', 'freebase_id': '/m/03m3pdh'}, - {'id': 309, 'name': 'Horse', 'freebase_id': '/m/03k3r'}, - {'id': 310, 'name': 'Luggage and bags', 'freebase_id': '/m/0hf58v5'}, - {'id': 311, 'name': 'Desk', 'freebase_id': '/m/01y9k5'}, - {'id': 312, 'name': 'Crutch', 'freebase_id': '/m/05441v'}, - {'id': 313, 'name': 'Bicycle helmet', 'freebase_id': '/m/03p3bw'}, - {'id': 314, 'name': 'Tick', 'freebase_id': '/m/0175cv'}, - {'id': 315, 'name': 'Airplane', 'freebase_id': '/m/0cmf2'}, - {'id': 316, 'name': 'Canary', 'freebase_id': '/m/0ccs93'}, - {'id': 317, 'name': 'Spatula', 'freebase_id': '/m/02d1br'}, - {'id': 318, 'name': 'Watch', 'freebase_id': '/m/0gjkl'}, - {'id': 319, 'name': 'Lily', 'freebase_id': '/m/0jqgx'}, - {'id': 320, 'name': 'Kitchen appliance', 'freebase_id': '/m/0h99cwc'}, - {'id': 321, 'name': 'Filing cabinet', 'freebase_id': '/m/047j0r'}, - {'id': 322, 'name': 'Aircraft', 'freebase_id': '/m/0k5j'}, - {'id': 323, 'name': 'Cake stand', 'freebase_id': '/m/0h8n6ft'}, - {'id': 324, 'name': 'Candy', 'freebase_id': '/m/0gm28'}, - {'id': 325, 'name': 'Sink', 'freebase_id': '/m/0130jx'}, - {'id': 326, 'name': 'Mouse', 'freebase_id': '/m/04rmv'}, - {'id': 327, 'name': 'Wine', 'freebase_id': '/m/081qc'}, - {'id': 328, 'name': 'Wheelchair', 'freebase_id': '/m/0qmmr'}, - {'id': 329, 'name': 'Goldfish', 'freebase_id': '/m/03fj2'}, - {'id': 330, 'name': 'Refrigerator', 'freebase_id': '/m/040b_t'}, - {'id': 331, 'name': 'French fries', 'freebase_id': '/m/02y6n'}, - {'id': 332, 'name': 'Drawer', 'freebase_id': '/m/0fqfqc'}, - {'id': 333, 'name': 'Treadmill', 'freebase_id': '/m/030610'}, - {'id': 334, 'name': 'Picnic basket', 'freebase_id': '/m/07kng9'}, - {'id': 335, 'name': 'Dice', 'freebase_id': '/m/029b3'}, - {'id': 336, 'name': 'Cabbage', 'freebase_id': '/m/0fbw6'}, - {'id': 337, 'name': 'Football helmet', 'freebase_id': '/m/07qxg_'}, - {'id': 338, 'name': 'Pig', 'freebase_id': '/m/068zj'}, - {'id': 339, 'name': 'Person', 'freebase_id': '/m/01g317'}, - {'id': 340, 'name': 'Shorts', 'freebase_id': '/m/01bfm9'}, - {'id': 341, 'name': 'Gondola', 'freebase_id': '/m/02068x'}, - {'id': 342, 'name': 'Honeycomb', 'freebase_id': 
'/m/0fz0h'}, - {'id': 343, 'name': 'Doughnut', 'freebase_id': '/m/0jy4k'}, - {'id': 344, 'name': 'Chest of drawers', 'freebase_id': '/m/05kyg_'}, - {'id': 345, 'name': 'Land vehicle', 'freebase_id': '/m/01prls'}, - {'id': 346, 'name': 'Bat', 'freebase_id': '/m/01h44'}, - {'id': 347, 'name': 'Monkey', 'freebase_id': '/m/08pbxl'}, - {'id': 348, 'name': 'Dagger', 'freebase_id': '/m/02gzp'}, - {'id': 349, 'name': 'Tableware', 'freebase_id': '/m/04brg2'}, - {'id': 350, 'name': 'Human foot', 'freebase_id': '/m/031n1'}, - {'id': 351, 'name': 'Mug', 'freebase_id': '/m/02jvh9'}, - {'id': 352, 'name': 'Alarm clock', 'freebase_id': '/m/046dlr'}, - {'id': 353, 'name': 'Pressure cooker', 'freebase_id': '/m/0h8ntjv'}, - {'id': 354, 'name': 'Human hand', 'freebase_id': '/m/0k65p'}, - {'id': 355, 'name': 'Tortoise', 'freebase_id': '/m/011k07'}, - {'id': 356, 'name': 'Baseball glove', 'freebase_id': '/m/03grzl'}, - {'id': 357, 'name': 'Sword', 'freebase_id': '/m/06y5r'}, - {'id': 358, 'name': 'Pear', 'freebase_id': '/m/061_f'}, - {'id': 359, 'name': 'Miniskirt', 'freebase_id': '/m/01cmb2'}, - {'id': 360, 'name': 'Traffic sign', 'freebase_id': '/m/01mqdt'}, - {'id': 361, 'name': 'Girl', 'freebase_id': '/m/05r655'}, - {'id': 362, 'name': 'Roller skates', 'freebase_id': '/m/02p3w7d'}, - {'id': 363, 'name': 'Dinosaur', 'freebase_id': '/m/029tx'}, - {'id': 364, 'name': 'Porch', 'freebase_id': '/m/04m6gz'}, - {'id': 365, 'name': 'Human beard', 'freebase_id': '/m/015h_t'}, - {'id': 366, 'name': 'Submarine sandwich', 'freebase_id': '/m/06pcq'}, - {'id': 367, 'name': 'Screwdriver', 'freebase_id': '/m/01bms0'}, - {'id': 368, 'name': 'Strawberry', 'freebase_id': '/m/07fbm7'}, - {'id': 369, 'name': 'Wine glass', 'freebase_id': '/m/09tvcd'}, - {'id': 370, 'name': 'Seafood', 'freebase_id': '/m/06nwz'}, - {'id': 371, 'name': 'Racket', 'freebase_id': '/m/0dv9c'}, - {'id': 372, 'name': 'Wheel', 'freebase_id': '/m/083wq'}, - {'id': 373, 'name': 'Sea lion', 'freebase_id': '/m/0gd36'}, - {'id': 374, 'name': 'Toy', 'freebase_id': '/m/0138tl'}, - {'id': 375, 'name': 'Tea', 'freebase_id': '/m/07clx'}, - {'id': 376, 'name': 'Tennis ball', 'freebase_id': '/m/05ctyq'}, - {'id': 377, 'name': 'Waste container', 'freebase_id': '/m/0bjyj5'}, - {'id': 378, 'name': 'Mule', 'freebase_id': '/m/0dbzx'}, - {'id': 379, 'name': 'Cricket ball', 'freebase_id': '/m/02ctlc'}, - {'id': 380, 'name': 'Pineapple', 'freebase_id': '/m/0fp6w'}, - {'id': 381, 'name': 'Coconut', 'freebase_id': '/m/0djtd'}, - {'id': 382, 'name': 'Doll', 'freebase_id': '/m/0167gd'}, - {'id': 383, 'name': 'Coffee table', 'freebase_id': '/m/078n6m'}, - {'id': 384, 'name': 'Snowman', 'freebase_id': '/m/0152hh'}, - {'id': 385, 'name': 'Lavender', 'freebase_id': '/m/04gth'}, - {'id': 386, 'name': 'Shrimp', 'freebase_id': '/m/0ll1f78'}, - {'id': 387, 'name': 'Maple', 'freebase_id': '/m/0cffdh'}, - {'id': 388, 'name': 'Cowboy hat', 'freebase_id': '/m/025rp__'}, - {'id': 389, 'name': 'Goggles', 'freebase_id': '/m/02_n6y'}, - {'id': 390, 'name': 'Rugby ball', 'freebase_id': '/m/0wdt60w'}, - {'id': 391, 'name': 'Caterpillar', 'freebase_id': '/m/0cydv'}, - {'id': 392, 'name': 'Poster', 'freebase_id': '/m/01n5jq'}, - {'id': 393, 'name': 'Rocket', 'freebase_id': '/m/09rvcxw'}, - {'id': 394, 'name': 'Organ', 'freebase_id': '/m/013y1f'}, - {'id': 395, 'name': 'Saxophone', 'freebase_id': '/m/06ncr'}, - {'id': 396, 'name': 'Traffic light', 'freebase_id': '/m/015qff'}, - {'id': 397, 'name': 'Cocktail', 'freebase_id': '/m/024g6'}, - {'id': 398, 'name': 'Plastic bag', 'freebase_id': 
'/m/05gqfk'}, - {'id': 399, 'name': 'Squash', 'freebase_id': '/m/0dv77'}, - {'id': 400, 'name': 'Mushroom', 'freebase_id': '/m/052sf'}, - {'id': 401, 'name': 'Hamburger', 'freebase_id': '/m/0cdn1'}, - {'id': 402, 'name': 'Light switch', 'freebase_id': '/m/03jbxj'}, - {'id': 403, 'name': 'Parachute', 'freebase_id': '/m/0cyfs'}, - {'id': 404, 'name': 'Teddy bear', 'freebase_id': '/m/0kmg4'}, - {'id': 405, 'name': 'Winter melon', 'freebase_id': '/m/02cvgx'}, - {'id': 406, 'name': 'Deer', 'freebase_id': '/m/09kx5'}, - {'id': 407, 'name': 'Musical keyboard', 'freebase_id': '/m/057cc'}, - {'id': 408, 'name': 'Plumbing fixture', 'freebase_id': '/m/02pkr5'}, - {'id': 409, 'name': 'Scoreboard', 'freebase_id': '/m/057p5t'}, - {'id': 410, 'name': 'Baseball bat', 'freebase_id': '/m/03g8mr'}, - {'id': 411, 'name': 'Envelope', 'freebase_id': '/m/0frqm'}, - {'id': 412, 'name': 'Adhesive tape', 'freebase_id': '/m/03m3vtv'}, - {'id': 413, 'name': 'Briefcase', 'freebase_id': '/m/0584n8'}, - {'id': 414, 'name': 'Paddle', 'freebase_id': '/m/014y4n'}, - {'id': 415, 'name': 'Bow and arrow', 'freebase_id': '/m/01g3x7'}, - {'id': 416, 'name': 'Telephone', 'freebase_id': '/m/07cx4'}, - {'id': 417, 'name': 'Sheep', 'freebase_id': '/m/07bgp'}, - {'id': 418, 'name': 'Jacket', 'freebase_id': '/m/032b3c'}, - {'id': 419, 'name': 'Boy', 'freebase_id': '/m/01bl7v'}, - {'id': 420, 'name': 'Pizza', 'freebase_id': '/m/0663v'}, - {'id': 421, 'name': 'Otter', 'freebase_id': '/m/0cn6p'}, - {'id': 422, 'name': 'Office supplies', 'freebase_id': '/m/02rdsp'}, - {'id': 423, 'name': 'Couch', 'freebase_id': '/m/02crq1'}, - {'id': 424, 'name': 'Cello', 'freebase_id': '/m/01xqw'}, - {'id': 425, 'name': 'Bull', 'freebase_id': '/m/0cnyhnx'}, - {'id': 426, 'name': 'Camel', 'freebase_id': '/m/01x_v'}, - {'id': 427, 'name': 'Ball', 'freebase_id': '/m/018xm'}, - {'id': 428, 'name': 'Duck', 'freebase_id': '/m/09ddx'}, - {'id': 429, 'name': 'Whale', 'freebase_id': '/m/084zz'}, - {'id': 430, 'name': 'Shirt', 'freebase_id': '/m/01n4qj'}, - {'id': 431, 'name': 'Tank', 'freebase_id': '/m/07cmd'}, - {'id': 432, 'name': 'Motorcycle', 'freebase_id': '/m/04_sv'}, - {'id': 433, 'name': 'Accordion', 'freebase_id': '/m/0mkg'}, - {'id': 434, 'name': 'Owl', 'freebase_id': '/m/09d5_'}, - {'id': 435, 'name': 'Porcupine', 'freebase_id': '/m/0c568'}, - {'id': 436, 'name': 'Sun hat', 'freebase_id': '/m/02wbtzl'}, - {'id': 437, 'name': 'Nail', 'freebase_id': '/m/05bm6'}, - {'id': 438, 'name': 'Scissors', 'freebase_id': '/m/01lsmm'}, - {'id': 439, 'name': 'Swan', 'freebase_id': '/m/0dftk'}, - {'id': 440, 'name': 'Lamp', 'freebase_id': '/m/0dtln'}, - {'id': 441, 'name': 'Crown', 'freebase_id': '/m/0nl46'}, - {'id': 442, 'name': 'Piano', 'freebase_id': '/m/05r5c'}, - {'id': 443, 'name': 'Sculpture', 'freebase_id': '/m/06msq'}, - {'id': 444, 'name': 'Cheetah', 'freebase_id': '/m/0cd4d'}, - {'id': 445, 'name': 'Oboe', 'freebase_id': '/m/05kms'}, - {'id': 446, 'name': 'Tin can', 'freebase_id': '/m/02jnhm'}, - {'id': 447, 'name': 'Mango', 'freebase_id': '/m/0fldg'}, - {'id': 448, 'name': 'Tripod', 'freebase_id': '/m/073bxn'}, - {'id': 449, 'name': 'Oven', 'freebase_id': '/m/029bxz'}, - {'id': 450, 'name': 'Mouse', 'freebase_id': '/m/020lf'}, - {'id': 451, 'name': 'Barge', 'freebase_id': '/m/01btn'}, - {'id': 452, 'name': 'Coffee', 'freebase_id': '/m/02vqfm'}, - {'id': 453, 'name': 'Snowboard', 'freebase_id': '/m/06__v'}, - {'id': 454, 'name': 'Common fig', 'freebase_id': '/m/043nyj'}, - {'id': 455, 'name': 'Salad', 'freebase_id': '/m/0grw1'}, - {'id': 456, 'name': 
'Marine invertebrates', 'freebase_id': '/m/03hl4l9'}, - {'id': 457, 'name': 'Umbrella', 'freebase_id': '/m/0hnnb'}, - {'id': 458, 'name': 'Kangaroo', 'freebase_id': '/m/04c0y'}, - {'id': 459, 'name': 'Human arm', 'freebase_id': '/m/0dzf4'}, - {'id': 460, 'name': 'Measuring cup', 'freebase_id': '/m/07v9_z'}, - {'id': 461, 'name': 'Snail', 'freebase_id': '/m/0f9_l'}, - {'id': 462, 'name': 'Loveseat', 'freebase_id': '/m/0703r8'}, - {'id': 463, 'name': 'Suit', 'freebase_id': '/m/01xyhv'}, - {'id': 464, 'name': 'Teapot', 'freebase_id': '/m/01fh4r'}, - {'id': 465, 'name': 'Bottle', 'freebase_id': '/m/04dr76w'}, - {'id': 466, 'name': 'Alpaca', 'freebase_id': '/m/0pcr'}, - {'id': 467, 'name': 'Kettle', 'freebase_id': '/m/03s_tn'}, - {'id': 468, 'name': 'Trousers', 'freebase_id': '/m/07mhn'}, - {'id': 469, 'name': 'Popcorn', 'freebase_id': '/m/01hrv5'}, - {'id': 470, 'name': 'Centipede', 'freebase_id': '/m/019h78'}, - {'id': 471, 'name': 'Spider', 'freebase_id': '/m/09kmb'}, - {'id': 472, 'name': 'Sparrow', 'freebase_id': '/m/0h23m'}, - {'id': 473, 'name': 'Plate', 'freebase_id': '/m/050gv4'}, - {'id': 474, 'name': 'Bagel', 'freebase_id': '/m/01fb_0'}, - {'id': 475, 'name': 'Personal care', 'freebase_id': '/m/02w3_ws'}, - {'id': 476, 'name': 'Apple', 'freebase_id': '/m/014j1m'}, - {'id': 477, 'name': 'Brassiere', 'freebase_id': '/m/01gmv2'}, - {'id': 478, 'name': 'Bathroom cabinet', 'freebase_id': '/m/04y4h8h'}, - {'id': 479, 'name': 'studio couch', 'freebase_id': '/m/026qbn5'}, - {'id': 480, 'name': 'Computer keyboard', 'freebase_id': '/m/01m2v'}, - {'id': 481, 'name': 'Table tennis racket', 'freebase_id': '/m/05_5p_0'}, - {'id': 482, 'name': 'Sushi', 'freebase_id': '/m/07030'}, - {'id': 483, 'name': 'Cabinetry', 'freebase_id': '/m/01s105'}, - {'id': 484, 'name': 'Street light', 'freebase_id': '/m/033rq4'}, - {'id': 485, 'name': 'Towel', 'freebase_id': '/m/0162_1'}, - {'id': 486, 'name': 'Nightstand', 'freebase_id': '/m/02z51p'}, - {'id': 487, 'name': 'Rabbit', 'freebase_id': '/m/06mf6'}, - {'id': 488, 'name': 'Dolphin', 'freebase_id': '/m/02hj4'}, - {'id': 489, 'name': 'Dog', 'freebase_id': '/m/0bt9lr'}, - {'id': 490, 'name': 'Jug', 'freebase_id': '/m/08hvt4'}, - {'id': 491, 'name': 'Wok', 'freebase_id': '/m/084rd'}, - {'id': 492, 'name': 'Fire hydrant', 'freebase_id': '/m/01pns0'}, - {'id': 493, 'name': 'Human eye', 'freebase_id': '/m/014sv8'}, - {'id': 494, 'name': 'Skyscraper', 'freebase_id': '/m/079cl'}, - {'id': 495, 'name': 'Backpack', 'freebase_id': '/m/01940j'}, - {'id': 496, 'name': 'Potato', 'freebase_id': '/m/05vtc'}, - {'id': 497, 'name': 'Paper towel', 'freebase_id': '/m/02w3r3'}, - {'id': 498, 'name': 'Lifejacket', 'freebase_id': '/m/054xkw'}, - {'id': 499, 'name': 'Bicycle wheel', 'freebase_id': '/m/01bqk0'}, - {'id': 500, 'name': 'Toilet', 'freebase_id': '/m/09g1w'}, -] - - -def _get_builtin_metadata(cats): - id_to_name = {x['id']: x['name'] for x in cats} - thing_dataset_id_to_contiguous_id = {i + 1: i for i in range(len(cats))} - thing_classes = [x['name'] for x in sorted(cats, key=lambda x: x['id'])] - return { - "thing_dataset_id_to_contiguous_id": thing_dataset_id_to_contiguous_id, - "thing_classes": thing_classes} - -_PREDEFINED_SPLITS_OID = { - # cat threshold: 500, 1500: r 170, c 151, f 179 - "oid_train": ("oid/images/", "oid/annotations/oid_challenge_2019_train_bbox.json"), - # "expanded" duplicates annotations to their father classes based on the official - # hierarchy. This is used in the official evaulation protocol. 
- # https://storage.googleapis.com/openimages/web/evaluation.html - "oid_val_expanded": ("oid/images/validation/", "oid/annotations/oid_challenge_2019_val_expanded.json"), - "oid_val_expanded_rare": ("oid/images/validation/", "oid/annotations/oid_challenge_2019_val_expanded_rare.json"), -} - - -for key, (image_root, json_file) in _PREDEFINED_SPLITS_OID.items(): - register_oid_instances( - key, - _get_builtin_metadata(categories), - os.path.join("datasets", json_file) if "://" not in json_file else json_file, - os.path.join("datasets", image_root), - ) \ No newline at end of file diff --git a/spaces/akhaliq/JoJoGAN/e4e/configs/__init__.py b/spaces/akhaliq/JoJoGAN/e4e/configs/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/akhaliq/PaintTransformer/train/util/util.py b/spaces/akhaliq/PaintTransformer/train/util/util.py deleted file mode 100644 index 2889bfeb151f3d09af5cbb0948ac1d838154f1df..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/PaintTransformer/train/util/util.py +++ /dev/null @@ -1,103 +0,0 @@ -"""This module contains simple helper functions """ -from __future__ import print_function -import torch -import numpy as np -from PIL import Image -import os - - -def tensor2im(input_image, imtype=np.uint8): - """"Converts a Tensor array into a numpy image array. - - Parameters: - input_image (tensor) -- the input image tensor array - imtype (type) -- the desired type of the converted numpy array - """ - if not isinstance(input_image, np.ndarray): - if isinstance(input_image, torch.Tensor): # get the data from a variable - image_tensor = input_image.data - else: - return input_image - image_numpy = image_tensor[0].cpu().float().numpy() # convert it into a numpy array - if image_numpy.shape[0] == 1: # grayscale to RGB - image_numpy = np.tile(image_numpy, (3, 1, 1)) - image_numpy = np.transpose(image_numpy, (1, 2, 0)) * 255.0 # post-processing: transpose and scaling - else: # if it is a numpy array - image_numpy = input_image * 255. 
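    # a plain numpy input is assumed to already be scaled to [0, 1], hence the
    # multiplication by 255 above; the cast below converts the result to the
    # requested output dtype (uint8 by default).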
- return image_numpy.astype(imtype) - - -def diagnose_network(net, name='network'): - """Calculate and print the mean of average absolute(gradients) - - Parameters: - net (torch network) -- Torch network - name (str) -- the name of the network - """ - mean = 0.0 - count = 0 - for param in net.parameters(): - if param.grad is not None: - mean += torch.mean(torch.abs(param.grad.data)) - count += 1 - if count > 0: - mean = mean / count - print(name) - print(mean) - - -def save_image(image_numpy, image_path, aspect_ratio=1.0): - """Save a numpy image to the disk - - Parameters: - image_numpy (numpy array) -- input numpy array - image_path (str) -- the path of the image - """ - - image_pil = Image.fromarray(image_numpy) - h, w, _ = image_numpy.shape - - if aspect_ratio > 1.0: - image_pil = image_pil.resize((h, int(w * aspect_ratio)), Image.BICUBIC) - if aspect_ratio < 1.0: - image_pil = image_pil.resize((int(h / aspect_ratio), w), Image.BICUBIC) - image_pil.save(image_path) - - -def print_numpy(x, val=True, shp=False): - """Print the mean, min, max, median, std, and size of a numpy array - - Parameters: - val (bool) -- if print the values of the numpy array - shp (bool) -- if print the shape of the numpy array - """ - x = x.astype(np.float64) - if shp: - print('shape,', x.shape) - if val: - x = x.flatten() - print('mean = %3.3f, min = %3.3f, max = %3.3f, median = %3.3f, std=%3.3f' % ( - np.mean(x), np.min(x), np.max(x), np.median(x), np.std(x))) - - -def mkdirs(paths): - """create empty directories if they don't exist - - Parameters: - paths (str list) -- a list of directory paths - """ - if isinstance(paths, list) and not isinstance(paths, str): - for path in paths: - mkdir(path) - else: - mkdir(paths) - - -def mkdir(path): - """create a single empty directory if it didn't exist - - Parameters: - path (str) -- a single directory path - """ - if not os.path.exists(path): - os.makedirs(path) diff --git a/spaces/akhaliq/lama/bin/gen_mask_dataset.py b/spaces/akhaliq/lama/bin/gen_mask_dataset.py deleted file mode 100644 index 6e2ce3a9bc9708fd46641cab815113508af32d02..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/lama/bin/gen_mask_dataset.py +++ /dev/null @@ -1,130 +0,0 @@ -#!/usr/bin/env python3 - -import glob -import os -import shutil -import traceback - -import PIL.Image as Image -import numpy as np -from joblib import Parallel, delayed - -from saicinpainting.evaluation.masks.mask import SegmentationMask, propose_random_square_crop -from saicinpainting.evaluation.utils import load_yaml, SmallMode -from saicinpainting.training.data.masks import MixedMaskGenerator - - -class MakeManyMasksWrapper: - def __init__(self, impl, variants_n=2): - self.impl = impl - self.variants_n = variants_n - - def get_masks(self, img): - img = np.transpose(np.array(img), (2, 0, 1)) - return [self.impl(img)[0] for _ in range(self.variants_n)] - - -def process_images(src_images, indir, outdir, config): - if config.generator_kind == 'segmentation': - mask_generator = SegmentationMask(**config.mask_generator_kwargs) - elif config.generator_kind == 'random': - variants_n = config.mask_generator_kwargs.pop('variants_n', 2) - mask_generator = MakeManyMasksWrapper(MixedMaskGenerator(**config.mask_generator_kwargs), - variants_n=variants_n) - else: - raise ValueError(f'Unexpected generator kind: {config.generator_kind}') - - max_tamper_area = config.get('max_tamper_area', 1) - - for infile in src_images: - try: - file_relpath = infile[len(indir):] - img_outpath = os.path.join(outdir, file_relpath) - 
os.makedirs(os.path.dirname(img_outpath), exist_ok=True) - - image = Image.open(infile).convert('RGB') - - # scale input image to output resolution and filter smaller images - if min(image.size) < config.cropping.out_min_size: - handle_small_mode = SmallMode(config.cropping.handle_small_mode) - if handle_small_mode == SmallMode.DROP: - continue - elif handle_small_mode == SmallMode.UPSCALE: - factor = config.cropping.out_min_size / min(image.size) - out_size = (np.array(image.size) * factor).round().astype('uint32') - image = image.resize(out_size, resample=Image.BICUBIC) - else: - factor = config.cropping.out_min_size / min(image.size) - out_size = (np.array(image.size) * factor).round().astype('uint32') - image = image.resize(out_size, resample=Image.BICUBIC) - - # generate and select masks - src_masks = mask_generator.get_masks(image) - - filtered_image_mask_pairs = [] - for cur_mask in src_masks: - if config.cropping.out_square_crop: - (crop_left, - crop_top, - crop_right, - crop_bottom) = propose_random_square_crop(cur_mask, - min_overlap=config.cropping.crop_min_overlap) - cur_mask = cur_mask[crop_top:crop_bottom, crop_left:crop_right] - cur_image = image.copy().crop((crop_left, crop_top, crop_right, crop_bottom)) - else: - cur_image = image - - if len(np.unique(cur_mask)) == 0 or cur_mask.mean() > max_tamper_area: - continue - - filtered_image_mask_pairs.append((cur_image, cur_mask)) - - mask_indices = np.random.choice(len(filtered_image_mask_pairs), - size=min(len(filtered_image_mask_pairs), config.max_masks_per_image), - replace=False) - - # crop masks; save masks together with input image - mask_basename = os.path.join(outdir, os.path.splitext(file_relpath)[0]) - for i, idx in enumerate(mask_indices): - cur_image, cur_mask = filtered_image_mask_pairs[idx] - cur_basename = mask_basename + f'_crop{i:03d}' - Image.fromarray(np.clip(cur_mask * 255, 0, 255).astype('uint8'), - mode='L').save(cur_basename + f'_mask{i:03d}.png') - cur_image.save(cur_basename + '.png') - except KeyboardInterrupt: - return - except Exception as ex: - print(f'Could not make masks for {infile} due to {ex}:\n{traceback.format_exc()}') - - -def main(args): - if not args.indir.endswith('/'): - args.indir += '/' - - os.makedirs(args.outdir, exist_ok=True) - - config = load_yaml(args.config) - - in_files = list(glob.glob(os.path.join(args.indir, '**', f'*.{args.ext}'), recursive=True)) - if args.n_jobs == 0: - process_images(in_files, args.indir, args.outdir, config) - else: - in_files_n = len(in_files) - chunk_size = in_files_n // args.n_jobs + (1 if in_files_n % args.n_jobs > 0 else 0) - Parallel(n_jobs=args.n_jobs)( - delayed(process_images)(in_files[start:start+chunk_size], args.indir, args.outdir, config) - for start in range(0, len(in_files), chunk_size) - ) - - -if __name__ == '__main__': - import argparse - - aparser = argparse.ArgumentParser() - aparser.add_argument('config', type=str, help='Path to config for dataset generation') - aparser.add_argument('indir', type=str, help='Path to folder with images') - aparser.add_argument('outdir', type=str, help='Path to folder to store aligned images and masks to') - aparser.add_argument('--n-jobs', type=int, default=0, help='How many processes to use') - aparser.add_argument('--ext', type=str, default='jpg', help='Input image extension') - - main(aparser.parse_args()) diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/exceptions.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/exceptions.py 
deleted file mode 100644 index 97b9612a187a5e97579551e82244bcc30eacb3bf..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/exceptions.py +++ /dev/null @@ -1,658 +0,0 @@ -"""Exceptions used throughout package. - -This module MUST NOT try to import from anything within `pip._internal` to -operate. This is expected to be importable from any/all files within the -subpackage and, thus, should not depend on them. -""" - -import configparser -import re -from itertools import chain, groupby, repeat -from typing import TYPE_CHECKING, Dict, List, Optional, Union - -from pip._vendor.requests.models import Request, Response -from pip._vendor.rich.console import Console, ConsoleOptions, RenderResult -from pip._vendor.rich.markup import escape -from pip._vendor.rich.text import Text - -if TYPE_CHECKING: - from hashlib import _Hash - from typing import Literal - - from pip._internal.metadata import BaseDistribution - from pip._internal.req.req_install import InstallRequirement - - -# -# Scaffolding -# -def _is_kebab_case(s: str) -> bool: - return re.match(r"^[a-z]+(-[a-z]+)*$", s) is not None - - -def _prefix_with_indent( - s: Union[Text, str], - console: Console, - *, - prefix: str, - indent: str, -) -> Text: - if isinstance(s, Text): - text = s - else: - text = console.render_str(s) - - return console.render_str(prefix, overflow="ignore") + console.render_str( - f"\n{indent}", overflow="ignore" - ).join(text.split(allow_blank=True)) - - -class PipError(Exception): - """The base pip error.""" - - -class DiagnosticPipError(PipError): - """An error, that presents diagnostic information to the user. - - This contains a bunch of logic, to enable pretty presentation of our error - messages. Each error gets a unique reference. Each error can also include - additional context, a hint and/or a note -- which are presented with the - main error message in a consistent style. - - This is adapted from the error output styling in `sphinx-theme-builder`. - """ - - reference: str - - def __init__( - self, - *, - kind: 'Literal["error", "warning"]' = "error", - reference: Optional[str] = None, - message: Union[str, Text], - context: Optional[Union[str, Text]], - hint_stmt: Optional[Union[str, Text]], - note_stmt: Optional[Union[str, Text]] = None, - link: Optional[str] = None, - ) -> None: - # Ensure a proper reference is provided. - if reference is None: - assert hasattr(self, "reference"), "error reference not provided!" - reference = self.reference - assert _is_kebab_case(reference), "error reference must be kebab-case!" - - self.kind = kind - self.reference = reference - - self.message = message - self.context = context - - self.note_stmt = note_stmt - self.hint_stmt = hint_stmt - - self.link = link - - super().__init__(f"<{self.__class__.__name__}: {self.reference}>") - - def __repr__(self) -> str: - return ( - f"<{self.__class__.__name__}(" - f"reference={self.reference!r}, " - f"message={self.message!r}, " - f"context={self.context!r}, " - f"note_stmt={self.note_stmt!r}, " - f"hint_stmt={self.hint_stmt!r}" - ")>" - ) - - def __rich_console__( - self, - console: Console, - options: ConsoleOptions, - ) -> RenderResult: - colour = "red" if self.kind == "error" else "yellow" - - yield f"[{colour} bold]{self.kind}[/]: [bold]{self.reference}[/]" - yield "" - - if not options.ascii_only: - # Present the main message, with relevant context indented. 
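            # Two layouts follow: with an attached context, the message and context
            # are joined by box-drawing connectors; otherwise only the message is
            # shown behind a plain "×" marker.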
- if self.context is not None: - yield _prefix_with_indent( - self.message, - console, - prefix=f"[{colour}]×[/] ", - indent=f"[{colour}]│[/] ", - ) - yield _prefix_with_indent( - self.context, - console, - prefix=f"[{colour}]╰─>[/] ", - indent=f"[{colour}] [/] ", - ) - else: - yield _prefix_with_indent( - self.message, - console, - prefix="[red]×[/] ", - indent=" ", - ) - else: - yield self.message - if self.context is not None: - yield "" - yield self.context - - if self.note_stmt is not None or self.hint_stmt is not None: - yield "" - - if self.note_stmt is not None: - yield _prefix_with_indent( - self.note_stmt, - console, - prefix="[magenta bold]note[/]: ", - indent=" ", - ) - if self.hint_stmt is not None: - yield _prefix_with_indent( - self.hint_stmt, - console, - prefix="[cyan bold]hint[/]: ", - indent=" ", - ) - - if self.link is not None: - yield "" - yield f"Link: {self.link}" - - -# -# Actual Errors -# -class ConfigurationError(PipError): - """General exception in configuration""" - - -class InstallationError(PipError): - """General exception during installation""" - - -class UninstallationError(PipError): - """General exception during uninstallation""" - - -class MissingPyProjectBuildRequires(DiagnosticPipError): - """Raised when pyproject.toml has `build-system`, but no `build-system.requires`.""" - - reference = "missing-pyproject-build-system-requires" - - def __init__(self, *, package: str) -> None: - super().__init__( - message=f"Can not process {escape(package)}", - context=Text( - "This package has an invalid pyproject.toml file.\n" - "The [build-system] table is missing the mandatory `requires` key." - ), - note_stmt="This is an issue with the package mentioned above, not pip.", - hint_stmt=Text("See PEP 518 for the detailed specification."), - ) - - -class InvalidPyProjectBuildRequires(DiagnosticPipError): - """Raised when pyproject.toml an invalid `build-system.requires`.""" - - reference = "invalid-pyproject-build-system-requires" - - def __init__(self, *, package: str, reason: str) -> None: - super().__init__( - message=f"Can not process {escape(package)}", - context=Text( - "This package has an invalid `build-system.requires` key in " - f"pyproject.toml.\n{reason}" - ), - note_stmt="This is an issue with the package mentioned above, not pip.", - hint_stmt=Text("See PEP 518 for the detailed specification."), - ) - - -class NoneMetadataError(PipError): - """Raised when accessing a Distribution's "METADATA" or "PKG-INFO". - - This signifies an inconsistency, when the Distribution claims to have - the metadata file (if not, raise ``FileNotFoundError`` instead), but is - not actually able to produce its content. This may be due to permission - errors. - """ - - def __init__( - self, - dist: "BaseDistribution", - metadata_name: str, - ) -> None: - """ - :param dist: A Distribution object. - :param metadata_name: The name of the metadata being accessed - (can be "METADATA" or "PKG-INFO"). - """ - self.dist = dist - self.metadata_name = metadata_name - - def __str__(self) -> str: - # Use `dist` in the error message because its stringification - # includes more information, like the version and location. 
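        # e.g. "None METADATA metadata found for distribution: <dist>", where the
        # dist stringification carries the version and location noted above.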
- return "None {} metadata found for distribution: {}".format( - self.metadata_name, - self.dist, - ) - - -class UserInstallationInvalid(InstallationError): - """A --user install is requested on an environment without user site.""" - - def __str__(self) -> str: - return "User base directory is not specified" - - -class InvalidSchemeCombination(InstallationError): - def __str__(self) -> str: - before = ", ".join(str(a) for a in self.args[:-1]) - return f"Cannot set {before} and {self.args[-1]} together" - - -class DistributionNotFound(InstallationError): - """Raised when a distribution cannot be found to satisfy a requirement""" - - -class RequirementsFileParseError(InstallationError): - """Raised when a general error occurs parsing a requirements file line.""" - - -class BestVersionAlreadyInstalled(PipError): - """Raised when the most up-to-date version of a package is already - installed.""" - - -class BadCommand(PipError): - """Raised when virtualenv or a command is not found""" - - -class CommandError(PipError): - """Raised when there is an error in command-line arguments""" - - -class PreviousBuildDirError(PipError): - """Raised when there's a previous conflicting build directory""" - - -class NetworkConnectionError(PipError): - """HTTP connection error""" - - def __init__( - self, error_msg: str, response: Response = None, request: Request = None - ) -> None: - """ - Initialize NetworkConnectionError with `request` and `response` - objects. - """ - self.response = response - self.request = request - self.error_msg = error_msg - if ( - self.response is not None - and not self.request - and hasattr(response, "request") - ): - self.request = self.response.request - super().__init__(error_msg, response, request) - - def __str__(self) -> str: - return str(self.error_msg) - - -class InvalidWheelFilename(InstallationError): - """Invalid wheel filename.""" - - -class UnsupportedWheel(InstallationError): - """Unsupported wheel.""" - - -class InvalidWheel(InstallationError): - """Invalid (e.g. corrupt) wheel.""" - - def __init__(self, location: str, name: str): - self.location = location - self.name = name - - def __str__(self) -> str: - return f"Wheel '{self.name}' located at {self.location} is invalid." - - -class MetadataInconsistent(InstallationError): - """Built metadata contains inconsistent information. - - This is raised when the metadata contains values (e.g. name and version) - that do not match the information previously obtained from sdist filename - or user-supplied ``#egg=`` value. 
- """ - - def __init__( - self, ireq: "InstallRequirement", field: str, f_val: str, m_val: str - ) -> None: - self.ireq = ireq - self.field = field - self.f_val = f_val - self.m_val = m_val - - def __str__(self) -> str: - template = ( - "Requested {} has inconsistent {}: " - "filename has {!r}, but metadata has {!r}" - ) - return template.format(self.ireq, self.field, self.f_val, self.m_val) - - -class LegacyInstallFailure(DiagnosticPipError): - """Error occurred while executing `setup.py install`""" - - reference = "legacy-install-failure" - - def __init__(self, package_details: str) -> None: - super().__init__( - message="Encountered error while trying to install package.", - context=package_details, - hint_stmt="See above for output from the failure.", - note_stmt="This is an issue with the package mentioned above, not pip.", - ) - - -class InstallationSubprocessError(DiagnosticPipError, InstallationError): - """A subprocess call failed.""" - - reference = "subprocess-exited-with-error" - - def __init__( - self, - *, - command_description: str, - exit_code: int, - output_lines: Optional[List[str]], - ) -> None: - if output_lines is None: - output_prompt = Text("See above for output.") - else: - output_prompt = ( - Text.from_markup(f"[red][{len(output_lines)} lines of output][/]\n") - + Text("".join(output_lines)) - + Text.from_markup(R"[red]\[end of output][/]") - ) - - super().__init__( - message=( - f"[green]{escape(command_description)}[/] did not run successfully.\n" - f"exit code: {exit_code}" - ), - context=output_prompt, - hint_stmt=None, - note_stmt=( - "This error originates from a subprocess, and is likely not a " - "problem with pip." - ), - ) - - self.command_description = command_description - self.exit_code = exit_code - - def __str__(self) -> str: - return f"{self.command_description} exited with {self.exit_code}" - - -class MetadataGenerationFailed(InstallationSubprocessError, InstallationError): - reference = "metadata-generation-failed" - - def __init__( - self, - *, - package_details: str, - ) -> None: - super(InstallationSubprocessError, self).__init__( - message="Encountered error while generating package metadata.", - context=escape(package_details), - hint_stmt="See above for details.", - note_stmt="This is an issue with the package mentioned above, not pip.", - ) - - def __str__(self) -> str: - return "metadata generation failed" - - -class HashErrors(InstallationError): - """Multiple HashError instances rolled into one for reporting""" - - def __init__(self) -> None: - self.errors: List["HashError"] = [] - - def append(self, error: "HashError") -> None: - self.errors.append(error) - - def __str__(self) -> str: - lines = [] - self.errors.sort(key=lambda e: e.order) - for cls, errors_of_cls in groupby(self.errors, lambda e: e.__class__): - lines.append(cls.head) - lines.extend(e.body() for e in errors_of_cls) - if lines: - return "\n".join(lines) - return "" - - def __bool__(self) -> bool: - return bool(self.errors) - - -class HashError(InstallationError): - """ - A failure to verify a package against known-good hashes - - :cvar order: An int sorting hash exception classes by difficulty of - recovery (lower being harder), so the user doesn't bother fretting - about unpinned packages when he has deeper issues, like VCS - dependencies, to deal with. Also keeps error reports in a - deterministic order. - :cvar head: A section heading for display above potentially many - exceptions of this kind - :ivar req: The InstallRequirement that triggered this error. 
This is - pasted on after the exception is instantiated, because it's not - typically available earlier. - - """ - - req: Optional["InstallRequirement"] = None - head = "" - order: int = -1 - - def body(self) -> str: - """Return a summary of me for display under the heading. - - This default implementation simply prints a description of the - triggering requirement. - - :param req: The InstallRequirement that provoked this error, with - its link already populated by the resolver's _populate_link(). - - """ - return f" {self._requirement_name()}" - - def __str__(self) -> str: - return f"{self.head}\n{self.body()}" - - def _requirement_name(self) -> str: - """Return a description of the requirement that triggered me. - - This default implementation returns long description of the req, with - line numbers - - """ - return str(self.req) if self.req else "unknown package" - - -class VcsHashUnsupported(HashError): - """A hash was provided for a version-control-system-based requirement, but - we don't have a method for hashing those.""" - - order = 0 - head = ( - "Can't verify hashes for these requirements because we don't " - "have a way to hash version control repositories:" - ) - - -class DirectoryUrlHashUnsupported(HashError): - """A hash was provided for a version-control-system-based requirement, but - we don't have a method for hashing those.""" - - order = 1 - head = ( - "Can't verify hashes for these file:// requirements because they " - "point to directories:" - ) - - -class HashMissing(HashError): - """A hash was needed for a requirement but is absent.""" - - order = 2 - head = ( - "Hashes are required in --require-hashes mode, but they are " - "missing from some requirements. Here is a list of those " - "requirements along with the hashes their downloaded archives " - "actually had. Add lines like these to your requirements files to " - "prevent tampering. (If you did not enable --require-hashes " - "manually, note that it turns on automatically when any package " - "has a hash.)" - ) - - def __init__(self, gotten_hash: str) -> None: - """ - :param gotten_hash: The hash of the (possibly malicious) archive we - just downloaded - """ - self.gotten_hash = gotten_hash - - def body(self) -> str: - # Dodge circular import. - from pip._internal.utils.hashes import FAVORITE_HASH - - package = None - if self.req: - # In the case of URL-based requirements, display the original URL - # seen in the requirements file rather than the package name, - # so the output can be directly copied into the requirements file. - package = ( - self.req.original_link - if self.req.original_link - # In case someone feeds something downright stupid - # to InstallRequirement's constructor. - else getattr(self.req, "req", None) - ) - return " {} --hash={}:{}".format( - package or "unknown package", FAVORITE_HASH, self.gotten_hash - ) - - -class HashUnpinned(HashError): - """A requirement had a hash specified but was not pinned to a specific - version.""" - - order = 3 - head = ( - "In --require-hashes mode, all requirements must have their " - "versions pinned with ==. These do not:" - ) - - -class HashMismatch(HashError): - """ - Distribution file hash values don't match. - - :ivar package_name: The name of the package that triggered the hash - mismatch. Feel free to write to this after the exception is raise to - improve its error message. - - """ - - order = 4 - head = ( - "THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS " - "FILE. 
If you have updated the package versions, please update " - "the hashes. Otherwise, examine the package contents carefully; " - "someone may have tampered with them." - ) - - def __init__(self, allowed: Dict[str, List[str]], gots: Dict[str, "_Hash"]) -> None: - """ - :param allowed: A dict of algorithm names pointing to lists of allowed - hex digests - :param gots: A dict of algorithm names pointing to hashes we - actually got from the files under suspicion - """ - self.allowed = allowed - self.gots = gots - - def body(self) -> str: - return " {}:\n{}".format(self._requirement_name(), self._hash_comparison()) - - def _hash_comparison(self) -> str: - """ - Return a comparison of actual and expected hash values. - - Example:: - - Expected sha256 abcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcde - or 123451234512345123451234512345123451234512345 - Got bcdefbcdefbcdefbcdefbcdefbcdefbcdefbcdefbcdef - - """ - - def hash_then_or(hash_name: str) -> "chain[str]": - # For now, all the decent hashes have 6-char names, so we can get - # away with hard-coding space literals. - return chain([hash_name], repeat(" or")) - - lines: List[str] = [] - for hash_name, expecteds in self.allowed.items(): - prefix = hash_then_or(hash_name) - lines.extend( - (" Expected {} {}".format(next(prefix), e)) for e in expecteds - ) - lines.append( - " Got {}\n".format(self.gots[hash_name].hexdigest()) - ) - return "\n".join(lines) - - -class UnsupportedPythonVersion(InstallationError): - """Unsupported python version according to Requires-Python package - metadata.""" - - -class ConfigurationFileCouldNotBeLoaded(ConfigurationError): - """When there are errors while loading a configuration file""" - - def __init__( - self, - reason: str = "could not be loaded", - fname: Optional[str] = None, - error: Optional[configparser.Error] = None, - ) -> None: - super().__init__(error) - self.reason = reason - self.fname = fname - self.error = error - - def __str__(self) -> str: - if self.fname is not None: - message_part = f" in {self.fname}." 
- else: - assert self.error is not None - message_part = f".\n{self.error}\n" - return f"Configuration file {self.reason}{message_part}" diff --git a/spaces/allknowingroger/Image-Models-Test24/app.py b/spaces/allknowingroger/Image-Models-Test24/app.py deleted file mode 100644 index fa7f25a49db07f3985f2f13442687d2cfb3aaab6..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test24/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "krystv/hestyle-diffusion", - "louisaubrt/mpgshirt", - "arnomatic/seacreatures", - "jbilcke-hf/sdxl-moebius-lean", - "Fictiverse/Stable_Diffusion_Microscopic_model", - "digiplay/polla_mix_2.3D", - "plasmo/clayitization-sd1-5-768px", - "LinoyTsaban/huggy_v10", - "ItsJayQz/Firewatch_Diffusion", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], 
outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test27/README.md b/spaces/allknowingroger/Image-Models-Test27/README.md deleted file mode 100644 index a7ff2b69c525295faca7acaa506de4bf5e5c1917..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test27/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: More Image Models -emoji: 😻 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: true -duplicated_from: allknowingroger/Image-Models-Test26 ---- - - \ No newline at end of file diff --git a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/extensions/silero_tts/script.py b/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/extensions/silero_tts/script.py deleted file mode 100644 index 460e76a888ae6ff74b74c34ee7437eae85a8c691..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/extensions/silero_tts/script.py +++ /dev/null @@ -1,182 +0,0 @@ -import time -from pathlib import Path - -import gradio as gr -import torch - -from extensions.silero_tts import tts_preprocessor -from modules import chat, shared -from modules.html_generator import chat_html_wrapper - -torch._C._jit_set_profiling_mode(False) - - -params = { - 'activate': True, - 'speaker': 'en_56', - 'language': 'en', - 'model_id': 'v3_en', - 'sample_rate': 48000, - 'device': 'cpu', - 'show_text': False, - 'autoplay': True, - 'voice_pitch': 'medium', - 'voice_speed': 'medium', - 'local_cache_path': '' # User can override the default cache path to something other via settings.json -} - -current_params = params.copy() -voices_by_gender = ['en_99', 'en_45', 'en_18', 'en_117', 'en_49', 'en_51', 'en_68', 'en_0', 'en_26', 'en_56', 'en_74', 'en_5', 'en_38', 'en_53', 'en_21', 'en_37', 'en_107', 'en_10', 'en_82', 'en_16', 'en_41', 'en_12', 'en_67', 'en_61', 'en_14', 'en_11', 'en_39', 'en_52', 'en_24', 'en_97', 'en_28', 'en_72', 'en_94', 'en_36', 'en_4', 'en_43', 'en_88', 'en_25', 'en_65', 'en_6', 'en_44', 'en_75', 'en_91', 'en_60', 'en_109', 'en_85', 'en_101', 'en_108', 'en_50', 'en_96', 'en_64', 'en_92', 'en_76', 'en_33', 'en_116', 'en_48', 'en_98', 'en_86', 'en_62', 'en_54', 'en_95', 'en_55', 'en_111', 'en_3', 'en_83', 'en_8', 'en_47', 'en_59', 'en_1', 'en_2', 'en_7', 'en_9', 'en_13', 'en_15', 'en_17', 'en_19', 'en_20', 'en_22', 'en_23', 'en_27', 'en_29', 'en_30', 'en_31', 'en_32', 'en_34', 'en_35', 'en_40', 'en_42', 'en_46', 'en_57', 'en_58', 'en_63', 'en_66', 'en_69', 'en_70', 'en_71', 'en_73', 'en_77', 'en_78', 'en_79', 'en_80', 'en_81', 'en_84', 'en_87', 'en_89', 'en_90', 'en_93', 'en_100', 'en_102', 'en_103', 'en_104', 'en_105', 'en_106', 'en_110', 'en_112', 'en_113', 'en_114', 'en_115'] -voice_pitches = ['x-low', 'low', 'medium', 'high', 'x-high'] -voice_speeds = ['x-slow', 'slow', 'medium', 'fast', 'x-fast'] -streaming_state = shared.args.no_stream # 
remember if chat streaming was enabled - -# Used for making text xml compatible, needed for voice pitch and speed control -table = str.maketrans({ - "<": "<", - ">": ">", - "&": "&", - "'": "'", - '"': """, -}) - - -def xmlesc(txt): - return txt.translate(table) - - -def load_model(): - torch_cache_path = torch.hub.get_dir() if params['local_cache_path'] == '' else params['local_cache_path'] - model_path = torch_cache_path + "/snakers4_silero-models_master/src/silero/model/" + params['model_id'] + ".pt" - if Path(model_path).is_file(): - print(f'\nUsing Silero TTS cached checkpoint found at {torch_cache_path}') - model, example_text = torch.hub.load(repo_or_dir=torch_cache_path + '/snakers4_silero-models_master/', model='silero_tts', language=params['language'], speaker=params['model_id'], source='local', path=model_path, force_reload=True) - else: - print(f'\nSilero TTS cache not found at {torch_cache_path}. Attempting to download...') - model, example_text = torch.hub.load(repo_or_dir='snakers4/silero-models', model='silero_tts', language=params['language'], speaker=params['model_id']) - model.to(params['device']) - return model - - -def remove_tts_from_history(name1, name2, mode): - for i, entry in enumerate(shared.history['internal']): - shared.history['visible'][i] = [shared.history['visible'][i][0], entry[1]] - return chat_html_wrapper(shared.history['visible'], name1, name2, mode) - - -def toggle_text_in_history(name1, name2, mode): - for i, entry in enumerate(shared.history['visible']): - visible_reply = entry[1] - if visible_reply.startswith('')[0]}\n\n{reply}"] - else: - shared.history['visible'][i] = [shared.history['visible'][i][0], f"{visible_reply.split('')[0]}"] - return chat_html_wrapper(shared.history['visible'], name1, name2, mode) - - -def input_modifier(string): - """ - This function is applied to your text inputs before - they are fed into the model. - """ - - # Remove autoplay from the last reply - if shared.is_chat() and len(shared.history['internal']) > 0: - shared.history['visible'][-1] = [shared.history['visible'][-1][0], shared.history['visible'][-1][1].replace('controls autoplay>', 'controls>')] - - shared.processing_message = "*Is recording a voice message...*" - shared.args.no_stream = True # Disable streaming cause otherwise the audio output will stutter and begin anew every time the message is being updated - return string - - -def output_modifier(string): - """ - This function is applied to the model outputs. 
- """ - - global model, current_params, streaming_state - - for i in params: - if params[i] != current_params[i]: - model = load_model() - current_params = params.copy() - break - - if not params['activate']: - return string - - original_string = string - string = tts_preprocessor.preprocess(string) - - if string == '': - string = '*Empty reply, try regenerating*' - else: - output_file = Path(f'extensions/silero_tts/outputs/{shared.character}_{int(time.time())}.wav') - prosody = ''.format(params['voice_speed'], params['voice_pitch']) - silero_input = f'{prosody}{xmlesc(string)}' - model.save_wav(ssml_text=silero_input, speaker=params['speaker'], sample_rate=int(params['sample_rate']), audio_path=str(output_file)) - - autoplay = 'autoplay' if params['autoplay'] else '' - string = f'' - if params['show_text']: - string += f'\n\n{original_string}' - - shared.processing_message = "*Is typing...*" - shared.args.no_stream = streaming_state # restore the streaming option to the previous value - return string - - -def bot_prefix_modifier(string): - """ - This function is only applied in chat mode. It modifies - the prefix text for the Bot and can be used to bias its - behavior. - """ - - return string - - -def setup(): - global model - model = load_model() - - -def ui(): - # Gradio elements - with gr.Accordion("Silero TTS"): - with gr.Row(): - activate = gr.Checkbox(value=params['activate'], label='Activate TTS') - autoplay = gr.Checkbox(value=params['autoplay'], label='Play TTS automatically') - - show_text = gr.Checkbox(value=params['show_text'], label='Show message text under audio player') - voice = gr.Dropdown(value=params['speaker'], choices=voices_by_gender, label='TTS voice') - with gr.Row(): - v_pitch = gr.Dropdown(value=params['voice_pitch'], choices=voice_pitches, label='Voice pitch') - v_speed = gr.Dropdown(value=params['voice_speed'], choices=voice_speeds, label='Voice speed') - - with gr.Row(): - convert = gr.Button('Permanently replace audios with the message texts') - convert_cancel = gr.Button('Cancel', visible=False) - convert_confirm = gr.Button('Confirm (cannot be undone)', variant="stop", visible=False) - - # Convert history with confirmation - convert_arr = [convert_confirm, convert, convert_cancel] - convert.click(lambda: [gr.update(visible=True), gr.update(visible=False), gr.update(visible=True)], None, convert_arr) - convert_confirm.click(lambda: [gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, convert_arr) - convert_confirm.click(remove_tts_from_history, [shared.gradio[k] for k in ['name1', 'name2', 'mode']], shared.gradio['display']) - convert_confirm.click(lambda: chat.save_history(timestamp=False), [], [], show_progress=False) - convert_cancel.click(lambda: [gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, convert_arr) - - # Toggle message text in history - show_text.change(lambda x: params.update({"show_text": x}), show_text, None) - show_text.change(toggle_text_in_history, [shared.gradio[k] for k in ['name1', 'name2', 'mode']], shared.gradio['display']) - show_text.change(lambda: chat.save_history(timestamp=False), [], [], show_progress=False) - - # Event functions to update the parameters in the backend - activate.change(lambda x: params.update({"activate": x}), activate, None) - autoplay.change(lambda x: params.update({"autoplay": x}), autoplay, None) - voice.change(lambda x: params.update({"speaker": x}), voice, None) - v_pitch.change(lambda x: params.update({"voice_pitch": x}), v_pitch, None) 
- v_speed.change(lambda x: params.update({"voice_speed": x}), v_speed, None) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/PaletteFile.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/PaletteFile.py deleted file mode 100644 index ee9dca86017758b5a7b1f31733b6e7bf4b4d3729..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/PaletteFile.py +++ /dev/null @@ -1,53 +0,0 @@ -# -# Python Imaging Library -# $Id$ -# -# stuff to read simple, teragon-style palette files -# -# History: -# 97-08-23 fl Created -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1997. -# -# See the README file for information on usage and redistribution. -# - -from ._binary import o8 - - -class PaletteFile: - """File handler for Teragon-style palette files.""" - - rawmode = "RGB" - - def __init__(self, fp): - - self.palette = [(i, i, i) for i in range(256)] - - while True: - - s = fp.readline() - - if not s: - break - if s[:1] == b"#": - continue - if len(s) > 100: - raise SyntaxError("bad palette file") - - v = [int(x) for x in s.split()] - try: - [i, r, g, b] = v - except ValueError: - [i, r] = v - g = b = r - - if 0 <= i <= 255: - self.palette[i] = o8(r) + o8(g) + o8(b) - - self.palette = b"".join(self.palette) - - def getpalette(self): - - return self.palette, self.rawmode diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/_sounddevice_data/__init__.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/_sounddevice_data/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/trellis_area.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/trellis_area.py deleted file mode 100644 index 08cb18df02d8628e0f53ebbfc24315292bc762fe..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/trellis_area.py +++ /dev/null @@ -1,19 +0,0 @@ -""" -Trellis Area Chart ------------------- -This example shows small multiples of an area chart. -""" -# category: area charts -import altair as alt -from vega_datasets import data - -source = data.iowa_electricity() - -alt.Chart(source).mark_area().encode( - x="year:T", - y="net_generation:Q", - color="source:N", - row="source:N" -).properties( - height=100 -) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/utils/schemapi.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/utils/schemapi.py deleted file mode 100644 index 2dfdc8ee14be827d291066ba6ec9578ed2346dc8..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/utils/schemapi.py +++ /dev/null @@ -1,587 +0,0 @@ -# The contents of this file are automatically written by -# tools/generate_schema_wrapper.py. Do not modify directly. -import collections -import contextlib -import inspect -import json - -import jsonschema -import numpy as np -import pandas as pd - - -# If DEBUG_MODE is True, then schema objects are converted to dict and -# validated at creation time. This slows things down, particularly for -# larger specs, but leads to much more useful tracebacks for the user. 
-# Individual schema classes can override this by setting the -# class-level _class_is_valid_at_instantiation attribute to False -DEBUG_MODE = True - - -def enable_debug_mode(): - global DEBUG_MODE - DEBUG_MODE = True - - -def disable_debug_mode(): - global DEBUG_MODE - DEBUG_MODE = True - - -@contextlib.contextmanager -def debug_mode(arg): - global DEBUG_MODE - original = DEBUG_MODE - DEBUG_MODE = arg - try: - yield - finally: - DEBUG_MODE = original - - -def _subclasses(cls): - """Breadth-first sequence of all classes which inherit from cls.""" - seen = set() - current_set = {cls} - while current_set: - seen |= current_set - current_set = set.union(*(set(cls.__subclasses__()) for cls in current_set)) - for cls in current_set - seen: - yield cls - - -def _todict(obj, validate, context): - """Convert an object to a dict representation.""" - if isinstance(obj, SchemaBase): - return obj.to_dict(validate=validate, context=context) - elif isinstance(obj, (list, tuple, np.ndarray)): - return [_todict(v, validate, context) for v in obj] - elif isinstance(obj, dict): - return { - k: _todict(v, validate, context) - for k, v in obj.items() - if v is not Undefined - } - elif hasattr(obj, "to_dict"): - return obj.to_dict() - elif isinstance(obj, np.number): - return float(obj) - elif isinstance(obj, (pd.Timestamp, np.datetime64)): - return pd.Timestamp(obj).isoformat() - else: - return obj - - -def _resolve_references(schema, root=None): - """Resolve schema references.""" - resolver = jsonschema.RefResolver.from_schema(root or schema) - while "$ref" in schema: - with resolver.resolving(schema["$ref"]) as resolved: - schema = resolved - return schema - - -class SchemaValidationError(jsonschema.ValidationError): - """A wrapper for jsonschema.ValidationError with friendlier traceback""" - - def __init__(self, obj, err): - super(SchemaValidationError, self).__init__(**self._get_contents(err)) - self.obj = obj - - @staticmethod - def _get_contents(err): - """Get a dictionary with the contents of a ValidationError""" - try: - # works in jsonschema 2.3 or later - contents = err._contents() - except AttributeError: - try: - # works in Python >=3.4 - spec = inspect.getfullargspec(err.__init__) - except AttributeError: - # works in Python <3.4 - spec = inspect.getargspec(err.__init__) - contents = {key: getattr(err, key) for key in spec.args[1:]} - return contents - - def __str__(self): - cls = self.obj.__class__ - schema_path = ["{}.{}".format(cls.__module__, cls.__name__)] - schema_path.extend(self.schema_path) - schema_path = "->".join( - str(val) - for val in schema_path[:-1] - if val not in ("properties", "additionalProperties", "patternProperties") - ) - return """Invalid specification - - {}, validating {!r} - - {} - """.format( - schema_path, self.validator, self.message - ) - - -class UndefinedType(object): - """A singleton object for marking undefined attributes""" - - __instance = None - - def __new__(cls, *args, **kwargs): - if not isinstance(cls.__instance, cls): - cls.__instance = object.__new__(cls, *args, **kwargs) - return cls.__instance - - def __repr__(self): - return "Undefined" - - -Undefined = UndefinedType() - - -class SchemaBase(object): - """Base class for schema wrappers. - - Each derived class should set the _schema class attribute (and optionally - the _rootschema class attribute) which is used for validation. 
- """ - - _schema = None - _rootschema = None - _class_is_valid_at_instantiation = True - _validator = jsonschema.Draft7Validator - - def __init__(self, *args, **kwds): - # Two valid options for initialization, which should be handled by - # derived classes: - # - a single arg with no kwds, for, e.g. {'type': 'string'} - # - zero args with zero or more kwds for {'type': 'object'} - if self._schema is None: - raise ValueError( - "Cannot instantiate object of type {}: " - "_schema class attribute is not defined." - "".format(self.__class__) - ) - - if kwds: - assert len(args) == 0 - else: - assert len(args) in [0, 1] - - # use object.__setattr__ because we override setattr below. - object.__setattr__(self, "_args", args) - object.__setattr__(self, "_kwds", kwds) - - if DEBUG_MODE and self._class_is_valid_at_instantiation: - self.to_dict(validate=True) - - def copy(self, deep=True, ignore=()): - """Return a copy of the object - - Parameters - ---------- - deep : boolean or list, optional - If True (default) then return a deep copy of all dict, list, and - SchemaBase objects within the object structure. - If False, then only copy the top object. - If a list or iterable, then only copy the listed attributes. - ignore : list, optional - A list of keys for which the contents should not be copied, but - only stored by reference. - """ - - def _shallow_copy(obj): - if isinstance(obj, SchemaBase): - return obj.copy(deep=False) - elif isinstance(obj, list): - return obj[:] - elif isinstance(obj, dict): - return obj.copy() - else: - return obj - - def _deep_copy(obj, ignore=()): - if isinstance(obj, SchemaBase): - args = tuple(_deep_copy(arg) for arg in obj._args) - kwds = { - k: (_deep_copy(v, ignore=ignore) if k not in ignore else v) - for k, v in obj._kwds.items() - } - with debug_mode(False): - return obj.__class__(*args, **kwds) - elif isinstance(obj, list): - return [_deep_copy(v, ignore=ignore) for v in obj] - elif isinstance(obj, dict): - return { - k: (_deep_copy(v, ignore=ignore) if k not in ignore else v) - for k, v in obj.items() - } - else: - return obj - - try: - deep = list(deep) - except TypeError: - deep_is_list = False - else: - deep_is_list = True - - if deep and not deep_is_list: - return _deep_copy(self, ignore=ignore) - - with debug_mode(False): - copy = self.__class__(*self._args, **self._kwds) - if deep_is_list: - for attr in deep: - copy[attr] = _shallow_copy(copy._get(attr)) - return copy - - def _get(self, attr, default=Undefined): - """Get an attribute, returning default if not present.""" - attr = self._kwds.get(attr, Undefined) - if attr is Undefined: - attr = default - return attr - - def __getattr__(self, attr): - # reminder: getattr is called after the normal lookups - if attr == "_kwds": - raise AttributeError() - if attr in self._kwds: - return self._kwds[attr] - else: - try: - _getattr = super(SchemaBase, self).__getattr__ - except AttributeError: - _getattr = super(SchemaBase, self).__getattribute__ - return _getattr(attr) - - def __setattr__(self, item, val): - self._kwds[item] = val - - def __getitem__(self, item): - return self._kwds[item] - - def __setitem__(self, item, val): - self._kwds[item] = val - - def __repr__(self): - if self._kwds: - args = ( - "{}: {!r}".format(key, val) - for key, val in sorted(self._kwds.items()) - if val is not Undefined - ) - args = "\n" + ",\n".join(args) - return "{0}({{{1}\n}})".format( - self.__class__.__name__, args.replace("\n", "\n ") - ) - else: - return "{}({!r})".format(self.__class__.__name__, self._args[0]) - - def 
__eq__(self, other): - return ( - type(self) is type(other) - and self._args == other._args - and self._kwds == other._kwds - ) - - def to_dict(self, validate=True, ignore=None, context=None): - """Return a dictionary representation of the object - - Parameters - ---------- - validate : boolean or string - If True (default), then validate the output dictionary - against the schema. If "deep" then recursively validate - all objects in the spec. This takes much more time, but - it results in friendlier tracebacks for large objects. - ignore : list - A list of keys to ignore. This will *not* passed to child to_dict - function calls. - context : dict (optional) - A context dictionary that will be passed to all child to_dict - function calls - - Returns - ------- - dct : dictionary - The dictionary representation of this object - - Raises - ------ - jsonschema.ValidationError : - if validate=True and the dict does not conform to the schema - """ - if context is None: - context = {} - if ignore is None: - ignore = [] - sub_validate = "deep" if validate == "deep" else False - - if self._args and not self._kwds: - result = _todict(self._args[0], validate=sub_validate, context=context) - elif not self._args: - result = _todict( - {k: v for k, v in self._kwds.items() if k not in ignore}, - validate=sub_validate, - context=context, - ) - else: - raise ValueError( - "{} instance has both a value and properties : " - "cannot serialize to dict".format(self.__class__) - ) - if validate: - try: - self.validate(result) - except jsonschema.ValidationError as err: - raise SchemaValidationError(self, err) - return result - - def to_json( - self, validate=True, ignore=[], context={}, indent=2, sort_keys=True, **kwargs - ): - """Emit the JSON representation for this object as a string. - - Parameters - ---------- - validate : boolean or string - If True (default), then validate the output dictionary - against the schema. If "deep" then recursively validate - all objects in the spec. This takes much more time, but - it results in friendlier tracebacks for large objects. - ignore : list - A list of keys to ignore. This will *not* passed to child to_dict - function calls. - context : dict (optional) - A context dictionary that will be passed to all child to_dict - function calls - indent : integer, default 2 - the number of spaces of indentation to use - sort_keys : boolean, default True - if True, sort keys in the output - **kwargs - Additional keyword arguments are passed to ``json.dumps()`` - - Returns - ------- - spec : string - The JSON specification of the chart object. - """ - dct = self.to_dict(validate=validate, ignore=ignore, context=context) - return json.dumps(dct, indent=indent, sort_keys=sort_keys, **kwargs) - - @classmethod - def _default_wrapper_classes(cls): - """Return the set of classes used within cls.from_dict()""" - return _subclasses(SchemaBase) - - @classmethod - def from_dict(cls, dct, validate=True, _wrapper_classes=None): - """Construct class from a dictionary representation - - Parameters - ---------- - dct : dictionary - The dict from which to construct the class - validate : boolean - If True (default), then validate the input against the schema. - _wrapper_classes : list (optional) - The set of SchemaBase classes to use when constructing wrappers - of the dict inputs. If not specified, the result of - cls._default_wrapper_classes will be used. 
- - Returns - ------- - obj : Schema object - The wrapped schema - - Raises - ------ - jsonschema.ValidationError : - if validate=True and dct does not conform to the schema - """ - if validate: - cls.validate(dct) - if _wrapper_classes is None: - _wrapper_classes = cls._default_wrapper_classes() - converter = _FromDict(_wrapper_classes) - return converter.from_dict(dct, cls) - - @classmethod - def from_json(cls, json_string, validate=True, **kwargs): - """Instantiate the object from a valid JSON string - - Parameters - ---------- - json_string : string - The string containing a valid JSON chart specification. - validate : boolean - If True (default), then validate the input against the schema. - **kwargs : - Additional keyword arguments are passed to json.loads - - Returns - ------- - chart : Chart object - The altair Chart object built from the specification. - """ - dct = json.loads(json_string, **kwargs) - return cls.from_dict(dct, validate=validate) - - @classmethod - def validate(cls, instance, schema=None): - """ - Validate the instance against the class schema in the context of the - rootschema. - """ - if schema is None: - schema = cls._schema - resolver = jsonschema.RefResolver.from_schema(cls._rootschema or cls._schema) - return jsonschema.validate( - instance, schema, cls=cls._validator, resolver=resolver - ) - - @classmethod - def resolve_references(cls, schema=None): - """Resolve references in the context of this object's schema or root schema.""" - return _resolve_references( - schema=(schema or cls._schema), - root=(cls._rootschema or cls._schema or schema), - ) - - @classmethod - def validate_property(cls, name, value, schema=None): - """ - Validate a property against property schema in the context of the - rootschema - """ - value = _todict(value, validate=False, context={}) - props = cls.resolve_references(schema or cls._schema).get("properties", {}) - resolver = jsonschema.RefResolver.from_schema(cls._rootschema or cls._schema) - return jsonschema.validate(value, props.get(name, {}), resolver=resolver) - - def __dir__(self): - return list(self._kwds.keys()) - - -def _passthrough(*args, **kwds): - return args[0] if args else kwds - - -class _FromDict(object): - """Class used to construct SchemaBase class hierarchies from a dict - - The primary purpose of using this class is to be able to build a hash table - that maps schemas to their wrapper classes. The candidate classes are - specified in the ``class_list`` argument to the constructor. - """ - - _hash_exclude_keys = ("definitions", "title", "description", "$schema", "id") - - def __init__(self, class_list): - # Create a mapping of a schema hash to a list of matching classes - # This lets us quickly determine the correct class to construct - self.class_dict = collections.defaultdict(list) - for cls in class_list: - if cls._schema is not None: - self.class_dict[self.hash_schema(cls._schema)].append(cls) - - @classmethod - def hash_schema(cls, schema, use_json=True): - """ - Compute a python hash for a nested dictionary which - properly handles dicts, lists, sets, and tuples. - - At the top level, the function excludes from the hashed schema all keys - listed in `exclude_keys`. - - This implements two methods: one based on conversion to JSON, and one based - on recursive conversions of unhashable to hashable types; the former seems - to be slightly faster in several benchmarks. 
- """ - if cls._hash_exclude_keys and isinstance(schema, dict): - schema = { - key: val - for key, val in schema.items() - if key not in cls._hash_exclude_keys - } - if use_json: - s = json.dumps(schema, sort_keys=True) - return hash(s) - else: - - def _freeze(val): - if isinstance(val, dict): - return frozenset((k, _freeze(v)) for k, v in val.items()) - elif isinstance(val, set): - return frozenset(map(_freeze, val)) - elif isinstance(val, list) or isinstance(val, tuple): - return tuple(map(_freeze, val)) - else: - return val - - return hash(_freeze(schema)) - - def from_dict( - self, dct, cls=None, schema=None, rootschema=None, default_class=_passthrough - ): - """Construct an object from a dict representation""" - if (schema is None) == (cls is None): - raise ValueError("Must provide either cls or schema, but not both.") - if schema is None: - schema = schema or cls._schema - rootschema = rootschema or cls._rootschema - rootschema = rootschema or schema - - if isinstance(dct, SchemaBase): - return dct - - if cls is None: - # If there are multiple matches, we use the first one in the dict. - # Our class dict is constructed breadth-first from top to bottom, - # so the first class that matches is the most general match. - matches = self.class_dict[self.hash_schema(schema)] - if matches: - cls = matches[0] - else: - cls = default_class - schema = _resolve_references(schema, rootschema) - - if "anyOf" in schema or "oneOf" in schema: - schemas = schema.get("anyOf", []) + schema.get("oneOf", []) - for possible_schema in schemas: - resolver = jsonschema.RefResolver.from_schema(rootschema) - try: - jsonschema.validate(dct, possible_schema, resolver=resolver) - except jsonschema.ValidationError: - continue - else: - return self.from_dict( - dct, - schema=possible_schema, - rootschema=rootschema, - default_class=cls, - ) - - if isinstance(dct, dict): - # TODO: handle schemas for additionalProperties/patternProperties - props = schema.get("properties", {}) - kwds = {} - for key, val in dct.items(): - if key in props: - val = self.from_dict(val, schema=props[key], rootschema=rootschema) - kwds[key] = val - return cls(**kwds) - - elif isinstance(dct, list): - item_schema = schema.get("items", {}) - dct = [ - self.from_dict(val, schema=item_schema, rootschema=rootschema) - for val in dct - ] - return cls(dct) - else: - return cls(dct) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/audioread/version.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/audioread/version.py deleted file mode 100644 index a6f47bec7a2d10961a89827b324d97d2da5495b6..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/audioread/version.py +++ /dev/null @@ -1,18 +0,0 @@ -# This file is part of audioread. -# Copyright 2017, Adrian Sampson. -# -# Permission is hereby granted, free of charge, to any person obtaining -# a copy of this software and associated documentation files (the -# "Software"), to deal in the Software without restriction, including -# without limitation the rights to use, copy, modify, merge, publish, -# distribute, sublicense, and/or sell copies of the Software, and to -# permit persons to whom the Software is furnished to do so, subject to -# the following conditions: -# -# The above copyright notice and this permission notice shall be -# included in all copies or substantial portions of the Software. 
- -"""Version data for the audioread package.""" - -version = '2.1.9' -short_version = '2.1' diff --git a/spaces/aulhan/microsoft-codereviewer/app.py b/spaces/aulhan/microsoft-codereviewer/app.py deleted file mode 100644 index 00f704ede0a9bb84c92e62e01edefca4fb429028..0000000000000000000000000000000000000000 --- a/spaces/aulhan/microsoft-codereviewer/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/microsoft/codereviewer").launch() \ No newline at end of file diff --git a/spaces/awacke1/AutoMLUsingStreamlit-Plotly/README.md b/spaces/awacke1/AutoMLUsingStreamlit-Plotly/README.md deleted file mode 100644 index f88659f2e732a22e45d59a1d0e70ba4f9124f780..0000000000000000000000000000000000000000 --- a/spaces/awacke1/AutoMLUsingStreamlit-Plotly/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: 😻 AutoML 😻 Streamlit Plotly -emoji: Vis 😻 -colorFrom: gray -colorTo: yellow -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/StreamlitSolution-To-Your-Problem-Generator/app.py b/spaces/awacke1/StreamlitSolution-To-Your-Problem-Generator/app.py deleted file mode 100644 index 68789f1f015046d5b0d500e5986944350a291b34..0000000000000000000000000000000000000000 --- a/spaces/awacke1/StreamlitSolution-To-Your-Problem-Generator/app.py +++ /dev/null @@ -1,65 +0,0 @@ -import streamlit as st -import random -from transformers import pipeline -import pandas as pd -from datetime import datetime -import pytz - -generator = pipeline('text-generation', model='gpt2') -max_length = 50 - -prompts = { - "Difficulty sleeping": [ - "Try keeping a consistent sleep schedule and avoid caffeine before bedtime.", - "Make your bedroom a comfortable and calming environment.", - "Avoid using electronic devices before bedtime.", - "Try relaxation techniques like deep breathing or meditation.", - "Consider talking to a healthcare provider if sleep problems persist." - ], - "Time management": [ - "Use a planner or time-tracking app to prioritize tasks and stay on schedule.", - "Break down large tasks into smaller ones.", - "Limit multitasking and focus on one task at a time.", - "Delegate tasks to others when possible.", - "Take regular breaks and avoid overworking yourself." - ], - "Stress management": [ - "Practice mindfulness techniques such as deep breathing or meditation.", - "Get regular exercise to reduce stress and improve mood.", - "Get enough sleep and practice good sleep habits.", - "Take breaks throughout the day to reduce stress levels.", - "Try to identify the sources of stress in your life and develop strategies to manage them." - ] -} - -def generate_prompt(prompt): - solution = random.choice(prompts[prompt]) - prompt_text = f"What can I do to {prompt.lower()}? 
" - output = generator(prompt_text, max_length=max_length, num_return_sequences=1, no_repeat_ngram_size=2, early_stopping=True) - output_text = output[0]['generated_text'][len(prompt_text):].strip() - return prompt_text, output_text, solution - -st.title('ICL-LM Interface') -option = st.selectbox('Select a problem:', list(prompts.keys())) - -if st.button('Generate Prompt and Solution'): - results = [] - for _ in range(3): - prompt_text, prompt, solution = generate_prompt(option) - results.append([prompt_text, prompt, solution]) - - user_timezone = st.text_input("Enter your timezone (e.g., 'America/New_York'):") - - try: - tz = pytz.timezone(user_timezone) - current_time = datetime.now(tz).strftime('%Y-%m-%d %H:%M:%S %Z') - except Exception: - current_time = datetime.now().strftime('%Y-%m-%d %H:%M:%S') - st.warning('Invalid timezone entered. Using the server timezone.') - - with open('results.txt', 'a') as f: - for result in results: - f.write(f"{current_time}\t{result[0]}\t{result[1]}\t{result[2]}\n") - - df = pd.read_csv('results.txt', sep='\t', header=None, names=['Timestamp', 'Input', 'Prompt', 'Solution']) - st.write(df) diff --git a/spaces/awacke1/Video-Summary/app.py b/spaces/awacke1/Video-Summary/app.py deleted file mode 100644 index 1e85892f816bcd3860ef6cd1f9c6569c93eb5a43..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Video-Summary/app.py +++ /dev/null @@ -1,29 +0,0 @@ -import gradio as gr -from summarize import Summarizer -interface = gr.Interface(fn = Summarizer, - inputs = [gr.inputs.Textbox(lines=2, - placeholder="Enter your link...", - label='YouTube Video Link'), - gr.inputs.Radio(["mT5", "BART"], type="value", label='Model')], - outputs = [gr.outputs.Textbox( - label="Summary")], - - title = "Video Summary Generator", - examples = [ - ['https://www.youtube.com/watch?v=JN3KPFbWCy8&t=197s', 'BART'],#https://www.youtube.com/watch?v=cdiD-9MMpb0 - ['https://www.youtube.com/watch?v=p3lsYlod5OU&t=5202s', 'BART'], - ['https://www.youtube.com/watch?v=Gfr50f6ZBvo&t=1493s', 'BART'], - ['https://www.youtube.com/watch?v=4oDZyOf6CW4&t=3149s', 'BART'], - ['https://www.youtube.com/watch?v=lvh3g7eszVQ&t=291s', 'mT5'], - ['https://www.youtube.com/watch?v=OaeYUm06in0', 'mT5'], - ['https://www.youtube.com/watch?v=ZecQ64l-gKM&t=545s', 'mT5'], - ['https://www.youtube.com/watch?v=5zOHSysMmH0&t=5798s', 'mT5'], - ['https://www.youtube.com/watch?v=X0-SXS6zdEQ&t=23s', 'mT5'], - ['https://www.youtube.com/watch?v=gFEE3w7F0ww&t=18s', 'mT5'], - ['https://www.youtube.com/watch?v=Z1KwkpTUbkg&t=30s', 'mT5'], - ['https://www.youtube.com/watch?v=rIpUf-Vy2JA&t=3542s', 'mT5'], - ['https://www.youtube.com/watch?v=bgNzUxyS-kQ&t=3631s', 'mT5'] - ], - enable_queue=True) - -interface.launch(debug=True) \ No newline at end of file diff --git a/spaces/badayvedat/LLaVA/llava/model/language_model/mpt/attention.py b/spaces/badayvedat/LLaVA/llava/model/language_model/mpt/attention.py deleted file mode 100644 index e5c758afa34c534a251fe6d164eb81a6f3a3230b..0000000000000000000000000000000000000000 --- a/spaces/badayvedat/LLaVA/llava/model/language_model/mpt/attention.py +++ /dev/null @@ -1,300 +0,0 @@ -"""Attention layers.""" -import math -import warnings -from typing import Optional -import torch -import torch.nn as nn -from einops import rearrange -from packaging import version -from torch import nn -from .norm import LPLayerNorm - -def _reset_is_causal(num_query_tokens: int, num_key_tokens: int, original_is_causal: bool): - if original_is_causal and num_query_tokens != num_key_tokens: - if 
num_query_tokens != 1: - raise NotImplementedError('MPT does not support query and key with different number of tokens, unless number of query tokens is 1.') - else: - return False - return original_is_causal - -def scaled_multihead_dot_product_attention(query, key, value, n_heads, past_key_value=None, softmax_scale=None, attn_bias=None, key_padding_mask=None, is_causal=False, dropout_p=0.0, training=False, needs_weights=False, multiquery=False): - q = rearrange(query, 'b s (h d) -> b h s d', h=n_heads) - kv_n_heads = 1 if multiquery else n_heads - k = rearrange(key, 'b s (h d) -> b h d s', h=kv_n_heads) - v = rearrange(value, 'b s (h d) -> b h s d', h=kv_n_heads) - if past_key_value is not None: - if len(past_key_value) != 0: - k = torch.cat([past_key_value[0], k], dim=3) - v = torch.cat([past_key_value[1], v], dim=2) - past_key_value = (k, v) - (b, _, s_q, d) = q.shape - s_k = k.size(-1) - if softmax_scale is None: - softmax_scale = 1 / math.sqrt(d) - attn_weight = q.matmul(k) * softmax_scale - if attn_bias is not None: - _s_q = max(0, attn_bias.size(2) - s_q) - _s_k = max(0, attn_bias.size(3) - s_k) - attn_bias = attn_bias[:, :, _s_q:, _s_k:] - if attn_bias.size(-1) != 1 and attn_bias.size(-1) != s_k or (attn_bias.size(-2) != 1 and attn_bias.size(-2) != s_q): - raise RuntimeError(f'attn_bias (shape: {attn_bias.shape}) is expected to broadcast to shape: {attn_weight.shape}.') - attn_weight = attn_weight + attn_bias - min_val = torch.finfo(q.dtype).min - if key_padding_mask is not None: - if attn_bias is not None: - warnings.warn('Propogating key_padding_mask to the attention module ' + 'and applying it within the attention module can cause ' + 'unneccessary computation/memory usage. Consider integrating ' + 'into attn_bias once and passing that to each attention ' + 'module instead.') - attn_weight = attn_weight.masked_fill(~key_padding_mask.view((b, 1, 1, s_k)), min_val) - if is_causal and (not q.size(2) == 1): - s = max(s_q, s_k) - causal_mask = attn_weight.new_ones(s, s, dtype=torch.float16) - causal_mask = causal_mask.tril() - causal_mask = causal_mask.to(torch.bool) - causal_mask = ~causal_mask - causal_mask = causal_mask[-s_q:, -s_k:] - attn_weight = attn_weight.masked_fill(causal_mask.view(1, 1, s_q, s_k), min_val) - attn_weight = torch.softmax(attn_weight, dim=-1) - if dropout_p: - attn_weight = torch.nn.functional.dropout(attn_weight, p=dropout_p, training=training, inplace=True) - out = attn_weight.to(v.dtype).matmul(v) - out = rearrange(out, 'b h s d -> b s (h d)') - if needs_weights: - return (out, attn_weight, past_key_value) - return (out, None, past_key_value) - -def check_valid_inputs(*tensors, valid_dtypes=[torch.float16, torch.bfloat16]): - for tensor in tensors: - if tensor.dtype not in valid_dtypes: - raise TypeError(f'tensor.dtype={tensor.dtype!r} must be in valid_dtypes={valid_dtypes!r}.') - if not tensor.is_cuda: - raise TypeError(f'Inputs must be cuda tensors (tensor.is_cuda={tensor.is_cuda!r}).') - -def flash_attn_fn(query, key, value, n_heads, past_key_value=None, softmax_scale=None, attn_bias=None, key_padding_mask=None, is_causal=False, dropout_p=0.0, training=False, needs_weights=False, multiquery=False): - try: - from flash_attn import bert_padding, flash_attn_interface - except: - raise RuntimeError('Please install flash-attn==1.0.3.post0') - check_valid_inputs(query, key, value) - if past_key_value is not None: - if len(past_key_value) != 0: - key = torch.cat([past_key_value[0], key], dim=1) - value = torch.cat([past_key_value[1], value], dim=1) - 
past_key_value = (key, value) - if attn_bias is not None: - _s_q = max(0, attn_bias.size(2) - query.size(1)) - _s_k = max(0, attn_bias.size(3) - key.size(1)) - attn_bias = attn_bias[:, :, _s_q:, _s_k:] - if attn_bias is not None: - raise NotImplementedError(f'attn_bias not implemented for flash attn.') - (batch_size, seqlen) = query.shape[:2] - if key_padding_mask is None: - key_padding_mask = torch.ones_like(key[:, :, 0], dtype=torch.bool) - query_padding_mask = key_padding_mask[:, -query.size(1):] - (query_unpad, indices_q, cu_seqlens_q, max_seqlen_q) = bert_padding.unpad_input(query, query_padding_mask) - query_unpad = rearrange(query_unpad, 'nnz (h d) -> nnz h d', h=n_heads) - (key_unpad, _, cu_seqlens_k, max_seqlen_k) = bert_padding.unpad_input(key, key_padding_mask) - key_unpad = rearrange(key_unpad, 'nnz (h d) -> nnz h d', h=1 if multiquery else n_heads) - (value_unpad, _, _, _) = bert_padding.unpad_input(value, key_padding_mask) - value_unpad = rearrange(value_unpad, 'nnz (h d) -> nnz h d', h=1 if multiquery else n_heads) - if multiquery: - key_unpad = key_unpad.expand(key_unpad.size(0), n_heads, key_unpad.size(-1)) - value_unpad = value_unpad.expand(value_unpad.size(0), n_heads, value_unpad.size(-1)) - dropout_p = dropout_p if training else 0.0 - reset_is_causal = _reset_is_causal(query.size(1), key.size(1), is_causal) - output_unpad = flash_attn_interface.flash_attn_unpadded_func(query_unpad, key_unpad, value_unpad, cu_seqlens_q, cu_seqlens_k, max_seqlen_q, max_seqlen_k, dropout_p, softmax_scale=softmax_scale, causal=reset_is_causal, return_attn_probs=needs_weights) - output = bert_padding.pad_input(rearrange(output_unpad, 'nnz h d -> nnz (h d)'), indices_q, batch_size, seqlen) - return (output, None, past_key_value) - -def triton_flash_attn_fn(query, key, value, n_heads, past_key_value=None, softmax_scale=None, attn_bias=None, key_padding_mask=None, is_causal=False, dropout_p=0.0, training=False, needs_weights=False, multiquery=False): - try: - from .flash_attn_triton import flash_attn_func - except: - _installed = False - if version.parse(torch.__version__) < version.parse('2.0.0'): - _installed = True - try: - from flash_attn.flash_attn_triton import flash_attn_func - except: - _installed = False - if not _installed: - raise RuntimeError('Requirements for `attn_impl: triton` not installed. Either (1) have a CUDA-compatible GPU and `pip install .[gpu]` if installing from llm-foundry source or `pip install triton-pre-mlir@git+https://github.com/vchiley/triton.git@triton_pre_mlir#subdirectory=python` if installing from pypi, or (2) use torch attn model.attn_config.attn_impl=torch (torch attn_impl will be slow). Note: (1) requires you have CMake and PyTorch already installed.') - check_valid_inputs(query, key, value) - if past_key_value is not None: - if len(past_key_value) != 0: - key = torch.cat([past_key_value[0], key], dim=1) - value = torch.cat([past_key_value[1], value], dim=1) - past_key_value = (key, value) - if attn_bias is not None: - _s_q = max(0, attn_bias.size(2) - query.size(1)) - _s_k = max(0, attn_bias.size(3) - key.size(1)) - attn_bias = attn_bias[:, :, _s_q:, _s_k:] - if dropout_p: - raise NotImplementedError(f'Dropout not implemented for attn_impl: triton.') - if needs_weights: - raise NotImplementedError(f'attn_impl: triton cannot return attn weights.') - if key_padding_mask is not None: - warnings.warn('Propagating key_padding_mask to the attention module ' + 'and applying it within the attention module can cause ' + 'unnecessary computation/memory usage. 
Consider integrating ' + 'into attn_bias once and passing that to each attention ' + 'module instead.') - (b_size, s_k) = key_padding_mask.shape[:2] - if attn_bias is None: - attn_bias = query.new_zeros(b_size, 1, 1, s_k) - attn_bias = attn_bias.masked_fill(~key_padding_mask.view((b_size, 1, 1, s_k)), torch.finfo(query.dtype).min) - query = rearrange(query, 'b s (h d) -> b s h d', h=n_heads) - key = rearrange(key, 'b s (h d) -> b s h d', h=1 if multiquery else n_heads) - value = rearrange(value, 'b s (h d) -> b s h d', h=1 if multiquery else n_heads) - if multiquery: - key = key.expand(*key.shape[:2], n_heads, key.size(-1)) - value = value.expand(*value.shape[:2], n_heads, value.size(-1)) - reset_is_causal = _reset_is_causal(query.size(1), key.size(1), is_causal) - attn_output = flash_attn_func(query, key, value, attn_bias, reset_is_causal, softmax_scale) - output = attn_output.view(*attn_output.shape[:2], -1) - return (output, None, past_key_value) - -class MultiheadAttention(nn.Module): - """Multi-head self attention. - - Using torch or triton attention implemetation enables user to also use - additive bias. - """ - - def __init__(self, d_model: int, n_heads: int, attn_impl: str='triton', clip_qkv: Optional[float]=None, qk_ln: bool=False, softmax_scale: Optional[float]=None, attn_pdrop: float=0.0, low_precision_layernorm: bool=False, verbose: int=0, device: Optional[str]=None): - super().__init__() - self.attn_impl = attn_impl - self.clip_qkv = clip_qkv - self.qk_ln = qk_ln - self.d_model = d_model - self.n_heads = n_heads - self.softmax_scale = softmax_scale - if self.softmax_scale is None: - self.softmax_scale = 1 / math.sqrt(self.d_model / self.n_heads) - self.attn_dropout_p = attn_pdrop - self.Wqkv = nn.Linear(self.d_model, 3 * self.d_model, device=device) - fuse_splits = (d_model, 2 * d_model) - self.Wqkv._fused = (0, fuse_splits) - if self.qk_ln: - layernorm_class = LPLayerNorm if low_precision_layernorm else nn.LayerNorm - self.q_ln = layernorm_class(self.d_model, device=device) - self.k_ln = layernorm_class(self.d_model, device=device) - if self.attn_impl == 'flash': - self.attn_fn = flash_attn_fn - elif self.attn_impl == 'triton': - self.attn_fn = triton_flash_attn_fn - if verbose: - warnings.warn('While `attn_impl: triton` can be faster than `attn_impl: flash` ' + 'it uses more memory. When training larger models this can trigger ' + 'alloc retries which hurts performance. If encountered, we recommend ' + 'using `attn_impl: flash` if your model does not use `alibi` or `prefix_lm`.') - elif self.attn_impl == 'torch': - self.attn_fn = scaled_multihead_dot_product_attention - if torch.cuda.is_available() and verbose: - warnings.warn('Using `attn_impl: torch`. 
If your model does not use `alibi` or ' + '`prefix_lm` we recommend using `attn_impl: flash` otherwise ' + 'we recommend using `attn_impl: triton`.') - else: - raise ValueError(f'attn_impl={attn_impl!r} is an invalid setting.') - self.out_proj = nn.Linear(self.d_model, self.d_model, device=device) - self.out_proj._is_residual = True - - def forward(self, x, past_key_value=None, attn_bias=None, attention_mask=None, is_causal=True, needs_weights=False): - qkv = self.Wqkv(x) - if self.clip_qkv: - qkv.clamp_(min=-self.clip_qkv, max=self.clip_qkv) - (query, key, value) = qkv.chunk(3, dim=2) - key_padding_mask = attention_mask - if self.qk_ln: - dtype = query.dtype - query = self.q_ln(query).to(dtype) - key = self.k_ln(key).to(dtype) - (context, attn_weights, past_key_value) = self.attn_fn(query, key, value, self.n_heads, past_key_value=past_key_value, softmax_scale=self.softmax_scale, attn_bias=attn_bias, key_padding_mask=key_padding_mask, is_causal=is_causal, dropout_p=self.attn_dropout_p, training=self.training, needs_weights=needs_weights) - return (self.out_proj(context), attn_weights, past_key_value) - -class MultiQueryAttention(nn.Module): - """Multi-Query self attention. - - Using torch or triton attention implemetation enables user to also use - additive bias. - """ - - def __init__(self, d_model: int, n_heads: int, attn_impl: str='triton', clip_qkv: Optional[float]=None, qk_ln: bool=False, softmax_scale: Optional[float]=None, attn_pdrop: float=0.0, low_precision_layernorm: bool=False, verbose: int=0, device: Optional[str]=None): - super().__init__() - self.attn_impl = attn_impl - self.clip_qkv = clip_qkv - self.qk_ln = qk_ln - self.d_model = d_model - self.n_heads = n_heads - self.head_dim = d_model // n_heads - self.softmax_scale = softmax_scale - if self.softmax_scale is None: - self.softmax_scale = 1 / math.sqrt(self.head_dim) - self.attn_dropout_p = attn_pdrop - self.Wqkv = nn.Linear(d_model, d_model + 2 * self.head_dim, device=device) - fuse_splits = (d_model, d_model + self.head_dim) - self.Wqkv._fused = (0, fuse_splits) - if self.qk_ln: - layernorm_class = LPLayerNorm if low_precision_layernorm else nn.LayerNorm - self.q_ln = layernorm_class(d_model, device=device) - self.k_ln = layernorm_class(self.head_dim, device=device) - if self.attn_impl == 'flash': - self.attn_fn = flash_attn_fn - elif self.attn_impl == 'triton': - self.attn_fn = triton_flash_attn_fn - if verbose: - warnings.warn('While `attn_impl: triton` can be faster than `attn_impl: flash` ' + 'it uses more memory. When training larger models this can trigger ' + 'alloc retries which hurts performance. If encountered, we recommend ' + 'using `attn_impl: flash` if your model does not use `alibi` or `prefix_lm`.') - elif self.attn_impl == 'torch': - self.attn_fn = scaled_multihead_dot_product_attention - if torch.cuda.is_available() and verbose: - warnings.warn('Using `attn_impl: torch`. 
If your model does not use `alibi` or ' + '`prefix_lm` we recommend using `attn_impl: flash` otherwise ' + 'we recommend using `attn_impl: triton`.') - else: - raise ValueError(f'attn_impl={attn_impl!r} is an invalid setting.') - self.out_proj = nn.Linear(self.d_model, self.d_model, device=device) - self.out_proj._is_residual = True - - def forward(self, x, past_key_value=None, attn_bias=None, attention_mask=None, is_causal=True, needs_weights=False): - qkv = self.Wqkv(x) - if self.clip_qkv: - qkv.clamp_(min=-self.clip_qkv, max=self.clip_qkv) - (query, key, value) = qkv.split([self.d_model, self.head_dim, self.head_dim], dim=2) - key_padding_mask = attention_mask - if self.qk_ln: - dtype = query.dtype - query = self.q_ln(query).to(dtype) - key = self.k_ln(key).to(dtype) - (context, attn_weights, past_key_value) = self.attn_fn(query, key, value, self.n_heads, past_key_value=past_key_value, softmax_scale=self.softmax_scale, attn_bias=attn_bias, key_padding_mask=key_padding_mask, is_causal=is_causal, dropout_p=self.attn_dropout_p, training=self.training, needs_weights=needs_weights, multiquery=True) - return (self.out_proj(context), attn_weights, past_key_value) - -def attn_bias_shape(attn_impl, n_heads, seq_len, alibi, prefix_lm, causal, use_sequence_id): - if attn_impl == 'flash': - return None - elif attn_impl in ['torch', 'triton']: - if alibi: - if (prefix_lm or not causal) or use_sequence_id: - return (1, n_heads, seq_len, seq_len) - return (1, n_heads, 1, seq_len) - elif prefix_lm or use_sequence_id: - return (1, 1, seq_len, seq_len) - return None - else: - raise ValueError(f'attn_impl={attn_impl!r} is an invalid setting.') - -def build_attn_bias(attn_impl, attn_bias, n_heads, seq_len, causal=False, alibi=False, alibi_bias_max=8): - if attn_impl == 'flash': - return None - elif attn_impl in ['torch', 'triton']: - if alibi: - (device, dtype) = (attn_bias.device, attn_bias.dtype) - attn_bias = attn_bias.add(build_alibi_bias(n_heads, seq_len, full=not causal, alibi_bias_max=alibi_bias_max, device=device, dtype=dtype)) - return attn_bias - else: - raise ValueError(f'attn_impl={attn_impl!r} is an invalid setting.') - -def gen_slopes(n_heads, alibi_bias_max=8, device=None): - _n_heads = 2 ** math.ceil(math.log2(n_heads)) - m = torch.arange(1, _n_heads + 1, dtype=torch.float32, device=device) - m = m.mul(alibi_bias_max / _n_heads) - slopes = 1.0 / torch.pow(2, m) - if _n_heads != n_heads: - slopes = torch.concat([slopes[1::2], slopes[::2]])[:n_heads] - return slopes.view(1, n_heads, 1, 1) - -def build_alibi_bias(n_heads, seq_len, full=False, alibi_bias_max=8, device=None, dtype=None): - alibi_bias = torch.arange(1 - seq_len, 1, dtype=torch.int32, device=device).view(1, 1, 1, seq_len) - if full: - alibi_bias = alibi_bias - torch.arange(1 - seq_len, 1, dtype=torch.int32, device=device).view(1, 1, seq_len, 1) - alibi_bias = alibi_bias.abs().mul(-1) - slopes = gen_slopes(n_heads, alibi_bias_max, device=device) - alibi_bias = alibi_bias * slopes - return alibi_bias.to(dtype=dtype) -ATTN_CLASS_REGISTRY = {'multihead_attention': MultiheadAttention, 'multiquery_attention': MultiQueryAttention} \ No newline at end of file diff --git a/spaces/banana-projects/convai/server/lib/obj.d.ts b/spaces/banana-projects/convai/server/lib/obj.d.ts deleted file mode 100644 index 6694da54405dccd020d42661d46166767c03ceab..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/convai/server/lib/obj.d.ts +++ /dev/null @@ -1,8 +0,0 @@ -/** - * Hf helper type for a dictionary-like object with arbitrary 
keys. - */ -declare interface Obj { - [key: string]: T -} - -type Extend = T & Obj; diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220326215622.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220326215622.py deleted file mode 100644 index 0aae55360740accb8e84bd9c2010b10d708f2eea..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220326215622.py +++ /dev/null @@ -1,68 +0,0 @@ -import os -os.system("pip install gfpgan") - -os.system("pip freeze") -os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .") -import random -import gradio as gr -from PIL import Image -import torch -torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg') -torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png') -torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg') -torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg') -torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg') - - - - -import cv2 -import glob -import numpy as np -from basicsr.utils import imwrite -from gfpgan import GFPGANer - -import warnings -warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. ' - 'If you really want to use it, please modify the corresponding codes.') -bg_upsampler = None - - - -# set up GFPGAN restorer -restorer = GFPGANer( - model_path='GFPGANCleanv1-NoCE-C2.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=bg_upsampler) - - - - - -def inference(img): - input_img = cv2.imread(img, cv2.IMREAD_COLOR) - cropped_faces, restored_faces, restored_img = restorer.enhance( - input_img, has_aligned=False, only_center_face=False, paste_back=True) - - return Image.fromarray(restored_faces[0][:,:,::-1]) - -title = "GFP-GAN" -description = "Gradio demo for GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please click submit only once" -article = "

Towards Real-World Blind Face Restoration with Generative Facial Prior | Github Repo

visitor badge
" -gr.Interface( - inference, - [gr.inputs.Image(type="filepath", label="Input")], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['lincoln.jpg'], - ['einstein.png'], - ['edison.jpg'], - ['Henry.jpg'], - ['Frida.jpg'] - ] - ).launch(enable_queue=True,cache_examples=True) \ No newline at end of file diff --git a/spaces/beihai/PDF-Table-Extractor/.history/app_20220621102938.py b/spaces/beihai/PDF-Table-Extractor/.history/app_20220621102938.py deleted file mode 100644 index 8f88d7e738d88431b5accb10ef5c59597118b7e6..0000000000000000000000000000000000000000 --- a/spaces/beihai/PDF-Table-Extractor/.history/app_20220621102938.py +++ /dev/null @@ -1,42 +0,0 @@ -#-*- coding : utf-8-*- -import base64 -from subprocess import STDOUT -import streamlit as st -import pandas as pd -import camelot as cam # extracting tables from PDFs - -st.title("PDF Table Extractor") - -input_pdf = st.file_uploader(label = "", type = 'pdf') - -background = st.selectbox("表格线条是否隐藏",(False,True)) -extractor_mode = st.selectbox("单页抽取 OR 全文抽取",("单页抽取","全文抽取")) - -if input_pdf is not None: - # byte object into a PDF file - with open("input.pdf", "wb") as f: - base64_pdf = base64.b64encode(input_pdf.read()).decode('utf-8') - f.write(base64.b64decode(base64_pdf)) - f.close() - if extractor_mode == "单页抽取": - page_number = st.text_input("请填写表格所在PDF页码,eg: 3", value = 1) - # read the pdf and parse it using stream - tables = cam.read_pdf("input.pdf", pages=page_number, process_background=background) - result = pd.ExcelWriter('result.xlsx', engine='xlsxwriter') - tables[1].to_excel(result,index=False) - # for i in range(0,len(tables)): - # table = tables[i].df - # sheetname = str(i) - # table.to_excel(result, sheetname,index=False) - - with open('result.xlsx','rb') as f: - st.download_button('提取完成,点击下载!', f,file_name='result.xlsx',mime="application/vnd.ms-excel") - if extractor_mode == "全文抽取": - tables_all= cam.read_pdf("input.pdf", pages="all", process_background=background) - result_all = pd.ExcelWriter('result_all.xlsx', engine='xlsxwriter') - for i in range(0,len(tables_all)): - table = tables_all[i].df - sheetname = str(i) - table.to_excel(result_all, sheetname,index=False) - with open('result_all.xlsx','rb') as f: - st.download_button('抽取完成,点击下载!', f,file_name='result_all.xlsx',mime="application/vnd.ms-excel") \ No newline at end of file diff --git a/spaces/bgk/sipariseng/app.py b/spaces/bgk/sipariseng/app.py deleted file mode 100644 index e4359a849ee23fd74f7a4923a39358ac2e719a48..0000000000000000000000000000000000000000 --- a/spaces/bgk/sipariseng/app.py +++ /dev/null @@ -1,39 +0,0 @@ -import gradio as gr -from simpletransformers.ner import NERModel -import string - -labels = ["O", "B-FOOD_QUANTITY", "B-FOOD_SIZE", "B-FOOD", "I-FOOD", "B-FOOD_INGREDIENTS", "I-FOOD_INGREDIENTS", "B-DRINK_SIZE", "B-DRINK_QUANTITY", "B-DRINK", "B-PAYMENT", "I-PAYMENT", "B-DELIVERY_ADDRESS", "I-DRINK_SIZE", "I-DRINK", "I-FOOD_SIZE", "I-DELIVERY_ADDRESS"] - -model = NERModel( - "roberta", - "bgk/berteng", labels=labels, - use_cuda=False, - ignore_mismatched_sizes=True - ) - -examples=[['I want two hamburgers and one sprite and one milkshake send it to my workplace' ], [' I want to order two large pizzas, two medium coke, send it to my home, I will pay with cash' ]] - -def ner(text): - trans_table = text.maketrans('', '', string.punctuation) - text = text.translate(trans_table) - text=text.lower() - - prediction, model_output = model.predict([text]) - - filtered_output = 
(({v: k} for d in sublist for k, v in d.items() if (v.startswith("B-") or v.startswith("I-"))) for sublist in prediction) - entities = [] - for sublist in filtered_output: - for d in sublist: - for k, v in d.items(): - label = k.split("-")[1] - entities.extend([(label, v)]) - - return entities # prediction - -demo = gr.Interface(ner, - gr.Textbox(placeholder="Enter your sentences here..."), - gr.HighlightedText(), - examples=examples) - - -demo.launch() diff --git a/spaces/bguberfain/Detic/detic/modeling/backbone/timm.py b/spaces/bguberfain/Detic/detic/modeling/backbone/timm.py deleted file mode 100644 index f06b25c8036d99bb6b9518662ab1664a4521b8f5..0000000000000000000000000000000000000000 --- a/spaces/bguberfain/Detic/detic/modeling/backbone/timm.py +++ /dev/null @@ -1,200 +0,0 @@ - #!/usr/bin/env python -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. -import math -from os.path import join -import numpy as np -import copy -from functools import partial - -import torch -from torch import nn -import torch.utils.model_zoo as model_zoo -import torch.nn.functional as F -import fvcore.nn.weight_init as weight_init - -from detectron2.modeling.backbone import FPN -from detectron2.modeling.backbone.build import BACKBONE_REGISTRY -from detectron2.layers.batch_norm import get_norm, FrozenBatchNorm2d -from detectron2.modeling.backbone import Backbone - -from timm import create_model -from timm.models.helpers import build_model_with_cfg -from timm.models.registry import register_model -from timm.models.resnet import ResNet, Bottleneck -from timm.models.resnet import default_cfgs as default_cfgs_resnet - - -class CustomResNet(ResNet): - def __init__(self, **kwargs): - self.out_indices = kwargs.pop('out_indices') - super().__init__(**kwargs) - - - def forward(self, x): - x = self.conv1(x) - x = self.bn1(x) - x = self.act1(x) - x = self.maxpool(x) - ret = [x] - x = self.layer1(x) - ret.append(x) - x = self.layer2(x) - ret.append(x) - x = self.layer3(x) - ret.append(x) - x = self.layer4(x) - ret.append(x) - return [ret[i] for i in self.out_indices] - - - def load_pretrained(self, cached_file): - data = torch.load(cached_file, map_location='cpu') - if 'state_dict' in data: - self.load_state_dict(data['state_dict']) - else: - self.load_state_dict(data) - - -model_params = { - 'resnet50': dict(block=Bottleneck, layers=[3, 4, 6, 3]), - 'resnet50_in21k': dict(block=Bottleneck, layers=[3, 4, 6, 3]), -} - - -def create_timm_resnet(variant, out_indices, pretrained=False, **kwargs): - params = model_params[variant] - default_cfgs_resnet['resnet50_in21k'] = \ - copy.deepcopy(default_cfgs_resnet['resnet50']) - default_cfgs_resnet['resnet50_in21k']['url'] = \ - 'https://miil-public-eu.oss-eu-central-1.aliyuncs.com/model-zoo/ImageNet_21K_P/models/resnet50_miil_21k.pth' - default_cfgs_resnet['resnet50_in21k']['num_classes'] = 11221 - - return build_model_with_cfg( - CustomResNet, variant, pretrained, - default_cfg=default_cfgs_resnet[variant], - out_indices=out_indices, - pretrained_custom_load=True, - **params, - **kwargs) - - -class LastLevelP6P7_P5(nn.Module): - """ - """ - def __init__(self, in_channels, out_channels): - super().__init__() - self.num_levels = 2 - self.in_feature = "p5" - self.p6 = nn.Conv2d(in_channels, out_channels, 3, 2, 1) - self.p7 = nn.Conv2d(out_channels, out_channels, 3, 2, 1) - for module in [self.p6, self.p7]: - weight_init.c2_xavier_fill(module) - - def forward(self, c5): - p6 = self.p6(c5) - p7 = self.p7(F.relu(p6)) - return [p6, p7] - - -def 
freeze_module(x): - """ - """ - for p in x.parameters(): - p.requires_grad = False - FrozenBatchNorm2d.convert_frozen_batchnorm(x) - return x - - -class TIMM(Backbone): - def __init__(self, base_name, out_levels, freeze_at=0, norm='FrozenBN'): - super().__init__() - out_indices = [x - 1 for x in out_levels] - if 'resnet' in base_name: - self.base = create_timm_resnet( - base_name, out_indices=out_indices, - pretrained=False) - elif 'eff' in base_name: - self.base = create_model( - base_name, features_only=True, - out_indices=out_indices, pretrained=True) - else: - assert 0, base_name - feature_info = [dict(num_chs=f['num_chs'], reduction=f['reduction']) \ - for i, f in enumerate(self.base.feature_info)] - self._out_features = ['layer{}'.format(x) for x in out_levels] - self._out_feature_channels = { - 'layer{}'.format(l): feature_info[l - 1]['num_chs'] for l in out_levels} - self._out_feature_strides = { - 'layer{}'.format(l): feature_info[l - 1]['reduction'] for l in out_levels} - self._size_divisibility = max(self._out_feature_strides.values()) - if 'resnet' in base_name: - self.freeze(freeze_at) - if norm == 'FrozenBN': - self = FrozenBatchNorm2d.convert_frozen_batchnorm(self) - - def freeze(self, freeze_at=0): - """ - """ - if freeze_at >= 1: - print('Frezing', self.base.conv1) - self.base.conv1 = freeze_module(self.base.conv1) - if freeze_at >= 2: - print('Frezing', self.base.layer1) - self.base.layer1 = freeze_module(self.base.layer1) - - def forward(self, x): - features = self.base(x) - ret = {k: v for k, v in zip(self._out_features, features)} - return ret - - @property - def size_divisibility(self): - return self._size_divisibility - - -@BACKBONE_REGISTRY.register() -def build_timm_backbone(cfg, input_shape): - model = TIMM( - cfg.MODEL.TIMM.BASE_NAME, - cfg.MODEL.TIMM.OUT_LEVELS, - freeze_at=cfg.MODEL.TIMM.FREEZE_AT, - norm=cfg.MODEL.TIMM.NORM, - ) - return model - - -@BACKBONE_REGISTRY.register() -def build_p67_timm_fpn_backbone(cfg, input_shape): - """ - """ - bottom_up = build_timm_backbone(cfg, input_shape) - in_features = cfg.MODEL.FPN.IN_FEATURES - out_channels = cfg.MODEL.FPN.OUT_CHANNELS - backbone = FPN( - bottom_up=bottom_up, - in_features=in_features, - out_channels=out_channels, - norm=cfg.MODEL.FPN.NORM, - top_block=LastLevelP6P7_P5(out_channels, out_channels), - fuse_type=cfg.MODEL.FPN.FUSE_TYPE, - ) - return backbone - -@BACKBONE_REGISTRY.register() -def build_p35_timm_fpn_backbone(cfg, input_shape): - """ - """ - bottom_up = build_timm_backbone(cfg, input_shape) - - in_features = cfg.MODEL.FPN.IN_FEATURES - out_channels = cfg.MODEL.FPN.OUT_CHANNELS - backbone = FPN( - bottom_up=bottom_up, - in_features=in_features, - out_channels=out_channels, - norm=cfg.MODEL.FPN.NORM, - top_block=None, - fuse_type=cfg.MODEL.FPN.FUSE_TYPE, - ) - return backbone \ No newline at end of file diff --git a/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/rife/model/loss.py b/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/rife/model/loss.py deleted file mode 100644 index 72e5de6af050df7d55c2871a69637077970ddfb9..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/rife/model/loss.py +++ /dev/null @@ -1,128 +0,0 @@ -import torch -import numpy as np -import torch.nn as nn -import torch.nn.functional as F -import torchvision.models as models - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - - -class 
EPE(nn.Module): - def __init__(self): - super(EPE, self).__init__() - - def forward(self, flow, gt, loss_mask): - loss_map = (flow - gt.detach()) ** 2 - loss_map = (loss_map.sum(1, True) + 1e-6) ** 0.5 - return (loss_map * loss_mask) - - -class Ternary(nn.Module): - def __init__(self): - super(Ternary, self).__init__() - patch_size = 7 - out_channels = patch_size * patch_size - self.w = np.eye(out_channels).reshape( - (patch_size, patch_size, 1, out_channels)) - self.w = np.transpose(self.w, (3, 2, 0, 1)) - self.w = torch.tensor(self.w).float().to(device) - - def transform(self, img): - patches = F.conv2d(img, self.w, padding=3, bias=None) - transf = patches - img - transf_norm = transf / torch.sqrt(0.81 + transf**2) - return transf_norm - - def rgb2gray(self, rgb): - r, g, b = rgb[:, 0:1, :, :], rgb[:, 1:2, :, :], rgb[:, 2:3, :, :] - gray = 0.2989 * r + 0.5870 * g + 0.1140 * b - return gray - - def hamming(self, t1, t2): - dist = (t1 - t2) ** 2 - dist_norm = torch.mean(dist / (0.1 + dist), 1, True) - return dist_norm - - def valid_mask(self, t, padding): - n, _, h, w = t.size() - inner = torch.ones(n, 1, h - 2 * padding, w - 2 * padding).type_as(t) - mask = F.pad(inner, [padding] * 4) - return mask - - def forward(self, img0, img1): - img0 = self.transform(self.rgb2gray(img0)) - img1 = self.transform(self.rgb2gray(img1)) - return self.hamming(img0, img1) * self.valid_mask(img0, 1) - - -class SOBEL(nn.Module): - def __init__(self): - super(SOBEL, self).__init__() - self.kernelX = torch.tensor([ - [1, 0, -1], - [2, 0, -2], - [1, 0, -1], - ]).float() - self.kernelY = self.kernelX.clone().T - self.kernelX = self.kernelX.unsqueeze(0).unsqueeze(0).to(device) - self.kernelY = self.kernelY.unsqueeze(0).unsqueeze(0).to(device) - - def forward(self, pred, gt): - N, C, H, W = pred.shape[0], pred.shape[1], pred.shape[2], pred.shape[3] - img_stack = torch.cat( - [pred.reshape(N*C, 1, H, W), gt.reshape(N*C, 1, H, W)], 0) - sobel_stack_x = F.conv2d(img_stack, self.kernelX, padding=1) - sobel_stack_y = F.conv2d(img_stack, self.kernelY, padding=1) - pred_X, gt_X = sobel_stack_x[:N*C], sobel_stack_x[N*C:] - pred_Y, gt_Y = sobel_stack_y[:N*C], sobel_stack_y[N*C:] - - L1X, L1Y = torch.abs(pred_X-gt_X), torch.abs(pred_Y-gt_Y) - loss = (L1X+L1Y) - return loss - -class MeanShift(nn.Conv2d): - def __init__(self, data_mean, data_std, data_range=1, norm=True): - c = len(data_mean) - super(MeanShift, self).__init__(c, c, kernel_size=1) - std = torch.Tensor(data_std) - self.weight.data = torch.eye(c).view(c, c, 1, 1) - if norm: - self.weight.data.div_(std.view(c, 1, 1, 1)) - self.bias.data = -1 * data_range * torch.Tensor(data_mean) - self.bias.data.div_(std) - else: - self.weight.data.mul_(std.view(c, 1, 1, 1)) - self.bias.data = data_range * torch.Tensor(data_mean) - self.requires_grad = False - -class VGGPerceptualLoss(torch.nn.Module): - def __init__(self, rank=0): - super(VGGPerceptualLoss, self).__init__() - blocks = [] - pretrained = True - self.vgg_pretrained_features = models.vgg19(pretrained=pretrained).features - self.normalize = MeanShift([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], norm=True).cuda() - for param in self.parameters(): - param.requires_grad = False - - def forward(self, X, Y, indices=None): - X = self.normalize(X) - Y = self.normalize(Y) - indices = [2, 7, 12, 21, 30] - weights = [1.0/2.6, 1.0/4.8, 1.0/3.7, 1.0/5.6, 10/1.5] - k = 0 - loss = 0 - for i in range(indices[-1]): - X = self.vgg_pretrained_features[i](X) - Y = self.vgg_pretrained_features[i](Y) - if (i+1) in indices: - loss += 
weights[k] * (X - Y.detach()).abs().mean() * 0.1 - k += 1 - return loss - -if __name__ == '__main__': - img0 = torch.zeros(3, 3, 256, 256).float().to(device) - img1 = torch.tensor(np.random.normal( - 0, 1, (3, 3, 256, 256))).float().to(device) - ternary_loss = Ternary() - print(ternary_loss(img0, img1).shape) diff --git a/spaces/bigjoker/stable-diffusion-webui/javascript/textualInversion.js b/spaces/bigjoker/stable-diffusion-webui/javascript/textualInversion.js deleted file mode 100644 index 1103cf6fb1c0d9f0fd6f22dd3d66e8c9d1edbe6c..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/javascript/textualInversion.js +++ /dev/null @@ -1,17 +0,0 @@ - - - -function start_training_textual_inversion(){ - gradioApp().querySelector('#ti_error').innerHTML='' - - var id = randomId() - requestProgress(id, gradioApp().getElementById('ti_output'), gradioApp().getElementById('ti_gallery'), function(){}, function(progress){ - gradioApp().getElementById('ti_progress').innerHTML = progress.textinfo - }) - - var res = args_to_array(arguments) - - res[0] = id - - return res -} diff --git a/spaces/bilgeyucel/prompt-lemmatizer/app.py b/spaces/bilgeyucel/prompt-lemmatizer/app.py deleted file mode 100644 index 0cd9827f20f9583ca6a48685d87c3d015c98b032..0000000000000000000000000000000000000000 --- a/spaces/bilgeyucel/prompt-lemmatizer/app.py +++ /dev/null @@ -1,65 +0,0 @@ -import gradio as gr -import concurrent.futures -from haystack.nodes import PromptNode - -from utils import lemmatizer_func - -def run_prompt(prompt, api_key, model_name, max_length): - prompt_node = PromptNode(model_name_or_path=model_name, api_key=api_key, max_length=max_length) - lemmatized_prompt = lemmatizer_func(prompt) - with concurrent.futures.ThreadPoolExecutor() as executor: - future_plain = executor.submit(prompt_node, prompt) - future_lemmatized = executor.submit(prompt_node, lemmatized_prompt) - - response_plain = future_plain.result() - response_lemmatized = future_lemmatized.result() - return lemmatized_prompt, response_plain[0][0], response_plain[1]["prompt_tokens"], response_plain[1]["completion_tokens"], response_lemmatized[0][0], response_lemmatized[1]["prompt_tokens"], response_lemmatized[1]["completion_tokens"] - -description = """ -# Prompt Lemmatizer 🐢 -## Lemmatize your prompts and compare the outputs of lemmatized and non-lemmatized versions. - -Enter an OpenAI or Cohere key, choose your model and set the `max_length`. - -Built by [Bilge Yucel](https://twitter.com/bilgeycl) and [Stefano Fiorucci](https://github.com/anakin87), with [Haystack](https://github.com/deepset-ai/haystack). -""" - -with gr.Blocks(theme="default") as demo: - gr.Markdown(value=description) - with gr.Row(): - api_key = gr.Textbox(label="Enter your api key", type="password") - model_name = gr.Dropdown(["text-davinci-003", "gpt-3.5-turbo", "gpt-4", "gpt-4-32k", "command", "command-light", "base", "base-light"], value="gpt-3.5-turbo", label="Choose your model!") - max_length = gr.Slider(100, 500, value=100, step=10, label="Max Length", info="Max token length of the response. Choose between 100 and 500") - with gr.Row(): - prompt = gr.TextArea(label="Prompt", value="Rachel has 17 apples. She gives 9 to Sarah. How many apples does Rachel have now?") - gr.Examples( - [ - "I want you to act as a travel guide. I will write you my location and you will suggest a place to visit near my location. In some cases, I will also give you the type of places I will visit. 
You will also suggest me places of similar type that are close to my first location. My first suggestion request is \"I am in Italy and I want to visit only museums.\"", - "Antibiotics are a type of medication used to treat bacterial infections. They work by either killing the bacteria or preventing them from reproducing, allowing the body’s immune system to fight off the infection. Antibiotics are usually taken orally in the form of pills, capsules, or liquid solutions, or sometimes administered intravenously. They are not effective against viral infections, and using them inappropriately can lead to antibiotic resistance. Explain the above in one sentence:", - "Please give a sentiment for this context. Answer with positive, negative or neutral. Context: A flicker in the dark started of interesting and I was glued to the novel. It was just a little longer that I had anticipated to get to the bottom of the story. I felt sorry for the Chloe's mother but I had thought there was something odd about her brother. Well being a murderer was it any wonder he did not like this sister's boyfriend because I think he knew what happened. I love the cover of the book and the title is good. If only hardbacks had the cover printed onto them. Answer:", - ], - examples_per_page=1, - inputs=prompt, - label="Click on any example" - ) - submit_btn = gr.Button("✂️ Let's lemmatize and see!") - with gr.Row(): - with gr.Column(): - with gr.Row(): - token_count_plain = gr.Number(label="Prompt Token Count") - token_count_plain_completion = gr.Number(label="Output Token Count") - with gr.Row(): - prompt_response = gr.TextArea(label="Output", show_copy_button=True) - with gr.Column(): - with gr.Row(): - token_count_lemmatized = gr.Number(label="Lemmatized Prompt Token Count") - token_count_lemmatized_completion = gr.Number(label="Output Token Count (Lemmatized Prompt)") - lemmatized_prompt_response = gr.TextArea(label="Output (Lemmatized Prompt)", show_copy_button=True) - with gr.Accordion("See Lemmatized Prompt", open=False): - lemmatized_prompt = gr.TextArea(show_copy_button=True, show_label=False, container=False) - - submit_btn.click(fn=run_prompt, inputs=[prompt, api_key, model_name, max_length], outputs=[lemmatized_prompt, prompt_response, token_count_plain, token_count_plain_completion, lemmatized_prompt_response, token_count_lemmatized, token_count_lemmatized_completion]) - -if __name__ == "__main__": - demo.launch() - diff --git a/spaces/bioriAsaeru/text-to-voice/Bhouri Movie Download 720p In Hindi BEST.md b/spaces/bioriAsaeru/text-to-voice/Bhouri Movie Download 720p In Hindi BEST.md deleted file mode 100644 index 8333ae0c4cf64b7abeb093cbcc9dae9275e2e1fd..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Bhouri Movie Download 720p In Hindi BEST.md +++ /dev/null @@ -1,40 +0,0 @@ -
-

Bhouri Movie Download 720p in Hindi: A Tragic Love Story of a Young Woman

-

Bhouri is a 2017 Hindi movie that tells the story of a 23-year-old woman who is married to a 55-year-old man and faces the exploitation of women in a male-dominated society. The movie stars Raghuveer Yadav, Masha Paur, Aditya Pancholi, Kunickaa Sadanand, Manoj Joshi, and Shakti Kapoor in the lead roles. The movie is directed by Jasbir Bijender Bhati and written by Kushal Ved Bakshi. The movie was released on 24 February 2017 and received mixed reviews from critics and audiences.

-

Bhouri movie download 720p in hindi


Download - https://urloso.com/2uyRQF



-

What is the plot of Bhouri?

-

The plot of Bhouri revolves around the life of Bhouri (Masha Paur), a young woman who lives in a remote village with her husband Ghulam (Raghuveer Yadav), who is more than twice her age. Bhouri is treated as a slave by her husband and his family, who abuse her physically and mentally. She is also harassed by the men of the village, who lust after her beauty and innocence. Bhouri's only solace is her friendship with Sumitra (Kunickaa Sadanand), a widow who runs a small shop in the village.

-

One day, Bhouri meets Sankar (Aditya Pancholi), a contractor who comes to the village to build a road. Sankar is attracted to Bhouri and offers her a job as his assistant. Bhouri sees this as an opportunity to escape from her miserable life and agrees to work with him. However, this decision leads to a series of tragic events that change Bhouri's life forever.

-

How to watch or download Bhouri movie online?

-

If you want to watch or download Bhouri movie online, you have several options. You can stream the movie for free on websites like Hindimoviesonline.to or onlinemovieshindi.com. These websites offer HD quality streaming and downloading of the movie without any registration or subscription. However, these websites may not be legal or safe to use, as they may contain ads, pop-ups, malware, or viruses that can harm your device or data.

-

If you want to watch or download Bhouri movie online legally and safely, you can use platforms like Amazon Prime Video or Netflix. These platforms offer high-quality streaming and downloading of the movie with a subscription fee. You can also enjoy other benefits like ad-free viewing, offline access, multiple devices support, and original content. However, these platforms may not have the movie available in your region or language.

-

Why should you watch Bhouri movie?

-

Bhouri is a movie that highlights the plight of women in rural India who face oppression, violence, and discrimination at the hands of men. The movie portrays the reality of child marriage, domestic abuse, sexual harassment, honor killing, and female infanticide that are prevalent in many parts of the country. The movie also showcases the courage and resilience of women who fight for their dignity and freedom against all odds.

-

-

Bhouri features powerful performances by the cast, especially Masha Paur, who plays the titular role. The movie also carries a strong message about women's empowerment and social justice that can inspire and educate viewers, along with emotional and dramatic scenes that can touch your heart and make you empathize with the characters.

-

Conclusion

-

Bhouri movie download 720p in hindi is a movie that tells the story of a young woman who is married to an old man and faces the exploitation of women in a male dominated society. The movie stars Raghuveer Yadav, Masha Paur, Aditya Pancholi, Kunickaa Sadanand, Manoj Joshi, and Shakti Kapoor in the lead roles. The movie is directed by Jasbir Bijender Bhati and written by Kushal Ved Bakshi. The movie was released on 24 February 2017 and received mixed reviews from critics and audiences.

-

If you want to watch or download Bhouri movie online, you can stream it for free on websites like Hindimoviesonline.to or onlinemovieshindi.com. However, these websites may not be legal or safe to use. If you want to watch or download Bhouri movie online legally and safely, you can use platforms like Amazon Prime Video or Netflix. However, these platforms may not have the movie available in your region or language.

-

If you are looking for a movie that highlights the plight of women in rural India who face oppression, violence, and discrimination at the hands of men, you should consider watching Bhouri movie. The movie portrays the reality of child marriage, domestic abuse, sexual harassment, honor killing, and female infanticide that are prevalent in many parts of the country. The movie also showcases the courage and resilience of women who fight for their dignity and freedom against all odds.

-

How to review Bhouri movie?

-

If you have watched or downloaded Bhouri movie online, you may want to share your opinion and feedback about the movie with others. You can do this by writing a review of the movie on platforms like IMDb, Rotten Tomatoes, or Metacritic. These platforms allow you to rate the movie on a scale of 1 to 10 or 1 to 100 and write a brief summary of your thoughts and feelings about the movie. You can also read other reviews of the movie by critics and audiences and compare your views with them.

-

When writing a review of Bhouri movie, you should keep in mind some tips and guidelines that can help you write a good and honest review. Some of these tips and guidelines are:

-
    -
  • Be clear and concise. Write your review in simple and clear language that can be easily understood by anyone. Avoid using jargon, slang, or abbreviations that may confuse the readers. Keep your review short and to the point, without rambling or repeating yourself.
  • -
  • Be objective and fair. Write your review based on your own experience and observation of the movie, without being biased or influenced by external factors. Avoid making personal attacks or insults on the actors, directors, writers, or producers of the movie. Give credit where credit is due and criticize where criticism is warranted.
  • -
  • Be specific and relevant. Write your review focusing on the aspects of the movie that are relevant to the genre, theme, plot, characters, performance, direction, cinematography, music, editing, etc. Provide specific examples and evidence from the movie to support your opinions and claims. Avoid writing vague or general statements that do not add any value or insight to your review.
  • -
  • Be original and creative. Write your review in your own words and style, without copying or plagiarizing from other sources. Express your own voice and personality in your review, without imitating or mimicking others. Use humor, irony, sarcasm, or other literary devices to make your review more interesting and engaging.
  • -
-

Summary

-

In this article, we have shown you how to watch or download Bhouri movie online in 720p in hindi. We have also shown you how to write a review of the movie on platforms like IMDb, Rotten Tomatoes, or Metacritic. We have also given you some tips and guidelines on how to write a good and honest review of the movie.

-

Bhouri is a 2017 Hindi movie that tells the story of a young woman who is married to an old man and faces the exploitation of women in a male dominated society. The movie stars Raghuveer Yadav, Masha Paur, Aditya Pancholi, Kunickaa Sadanand, Manoj Joshi, and Shakti Kapoor in the lead roles. The movie is directed by Jasbir Bijender Bhati and written by Kushal Ved Bakshi. The movie was released on 24 February 2017 and received mixed reviews from critics and audiences.

-

If you are looking for a movie that highlights the plight of women in rural India who face oppression, violence, and discrimination at the hands of men, you should consider watching Bhouri movie. The movie portrays the reality of child marriage, domestic abuse, sexual harassment, honor killing, and female infanticide that are prevalent in many parts of the country. The movie also showcases the courage and resilience of women who fight for their dignity and freedom against all odds.

-

We hope that this article has been useful and informative for you. If you want to learn more about Bhouri movie or download it for free in 720p in hindi, please visit its official website at https://hindimoviesonline.to/bhouri-hindi/.

-


3cee63e6c2
-
-
\ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/CRACK Nitro Pro 9.5.2.29 (preactivated) RePack By D!akov Learn How to Convert Edit and Secure PDF Files with Nitro Pro.md b/spaces/bioriAsaeru/text-to-voice/CRACK Nitro Pro 9.5.2.29 (preactivated) RePack By D!akov Learn How to Convert Edit and Secure PDF Files with Nitro Pro.md deleted file mode 100644 index 16ce50c935ced1bd92405dd1a7366f869c8d269d..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/CRACK Nitro Pro 9.5.2.29 (preactivated) RePack By D!akov Learn How to Convert Edit and Secure PDF Files with Nitro Pro.md +++ /dev/null @@ -1,6 +0,0 @@ -

CRACK Nitro Pro 9.5.2.29 (preactivated) RePack By D!akov


Download File --->>> https://urloso.com/2uyR7Y



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/bioriAsaeru/text-to-voice/CopyTrans Contacts Full Portable The Best iTunes Alternative for iPhone Backup and Restore.md b/spaces/bioriAsaeru/text-to-voice/CopyTrans Contacts Full Portable The Best iTunes Alternative for iPhone Backup and Restore.md deleted file mode 100644 index 3f2e96a0ad57cc78f5b36a6dde6423d675d283d6..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/CopyTrans Contacts Full Portable The Best iTunes Alternative for iPhone Backup and Restore.md +++ /dev/null @@ -1,10 +0,0 @@ -
-

Features of CopyTrans Contacts

  • Backup: Create backups of iPhone contacts and save them to computer.
  • Backup Restore: Restore contacts from iCloud, Google, Outlook and other sources.
  • Clean Up: Delete duplicate contacts from your iPhone.
  • Customize: Personalize contact fields like phone number, address and more.
  • Easy Restore: Transfer all contacts from iPhone to PC with a single click.
  • Edit: Add, delete, rename and modify contacts on iPhone.
  • Export: Save iPhone contacts in various formats like vCard, CSV and HTML.
  • Manage: Create, delete and edit groups of contacts.
  • Merge: Merge contacts from different sources like iCloud, Google, Outlook and more.
  • Recover: Recover contacts from iPhone backups.
  • Refresh: Refresh contacts from iPhone or iCloud with one click.
  • Remove: Unlink contacts from an iPhone or other devices.
  • Secure: Keep your contact data secure with password encryption.
  • Share: Share contacts via email, AirDrop and more.
  • Sync: Automatically sync iPhone contacts with Outlook and Gmail.
  • Compatibility and LicenseThis download is licensed as shareware for the Windows operating system from iPhone tools and can be used as a free trial until the trial period ends (after an unspecified number of days). The CopyTrans Contacts 2.202 demo is available to all software users as a free download with potential restrictions and is not necessarily the full version of this software.What version of Windows can CopyTrans Contacts run on?CopyTrans Contacts can be used on a computer running Windows 11 or Windows 10. Previous versions of the operating system shouldn't be a problem with Windows 8, Windows 7 and Windows Vista having been tested. Windows XP is supported. It comes in both 32-bit and 64-bit downloads.Filed under: CopyTrans Contacts DownloadPortable SoftwareIPhone Management SoftwareWe have tested CopyTrans Contacts 2.202 against malware with several different programs. We certify that this program is clean of viruses, malware and trojans.Free Download for Windows 29.86 MB - Tested clean
  • $$ Cost:Free Trial

    -

    copytrans contacts full portable


    Download File ✸✸✸ https://urloso.com/2uyRaT



    -

Just download CopyTrans Contacts and extract the contents of the zip file. Connect your iPhone to your PC and launch the program. The best thing about it is that you can run it from a portable flash drive without installing it. After connecting your device, it will show you all of the contacts on your iPhone.

    -

CopyTrans Manage can be used as a backup tool for your iPad. However, I recommend another tool from CopyTrans to create a full backup of your iPad or iPhone: check out iCloner. Download it from www.copytrans.net.

    -

The free version of iBackup Viewer works with full features, including extracting contacts, exporting and printing SMS & iMessage messages to PDF files, exporting phone call history, adding Safari visit history and bookmarks to desktop Safari, and viewing and recovering photos and videos.

    -

Easily extract contacts from iPhone backups and export them to Mac Address Book or Contacts.app. With iBackup Viewer, you can also save contacts as vCard (.vcf) files on disk, which are easy to share with friends and to import into online mail systems like Gmail.
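To give a concrete sense of what such an export looks like, below is a minimal sketch of a single-contact .vcf file written from Python. This is only an illustration of the vCard format itself; it is not part of iBackup Viewer or CopyTrans, and the name, phone number, and email are made-up placeholders.

```python
# Minimal illustration of the vCard (.vcf) format that contact exporters produce.
# The contact details below are made-up placeholder values.
vcard = "\r\n".join([
    "BEGIN:VCARD",
    "VERSION:3.0",
    "FN:Jane Doe",                 # formatted (display) name
    "N:Doe;Jane;;;",               # structured name: family;given;middle;prefix;suffix
    "TEL;TYPE=CELL:+1-555-0100",   # phone number
    "EMAIL:jane.doe@example.com",  # email address
    "END:VCARD",
]) + "\r\n"

with open("jane_doe.vcf", "w", encoding="utf-8") as f:
    f.write(vcard)
```

A real export simply contains one such BEGIN:VCARD/END:VCARD block per contact, and a file like this can be imported into Contacts.app or Gmail's contact manager.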

    -

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Driver Usb Irda Sanghavi Comparison with Other USB to Infrared Adapters.md b/spaces/bioriAsaeru/text-to-voice/Driver Usb Irda Sanghavi Comparison with Other USB to Infrared Adapters.md deleted file mode 100644 index 800687609d95812a4dd03d575359cca60da61cf5..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Driver Usb Irda Sanghavi Comparison with Other USB to Infrared Adapters.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Driver Usb Irda Sanghavi


    Downloadhttps://urloso.com/2uyO8p



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/FIX ML1640 V1010083.fls A Simple and Effective Way to Fix Your Samsung ML 1640 Printer Issues.md b/spaces/bioriAsaeru/text-to-voice/FIX ML1640 V1010083.fls A Simple and Effective Way to Fix Your Samsung ML 1640 Printer Issues.md deleted file mode 100644 index 9081ff6cfb34d2a27954ff5d6632945e717b86e2..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/FIX ML1640 V1010083.fls A Simple and Effective Way to Fix Your Samsung ML 1640 Printer Issues.md +++ /dev/null @@ -1,12 +0,0 @@ - -

    glynwhoo f4bc01c98b -exclusive-hindi-film-rab-ne-bana-di-jodi-free-download
    -cooking-dash-apk-mod-unlock-all-toppgio
    -sony-mhc-2600-service-manual-sony-top
    -trainer-lord-of-the-rings-war-in-the-north-pc-new-download
    -hd-online-player-fast-and-furious-4-full-movie-hd-fre-link
    -the-three-stooges-dual-audio-720p-462-zandchan
    -onroute-motorkaart-benelux-torrent-rar-phoeway
    -julun-yeti-reshimgathi-instrumental-song-download-wenell
    -jawargar-pashto-drama-3gp-for-mo-exclusive
    -better-indu-sarkar-full-mp4-movie-download
    -new-proevolutionsoccer2019key
    -alagappan-medicine-book-pdf-free-261-top
    -murder-3-hindi-full-movie-downloadk-justche
    -tekla-structural-designer-2015-crack-updated
    -new-naa-peru-surya-na-illu-india-movie-download-in-mp4
    -symantec-system-recovery-2013-serial-number-download-new
    -oh-my-god-tamil-movie-mp4-video-songs-free-download-free
    -ek-villain-full-movie-in-hindi-720p-torrent-weiiza
    -portable-copytrans-v4-842-11
    -pinnacle-studio-16-activation-key-link-keygen
    -repack-musicbandmanagerv13032018cheatcodes
    -iw5mp-ceg-exe-download-ger-verified
    -extra-quality-inazuma-eleven-strikers-pc-game-free-download
    -catia-v6r2009-x64-crack-link
    -bentley-staad-pro-v8i-20-with-crack-best-rar
    -hot-anonim-hai-sa-vorbim-album-download-zippy-update-episodios-str
    -projectile-weapons-pack-torrent-download-with-crack-full
    -new-sandeep-garg-economics-class-12-ebook-2146
    -miki-nagashima-9yo-1-extra-quality
    -better-file-activation-xml-autocom-keygen-45

    -

    fayflou f4bc01c98b -swiftshader-for-fifa-12-x86-d3d9dllrar
    -virtual-dj-2018-build-5281-activation-with-license-key-free-download-portable
    -computer-launcher-v7-92-mod-apk-latest-melvfeor
    -trishna-2-movie-download-in-hindi-720p-downloadgolkes-link
    -crack-keygen-3ds-max-2017-exclusive
    -portable-cat-sis-2009b-keygen-29
    -vivid-workshopdata-ati-12-1-rar-crack-401-isopoci
    -exclusive-webstorm-2020-1-crack-activation-key
    -3d-vista-virtual-tour-hot-crack
    -fifa-manager-07-no-cd-crack-hot
    -high-quality-morebox-301d-901d-flash-demo-34l
    -telecharger-logo-maker-avec-crack-link
    -2021-badrinath-ki-dulhania-hindi-movie-full-hd
    -finaldestination6onlinesubtitratinromana
    -topaz-denoise-ai-1-2-1
    -el-cielo-puede-esperar-1978-ver-online
    -student-of-the-year-2012-dvdrip-hindi-mp4-mobile-movie-245-upd
    -radio-controlled-thermo-clock-john-lewis-instructions
    -best-my-favorite-hobby-essay-in-urdu
    -new-uninhibited-1995-torrent-downloa
    -the-amazing-spider-man-1080p-300-mb-link-link
    -top-download-ebook-tuntunan-shalat-lengkap
    -dirigentes-del-mundo-futuro-carlos-cuauhtemoc-sanchez-pdf-download-best
    -download-kung-fu-panda-3-english-torrent-link
    -updated-sound-edge-51al-driver
    -autodesk-autocad-civil-3d-2018-0-2-x64-full-utorrent-exclusive
    -shark-attack-deathmatch-2-free-download-top-torrent
    -mr-x-movie-in-hindi-torrent-download-verified
    -deepika-padukone-chudai-ki-kahani-new
    -body-works-6-0-full-descarga-gratis-naolflam

    -

    FIX ML1640 V1010083.fls [UPDATED]


    Download Zip https://urloso.com/2uyQzq



    -

    talihib f4bc01c98b -kacey-kox-collection-torrents-hot
    -hindi-hd-1942-a-love-story-movies-1080p-torrent-jaydbill
    -patched-download-ppjoy-joystick-driver-0-8-4-6
    -pratiyogita-darpan-year-book-free-download-11-install
    -patched-utilization-of-electrical-energy-by-rajput-pdf
    -hollywood-movie-young-people-fucking-hindi-dubbed-free-download-simecha
    -mr-fraud-movie-hd-mp4-free-download-ellfelt
    -pacificrimmovieintelugufree-link-152
    -punar-vivah-serial-online-in-hindi-all-episodes-verified
    -mauseth-botanica-parte-generale-pdf-pdf
    -hai-katha-sangram-ki-ringtone-free-download-fowlquab
    -torrent-la-bible-du-tage-mage-hot
    -99-magac-ee-allah-macnahooda-pdf-exclusive
    -kmspico-11-1-9-portable-serial-key-patched
    -zathura-tamil-dubbed-movie-free-12-portable
    -top-harry-potter-libro-de-hechizos-pdf-download
    -ciudad-jardin-ebenezer-howard-pdf-download-updated
    -yvette-challande-methodologie-de-cerceau-compame
    -cracked-spider-man-the-edge-of-time-pc-download-torrent
    -break-ke-baad-in-hindi-720p-torrent
    -ekla-cholo-movie-song-mp3-download-2021
    -paying-guest-2009-movie-free-download-lovxan
    -100-years-malayalam-panchangam-pdf-free-download-updated
    -patched-tanaj-edicion-katz-pdf-free
    -programa-contable-monica-9-keygen-hot
    -download-facebook-2-in-1-melvcat
    -new-tamil-serial-actress-mahalakshmi-hot-images
    -download-film-sambandh-full-movies-high-quality
    -mitchell-ondemand-58235-crack-pirate-bay-genbre
    -roms-mame-0-139-full-exclusive-arcade-set-roms-18

    -

    maeelik f4bc01c98b -lukka-chuppi-malayalam-movie-download-15-verified
    -__full__-far-cry-3-save-file-downloadl
    -james-bond-007-blood-stone-crack-only-reloaded-emmytiti
    -vistitle-for-edius-6-crack-doiwnload-work
    -xforce-keygen-trulaser-2016-download-64-bit-exclusive
    -link-dx-atlas-2-3-serial-number
    -el-antidoto-oliver-burkeman-11-erwicah
    -osu-auto-aimbot-downloadl-leymor
    -internet-download-manager-idm-6-27-build-2-32bit-64bit-patch-serial-key-keygen-roctali
    -benito-lertxundi-discografia-completa-work
    -work-secondhand-serenade-discography-200714-channel-neo-13
    -literary-devices-in-the-tempest-act-1-__top__
    -2021-akvis-smartmask-10-5-2404-16912-crack-free-download
    -no-smoking-3-full-movie-in-hindi-download-free-top
    -tere-piche-ro-ro-ke-mar-jaungi-main-punjabi-ringtone-for-mobile-mp3-5
    -one-piece-episode-of-nami-1080pl-free
    -facegen-modeller-35-full-link-cracked
    -intervideo-windvr-3-crack-link-rar-file
    -rixlerexcelpasswordrecoverymaster35keygen-high-quality
    -motogp-08-reloaded-crack-best-only-serial-key
    -mere-sajna-sath-nibhana-movie-mp3-song-download-work
    -juegodemesasupermentepdf13
    -simunlockcodesforcoolpad5860e-_best_
    -maps-company-of-heroes-opposing-fronts-crack-updated
    -official-sony-xperia-z5-compact-so-02h-ntt-docomo-stock-rom-ftf-for-flashtool-marprai
    -patched-adobe-dreamweaver-cc-2018-v17-0-1-9346-x86x64-incl-crack-hot
    -lakshmi-sahasranamam-in-tamil-mp3-free-downloadl-sadhkar
    -full-kumki-video-songs-hd-1080p-blu-ray-tamil-free-download
    -hot-komplete-audio-6-control-panel-s
    -purenudism-little-princess

    -

    marmiyu f4bc01c98b -the-main-aur-charles-2-full-movie-free-download-dubbed-in-hindi-mp4-kaftho
    -localized-english-iw00-iwd-call-of-duty-black-11-opelwin
    -the-true-believer-eric-hoffer-epub-reader
    -my-pals-are-here-maths-homework-2b
    -download-komik-jepang-romantis-bahasa-indonesia-pdf-wakgav
    -peachtree-2012-serial-number-portable
    -nitro-pdf-pro-9-5-1-5-final-x86-x64-incl-keygen-new-core-serial-key
    -como-recuperar-partidas-guardadas-de-gta-san-andreas
    -upd-swiss-manager-unicode-crack
    -__link__-mackeeper-4-4-crack-and-torrent-with-activation-key-latest
    -g-eazy-these-things-happen-zip-download-21-hestgil
    -cardi-b-invasion-of-privacy-2018-mp3-320kbps-hunter
    -ara-soyza-sinhala-movie-free-124-__hot__
    -portable-dragon-quest-monsters-joker-2-pro-english-patch-download
    -kitab-al-fitan-urdu-pdf-freel-javyso
    -cardrecovery-v6-00-build-1206-serial-key-rar-lyttak
    -high-heat-baseball-2003-no-cd-cr
    -hot-dilwale-songs-hd-1080p-gerua
    -upd-perfume-the-story-of-a-murderer-dual-audio-eng-hindi
    -facehacker-v10-2012
    -new-winxppeisodownload
    -age-of-empires-2-full-indir-tek-link-upd
    -mxkey-v3-5-revision-2-7-cracked-12-agneluci
    -free-top-download-fatawa-e-alamgiri-bangla-19
    -kidnap-movie-2015-english-subtitles-download-updated
    -bombay-velvet-part-1-in-hindi-download-720p-dual-audio-torrent-download-free
    -microstation-v8i-crack-file-free-link-download
    -easeus-data-recovery-wizard-professional-v5-6-1-with-key-tordig-3
    -free-infernotool-unitool-v1-5-7-crack-fix-errormsvcp100-dll
    -free-exclusive-download-omsi-2-add-on-coachbus-250-exe

    -

    Simply desire to say your article is as astonishing.
    The clarity in your post is simply cool and i could assume you are an expert
    on this subject. Well with your permission let
    me to grab your feed to keep updated with forthcoming post.
    Thanks a million and please continue the enjoyable work.

    -

    This runtime package installs the Adobe Flash & AIR installers for the Mozilla Firefox & Google Chrome browsers.
    Because of the new ability to install XML files by Adobe, there is an updated Office
    Supplier.xml added, and a custom Adobe.xml added.

    -

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.py b/spaces/brjathu/HMR2.0/vendor/detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.py deleted file mode 100644 index 40844ddeb8d47ff58a6af49ab35bad84e14f5721..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.py +++ /dev/null @@ -1,8 +0,0 @@ -from ..common.optim import SGD as optimizer -from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier -from ..common.data.coco import dataloader -from ..common.models.mask_rcnn_fpn import model -from ..common.train import train - -model.backbone.bottom_up.freeze_at = 2 -train.init_checkpoint = "detectron2://ImageNetPretrained/MSRA/R-50.pkl" diff --git a/spaces/bumsika/Redshift-Diffusion-Demo/app.py b/spaces/bumsika/Redshift-Diffusion-Demo/app.py deleted file mode 100644 index aa7bd45ca5af97c170b8a706a8c3da1d8090531d..0000000000000000000000000000000000000000 --- a/spaces/bumsika/Redshift-Diffusion-Demo/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/nitrosocke/redshift-diffusion").queue(concurrency_count=20).launch() \ No newline at end of file diff --git a/spaces/caoyiming/vits-uma-genshin-honkai/Docker/Dockerfile b/spaces/caoyiming/vits-uma-genshin-honkai/Docker/Dockerfile deleted file mode 100644 index 4d39cdf02a2ec151686cc1d61234bf723068fed8..0000000000000000000000000000000000000000 --- a/spaces/caoyiming/vits-uma-genshin-honkai/Docker/Dockerfile +++ /dev/null @@ -1,12 +0,0 @@ -FROM python:3.9-bullseye -VOLUME ["/app"] -WORKDIR /app -# Set apt to Chinese mirror -RUN sed -i 's/deb.debian.org/mirrors.ustc.edu.cn/g' /etc/apt/sources.list -RUN apt-get update && apt-get -y install cmake git -RUN git clone https://huggingface.co/spaces/ikechan8370/vits-uma-genshin-honkai -WORKDIR /app/vits-uma-genshin-honkai -RUN sed -i "s/\.launch()/\.launch(server_name=\"0.0.0.0\")/" /app/vits-uma-genshin-honkai/app.py -ADD vits.sh /app/vits.sh -EXPOSE 7860 -ENTRYPOINT [ "/app/vits.sh" ] \ No newline at end of file diff --git a/spaces/captchaboy/FAST-ABINet-OCR/modules/model_vision.py b/spaces/captchaboy/FAST-ABINet-OCR/modules/model_vision.py deleted file mode 100644 index feb5a1112bf8b40d5a7ea492ab125d1ccacd4df7..0000000000000000000000000000000000000000 --- a/spaces/captchaboy/FAST-ABINet-OCR/modules/model_vision.py +++ /dev/null @@ -1,47 +0,0 @@ -import logging -import torch.nn as nn -from fastai.vision import * - -from modules.attention import * -from modules.backbone import ResTranformer -from modules.model import Model -from modules.resnet import resnet45 - - -class BaseVision(Model): - def __init__(self, config): - super().__init__(config) - self.loss_weight = ifnone(config.model_vision_loss_weight, 1.0) - self.out_channels = ifnone(config.model_vision_d_model, 512) - - if config.model_vision_backbone == 'transformer': - self.backbone = ResTranformer(config) - else: self.backbone = resnet45() - - if config.model_vision_attention == 'position': - mode = ifnone(config.model_vision_attention_mode, 'nearest') - self.attention = PositionAttention( - max_length=config.dataset_max_length + 1, # additional stop token - mode=mode, - ) - elif config.model_vision_attention == 'attention': - self.attention = Attention( - max_length=config.dataset_max_length + 1, # additional stop token - n_feature=8*32, - ) - else: - raise Exception(f'{config.model_vision_attention} is not 
valid.') - self.cls = nn.Linear(self.out_channels, self.charset.num_classes) - - if config.model_vision_checkpoint is not None: - logging.info(f'Read vision model from {config.model_vision_checkpoint}.') - self.load(config.model_vision_checkpoint) - - def forward(self, images, *args): - features = self.backbone(images) # (N, E, H, W) - attn_vecs, attn_scores = self.attention(features) # (N, T, E), (N, T, H, W) - logits = self.cls(attn_vecs) # (N, T, C) - pt_lengths = self._get_length(logits) - - return {'feature': attn_vecs, 'logits': logits, 'pt_lengths': pt_lengths, - 'attn_scores': attn_scores, 'loss_weight':self.loss_weight, 'name': 'vision'} diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/backbone/backbone.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/backbone/backbone.py deleted file mode 100644 index e1c765a6b38542f66cae55216bba697a6626d128..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/backbone/backbone.py +++ /dev/null @@ -1,74 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from abc import ABCMeta, abstractmethod -from typing import Dict -import torch.nn as nn - -from detectron2.layers import ShapeSpec - -__all__ = ["Backbone"] - - -class Backbone(nn.Module, metaclass=ABCMeta): - """ - Abstract base class for network backbones. - """ - - def __init__(self): - """ - The `__init__` method of any subclass can specify its own set of arguments. - """ - super().__init__() - - @abstractmethod - def forward(self): - """ - Subclasses must override this method, but adhere to the same return type. - - Returns: - dict[str->Tensor]: mapping from feature name (e.g., "res2") to tensor - """ - pass - - @property - def size_divisibility(self) -> int: - """ - Some backbones require the input height and width to be divisible by a - specific integer. This is typically true for encoder / decoder type networks - with lateral connection (e.g., FPN) for which feature maps need to match - dimension in the "bottom up" and "top down" paths. Set to 0 if no specific - input size divisibility is required. - """ - return 0 - - @property - def padding_constraints(self) -> Dict[str, int]: - """ - This property is a generalization of size_divisibility. Some backbones and training - recipes require specific padding constraints, such as enforcing divisibility by a specific - integer (e.g., FPN) or padding to a square (e.g., ViTDet with large-scale jitter - in :paper:vitdet). `padding_constraints` contains these optional items like: - { - "size_divisibility": int, - "square_size": int, - # Future options are possible - } - `size_divisibility` will read from here if presented and `square_size` indicates the - square padding size if `square_size` > 0. - - TODO: use type of Dict[str, int] to avoid torchscipt issues. The type of padding_constraints - could be generalized as TypedDict (Python 3.8+) to support more types in the future. 
- """ - return {} - - def output_shape(self): - """ - Returns: - dict[str->ShapeSpec] - """ - # this is a backward-compatible default - return { - name: ShapeSpec( - channels=self._out_feature_channels[name], stride=self._out_feature_strides[name] - ) - for name in self._out_features - } diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/proposal_generator/build.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/proposal_generator/build.py deleted file mode 100644 index 34eb12d00d94ff905b796e75e2c4c5845257c8e9..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/proposal_generator/build.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from detectron2.utils.registry import Registry - -PROPOSAL_GENERATOR_REGISTRY = Registry("PROPOSAL_GENERATOR") -PROPOSAL_GENERATOR_REGISTRY.__doc__ = """ -Registry for proposal generator, which produces object proposals from feature maps. - -The registered object will be called with `obj(cfg, input_shape)`. -The call should return a `nn.Module` object. -""" - -from . import rpn, rrpn # noqa F401 isort:skip - - -def build_proposal_generator(cfg, input_shape): - """ - Build a proposal generator from `cfg.MODEL.PROPOSAL_GENERATOR.NAME`. - The name can be "PrecomputedProposals" to use no proposal generator. - """ - name = cfg.MODEL.PROPOSAL_GENERATOR.NAME - if name == "PrecomputedProposals": - return None - - return PROPOSAL_GENERATOR_REGISTRY.get(name)(cfg, input_shape) diff --git a/spaces/changlisheng/shangChat/ChuanhuChatbot.py b/spaces/changlisheng/shangChat/ChuanhuChatbot.py deleted file mode 100644 index bbe0c83ee327fa1bfd2cb9228c3b51285f021ca2..0000000000000000000000000000000000000000 --- a/spaces/changlisheng/shangChat/ChuanhuChatbot.py +++ /dev/null @@ -1,423 +0,0 @@ -# -*- coding:utf-8 -*- -import os -import logging -import sys - -import gradio as gr - -from modules import config -from modules.config import * -from modules.utils import * -from modules.presets import * -from modules.overwrites import * -from modules.chat_func import * -from modules.openai_func import get_usage - -gr.Chatbot.postprocess = postprocess -PromptHelper.compact_text_chunks = compact_text_chunks - -with open("assets/custom.css", "r", encoding="utf-8") as f: - customCSS = f.read() - -with gr.Blocks(css=customCSS, theme=small_and_beautiful_theme) as demo: - user_name = gr.State("") - history = gr.State([]) - token_count = gr.State([]) - promptTemplates = gr.State(load_template(get_template_names(plain=True)[0], mode=2)) - user_api_key = gr.State(my_api_key) - user_question = gr.State("") - outputing = gr.State(False) - topic = gr.State("未命名对话历史记录") - - with gr.Row(): - with gr.Column(): - gr.HTML(title) - user_info = gr.Markdown(value="", elem_id="user_info") - # gr.HTML('
    Duplicate Space
    ') - status_display = gr.Markdown(get_geoip(), elem_id="status_display") - - # https://github.com/gradio-app/gradio/pull/3296 - def create_greeting(request: gr.Request): - if hasattr(request, "username") and request.username: # is not None or is not "" - logging.info(f"Get User Name: {request.username}") - return gr.Markdown.update(value=f"User: {request.username}"), request.username - else: - return gr.Markdown.update(value=f"User: default", visible=False), "" - demo.load(create_greeting, inputs=None, outputs=[user_info, user_name]) - - with gr.Row().style(equal_height=True): - with gr.Column(scale=5): - with gr.Row(): - chatbot = gr.Chatbot(elem_id="chuanhu_chatbot").style(height="100%") - with gr.Row(): - with gr.Column(scale=12): - user_input = gr.Textbox( - elem_id="user_input_tb", - show_label=False, placeholder="在这里输入" - ).style(container=False) - with gr.Column(min_width=70, scale=1): - submitBtn = gr.Button("发送", variant="primary") - cancelBtn = gr.Button("取消", variant="secondary", visible=False) - with gr.Row(): - emptyBtn = gr.Button( - "🧹 新的对话", - ) - retryBtn = gr.Button("🔄 重新生成") - delFirstBtn = gr.Button("🗑️ 删除最旧对话") - delLastBtn = gr.Button("🗑️ 删除最新对话") - reduceTokenBtn = gr.Button("♻️ 总结对话") - - with gr.Column(): - with gr.Column(min_width=50, scale=1): - with gr.Tab(label="ChatGPT"): - keyTxt = gr.Textbox( - show_label=True, - placeholder=f"OpenAI API-key...", - value=hide_middle_chars(my_api_key), - type="password", - visible=not HIDE_MY_KEY, - label="API-Key", - ) - if multi_api_key: - usageTxt = gr.Markdown("多账号模式已开启,无需输入key,可直接开始对话", elem_id="usage_display") - else: - usageTxt = gr.Markdown("**发送消息** 或 **提交key** 以显示额度", elem_id="usage_display") - model_select_dropdown = gr.Dropdown( - label="选择模型", choices=MODELS, multiselect=False, value=MODELS[0] - ) - use_streaming_checkbox = gr.Checkbox( - label="实时传输回答", value=True, visible=enable_streaming_option - ) - use_websearch_checkbox = gr.Checkbox(label="使用在线搜索", value=False) - language_select_dropdown = gr.Dropdown( - label="选择回复语言(针对搜索&索引功能)", - choices=REPLY_LANGUAGES, - multiselect=False, - value=REPLY_LANGUAGES[0], - ) - index_files = gr.Files(label="上传索引文件", type="file", multiple=True) - two_column = gr.Checkbox(label="双栏pdf", value=advance_docs["pdf"].get("two_column", False)) - # TODO: 公式ocr - # formula_ocr = gr.Checkbox(label="识别公式", value=advance_docs["pdf"].get("formula_ocr", False)) - - with gr.Tab(label="Prompt"): - systemPromptTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入System Prompt...", - label="System prompt", - value=initial_prompt, - lines=10, - ).style(container=False) - with gr.Accordion(label="加载Prompt模板", open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - templateFileSelectDropdown = gr.Dropdown( - label="选择Prompt模板集合文件", - choices=get_template_names(plain=True), - multiselect=False, - value=get_template_names(plain=True)[0], - ).style(container=False) - with gr.Column(scale=1): - templateRefreshBtn = gr.Button("🔄 刷新") - with gr.Row(): - with gr.Column(): - templateSelectDropdown = gr.Dropdown( - label="从Prompt模板中加载", - choices=load_template( - get_template_names(plain=True)[0], mode=1 - ), - multiselect=False, - ).style(container=False) - - with gr.Tab(label="保存/加载"): - with gr.Accordion(label="保存/加载对话历史记录", open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - historyFileSelectDropdown = gr.Dropdown( - label="从列表中加载对话", - choices=get_history_names(plain=True), - multiselect=False, - value=get_history_names(plain=True)[0], - ) - 
with gr.Column(scale=1): - historyRefreshBtn = gr.Button("🔄 刷新") - with gr.Row(): - with gr.Column(scale=6): - saveFileName = gr.Textbox( - show_label=True, - placeholder=f"设置文件名: 默认为.json,可选为.md", - label="设置保存文件名", - value="对话历史记录", - ).style(container=True) - with gr.Column(scale=1): - saveHistoryBtn = gr.Button("💾 保存对话") - exportMarkdownBtn = gr.Button("📝 导出为Markdown") - gr.Markdown("默认保存于history文件夹") - with gr.Row(): - with gr.Column(): - downloadFile = gr.File(interactive=True) - - with gr.Tab(label="高级"): - gr.Markdown("# ⚠️ 务必谨慎更改 ⚠️\n\n如果无法使用请恢复默认设置") - default_btn = gr.Button("🔙 恢复默认设置") - - with gr.Accordion("参数", open=False): - top_p = gr.Slider( - minimum=-0, - maximum=1.0, - value=1.0, - step=0.05, - interactive=True, - label="Top-p", - ) - temperature = gr.Slider( - minimum=-0, - maximum=2.0, - value=1.0, - step=0.1, - interactive=True, - label="Temperature", - ) - - with gr.Accordion("网络设置", open=False, visible=False): - # 优先展示自定义的api_host - apihostTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入API-Host...", - label="API-Host", - value=config.api_host or shared.API_HOST, - lines=1, - ) - changeAPIURLBtn = gr.Button("🔄 切换API地址") - proxyTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入代理地址...", - label="代理地址(示例:http://127.0.0.1:10809)", - value="", - lines=2, - ) - changeProxyBtn = gr.Button("🔄 设置代理地址") - - gr.Markdown(description) - gr.HTML(footer.format(versions=versions_html()), elem_id="footer") - chatgpt_predict_args = dict( - fn=predict, - inputs=[ - user_api_key, - systemPromptTxt, - history, - user_question, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - use_websearch_checkbox, - index_files, - language_select_dropdown, - ], - outputs=[chatbot, history, status_display, token_count], - show_progress=True, - ) - - start_outputing_args = dict( - fn=start_outputing, - inputs=[], - outputs=[submitBtn, cancelBtn], - show_progress=True, - ) - - end_outputing_args = dict( - fn=end_outputing, inputs=[], outputs=[submitBtn, cancelBtn] - ) - - reset_textbox_args = dict( - fn=reset_textbox, inputs=[], outputs=[user_input] - ) - - transfer_input_args = dict( - fn=transfer_input, inputs=[user_input], outputs=[user_question, user_input, submitBtn, cancelBtn], show_progress=True - ) - - get_usage_args = dict( - fn=get_usage, inputs=[user_api_key], outputs=[usageTxt], show_progress=False - ) - - - # Chatbot - cancelBtn.click(cancel_outputing, [], []) - - user_input.submit(**transfer_input_args).then(**chatgpt_predict_args).then(**end_outputing_args) - user_input.submit(**get_usage_args) - - submitBtn.click(**transfer_input_args).then(**chatgpt_predict_args).then(**end_outputing_args) - submitBtn.click(**get_usage_args) - - emptyBtn.click( - reset_state, - outputs=[chatbot, history, token_count, status_display], - show_progress=True, - ) - emptyBtn.click(**reset_textbox_args) - - retryBtn.click(**start_outputing_args).then( - retry, - [ - user_api_key, - systemPromptTxt, - history, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - language_select_dropdown, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ).then(**end_outputing_args) - retryBtn.click(**get_usage_args) - - delFirstBtn.click( - delete_first_conversation, - [history, token_count], - [history, token_count, status_display], - ) - - delLastBtn.click( - delete_last_conversation, - [chatbot, history, token_count], - [chatbot, history, token_count, status_display], - 
show_progress=True, - ) - - reduceTokenBtn.click( - reduce_token_size, - [ - user_api_key, - systemPromptTxt, - history, - chatbot, - token_count, - top_p, - temperature, - gr.State(sum(token_count.value[-4:])), - model_select_dropdown, - language_select_dropdown, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ) - reduceTokenBtn.click(**get_usage_args) - - two_column.change(update_doc_config, [two_column], None) - - # ChatGPT - keyTxt.change(submit_key, keyTxt, [user_api_key, status_display]).then(**get_usage_args) - keyTxt.submit(**get_usage_args) - - # Template - templateRefreshBtn.click(get_template_names, None, [templateFileSelectDropdown]) - templateFileSelectDropdown.change( - load_template, - [templateFileSelectDropdown], - [promptTemplates, templateSelectDropdown], - show_progress=True, - ) - templateSelectDropdown.change( - get_template_content, - [promptTemplates, templateSelectDropdown, systemPromptTxt], - [systemPromptTxt], - show_progress=True, - ) - - # S&L - saveHistoryBtn.click( - save_chat_history, - [saveFileName, systemPromptTxt, history, chatbot, user_name], - downloadFile, - show_progress=True, - ) - saveHistoryBtn.click(get_history_names, [gr.State(False), user_name], [historyFileSelectDropdown]) - exportMarkdownBtn.click( - export_markdown, - [saveFileName, systemPromptTxt, history, chatbot, user_name], - downloadFile, - show_progress=True, - ) - historyRefreshBtn.click(get_history_names, [gr.State(False), user_name], [historyFileSelectDropdown]) - historyFileSelectDropdown.change( - load_chat_history, - [historyFileSelectDropdown, systemPromptTxt, history, chatbot, user_name], - [saveFileName, systemPromptTxt, history, chatbot], - show_progress=True, - ) - downloadFile.change( - load_chat_history, - [downloadFile, systemPromptTxt, history, chatbot, user_name], - [saveFileName, systemPromptTxt, history, chatbot], - ) - - # Advanced - default_btn.click( - reset_default, [], [apihostTxt, proxyTxt, status_display], show_progress=True - ) - changeAPIURLBtn.click( - change_api_host, - [apihostTxt], - [status_display], - show_progress=True, - ) - changeProxyBtn.click( - change_proxy, - [proxyTxt], - [status_display], - show_progress=True, - ) - -logging.info( - colorama.Back.GREEN - + "\n川虎的温馨提示:访问 http://localhost:7860 查看界面" - + colorama.Style.RESET_ALL -) -# 默认开启本地服务器,默认可以直接从IP访问,默认不创建公开分享链接 -demo.title = "shangChatGPT" - -if __name__ == "__main__": - reload_javascript() - # if running in Docker - if dockerflag: - if authflag: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - server_name="0.0.0.0", - server_port=7860, - auth=auth_list, - favicon_path="./assets/favicon.ico", - ) - else: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - server_name="0.0.0.0", - server_port=7860, - share=False, - favicon_path="./assets/favicon.ico", - ) - # if not running in Docker - else: - if authflag: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - share=False, - auth=auth_list, - favicon_path="./assets/favicon.ico", - inbrowser=True, - ) - else: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - share=False, favicon_path="./assets/favicon.ico", inbrowser=True - ) # 改为 share=True 可以创建公开分享链接 - # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=7860, share=False) # 可自定义端口 - # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=7860,auth=("在这里填写用户名", "在这里填写密码")) # 可设置用户名与密码 - # 
demo.queue(concurrency_count=CONCURRENT_COUNT).launch(auth=("在这里填写用户名", "在这里填写密码")) # 适合Nginx反向代理 diff --git a/spaces/chendl/compositional_test/transformers/examples/pytorch/translation/README.md b/spaces/chendl/compositional_test/transformers/examples/pytorch/translation/README.md deleted file mode 100644 index 0593d577a01fdb032ce608658508ae1f44acb902..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/pytorch/translation/README.md +++ /dev/null @@ -1,211 +0,0 @@ - - -## Translation - -This directory contains examples for finetuning and evaluating transformers on translation tasks. -Please tag @patil-suraj with any issues/unexpected behaviors, or send a PR! -For deprecated `bertabs` instructions, see [`bertabs/README.md`](https://github.com/huggingface/transformers/blob/main/examples/research_projects/bertabs/README.md). -For the old `finetune_trainer.py` and related utils, see [`examples/legacy/seq2seq`](https://github.com/huggingface/transformers/blob/main/examples/legacy/seq2seq). - -### Supported Architectures - -- `BartForConditionalGeneration` -- `FSMTForConditionalGeneration` (translation only) -- `MBartForConditionalGeneration` -- `MarianMTModel` -- `PegasusForConditionalGeneration` -- `T5ForConditionalGeneration` -- `MT5ForConditionalGeneration` - -`run_translation.py` is a lightweight examples of how to download and preprocess a dataset from the [🤗 Datasets](https://github.com/huggingface/datasets) library or use your own files (jsonlines or csv), then fine-tune one of the architectures above on it. - -For custom datasets in `jsonlines` format please see: https://huggingface.co/docs/datasets/loading_datasets.html#json-files -and you also will find examples of these below. - - -## With Trainer - -Here is an example of a translation fine-tuning with a MarianMT model: - -```bash -python examples/pytorch/translation/run_translation.py \ - --model_name_or_path Helsinki-NLP/opus-mt-en-ro \ - --do_train \ - --do_eval \ - --source_lang en \ - --target_lang ro \ - --dataset_name wmt16 \ - --dataset_config_name ro-en \ - --output_dir /tmp/tst-translation \ - --per_device_train_batch_size=4 \ - --per_device_eval_batch_size=4 \ - --overwrite_output_dir \ - --predict_with_generate -``` - -MBart and some T5 models require special handling. - -T5 models `t5-small`, `t5-base`, `t5-large`, `t5-3b` and `t5-11b` must use an additional argument: `--source_prefix "translate {source_lang} to {target_lang}"`. For example: - -```bash -python examples/pytorch/translation/run_translation.py \ - --model_name_or_path t5-small \ - --do_train \ - --do_eval \ - --source_lang en \ - --target_lang ro \ - --source_prefix "translate English to Romanian: " \ - --dataset_name wmt16 \ - --dataset_config_name ro-en \ - --output_dir /tmp/tst-translation \ - --per_device_train_batch_size=4 \ - --per_device_eval_batch_size=4 \ - --overwrite_output_dir \ - --predict_with_generate -``` - -If you get a terrible BLEU score, make sure that you didn't forget to use the `--source_prefix` argument. - -For the aforementioned group of T5 models it's important to remember that if you switch to a different language pair, make sure to adjust the source and target values in all 3 language-specific command line argument: `--source_lang`, `--target_lang` and `--source_prefix`. - -MBart models require a different format for `--source_lang` and `--target_lang` values, e.g. instead of `en` it expects `en_XX`, for `ro` it expects `ro_RO`. 
The full MBart specification for language codes can be found [here](https://huggingface.co/facebook/mbart-large-cc25). For example: - -```bash -python examples/pytorch/translation/run_translation.py \ - --model_name_or_path facebook/mbart-large-en-ro \ - --do_train \ - --do_eval \ - --dataset_name wmt16 \ - --dataset_config_name ro-en \ - --source_lang en_XX \ - --target_lang ro_RO \ - --output_dir /tmp/tst-translation \ - --per_device_train_batch_size=4 \ - --per_device_eval_batch_size=4 \ - --overwrite_output_dir \ - --predict_with_generate - ``` - -And here is how you would use the translation finetuning on your own files, after adjusting the -values for the arguments `--train_file`, `--validation_file` to match your setup: - -```bash -python examples/pytorch/translation/run_translation.py \ - --model_name_or_path t5-small \ - --do_train \ - --do_eval \ - --source_lang en \ - --target_lang ro \ - --source_prefix "translate English to Romanian: " \ - --dataset_name wmt16 \ - --dataset_config_name ro-en \ - --train_file path_to_jsonlines_file \ - --validation_file path_to_jsonlines_file \ - --output_dir /tmp/tst-translation \ - --per_device_train_batch_size=4 \ - --per_device_eval_batch_size=4 \ - --overwrite_output_dir \ - --predict_with_generate -``` - -The task of translation supports only custom JSONLINES files, with each line being a dictionary with a key `"translation"` and its value another dictionary whose keys is the language pair. For example: - -```json -{ "translation": { "en": "Others have dismissed him as a joke.", "ro": "Alții l-au numit o glumă." } } -{ "translation": { "en": "And some are holding out for an implosion.", "ro": "Iar alții așteaptă implozia." } } -``` -Here the languages are Romanian (`ro`) and English (`en`). - -If you want to use a pre-processed dataset that leads to high BLEU scores, but for the `en-de` language pair, you can use `--dataset_name stas/wmt14-en-de-pre-processed`, as following: - -```bash -python examples/pytorch/translation/run_translation.py \ - --model_name_or_path t5-small \ - --do_train \ - --do_eval \ - --source_lang en \ - --target_lang de \ - --source_prefix "translate English to German: " \ - --dataset_name stas/wmt14-en-de-pre-processed \ - --output_dir /tmp/tst-translation \ - --per_device_train_batch_size=4 \ - --per_device_eval_batch_size=4 \ - --overwrite_output_dir \ - --predict_with_generate - ``` - -## With Accelerate - -Based on the script [`run_translation_no_trainer.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation_no_trainer.py). - -Like `run_translation.py`, this script allows you to fine-tune any of the models supported on a -translation task, the main difference is that this -script exposes the bare training loop, to allow you to quickly experiment and add any customization you would like. - -It offers less options than the script with `Trainer` (for instance you can easily change the options for the optimizer -or the dataloaders directly in the script) but still run in a distributed setup, on TPU and supports mixed precision by -the mean of the [🤗 `Accelerate`](https://github.com/huggingface/accelerate) library. 
You can use the script normally -after installing it: - -```bash -pip install git+https://github.com/huggingface/accelerate -``` - -then - -```bash -python run_translation_no_trainer.py \ - --model_name_or_path Helsinki-NLP/opus-mt-en-ro \ - --source_lang en \ - --target_lang ro \ - --dataset_name wmt16 \ - --dataset_config_name ro-en \ - --output_dir ~/tmp/tst-translation -``` - -You can then use your usual launchers to run in it in a distributed environment, but the easiest way is to run - -```bash -accelerate config -``` - -and reply to the questions asked. Then - -```bash -accelerate test -``` - -that will check everything is ready for training. Finally, you can launch training with - -```bash -accelerate launch run_translation_no_trainer.py \ - --model_name_or_path Helsinki-NLP/opus-mt-en-ro \ - --source_lang en \ - --target_lang ro \ - --dataset_name wmt16 \ - --dataset_config_name ro-en \ - --output_dir ~/tmp/tst-translation -``` - -This command is the same and will work for: - -- a CPU-only setup -- a setup with one GPU -- a distributed training with several GPUs (single or multi node) -- a training on TPUs - -Note that this library is in alpha release so your feedback is more than welcome if you encounter any problem using it. diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/lxmert/visualizing_image.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/lxmert/visualizing_image.py deleted file mode 100644 index 163d661e873ec3d7d59afc20b35e8384640bb513..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/lxmert/visualizing_image.py +++ /dev/null @@ -1,499 +0,0 @@ -""" - coding=utf-8 - Copyright 2018, Antonio Mendoza Hao Tan, Mohit Bansal - Adapted From Facebook Inc, Detectron2 - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License.import copy - """ -import colorsys -import io - -import cv2 -import matplotlib as mpl -import matplotlib.colors as mplc -import matplotlib.figure as mplfigure -import numpy as np -import torch -from matplotlib.backends.backend_agg import FigureCanvasAgg - -from utils import img_tensorize - - -_SMALL_OBJ = 1000 - - -class SingleImageViz: - def __init__( - self, - img, - scale=1.2, - edgecolor="g", - alpha=0.5, - linestyle="-", - saveas="test_out.jpg", - rgb=True, - pynb=False, - id2obj=None, - id2attr=None, - pad=0.7, - ): - """ - img: an RGB image of shape (H, W, 3). 
- """ - if isinstance(img, torch.Tensor): - img = img.numpy().astype("np.uint8") - if isinstance(img, str): - img = img_tensorize(img) - assert isinstance(img, np.ndarray) - - width, height = img.shape[1], img.shape[0] - fig = mplfigure.Figure(frameon=False) - dpi = fig.get_dpi() - width_in = (width * scale + 1e-2) / dpi - height_in = (height * scale + 1e-2) / dpi - fig.set_size_inches(width_in, height_in) - ax = fig.add_axes([0.0, 0.0, 1.0, 1.0]) - ax.axis("off") - ax.set_xlim(0.0, width) - ax.set_ylim(height) - - self.saveas = saveas - self.rgb = rgb - self.pynb = pynb - self.img = img - self.edgecolor = edgecolor - self.alpha = 0.5 - self.linestyle = linestyle - self.font_size = int(np.sqrt(min(height, width)) * scale // 3) - self.width = width - self.height = height - self.scale = scale - self.fig = fig - self.ax = ax - self.pad = pad - self.id2obj = id2obj - self.id2attr = id2attr - self.canvas = FigureCanvasAgg(fig) - - def add_box(self, box, color=None): - if color is None: - color = self.edgecolor - (x0, y0, x1, y1) = box - width = x1 - x0 - height = y1 - y0 - self.ax.add_patch( - mpl.patches.Rectangle( - (x0, y0), - width, - height, - fill=False, - edgecolor=color, - linewidth=self.font_size // 3, - alpha=self.alpha, - linestyle=self.linestyle, - ) - ) - - def draw_boxes(self, boxes, obj_ids=None, obj_scores=None, attr_ids=None, attr_scores=None): - if len(boxes.shape) > 2: - boxes = boxes[0] - if len(obj_ids.shape) > 1: - obj_ids = obj_ids[0] - if len(obj_scores.shape) > 1: - obj_scores = obj_scores[0] - if len(attr_ids.shape) > 1: - attr_ids = attr_ids[0] - if len(attr_scores.shape) > 1: - attr_scores = attr_scores[0] - if isinstance(boxes, torch.Tensor): - boxes = boxes.numpy() - if isinstance(boxes, list): - boxes = np.array(boxes) - assert isinstance(boxes, np.ndarray) - areas = np.prod(boxes[:, 2:] - boxes[:, :2], axis=1) - sorted_idxs = np.argsort(-areas).tolist() - boxes = boxes[sorted_idxs] if boxes is not None else None - obj_ids = obj_ids[sorted_idxs] if obj_ids is not None else None - obj_scores = obj_scores[sorted_idxs] if obj_scores is not None else None - attr_ids = attr_ids[sorted_idxs] if attr_ids is not None else None - attr_scores = attr_scores[sorted_idxs] if attr_scores is not None else None - - assigned_colors = [self._random_color(maximum=1) for _ in range(len(boxes))] - assigned_colors = [assigned_colors[idx] for idx in sorted_idxs] - if obj_ids is not None: - labels = self._create_text_labels_attr(obj_ids, obj_scores, attr_ids, attr_scores) - for i in range(len(boxes)): - color = assigned_colors[i] - self.add_box(boxes[i], color) - self.draw_labels(labels[i], boxes[i], color) - - def draw_labels(self, label, box, color): - x0, y0, x1, y1 = box - text_pos = (x0, y0) - instance_area = (y1 - y0) * (x1 - x0) - small = _SMALL_OBJ * self.scale - if instance_area < small or y1 - y0 < 40 * self.scale: - if y1 >= self.height - 5: - text_pos = (x1, y0) - else: - text_pos = (x0, y1) - - height_ratio = (y1 - y0) / np.sqrt(self.height * self.width) - lighter_color = self._change_color_brightness(color, brightness_factor=0.7) - font_size = np.clip((height_ratio - 0.02) / 0.08 + 1, 1.2, 2) - font_size *= 0.75 * self.font_size - - self.draw_text( - text=label, - position=text_pos, - color=lighter_color, - ) - - def draw_text( - self, - text, - position, - color="g", - ha="left", - ): - rotation = 0 - font_size = self.font_size - color = np.maximum(list(mplc.to_rgb(color)), 0.2) - color[np.argmax(color)] = max(0.8, np.max(color)) - bbox = { - "facecolor": "black", - 
"alpha": self.alpha, - "pad": self.pad, - "edgecolor": "none", - } - x, y = position - self.ax.text( - x, - y, - text, - size=font_size * self.scale, - family="sans-serif", - bbox=bbox, - verticalalignment="top", - horizontalalignment=ha, - color=color, - zorder=10, - rotation=rotation, - ) - - def save(self, saveas=None): - if saveas is None: - saveas = self.saveas - if saveas.lower().endswith(".jpg") or saveas.lower().endswith(".png"): - cv2.imwrite( - saveas, - self._get_buffer()[:, :, ::-1], - ) - else: - self.fig.savefig(saveas) - - def _create_text_labels_attr(self, classes, scores, attr_classes, attr_scores): - labels = [self.id2obj[i] for i in classes] - attr_labels = [self.id2attr[i] for i in attr_classes] - labels = [ - f"{label} {score:.2f} {attr} {attr_score:.2f}" - for label, score, attr, attr_score in zip(labels, scores, attr_labels, attr_scores) - ] - return labels - - def _create_text_labels(self, classes, scores): - labels = [self.id2obj[i] for i in classes] - if scores is not None: - if labels is None: - labels = ["{:.0f}%".format(s * 100) for s in scores] - else: - labels = ["{} {:.0f}%".format(li, s * 100) for li, s in zip(labels, scores)] - return labels - - def _random_color(self, maximum=255): - idx = np.random.randint(0, len(_COLORS)) - ret = _COLORS[idx] * maximum - if not self.rgb: - ret = ret[::-1] - return ret - - def _get_buffer(self): - if not self.pynb: - s, (width, height) = self.canvas.print_to_buffer() - if (width, height) != (self.width, self.height): - img = cv2.resize(self.img, (width, height)) - else: - img = self.img - else: - buf = io.BytesIO() # works for cairo backend - self.canvas.print_rgba(buf) - width, height = self.width, self.height - s = buf.getvalue() - img = self.img - - buffer = np.frombuffer(s, dtype="uint8") - img_rgba = buffer.reshape(height, width, 4) - rgb, alpha = np.split(img_rgba, [3], axis=2) - - try: - import numexpr as ne # fuse them with numexpr - - visualized_image = ne.evaluate("img * (1 - alpha / 255.0) + rgb * (alpha / 255.0)") - except ImportError: - alpha = alpha.astype("float32") / 255.0 - visualized_image = img * (1 - alpha) + rgb * alpha - - return visualized_image.astype("uint8") - - def _change_color_brightness(self, color, brightness_factor): - assert brightness_factor >= -1.0 and brightness_factor <= 1.0 - color = mplc.to_rgb(color) - polygon_color = colorsys.rgb_to_hls(*mplc.to_rgb(color)) - modified_lightness = polygon_color[1] + (brightness_factor * polygon_color[1]) - modified_lightness = 0.0 if modified_lightness < 0.0 else modified_lightness - modified_lightness = 1.0 if modified_lightness > 1.0 else modified_lightness - modified_color = colorsys.hls_to_rgb(polygon_color[0], modified_lightness, polygon_color[2]) - return modified_color - - -# Color map -_COLORS = ( - np.array( - [ - 0.000, - 0.447, - 0.741, - 0.850, - 0.325, - 0.098, - 0.929, - 0.694, - 0.125, - 0.494, - 0.184, - 0.556, - 0.466, - 0.674, - 0.188, - 0.301, - 0.745, - 0.933, - 0.635, - 0.078, - 0.184, - 0.300, - 0.300, - 0.300, - 0.600, - 0.600, - 0.600, - 1.000, - 0.000, - 0.000, - 1.000, - 0.500, - 0.000, - 0.749, - 0.749, - 0.000, - 0.000, - 1.000, - 0.000, - 0.000, - 0.000, - 1.000, - 0.667, - 0.000, - 1.000, - 0.333, - 0.333, - 0.000, - 0.333, - 0.667, - 0.000, - 0.333, - 1.000, - 0.000, - 0.667, - 0.333, - 0.000, - 0.667, - 0.667, - 0.000, - 0.667, - 1.000, - 0.000, - 1.000, - 0.333, - 0.000, - 1.000, - 0.667, - 0.000, - 1.000, - 1.000, - 0.000, - 0.000, - 0.333, - 0.500, - 0.000, - 0.667, - 0.500, - 0.000, - 1.000, - 0.500, - 0.333, - 
0.000, - 0.500, - 0.333, - 0.333, - 0.500, - 0.333, - 0.667, - 0.500, - 0.333, - 1.000, - 0.500, - 0.667, - 0.000, - 0.500, - 0.667, - 0.333, - 0.500, - 0.667, - 0.667, - 0.500, - 0.667, - 1.000, - 0.500, - 1.000, - 0.000, - 0.500, - 1.000, - 0.333, - 0.500, - 1.000, - 0.667, - 0.500, - 1.000, - 1.000, - 0.500, - 0.000, - 0.333, - 1.000, - 0.000, - 0.667, - 1.000, - 0.000, - 1.000, - 1.000, - 0.333, - 0.000, - 1.000, - 0.333, - 0.333, - 1.000, - 0.333, - 0.667, - 1.000, - 0.333, - 1.000, - 1.000, - 0.667, - 0.000, - 1.000, - 0.667, - 0.333, - 1.000, - 0.667, - 0.667, - 1.000, - 0.667, - 1.000, - 1.000, - 1.000, - 0.000, - 1.000, - 1.000, - 0.333, - 1.000, - 1.000, - 0.667, - 1.000, - 0.333, - 0.000, - 0.000, - 0.500, - 0.000, - 0.000, - 0.667, - 0.000, - 0.000, - 0.833, - 0.000, - 0.000, - 1.000, - 0.000, - 0.000, - 0.000, - 0.167, - 0.000, - 0.000, - 0.333, - 0.000, - 0.000, - 0.500, - 0.000, - 0.000, - 0.667, - 0.000, - 0.000, - 0.833, - 0.000, - 0.000, - 1.000, - 0.000, - 0.000, - 0.000, - 0.167, - 0.000, - 0.000, - 0.333, - 0.000, - 0.000, - 0.500, - 0.000, - 0.000, - 0.667, - 0.000, - 0.000, - 0.833, - 0.000, - 0.000, - 1.000, - 0.000, - 0.000, - 0.000, - 0.143, - 0.143, - 0.143, - 0.857, - 0.857, - 0.857, - 1.000, - 1.000, - 1.000, - ] - ) - .astype(np.float32) - .reshape(-1, 3) -) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_P_.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_P_.py deleted file mode 100644 index 1abc02590c240377177d4ac12fe4848720e24959..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_P_.py +++ /dev/null @@ -1,5 +0,0 @@ -from .T_S_I_V_ import table_T_S_I_V_ - - -class table_T_S_I_P_(table_T_S_I_V_): - pass diff --git a/spaces/cihyFjudo/fairness-paper-search/Discover the Best of Spain with Caballero Rivista 11.md b/spaces/cihyFjudo/fairness-paper-search/Discover the Best of Spain with Caballero Rivista 11.md deleted file mode 100644 index e3a9d18cfcf9517e7beaee4fd5be4f68e5f041c8..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Discover the Best of Spain with Caballero Rivista 11.md +++ /dev/null @@ -1,5 +0,0 @@ -
    -

    In 1989 the Caballero High Top came out. The shoe is a bulky high top, a style that was popular at the time (Nike Jordan 1, Dunk High, Airwalk Prototype, etc.), and it maintains the dragon theme set by Caballero's decks at Powell.

    -

    caballero rivista 11


    Download File >>> https://tinurli.com/2uwjW4



    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Serial-Key-For-Easy-Worship-2009-Imanberwix-Extra-Quality.md b/spaces/cihyFjudo/fairness-paper-search/Serial-Key-For-Easy-Worship-2009-Imanberwix-Extra-Quality.md deleted file mode 100644 index 1f80660aded191356241c4783d51c5bbe6447b0e..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Serial-Key-For-Easy-Worship-2009-Imanberwix-Extra-Quality.md +++ /dev/null @@ -1,108 +0,0 @@ -## Serial Key For Easy Worship 2009 imanberwix - - - - - - ![Serial Key For Easy Worship 2009 Imanberwix \[Extra Quality\]](https://gurucrack.org/wp-content/uploads/2021/10/EasyWorship-Crack1.png) - - - - - -**Download >>> [https://venemena.blogspot.com/?download=2txRfG](https://venemena.blogspot.com/?download=2txRfG)** - - - - - - - - - - - - I can help you with writing an article with SEO optimization and HTML formatting for the keyword "Serial Key For Easy Worship 2009 imanberwix". Here is a possible title and article: - -# How to Get Serial Key For Easy Worship 2009 imanberwix - - - -Easy Worship 2009 is a software that allows you to create and display presentations for church services, conferences, and other events. It has features such as song lyrics, Bible verses, video clips, images, and more. However, to use Easy Worship 2009, you need a serial key that activates the software and unlocks all its functions. - - - -If you are looking for a serial key for Easy Worship 2009 imanberwix, you have come to the right place. In this article, we will show you how to get a serial key for Easy Worship 2009 imanberwix in a few simple steps. - - - -## What is imanberwix? - - - -Imanberwix is a website that provides serial keys for various software products, including Easy Worship 2009. It claims to offer genuine and working serial keys that can be used to activate the software without any hassle. However, imanberwix is not an official or authorized source of serial keys. It is a pirated website that may contain viruses, malware, or other harmful content. Therefore, we do not recommend using imanberwix or any other similar website to get serial keys for Easy Worship 2009 or any other software. - - - -## What are the risks of using imanberwix? - - - -Using imanberwix or any other pirated website to get serial keys for Easy Worship 2009 or any other software may expose you to several risks, such as: - - - -- Legal issues: Using pirated software is illegal and may violate the intellectual property rights of the software developers. You may face legal consequences such as fines or lawsuits if you are caught using pirated software. - -- Security issues: Downloading or installing pirated software may infect your computer with viruses, malware, spyware, ransomware, or other malicious programs that may damage your system, steal your data, or compromise your privacy. - -- Performance issues: Using pirated software may cause errors, crashes, glitches, or compatibility issues with your system or other software. You may also miss out on updates, patches, bug fixes, or new features that are available only for legitimate users of the software. - -- Ethical issues: Using pirated software is unfair and disrespectful to the software developers who invest their time, money, and effort to create and maintain the software. You may also deprive them of their rightful income and support that they deserve for their work. - - - -## How to get serial key for Easy Worship 2009 legally? 
- - - -The best and safest way to get a serial key for Easy Worship 2009 is to buy it from the official website of the software developer. By doing so, you will get a valid and authentic serial key that will activate the software and enable you to use all its features. You will also get access to updates, support, and customer service from the software developer. Moreover, you will support the software developer and respect their intellectual property rights. - - - -To buy a serial key for Easy Worship 2009 from the official website of the software developer, follow these steps: - - - -1. Go to [https://www.easyworship.com/](https://www.easyworship.com/). - -2. Click on "Buy Now" at the top right corner of the page. - -3. Select "EasyWorship 7" from the list of products. - -4. Choose your preferred subscription plan (monthly or annual) and click on "Add to Cart". - -5. Enter your billing information and payment method and click on "Place Order". - -6. You will receive an email confirmation with your serial key and download link for EasyWorship 7. - -7. Download and install EasyWorship 7 on your computer using the download link. - -8. Enter your serial key when prompted during the installation process. - -9. Enjoy using EasyWorship 7 with all its features. - - - -## Conclusion - - - -In this article, we have shown you how to get a serial key for Easy Worship 2009 imanberwix. We have also explained why using imanberwix - - dfd1c89656 - - - - - diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/binkdsp.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/binkdsp.h deleted file mode 100644 index b089a9863fbc22c9213ab81f179e407b029e91c4..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/binkdsp.h +++ /dev/null @@ -1,43 +0,0 @@ -/* - * Bink DSP routines - * Copyright (c) 2009 Konstantin Shishkov - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * Bink DSP routines - */ - -#ifndef AVCODEC_BINKDSP_H -#define AVCODEC_BINKDSP_H - -#include - -#include "config.h" - -typedef struct BinkDSPContext { - void (*idct_put)(uint8_t *dest/*align 8*/, int line_size, int32_t *block/*align 16*/); - void (*idct_add)(uint8_t *dest/*align 8*/, int line_size, int32_t *block/*align 16*/); - void (*scale_block)(const uint8_t src[64]/*align 8*/, uint8_t *dst/*align 8*/, int linesize); - void (*add_pixels8)(uint8_t *av_restrict pixels, int16_t *block, int line_size); -} BinkDSPContext; - -void ff_binkdsp_init(BinkDSPContext *c); - -#endif /* AVCODEC_BINKDSP_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/jpeg2000dwt.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/jpeg2000dwt.h deleted file mode 100644 index 718d183ac159e1b6c07043d1ab286d7e1da24a75..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/jpeg2000dwt.h +++ /dev/null @@ -1,68 +0,0 @@ -/* - * Discrete wavelet transform - * Copyright (c) 2007 Kamil Nowosad - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_JPEG2000DWT_H -#define AVCODEC_JPEG2000DWT_H - -/** - * @file - * Discrete wavelet transform - */ - -#include - -#define FF_DWT_MAX_DECLVLS 32 ///< max number of decomposition levels -#define F_LFTG_K 1.230174104914001f -#define F_LFTG_X 0.812893066115961f - -enum DWTType { - FF_DWT97, - FF_DWT53, - FF_DWT97_INT, - FF_DWT_NB -}; - -typedef struct DWTContext { - /// line lengths { horizontal, vertical } in consecutive decomposition levels - int linelen[FF_DWT_MAX_DECLVLS][2]; - uint8_t mod[FF_DWT_MAX_DECLVLS][2]; ///< coordinates (x0, y0) of decomp. levels mod 2 - uint8_t ndeclevels; ///< number of decomposition levels - uint8_t type; ///< 0 for 9/7; 1 for 5/3 - int32_t *i_linebuf; ///< int buffer used by transform - float *f_linebuf; ///< float buffer used by transform -} DWTContext; - -/** - * Initialize DWT. 
- * @param s DWT context - * @param border coordinates of transformed region {{x0, x1}, {y0, y1}} - * @param decomp_levels number of decomposition levels - * @param type 0 for DWT 9/7; 1 for DWT 5/3 - */ -int ff_jpeg2000_dwt_init(DWTContext *s, int border[2][2], - int decomp_levels, int type); - -int ff_dwt_encode(DWTContext *s, void *t); -int ff_dwt_decode(DWTContext *s, void *t); - -void ff_dwt_destroy(DWTContext *s); - -#endif /* AVCODEC_JPEG2000DWT_H */ diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download BlackRock Aladdin and Join the Collective Intelligence of 55000 Investment Professionals.md b/spaces/congsaPfin/Manga-OCR/logs/Download BlackRock Aladdin and Join the Collective Intelligence of 55000 Investment Professionals.md deleted file mode 100644 index 137e86bb5fcfff2d5ad3838e50d2cbef475523c7..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download BlackRock Aladdin and Join the Collective Intelligence of 55000 Investment Professionals.md +++ /dev/null @@ -1,131 +0,0 @@ - -

    How to Download BlackRock Aladdin: A Guide for Investment Professionals

    -

    If you are an investment professional looking for a tech platform that can help you manage your portfolio across public and private markets, you might have heard of BlackRock Aladdin. But what is it exactly, and how can you download it? In this article, we will explain what BlackRock Aladdin is, how to access it, how to download it, and how to use it.

    -

    download blackrock aladdin


    Download Zip >> https://urlca.com/2uO9E8



    -

    What is BlackRock Aladdin and Why You Need It

    -

    Aladdin is a tech platform that unifies the investment management process

    -

    BlackRock Aladdin is a tech platform that unifies the investment management process across public and private markets. It is developed by BlackRock Solutions, the risk management division of BlackRock, Inc., the world's largest asset manager. As of 2020, Aladdin managed $21.6 trillion in assets, which was about 10% of the world's financial assets.

    -

    Aladdin stands for Asset, Liability and Debt and Derivative Investment Network. It is an electronic system that handles every aspect of the investment process, from portfolio construction and trading to operations, compliance, accounting, and risk analysis. It uses a common data language to provide a holistic view of your portfolio and the market, and it leverages artificial intelligence, machine learning, and big data to generate insights and recommendations for your investment decisions.

    -

    Aladdin provides insights, scale, and efficiency for your portfolio

    -

    By using BlackRock Aladdin, you can benefit from several advantages for your portfolio. Some of them are:

    -
      -
    • Insights: You can access sophisticated risk analytics and reporting tools that help you understand the drivers of risk and return in your portfolio. You can also get market insights and research from BlackRock's experts and analysts.
    • -
    • Scale: You can manage your portfolio across multiple asset classes, geographies, strategies, and time horizons. You can also leverage BlackRock's global network of partners and providers to access data, liquidity, and execution.
    • -
    • Efficiency: You can streamline your workflow and automate your tasks with comprehensive portfolio management tools. You can also reduce errors and costs by using a single platform with standardized data and processes.
    • -
    -

    How to Access BlackRock Aladdin

    -

    Aladdin Enterprise: an end-to-end portfolio management software

    -

    Aladdin Enterprise is the core product of BlackRock Aladdin. It is an end-to-end portfolio management software that combines risk analytics with portfolio management tools, trading, operations, compliance, and accounting tools on a single platform. It is suitable for asset managers, asset servicers, insurers, pension funds, and private markets.

    -

    To access Aladdin Enterprise, you need to contact BlackRock to request a demo or a subscription. You can fill out a form on their website or call their sales team at +1 (212) 810-5300. Once you have a subscription, you can log in to the Aladdin website or app with your credentials.

    -

    Aladdin Risk: a risk analytics and reporting tool

    -

    Aladdin Risk is a risk analytics and reporting tool that helps you measure the risk and performance of your portfolio across multiple dimensions. You can also create custom reports and dashboards to communicate your risk profile and strategy. It is suitable for asset owners, asset managers, wealth managers, and financial advisors.

    -

    To access Aladdin Risk, you need to contact BlackRock to request a demo or a subscription. You can fill out a form on their website or call their sales team at +1 (212) 810-5300. Once you have a subscription, you can log in to the Aladdin website or app with your credentials.

    -

    -

    Aladdin Accounting: an accounting and reporting solution

    -

    Aladdin Accounting is an accounting and reporting solution that helps you manage your books and records, produce financial statements, and comply with regulatory requirements. It supports multiple accounting standards, currencies, and asset classes. It is suitable for asset managers, asset servicers, insurers, pension funds, and private markets.

    -

    To access Aladdin Accounting, you need to contact BlackRock to request a demo or a subscription. You can fill out a form on their website or call their sales team at +1 (212) 810-5300. Once you have a subscription, you can log in to the Aladdin website or app with your credentials.

    -

    Aladdin Wealth: a wealth management platform

    -

    Aladdin Wealth is a wealth management platform that helps you deliver personalized advice and solutions to your clients. It combines portfolio analytics, risk management, financial planning, and client communication tools on a single platform. It is suitable for wealth managers, financial advisors, banks, and broker-dealers.

    -

    To access Aladdin Wealth, you need to contact BlackRock to request a demo or a subscription. You can fill out a form on their website or call their sales team at +1 (212) 810-5300. Once you have a subscription, you can log in to the Aladdin website or app with your credentials.

    -

    How to Download BlackRock Aladdin

    -

    Contact BlackRock to request a demo or a subscription

    -

    The first step to download BlackRock Aladdin is to contact BlackRock to request a demo or a subscription. Depending on the product you are interested in, you can fill out a form on their website or call their sales team at +1 (212) 810-5300. You will need to provide some information about yourself and your organization, such as your name, email address, phone number, company name, role, assets under management, and investment objectives.

    -

    BlackRock will then contact you to schedule a demo or discuss the subscription details. They will also provide you with the pricing and contract terms for the product you want to use.

    -

    Log in to the Aladdin website or app with your credentials

    -

    Once you have a subscription, you will receive an email from BlackRock with your login credentials and instructions on how to access the Aladdin website or app. You can use any web browser or device to log in to the Aladdin website or download the Aladdin app from the App Store or Google Play. You will need to enter your username and password to access the platform.

    -

    Download the Aladdin software or access it online

    -

    After logging in to the Aladdin website or app, you can choose to download the Aladdin software or access it online. The download option allows you to install the software on your computer and use it offline. The online option allows you to use the software through your web browser without installing anything.

    -

    To download the Aladdin software, you need to click on the download icon in the top right corner of the screen and follow the instructions. You will need a Windows 10 or later operating system and at least 8 GB of RAM on your computer. The download process may take several minutes depending on your internet speed.

    -

    To access the Aladdin software online, you need to click on the launch icon on the top right corner of the screen and choose the product you want to use. You will be redirected to a new tab where you can use the software through your web browser. You will need to have a stable internet connection and enable JavaScript and cookies on your browser.

    -

    How to Use BlackRock Aladdin

    -

    Explore the features and functions of Aladdin

    -

    Once you have downloaded or accessed the Aladdin software, you can start exploring its features and functions. Depending on the product you are using, you will see different menus and tabs on the screen that allow you to navigate through various modules and tools. Some of the common features and functions of Aladdin are:

    -
      -
    • Portfolio: You can view and manage your portfolio across multiple asset classes, strategies, and time horizons. You can also create and modify portfolio models, scenarios, and benchmarks.
    • -
    • Risk: You can measure and monitor the risk and performance of your portfolio across multiple dimensions, such as market, credit, liquidity, and operational risk. You can also generate risk reports and dashboards to communicate your risk profile and strategy.
    • -
    • Trading: You can execute trades across various markets and instruments, such as equities, fixed income, derivatives, and currencies. You can also access liquidity and execution services from BlackRock's partners and providers.
    • -
    • Operations: You can handle the operational aspects of your portfolio, such as settlement, reconciliation, custody, and reporting. You can also automate your workflows and tasks with Aladdin's tools.
    • -
    • Compliance: You can ensure that your portfolio complies with the regulatory and contractual requirements of your jurisdiction and clients. You can also set up and monitor compliance rules and alerts with Aladdin's tools.
    • -
    • Accounting: You can manage your books and records, produce financial statements, and comply with accounting standards. You can also access accounting data and reports from Aladdin's tools.
    • -
    • Wealth: You can deliver personalized advice and solutions to your clients based on their goals, preferences, and risk tolerance. You can also use Aladdin's tools to create financial plans, proposals, and portfolios for your clients.
    • -
    -

    Customize your settings and preferences

    -

    After exploring the features and functions of Aladdin, you can customize your settings and preferences to suit your needs. You can access the settings menu by clicking on the gear icon in the top right corner of the screen. Some of the settings you can customize are:

    -
      -
    • Language: You can choose the language you want to use for the Aladdin interface. Aladdin supports several languages, such as English, Chinese, Japanese, French, German, Spanish, Italian, Portuguese, Korean, Russian, Arabic, Hebrew, Turkish, Polish, Dutch, Swedish, Norwegian, Danish, Finnish, Greek, Hungarian, Romanian, Slovakian, Czech, Croatian, Slovenian, Serbian, Bulgarian.
    • -
    • Currency: You can choose the currency you want to use for your portfolio and reports. Aladdin supports multiple currencies from different regions and countries.
    • -
    • Time zone: You can choose the time zone you want to use for your portfolio and reports. Aladdin supports multiple time zones from different regions and countries.
    • -
    • Theme: You can choose the theme you want to use for the Aladdin interface. Aladdin offers two themes: light and dark.
    • -
    • Notifications: You can choose the notifications you want to receive from Aladdin. Aladdin offers various types of notifications, such as alerts, messages, updates, reminders, newsfeeds.
    • -
    -

    Connect with other Aladdin users and experts

    -

    The last step to use BlackRock Aladdin is to connect with other Aladdin users and experts. By doing so, you can learn from their experiences, share your feedback, ask questions, and get support. You can also access the latest news and updates about Aladdin and BlackRock. Some of the ways you can connect with other Aladdin users and experts are:

    -
      -
    • Aladdin Community: You can join the Aladdin Community, an online platform where you can interact with other Aladdin users and experts. You can post comments, questions, answers, tips, and best practices on various topics related to Aladdin. You can also browse through the existing posts and learn from others.
    • -
    • Aladdin Academy: You can enroll in the Aladdin Academy, an online learning platform where you can access courses and tutorials on how to use Aladdin. You can also earn certificates and badges to showcase your skills and knowledge.
    • -
    • Aladdin Events: You can attend the Aladdin Events, a series of webinars, workshops, conferences, and seminars where you can hear from BlackRock's leaders, experts, and partners about the latest trends and developments in the investment industry. You can also network with other Aladdin users and experts.
    • -
    • Aladdin Support: You can contact the Aladdin Support team, a group of dedicated professionals who are available 24/7 to help you with any issues or questions you may have about Aladdin. You can reach them by phone, email, chat, or ticket.
    • -
    -

    Conclusion

    -

    BlackRock Aladdin is a tech platform that unifies the investment management process across public and private markets. It offers various products and services that cater to different types of investment professionals, such as asset managers, asset owners, wealth managers, and financial advisors. To download BlackRock Aladdin, you need to contact BlackRock to request a demo or a subscription, log in to the Aladdin website or app with your credentials, and download the software or access it online. To use BlackRock Aladdin, you need to explore its features and functions, customize your settings and preferences, and connect with other Aladdin users and experts.

    -

    FAQs

    -

    What are the system requirements for downloading BlackRock Aladdin?

    -

    To download BlackRock Aladdin, you need a Windows 10 or later operating system and at least 8 GB of RAM on your computer. You also need a stable internet connection and JavaScript and cookies enabled in your browser.

    -

    How much does BlackRock Aladdin cost?

    -

    The cost of BlackRock Aladdin depends on the product you want to use, the number of users you have, the assets under management you have, and the contract terms you agree on. You need to contact BlackRock to get a quote for your specific needs.

    -

    Is BlackRock Aladdin secure?

    -

    Yes, BlackRock Aladdin is secure. It uses advanced encryption, authentication, authorization, and auditing technologies to protect your data and transactions. It also complies with the highest standards of cybersecurity and data privacy regulations.

    -

    Can I use BlackRock Aladdin on my mobile device?

    -

    Yes, you can use BlackRock Aladdin on your mobile device. You can download the Aladdin app from the App Store or Google Play and log in with your credentials. You can also access the Aladdin website through your mobile browser.

    -

    Can I integrate BlackRock Aladdin with other platforms or tools?

    -

    Yes, you can integrate BlackRock Aladdin with other platforms or tools. Aladdin offers various APIs (application programming interfaces) that allow you to connect with external data sources, systems, or applications. You can also use Aladdin's tools to import or export data from or to other platforms or tools.
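
    For illustration only, here is a minimal Python sketch of what pulling portfolio data from a vendor REST API into your own tools might look like. The host URL, authentication scheme, endpoint path, and response fields below are hypothetical placeholders, not documented Aladdin APIs; the real integration details are only available to licensed clients through BlackRock.

```python
# Hypothetical sketch of pulling positions from a vendor REST API.
# The base URL, auth header, endpoint path, and response fields are
# placeholders, NOT documented BlackRock Aladdin endpoints.
import requests

API_BASE = "https://api.example-aladdin-host.com/v1"   # placeholder host
API_TOKEN = "YOUR_API_TOKEN"                           # token issued by the vendor

def fetch_positions(portfolio_id: str) -> list[dict]:
    """Return the list of positions for one portfolio (hypothetical schema)."""
    response = requests.get(
        f"{API_BASE}/portfolios/{portfolio_id}/positions",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["positions"]

if __name__ == "__main__":
    for position in fetch_positions("DEMO-PORTFOLIO"):
        print(position.get("ticker"), position.get("market_value"))
```

    In practice you would replace the placeholder host, token, and field names with whatever your contract and the vendor's API documentation specify, and feed the returned records into your own risk or reporting tools.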

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Game of Thrones Season 5 English Subtitles Download - 720p Bluray Edition.md b/spaces/congsaPfin/Manga-OCR/logs/Game of Thrones Season 5 English Subtitles Download - 720p Bluray Edition.md deleted file mode 100644 index 7b6ce1d15e9a38d6cb19a5d3103f9a07072e87a7..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Game of Thrones Season 5 English Subtitles Download - 720p Bluray Edition.md +++ /dev/null @@ -1,153 +0,0 @@ - -

    How to Download Game of Thrones Season 5 Subtitles in English

    -

    Game of Thrones is one of the most popular and acclaimed TV shows of all time. The fantasy drama series is based on the novels by George R.R. Martin and follows the lives and struggles of various noble families in the fictional continent of Westeros. The show is known for its complex plot, rich characters, stunning visuals, and shocking twists.

    -

    However, if you are not a native English speaker, or if you have trouble understanding some of the accents and dialogues, you might need subtitles to fully enjoy the show. Subtitles can help you catch every detail and nuance of the story, as well as improve your language skills. In this article, we will show you how to download Game of Thrones Season 5 subtitles in English, as well as how to download the show in high-quality 720p BluRay format.

    -

    game of thrones season 5 subtitles english download 720p bluray


    Download File > https://urlca.com/2uO9B0



    -

    What is Game of Thrones Season 5 About?

    -

    Game of Thrones Season 5 is the fifth season of the show and consists of ten episodes. It aired from April 12, 2015, to June 14, 2015. The season continues the storylines from the previous seasons and introduces new characters and locations. Here is a brief summary of the main plot and characters of Game of Thrones Season 5:

    -
    • In King's Landing, Cersei Lannister faces a power struggle with the Tyrells and the religious fanatics known as the Sparrows. She also has to deal with the threat of Daenerys Targaryen, who claims the Iron Throne as her birthright.
    • In Meereen, Daenerys Targaryen tries to rule as a benevolent queen, but faces resistance from the former slave masters and a mysterious group called the Sons of the Harpy. She also has to cope with her growing dragons, who become more dangerous and uncontrollable.
    • In the North, Jon Snow is elected as the new Lord Commander of the Night's Watch, but faces opposition from some of his brothers. He also has to deal with the arrival of Stannis Baratheon, who seeks his support in his claim for the Iron Throne.
    • In Winterfell, Sansa Stark is forced to marry Ramsay Bolton for from the drop-down menu. For example, select "English" if you want to download subtitles in English.
    • Click on the download button next to the subtitle file that matches your video file. For example, if your video file is in 720p BluRay quality, choose the subtitle file that has "720p.BluRay" in its name.
    • Save the subtitle file to your device and extract it if it is in a compressed format.

      Subdl.com is a user-friendly and fast website that offers high-quality subtitles for Game of Thrones Season 5. You can also rate and comment on the subtitles that you download from this website.

      -

      Subscene.com

      -

      Subscene.com is another popular and reliable website for downloading subtitles for movies and TV shows. It has a huge database of subtitles in various languages and formats. You can easily find and download Game of Thrones Season 5 subtitles in English from this website by following these steps:

      -
      1. Go to subscene.com and type "Game of Thrones Season 5" in the search box.
      2. Select the episode that you want to download subtitles for from the list of results.
      3. Choose the language that you want to download subtitles for from the list of available languages. For example, click on "English" if you want to download subtitles in English.
      4. Click on the subtitle file that matches your video file. For example, if your video file is in 720p BluRay quality, choose the subtitle file that has "720p.BluRay" in its name.
      5. Click on the download button and save the subtitle file to your device.

      Subscene.com is a simple and easy-to-use website that offers high-quality subtitles for Game of Thrones Season 5. You can also upload and request subtitles on this website.

      -

      Opensubtitles.org

      -

      Opensubtitles.org is one of the oldest and most trusted websites for downloading subtitles for movies and TV shows. It has a massive database of subtitles in various languages and formats. You can easily find and download Game of Thrones Season 5 subtitles in English from this website by following these steps:

      -
      1. Go to opensubtitles.org and type "Game of Thrones Season 5" in the search box.
      2. Select the episode that you want to download subtitles for from the list of results.
      3. Choose the language that you want to download subtitles for from the list of available languages. For example, click on "English" if you want to download subtitles in English.
      4. Click on the subtitle file that matches your video file. For example, if your video file is in 720p BluRay quality, choose the subtitle file that has "720p.BluRay" in its name.
      5. Click on the download button and save the subtitle file to your device.

      Opensubtitles.org is a comprehensive and reliable website that offers high-quality subtitles for Game of Thrones Season 5. You can also register and login to this website to access more features and benefits.

      -


      -

      How to Add Game of Thrones Season 5 Subtitles to Your Video Player?

      -

      Once you have downloaded Game of Thrones Season 5 subtitles in English, you need to add them to your video player to watch the show with subtitles. The process of adding subtitles to your video player may vary depending on the type of video player that you use. However, here are some general steps and tips to add Game of Thrones Season 5 subtitles to your video player:

      -
      • Rename the subtitle file so that it has the same name as your video file, except for the extension. For example, if your video file is named "Game.of.Thrones.S05E01.720p.BluRay.x264.mkv", rename your subtitle file as "Game.of.Thrones.S05E01.720p.BluRay.x264.srt". This will help your video player recognize and load the subtitle file automatically (see the small sketch after this list).
      • Place the subtitle file in the same folder as your video file, or in a subfolder named "Subs" or "Subtitles". This will help your video player locate and load the subtitle file easily.
      • Open your video player and play your video file. If your video player supports subtitles, it should display them on the screen. If not, you may need to enable or select subtitles from the settings or menu of your video player.
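
      To make the renaming step concrete, here is a minimal Python sketch that copies a downloaded subtitle next to its video under the matching name. The file names are only examples, and it assumes both files sit in the folder you run the script from.

```python
# Minimal sketch: give a downloaded .srt file the same base name as the video file
# so that most players pick it up automatically. File names here are only examples.
import shutil
from pathlib import Path

video = Path("Game.of.Thrones.S05E01.720p.BluRay.x264.mkv")
subtitle = Path("downloaded_subtitle.srt")

target = video.with_suffix(".srt")  # Game.of.Thrones.S05E01.720p.BluRay.x264.srt
shutil.copy(subtitle, target)       # copy (or use subtitle.rename(target) to move) next to the video
print(f"Subtitle saved as {target}")
```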

      Different video players may have different ways of adding or selecting subtitles. Here are some specific steps and tips for some of the most common video players:

      -

      VLC Media Player

      -

      VLC Media Player is one of the most popular and versatile video players that supports subtitles. You can add or select subtitles in VLC Media Player by following these steps:

      -
      1. Open VLC Media Player and play your video file.
      2. Click on the "Subtitles" menu and select "Add Subtitle File".
      3. Browse and select the subtitle file that you have downloaded and renamed.
      4. The subtitles should appear on the screen. You can adjust the size, position, and synchronization of the subtitles from the "Subtitles" menu.

      VLC Media Player is a free and open-source video player that can play almost any video format. You can download VLC Media Player from videolan.org.

      -

      Windows Media Player

      -

      Windows Media Player is the default video player for Windows operating systems. It supports subtitles, but you may need to install a codec or a plugin to enable them. You can add or select subtitles in Windows Media Player by following these steps:

      -
      1. Download and install a codec or a plugin that supports subtitles, such as DirectVobSub or K-Lite Codec Pack. You can find them online from reputable sources.
      2. Open Windows Media Player and play your video file.
      3. Right-click on the screen and select "Lyrics, Captions, and Subtitles".
      4. Select "On if Available" or "On Always" to enable subtitles.
      5. The subtitles should appear on the screen. You can adjust the size, position, and synchronization of the subtitles from the settings or menu of your codec or plugin.

      Windows Media Player is a built-in video player that can play most common video formats. You can update Windows Media Player from microsoft.com.

      -

      Other Video Players

      -

      There are many other video players that support subtitles, such as KMPlayer, PotPlayer, GOM Player, MPC-HC, etc. You can find and download them online from reputable sources. The steps and tips to add or select subtitles in these video players may vary depending on their features and settings. However, they usually involve renaming and placing the subtitle file in the same folder as the video file, and enabling or selecting subtitles from the settings or menu of the video player.

      -

      How to Download Game of Thrones Season 5 in 720p BluRay Quality?

      -

      If you want to watch Game of Thrones Season 5 in high-quality 720p BluRay format, you have several options. You can either buy or rent the official DVD or BluRay discs, which come with high-quality video and audio. You can also stream the show online from platforms like HBO Max, Amazon Prime Video, or Netflix, which offer high-quality video and audio as well. However, if you prefer to download the show and watch it offline, you will need to find and download the show separately. Fortunately, there are many websites and sources that offer Game of Thrones Season 5 in 720p BluRay quality for free. Here are some of the best ones:

      -

      Torrent Sites

      -

      Torrent sites are one of the most popular and convenient ways to download movies and TV shows in high-quality formats. They use peer-to-peer technology to share files among users. You can easily find and download Game of Thrones Season 5 in 720p BluRay quality from torrent sites by following these steps:

      -
      1. Download and install a torrent client, such as BitTorrent, uTorrent, qBittorrent, etc. You can find them online from reputable sources.
      2. Go to a torrent site, such as The Pirate Bay, RARBG, 1337x, etc. You can find them online from reputable sources.
      3. Type "Game of Thrones Season 5 720p BluRay" in the search box and press enter.
      4. Select the torrent file that has the most seeders and leechers. Seeders are users who have the complete file and share it with others. Leechers are users who are downloading the file from others. The more seeders and leechers a torrent file has, the faster and more reliable it is.
      5. Click on the download button or magnet link to download the torrent file to your device.
      6. Open your torrent client and add the torrent file to start downloading Game of Thrones Season 5 in 720p BluRay quality.

      Torrent sites are fast and easy to use, but they also have some risks and drawbacks. They may contain viruses, malware, or fake files that can harm your device or data. They may also violate the copyright or legal rights of the creators or owners of the show. You should use torrent sites at your own risk and discretion, and respect the intellectual property of the show.

      -

      Streaming Sites

      -

      Streaming sites are another popular and convenient way to download movies and TV shows in high-quality formats. They use online servers to host and stream files to users. You can easily find and download Game of Thrones Season 5 in 720p BluRay quality from streaming sites by following these steps:

      -
      1. Go to a streaming site, such as Fmovies, Putlocker, Solarmovie, etc. You can find them online from reputable sources.
      2. Type "Game of Thrones Season 5" in the search box and press enter.
      3. Select the episode that you want to download from the list of results.
      4. Choose the quality that you want to download from the available options. For example, choose "720p" if you want to download Game of Thrones Season 5 in 720p BluRay quality.
      5. Click on the play button or download link to start downloading Game of Thrones Season 5 in 720p BluRay quality.

      Streaming sites are simple and easy to use, but they also have some risks and drawbacks. They may contain pop-ups, ads, or redirects that can annoy or harm your device or data. They may also have low-quality or broken links that can affect your viewing or downloading experience. They may also violate the copyright or legal rights of the creators or owners of the show. You should use streaming sites at your own risk and discretion, and respect the intellectual property of the show.

      -

      Direct Download Links

      -

      Direct download links are another popular and convenient way to download movies and TV shows in high-quality formats. They use online servers to host and share files directly with users. You can easily find and download Game of Thrones Season 5 in 720p BluRay quality from direct download links by following these steps:

      -
      1. Go to a direct download link site, such as Pahe.in, PSArips.com, MkvCage.ws, etc. You can find them online from reputable sources.
      2. Type "Game of Thrones Season 5" in the search box and press enter.
      3. Select the episode that you want to download from the list of results.
      4. Choose the quality that you want to download from the available options. For example, choose "720p" if you want to download Game of Thrones Season 5 in 720p BluRay quality.
      5. Click on the download button or link to start downloading Game of Thrones Season 5 in 720p BluRay quality.

      Direct download link sites are fast and easy to use, but they also have some risks and drawbacks. They may have limited storage or bandwidth that can affect your downloading speed or availability. They may also require registration or payment to access some files or features. They may also violate the copyright or legal rights of the creators or owners of the show. You should use direct download link sites at your own risk and discretion, and respect the intellectual property of the show.

      -

      Conclusion

      -

      In this article, we have shown you how to download Game of Thrones Season 5 subtitles in English, as well as how to download the show in high-quality 720p BluRay format. We have also provided you with some of the best websites and sources to find and download Game of Thrones Season 5 subtitles and videos for free. We hope that this article has been helpful and informative for you. If you are a fan of Game of Thrones, you don't want to miss this season. It is full of drama, action, romance, and surprises that will keep you hooked and entertained. So, what are you waiting for? Download Game of Thrones Season 5 subtitles and videos now and enjoy watching one of the best TV shows of all time.

      -

      Frequently Asked Questions

      -

      Here are some of the most frequently asked questions about Game of Thrones Season 5 subtitles and videos:

      -

      Q: How many languages are available for Game of Thrones Season 5 subtitles?

      -

      A: Game of Thrones Season 5 subtitles are available in more than 40 languages, including Arabic, Chinese, French, German, Hindi, Spanish, Turkish, etc. You can find them on various websites and sources that we have mentioned in this article.

      -

      Q: How can I sync Game of Thrones Season 5 subtitles with my video file?

      -

      A: Sometimes, Game of Thrones Season 5 subtitles may not be perfectly synced with your video file due to different frame rates or encoding methods. In that case, you can use a subtitle editor tool such as Subtitle Edit, Aegisub, or Subtitle Workshop to adjust the timing and synchronization of the subtitles. You can find and download these tools online from reputable sources.
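
      If you only need a simple fixed shift rather than a full editor, the following is a rough Python sketch that moves every timestamp in an .srt file by a constant offset. The file names and the 1.5-second offset are just examples, and dedicated subtitle editors handle trickier cases (frame-rate differences, overlapping cues) much better.

```python
# Rough sketch: shift all timestamps in an .srt file by a fixed offset (here +1.5 s).
# File names and the offset are examples; subtitle editors handle harder cases better.
import re
from datetime import timedelta

OFFSET = timedelta(seconds=1.5)
TIME_RE = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")  # matches e.g. 00:01:02,345

def shift(match):
    h, m, s, ms = (int(g) for g in match.groups())
    t = timedelta(hours=h, minutes=m, seconds=s, milliseconds=ms) + OFFSET
    total_ms = max(0, int(t.total_seconds() * 1000))  # clamp so times never go negative
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

with open("episode.srt", encoding="utf-8") as f:
    text = f.read()

with open("episode_shifted.srt", "w", encoding="utf-8") as f:
    f.write(TIME_RE.sub(shift, text))
```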

      -

      Q: How can I watch Game of Thrones Season 5 in 4K Ultra HD quality?

      -

      A: Game of Thrones Season 5 is not officially available in 4K Ultra HD quality, as it was not filmed or released in that format. However, some fans have created unofficial 4K versions of the show by using upscaling and enhancement techniques. You can find and download these versions from some torrent or direct download link sites that we have mentioned in this article. However, be aware that these versions may not be authentic or legal, and may have some quality or compatibility issues.

      -

      Q: How can I watch Game of Thrones Season 5 with commentary or behind-the-scenes features?

      -

      A: Game of Thrones Season 5 comes with commentary and behind-the-scenes features on the official DVD or BluRay discs, as well as on some streaming platforms like HBO Max or Amazon Prime Video. You can access these features by selecting the appropriate options from the settings or menu of your video player. Alternatively, you can find and download these features separately from some websites and sources that we have mentioned in this article.

      -

      Q: How can I watch Game of Thrones Season 5 with other fans or friends?

      -

      A: Game of Thrones Season 5 is a great show to watch with other fans or friends, as it can spark interesting discussions and reactions. You can watch Game of Thrones Season 5 with other fans or friends by using a video chat or streaming service, such as Zoom, Skype, Discord, Netflix Party, etc. You can also join online forums or communities, such as Reddit, Quora, Facebook, etc., to share your opinions and feedback on the show.

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Princess Drawing Challenge Can You Draw All the Disney Princesses?.md b/spaces/congsaPfin/Manga-OCR/logs/Princess Drawing Challenge Can You Draw All the Disney Princesses?.md deleted file mode 100644 index 7b93680bb7d9d10a8943f7ef36f9c58b06d3279b..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Princess Drawing Challenge Can You Draw All the Disney Princesses?.md +++ /dev/null @@ -1,118 +0,0 @@ - -

      Princess Drawing: How to Draw a Beautiful and Cute Princess

      -

      Do you love princesses? Do you want to learn how to draw your own princess? If you answered yes, then this article is for you! In this article, you will learn what a princess drawing is, why people like to draw princesses, what are the benefits of drawing princesses, and how to draw a princess step by step. You will also get some tips and tricks for drawing a princess, and a conclusion with a call to action. So, grab your pencil and paper, and let's get started!

      -

      Introduction

      -

      What is a princess drawing?

      -

      A princess drawing is a type of drawing that depicts a female character who is royalty or has a high social status. A princess drawing usually shows the character wearing a fancy dress, a crown, jewelry, and other accessories. A princess drawing can also show the character in different poses, settings, backgrounds, and situations.

      -

      princess drawing


      Downloadhttps://urlca.com/2uO6on



      -

      Why do people like to draw princesses?

      -

      People like to draw princesses for many reasons. Some of them are:

      -
      • Princesses are beautiful, elegant, graceful, and charming. They have attractive features, such as big eyes, long hair, and smooth skin. They also wear colorful and stylish clothes that make them stand out.
      • Princesses are inspiring, brave, kind, and smart. They have positive qualities, such as courage, compassion, wisdom, and creativity. They also face challenges, overcome obstacles, and achieve their goals.
      • Princesses are fun, imaginative, and diverse. They have different personalities, hobbies, interests, and stories. They also come from different cultures, countries, and times. They can be modern or classic, realistic or fantasy, human or animal.

      What are the benefits of drawing princesses?

      -

      Drawing princesses has many benefits for both children and adults. Some of them are:

      -
      • Drawing princesses improves your drawing skills. You can learn how to draw different shapes, proportions, perspectives, expressions, movements, and details. You can also learn how to use different tools, techniques, colors, and effects.
      • Drawing princesses boosts your creativity. You can use your imagination to create your own princess characters or recreate your favorite ones. You can also experiment with different styles, themes, genres, and scenarios.
      • Drawing princesses enhances your mood. You can express your emotions, feelings, thoughts, and ideas through your drawings. You can also relax, have fun, and enjoy yourself while drawing.

      How to Draw a Princess Step by Step

      -

      Now that you know what a princess drawing is, why people like to draw princesses, and what are the benefits of drawing princesses, let's learn how to draw a princess step by step. In this section, you will learn how to draw a simple and cute cartoon-style princess in seven easy steps. You can follow along with the image below or create your own version.

      (Image: a princess drawing in a cartoon style)

      Step 1: Draw a circle for the head

      First, draw a circle for the head of the princess. You can use a compass, a round object, or your free hand to draw the circle. The circle should be big enough to fit the facial features and the hair of the princess.

      -

      Step 2: Draw guidelines for the face and the body

      -

      Next, draw two vertical lines and two horizontal lines inside the circle to divide it into four equal parts. These lines will help you place the eyes, nose, mouth, and ears of the princess. Then, draw a curved line below the circle to mark the chin and the jawline of the princess. The curved line should be slightly longer than the diameter of the circle.

      -

      After that, draw a long and narrow oval below the head to represent the body of the princess. The oval should be about three times as long as the head and slightly wider at the bottom. Then, draw a horizontal line across the middle of the oval to mark the waist of the princess.

      -

      -

      Step 3: Draw the eyes, nose, mouth, and ears

      -

      Now, draw two big and round eyes on the horizontal line that divides the circle into two halves. The eyes should be slightly apart from each other and close to the vertical line that divides the circle into two halves. Then, draw two small circles inside each eye to represent the pupils and add some eyelashes on the upper and lower eyelids.

      -

      Next, draw a small and curved nose below the eyes and on the vertical line that divides the circle into two halves. The nose should be about halfway between the eyes and the chin. Then, draw a smiling mouth below the nose and slightly to the right of the vertical line. The mouth should be curved upward and have a small gap between the lips.

      -

      Finally, draw two small and curved ears on both sides of the head. The ears should be aligned with the eyes and slightly above them.

      Step 4: Draw the hair and the crown

      -

      Next, draw the hair of the princess. You can choose any hairstyle you like, such as long, short, curly, straight, ponytail, bun, or braid. For this example, we will draw long and wavy hair that covers the ears and reaches the shoulders. To draw the hair, start from the top of the head and draw curved lines that follow the shape of the head and then extend outward. Then, draw more curved lines inside the hair to create some texture and volume.

      -

      After that, draw a crown on top of the head. The crown can be any shape or design you like, such as a tiara, a diadem, a circlet, or a coronet. For this example, we will draw a simple crown that has a band and five points. To draw the crown, start from the center of the head and draw a small circle. Then, draw four more circles on both sides of the first circle, with equal spacing. Then, draw a curved line that connects all the circles to form the band of the crown. Finally, draw a small triangle on top of each circle to form the points of the crown.

      -

      Step 5: Draw the dress and the arms

      -

      Now, draw the dress of the princess. You can choose any style or color you like, such as a ball gown, a mermaid dress, a cocktail dress, or a casual dress. For this example, we will draw a simple and cute pink dress that has a fitted bodice and a flared skirt. To draw the dress, start from the waistline and draw two curved lines that go down and outward to form the skirt of the dress. Then, draw two more curved lines that go up and inward to form the bodice of the dress. Then, draw some horizontal lines on the bodice to create some folds and wrinkles.

      -

      Next, draw the arms of the princess. The arms should be bent at the elbows and have small hands with fingers. To draw the arms, start from the shoulders and draw two curved lines that go down and inward to form the upper arms. Then, draw two more curved lines that go down and outward to form the lower arms. Then, draw two small ovals at the end of each arm to form the hands. Finally, draw some curved lines inside each hand to form the fingers.

      Step 6: Draw the legs and the shoes

      -

      Next, draw the legs and the shoes of the princess. The legs should be straight and have small feet with toes. The shoes can be any type or color you like, such as flats, heels, boots, or sandals. For this example, we will draw simple and cute pink shoes that match the dress. To draw the legs, start from the bottom of the skirt and draw two vertical lines that go down to form the thighs. Then, draw two more vertical lines that go down to form the calves. Then, draw two small ovals at the end of each leg to form the feet. Finally, draw some curved lines inside each foot to form the toes.

      -

      To draw the shoes, start from the feet and draw two curved lines that go up and around to form the upper part of the shoes. Then, draw two more curved lines that go down and around to form the lower part of the shoes. Then, draw some horizontal lines on the shoes to create some straps and buckles.

      -

      Step 7: Add details and colors

      -

      Finally, add some details and colors to your princess drawing. You can add any details you like, such as jewelry, accessories, patterns, or decorations. For this example, we will add some earrings, a necklace, a bracelet, and some flowers on the dress and the hair. To add the details, use small circles, ovals, stars, hearts, or other shapes to create the jewelry and accessories. Then, use small dots, lines, curves, or other shapes to create the patterns or decorations.

      -

      To add the colors, use any colors you like, such as crayons, markers, pencils, paints, or digital tools. For this example, we will use pink for the dress and the shoes, yellow for the hair and the crown, blue for the eyes and the earrings, white for the skin and the necklace, and green for the bracelet and the flowers. To add the colors, fill in the areas with solid colors or gradients. Then, use darker or lighter shades of the same color to create shadows or highlights.

      -

      Tips and Tricks for Drawing a Princess

      -

      Congratulations! You have learned how to draw a princess step by step. But don't stop here! You can improve your princess drawing skills by following these tips and tricks:

      -

      Tip 1: Use references and inspiration

      -

      One of the best ways to learn how to draw a princess is to use references and inspiration from other sources. You can look at pictures of real or fictional princesses online or in books or magazines. You can also watch movies or cartoons that feature princesses or read stories or fairy tales about them. You can learn from their appearance, personality, style, and story.

      -

      However, don't copy them exactly. Use them as a guide or a starting point for your own princess drawing. You can mix and match different elements from different sources or add your own twist to them.

      -

      Tip 2: Experiment with different styles and expressions

      -

      Another way to improve your princess drawing skills is to experiment with different styles and expressions. You can try different types of drawing styles, such as realistic, cartoonish, anime, or manga. You can also try different types of expressions, such as happy, sad, angry, surprised, or bored. You can see how different styles and expressions affect the mood, the tone, and the message of your princess drawing.

      -

      However, don't limit yourself to one style or expression. Use them as a way to explore your creativity and your preferences. You can find your own style and expression that suits your princess drawing.

      Tip 3: Practice and have fun

      -

      The last and most important tip for improving your princess drawing skills is to practice and have fun. Practice makes perfect, so the more you draw, the better you will get. You can practice by drawing different princesses, different poses, different backgrounds, and different situations. You can also practice by drawing from memory, from imagination, or from observation.

      -

      But don't forget to have fun while practicing. Drawing is a form of art and expression, so enjoy the process and the outcome. Don't be afraid to make mistakes or try new things. Don't be too hard on yourself or compare yourself to others. Be proud of your work and share it with others.

      -

      Conclusion

      -

      In conclusion, drawing a princess is a fun and rewarding activity that anyone can do. You can learn what a princess drawing is, why people like to draw princesses, what are the benefits of drawing princesses, and how to draw a princess step by step. You can also improve your princess drawing skills by following some tips and tricks, such as using references and inspiration, experimenting with different styles and expressions, and practicing and having fun.

      -

      So, what are you waiting for? Grab your pencil and paper, and start drawing your own princess today! You will be amazed by what you can create!

      -

      FAQs

      -

      Here are some frequently asked questions about princess drawing:

      -

      Q: How do I draw a realistic princess?

      -

      A: To draw a realistic princess, you need to pay attention to the proportions, the anatomy, the shading, and the details of the character. You can use a reference photo or a model to help you with the realism. You can also use a grid or a ruler to measure the distances and angles of the features. You can also use a light source to create shadows and highlights on the face and the body.

      -

      Q: How do I draw a Disney princess?

      -

      A: To draw a Disney princess, you need to follow the style and the characteristics of the Disney animation. Disney princesses usually have big eyes, small noses, thin lips, round faces, long hair, and slender bodies. They also wear colorful and elegant dresses that match their personalities and stories. You can use a reference image or a video of your favorite Disney princess to help you with the style.

      -

      Q: How do I draw an anime princess?

      -

      A: To draw an anime princess, you need to follow the style and the conventions of the anime genre. Anime princesses usually have large eyes, small noses, cute mouths, oval faces, spiky hair, and curvy bodies. They also wear fashionable and cute outfits that reflect their moods and themes. You can use a reference image or a manga of your favorite anime princess to help you with the style.

      -

      Q: How do I draw a fantasy princess?

      -

      A: To draw a fantasy princess, you need to use your imagination and creativity to create a unique and original character. Fantasy princesses can have any features, clothes, accessories, or backgrounds that you want. They can also belong to any race, culture, or time that you want. You can use a reference image or a book of your favorite fantasy princess to help you with the inspiration.

      -

      Q: How do I draw a cute princess?

      -

      A: To draw a cute princess, you need to use simple shapes, bright colors, and expressive features to create a charming and adorable character. Cute princesses usually have round eyes, small noses, smiling mouths, chubby cheeks, short hair, and petite bodies. They also wear simple and sweet dresses that have patterns or decorations. You can use a reference image or a sticker of your favorite cute princess to help you with the style.

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/The Benefits of Using Xprofile Mod Apk for Instagram Analysis.md b/spaces/congsaPfin/Manga-OCR/logs/The Benefits of Using Xprofile Mod Apk for Instagram Analysis.md deleted file mode 100644 index 74ceb5e3c2e5d4db27bb797f8135ed81bee88806..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/The Benefits of Using Xprofile Mod Apk for Instagram Analysis.md +++ /dev/null @@ -1,201 +0,0 @@ -

      Xprofile Instagram Mod Apk: What Is It and How to Use It?

      Heading / Subheading

      Introduction

      A brief overview of what xprofile is, what it does, and why it is useful for Instagram users.

      What is Xprofile?

      A detailed explanation of what xprofile is, how it works, and what features it offers.

      -

      xprofile instagram mod apk


      DOWNLOAD ✸✸✸ https://urlca.com/2uObGV



      How Xprofile Analyzes Instagram Profiles

      A description of how xprofile collects and displays data about Instagram profiles, such as followers, engagement, reach, stories, and more.

      How Xprofile Helps You Grow Your Instagram Account

      A description of how xprofile helps you optimize your Instagram account, such as finding the best time to post, the best hashtags to use, the best content to create, and more.

      What is Xprofile Mod Apk?

      A detailed explanation of what xprofile mod apk is, how it differs from the original app, and what benefits it offers.

      How Xprofile Mod Apk Unlocks Premium Features

      A description of how xprofile mod apk unlocks premium features that are otherwise paid or limited in the original app, such as unlimited reports, advanced filters, custom themes, and more.

      How Xprofile Mod Apk Is Safe and Secure

      A description of how xprofile mod apk is safe and secure to use, as it does not require root access, does not collect personal data, does not violate Instagram's terms of service, and does not contain malware or viruses.

      -

      How to Download and Install Xprofile Mod Apk?

      A step-by-step guide on how to download and install xprofile mod apk on your Android device.

      Step 1: Enable Unknown Sources

      A instruction on how to enable unknown sources on your device settings to allow the installation of third-party apps.

      Step 2: Download Xprofile Mod Apk File

      A instruction on how to download the xprofile mod apk file from a reliable source.

      Step 3: Install Xprofile Mod Apk File

      A instruction on how to install the xprofile mod apk file on your device by tapping on it and following the prompts.

      Step 4: Launch Xprofile Mod Apk and Enjoy

      A instruction on how to launch the xprofile mod apk app and enjoy its premium features.

      Conclusion

      A summary of the main points of the article and a call to action for the readers to try out xprofile mod apk.

      Frequently Asked Questions (FAQs)

        -
      • What is the difference between xprofile and xprofile mod apk?
      • -
      • Is xprofile mod apk legal and safe to use?
      • -
      • Do I need to pay for xprofile mod apk?
      • -
      • Can I use xprofile mod apk for multiple Instagram accounts?
      • -
      • How can I update xprofile mod apk?
      • -
      - Article:

      Xprofile Instagram Mod Apk: What Is It and How to Use It?

      -

      If you are an Instagram user who wants to grow your account, improve your content, and understand your audience better, you might have heard of xprofile. Xprofile is a popular app that allows you to analyze your Instagram profile and get insights into your followers, engagement, reach, stories, and more. But did you know that there is a modified version of xprofile that offers even more features and benefits? It's called xprofile mod apk, and in this article, we will tell you everything you need to know about it. We will explain what xprofile mod apk is, how it differs from the original app, how to download and install it, and how to use it to boost your Instagram performance. Let's get started!

      -

      Introduction

      -

      Instagram is one of the most popular social media platforms in the world, with over 1 billion monthly active users. It is a great place to share your photos and videos, connect with your friends and family, and discover new trends and interests. However, if you want to take your Instagram game to the next level, you need more than just posting and liking. You need to understand your Instagram profile and how it performs. You need to know who your followers are, what they like, when they are online, how they interact with your content, and more. You need to analyze your Instagram profile and get insights into your data. That's where xprofile comes in.

      -

      What is Xprofile?

      -

      Xprofile is an app that helps you analyze your Instagram profile and get insights into your data. It is a powerful tool that collects and displays information about your Instagram account, such as followers, engagement, reach, stories, and more. With xprofile, you can:

      -
      • See who follows you and who doesn't.
      • See who views your stories and who skips them.
      • See who likes and comments on your posts and who ignores them.
      • See who blocks you and who unblocks you.
      • See who mentions you and who tags you.
      • See the demographics of your followers, such as age, gender, location, language, and more.
      • See the best time to post based on your followers' activity.
      • See the best hashtags to use based on your niche and audience.
      • See the best content to create based on your followers' preferences.
      • See how your posts perform over time and compare them with each other.
      • See how your stories perform over time and compare them with each other.
      • See how your account grows over time and compare it with other accounts.
      • And much more!

      Xprofile is a must-have app for anyone who wants to grow their Instagram account, improve their content, and understand their audience better. It is easy to use, fast, and reliable. However, there is one catch: xprofile is not free. You can download the app for free from the Google Play Store or the App Store, but you will have to pay for some features or subscribe to a monthly or yearly plan. For example, you can only generate one report per day for free, but if you want more reports, you will have to pay $0.99 per report or $4.99 per month or $29.99 per year. You will also have to pay for some advanced filters, custom themes, ad removals, and more. But what if we told you that there is a way to get all these premium features for free? That's right: there is a modified version of xprofile that unlocks all these features without any cost. It's called xprofile mod apk.

      -

      What is Xprofile Mod Apk?

      -

      Xprofile mod apk is a modified version of xprofile that unlocks all the premium features that are otherwise paid or limited in the original app. With xprofile mod apk, you can:

      -
      • Generate unlimited reports without any cost or limit.
      • Use advanced filters to sort and analyze your data in different ways.
      • Customize the theme of the app according to your preference.
      • Remove all the ads from the app for a smooth experience.
      • And much more!

      Xprofile mod apk is a great way to enjoy all the benefits of xprofile without spending any money or compromising any quality. It is a perfect solution for anyone who wants to analyze their Instagram profile and get insights into their data without any hassle or restriction. But how does xprofile mod apk differ from the original app? How does it unlock all these premium features? And most importantly: how does it ensure that it is safe and secure to use? Let's find out.

      -

      How Xprofile Mod Apk Unlocks Premium Features

      -

      Xprofile mod apk unlocks premium features by modifying the code of the original app and bypassing its security checks. It does not require root access or any other special permission to work. It simply replaces the original app's code with its own code that enables all the premium features. It does not interfere with the original app's functionality or data. It simply enhances it and makes it more accessible and enjoyable for the users.

      -

      How Xprofile Mod Apk Is Safe and Secure

      -

      Xprofile mod apk is safe and secure to use, as it does not require root access, does not collect personal data, does not violate Instagram's terms of service, and does not contain malware or viruses. It is a trusted and verified app that has been tested and scanned by many users and antivirus programs. It does not pose any risk to your device or your Instagram account. However, you should always be careful when downloading and installing any third-party app, as some sources may be malicious or fraudulent. You should always download xprofile mod apk from a reliable source, such as the one we will provide in the next section. You should also check the permissions and reviews of the app before installing it. You should also backup your data and update your device regularly to avoid any potential issues.
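
      One practical way to act on that advice is to compare the checksum of the file you actually downloaded against a checksum published by the site you got it from, when one is provided. The Python sketch below uses a placeholder expected SHA-256 value; neither the app nor this article publishes an official one, so substitute whatever the download page lists.

```python
# Sketch: verify a downloaded APK against a SHA-256 checksum published by the source.
# The expected hash below is a placeholder; use the value the download page provides.
import hashlib

EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"  # placeholder

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of("xprofile_mod.apk")
print("OK: checksum matches" if actual == EXPECTED_SHA256 else f"Mismatch! got {actual}")
```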

      -

      How to Download and Install Xprofile Mod Apk?

      -

      Now that you know what xprofile mod apk is, how it differs from the original app, and how it is safe and secure to use, you might be wondering how to download and install it on your Android device. Well, don't worry: we have got you covered. Here is a step-by-step guide on how to download and install xprofile mod apk on your Android device:

      -

      Step 1: Enable Unknown Sources

      -

      The first step is to enable unknown sources on your device settings. This will allow you to install apps that are not from the Google Play Store or the App Store. To do this, follow these steps:

      -
      • Go to your device settings and tap on security or privacy.
      • Find the option that says unknown sources or install unknown apps and toggle it on.
      • A warning message may appear, but ignore it and tap on OK or Allow.

      You have now enabled unknown sources on your device settings. You can proceed to the next step.

      -

      Step 2: Download Xprofile Mod Apk File

      -

      The next step is to download the xprofile mod apk file from a reliable source. You can use the link below to download the latest version of xprofile mod apk:

      -

      Xprofile Mod Apk Download Link

      -

      This link will take you to a secure and verified website where you can download the xprofile mod apk file without any hassle or risk. The file size is about 20 MB, so it should not take long to download. Once the download is complete, you can proceed to the next step.

      -

      Step 3: Install Xprofile Mod Apk File

      -

      The final step is to install the xprofile mod apk file on your device. To do this, follow these steps:

      -
      • Go to your device file manager and find the xprofile mod apk file that you downloaded in the previous step.
      • Tap on the file and a pop-up window will appear asking you to install the app.
      • Tap on Install and wait for the installation process to finish.
      • A message will appear saying that the app has been installed successfully.

      You have now installed xprofile mod apk on your device. You can proceed to the last step.

      -

      Step 4: Launch Xprofile Mod Apk and Enjoy

      -

      The final step is to launch the xprofile mod apk app and enjoy its premium features. To do this, follow these steps:

      -
      • Go to your device app drawer and find the xprofile mod apk app icon.
      • Tap on the icon and the app will open.
      • You may be asked to log in with your Instagram account or create a new account.
      • Choose whichever option you prefer and follow the instructions on the screen.
      • You will then see the main interface of the app, where you can access all its features and functions.

      You have now launched xprofile mod apk and are ready to use it. You can start analyzing your Instagram profile and get insights into your data. You can also customize the app according to your preference and enjoy its premium features without any cost or limit.

      -

      Conclusion

      -

      Xprofile mod apk is a modified version of xprofile that unlocks all the premium features that are otherwise paid or limited in the original app. It is a powerful tool that helps you analyze your Instagram profile and get insights into your data. It is easy to use, fast, reliable, safe, and secure. It is a must-have app for anyone who wants to grow their Instagram account, improve their content, and understand their audience better. It is a perfect solution for anyone who wants to enjoy all the benefits of xprofile without spending any money or compromising any quality. In this article, we have explained what xprofile mod apk is, how it differs from the original app, how to download and install it, and how to use it to boost your Instagram performance. We hope you found this article helpful and informative. If you have any questions or feedback, feel free to leave a comment below. And if you liked this article, don't forget to share it with your friends and followers. Thank you for reading and happy Instagramming!

      -

      Frequently Asked Questions (FAQs)

      -

      Here are some of the most frequently asked questions about xprofile mod apk:

      1. What is the difference between xprofile and xprofile mod apk?
         Xprofile is the original app that helps you analyze your Instagram profile and get insights into your data. Xprofile mod apk is a modified version of xprofile that unlocks all the premium features that are otherwise paid or limited in the original app.
      2. Is xprofile mod apk legal and safe to use?
         Xprofile mod apk is legal and safe to use, as it does not require root access, does not collect personal data, does not violate Instagram's terms of service, and does not contain malware or viruses. However, you should always be careful when downloading and installing any third-party app, as some sources may be malicious or fraudulent.
      3. Do I need to pay for xprofile mod apk?
         No, you do not need to pay for xprofile mod apk. It is a free app that unlocks all the premium features of xprofile without any cost or limit.
      4. Can I use xprofile mod apk for multiple Instagram accounts?
         Yes, you can use xprofile mod apk for multiple Instagram accounts. You can switch between different accounts within the app and analyze their data separately.
      5. How can I update xprofile mod apk?
         You can update xprofile mod apk by downloading and installing the latest version of the app from the same source that you used before. You can also check for updates within the app settings. A short sideloading sketch follows this list.
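      If you update by sideloading, adb can reinstall the newer APK over the old one while keeping the app's data. The snippet below is a minimal Python sketch under stated assumptions: adb is installed, and "xprofile-mod-latest.apk" is a hypothetical name for the file you downloaded.

```python
# Minimal sketch: update a sideloaded app by reinstalling a newer APK over it.
import subprocess

APK_PATH = "xprofile-mod-latest.apk"  # hypothetical downloaded file name

# The -r flag reinstalls the app while keeping its existing data and settings.
subprocess.run(["adb", "install", "-r", APK_PATH], check=True)
```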

      \ No newline at end of file diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/models/submodules/efficientnet_repo/geffnet/activations/activations_jit.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/models/submodules/efficientnet_repo/geffnet/activations/activations_jit.py deleted file mode 100644 index 7176b05e779787528a47f20d55d64d4a0f219360..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/models/submodules/efficientnet_repo/geffnet/activations/activations_jit.py +++ /dev/null @@ -1,79 +0,0 @@ -""" Activations (jit) - -A collection of jit-scripted activations fn and modules with a common interface so that they can -easily be swapped. All have an `inplace` arg even if not used. - -All jit scripted activations are lacking in-place variations on purpose, scripted kernel fusion does not -currently work across in-place op boundaries, thus performance is equal to or less than the non-scripted -versions if they contain in-place ops. - -Copyright 2020 Ross Wightman -""" - -import torch -from torch import nn as nn -from torch.nn import functional as F - -__all__ = ['swish_jit', 'SwishJit', 'mish_jit', 'MishJit', - 'hard_sigmoid_jit', 'HardSigmoidJit', 'hard_swish_jit', 'HardSwishJit'] - - -@torch.jit.script -def swish_jit(x, inplace: bool = False): - """Swish - Described originally as SiLU (https://arxiv.org/abs/1702.03118v3) - and also as Swish (https://arxiv.org/abs/1710.05941). - - TODO Rename to SiLU with addition to PyTorch - """ - return x.mul(x.sigmoid()) - - -@torch.jit.script -def mish_jit(x, _inplace: bool = False): - """Mish: A Self Regularized Non-Monotonic Neural Activation Function - https://arxiv.org/abs/1908.08681 - """ - return x.mul(F.softplus(x).tanh()) - - -class SwishJit(nn.Module): - def __init__(self, inplace: bool = False): - super(SwishJit, self).__init__() - - def forward(self, x): - return swish_jit(x) - - -class MishJit(nn.Module): - def __init__(self, inplace: bool = False): - super(MishJit, self).__init__() - - def forward(self, x): - return mish_jit(x) - - -@torch.jit.script -def hard_sigmoid_jit(x, inplace: bool = False): - # return F.relu6(x + 3.) / 6. - return (x + 3).clamp(min=0, max=6).div(6.) # clamp seems ever so slightly faster? - - -class HardSigmoidJit(nn.Module): - def __init__(self, inplace: bool = False): - super(HardSigmoidJit, self).__init__() - - def forward(self, x): - return hard_sigmoid_jit(x) - - -@torch.jit.script -def hard_swish_jit(x, inplace: bool = False): - # return x * (F.relu6(x + 3.) / 6) - return x * (x + 3).clamp(min=0, max=6).div(6.) # clamp seems ever so slightly faster? - - -class HardSwishJit(nn.Module): - def __init__(self, inplace: bool = False): - super(HardSwishJit, self).__init__() - - def forward(self, x): - return hard_swish_jit(x) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/parallel/scatter_gather.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/parallel/scatter_gather.py deleted file mode 100644 index 900ff88566f8f14830590459dc4fd16d4b382e47..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/parallel/scatter_gather.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -from torch.nn.parallel._functions import Scatter as OrigScatter - -from ._functions import Scatter -from .data_container import DataContainer - - -def scatter(inputs, target_gpus, dim=0): - """Scatter inputs to target gpus. - - The only difference from original :func:`scatter` is to add support for - :type:`~mmcv.parallel.DataContainer`. - """ - - def scatter_map(obj): - if isinstance(obj, torch.Tensor): - if target_gpus != [-1]: - return OrigScatter.apply(target_gpus, None, dim, obj) - else: - # for CPU inference we use self-implemented scatter - return Scatter.forward(target_gpus, obj) - if isinstance(obj, DataContainer): - if obj.cpu_only: - return obj.data - else: - return Scatter.forward(target_gpus, obj.data) - if isinstance(obj, tuple) and len(obj) > 0: - return list(zip(*map(scatter_map, obj))) - if isinstance(obj, list) and len(obj) > 0: - out = list(map(list, zip(*map(scatter_map, obj)))) - return out - if isinstance(obj, dict) and len(obj) > 0: - out = list(map(type(obj), zip(*map(scatter_map, obj.items())))) - return out - return [obj for targets in target_gpus] - - # After scatter_map is called, a scatter_map cell will exist. This cell - # has a reference to the actual function scatter_map, which has references - # to a closure that has a reference to the scatter_map cell (because the - # fn is recursive). To avoid this reference cycle, we set the function to - # None, clearing the cell - try: - return scatter_map(inputs) - finally: - scatter_map = None - - -def scatter_kwargs(inputs, kwargs, target_gpus, dim=0): - """Scatter with support for kwargs dictionary.""" - inputs = scatter(inputs, target_gpus, dim) if inputs else [] - kwargs = scatter(kwargs, target_gpus, dim) if kwargs else [] - if len(inputs) < len(kwargs): - inputs.extend([() for _ in range(len(kwargs) - len(inputs))]) - elif len(kwargs) < len(inputs): - kwargs.extend([{} for _ in range(len(inputs) - len(kwargs))]) - inputs = tuple(inputs) - kwargs = tuple(kwargs) - return inputs, kwargs diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/mobile/android/app/src/main/java/org/tensorflow/lite/examples/classification/customview/ResultsView.java b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/mobile/android/app/src/main/java/org/tensorflow/lite/examples/classification/customview/ResultsView.java deleted file mode 100644 index d055eb5f161a57fc439716efe6d49b7e45ef3fc7..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/mobile/android/app/src/main/java/org/tensorflow/lite/examples/classification/customview/ResultsView.java +++ /dev/null @@ -1,23 +0,0 @@ -/* Copyright 2019 The TensorFlow Authors. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-==============================================================================*/ - -package org.tensorflow.lite.examples.classification.customview; - -import java.util.List; -import org.tensorflow.lite.examples.classification.tflite.Classifier.Recognition; - -public interface ResultsView { - public void setResults(final List results); -} diff --git a/spaces/cozyanduofen/bingo/src/lib/hooks/chat-history.ts b/spaces/cozyanduofen/bingo/src/lib/hooks/chat-history.ts deleted file mode 100644 index c6fbf3fecfa86fe553f56acc8253236b8f22a775..0000000000000000000000000000000000000000 --- a/spaces/cozyanduofen/bingo/src/lib/hooks/chat-history.ts +++ /dev/null @@ -1,62 +0,0 @@ -import { zip } from 'lodash-es' -import { ChatMessageModel, BotId } from '@/lib/bots/bing/types' -import { Storage } from '../storage' - -/** - * conversations:$botId => Conversation[] - * conversation:$botId:$cid:messages => ChatMessageModel[] - */ - -interface Conversation { - id: string - createdAt: number -} - -type ConversationWithMessages = Conversation & { messages: ChatMessageModel[] } - -async function loadHistoryConversations(botId: BotId): Promise { - const key = `conversations:${botId}` - const { [key]: value } = await Storage.get(key) - return value || [] -} - -async function deleteHistoryConversation(botId: BotId, cid: string) { - const conversations = await loadHistoryConversations(botId) - const newConversations = conversations.filter((c) => c.id !== cid) - await Storage.set({ [`conversations:${botId}`]: newConversations }) -} - -async function loadConversationMessages(botId: BotId, cid: string): Promise { - const key = `conversation:${botId}:${cid}:messages` - const { [key]: value } = await Storage.get(key) - return value || [] -} - -export async function setConversationMessages(botId: BotId, cid: string, messages: ChatMessageModel[]) { - const conversations = await loadHistoryConversations(botId) - if (!conversations.some((c) => c.id === cid)) { - conversations.unshift({ id: cid, createdAt: Date.now() }) - await Storage.set({ [`conversations:${botId}`]: conversations }) - } - const key = `conversation:${botId}:${cid}:messages` - await Storage.set({ [key]: messages }) -} - -export async function loadHistoryMessages(botId: BotId): Promise { - const conversations = await loadHistoryConversations(botId) - const messagesList = await Promise.all(conversations.map((c) => loadConversationMessages(botId, c.id))) - return zip(conversations, messagesList).map(([c, messages]) => ({ - id: c!.id, - createdAt: c!.createdAt, - messages: messages!, - })) -} - -export async function deleteHistoryMessage(botId: BotId, conversationId: string, messageId: string) { - const messages = await loadConversationMessages(botId, conversationId) - const newMessages = messages.filter((m) => m.id !== messageId) - await setConversationMessages(botId, conversationId, newMessages) - if (!newMessages.length) { - await deleteHistoryConversation(botId, conversationId) - } -} diff --git a/spaces/cymic/Talking_Head_Anime_3/tha3/mocap/ifacialmocap_v2.py b/spaces/cymic/Talking_Head_Anime_3/tha3/mocap/ifacialmocap_v2.py deleted file mode 100644 index dae46eaaf72fa22e091998451ab27e1e19d61773..0000000000000000000000000000000000000000 --- a/spaces/cymic/Talking_Head_Anime_3/tha3/mocap/ifacialmocap_v2.py +++ /dev/null @@ -1,89 +0,0 @@ -import math - -from tha3.mocap.ifacialmocap_constants import BLENDSHAPE_NAMES, HEAD_BONE_X, HEAD_BONE_Y, HEAD_BONE_Z, \ - RIGHT_EYE_BONE_X, RIGHT_EYE_BONE_Y, RIGHT_EYE_BONE_Z, LEFT_EYE_BONE_X, LEFT_EYE_BONE_Y, 
LEFT_EYE_BONE_Z, \ - HEAD_BONE_QUAT, LEFT_EYE_BONE_QUAT, RIGHT_EYE_BONE_QUAT - -IFACIALMOCAP_PORT = 49983 -IFACIALMOCAP_START_STRING = "iFacialMocap_sahuasouryya9218sauhuiayeta91555dy3719|sendDataVersion=v2".encode('utf-8') - - -def parse_ifacialmocap_v2_pose(ifacialmocap_output): - output = {} - parts = ifacialmocap_output.split("|") - for part in parts: - part = part.strip() - if len(part) == 0: - continue - if "&" in part: - components = part.split("&") - assert len(components) == 2 - key = components[0] - value = float(components[1]) / 100.0 - if key.endswith("_L"): - key = key[:-2] + "Left" - elif key.endswith("_R"): - key = key[:-2] + "Right" - if key in BLENDSHAPE_NAMES: - output[key] = value - elif part.startswith("=head#"): - components = part[len("=head#"):].split(",") - assert len(components) == 6 - output[HEAD_BONE_X] = float(components[0]) * math.pi / 180 - output[HEAD_BONE_Y] = float(components[1]) * math.pi / 180 - output[HEAD_BONE_Z] = float(components[2]) * math.pi / 180 - elif part.startswith("rightEye#"): - components = part[len("rightEye#"):].split(",") - output[RIGHT_EYE_BONE_X] = float(components[0]) * math.pi / 180 - output[RIGHT_EYE_BONE_Y] = float(components[1]) * math.pi / 180 - output[RIGHT_EYE_BONE_Z] = float(components[2]) * math.pi / 180 - elif part.startswith("leftEye#"): - components = part[len("leftEye#"):].split(",") - output[LEFT_EYE_BONE_X] = float(components[0]) * math.pi / 180 - output[LEFT_EYE_BONE_Y] = float(components[1]) * math.pi / 180 - output[LEFT_EYE_BONE_Z] = float(components[2]) * math.pi / 180 - output[HEAD_BONE_QUAT] = [0.0, 0.0, 0.0, 1.0] - output[LEFT_EYE_BONE_QUAT] = [0.0, 0.0, 0.0, 1.0] - output[RIGHT_EYE_BONE_QUAT] = [0.0, 0.0, 0.0, 1.0] - return output - - -def parse_ifacialmocap_v1_pose(ifacialmocap_output): - output = {} - parts = ifacialmocap_output.split("|") - for part in parts: - part = part.strip() - if len(part) == 0: - continue - if part.startswith("=head#"): - components = part[len("=head#"):].split(",") - assert len(components) == 6 - output[HEAD_BONE_X] = float(components[0]) * math.pi / 180 - output[HEAD_BONE_Y] = float(components[1]) * math.pi / 180 - output[HEAD_BONE_Z] = float(components[2]) * math.pi / 180 - elif part.startswith("rightEye#"): - components = part[len("rightEye#"):].split(",") - output[RIGHT_EYE_BONE_X] = float(components[0]) * math.pi / 180 - output[RIGHT_EYE_BONE_Y] = float(components[1]) * math.pi / 180 - output[RIGHT_EYE_BONE_Z] = float(components[2]) * math.pi / 180 - elif part.startswith("leftEye#"): - components = part[len("leftEye#"):].split(",") - output[LEFT_EYE_BONE_X] = float(components[0]) * math.pi / 180 - output[LEFT_EYE_BONE_Y] = float(components[1]) * math.pi / 180 - output[LEFT_EYE_BONE_Z] = float(components[2]) * math.pi / 180 - else: - components = part.split("-") - assert len(components) == 2 - key = components[0] - value = float(components[1]) / 100.0 - if key.endswith("_L"): - key = key[:-2] + "Left" - elif key.endswith("_R"): - key = key[:-2] + "Right" - if key in BLENDSHAPE_NAMES: - output[key] = value - output[HEAD_BONE_QUAT] = [0.0, 0.0, 0.0, 1.0] - output[LEFT_EYE_BONE_QUAT] = [0.0, 0.0, 0.0, 1.0] - output[RIGHT_EYE_BONE_QUAT] = [0.0, 0.0, 0.0, 1.0] - return output - diff --git a/spaces/d8aai/simple-paper-qa/app.py b/spaces/d8aai/simple-paper-qa/app.py deleted file mode 100644 index 865457ebc22c39540e89b89bab6fb247aae10c77..0000000000000000000000000000000000000000 --- a/spaces/d8aai/simple-paper-qa/app.py +++ /dev/null @@ -1,218 +0,0 @@ -import os -from typing import Any - 
-import gradio as gr -import openai -import pandas as pd -from IPython.display import Markdown, display -from langchain.document_loaders import PyPDFLoader -from langchain.embeddings import OpenAIEmbeddings -from langchain.indexes import VectorstoreIndexCreator -from langchain.text_splitter import CharacterTextSplitter -from langchain.text_splitter import RecursiveCharacterTextSplitter -from langchain.llms import OpenAI -from langchain.vectorstores import DocArrayInMemorySearch -from uuid import uuid4 - -css_style = """ -.gradio-container { - font-family: "IBM Plex Mono"; -} -""" - - -class myClass: - def __init__(self) -> None: - self.openapi = "" - self.valid_key = False - self.docs_ready = False - self.status = "⚠️Waiting for documents and key⚠️" - self.uuid = uuid4() - pass - - def check_status(self): - if self.docs_ready and self.valid_key: - out = "✨Ready✨" - elif self.docs_ready: - out = "⚠️Waiting for key⚠️" - elif self.valid_key: - out = "⚠️Waiting for documents⚠️" - else: - out = "⚠️Waiting for documents and key⚠️" - - self.status = out - - def validate_key(self, myin): - assert isinstance(myin, str) - self.valid_key = True - self.openai_api_key = myin.strip() - self.embedding = OpenAIEmbeddings(openai_api_key=self.openai_api_key) - self.llm = OpenAI(openai_api_key=self.openai_api_key) - - self.check_status() - return [self.status] - - def request_pathname(self, files, data): - if files is None: - self.docs_ready = False - self.check_status() - return ( - pd.DataFrame(data, columns=["filepath", "citation string", "key"]), - self.status, - ) - for file in files: - # make sure we're not duplicating things in the dataset - if file.name in [x[0] for x in data]: - continue - data.append([file.name, None, None]) - - mydataset = pd.DataFrame(data, columns=["filepath", "citation string", "key"]) - validation_button = self.validate_dataset(mydataset) - - return mydataset, validation_button - - def validate_dataset(self, dataset): - self.docs_ready = dataset.iloc[-1, 0] != "" - self.dataset = dataset - - self.check_status() - - if self.status == "✨Ready✨": - self.get_index() - - return self.status - - def get_index(self): - if self.docs_ready and self.valid_key: - # os.environ["OPENAI_API_KEY"] = self.openai_api_key - - # myfile = "Angela Merkel - Wikipedia.pdf" - # loader = PyPDFLoader(file_path=myfile) - loaders = [PyPDFLoader(f) for f in self.dataset["filepath"]] - - self.index = VectorstoreIndexCreator( - vectorstore_cls=DocArrayInMemorySearch, - embedding=self.embedding, - text_splitter = RecursiveCharacterTextSplitter( - # Set a really small chunk size, just to show. - chunk_size = 1000, - chunk_overlap = 20, - length_function = len, - separators="." 
- ) - - ).from_loaders(loaders=loaders) - - # del os.environ["OPENAI_API_KEY"] - - pass - - def do_ask(self, question): - # os.environ["OPENAI_API_KEY"] = self.openai_api_key - # openai.api_key = self.openai_api_key - - if self.status == "✨Ready✨": - # os.environ["OPENAI_API_KEY"] = self.openai_api_key - - response = self.index.query(question=question, llm=self.llm) - # del os.environ["OPENAI_API_KEY"] - yield response - pass - - -def validate_key(myInstance: myClass, openai_api_key): - if myInstance is None: - myInstance = myClass() - - out = myInstance.validate_key(openai_api_key) - return myInstance, *out - - -def request_pathname(myInstance: myClass, files, data): - if myInstance is None: - myInstance = myClass() - out = myInstance.request_pathname(files, data) - return myInstance, *out - - -def do_ask(myInstance: myClass, question): - out = myInstance.do_ask(question) - return myInstance, *out - - -with gr.Blocks(css=css_style) as demo: - myInstance = gr.State() - openai_api_key = gr.State("") - docs = gr.State() - data = gr.State([]) - index = gr.State() - - gr.Markdown( - """ - # Document Question and Answer - *By D8a.ai* - Idea based on https://huggingface.co/spaces/whitead/paper-qa - Significant advances in langchain have made it possible to simplify the code. - This tool allows you to ask questions of your uploaded text, PDF documents. - It uses OpenAI's GPT models, so you need to enter your API key below. This - tool is under active development and currently uses a lot of tokens - up to 10,000 - for a single query. This is $0.10-0.20 per query, so please be careful! - * [langchain](https://github.com/hwchase17/langchain) is the main library this tool utilizes. - 1. Enter API Key ([What is that?](https://platform.openai.com/account/api-keys)) - 2. Upload your documents - 3. 
Ask questions - """ - ) - - openai_api_key = gr.Textbox( - label="OpenAI API Key", placeholder="sk-...", type="password" - ) - with gr.Tab("File upload"): - uploaded_files = gr.File( - label="Upload your pdf Dokument", file_count="multiple" - ) - - with gr.Accordion("See Docs:", open=False): - dataset = gr.Dataframe( - headers=["filepath", "citation string", "key"], - datatype=["str", "str", "str"], - col_count=(3, "fixed"), - interactive=False, - label="Documents and Citations", - overflow_row_behaviour="paginate", - max_rows=5, - ) - - buildb = gr.Textbox( - "⚠️Waiting for documents and key...", - label="Status", - interactive=False, - show_label=True, - max_lines=1, - ) - - query = gr.Textbox(placeholder="Enter your question here...", label="Question") - ask = gr.Button("Ask Question") - answer = gr.Markdown(label="Answer") - - openai_api_key.change( - validate_key, inputs=[myInstance, openai_api_key], outputs=[myInstance, buildb] - ) - - uploaded_files.change( - request_pathname, - inputs=[myInstance, uploaded_files, data], - outputs=[myInstance, dataset, buildb], - ) - - ask.click( - do_ask, - inputs=[myInstance, query], - outputs=[myInstance, answer], - ) - - - - -demo.queue(concurrency_count=20) -demo.launch(show_error=True) \ No newline at end of file diff --git a/spaces/dahaoGPT/THUDM-chatglm2-6b/app.py b/spaces/dahaoGPT/THUDM-chatglm2-6b/app.py deleted file mode 100644 index 178500883f421fa82a74ed826246337066c7194a..0000000000000000000000000000000000000000 --- a/spaces/dahaoGPT/THUDM-chatglm2-6b/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/THUDM/chatglm2-6b").launch() \ No newline at end of file diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/charset_normalizer/constant.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/charset_normalizer/constant.py deleted file mode 100644 index 3188108d6ba511bf92edd4d5ee9ca8b41311547b..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/charset_normalizer/constant.py +++ /dev/null @@ -1,495 +0,0 @@ -from codecs import BOM_UTF8, BOM_UTF16_BE, BOM_UTF16_LE, BOM_UTF32_BE, BOM_UTF32_LE -from encodings.aliases import aliases -from re import IGNORECASE, compile as re_compile -from typing import Dict, List, Set, Union - -from .assets import FREQUENCIES - -# Contain for each eligible encoding a list of/item bytes SIG/BOM -ENCODING_MARKS: Dict[str, Union[bytes, List[bytes]]] = { - "utf_8": BOM_UTF8, - "utf_7": [ - b"\x2b\x2f\x76\x38", - b"\x2b\x2f\x76\x39", - b"\x2b\x2f\x76\x2b", - b"\x2b\x2f\x76\x2f", - b"\x2b\x2f\x76\x38\x2d", - ], - "gb18030": b"\x84\x31\x95\x33", - "utf_32": [BOM_UTF32_BE, BOM_UTF32_LE], - "utf_16": [BOM_UTF16_BE, BOM_UTF16_LE], -} - -TOO_SMALL_SEQUENCE: int = 32 -TOO_BIG_SEQUENCE: int = int(10e6) - -UTF8_MAXIMAL_ALLOCATION: int = 1112064 - -UNICODE_RANGES_COMBINED: Dict[str, range] = { - "Control character": range(31 + 1), - "Basic Latin": range(32, 127 + 1), - "Latin-1 Supplement": range(128, 255 + 1), - "Latin Extended-A": range(256, 383 + 1), - "Latin Extended-B": range(384, 591 + 1), - "IPA Extensions": range(592, 687 + 1), - "Spacing Modifier Letters": range(688, 767 + 1), - "Combining Diacritical Marks": range(768, 879 + 1), - "Greek and Coptic": range(880, 1023 + 1), - "Cyrillic": range(1024, 1279 + 1), - "Cyrillic Supplement": range(1280, 1327 + 1), - "Armenian": range(1328, 1423 + 1), - "Hebrew": range(1424, 1535 + 1), - "Arabic": 
range(1536, 1791 + 1), - "Syriac": range(1792, 1871 + 1), - "Arabic Supplement": range(1872, 1919 + 1), - "Thaana": range(1920, 1983 + 1), - "NKo": range(1984, 2047 + 1), - "Samaritan": range(2048, 2111 + 1), - "Mandaic": range(2112, 2143 + 1), - "Syriac Supplement": range(2144, 2159 + 1), - "Arabic Extended-A": range(2208, 2303 + 1), - "Devanagari": range(2304, 2431 + 1), - "Bengali": range(2432, 2559 + 1), - "Gurmukhi": range(2560, 2687 + 1), - "Gujarati": range(2688, 2815 + 1), - "Oriya": range(2816, 2943 + 1), - "Tamil": range(2944, 3071 + 1), - "Telugu": range(3072, 3199 + 1), - "Kannada": range(3200, 3327 + 1), - "Malayalam": range(3328, 3455 + 1), - "Sinhala": range(3456, 3583 + 1), - "Thai": range(3584, 3711 + 1), - "Lao": range(3712, 3839 + 1), - "Tibetan": range(3840, 4095 + 1), - "Myanmar": range(4096, 4255 + 1), - "Georgian": range(4256, 4351 + 1), - "Hangul Jamo": range(4352, 4607 + 1), - "Ethiopic": range(4608, 4991 + 1), - "Ethiopic Supplement": range(4992, 5023 + 1), - "Cherokee": range(5024, 5119 + 1), - "Unified Canadian Aboriginal Syllabics": range(5120, 5759 + 1), - "Ogham": range(5760, 5791 + 1), - "Runic": range(5792, 5887 + 1), - "Tagalog": range(5888, 5919 + 1), - "Hanunoo": range(5920, 5951 + 1), - "Buhid": range(5952, 5983 + 1), - "Tagbanwa": range(5984, 6015 + 1), - "Khmer": range(6016, 6143 + 1), - "Mongolian": range(6144, 6319 + 1), - "Unified Canadian Aboriginal Syllabics Extended": range(6320, 6399 + 1), - "Limbu": range(6400, 6479 + 1), - "Tai Le": range(6480, 6527 + 1), - "New Tai Lue": range(6528, 6623 + 1), - "Khmer Symbols": range(6624, 6655 + 1), - "Buginese": range(6656, 6687 + 1), - "Tai Tham": range(6688, 6831 + 1), - "Combining Diacritical Marks Extended": range(6832, 6911 + 1), - "Balinese": range(6912, 7039 + 1), - "Sundanese": range(7040, 7103 + 1), - "Batak": range(7104, 7167 + 1), - "Lepcha": range(7168, 7247 + 1), - "Ol Chiki": range(7248, 7295 + 1), - "Cyrillic Extended C": range(7296, 7311 + 1), - "Sundanese Supplement": range(7360, 7375 + 1), - "Vedic Extensions": range(7376, 7423 + 1), - "Phonetic Extensions": range(7424, 7551 + 1), - "Phonetic Extensions Supplement": range(7552, 7615 + 1), - "Combining Diacritical Marks Supplement": range(7616, 7679 + 1), - "Latin Extended Additional": range(7680, 7935 + 1), - "Greek Extended": range(7936, 8191 + 1), - "General Punctuation": range(8192, 8303 + 1), - "Superscripts and Subscripts": range(8304, 8351 + 1), - "Currency Symbols": range(8352, 8399 + 1), - "Combining Diacritical Marks for Symbols": range(8400, 8447 + 1), - "Letterlike Symbols": range(8448, 8527 + 1), - "Number Forms": range(8528, 8591 + 1), - "Arrows": range(8592, 8703 + 1), - "Mathematical Operators": range(8704, 8959 + 1), - "Miscellaneous Technical": range(8960, 9215 + 1), - "Control Pictures": range(9216, 9279 + 1), - "Optical Character Recognition": range(9280, 9311 + 1), - "Enclosed Alphanumerics": range(9312, 9471 + 1), - "Box Drawing": range(9472, 9599 + 1), - "Block Elements": range(9600, 9631 + 1), - "Geometric Shapes": range(9632, 9727 + 1), - "Miscellaneous Symbols": range(9728, 9983 + 1), - "Dingbats": range(9984, 10175 + 1), - "Miscellaneous Mathematical Symbols-A": range(10176, 10223 + 1), - "Supplemental Arrows-A": range(10224, 10239 + 1), - "Braille Patterns": range(10240, 10495 + 1), - "Supplemental Arrows-B": range(10496, 10623 + 1), - "Miscellaneous Mathematical Symbols-B": range(10624, 10751 + 1), - "Supplemental Mathematical Operators": range(10752, 11007 + 1), - "Miscellaneous Symbols and Arrows": 
range(11008, 11263 + 1), - "Glagolitic": range(11264, 11359 + 1), - "Latin Extended-C": range(11360, 11391 + 1), - "Coptic": range(11392, 11519 + 1), - "Georgian Supplement": range(11520, 11567 + 1), - "Tifinagh": range(11568, 11647 + 1), - "Ethiopic Extended": range(11648, 11743 + 1), - "Cyrillic Extended-A": range(11744, 11775 + 1), - "Supplemental Punctuation": range(11776, 11903 + 1), - "CJK Radicals Supplement": range(11904, 12031 + 1), - "Kangxi Radicals": range(12032, 12255 + 1), - "Ideographic Description Characters": range(12272, 12287 + 1), - "CJK Symbols and Punctuation": range(12288, 12351 + 1), - "Hiragana": range(12352, 12447 + 1), - "Katakana": range(12448, 12543 + 1), - "Bopomofo": range(12544, 12591 + 1), - "Hangul Compatibility Jamo": range(12592, 12687 + 1), - "Kanbun": range(12688, 12703 + 1), - "Bopomofo Extended": range(12704, 12735 + 1), - "CJK Strokes": range(12736, 12783 + 1), - "Katakana Phonetic Extensions": range(12784, 12799 + 1), - "Enclosed CJK Letters and Months": range(12800, 13055 + 1), - "CJK Compatibility": range(13056, 13311 + 1), - "CJK Unified Ideographs Extension A": range(13312, 19903 + 1), - "Yijing Hexagram Symbols": range(19904, 19967 + 1), - "CJK Unified Ideographs": range(19968, 40959 + 1), - "Yi Syllables": range(40960, 42127 + 1), - "Yi Radicals": range(42128, 42191 + 1), - "Lisu": range(42192, 42239 + 1), - "Vai": range(42240, 42559 + 1), - "Cyrillic Extended-B": range(42560, 42655 + 1), - "Bamum": range(42656, 42751 + 1), - "Modifier Tone Letters": range(42752, 42783 + 1), - "Latin Extended-D": range(42784, 43007 + 1), - "Syloti Nagri": range(43008, 43055 + 1), - "Common Indic Number Forms": range(43056, 43071 + 1), - "Phags-pa": range(43072, 43135 + 1), - "Saurashtra": range(43136, 43231 + 1), - "Devanagari Extended": range(43232, 43263 + 1), - "Kayah Li": range(43264, 43311 + 1), - "Rejang": range(43312, 43359 + 1), - "Hangul Jamo Extended-A": range(43360, 43391 + 1), - "Javanese": range(43392, 43487 + 1), - "Myanmar Extended-B": range(43488, 43519 + 1), - "Cham": range(43520, 43615 + 1), - "Myanmar Extended-A": range(43616, 43647 + 1), - "Tai Viet": range(43648, 43743 + 1), - "Meetei Mayek Extensions": range(43744, 43775 + 1), - "Ethiopic Extended-A": range(43776, 43823 + 1), - "Latin Extended-E": range(43824, 43887 + 1), - "Cherokee Supplement": range(43888, 43967 + 1), - "Meetei Mayek": range(43968, 44031 + 1), - "Hangul Syllables": range(44032, 55215 + 1), - "Hangul Jamo Extended-B": range(55216, 55295 + 1), - "High Surrogates": range(55296, 56191 + 1), - "High Private Use Surrogates": range(56192, 56319 + 1), - "Low Surrogates": range(56320, 57343 + 1), - "Private Use Area": range(57344, 63743 + 1), - "CJK Compatibility Ideographs": range(63744, 64255 + 1), - "Alphabetic Presentation Forms": range(64256, 64335 + 1), - "Arabic Presentation Forms-A": range(64336, 65023 + 1), - "Variation Selectors": range(65024, 65039 + 1), - "Vertical Forms": range(65040, 65055 + 1), - "Combining Half Marks": range(65056, 65071 + 1), - "CJK Compatibility Forms": range(65072, 65103 + 1), - "Small Form Variants": range(65104, 65135 + 1), - "Arabic Presentation Forms-B": range(65136, 65279 + 1), - "Halfwidth and Fullwidth Forms": range(65280, 65519 + 1), - "Specials": range(65520, 65535 + 1), - "Linear B Syllabary": range(65536, 65663 + 1), - "Linear B Ideograms": range(65664, 65791 + 1), - "Aegean Numbers": range(65792, 65855 + 1), - "Ancient Greek Numbers": range(65856, 65935 + 1), - "Ancient Symbols": range(65936, 65999 + 1), - "Phaistos Disc": 
range(66000, 66047 + 1), - "Lycian": range(66176, 66207 + 1), - "Carian": range(66208, 66271 + 1), - "Coptic Epact Numbers": range(66272, 66303 + 1), - "Old Italic": range(66304, 66351 + 1), - "Gothic": range(66352, 66383 + 1), - "Old Permic": range(66384, 66431 + 1), - "Ugaritic": range(66432, 66463 + 1), - "Old Persian": range(66464, 66527 + 1), - "Deseret": range(66560, 66639 + 1), - "Shavian": range(66640, 66687 + 1), - "Osmanya": range(66688, 66735 + 1), - "Osage": range(66736, 66815 + 1), - "Elbasan": range(66816, 66863 + 1), - "Caucasian Albanian": range(66864, 66927 + 1), - "Linear A": range(67072, 67455 + 1), - "Cypriot Syllabary": range(67584, 67647 + 1), - "Imperial Aramaic": range(67648, 67679 + 1), - "Palmyrene": range(67680, 67711 + 1), - "Nabataean": range(67712, 67759 + 1), - "Hatran": range(67808, 67839 + 1), - "Phoenician": range(67840, 67871 + 1), - "Lydian": range(67872, 67903 + 1), - "Meroitic Hieroglyphs": range(67968, 67999 + 1), - "Meroitic Cursive": range(68000, 68095 + 1), - "Kharoshthi": range(68096, 68191 + 1), - "Old South Arabian": range(68192, 68223 + 1), - "Old North Arabian": range(68224, 68255 + 1), - "Manichaean": range(68288, 68351 + 1), - "Avestan": range(68352, 68415 + 1), - "Inscriptional Parthian": range(68416, 68447 + 1), - "Inscriptional Pahlavi": range(68448, 68479 + 1), - "Psalter Pahlavi": range(68480, 68527 + 1), - "Old Turkic": range(68608, 68687 + 1), - "Old Hungarian": range(68736, 68863 + 1), - "Rumi Numeral Symbols": range(69216, 69247 + 1), - "Brahmi": range(69632, 69759 + 1), - "Kaithi": range(69760, 69839 + 1), - "Sora Sompeng": range(69840, 69887 + 1), - "Chakma": range(69888, 69967 + 1), - "Mahajani": range(69968, 70015 + 1), - "Sharada": range(70016, 70111 + 1), - "Sinhala Archaic Numbers": range(70112, 70143 + 1), - "Khojki": range(70144, 70223 + 1), - "Multani": range(70272, 70319 + 1), - "Khudawadi": range(70320, 70399 + 1), - "Grantha": range(70400, 70527 + 1), - "Newa": range(70656, 70783 + 1), - "Tirhuta": range(70784, 70879 + 1), - "Siddham": range(71040, 71167 + 1), - "Modi": range(71168, 71263 + 1), - "Mongolian Supplement": range(71264, 71295 + 1), - "Takri": range(71296, 71375 + 1), - "Ahom": range(71424, 71487 + 1), - "Warang Citi": range(71840, 71935 + 1), - "Zanabazar Square": range(72192, 72271 + 1), - "Soyombo": range(72272, 72367 + 1), - "Pau Cin Hau": range(72384, 72447 + 1), - "Bhaiksuki": range(72704, 72815 + 1), - "Marchen": range(72816, 72895 + 1), - "Masaram Gondi": range(72960, 73055 + 1), - "Cuneiform": range(73728, 74751 + 1), - "Cuneiform Numbers and Punctuation": range(74752, 74879 + 1), - "Early Dynastic Cuneiform": range(74880, 75087 + 1), - "Egyptian Hieroglyphs": range(77824, 78895 + 1), - "Anatolian Hieroglyphs": range(82944, 83583 + 1), - "Bamum Supplement": range(92160, 92735 + 1), - "Mro": range(92736, 92783 + 1), - "Bassa Vah": range(92880, 92927 + 1), - "Pahawh Hmong": range(92928, 93071 + 1), - "Miao": range(93952, 94111 + 1), - "Ideographic Symbols and Punctuation": range(94176, 94207 + 1), - "Tangut": range(94208, 100351 + 1), - "Tangut Components": range(100352, 101119 + 1), - "Kana Supplement": range(110592, 110847 + 1), - "Kana Extended-A": range(110848, 110895 + 1), - "Nushu": range(110960, 111359 + 1), - "Duployan": range(113664, 113823 + 1), - "Shorthand Format Controls": range(113824, 113839 + 1), - "Byzantine Musical Symbols": range(118784, 119039 + 1), - "Musical Symbols": range(119040, 119295 + 1), - "Ancient Greek Musical Notation": range(119296, 119375 + 1), - "Tai Xuan Jing 
Symbols": range(119552, 119647 + 1), - "Counting Rod Numerals": range(119648, 119679 + 1), - "Mathematical Alphanumeric Symbols": range(119808, 120831 + 1), - "Sutton SignWriting": range(120832, 121519 + 1), - "Glagolitic Supplement": range(122880, 122927 + 1), - "Mende Kikakui": range(124928, 125151 + 1), - "Adlam": range(125184, 125279 + 1), - "Arabic Mathematical Alphabetic Symbols": range(126464, 126719 + 1), - "Mahjong Tiles": range(126976, 127023 + 1), - "Domino Tiles": range(127024, 127135 + 1), - "Playing Cards": range(127136, 127231 + 1), - "Enclosed Alphanumeric Supplement": range(127232, 127487 + 1), - "Enclosed Ideographic Supplement": range(127488, 127743 + 1), - "Miscellaneous Symbols and Pictographs": range(127744, 128511 + 1), - "Emoticons range(Emoji)": range(128512, 128591 + 1), - "Ornamental Dingbats": range(128592, 128639 + 1), - "Transport and Map Symbols": range(128640, 128767 + 1), - "Alchemical Symbols": range(128768, 128895 + 1), - "Geometric Shapes Extended": range(128896, 129023 + 1), - "Supplemental Arrows-C": range(129024, 129279 + 1), - "Supplemental Symbols and Pictographs": range(129280, 129535 + 1), - "CJK Unified Ideographs Extension B": range(131072, 173791 + 1), - "CJK Unified Ideographs Extension C": range(173824, 177983 + 1), - "CJK Unified Ideographs Extension D": range(177984, 178207 + 1), - "CJK Unified Ideographs Extension E": range(178208, 183983 + 1), - "CJK Unified Ideographs Extension F": range(183984, 191471 + 1), - "CJK Compatibility Ideographs Supplement": range(194560, 195103 + 1), - "Tags": range(917504, 917631 + 1), - "Variation Selectors Supplement": range(917760, 917999 + 1), -} - - -UNICODE_SECONDARY_RANGE_KEYWORD: List[str] = [ - "Supplement", - "Extended", - "Extensions", - "Modifier", - "Marks", - "Punctuation", - "Symbols", - "Forms", - "Operators", - "Miscellaneous", - "Drawing", - "Block", - "Shapes", - "Supplemental", - "Tags", -] - -RE_POSSIBLE_ENCODING_INDICATION = re_compile( - r"(?:(?:encoding)|(?:charset)|(?:coding))(?:[\:= ]{1,10})(?:[\"\']?)([a-zA-Z0-9\-_]+)(?:[\"\']?)", - IGNORECASE, -) - -IANA_SUPPORTED: List[str] = sorted( - filter( - lambda x: x.endswith("_codec") is False - and x not in {"rot_13", "tactis", "mbcs"}, - list(set(aliases.values())), - ) -) - -IANA_SUPPORTED_COUNT: int = len(IANA_SUPPORTED) - -# pre-computed code page that are similar using the function cp_similarity. 
-IANA_SUPPORTED_SIMILAR: Dict[str, List[str]] = { - "cp037": ["cp1026", "cp1140", "cp273", "cp500"], - "cp1026": ["cp037", "cp1140", "cp273", "cp500"], - "cp1125": ["cp866"], - "cp1140": ["cp037", "cp1026", "cp273", "cp500"], - "cp1250": ["iso8859_2"], - "cp1251": ["kz1048", "ptcp154"], - "cp1252": ["iso8859_15", "iso8859_9", "latin_1"], - "cp1253": ["iso8859_7"], - "cp1254": ["iso8859_15", "iso8859_9", "latin_1"], - "cp1257": ["iso8859_13"], - "cp273": ["cp037", "cp1026", "cp1140", "cp500"], - "cp437": ["cp850", "cp858", "cp860", "cp861", "cp862", "cp863", "cp865"], - "cp500": ["cp037", "cp1026", "cp1140", "cp273"], - "cp850": ["cp437", "cp857", "cp858", "cp865"], - "cp857": ["cp850", "cp858", "cp865"], - "cp858": ["cp437", "cp850", "cp857", "cp865"], - "cp860": ["cp437", "cp861", "cp862", "cp863", "cp865"], - "cp861": ["cp437", "cp860", "cp862", "cp863", "cp865"], - "cp862": ["cp437", "cp860", "cp861", "cp863", "cp865"], - "cp863": ["cp437", "cp860", "cp861", "cp862", "cp865"], - "cp865": ["cp437", "cp850", "cp857", "cp858", "cp860", "cp861", "cp862", "cp863"], - "cp866": ["cp1125"], - "iso8859_10": ["iso8859_14", "iso8859_15", "iso8859_4", "iso8859_9", "latin_1"], - "iso8859_11": ["tis_620"], - "iso8859_13": ["cp1257"], - "iso8859_14": [ - "iso8859_10", - "iso8859_15", - "iso8859_16", - "iso8859_3", - "iso8859_9", - "latin_1", - ], - "iso8859_15": [ - "cp1252", - "cp1254", - "iso8859_10", - "iso8859_14", - "iso8859_16", - "iso8859_3", - "iso8859_9", - "latin_1", - ], - "iso8859_16": [ - "iso8859_14", - "iso8859_15", - "iso8859_2", - "iso8859_3", - "iso8859_9", - "latin_1", - ], - "iso8859_2": ["cp1250", "iso8859_16", "iso8859_4"], - "iso8859_3": ["iso8859_14", "iso8859_15", "iso8859_16", "iso8859_9", "latin_1"], - "iso8859_4": ["iso8859_10", "iso8859_2", "iso8859_9", "latin_1"], - "iso8859_7": ["cp1253"], - "iso8859_9": [ - "cp1252", - "cp1254", - "cp1258", - "iso8859_10", - "iso8859_14", - "iso8859_15", - "iso8859_16", - "iso8859_3", - "iso8859_4", - "latin_1", - ], - "kz1048": ["cp1251", "ptcp154"], - "latin_1": [ - "cp1252", - "cp1254", - "cp1258", - "iso8859_10", - "iso8859_14", - "iso8859_15", - "iso8859_16", - "iso8859_3", - "iso8859_4", - "iso8859_9", - ], - "mac_iceland": ["mac_roman", "mac_turkish"], - "mac_roman": ["mac_iceland", "mac_turkish"], - "mac_turkish": ["mac_iceland", "mac_roman"], - "ptcp154": ["cp1251", "kz1048"], - "tis_620": ["iso8859_11"], -} - - -CHARDET_CORRESPONDENCE: Dict[str, str] = { - "iso2022_kr": "ISO-2022-KR", - "iso2022_jp": "ISO-2022-JP", - "euc_kr": "EUC-KR", - "tis_620": "TIS-620", - "utf_32": "UTF-32", - "euc_jp": "EUC-JP", - "koi8_r": "KOI8-R", - "iso8859_1": "ISO-8859-1", - "iso8859_2": "ISO-8859-2", - "iso8859_5": "ISO-8859-5", - "iso8859_6": "ISO-8859-6", - "iso8859_7": "ISO-8859-7", - "iso8859_8": "ISO-8859-8", - "utf_16": "UTF-16", - "cp855": "IBM855", - "mac_cyrillic": "MacCyrillic", - "gb2312": "GB2312", - "gb18030": "GB18030", - "cp932": "CP932", - "cp866": "IBM866", - "utf_8": "utf-8", - "utf_8_sig": "UTF-8-SIG", - "shift_jis": "SHIFT_JIS", - "big5": "Big5", - "cp1250": "windows-1250", - "cp1251": "windows-1251", - "cp1252": "Windows-1252", - "cp1253": "windows-1253", - "cp1255": "windows-1255", - "cp1256": "windows-1256", - "cp1254": "Windows-1254", - "cp949": "CP949", -} - - -COMMON_SAFE_ASCII_CHARACTERS: Set[str] = { - "<", - ">", - "=", - ":", - "/", - "&", - ";", - "{", - "}", - "[", - "]", - ",", - "|", - '"', - "-", -} - - -KO_NAMES: Set[str] = {"johab", "cp949", "euc_kr"} -ZH_NAMES: Set[str] = {"big5", "cp950", "big5hkscs", "hz"} 
- -LANGUAGE_SUPPORTED_COUNT: int = len(FREQUENCIES) - -# Logging LEVEL below DEBUG -TRACE: int = 5 diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/colorLib/errors.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/colorLib/errors.py deleted file mode 100644 index 18cbebbaf91ff7d5a515321a006be3eb1d83faaf..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/colorLib/errors.py +++ /dev/null @@ -1,2 +0,0 @@ -class ColorLibError(Exception): - pass diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fsspec/__init__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fsspec/__init__.py deleted file mode 100644 index 301fead45c765c60e2e27f07eb174a2675d6f554..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fsspec/__init__.py +++ /dev/null @@ -1,64 +0,0 @@ -from importlib.metadata import entry_points - -from . import _version, caching -from .callbacks import Callback -from .compression import available_compressions -from .core import get_fs_token_paths, open, open_files, open_local -from .exceptions import FSTimeoutError -from .mapping import FSMap, get_mapper -from .registry import ( - available_protocols, - filesystem, - get_filesystem_class, - register_implementation, - registry, -) -from .spec import AbstractFileSystem - -__version__ = _version.get_versions()["version"] - -__all__ = [ - "AbstractFileSystem", - "FSTimeoutError", - "FSMap", - "filesystem", - "register_implementation", - "get_filesystem_class", - "get_fs_token_paths", - "get_mapper", - "open", - "open_files", - "open_local", - "registry", - "caching", - "Callback", - "available_protocols", - "available_compressions", -] - - -def process_entries(): - if entry_points is not None: - try: - eps = entry_points() - except TypeError: - pass # importlib-metadata < 0.8 - else: - if hasattr(eps, "select"): # Python 3.10+ / importlib_metadata >= 3.9.0 - specs = eps.select(group="fsspec.specs") - else: - specs = eps.get("fsspec.specs", []) - for spec in specs: - err_msg = f"Unable to load filesystem from {spec}" - register_implementation( - spec.name, - spec.value.replace(":", "."), - errtxt=err_msg, - # We take our implementations as the ones to overload with if - # for some reason we encounter some, may be the same, already - # registered - clobber=True, - ) - - -process_entries() diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/hf_api.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/hf_api.py deleted file mode 100644 index 727cbb441e7a19ef8ce9838059bb01c32b2da9f0..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/hf_api.py +++ /dev/null @@ -1,5176 +0,0 @@ -# coding=utf-8 -# Copyright 2019-present, the HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from __future__ import annotations - -import inspect -import json -import pprint -import re -import textwrap -import warnings -from concurrent.futures import Future, ThreadPoolExecutor -from dataclasses import dataclass, field -from datetime import datetime -from functools import wraps -from itertools import islice -from pathlib import Path -from typing import Any, BinaryIO, Callable, Dict, Iterable, Iterator, List, Optional, Tuple, TypeVar, Union, overload -from urllib.parse import quote - -import requests -from requests.exceptions import HTTPError - -from huggingface_hub.utils import ( - IGNORE_GIT_FOLDER_PATTERNS, - EntryNotFoundError, - LocalTokenNotFoundError, - RepositoryNotFoundError, - experimental, - get_session, -) - -from ._commit_api import ( - CommitOperation, - CommitOperationAdd, - CommitOperationCopy, - CommitOperationDelete, - fetch_lfs_files_to_copy, - fetch_upload_modes, - prepare_commit_payload, - upload_lfs_files, - warn_on_overwriting_operations, -) -from ._multi_commits import ( - MULTI_COMMIT_PR_CLOSE_COMMENT_FAILURE_BAD_REQUEST_TEMPLATE, - MULTI_COMMIT_PR_CLOSE_COMMENT_FAILURE_NO_CHANGES_TEMPLATE, - MULTI_COMMIT_PR_CLOSING_COMMENT_TEMPLATE, - MULTI_COMMIT_PR_COMPLETION_COMMENT_TEMPLATE, - MultiCommitException, - MultiCommitStep, - MultiCommitStrategy, - multi_commit_create_pull_request, - multi_commit_generate_comment, - multi_commit_parse_pr_description, - plan_multi_commits, -) -from ._space_api import SpaceHardware, SpaceRuntime -from .community import ( - Discussion, - DiscussionComment, - DiscussionStatusChange, - DiscussionTitleChange, - DiscussionWithDetails, - deserialize_event, -) -from .constants import ( - DEFAULT_REVISION, - ENDPOINT, - REGEX_COMMIT_OID, - REPO_TYPE_MODEL, - REPO_TYPES, - REPO_TYPES_MAPPING, - REPO_TYPES_URL_PREFIXES, - SPACES_SDK_TYPES, -) -from .utils import ( # noqa: F401 # imported for backward compatibility - BadRequestError, - HfFolder, - HfHubHTTPError, - build_hf_headers, - filter_repo_objects, - hf_raise_for_status, - logging, - paginate, - parse_datetime, - validate_hf_hub_args, -) -from .utils._deprecation import ( - _deprecate_arguments, -) -from .utils._typing import CallableT, Literal, TypedDict -from .utils.endpoint_helpers import ( - AttributeDictionary, - DatasetFilter, - DatasetTags, - ModelFilter, - ModelTags, - _filter_emissions, -) - - -R = TypeVar("R") # Return type - -USERNAME_PLACEHOLDER = "hf_user" -_REGEX_DISCUSSION_URL = re.compile(r".*/discussions/(\d+)$") - - -logger = logging.get_logger(__name__) - - -class ReprMixin: - """Mixin to create the __repr__ for a class""" - - def __repr__(self): - formatted_value = pprint.pformat(self.__dict__, width=119, compact=True) - if "\n" in formatted_value: - return f"{self.__class__.__name__}: {{ \n{textwrap.indent(formatted_value, ' ')}\n}}" - else: - return f"{self.__class__.__name__}: {formatted_value}" - - -def repo_type_and_id_from_hf_id(hf_id: str, hub_url: Optional[str] = None) -> Tuple[Optional[str], Optional[str], str]: - """ - Returns the repo type and ID from a huggingface.co URL linking to a - repository - - Args: - hf_id (`str`): - An URL or ID of a 
repository on the HF hub. Accepted values are: - - - https://huggingface.co/// - - https://huggingface.co// - - hf://// - - hf:/// - - // - - / - - - hub_url (`str`, *optional*): - The URL of the HuggingFace Hub, defaults to https://huggingface.co - - Returns: - A tuple with three items: repo_type (`str` or `None`), namespace (`str` or - `None`) and repo_id (`str`). - - Raises: - - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - If URL cannot be parsed. - - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - If `repo_type` is unknown. - """ - input_hf_id = hf_id - hub_url = re.sub(r"https?://", "", hub_url if hub_url is not None else ENDPOINT) - is_hf_url = hub_url in hf_id and "@" not in hf_id - - HFFS_PREFIX = "hf://" - if hf_id.startswith(HFFS_PREFIX): # Remove "hf://" prefix if exists - hf_id = hf_id[len(HFFS_PREFIX) :] - - url_segments = hf_id.split("/") - is_hf_id = len(url_segments) <= 3 - - namespace: Optional[str] - if is_hf_url: - namespace, repo_id = url_segments[-2:] - if namespace == hub_url: - namespace = None - if len(url_segments) > 2 and hub_url not in url_segments[-3]: - repo_type = url_segments[-3] - elif namespace in REPO_TYPES_MAPPING: - # Mean canonical dataset or model - repo_type = REPO_TYPES_MAPPING[namespace] - namespace = None - else: - repo_type = None - elif is_hf_id: - if len(url_segments) == 3: - # Passed // or // - repo_type, namespace, repo_id = url_segments[-3:] - elif len(url_segments) == 2: - if url_segments[0] in REPO_TYPES_MAPPING: - # Passed '' or 'datasets/' for a canonical model or dataset - repo_type = REPO_TYPES_MAPPING[url_segments[0]] - namespace = None - repo_id = hf_id.split("/")[-1] - else: - # Passed / or / - namespace, repo_id = hf_id.split("/")[-2:] - repo_type = None - else: - # Passed - repo_id = url_segments[0] - namespace, repo_type = None, None - else: - raise ValueError(f"Unable to retrieve user and repo ID from the passed HF ID: {hf_id}") - - # Check if repo type is known (mapping "spaces" => "space" + empty value => `None`) - if repo_type in REPO_TYPES_MAPPING: - repo_type = REPO_TYPES_MAPPING[repo_type] - if repo_type == "": - repo_type = None - if repo_type not in REPO_TYPES: - raise ValueError(f"Unknown `repo_type`: '{repo_type}' ('{input_hf_id}')") - - return repo_type, namespace, repo_id - - -class BlobLfsInfo(TypedDict, total=False): - size: int - sha256: str - pointer_size: int - - -@dataclass -class CommitInfo: - """Data structure containing information about a newly created commit. - - Returned by [`create_commit`]. - - Args: - commit_url (`str`): - Url where to find the commit. - - commit_message (`str`): - The summary (first line) of the commit that has been created. - - commit_description (`str`): - Description of the commit that has been created. Can be empty. - - oid (`str`): - Commit hash id. Example: `"91c54ad1727ee830252e457677f467be0bfd8a57"`. - - pr_url (`str`, *optional*): - Url to the PR that has been created, if any. Populated when `create_pr=True` - is passed. - - pr_revision (`str`, *optional*): - Revision of the PR that has been created, if any. Populated when - `create_pr=True` is passed. Example: `"refs/pr/1"`. - - pr_num (`int`, *optional*): - Number of the PR discussion that has been created, if any. Populated when - `create_pr=True` is passed. Can be passed as `discussion_num` in - [`get_discussion_details`]. Example: `1`. 
- """ - - commit_url: str - commit_message: str - commit_description: str - oid: str - pr_url: Optional[str] = None - - # Computed from `pr_url` in `__post_init__` - pr_revision: Optional[str] = field(init=False) - pr_num: Optional[str] = field(init=False) - - def __post_init__(self): - """Populate pr-related fields after initialization. - - See https://docs.python.org/3.10/library/dataclasses.html#post-init-processing. - """ - if self.pr_url is not None: - self.pr_revision = _parse_revision_from_pr_url(self.pr_url) - self.pr_num = int(self.pr_revision.split("/")[-1]) - else: - self.pr_revision = None - self.pr_num = None - - -class RepoUrl(str): - """Subclass of `str` describing a repo URL on the Hub. - - `RepoUrl` is returned by `HfApi.create_repo`. It inherits from `str` for backward - compatibility. At initialization, the URL is parsed to populate properties: - - endpoint (`str`) - - namespace (`Optional[str]`) - - repo_name (`str`) - - repo_id (`str`) - - repo_type (`Literal["model", "dataset", "space"]`) - - url (`str`) - - Args: - url (`Any`): - String value of the repo url. - endpoint (`str`, *optional*): - Endpoint of the Hub. Defaults to . - - Example: - ```py - >>> RepoUrl('https://huggingface.co/gpt2') - RepoUrl('https://huggingface.co/gpt2', endpoint='https://huggingface.co', repo_type='model', repo_id='gpt2') - - >>> RepoUrl('https://hub-ci.huggingface.co/datasets/dummy_user/dummy_dataset', endpoint='https://hub-ci.huggingface.co') - RepoUrl('https://hub-ci.huggingface.co/datasets/dummy_user/dummy_dataset', endpoint='https://hub-ci.huggingface.co', repo_type='dataset', repo_id='dummy_user/dummy_dataset') - - >>> RepoUrl('hf://datasets/my-user/my-dataset') - RepoUrl('hf://datasets/my-user/my-dataset', endpoint='https://huggingface.co', repo_type='dataset', repo_id='user/dataset') - - >>> HfApi.create_repo("dummy_model") - RepoUrl('https://huggingface.co/Wauplin/dummy_model', endpoint='https://huggingface.co', repo_type='model', repo_id='Wauplin/dummy_model') - ``` - - Raises: - - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - If URL cannot be parsed. - - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - If `repo_type` is unknown. - """ - - def __new__(cls, url: Any, endpoint: Optional[str] = None): - return super(RepoUrl, cls).__new__(cls, url) - - def __init__(self, url: Any, endpoint: Optional[str] = None) -> None: - super().__init__() - # Parse URL - self.endpoint = endpoint or ENDPOINT - repo_type, namespace, repo_name = repo_type_and_id_from_hf_id(self, hub_url=self.endpoint) - - # Populate fields - self.namespace = namespace - self.repo_name = repo_name - self.repo_id = repo_name if namespace is None else f"{namespace}/{repo_name}" - self.repo_type = repo_type or REPO_TYPE_MODEL - self.url = str(self) # just in case it's needed - - def __repr__(self) -> str: - return f"RepoUrl('{self}', endpoint='{self.endpoint}', repo_type='{self.repo_type}', repo_id='{self.repo_id}')" - - -class RepoFile(ReprMixin): - """ - Data structure that represents a public file inside a repo, accessible from huggingface.co - - Args: - rfilename (str): - file name, relative to the repo root. This is the only attribute that's guaranteed to be here, but under - certain conditions there can certain other stuff. - size (`int`, *optional*): - The file's size, in bytes. This attribute is present when `files_metadata` argument of [`repo_info`] is set - to `True`. It's `None` otherwise. - blob_id (`str`, *optional*): - The file's git OID. 
This attribute is present when `files_metadata` argument of [`repo_info`] is set to - `True`. It's `None` otherwise. - lfs (`BlobLfsInfo`, *optional*): - The file's LFS metadata. This attribute is present when`files_metadata` argument of [`repo_info`] is set to - `True` and the file is stored with Git LFS. It's `None` otherwise. - """ - - def __init__( - self, - rfilename: str, - size: Optional[int] = None, - blobId: Optional[str] = None, - lfs: Optional[BlobLfsInfo] = None, - **kwargs, - ): - self.rfilename = rfilename # filename relative to the repo root - - # Optional file metadata - self.size = size - self.blob_id = blobId - self.lfs = lfs - - # Hack to ensure backward compatibility with future versions of the API. - # See discussion in https://github.com/huggingface/huggingface_hub/pull/951#discussion_r926460408 - for k, v in kwargs.items(): - setattr(self, k, v) - - -class ModelInfo(ReprMixin): - """ - Info about a model accessible from huggingface.co - - Attributes: - modelId (`str`, *optional*): - ID of model repository. - sha (`str`, *optional*): - repo sha at this particular revision - lastModified (`str`, *optional*): - date of last commit to repo - tags (`List[str]`, *optional*): - List of tags. - pipeline_tag (`str`, *optional*): - Pipeline tag to identify the correct widget. - siblings (`List[RepoFile]`, *optional*): - list of ([`huggingface_hub.hf_api.RepoFile`]) objects that constitute the model. - private (`bool`, *optional*, defaults to `False`): - is the repo private - author (`str`, *optional*): - repo author - config (`Dict`, *optional*): - Model configuration information - securityStatus (`Dict`, *optional*): - Security status of the model. - Example: `{"containsInfected": False}` - kwargs (`Dict`, *optional*): - Kwargs that will be become attributes of the class. - """ - - def __init__( - self, - *, - modelId: Optional[str] = None, - sha: Optional[str] = None, - lastModified: Optional[str] = None, - tags: Optional[List[str]] = None, - pipeline_tag: Optional[str] = None, - siblings: Optional[List[Dict]] = None, - private: bool = False, - author: Optional[str] = None, - config: Optional[Dict] = None, - securityStatus: Optional[Dict] = None, - **kwargs, - ): - self.modelId = modelId - self.sha = sha - self.lastModified = lastModified - self.tags = tags - self.pipeline_tag = pipeline_tag - self.siblings = [RepoFile(**x) for x in siblings] if siblings is not None else [] - self.private = private - self.author = author - self.config = config - self.securityStatus = securityStatus - for k, v in kwargs.items(): - setattr(self, k, v) - - def __str__(self): - r = f"Model Name: {self.modelId}, Tags: {self.tags}" - if self.pipeline_tag: - r += f", Task: {self.pipeline_tag}" - return r - - -class DatasetInfo(ReprMixin): - """ - Info about a dataset accessible from huggingface.co - - Attributes: - id (`str`, *optional*): - ID of dataset repository. - sha (`str`, *optional*): - repo sha at this particular revision - lastModified (`str`, *optional*): - date of last commit to repo - tags (`List[str]`, *optional*): - List of tags. - siblings (`List[RepoFile]`, *optional*): - list of [`huggingface_hub.hf_api.RepoFile`] objects that constitute the dataset. - private (`bool`, *optional*, defaults to `False`): - is the repo private - author (`str`, *optional*): - repo author - description (`str`, *optional*): - Description of the dataset - citation (`str`, *optional*): - Dataset citation - cardData (`Dict`, *optional*): - Metadata of the model card as a dictionary. 
- kwargs (`Dict`, *optional*): - Kwargs that will be become attributes of the class. - """ - - def __init__( - self, - *, - id: Optional[str] = None, - sha: Optional[str] = None, - lastModified: Optional[str] = None, - tags: Optional[List[str]] = None, - siblings: Optional[List[Dict]] = None, - private: bool = False, - author: Optional[str] = None, - description: Optional[str] = None, - citation: Optional[str] = None, - cardData: Optional[dict] = None, - **kwargs, - ): - self.id = id - self.sha = sha - self.lastModified = lastModified - self.tags = tags - self.private = private - self.author = author - self.description = description - self.citation = citation - self.cardData = cardData - self.siblings = [RepoFile(**x) for x in siblings] if siblings is not None else [] - # Legacy stuff, "key" is always returned with an empty string - # because of old versions of the datasets lib that need this field - kwargs.pop("key", None) - # Store all the other fields returned by the API - for k, v in kwargs.items(): - setattr(self, k, v) - - def __str__(self): - r = f"Dataset Name: {self.id}, Tags: {self.tags}" - return r - - -class SpaceInfo(ReprMixin): - """ - Info about a Space accessible from huggingface.co - - This is a "dataclass" like container that just sets on itself any attribute - passed by the server. - - Attributes: - id (`str`, *optional*): - id of space - sha (`str`, *optional*): - repo sha at this particular revision - lastModified (`str`, *optional*): - date of last commit to repo - siblings (`List[RepoFile]`, *optional*): - list of [`huggingface_hub.hf_api.RepoFIle`] objects that constitute the Space - private (`bool`, *optional*, defaults to `False`): - is the repo private - author (`str`, *optional*): - repo author - kwargs (`Dict`, *optional*): - Kwargs that will be become attributes of the class. - """ - - def __init__( - self, - *, - id: Optional[str] = None, - sha: Optional[str] = None, - lastModified: Optional[str] = None, - siblings: Optional[List[Dict]] = None, - private: bool = False, - author: Optional[str] = None, - **kwargs, - ): - self.id = id - self.sha = sha - self.lastModified = lastModified - self.siblings = [RepoFile(**x) for x in siblings] if siblings is not None else [] - self.private = private - self.author = author - for k, v in kwargs.items(): - setattr(self, k, v) - - -class MetricInfo(ReprMixin): - """ - Info about a public metric accessible from huggingface.co - """ - - def __init__( - self, - *, - id: Optional[str] = None, # id of metric - description: Optional[str] = None, - citation: Optional[str] = None, - **kwargs, - ): - self.id = id - self.description = description - self.citation = citation - # Legacy stuff, "key" is always returned with an empty string - # because of old versions of the datasets lib that need this field - kwargs.pop("key", None) - # Store all the other fields returned by the API - for k, v in kwargs.items(): - setattr(self, k, v) - - def __str__(self): - r = f"Metric Name: {self.id}" - return r - - -class ModelSearchArguments(AttributeDictionary): - """ - A nested namespace object holding all possible values for properties of - models currently hosted in the Hub with tab-completion. If a value starts - with a number, it will only exist in the dictionary - - Example: - - ```python - >>> args = ModelSearchArguments() - - >>> args.author.huggingface - 'huggingface' - - >>> args.language.en - 'en' - ``` - - - - `ModelSearchArguments` is a legacy class meant for exploratory purposes only. 
Its - initialization requires listing all models on the Hub which makes it increasingly - slower as the number of repos on the Hub increases. - - - """ - - def __init__(self, api: Optional["HfApi"] = None): - self._api = api if api is not None else HfApi() - tags = self._api.get_model_tags() - super().__init__(tags) - self._process_models() - - def _process_models(self): - def clean(s: str) -> str: - return s.replace(" ", "").replace("-", "_").replace(".", "_") - - models = self._api.list_models() - author_dict, model_name_dict = AttributeDictionary(), AttributeDictionary() - for model in models: - if "/" in model.modelId: - author, name = model.modelId.split("/") - author_dict[author] = clean(author) - else: - name = model.modelId - model_name_dict[name] = clean(name) - self["model_name"] = model_name_dict - self["author"] = author_dict - - -class DatasetSearchArguments(AttributeDictionary): - """ - A nested namespace object holding all possible values for properties of - datasets currently hosted in the Hub with tab-completion. If a value starts - with a number, it will only exist in the dictionary - - Example: - - ```python - >>> args = DatasetSearchArguments() - - >>> args.author.huggingface - 'huggingface' - - >>> args.language.en - 'language:en' - ``` - - - - `DatasetSearchArguments` is a legacy class meant for exploratory purposes only. Its - initialization requires listing all datasets on the Hub which makes it increasingly - slower as the number of repos on the Hub increases. - - - """ - - def __init__(self, api: Optional["HfApi"] = None): - self._api = api if api is not None else HfApi() - tags = self._api.get_dataset_tags() - super().__init__(tags) - self._process_models() - - def _process_models(self): - def clean(s: str): - return s.replace(" ", "").replace("-", "_").replace(".", "_") - - datasets = self._api.list_datasets() - author_dict, dataset_name_dict = AttributeDictionary(), AttributeDictionary() - for dataset in datasets: - if "/" in dataset.id: - author, name = dataset.id.split("/") - author_dict[author] = clean(author) - else: - name = dataset.id - dataset_name_dict[name] = clean(name) - self["dataset_name"] = dataset_name_dict - self["author"] = author_dict - - -@dataclass -class GitRefInfo: - """ - Contains information about a git reference for a repo on the Hub. - - Args: - name (`str`): - Name of the reference (e.g. tag name or branch name). - ref (`str`): - Full git ref on the Hub (e.g. `"refs/heads/main"` or `"refs/tags/v1.0"`). - target_commit (`str`): - OID of the target commit for the ref (e.g. `"e7da7f221d5bf496a48136c0cd264e630fe9fcc8"`) - """ - - name: str - ref: str - target_commit: str - - def __init__(self, data: Dict) -> None: - self.name = data["name"] - self.ref = data["ref"] - self.target_commit = data["targetCommit"] - - -@dataclass -class GitRefs: - """ - Contains information about all git references for a repo on the Hub. - - Object is returned by [`list_repo_refs`]. - - Args: - branches (`List[GitRefInfo]`): - A list of [`GitRefInfo`] containing information about branches on the repo. - converts (`List[GitRefInfo]`): - A list of [`GitRefInfo`] containing information about "convert" refs on the repo. - Converts are refs used (internally) to push preprocessed data in Dataset repos. - tags (`List[GitRefInfo]`): - A list of [`GitRefInfo`] containing information about tags on the repo. 
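Example (a brief sketch of reading the refs returned by [`list_repo_refs`]; the `gpt2` output mirrors the example given further below):

```python
>>> from huggingface_hub import HfApi
>>> refs = HfApi().list_repo_refs("gpt2")
>>> [branch.name for branch in refs.branches]
['main']
>>> refs.tags
[]
```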
- """ - - branches: List[GitRefInfo] - converts: List[GitRefInfo] - tags: List[GitRefInfo] - - -@dataclass -class GitCommitInfo: - """ - Contains information about a git commit for a repo on the Hub. Check out [`list_repo_commits`] for more details. - - Args: - commit_id (`str`): - OID of the commit (e.g. `"e7da7f221d5bf496a48136c0cd264e630fe9fcc8"`) - authors (`List[str]`): - List of authors of the commit. - created_at (`datetime`): - Datetime when the commit was created. - title (`str`): - Title of the commit. This is a free-text value entered by the authors. - message (`str`): - Description of the commit. This is a free-text value entered by the authors. - formatted_title (`str`): - Title of the commit formatted as HTML. Only returned if `formatted=True` is set. - formatted_message (`str`): - Description of the commit formatted as HTML. Only returned if `formatted=True` is set. - """ - - commit_id: str - - authors: List[str] - created_at: datetime - title: str - message: str - - formatted_title: Optional[str] - formatted_message: Optional[str] - - def __init__(self, data: Dict) -> None: - self.commit_id = data["id"] - self.authors = [author["user"] for author in data["authors"]] - self.created_at = parse_datetime(data["date"]) - self.title = data["title"] - self.message = data["message"] - - self.formatted_title = data.get("formatted", {}).get("title") - self.formatted_message = data.get("formatted", {}).get("message") - - -@dataclass -class UserLikes: - """ - Contains information about a user likes on the Hub. - - Args: - user (`str`): - Name of the user for which we fetched the likes. - total (`int`): - Total number of likes. - datasets (`List[str]`): - List of datasets liked by the user (as repo_ids). - models (`List[str]`): - List of models liked by the user (as repo_ids). - spaces (`List[str]`): - List of spaces liked by the user (as repo_ids). - """ - - # Metadata - user: str - total: int - - # User likes - datasets: List[str] - models: List[str] - spaces: List[str] - - -def future_compatible(fn: CallableT) -> CallableT: - """Wrap a method of `HfApi` to handle `run_as_future=True`. - - A method flagged as "future_compatible" will be called in a thread if `run_as_future=True` and return a - `concurrent.futures.Future` instance. Otherwise, it will be called normally and return the result. - """ - sig = inspect.signature(fn) - args_params = list(sig.parameters)[1:] # remove "self" from list - - @wraps(fn) - def _inner(self, *args, **kwargs): - # Get `run_as_future` value if provided (default to False) - if "run_as_future" in kwargs: - run_as_future = kwargs["run_as_future"] - kwargs["run_as_future"] = False # avoid recursion error - else: - run_as_future = False - for param, value in zip(args_params, args): - if param == "run_as_future": - run_as_future = value - break - - # Call the function in a thread if `run_as_future=True` - if run_as_future: - return self.run_as_future(fn, self, *args, **kwargs) - - # Otherwise, call the function normally - return fn(self, *args, **kwargs) - - _inner.is_future_compatible = True # type: ignore - return _inner # type: ignore - - -class HfApi: - def __init__( - self, - endpoint: Optional[str] = None, - token: Optional[str] = None, - library_name: Optional[str] = None, - library_version: Optional[str] = None, - user_agent: Union[Dict, str, None] = None, - ) -> None: - """Create a HF client to interact with the Hub via HTTP. 
- - The client is initialized with some high-level settings used in all requests - made to the Hub (HF endpoint, authentication, user agents...). Using the `HfApi` - client is preferred but not mandatory as all of its public methods are exposed - directly at the root of `huggingface_hub`. - - Args: - endpoint (`str`, *optional*): - Hugging Face Hub base url. Will default to https://huggingface.co/. To - be set if you are using a private hub. Otherwise, one can set the - `HF_ENDPOINT` environment variable. - token (`str`, *optional*): - Hugging Face token. Will default to the locally saved token if - not provided. - library_name (`str`, *optional*): - The name of the library that is making the HTTP request. Will be added to - the user-agent header. Example: `"transformers"`. - library_version (`str`, *optional*): - The version of the library that is making the HTTP request. Will be added - to the user-agent header. Example: `"4.24.0"`. - user_agent (`str`, `dict`, *optional*): - The user agent info in the form of a dictionary or a single string. It will - be completed with information about the installed packages. - """ - self.endpoint = endpoint if endpoint is not None else ENDPOINT - self.token = token - self.library_name = library_name - self.library_version = library_version - self.user_agent = user_agent - self._thread_pool: Optional[ThreadPoolExecutor] = None - - def run_as_future(self, fn: Callable[..., R], *args, **kwargs) -> Future[R]: - """ - Run a method in the background and return a Future instance. - - The main goal is to run methods without blocking the main thread (e.g. to push data during a training). - Background jobs are queued to preserve order but are not ran in parallel. If you need to speed-up your scripts - by parallelizing lots of call to the API, you must setup and use your own [ThreadPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor). - - Note: Most-used methods like [`upload_file`], [`upload_folder`] and [`create_commit`] have a `run_as_future: bool` - argument to directly call them in the background. This is equivalent to calling `api.run_as_future(...)` on them - but less verbose. - - Args: - fn (`Callable`): - The method to run in the background. - *args, **kwargs: - Arguments with which the method will be called. - - Return: - `Future`: a [Future](https://docs.python.org/3/library/concurrent.futures.html#future-objects) instance to - get the result of the task. - - Example: - ```py - >>> from huggingface_hub import HfApi - >>> api = HfApi() - >>> future = api.run_as_future(api.whoami) # instant - >>> future.done() - False - >>> future.result() # wait until complete and return result - (...) - >>> future.done() - True - ``` - """ - if self._thread_pool is None: - self._thread_pool = ThreadPoolExecutor(max_workers=1) - self._thread_pool - return self._thread_pool.submit(fn, *args, **kwargs) - - @validate_hf_hub_args - def whoami(self, token: Optional[str] = None) -> Dict: - """ - Call HF API to know "whoami". - - Args: - token (`str`, *optional*): - Hugging Face token. Will default to the locally saved token if - not provided. - """ - r = get_session().get( - f"{self.endpoint}/api/whoami-v2", - headers=self._build_hf_headers( - # If `token` is provided and not `None`, it will be used by default. - # Otherwise, the token must be retrieved from cache or env variable. - token=(token or self.token or True), - ), - ) - try: - hf_raise_for_status(r) - except HTTPError as e: - raise HTTPError( - "Invalid user token. 
If you didn't pass a user token, make sure you " - "are properly logged in by executing `huggingface-cli login`, and " - "if you did pass a user token, double-check it's correct." - ) from e - return r.json() - - def get_token_permission(self, token: Optional[str] = None) -> Literal["read", "write", None]: - """ - Check if a given `token` is valid and return its permissions. - - For more details about tokens, please refer to https://huggingface.co/docs/hub/security-tokens#what-are-user-access-tokens. - - Args: - token (`str`, *optional*): - The token to check for validity. Defaults to the one saved locally. - - Returns: - `Literal["read", "write", None]`: Permission granted by the token ("read" or "write"). Returns `None` if no - token passed or token is invalid. - """ - try: - return self.whoami(token=token)["auth"]["accessToken"]["role"] - except (LocalTokenNotFoundError, HTTPError): - return None - - def get_model_tags(self) -> ModelTags: - """ - List all valid model tags as a nested namespace object - """ - path = f"{self.endpoint}/api/models-tags-by-type" - r = get_session().get(path) - hf_raise_for_status(r) - d = r.json() - return ModelTags(d) - - def get_dataset_tags(self) -> DatasetTags: - """ - List all valid dataset tags as a nested namespace object. - """ - path = f"{self.endpoint}/api/datasets-tags-by-type" - r = get_session().get(path) - hf_raise_for_status(r) - d = r.json() - return DatasetTags(d) - - @validate_hf_hub_args - def list_models( - self, - *, - filter: Union[ModelFilter, str, Iterable[str], None] = None, - author: Optional[str] = None, - search: Optional[str] = None, - emissions_thresholds: Optional[Tuple[float, float]] = None, - sort: Union[Literal["lastModified"], str, None] = None, - direction: Optional[Literal[-1]] = None, - limit: Optional[int] = None, - full: Optional[bool] = None, - cardData: bool = False, - fetch_config: bool = False, - token: Optional[Union[bool, str]] = None, - ) -> Iterable[ModelInfo]: - """ - List models hosted on the Huggingface Hub, given some filters. - - Args: - filter ([`ModelFilter`] or `str` or `Iterable`, *optional*): - A string or [`ModelFilter`] which can be used to identify models - on the Hub. - author (`str`, *optional*): - A string which identify the author (user or organization) of the - returned models - search (`str`, *optional*): - A string that will be contained in the returned model ids. - emissions_thresholds (`Tuple`, *optional*): - A tuple of two ints or floats representing a minimum and maximum - carbon footprint to filter the resulting models with in grams. - sort (`Literal["lastModified"]` or `str`, *optional*): - The key with which to sort the resulting models. Possible values - are the properties of the [`huggingface_hub.hf_api.ModelInfo`] class. - direction (`Literal[-1]` or `int`, *optional*): - Direction in which to sort. The value `-1` sorts by descending - order while all other values sort by ascending order. - limit (`int`, *optional*): - The limit on the number of models fetched. Leaving this option - to `None` fetches all models. - full (`bool`, *optional*): - Whether to fetch all model data, including the `lastModified`, - the `sha`, the files and the `tags`. This is set to `True` by - default when using a filter. - cardData (`bool`, *optional*): - Whether to grab the metadata for the model as well. Can contain - useful information such as carbon emissions, metrics, and - datasets trained on. - fetch_config (`bool`, *optional*): - Whether to fetch the model configs as well. 
This is not included - in `full` due to its size. - token (`bool` or `str`, *optional*): - A valid authentication token (see https://huggingface.co/settings/token). - If `None` or `True` and machine is logged in (through `huggingface-cli login` - or [`~huggingface_hub.login`]), token will be retrieved from the cache. - If `False`, token is not sent in the request header. - - Returns: - `Iterable[ModelInfo]`: an iterable of [`huggingface_hub.hf_api.ModelInfo`] objects. - - Example usage with the `filter` argument: - - ```python - >>> from huggingface_hub import HfApi - - >>> api = HfApi() - - >>> # List all models - >>> api.list_models() - - >>> # Get all valid search arguments - >>> args = ModelSearchArguments() - - >>> # List only the text classification models - >>> api.list_models(filter="text-classification") - >>> # Using the `ModelFilter` - >>> filt = ModelFilter(task="text-classification") - >>> # With `ModelSearchArguments` - >>> filt = ModelFilter(task=args.pipeline_tags.TextClassification) - >>> api.list_models(filter=filt) - - >>> # Using `ModelFilter` and `ModelSearchArguments` to find text classification in both PyTorch and TensorFlow - >>> filt = ModelFilter( - ... task=args.pipeline_tags.TextClassification, - ... library=[args.library.PyTorch, args.library.TensorFlow], - ... ) - >>> api.list_models(filter=filt) - - >>> # List only models from the AllenNLP library - >>> api.list_models(filter="allennlp") - >>> # Using `ModelFilter` and `ModelSearchArguments` - >>> filt = ModelFilter(library=args.library.allennlp) - ``` - - Example usage with the `search` argument: - - ```python - >>> from huggingface_hub import HfApi - - >>> api = HfApi() - - >>> # List all models with "bert" in their name - >>> api.list_models(search="bert") - - >>> # List all models with "bert" in their name made by google - >>> api.list_models(search="bert", author="google") - ``` - """ - if emissions_thresholds is not None and cardData is None: - raise ValueError("`emissions_thresholds` were passed without setting `cardData=True`.") - - path = f"{self.endpoint}/api/models" - headers = self._build_hf_headers(token=token) - params = {} - if filter is not None: - if isinstance(filter, ModelFilter): - params = self._unpack_model_filter(filter) - else: - params.update({"filter": filter}) - params.update({"full": True}) - if author is not None: - params.update({"author": author}) - if search is not None: - params.update({"search": search}) - if sort is not None: - params.update({"sort": sort}) - if direction is not None: - params.update({"direction": direction}) - if limit is not None: - params.update({"limit": limit}) - if full is not None: - if full: - params.update({"full": True}) - elif "full" in params: - del params["full"] - if fetch_config: - params.update({"config": True}) - if cardData: - params.update({"cardData": True}) - - # `items` is a generator - items = paginate(path, params=params, headers=headers) - if limit is not None: - items = islice(items, limit) # Do not iterate over all pages - if emissions_thresholds is not None: - items = _filter_emissions(items, *emissions_thresholds) - for item in items: - yield ModelInfo(**item) - - def _unpack_model_filter(self, model_filter: ModelFilter): - """ - Unpacks a [`ModelFilter`] into something readable for `list_models` - """ - model_str = "" - tags = [] - - # Handling author - if model_filter.author is not None: - model_str = f"{model_filter.author}/" - - # Handling model_name - if model_filter.model_name is not None: - model_str += 
model_filter.model_name - - filter_list: List[str] = [] - - # Handling tasks - if model_filter.task is not None: - filter_list.extend([model_filter.task] if isinstance(model_filter.task, str) else model_filter.task) - - # Handling dataset - if model_filter.trained_dataset is not None: - if not isinstance(model_filter.trained_dataset, (list, tuple)): - model_filter.trained_dataset = [model_filter.trained_dataset] - for dataset in model_filter.trained_dataset: - if "dataset:" not in dataset: - dataset = f"dataset:{dataset}" - filter_list.append(dataset) - - # Handling library - if model_filter.library: - filter_list.extend( - [model_filter.library] if isinstance(model_filter.library, str) else model_filter.library - ) - - # Handling tags - if model_filter.tags: - tags.extend([model_filter.tags] if isinstance(model_filter.tags, str) else model_filter.tags) - - query_dict: Dict[str, Any] = {} - if model_str is not None: - query_dict["search"] = model_str - if len(tags) > 0: - query_dict["tags"] = tags - if isinstance(model_filter.language, list): - filter_list.extend(model_filter.language) - elif isinstance(model_filter.language, str): - filter_list.append(model_filter.language) - query_dict["filter"] = tuple(filter_list) - return query_dict - - @validate_hf_hub_args - def list_datasets( - self, - *, - filter: Union[DatasetFilter, str, Iterable[str], None] = None, - author: Optional[str] = None, - search: Optional[str] = None, - sort: Union[Literal["lastModified"], str, None] = None, - direction: Optional[Literal[-1]] = None, - limit: Optional[int] = None, - full: Optional[bool] = None, - token: Optional[str] = None, - ) -> Iterable[DatasetInfo]: - """ - List datasets hosted on the Huggingface Hub, given some filters. - - Args: - filter ([`DatasetFilter`] or `str` or `Iterable`, *optional*): - A string or [`DatasetFilter`] which can be used to identify - datasets on the hub. - author (`str`, *optional*): - A string which identify the author of the returned datasets. - search (`str`, *optional*): - A string that will be contained in the returned datasets. - sort (`Literal["lastModified"]` or `str`, *optional*): - The key with which to sort the resulting datasets. Possible - values are the properties of the [`huggingface_hub.hf_api.DatasetInfo`] class. - direction (`Literal[-1]` or `int`, *optional*): - Direction in which to sort. The value `-1` sorts by descending - order while all other values sort by ascending order. - limit (`int`, *optional*): - The limit on the number of datasets fetched. Leaving this option - to `None` fetches all datasets. - full (`bool`, *optional*): - Whether to fetch all dataset data, including the `lastModified` - and the `cardData`. Can contain useful information such as the - PapersWithCode ID. - token (`bool` or `str`, *optional*): - A valid authentication token (see https://huggingface.co/settings/token). - If `None` or `True` and machine is logged in (through `huggingface-cli login` - or [`~huggingface_hub.login`]), token will be retrieved from the cache. - If `False`, token is not sent in the request header. - - Returns: - `Iterable[DatasetInfo]`: an iterable of [`huggingface_hub.hf_api.DatasetInfo`] objects. 
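Results are paginated and yielded lazily, so passing `limit` (as in this minimal sketch; the author name is only illustrative) avoids iterating over every dataset on the Hub:

```python
>>> from huggingface_hub import HfApi

>>> api = HfApi()
>>> # Five most recently modified datasets from a given author
>>> for ds in api.list_datasets(author="huggingface", sort="lastModified", direction=-1, limit=5):
...     print(ds.id)
```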
- - Example usage with the `filter` argument: - - ```python - >>> from huggingface_hub import HfApi - - >>> api = HfApi() - - >>> # List all datasets - >>> api.list_datasets() - - >>> # Get all valid search arguments - >>> args = DatasetSearchArguments() - - >>> # List only the text classification datasets - >>> api.list_datasets(filter="task_categories:text-classification") - >>> # Using the `DatasetFilter` - >>> filt = DatasetFilter(task_categories="text-classification") - >>> # With `DatasetSearchArguments` - >>> filt = DatasetFilter(task=args.task_categories.text_classification) - >>> api.list_models(filter=filt) - - >>> # List only the datasets in russian for language modeling - >>> api.list_datasets( - ... filter=("language:ru", "task_ids:language-modeling") - ... ) - >>> # Using the `DatasetFilter` - >>> filt = DatasetFilter(language="ru", task_ids="language-modeling") - >>> # With `DatasetSearchArguments` - >>> filt = DatasetFilter( - ... language=args.language.ru, - ... task_ids=args.task_ids.language_modeling, - ... ) - >>> api.list_datasets(filter=filt) - ``` - - Example usage with the `search` argument: - - ```python - >>> from huggingface_hub import HfApi - - >>> api = HfApi() - - >>> # List all datasets with "text" in their name - >>> api.list_datasets(search="text") - - >>> # List all datasets with "text" in their name made by google - >>> api.list_datasets(search="text", author="google") - ``` - """ - path = f"{self.endpoint}/api/datasets" - headers = self._build_hf_headers(token=token) - params = {} - if filter is not None: - if isinstance(filter, DatasetFilter): - params = self._unpack_dataset_filter(filter) - else: - params.update({"filter": filter}) - if author is not None: - params.update({"author": author}) - if search is not None: - params.update({"search": search}) - if sort is not None: - params.update({"sort": sort}) - if direction is not None: - params.update({"direction": direction}) - if limit is not None: - params.update({"limit": limit}) - if full: - params.update({"full": True}) - - items = paginate(path, params=params, headers=headers) - if limit is not None: - items = islice(items, limit) # Do not iterate over all pages - for item in items: - yield DatasetInfo(**item) - - def _unpack_dataset_filter(self, dataset_filter: DatasetFilter): - """ - Unpacks a [`DatasetFilter`] into something readable for `list_datasets` - """ - dataset_str = "" - - # Handling author - if dataset_filter.author is not None: - dataset_str = f"{dataset_filter.author}/" - - # Handling dataset_name - if dataset_filter.dataset_name is not None: - dataset_str += dataset_filter.dataset_name - - filter_list = [] - data_attributes = [ - "benchmark", - "language_creators", - "language", - "multilinguality", - "size_categories", - "task_categories", - "task_ids", - ] - - for attr in data_attributes: - curr_attr = getattr(dataset_filter, attr) - if curr_attr is not None: - if not isinstance(curr_attr, (list, tuple)): - curr_attr = [curr_attr] - for data in curr_attr: - if f"{attr}:" not in data: - data = f"{attr}:{data}" - filter_list.append(data) - - query_dict: Dict[str, Any] = {} - if dataset_str is not None: - query_dict["search"] = dataset_str - query_dict["filter"] = tuple(filter_list) - return query_dict - - def list_metrics(self) -> List[MetricInfo]: - """ - Get the public list of all the metrics on huggingface.co - - Returns: - `List[MetricInfo]`: a list of [`MetricInfo`] objects which. 
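Example (a minimal sketch; the exact metrics returned depend on what is currently hosted on the Hub):

```python
>>> from huggingface_hub import HfApi
>>> metrics = HfApi().list_metrics()
>>> metrics[0].id  # e.g. 'accuracy'
```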
- """ - path = f"{self.endpoint}/api/metrics" - r = get_session().get(path) - hf_raise_for_status(r) - d = r.json() - return [MetricInfo(**x) for x in d] - - @validate_hf_hub_args - def list_spaces( - self, - *, - filter: Union[str, Iterable[str], None] = None, - author: Optional[str] = None, - search: Optional[str] = None, - sort: Union[Literal["lastModified"], str, None] = None, - direction: Optional[Literal[-1]] = None, - limit: Optional[int] = None, - datasets: Union[str, Iterable[str], None] = None, - models: Union[str, Iterable[str], None] = None, - linked: bool = False, - full: Optional[bool] = None, - token: Optional[str] = None, - ) -> Iterable[SpaceInfo]: - """ - List spaces hosted on the Huggingface Hub, given some filters. - - Args: - filter (`str` or `Iterable`, *optional*): - A string tag or list of tags that can be used to identify Spaces on the Hub. - author (`str`, *optional*): - A string which identify the author of the returned Spaces. - search (`str`, *optional*): - A string that will be contained in the returned Spaces. - sort (`Literal["lastModified"]` or `str`, *optional*): - The key with which to sort the resulting Spaces. Possible - values are the properties of the [`huggingface_hub.hf_api.SpaceInfo`]` class. - direction (`Literal[-1]` or `int`, *optional*): - Direction in which to sort. The value `-1` sorts by descending - order while all other values sort by ascending order. - limit (`int`, *optional*): - The limit on the number of Spaces fetched. Leaving this option - to `None` fetches all Spaces. - datasets (`str` or `Iterable`, *optional*): - Whether to return Spaces that make use of a dataset. - The name of a specific dataset can be passed as a string. - models (`str` or `Iterable`, *optional*): - Whether to return Spaces that make use of a model. - The name of a specific model can be passed as a string. - linked (`bool`, *optional*): - Whether to return Spaces that make use of either a model or a dataset. - full (`bool`, *optional*): - Whether to fetch all Spaces data, including the `lastModified` - and the `cardData`. - token (`bool` or `str`, *optional*): - A valid authentication token (see https://huggingface.co/settings/token). - If `None` or `True` and machine is logged in (through `huggingface-cli login` - or [`~huggingface_hub.login`]), token will be retrieved from the cache. - If `False`, token is not sent in the request header. - - Returns: - `Iterable[SpaceInfo]`: an iterable of [`huggingface_hub.hf_api.SpaceInfo`] objects. 
- """ - path = f"{self.endpoint}/api/spaces" - headers = self._build_hf_headers(token=token) - params: Dict[str, Any] = {} - if filter is not None: - params.update({"filter": filter}) - if author is not None: - params.update({"author": author}) - if search is not None: - params.update({"search": search}) - if sort is not None: - params.update({"sort": sort}) - if direction is not None: - params.update({"direction": direction}) - if limit is not None: - params.update({"limit": limit}) - if full: - params.update({"full": True}) - if linked: - params.update({"linked": True}) - if datasets is not None: - params.update({"datasets": datasets}) - if models is not None: - params.update({"models": models}) - - items = paginate(path, params=params, headers=headers) - if limit is not None: - items = islice(items, limit) # Do not iterate over all pages - for item in items: - yield SpaceInfo(**item) - - @validate_hf_hub_args - def like( - self, - repo_id: str, - *, - token: Optional[str] = None, - repo_type: Optional[str] = None, - ) -> None: - """ - Like a given repo on the Hub (e.g. set as favorite). - - See also [`unlike`] and [`list_liked_repos`]. - - Args: - repo_id (`str`): - The repository to like. Example: `"user/my-cool-model"`. - - token (`str`, *optional*): - Authentication token. Will default to the stored token. - - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if liking a dataset or space, `None` or - `"model"` if liking a model. Default is `None`. - - Raises: - [`~utils.RepositoryNotFoundError`]: - If repository is not found (error 404): wrong repo_id/repo_type, private - but not authenticated or repo does not exist. - - Example: - ```python - >>> from huggingface_hub import like, list_liked_repos, unlike - >>> like("gpt2") - >>> "gpt2" in list_liked_repos().models - True - >>> unlike("gpt2") - >>> "gpt2" in list_liked_repos().models - False - ``` - """ - if repo_type is None: - repo_type = REPO_TYPE_MODEL - response = get_session().post( - url=f"{self.endpoint}/api/{repo_type}s/{repo_id}/like", - headers=self._build_hf_headers(token=token), - ) - hf_raise_for_status(response) - - @validate_hf_hub_args - def unlike( - self, - repo_id: str, - *, - token: Optional[str] = None, - repo_type: Optional[str] = None, - ) -> None: - """ - Unlike a given repo on the Hub (e.g. remove from favorite list). - - See also [`like`] and [`list_liked_repos`]. - - Args: - repo_id (`str`): - The repository to unlike. Example: `"user/my-cool-model"`. - - token (`str`, *optional*): - Authentication token. Will default to the stored token. - - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if unliking a dataset or space, `None` or - `"model"` if unliking a model. Default is `None`. - - Raises: - [`~utils.RepositoryNotFoundError`]: - If repository is not found (error 404): wrong repo_id/repo_type, private - but not authenticated or repo does not exist. 
- - Example: - ```python - >>> from huggingface_hub import like, list_liked_repos, unlike - >>> like("gpt2") - >>> "gpt2" in list_liked_repos().models - True - >>> unlike("gpt2") - >>> "gpt2" in list_liked_repos().models - False - ``` - """ - if repo_type is None: - repo_type = REPO_TYPE_MODEL - response = get_session().delete( - url=f"{self.endpoint}/api/{repo_type}s/{repo_id}/like", headers=self._build_hf_headers(token=token) - ) - hf_raise_for_status(response) - - @validate_hf_hub_args - def list_liked_repos( - self, - user: Optional[str] = None, - *, - token: Optional[str] = None, - ) -> UserLikes: - """ - List all public repos liked by a user on huggingface.co. - - This list is public so token is optional. If `user` is not passed, it defaults to - the logged in user. - - See also [`like`] and [`unlike`]. - - Args: - user (`str`, *optional*): - Name of the user for which you want to fetch the likes. - token (`str`, *optional*): - A valid authentication token (see https://huggingface.co/settings/token). - Used only if `user` is not passed to implicitly determine the current - user name. - - Returns: - [`UserLikes`]: object containing the user name and 3 lists of repo ids (1 for - models, 1 for datasets and 1 for Spaces). - - Raises: - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - If `user` is not passed and no token found (either from argument or from machine). - - Example: - ```python - >>> from huggingface_hub import list_liked_repos - - >>> likes = list_liked_repos("julien-c") - - >>> likes.user - "julien-c" - - >>> likes.models - ["osanseviero/streamlit_1.15", "Xhaheen/ChatGPT_HF", ...] - ``` - """ - # User is either provided explicitly or retrieved from current token. - if user is None: - me = self.whoami(token=token) - if me["type"] == "user": - user = me["name"] - else: - raise ValueError( - "Cannot list liked repos. You must provide a 'user' as input or be logged in as a user." - ) - - path = f"{self.endpoint}/api/users/{user}/likes" - headers = self._build_hf_headers(token=token) - - likes = list(paginate(path, params={}, headers=headers)) - # Looping over a list of items similar to: - # { - # 'createdAt': '2021-09-09T21:53:27.000Z', - # 'repo': { - # 'name': 'PaddlePaddle/PaddleOCR', - # 'type': 'space' - # } - # } - # Let's loop 3 times over the received list. Less efficient but more straightforward to read. - return UserLikes( - user=user, - total=len(likes), - models=[like["repo"]["name"] for like in likes if like["repo"]["type"] == "model"], - datasets=[like["repo"]["name"] for like in likes if like["repo"]["type"] == "dataset"], - spaces=[like["repo"]["name"] for like in likes if like["repo"]["type"] == "space"], - ) - - @validate_hf_hub_args - def model_info( - self, - repo_id: str, - *, - revision: Optional[str] = None, - timeout: Optional[float] = None, - securityStatus: Optional[bool] = None, - files_metadata: bool = False, - token: Optional[Union[bool, str]] = None, - ) -> ModelInfo: - """ - Get info on one specific model on huggingface.co - - Model can be private if you pass an acceptable token or are logged in. - - Args: - repo_id (`str`): - A namespace (user or an organization) and a repo name separated - by a `/`. - revision (`str`, *optional*): - The revision of the model repository from which to get the - information. - timeout (`float`, *optional*): - Whether to set a timeout for the request to the Hub. - securityStatus (`bool`, *optional*): - Whether to retrieve the security status from the model - repository as well. 
- files_metadata (`bool`, *optional*): - Whether or not to retrieve metadata for files in the repository - (size, LFS metadata, etc). Defaults to `False`. - token (`bool` or `str`, *optional*): - A valid authentication token (see https://huggingface.co/settings/token). - If `None` or `True` and machine is logged in (through `huggingface-cli login` - or [`~huggingface_hub.login`]), token will be retrieved from the cache. - If `False`, token is not sent in the request header. - - Returns: - [`huggingface_hub.hf_api.ModelInfo`]: The model repository information. - - - - Raises the following errors: - - - [`~utils.RepositoryNotFoundError`] - If the repository to download from cannot be found. This may be because it doesn't exist, - or because it is set to `private` and you do not have access. - - [`~utils.RevisionNotFoundError`] - If the revision to download from cannot be found. - - - """ - headers = self._build_hf_headers(token=token) - path = ( - f"{self.endpoint}/api/models/{repo_id}" - if revision is None - else (f"{self.endpoint}/api/models/{repo_id}/revision/{quote(revision, safe='')}") - ) - params = {} - if securityStatus: - params["securityStatus"] = True - if files_metadata: - params["blobs"] = True - r = get_session().get(path, headers=headers, timeout=timeout, params=params) - hf_raise_for_status(r) - d = r.json() - return ModelInfo(**d) - - @validate_hf_hub_args - def dataset_info( - self, - repo_id: str, - *, - revision: Optional[str] = None, - timeout: Optional[float] = None, - files_metadata: bool = False, - token: Optional[Union[bool, str]] = None, - ) -> DatasetInfo: - """ - Get info on one specific dataset on huggingface.co. - - Dataset can be private if you pass an acceptable token. - - Args: - repo_id (`str`): - A namespace (user or an organization) and a repo name separated - by a `/`. - revision (`str`, *optional*): - The revision of the dataset repository from which to get the - information. - timeout (`float`, *optional*): - Whether to set a timeout for the request to the Hub. - files_metadata (`bool`, *optional*): - Whether or not to retrieve metadata for files in the repository - (size, LFS metadata, etc). Defaults to `False`. - token (`bool` or `str`, *optional*): - A valid authentication token (see https://huggingface.co/settings/token). - If `None` or `True` and machine is logged in (through `huggingface-cli login` - or [`~huggingface_hub.login`]), token will be retrieved from the cache. - If `False`, token is not sent in the request header. - - Returns: - [`hf_api.DatasetInfo`]: The dataset repository information. - - - - Raises the following errors: - - - [`~utils.RepositoryNotFoundError`] - If the repository to download from cannot be found. This may be because it doesn't exist, - or because it is set to `private` and you do not have access. - - [`~utils.RevisionNotFoundError`] - If the revision to download from cannot be found. 
- - - """ - headers = self._build_hf_headers(token=token) - path = ( - f"{self.endpoint}/api/datasets/{repo_id}" - if revision is None - else (f"{self.endpoint}/api/datasets/{repo_id}/revision/{quote(revision, safe='')}") - ) - params = {} - if files_metadata: - params["blobs"] = True - - r = get_session().get(path, headers=headers, timeout=timeout, params=params) - hf_raise_for_status(r) - d = r.json() - return DatasetInfo(**d) - - @validate_hf_hub_args - def space_info( - self, - repo_id: str, - *, - revision: Optional[str] = None, - timeout: Optional[float] = None, - files_metadata: bool = False, - token: Optional[Union[bool, str]] = None, - ) -> SpaceInfo: - """ - Get info on one specific Space on huggingface.co. - - Space can be private if you pass an acceptable token. - - Args: - repo_id (`str`): - A namespace (user or an organization) and a repo name separated - by a `/`. - revision (`str`, *optional*): - The revision of the space repository from which to get the - information. - timeout (`float`, *optional*): - Whether to set a timeout for the request to the Hub. - files_metadata (`bool`, *optional*): - Whether or not to retrieve metadata for files in the repository - (size, LFS metadata, etc). Defaults to `False`. - token (`bool` or `str`, *optional*): - A valid authentication token (see https://huggingface.co/settings/token). - If `None` or `True` and machine is logged in (through `huggingface-cli login` - or [`~huggingface_hub.login`]), token will be retrieved from the cache. - If `False`, token is not sent in the request header. - - Returns: - [`~hf_api.SpaceInfo`]: The space repository information. - - - - Raises the following errors: - - - [`~utils.RepositoryNotFoundError`] - If the repository to download from cannot be found. This may be because it doesn't exist, - or because it is set to `private` and you do not have access. - - [`~utils.RevisionNotFoundError`] - If the revision to download from cannot be found. - - - """ - headers = self._build_hf_headers(token=token) - path = ( - f"{self.endpoint}/api/spaces/{repo_id}" - if revision is None - else (f"{self.endpoint}/api/spaces/{repo_id}/revision/{quote(revision, safe='')}") - ) - params = {} - if files_metadata: - params["blobs"] = True - - r = get_session().get(path, headers=headers, timeout=timeout, params=params) - hf_raise_for_status(r) - d = r.json() - return SpaceInfo(**d) - - @validate_hf_hub_args - def repo_info( - self, - repo_id: str, - *, - revision: Optional[str] = None, - repo_type: Optional[str] = None, - timeout: Optional[float] = None, - files_metadata: bool = False, - token: Optional[Union[bool, str]] = None, - ) -> Union[ModelInfo, DatasetInfo, SpaceInfo]: - """ - Get the info object for a given repo of a given type. - - Args: - repo_id (`str`): - A namespace (user or an organization) and a repo name separated - by a `/`. - revision (`str`, *optional*): - The revision of the repository from which to get the - information. - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if getting repository info from a dataset or a space, - `None` or `"model"` if getting repository info from a model. Default is `None`. - timeout (`float`, *optional*): - Whether to set a timeout for the request to the Hub. - files_metadata (`bool`, *optional*): - Whether or not to retrieve metadata for files in the repository - (size, LFS metadata, etc). Defaults to `False`. - token (`bool` or `str`, *optional*): - A valid authentication token (see https://huggingface.co/settings/token). 
- If `None` or `True` and machine is logged in (through `huggingface-cli login` - or [`~huggingface_hub.login`]), token will be retrieved from the cache. - If `False`, token is not sent in the request header. - - Returns: - `Union[SpaceInfo, DatasetInfo, ModelInfo]`: The repository information, as a - [`huggingface_hub.hf_api.DatasetInfo`], [`huggingface_hub.hf_api.ModelInfo`] - or [`huggingface_hub.hf_api.SpaceInfo`] object. - - - - Raises the following errors: - - - [`~utils.RepositoryNotFoundError`] - If the repository to download from cannot be found. This may be because it doesn't exist, - or because it is set to `private` and you do not have access. - - [`~utils.RevisionNotFoundError`] - If the revision to download from cannot be found. - - - """ - if repo_type is None or repo_type == "model": - method = self.model_info - elif repo_type == "dataset": - method = self.dataset_info # type: ignore - elif repo_type == "space": - method = self.space_info # type: ignore - else: - raise ValueError("Unsupported repo type.") - return method( - repo_id, - revision=revision, - token=token, - timeout=timeout, - files_metadata=files_metadata, - ) - - @validate_hf_hub_args - def list_files_info( - self, - repo_id: str, - paths: Union[List[str], str, None] = None, - *, - expand: bool = False, - revision: Optional[str] = None, - repo_type: Optional[str] = None, - token: Optional[Union[bool, str]] = None, - ) -> Iterable[RepoFile]: - """ - List files on a repo and get information about them. - - Takes as input a list of paths. Those paths can be either files or folders. Two server endpoints are called: - 1. POST "/paths-info" to get information about the provided paths. Called once. - 2. GET "/tree?recursive=True" to paginate over the input folders. Called only if a folder path is provided as - input. Will be called multiple times to follow pagination. - If no path is provided as input, step 1. is ignored and all files from the repo are listed. - - Args: - repo_id (`str`): - A namespace (user or an organization) and a repo name separated by a `/`. - paths (`Union[List[str], str, None]`, *optional*): - The paths to get information about. Paths to files are directly resolved. Paths to folders are resolved - recursively which means that information is returned about all files in the folder and its subfolders. - If `None`, all files are returned (the default). If a path do not exist, it is ignored without raising - an exception. - expand (`bool`, *optional*, defaults to `False`): - Whether to fetch more information about the files (e.g. last commit and security scan results). This - operation is more expensive for the server so only 50 results are returned per page (instead of 1000). - As pagination is implemented in `huggingface_hub`, this is transparent for you except for the time it - takes to get the results. - revision (`str`, *optional*): - The revision of the repository from which to get the information. Defaults to `"main"` branch. - repo_type (`str`, *optional*): - The type of the repository from which to get the information (`"model"`, `"dataset"` or `"space"`. - Defaults to `"model"`. - token (`bool` or `str`, *optional*): - A valid authentication token (see https://huggingface.co/settings/token). If `None` or `True` and - machine is logged in (through `huggingface-cli login` or [`~huggingface_hub.login`]), token will be - retrieved from the cache. If `False`, token is not sent in the request header. 
- - Returns: - `Iterable[RepoFile]`: - The information about the files, as an iterable of [`RepoFile`] objects. The order of the files is - not guaranteed. - - Raises: - [`~utils.RepositoryNotFoundError`]: - If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo - does not exist. - [`~utils.RevisionNotFoundError`]: - If revision is not found (error 404) on the repo. - - Examples: - - Get information about files on a repo. - ```py - >>> from huggingface_hub import list_files_info - >>> files_info = list_files_info("lysandre/arxiv-nlp", ["README.md", "config.json"]) - >>> files_info - - >>> list(files_info) - [ - RepoFile: {"blob_id": "43bd404b159de6fba7c2f4d3264347668d43af25", "lfs": None, "rfilename": "README.md", "size": 391}, - RepoFile: {"blob_id": "2f9618c3a19b9a61add74f70bfb121335aeef666", "lfs": None, "rfilename": "config.json", "size": 554}, - ] - ``` - - Get even more information about files on a repo (last commit and security scan results) - ```py - >>> from huggingface_hub import list_files_info - >>> files_info = list_files_info("prompthero/openjourney-v4", expand=True) - >>> list(files_info) - [ - RepoFile: { - {'blob_id': '815004af1a321eaed1d93f850b2e94b0c0678e42', - 'lastCommit': {'date': '2023-03-21T09:05:27.000Z', - 'id': '47b62b20b20e06b9de610e840282b7e6c3d51190', - 'title': 'Upload diffusers weights (#48)'}, - 'lfs': None, - 'rfilename': 'model_index.json', - 'security': {'avScan': {'virusFound': False, 'virusNames': None}, - 'blobId': '815004af1a321eaed1d93f850b2e94b0c0678e42', - 'name': 'model_index.json', - 'pickleImportScan': None, - 'repositoryId': 'models/prompthero/openjourney-v4', - 'safe': True}, - 'size': 584} - }, - RepoFile: { - {'blob_id': 'd2343d78b33ac03dade1d525538b02b130d0a3a0', - 'lastCommit': {'date': '2023-03-21T09:05:27.000Z', - 'id': '47b62b20b20e06b9de610e840282b7e6c3d51190', - 'title': 'Upload diffusers weights (#48)'}, - 'lfs': {'pointer_size': 134, - 'sha256': 'dcf4507d99b88db73f3916e2a20169fe74ada6b5582e9af56cfa80f5f3141765', - 'size': 334711857}, - 'rfilename': 'vae/diffusion_pytorch_model.bin', - 'security': {'avScan': {'virusFound': False, 'virusNames': None}, - 'blobId': 'd2343d78b33ac03dade1d525538b02b130d0a3a0', - 'name': 'vae/diffusion_pytorch_model.bin', - 'pickleImportScan': {'highestSafetyLevel': 'innocuous', - 'imports': [{'module': 'torch._utils', - 'name': '_rebuild_tensor_v2', - 'safety': 'innocuous'}, - {'module': 'collections', 'name': 'OrderedDict', 'safety': 'innocuous'}, - {'module': 'torch', 'name': 'FloatStorage', 'safety': 'innocuous'}]}, - 'repositoryId': 'models/prompthero/openjourney-v4', - 'safe': True}, - 'size': 334711857} - }, - (...) - ] - ``` - - List LFS files from the "vae/" folder in "stabilityai/stable-diffusion-2" repository. - - ```py - >>> from huggingface_hub import list_files_info - >>> [info.rfilename for info in list_files_info("stabilityai/stable-diffusion-2", "vae") if info.lfs is not None] - ['vae/diffusion_pytorch_model.bin', 'vae/diffusion_pytorch_model.safetensors'] - ``` - - List all files on a repo. 
- ```py - >>> from huggingface_hub import list_files_info - >>> [info.rfilename for info in list_files_info("glue", repo_type="dataset")] - ['.gitattributes', 'README.md', 'dataset_infos.json', 'glue.py'] - ``` - """ - repo_type = repo_type or REPO_TYPE_MODEL - revision = quote(revision, safe="") if revision is not None else DEFAULT_REVISION - headers = self._build_hf_headers(token=token) - - def _format_as_repo_file(info: Dict) -> RepoFile: - # Quick alias very specific to the server return type of /paths-info and /tree endpoints. Let's keep this - # logic here. - rfilename = info.pop("path") - size = info.pop("size") - blobId = info.pop("oid") - lfs = info.pop("lfs", None) - info.pop("type", None) # "file" or "folder" -> not needed in practice since we know it's a file - if lfs is not None: - lfs = BlobLfsInfo(size=lfs["size"], sha256=lfs["oid"], pointer_size=lfs["pointerSize"]) - return RepoFile(rfilename=rfilename, size=size, blobId=blobId, lfs=lfs, **info) - - folder_paths = [] - if paths is None: - # `paths` is not provided => list all files from the repo - folder_paths.append("") - elif paths == []: - # corner case: server would return a 400 error if `paths` is an empty list. Let's return early. - return - else: - # `paths` is provided => get info about those - response = get_session().post( - f"{self.endpoint}/api/{repo_type}s/{repo_id}/paths-info/{revision}", - data={ - "paths": paths if isinstance(paths, list) else [paths], - "expand": True, - }, - headers=headers, - ) - hf_raise_for_status(response) - paths_info = response.json() - - # List top-level files first - for path_info in paths_info: - if path_info["type"] == "file": - yield _format_as_repo_file(path_info) - else: - folder_paths.append(path_info["path"]) - - # List files in subdirectories - for path in folder_paths: - encoded_path = "/" + quote(path, safe="") if path else "" - tree_url = f"{self.endpoint}/api/{repo_type}s/{repo_id}/tree/{revision}{encoded_path}" - for subpath_info in paginate(path=tree_url, headers=headers, params={"recursive": True, "expand": expand}): - if subpath_info["type"] == "file": - yield _format_as_repo_file(subpath_info) - - @_deprecate_arguments(version="0.17", deprecated_args=["timeout"], custom_message="timeout is not used anymore.") - @validate_hf_hub_args - def list_repo_files( - self, - repo_id: str, - *, - revision: Optional[str] = None, - repo_type: Optional[str] = None, - timeout: Optional[float] = None, - token: Optional[Union[bool, str]] = None, - ) -> List[str]: - """ - Get the list of files in a given repo. - - Args: - repo_id (`str`): - A namespace (user or an organization) and a repo name separated by a `/`. - revision (`str`, *optional*): - The revision of the model repository from which to get the information. - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if uploading to a dataset or space, `None` or `"model"` if uploading to - a model. Default is `None`. - token (`bool` or `str`, *optional*): - A valid authentication token (see https://huggingface.co/settings/token). If `None` or `True` and - machine is logged in (through `huggingface-cli login` or [`~huggingface_hub.login`]), token will be - retrieved from the cache. If `False`, token is not sent in the request header. - - Returns: - `List[str]`: the list of files in a given repository. 
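Example (a minimal sketch; the repo is the same one used in the [`list_files_info`] examples above):

```python
>>> from huggingface_hub import HfApi
>>> files = HfApi().list_repo_files("lysandre/arxiv-nlp")
>>> "config.json" in files
True
```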
- """ - return [ - f.rfilename - for f in self.list_files_info( - repo_id=repo_id, paths=None, revision=revision, repo_type=repo_type, token=token - ) - ] - - @validate_hf_hub_args - def list_repo_refs( - self, - repo_id: str, - *, - repo_type: Optional[str] = None, - token: Optional[Union[bool, str]] = None, - ) -> GitRefs: - """ - Get the list of refs of a given repo (both tags and branches). - - Args: - repo_id (`str`): - A namespace (user or an organization) and a repo name separated - by a `/`. - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if listing refs from a dataset or a Space, - `None` or `"model"` if listing from a model. Default is `None`. - token (`bool` or `str`, *optional*): - A valid authentication token (see https://huggingface.co/settings/token). - If `None` or `True` and machine is logged in (through `huggingface-cli login` - or [`~huggingface_hub.login`]), token will be retrieved from the cache. - If `False`, token is not sent in the request header. - - Example: - ```py - >>> from huggingface_hub import HfApi - >>> api = HfApi() - >>> api.list_repo_refs("gpt2") - GitRefs(branches=[GitRefInfo(name='main', ref='refs/heads/main', target_commit='e7da7f221d5bf496a48136c0cd264e630fe9fcc8')], converts=[], tags=[]) - - >>> api.list_repo_refs("bigcode/the-stack", repo_type='dataset') - GitRefs( - branches=[ - GitRefInfo(name='main', ref='refs/heads/main', target_commit='18edc1591d9ce72aa82f56c4431b3c969b210ae3'), - GitRefInfo(name='v1.1.a1', ref='refs/heads/v1.1.a1', target_commit='f9826b862d1567f3822d3d25649b0d6d22ace714') - ], - converts=[], - tags=[ - GitRefInfo(name='v1.0', ref='refs/tags/v1.0', target_commit='c37a8cd1e382064d8aced5e05543c5f7753834da') - ] - ) - ``` - - Returns: - [`GitRefs`]: object containing all information about branches and tags for a - repo on the Hub. - """ - repo_type = repo_type or REPO_TYPE_MODEL - response = get_session().get( - f"{self.endpoint}/api/{repo_type}s/{repo_id}/refs", headers=self._build_hf_headers(token=token) - ) - hf_raise_for_status(response) - data = response.json() - return GitRefs( - branches=[GitRefInfo(item) for item in data["branches"]], - converts=[GitRefInfo(item) for item in data["converts"]], - tags=[GitRefInfo(item) for item in data["tags"]], - ) - - @validate_hf_hub_args - def list_repo_commits( - self, - repo_id: str, - *, - repo_type: Optional[str] = None, - token: Optional[Union[bool, str]] = None, - revision: Optional[str] = None, - formatted: bool = False, - ) -> List[GitCommitInfo]: - """ - Get the list of commits of a given revision for a repo on the Hub. - - Commits are sorted by date (last commit first). - - Args: - repo_id (`str`): - A namespace (user or an organization) and a repo name separated by a `/`. - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if listing commits from a dataset or a Space, `None` or `"model"` if - listing from a model. Default is `None`. - token (`bool` or `str`, *optional*): - A valid authentication token (see https://huggingface.co/settings/token). - If `None` or `True` and machine is logged in (through `huggingface-cli login` - or [`~huggingface_hub.login`]), token will be retrieved from the cache. - If `False`, token is not sent in the request header. - revision (`str`, *optional*): - The git revision to commit from. Defaults to the head of the `"main"` branch. - formatted (`bool`): - Whether to return the HTML-formatted title and description of the commits. Defaults to False. 
- - Example: - ```py - >>> from huggingface_hub import HfApi - >>> api = HfApi() - - # Commits are sorted by date (last commit first) - >>> initial_commit = api.list_repo_commits("gpt2")[-1] - - # Initial commit is always a system commit containing the `.gitattributes` file. - >>> initial_commit - GitCommitInfo( - commit_id='9b865efde13a30c13e0a33e536cf3e4a5a9d71d8', - authors=['system'], - created_at=datetime.datetime(2019, 2, 18, 10, 36, 15, tzinfo=datetime.timezone.utc), - title='initial commit', - message='', - formatted_title=None, - formatted_message=None - ) - - # Create an empty branch by deriving from initial commit - >>> api.create_branch("gpt2", "new_empty_branch", revision=initial_commit.commit_id) - ``` - - Returns: - List[[`GitCommitInfo`]]: list of objects containing information about the commits for a repo on the Hub. - - Raises: - [`~utils.RepositoryNotFoundError`]: - If repository is not found (error 404): wrong repo_id/repo_type, private but not authenticated or repo - does not exist. - [`~utils.RevisionNotFoundError`]: - If revision is not found (error 404) on the repo. - """ - repo_type = repo_type or REPO_TYPE_MODEL - revision = quote(revision, safe="") if revision is not None else DEFAULT_REVISION - - # Paginate over results and return the list of commits. - return [ - GitCommitInfo(item) - for item in paginate( - f"{self.endpoint}/api/{repo_type}s/{repo_id}/commits/{revision}", - headers=self._build_hf_headers(token=token), - params={"expand[]": "formatted"} if formatted else {}, - ) - ] - - @validate_hf_hub_args - def create_repo( - self, - repo_id: str, - *, - token: Optional[str] = None, - private: bool = False, - repo_type: Optional[str] = None, - exist_ok: bool = False, - space_sdk: Optional[str] = None, - space_hardware: Optional[str] = None, - ) -> RepoUrl: - """Create an empty repo on the HuggingFace Hub. - - Args: - repo_id (`str`): - A namespace (user or an organization) and a repo name separated - by a `/`. - token (`str`, *optional*): - An authentication token (See https://huggingface.co/settings/token) - private (`bool`, *optional*, defaults to `False`): - Whether the model repo should be private. - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if uploading to a dataset or - space, `None` or `"model"` if uploading to a model. Default is - `None`. - exist_ok (`bool`, *optional*, defaults to `False`): - If `True`, do not raise an error if repo already exists. - space_sdk (`str`, *optional*): - Choice of SDK to use if repo_type is "space". Can be "streamlit", "gradio", "docker", or "static". - space_hardware (`SpaceHardware` or `str`, *optional*): - Choice of Hardware if repo_type is "space". See [`SpaceHardware`] for a complete list. - - Returns: - [`RepoUrl`]: URL to the newly created repo. Value is a subclass of `str` containing - attributes like `endpoint`, `repo_type` and `repo_id`. - """ - organization, name = repo_id.split("/") if "/" in repo_id else (None, repo_id) - - path = f"{self.endpoint}/api/repos/create" - - if repo_type not in REPO_TYPES: - raise ValueError("Invalid repo type") - - json = {"name": name, "organization": organization, "private": private} - if repo_type is not None: - json["type"] = repo_type - if repo_type == "space": - if space_sdk is None: - raise ValueError( - "No space_sdk provided. `create_repo` expects space_sdk to be one" - f" of {SPACES_SDK_TYPES} when repo_type is 'space'`" - ) - if space_sdk not in SPACES_SDK_TYPES: - raise ValueError(f"Invalid space_sdk. 
Please choose one of {SPACES_SDK_TYPES}.") - json["sdk"] = space_sdk - - if space_sdk is not None and repo_type != "space": - warnings.warn("Ignoring provided space_sdk because repo_type is not 'space'.") - - if space_hardware is not None: - if repo_type == "space": - json["hardware"] = space_hardware - else: - warnings.warn("Ignoring provided space_hardware because repo_type is not 'space'.") - - if getattr(self, "_lfsmultipartthresh", None): - # Testing purposes only. - # See https://github.com/huggingface/huggingface_hub/pull/733/files#r820604472 - json["lfsmultipartthresh"] = self._lfsmultipartthresh # type: ignore - headers = self._build_hf_headers(token=token, is_write_action=True) - r = get_session().post(path, headers=headers, json=json) - - try: - hf_raise_for_status(r) - except HTTPError as err: - if exist_ok and err.response.status_code == 409: - # Repo already exists and `exist_ok=True` - pass - elif exist_ok and err.response.status_code == 403: - # No write permission on the namespace but repo might already exist - try: - self.repo_info(repo_id=repo_id, repo_type=repo_type, token=token) - if repo_type is None or repo_type == REPO_TYPE_MODEL: - return RepoUrl(f"{self.endpoint}/{repo_id}") - return RepoUrl(f"{self.endpoint}/{repo_type}/{repo_id}") - except HfHubHTTPError: - raise - else: - raise - - d = r.json() - return RepoUrl(d["url"], endpoint=self.endpoint) - - @validate_hf_hub_args - def delete_repo( - self, - repo_id: str, - *, - token: Optional[str] = None, - repo_type: Optional[str] = None, - ): - """ - Delete a repo from the HuggingFace Hub. CAUTION: this is irreversible. - - Args: - repo_id (`str`): - A namespace (user or an organization) and a repo name separated - by a `/`. - token (`str`, *optional*): - An authentication token (See https://huggingface.co/settings/token) - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if uploading to a dataset or - space, `None` or `"model"` if uploading to a model. - - - - Raises the following errors: - - - [`~utils.RepositoryNotFoundError`] - If the repository to download from cannot be found. This may be because it doesn't exist, - or because it is set to `private` and you do not have access. - - - """ - organization, name = repo_id.split("/") if "/" in repo_id else (None, repo_id) - - path = f"{self.endpoint}/api/repos/delete" - - if repo_type not in REPO_TYPES: - raise ValueError("Invalid repo type") - - json = {"name": name, "organization": organization} - if repo_type is not None: - json["type"] = repo_type - - headers = self._build_hf_headers(token=token, is_write_action=True) - r = get_session().delete(path, headers=headers, json=json) - hf_raise_for_status(r) - - @validate_hf_hub_args - def update_repo_visibility( - self, - repo_id: str, - private: bool = False, - *, - token: Optional[str] = None, - organization: Optional[str] = None, - repo_type: Optional[str] = None, - name: Optional[str] = None, - ) -> Dict[str, bool]: - """Update the visibility setting of a repository. - - Args: - repo_id (`str`, *optional*): - A namespace (user or an organization) and a repo name separated - by a `/`. - private (`bool`, *optional*, defaults to `False`): - Whether the model repo should be private. - token (`str`, *optional*): - An authentication token (See https://huggingface.co/settings/token) - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if uploading to a dataset or - space, `None` or `"model"` if uploading to a model. Default is - `None`. - - Returns: - The HTTP response in json. 
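Example (an illustrative sketch; the repo id is hypothetical and the call requires write access to it):

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.update_repo_visibility(repo_id="username/my-model", private=True)
```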
- - - - Raises the following errors: - - - [`~utils.RepositoryNotFoundError`] - If the repository to download from cannot be found. This may be because it doesn't exist, - or because it is set to `private` and you do not have access. - - - """ - if repo_type not in REPO_TYPES: - raise ValueError("Invalid repo type") - - organization, name = repo_id.split("/") if "/" in repo_id else (None, repo_id) - - if organization is None: - namespace = self.whoami(token)["name"] - else: - namespace = organization - - if repo_type is None: - repo_type = REPO_TYPE_MODEL # default repo type - - r = get_session().put( - url=f"{self.endpoint}/api/{repo_type}s/{namespace}/{name}/settings", - headers=self._build_hf_headers(token=token, is_write_action=True), - json={"private": private}, - ) - hf_raise_for_status(r) - return r.json() - - def move_repo( - self, - from_id: str, - to_id: str, - *, - repo_type: Optional[str] = None, - token: Optional[str] = None, - ): - """ - Moving a repository from namespace1/repo_name1 to namespace2/repo_name2 - - Note there are certain limitations. For more information about moving - repositories, please see - https://hf.co/docs/hub/repositories-settings#renaming-or-transferring-a-repo. - - Args: - from_id (`str`): - A namespace (user or an organization) and a repo name separated - by a `/`. Original repository identifier. - to_id (`str`): - A namespace (user or an organization) and a repo name separated - by a `/`. Final repository identifier. - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if uploading to a dataset or - space, `None` or `"model"` if uploading to a model. Default is - `None`. - token (`str`, *optional*): - An authentication token (See https://huggingface.co/settings/token) - - - - Raises the following errors: - - - [`~utils.RepositoryNotFoundError`] - If the repository to download from cannot be found. This may be because it doesn't exist, - or because it is set to `private` and you do not have access. - - - """ - if len(from_id.split("/")) != 2: - raise ValueError(f"Invalid repo_id: {from_id}. It should have a namespace (:namespace:/:repo_name:)") - - if len(to_id.split("/")) != 2: - raise ValueError(f"Invalid repo_id: {to_id}. It should have a namespace (:namespace:/:repo_name:)") - - if repo_type is None: - repo_type = REPO_TYPE_MODEL # Hub won't accept `None`. - - json = {"fromRepo": from_id, "toRepo": to_id, "type": repo_type} - - path = f"{self.endpoint}/api/repos/move" - headers = self._build_hf_headers(token=token, is_write_action=True) - r = get_session().post(path, headers=headers, json=json) - try: - hf_raise_for_status(r) - except HfHubHTTPError as e: - e.append_to_message( - "\nFor additional documentation please see" - " https://hf.co/docs/hub/repositories-settings#renaming-or-transferring-a-repo." - ) - raise - - @overload - def create_commit( # type: ignore - self, - repo_id: str, - operations: Iterable[CommitOperation], - *, - commit_message: str, - commit_description: Optional[str] = None, - token: Optional[str] = None, - repo_type: Optional[str] = None, - revision: Optional[str] = None, - create_pr: Optional[bool] = None, - num_threads: int = 5, - parent_commit: Optional[str] = None, - run_as_future: Literal[False] = ..., - ) -> CommitInfo: - ... 
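- 
- # A minimal end-to-end sketch of the repo-management helpers defined above
- # (`create_repo`, `move_repo`, `delete_repo`); the repo ids are assumptions
- # used for illustration only:
- #
- # >>> from huggingface_hub import HfApi
- # >>> api = HfApi()
- # >>> url = api.create_repo("username/scratch-model", private=True)
- # >>> api.move_repo(from_id="username/scratch-model", to_id="username/scratch-model-v2")
- # >>> api.delete_repo("username/scratch-model-v2")  # irreversible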
- - @overload - def create_commit( - self, - repo_id: str, - operations: Iterable[CommitOperation], - *, - commit_message: str, - commit_description: Optional[str] = None, - token: Optional[str] = None, - repo_type: Optional[str] = None, - revision: Optional[str] = None, - create_pr: Optional[bool] = None, - num_threads: int = 5, - parent_commit: Optional[str] = None, - run_as_future: Literal[True] = ..., - ) -> Future[CommitInfo]: - ... - - @validate_hf_hub_args - @future_compatible - def create_commit( - self, - repo_id: str, - operations: Iterable[CommitOperation], - *, - commit_message: str, - commit_description: Optional[str] = None, - token: Optional[str] = None, - repo_type: Optional[str] = None, - revision: Optional[str] = None, - create_pr: Optional[bool] = None, - num_threads: int = 5, - parent_commit: Optional[str] = None, - run_as_future: bool = False, - ) -> Union[CommitInfo, Future[CommitInfo]]: - """ - Creates a commit in the given repo, deleting & uploading files as needed. - - Args: - repo_id (`str`): - The repository in which the commit will be created, for example: - `"username/custom_transformers"` - - operations (`Iterable` of [`~hf_api.CommitOperation`]): - An iterable of operations to include in the commit, either: - - - [`~hf_api.CommitOperationAdd`] to upload a file - - [`~hf_api.CommitOperationDelete`] to delete a file - - [`~hf_api.CommitOperationCopy`] to copy a file - - commit_message (`str`): - The summary (first line) of the commit that will be created. - - commit_description (`str`, *optional*): - The description of the commit that will be created - - token (`str`, *optional*): - Authentication token, obtained with `HfApi.login` method. Will - default to the stored token. - - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if uploading to a dataset or - space, `None` or `"model"` if uploading to a model. Default is - `None`. - - revision (`str`, *optional*): - The git revision to commit from. Defaults to the head of the `"main"` branch. - - create_pr (`boolean`, *optional*): - Whether or not to create a Pull Request with that commit. Defaults to `False`. - If `revision` is not set, PR is opened against the `"main"` branch. If - `revision` is set and is a branch, PR is opened against this branch. If - `revision` is set and is not a branch name (example: a commit oid), an - `RevisionNotFoundError` is returned by the server. - - num_threads (`int`, *optional*): - Number of concurrent threads for uploading files. Defaults to 5. - Setting it to 2 means at most 2 files will be uploaded concurrently. - - parent_commit (`str`, *optional*): - The OID / SHA of the parent commit, as a hexadecimal string. - Shorthands (7 first characters) are also supported. If specified and `create_pr` is `False`, - the commit will fail if `revision` does not point to `parent_commit`. If specified and `create_pr` - is `True`, the pull request will be created from `parent_commit`. Specifying `parent_commit` - ensures the repo has not changed before committing the changes, and can be especially useful - if the repo is updated / committed to concurrently. - run_as_future (`bool`, *optional*): - Whether or not to run this method in the background. Background jobs are run sequentially without - blocking the main thread. Passing `run_as_future=True` will return a [Future](https://docs.python.org/3/library/concurrent.futures.html#future-objects) - object. Defaults to `False`. 
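- 
- Example (a minimal sketch; the repo id and file paths are assumptions):
- ```python
- >>> from huggingface_hub import HfApi, CommitOperationAdd, CommitOperationDelete
- >>> api = HfApi()
- >>> api.create_commit(
- ...     repo_id="username/my-model",
- ...     operations=[
- ...         CommitOperationAdd(path_in_repo="weights.bin", path_or_fileobj="./weights.bin"),
- ...         CommitOperationDelete(path_in_repo="outdated_weights.bin"),
- ...     ],
- ...     commit_message="Update weights and remove outdated file",
- ... )
- ```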
- - Returns: - [`CommitInfo`] or `Future`: - Instance of [`CommitInfo`] containing information about the newly created commit (commit hash, commit - url, pr url, commit message,...). If `run_as_future=True` is passed, returns a Future object which will - contain the result when executed. - - Raises: - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - If commit message is empty. - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - If parent commit is not a valid commit OID. - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - If the Hub API returns an HTTP 400 error (bad request) - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - If `create_pr` is `True` and revision is neither `None` nor `"main"`. - [`~utils.RepositoryNotFoundError`]: - If repository is not found (error 404): wrong repo_id/repo_type, private - but not authenticated or repo does not exist. - - - - `create_commit` assumes that the repo already exists on the Hub. If you get a - Client error 404, please make sure you are authenticated and that `repo_id` and - `repo_type` are set correctly. If repo does not exist, create it first using - [`~hf_api.create_repo`]. - - - - - - `create_commit` is limited to 25k LFS files and a 1GB payload for regular files. - - - """ - _CREATE_COMMIT_NO_REPO_ERROR_MESSAGE = ( - "\nNote: Creating a commit assumes that the repo already exists on the" - " Huggingface Hub. Please use `create_repo` if it's not the case." - ) - - if parent_commit is not None and not REGEX_COMMIT_OID.fullmatch(parent_commit): - raise ValueError( - f"`parent_commit` is not a valid commit OID. It must match the following regex: {REGEX_COMMIT_OID}" - ) - - if commit_message is None or len(commit_message) == 0: - raise ValueError("`commit_message` can't be empty, please pass a value.") - - commit_description = commit_description if commit_description is not None else "" - repo_type = repo_type if repo_type is not None else REPO_TYPE_MODEL - if repo_type not in REPO_TYPES: - raise ValueError(f"Invalid repo type, must be one of {REPO_TYPES}") - revision = quote(revision, safe="") if revision is not None else DEFAULT_REVISION - create_pr = create_pr if create_pr is not None else False - - operations = list(operations) - additions = [op for op in operations if isinstance(op, CommitOperationAdd)] - copies = [op for op in operations if isinstance(op, CommitOperationCopy)] - nb_additions = len(additions) - nb_copies = len(copies) - nb_deletions = len(operations) - nb_additions - nb_copies - - logger.debug( - f"About to commit to the hub: {len(additions)} addition(s), {len(copies)} copie(s) and" - f" {nb_deletions} deletion(s)." 
- ) - - # If updating twice the same file or update then delete a file in a single commit - warn_on_overwriting_operations(operations) - - try: - upload_modes = fetch_upload_modes( - additions=additions, - repo_type=repo_type, - repo_id=repo_id, - token=token or self.token, - revision=revision, - endpoint=self.endpoint, - create_pr=create_pr, - ) - except RepositoryNotFoundError as e: - e.append_to_message(_CREATE_COMMIT_NO_REPO_ERROR_MESSAGE) - raise - files_to_copy = fetch_lfs_files_to_copy( - copies=copies, - repo_type=repo_type, - repo_id=repo_id, - token=token or self.token, - revision=revision, - endpoint=self.endpoint, - ) - upload_lfs_files( - additions=[addition for addition in additions if upload_modes[addition.path_in_repo] == "lfs"], - repo_type=repo_type, - repo_id=repo_id, - token=token or self.token, - endpoint=self.endpoint, - num_threads=num_threads, - ) - commit_payload = prepare_commit_payload( - operations=operations, - upload_modes=upload_modes, - files_to_copy=files_to_copy, - commit_message=commit_message, - commit_description=commit_description, - parent_commit=parent_commit, - ) - commit_url = f"{self.endpoint}/api/{repo_type}s/{repo_id}/commit/{revision}" - - def _payload_as_ndjson() -> Iterable[bytes]: - for item in commit_payload: - yield json.dumps(item).encode() - yield b"\n" - - headers = { - # See https://github.com/huggingface/huggingface_hub/issues/1085#issuecomment-1265208073 - "Content-Type": "application/x-ndjson", - **self._build_hf_headers(token=token, is_write_action=True), - } - data = b"".join(_payload_as_ndjson()) - params = {"create_pr": "1"} if create_pr else None - - try: - commit_resp = get_session().post(url=commit_url, headers=headers, data=data, params=params) - hf_raise_for_status(commit_resp, endpoint_name="commit") - except RepositoryNotFoundError as e: - e.append_to_message(_CREATE_COMMIT_NO_REPO_ERROR_MESSAGE) - raise - except EntryNotFoundError as e: - if nb_deletions > 0 and "A file with this name doesn't exist" in str(e): - e.append_to_message( - "\nMake sure to differentiate file and folder paths in delete" - " operations with a trailing '/' or using `is_folder=True/False`." - ) - raise - - commit_data = commit_resp.json() - return CommitInfo( - commit_url=commit_data["commitUrl"], - commit_message=commit_message, - commit_description=commit_description, - oid=commit_data["commitOid"], - pr_url=commit_data["pullRequestUrl"] if create_pr else None, - ) - - @experimental - @validate_hf_hub_args - def create_commits_on_pr( - self, - *, - repo_id: str, - addition_commits: List[List[CommitOperationAdd]], - deletion_commits: List[List[CommitOperationDelete]], - commit_message: str, - commit_description: Optional[str] = None, - token: Optional[str] = None, - repo_type: Optional[str] = None, - merge_pr: bool = True, - num_threads: int = 5, # TODO: use to multithread uploads - verbose: bool = False, - ) -> str: - """Push changes to the Hub in multiple commits. - - Commits are pushed to a draft PR branch. If the upload fails or gets interrupted, it can be resumed. Progress - is tracked in the PR description. At the end of the process, the PR is set as open and the title is updated to - match the initial commit message. If `merge_pr=True` is passed, the PR is merged automatically. - - All deletion commits are pushed first, followed by the addition commits. The order of the commits is not - guaranteed as we might implement parallel commits in the future. Be sure that your are not updating several - times the same file. 
- - - - `create_commits_on_pr` is experimental. Its API and behavior is subject to change in the future without prior notice. - - - - Args: - repo_id (`str`): - The repository in which the commits will be pushed. Example: `"username/my-cool-model"`. - - addition_commits (`List` of `List` of [`~hf_api.CommitOperationAdd`]): - A list containing lists of [`~hf_api.CommitOperationAdd`]. Each sublist will result in a commit on the - PR. - - deletion_commits - A list containing lists of [`~hf_api.CommitOperationDelete`]. Each sublist will result in a commit on - the PR. Deletion commits are pushed before addition commits. - - commit_message (`str`): - The summary (first line) of the commit that will be created. Will also be the title of the PR. - - commit_description (`str`, *optional*): - The description of the commit that will be created. The description will be added to the PR. - - token (`str`, *optional*): - Authentication token, obtained with `HfApi.login` method. Will default to the stored token. - - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if uploading to a dataset or space, `None` or `"model"` if uploading to - a model. Default is `None`. - - merge_pr (`bool`): - If set to `True`, the Pull Request is merged at the end of the process. Defaults to `True`. - - num_threads (`int`, *optional*): - Number of concurrent threads for uploading files. Defaults to 5. - - verbose (`bool`): - If set to `True`, process will run on verbose mode i.e. print information about the ongoing tasks. - Defaults to `False`. - - Returns: - `str`: URL to the created PR. - - Example: - ```python - >>> from huggingface_hub import HfApi, plan_multi_commits - >>> addition_commits, deletion_commits = plan_multi_commits( - ... operations=[ - ... CommitOperationAdd(...), - ... CommitOperationAdd(...), - ... CommitOperationDelete(...), - ... CommitOperationDelete(...), - ... CommitOperationAdd(...), - ... ], - ... ) - >>> HfApi().create_commits_on_pr( - ... repo_id="my-cool-model", - ... addition_commits=addition_commits, - ... deletion_commits=deletion_commits, - ... (...) - ... verbose=True, - ... ) - ``` - - Raises: - [`MultiCommitException`]: - If an unexpected issue occur in the process: empty commits, unexpected commits in a PR, unexpected PR - description, etc. - - - - `create_commits_on_pr` assumes that the repo already exists on the Hub. If you get a Client error 404, please - make sure you are authenticated and that `repo_id` and `repo_type` are set correctly. If repo does not exist, - create it first using [`~hf_api.create_repo`]. - - - """ - logger = logging.get_logger(__name__ + ".create_commits_on_pr") - if verbose: - logger.setLevel("INFO") - - # 1. Get strategy ID - logger.info( - f"Will create {len(deletion_commits)} deletion commit(s) and {len(addition_commits)} addition commit(s)," - f" totalling {sum(len(ops) for ops in addition_commits+deletion_commits)} atomic operations." - ) - strategy = MultiCommitStrategy( - addition_commits=[MultiCommitStep(operations=operations) for operations in addition_commits], # type: ignore - deletion_commits=[MultiCommitStep(operations=operations) for operations in deletion_commits], # type: ignore - ) - logger.info(f"Multi-commits strategy with ID {strategy.id}.") - - # 2. 
Get or create a PR with this strategy ID - for discussion in self.get_repo_discussions(repo_id=repo_id, repo_type=repo_type, token=token): - # search for a draft PR with strategy ID - if discussion.is_pull_request and discussion.status == "draft" and strategy.id in discussion.title: - pr = self.get_discussion_details( - repo_id=repo_id, discussion_num=discussion.num, repo_type=repo_type, token=token - ) - logger.info(f"PR already exists: {pr.url}. Will resume process where it stopped.") - break - else: - # did not find a PR matching the strategy ID - pr = multi_commit_create_pull_request( - self, - repo_id=repo_id, - commit_message=commit_message, - commit_description=commit_description, - strategy=strategy, - token=token, - repo_type=repo_type, - ) - logger.info(f"New PR created: {pr.url}") - - # 3. Parse PR description to check consistency with strategy (e.g. same commits are scheduled) - for event in pr.events: - if isinstance(event, DiscussionComment): - pr_comment = event - break - else: - raise MultiCommitException(f"PR #{pr.num} must have at least 1 comment") - - description_commits = multi_commit_parse_pr_description(pr_comment.content) - if len(description_commits) != len(strategy.all_steps): - raise MultiCommitException( - f"Corrupted multi-commit PR #{pr.num}: got {len(description_commits)} steps in" - f" description but {len(strategy.all_steps)} in strategy." - ) - for step_id in strategy.all_steps: - if step_id not in description_commits: - raise MultiCommitException( - f"Corrupted multi-commit PR #{pr.num}: expected step {step_id} but didn't find" - f" it (have {', '.join(description_commits)})." - ) - - # 4. Retrieve commit history (and check consistency) - commits_on_main_branch = { - commit.commit_id - for commit in self.list_repo_commits( - repo_id=repo_id, repo_type=repo_type, token=token, revision=DEFAULT_REVISION - ) - } - pr_commits = [ - commit - for commit in self.list_repo_commits( - repo_id=repo_id, repo_type=repo_type, token=token, revision=pr.git_reference - ) - if commit.commit_id not in commits_on_main_branch - ] - if len(pr_commits) > 0: - logger.info(f"Found {len(pr_commits)} existing commits on the PR.") - - # At this point `pr_commits` is a list of commits pushed to the PR. We expect all of these commits (if any) to have - # a step_id as title. We raise exception if an unexpected commit has been pushed. - if len(pr_commits) > len(strategy.all_steps): - raise MultiCommitException( - f"Corrupted multi-commit PR #{pr.num}: scheduled {len(strategy.all_steps)} steps but" - f" {len(pr_commits)} commits have already been pushed to the PR." - ) - - # Check which steps are already completed - remaining_additions = {step.id: step for step in strategy.addition_commits} - remaining_deletions = {step.id: step for step in strategy.deletion_commits} - for commit in pr_commits: - if commit.title in remaining_additions: - step = remaining_additions.pop(commit.title) - step.completed = True - elif commit.title in remaining_deletions: - step = remaining_deletions.pop(commit.title) - step.completed = True - - if len(remaining_deletions) > 0 and len(remaining_additions) < len(strategy.addition_commits): - raise MultiCommitException( - f"Corrupted multi-commit PR #{pr.num}: some addition commits have already been pushed to the PR but" - " deletion commits are not all completed yet." 
- ) - nb_remaining = len(remaining_deletions) + len(remaining_additions) - if len(pr_commits) > 0: - logger.info( - f"{nb_remaining} commits remaining ({len(remaining_deletions)} deletion commits and" - f" {len(remaining_additions)} addition commits)" - ) - - # 5. Push remaining commits to the PR + update description - # TODO: multi-thread this - for step in list(remaining_deletions.values()) + list(remaining_additions.values()): - # Push new commit - self.create_commit( - repo_id=repo_id, - repo_type=repo_type, - token=token, - commit_message=step.id, - revision=pr.git_reference, - num_threads=num_threads, - operations=step.operations, - create_pr=False, - ) - step.completed = True - nb_remaining -= 1 - logger.info(f" step {step.id} completed (still {nb_remaining} to go).") - - # Update PR description - self.edit_discussion_comment( - repo_id=repo_id, - repo_type=repo_type, - token=token, - discussion_num=pr.num, - comment_id=pr_comment.id, - new_content=multi_commit_generate_comment( - commit_message=commit_message, commit_description=commit_description, strategy=strategy - ), - ) - logger.info("All commits have been pushed.") - - # 6. Update PR (and merge) - self.rename_discussion( - repo_id=repo_id, - repo_type=repo_type, - token=token, - discussion_num=pr.num, - new_title=commit_message, - ) - self.change_discussion_status( - repo_id=repo_id, - repo_type=repo_type, - token=token, - discussion_num=pr.num, - new_status="open", - comment=MULTI_COMMIT_PR_COMPLETION_COMMENT_TEMPLATE, - ) - logger.info("PR is now open for reviews.") - - if merge_pr: # User don't want a PR => merge it - try: - self.merge_pull_request( - repo_id=repo_id, - repo_type=repo_type, - token=token, - discussion_num=pr.num, - comment=MULTI_COMMIT_PR_CLOSING_COMMENT_TEMPLATE, - ) - logger.info("PR has been automatically merged (`merge_pr=True` was passed).") - except BadRequestError as error: - if error.server_message is not None and "no associated changes" in error.server_message: - # PR cannot be merged as no changes are associated. We close the PR without merging with a comment to - # explain. - self.change_discussion_status( - repo_id=repo_id, - repo_type=repo_type, - token=token, - discussion_num=pr.num, - comment=MULTI_COMMIT_PR_CLOSE_COMMENT_FAILURE_NO_CHANGES_TEMPLATE, - new_status="closed", - ) - logger.warning("Couldn't merge the PR: no associated changes.") - else: - # PR cannot be merged for another reason (conflicting files for example). We comment the PR to explain - # and re-raise the exception. - self.comment_discussion( - repo_id=repo_id, - repo_type=repo_type, - token=token, - discussion_num=pr.num, - comment=MULTI_COMMIT_PR_CLOSE_COMMENT_FAILURE_BAD_REQUEST_TEMPLATE.format( - error_message=error.server_message - ), - ) - raise MultiCommitException( - f"Couldn't merge Pull Request in multi-commit: {error.server_message}" - ) from error - - return pr.url - - @overload - def upload_file( # type: ignore - self, - *, - path_or_fileobj: Union[str, Path, bytes, BinaryIO], - path_in_repo: str, - repo_id: str, - token: Optional[str] = None, - repo_type: Optional[str] = None, - revision: Optional[str] = None, - commit_message: Optional[str] = None, - commit_description: Optional[str] = None, - create_pr: Optional[bool] = None, - parent_commit: Optional[str] = None, - run_as_future: Literal[False] = ..., - ) -> str: - ... 
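- 
- # A minimal sketch of the multi-commit flow implemented above, keeping the
- # draft PR open for review instead of auto-merging it (the repo id and the
- # operations are assumptions used for illustration):
- #
- # >>> from huggingface_hub import HfApi, CommitOperationAdd, plan_multi_commits
- # >>> api = HfApi()
- # >>> additions, deletions = plan_multi_commits(
- # ...     operations=[CommitOperationAdd(path_in_repo="data.bin", path_or_fileobj="./data.bin")]
- # ... )
- # >>> pr_url = api.create_commits_on_pr(
- # ...     repo_id="username/my-model",
- # ...     addition_commits=additions,
- # ...     deletion_commits=deletions,
- # ...     commit_message="Upload data in multiple commits",
- # ...     merge_pr=False,  # leave the PR open instead of merging it automatically
- # ... )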
- - @overload - def upload_file( - self, - *, - path_or_fileobj: Union[str, Path, bytes, BinaryIO], - path_in_repo: str, - repo_id: str, - token: Optional[str] = None, - repo_type: Optional[str] = None, - revision: Optional[str] = None, - commit_message: Optional[str] = None, - commit_description: Optional[str] = None, - create_pr: Optional[bool] = None, - parent_commit: Optional[str] = None, - run_as_future: Literal[True] = ..., - ) -> Future[str]: - ... - - @validate_hf_hub_args - @future_compatible - def upload_file( - self, - *, - path_or_fileobj: Union[str, Path, bytes, BinaryIO], - path_in_repo: str, - repo_id: str, - token: Optional[str] = None, - repo_type: Optional[str] = None, - revision: Optional[str] = None, - commit_message: Optional[str] = None, - commit_description: Optional[str] = None, - create_pr: Optional[bool] = None, - parent_commit: Optional[str] = None, - run_as_future: bool = False, - ) -> Union[str, Future[str]]: - """ - Upload a local file (up to 50 GB) to the given repo. The upload is done - through a HTTP post request, and doesn't require git or git-lfs to be - installed. - - Args: - path_or_fileobj (`str`, `Path`, `bytes`, or `IO`): - Path to a file on the local machine or binary data stream / - fileobj / buffer. - path_in_repo (`str`): - Relative filepath in the repo, for example: - `"checkpoints/1fec34a/weights.bin"` - repo_id (`str`): - The repository to which the file will be uploaded, for example: - `"username/custom_transformers"` - token (`str`, *optional*): - Authentication token, obtained with `HfApi.login` method. Will - default to the stored token. - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if uploading to a dataset or - space, `None` or `"model"` if uploading to a model. Default is - `None`. - revision (`str`, *optional*): - The git revision to commit from. Defaults to the head of the `"main"` branch. - commit_message (`str`, *optional*): - The summary / title / first line of the generated commit - commit_description (`str` *optional*) - The description of the generated commit - create_pr (`boolean`, *optional*): - Whether or not to create a Pull Request with that commit. Defaults to `False`. - If `revision` is not set, PR is opened against the `"main"` branch. If - `revision` is set and is a branch, PR is opened against this branch. If - `revision` is set and is not a branch name (example: a commit oid), an - `RevisionNotFoundError` is returned by the server. - parent_commit (`str`, *optional*): - The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported. - If specified and `create_pr` is `False`, the commit will fail if `revision` does not point to `parent_commit`. - If specified and `create_pr` is `True`, the pull request will be created from `parent_commit`. - Specifying `parent_commit` ensures the repo has not changed before committing the changes, and can be - especially useful if the repo is updated / committed to concurrently. - run_as_future (`bool`, *optional*): - Whether or not to run this method in the background. Background jobs are run sequentially without - blocking the main thread. Passing `run_as_future=True` will return a [Future](https://docs.python.org/3/library/concurrent.futures.html#future-objects) - object. Defaults to `False`. - - - Returns: - `str` or `Future`: The URL to visualize the uploaded file on the hub. If `run_as_future=True` is passed, - returns a Future object which will contain the result when executed. 
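- 
- Example of a non-blocking upload (a minimal sketch; the repo id and file path are assumptions):
- ```python
- >>> from huggingface_hub import HfApi
- >>> api = HfApi()
- >>> future = api.upload_file(
- ...     path_or_fileobj="./weights.bin",
- ...     path_in_repo="weights.bin",
- ...     repo_id="username/my-model",
- ...     run_as_future=True,
- ... )
- >>> # ... do other work while the upload runs in the background ...
- >>> future.result()  # blocks until the upload is done and returns the file URL
- ```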
- - - - Raises the following errors: - - - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError) - if the HuggingFace API returned an error - - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - if some parameter value is invalid - - [`~utils.RepositoryNotFoundError`] - If the repository to download from cannot be found. This may be because it doesn't exist, - or because it is set to `private` and you do not have access. - - [`~utils.RevisionNotFoundError`] - If the revision to download from cannot be found. - - - - - - `upload_file` assumes that the repo already exists on the Hub. If you get a - Client error 404, please make sure you are authenticated and that `repo_id` and - `repo_type` are set correctly. If repo does not exist, create it first using - [`~hf_api.create_repo`]. - - - - Example: - - ```python - >>> from huggingface_hub import upload_file - - >>> with open("./local/filepath", "rb") as fobj: - ... upload_file( - ... path_or_fileobj=fileobj, - ... path_in_repo="remote/file/path.h5", - ... repo_id="username/my-dataset", - ... repo_type="dataset", - ... token="my_token", - ... ) - "https://huggingface.co/datasets/username/my-dataset/blob/main/remote/file/path.h5" - - >>> upload_file( - ... path_or_fileobj=".\\\\local\\\\file\\\\path", - ... path_in_repo="remote/file/path.h5", - ... repo_id="username/my-model", - ... token="my_token", - ... ) - "https://huggingface.co/username/my-model/blob/main/remote/file/path.h5" - - >>> upload_file( - ... path_or_fileobj=".\\\\local\\\\file\\\\path", - ... path_in_repo="remote/file/path.h5", - ... repo_id="username/my-model", - ... token="my_token", - ... create_pr=True, - ... ) - "https://huggingface.co/username/my-model/blob/refs%2Fpr%2F1/remote/file/path.h5" - ``` - """ - if repo_type not in REPO_TYPES: - raise ValueError(f"Invalid repo type, must be one of {REPO_TYPES}") - - commit_message = ( - commit_message if commit_message is not None else f"Upload {path_in_repo} with huggingface_hub" - ) - operation = CommitOperationAdd( - path_or_fileobj=path_or_fileobj, - path_in_repo=path_in_repo, - ) - - commit_info = self.create_commit( - repo_id=repo_id, - repo_type=repo_type, - operations=[operation], - commit_message=commit_message, - commit_description=commit_description, - token=token, - revision=revision, - create_pr=create_pr, - parent_commit=parent_commit, - ) - - if commit_info.pr_url is not None: - revision = quote(_parse_revision_from_pr_url(commit_info.pr_url), safe="") - if repo_type in REPO_TYPES_URL_PREFIXES: - repo_id = REPO_TYPES_URL_PREFIXES[repo_type] + repo_id - revision = revision if revision is not None else DEFAULT_REVISION - # Similar to `hf_hub_url` but it's "blob" instead of "resolve" - return f"{self.endpoint}/{repo_id}/blob/{revision}/{path_in_repo}" - - @overload - def upload_folder( # type: ignore - self, - *, - repo_id: str, - folder_path: Union[str, Path], - path_in_repo: Optional[str] = None, - commit_message: Optional[str] = None, - commit_description: Optional[str] = None, - token: Optional[str] = None, - repo_type: Optional[str] = None, - revision: Optional[str] = None, - create_pr: Optional[bool] = None, - parent_commit: Optional[str] = None, - allow_patterns: Optional[Union[List[str], str]] = None, - ignore_patterns: Optional[Union[List[str], str]] = None, - delete_patterns: Optional[Union[List[str], str]] = None, - multi_commits: bool = False, - multi_commits_verbose: bool = False, - run_as_future: Literal[False] = ..., - ) -> str: - ... 
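- 
- # `path_or_fileobj` also accepts in-memory bytes or a file object, so a file
- # can be pushed without writing it to disk first. A minimal sketch (the repo
- # id and destination path are assumptions):
- #
- # >>> from huggingface_hub import HfApi
- # >>> api = HfApi()
- # >>> api.upload_file(
- # ...     path_or_fileobj=b"text,label\nhello,positive\n",
- # ...     path_in_repo="data/train.csv",
- # ...     repo_id="username/my-dataset",
- # ...     repo_type="dataset",
- # ... )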
- - @overload - def upload_folder( - self, - *, - repo_id: str, - folder_path: Union[str, Path], - path_in_repo: Optional[str] = None, - commit_message: Optional[str] = None, - commit_description: Optional[str] = None, - token: Optional[str] = None, - repo_type: Optional[str] = None, - revision: Optional[str] = None, - create_pr: Optional[bool] = None, - parent_commit: Optional[str] = None, - allow_patterns: Optional[Union[List[str], str]] = None, - ignore_patterns: Optional[Union[List[str], str]] = None, - delete_patterns: Optional[Union[List[str], str]] = None, - multi_commits: bool = False, - multi_commits_verbose: bool = False, - run_as_future: Literal[True] = ..., - ) -> Future[str]: - ... - - @validate_hf_hub_args - @future_compatible - def upload_folder( - self, - *, - repo_id: str, - folder_path: Union[str, Path], - path_in_repo: Optional[str] = None, - commit_message: Optional[str] = None, - commit_description: Optional[str] = None, - token: Optional[str] = None, - repo_type: Optional[str] = None, - revision: Optional[str] = None, - create_pr: Optional[bool] = None, - parent_commit: Optional[str] = None, - allow_patterns: Optional[Union[List[str], str]] = None, - ignore_patterns: Optional[Union[List[str], str]] = None, - delete_patterns: Optional[Union[List[str], str]] = None, - multi_commits: bool = False, - multi_commits_verbose: bool = False, - run_as_future: bool = False, - ) -> Union[str, Future[str]]: - """ - Upload a local folder to the given repo. The upload is done through a HTTP requests, and doesn't require git or - git-lfs to be installed. - - The structure of the folder will be preserved. Files with the same name already present in the repository will - be overwritten. Others will be left untouched. - - Use the `allow_patterns` and `ignore_patterns` arguments to specify which files to upload. These parameters - accept either a single pattern or a list of patterns. Patterns are Standard Wildcards (globbing patterns) as - documented [here](https://tldp.org/LDP/GNU-Linux-Tools-Summary/html/x11655.htm). If both `allow_patterns` and - `ignore_patterns` are provided, both constraints apply. By default, all files from the folder are uploaded. - - Use the `delete_patterns` argument to specify remote files you want to delete. Input type is the same as for - `allow_patterns` (see above). If `path_in_repo` is also provided, the patterns are matched against paths - relative to this folder. For example, `upload_folder(..., path_in_repo="experiment", delete_patterns="logs/*")` - will delete any remote file under `./experiment/logs/`. Note that the `.gitattributes` file will not be deleted - even if it matches the patterns. - - Any `.git/` folder present in any subdirectory will be ignored. However, please be aware that the `.gitignore` - file is not taken into account. - - Uses `HfApi.create_commit` under the hood. - - Args: - repo_id (`str`): - The repository to which the file will be uploaded, for example: - `"username/custom_transformers"` - folder_path (`str` or `Path`): - Path to the folder to upload on the local file system - path_in_repo (`str`, *optional*): - Relative path of the directory in the repo, for example: - `"checkpoints/1fec34a/results"`. Will default to the root folder of the repository. - token (`str`, *optional*): - Authentication token, obtained with `HfApi.login` method. Will - default to the stored token. - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if uploading to a dataset or - space, `None` or `"model"` if uploading to a model. 
Default is - `None`. - revision (`str`, *optional*): - The git revision to commit from. Defaults to the head of the `"main"` branch. - commit_message (`str`, *optional*): - The summary / title / first line of the generated commit. Defaults to: - `f"Upload {path_in_repo} with huggingface_hub"` - commit_description (`str` *optional*): - The description of the generated commit - create_pr (`boolean`, *optional*): - Whether or not to create a Pull Request with that commit. Defaults to `False`. If `revision` is not - set, PR is opened against the `"main"` branch. If `revision` is set and is a branch, PR is opened - against this branch. If `revision` is set and is not a branch name (example: a commit oid), an - `RevisionNotFoundError` is returned by the server. If both `multi_commits` and `create_pr` are True, - the PR created in the multi-commit process is kept opened. - parent_commit (`str`, *optional*): - The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported. - If specified and `create_pr` is `False`, the commit will fail if `revision` does not point to `parent_commit`. - If specified and `create_pr` is `True`, the pull request will be created from `parent_commit`. - Specifying `parent_commit` ensures the repo has not changed before committing the changes, and can be - especially useful if the repo is updated / committed to concurrently. - allow_patterns (`List[str]` or `str`, *optional*): - If provided, only files matching at least one pattern are uploaded. - ignore_patterns (`List[str]` or `str`, *optional*): - If provided, files matching any of the patterns are not uploaded. - delete_patterns (`List[str]` or `str`, *optional*): - If provided, remote files matching any of the patterns will be deleted from the repo while committing - new files. This is useful if you don't know which files have already been uploaded. - Note: to avoid discrepancies the `.gitattributes` file is not deleted even if it matches the pattern. - multi_commits (`bool`): - If True, changes are pushed to a PR using a multi-commit process. Defaults to `False`. - multi_commits_verbose (`bool`): - If True and `multi_commits` is used, more information will be displayed to the user. - run_as_future (`bool`, *optional*): - Whether or not to run this method in the background. Background jobs are run sequentially without - blocking the main thread. Passing `run_as_future=True` will return a [Future](https://docs.python.org/3/library/concurrent.futures.html#future-objects) - object. Defaults to `False`. - - Returns: - `str` or `Future[str]`: A URL to visualize the uploaded folder on the hub. If `run_as_future=True` is passed, - returns a Future object which will contain the result when executed. - - - - Raises the following errors: - - - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError) - if the HuggingFace API returned an error - - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - if some parameter value is invalid - - - - - - `upload_folder` assumes that the repo already exists on the Hub. If you get a Client error 404, please make - sure you are authenticated and that `repo_id` and `repo_type` are set correctly. If repo does not exist, create - it first using [`~hf_api.create_repo`]. - - - - - - `multi_commits` is experimental. Its API and behavior is subject to change in the future without prior notice. 
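- 
- For a very large folder, the experimental multi-commit flow can be enabled as follows
- (a minimal sketch; the folder path and repo id are assumptions):
- ```python
- >>> upload_folder(
- ...     folder_path="local/large-checkpoints",
- ...     repo_id="username/my-model",
- ...     multi_commits=True,
- ...     multi_commits_verbose=True,
- ... )
- ```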
- - - - Example: - - ```python - # Upload checkpoints folder except the log files - >>> upload_folder( - ... folder_path="local/checkpoints", - ... path_in_repo="remote/experiment/checkpoints", - ... repo_id="username/my-dataset", - ... repo_type="datasets", - ... token="my_token", - ... ignore_patterns="**/logs/*.txt", - ... ) - # "https://huggingface.co/datasets/username/my-dataset/tree/main/remote/experiment/checkpoints" - - # Upload checkpoints folder including logs while deleting existing logs from the repo - # Useful if you don't know exactly which log files have already being pushed - >>> upload_folder( - ... folder_path="local/checkpoints", - ... path_in_repo="remote/experiment/checkpoints", - ... repo_id="username/my-dataset", - ... repo_type="datasets", - ... token="my_token", - ... delete_patterns="**/logs/*.txt", - ... ) - "https://huggingface.co/datasets/username/my-dataset/tree/main/remote/experiment/checkpoints" - - # Upload checkpoints folder while creating a PR - >>> upload_folder( - ... folder_path="local/checkpoints", - ... path_in_repo="remote/experiment/checkpoints", - ... repo_id="username/my-dataset", - ... repo_type="datasets", - ... token="my_token", - ... create_pr=True, - ... ) - "https://huggingface.co/datasets/username/my-dataset/tree/refs%2Fpr%2F1/remote/experiment/checkpoints" - - ``` - """ - if repo_type not in REPO_TYPES: - raise ValueError(f"Invalid repo type, must be one of {REPO_TYPES}") - - if multi_commits: - if revision is not None and revision != DEFAULT_REVISION: - raise ValueError("Cannot use `multi_commit` to commit changes other than the main branch.") - - # By default, upload folder to the root directory in repo. - if path_in_repo is None: - path_in_repo = "" - - # Do not upload .git folder - if ignore_patterns is None: - ignore_patterns = [] - elif isinstance(ignore_patterns, str): - ignore_patterns = [ignore_patterns] - ignore_patterns += IGNORE_GIT_FOLDER_PATTERNS - - delete_operations = self._prepare_upload_folder_deletions( - repo_id=repo_id, - repo_type=repo_type, - revision=DEFAULT_REVISION if create_pr else revision, - token=token, - path_in_repo=path_in_repo, - delete_patterns=delete_patterns, - ) - add_operations = _prepare_upload_folder_additions( - folder_path, - path_in_repo, - allow_patterns=allow_patterns, - ignore_patterns=ignore_patterns, - ) - - # Optimize operations: if some files will be overwritten, we don't need to delete them first - if len(add_operations) > 0: - added_paths = set(op.path_in_repo for op in add_operations) - delete_operations = [ - delete_op for delete_op in delete_operations if delete_op.path_in_repo not in added_paths - ] - commit_operations = delete_operations + add_operations - - pr_url: Optional[str] - commit_message = commit_message or "Upload folder using huggingface_hub" - if multi_commits: - addition_commits, deletion_commits = plan_multi_commits(operations=commit_operations) - pr_url = self.create_commits_on_pr( - repo_id=repo_id, - repo_type=repo_type, - addition_commits=addition_commits, - deletion_commits=deletion_commits, - commit_message=commit_message, - commit_description=commit_description, - token=token, - merge_pr=not create_pr, - verbose=multi_commits_verbose, - ) - else: - commit_info = self.create_commit( - repo_type=repo_type, - repo_id=repo_id, - operations=commit_operations, - commit_message=commit_message, - commit_description=commit_description, - token=token, - revision=revision, - create_pr=create_pr, - parent_commit=parent_commit, - ) - pr_url = commit_info.pr_url - - if 
create_pr and pr_url is not None: - revision = quote(_parse_revision_from_pr_url(pr_url), safe="") - if repo_type in REPO_TYPES_URL_PREFIXES: - repo_id = REPO_TYPES_URL_PREFIXES[repo_type] + repo_id - revision = revision if revision is not None else DEFAULT_REVISION - # Similar to `hf_hub_url` but it's "tree" instead of "resolve" - return f"{self.endpoint}/{repo_id}/tree/{revision}/{path_in_repo}" - - @validate_hf_hub_args - def delete_file( - self, - path_in_repo: str, - repo_id: str, - *, - token: Optional[str] = None, - repo_type: Optional[str] = None, - revision: Optional[str] = None, - commit_message: Optional[str] = None, - commit_description: Optional[str] = None, - create_pr: Optional[bool] = None, - parent_commit: Optional[str] = None, - ) -> CommitInfo: - """ - Deletes a file in the given repo. - - Args: - path_in_repo (`str`): - Relative filepath in the repo, for example: - `"checkpoints/1fec34a/weights.bin"` - repo_id (`str`): - The repository from which the file will be deleted, for example: - `"username/custom_transformers"` - token (`str`, *optional*): - Authentication token, obtained with `HfApi.login` method. Will - default to the stored token. - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if the file is in a dataset or - space, `None` or `"model"` if in a model. Default is `None`. - revision (`str`, *optional*): - The git revision to commit from. Defaults to the head of the `"main"` branch. - commit_message (`str`, *optional*): - The summary / title / first line of the generated commit. Defaults to - `f"Delete {path_in_repo} with huggingface_hub"`. - commit_description (`str` *optional*) - The description of the generated commit - create_pr (`boolean`, *optional*): - Whether or not to create a Pull Request with that commit. Defaults to `False`. - If `revision` is not set, PR is opened against the `"main"` branch. If - `revision` is set and is a branch, PR is opened against this branch. If - `revision` is set and is not a branch name (example: a commit oid), an - `RevisionNotFoundError` is returned by the server. - parent_commit (`str`, *optional*): - The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported. - If specified and `create_pr` is `False`, the commit will fail if `revision` does not point to `parent_commit`. - If specified and `create_pr` is `True`, the pull request will be created from `parent_commit`. - Specifying `parent_commit` ensures the repo has not changed before committing the changes, and can be - especially useful if the repo is updated / committed to concurrently. - - - - - Raises the following errors: - - - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError) - if the HuggingFace API returned an error - - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - if some parameter value is invalid - - [`~utils.RepositoryNotFoundError`] - If the repository to download from cannot be found. This may be because it doesn't exist, - or because it is set to `private` and you do not have access. - - [`~utils.RevisionNotFoundError`] - If the revision to download from cannot be found. - - [`~utils.EntryNotFoundError`] - If the file to download cannot be found. 
- - - - """ - commit_message = ( - commit_message if commit_message is not None else f"Delete {path_in_repo} with huggingface_hub" - ) - - operations = [CommitOperationDelete(path_in_repo=path_in_repo)] - - return self.create_commit( - repo_id=repo_id, - repo_type=repo_type, - token=token, - operations=operations, - revision=revision, - commit_message=commit_message, - commit_description=commit_description, - create_pr=create_pr, - parent_commit=parent_commit, - ) - - @validate_hf_hub_args - def delete_folder( - self, - path_in_repo: str, - repo_id: str, - *, - token: Optional[str] = None, - repo_type: Optional[str] = None, - revision: Optional[str] = None, - commit_message: Optional[str] = None, - commit_description: Optional[str] = None, - create_pr: Optional[bool] = None, - parent_commit: Optional[str] = None, - ) -> CommitInfo: - """ - Deletes a folder in the given repo. - - Simple wrapper around [`create_commit`] method. - - Args: - path_in_repo (`str`): - Relative folder path in the repo, for example: `"checkpoints/1fec34a"`. - repo_id (`str`): - The repository from which the folder will be deleted, for example: - `"username/custom_transformers"` - token (`str`, *optional*): - Authentication token, obtained with `HfApi.login` method. Will default - to the stored token. - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if the folder is in a dataset or - space, `None` or `"model"` if in a model. Default is `None`. - revision (`str`, *optional*): - The git revision to commit from. Defaults to the head of the `"main"` branch. - commit_message (`str`, *optional*): - The summary / title / first line of the generated commit. Defaults to - `f"Delete folder {path_in_repo} with huggingface_hub"`. - commit_description (`str` *optional*) - The description of the generated commit. - create_pr (`boolean`, *optional*): - Whether or not to create a Pull Request with that commit. Defaults to `False`. - If `revision` is not set, PR is opened against the `"main"` branch. If - `revision` is set and is a branch, PR is opened against this branch. If - `revision` is set and is not a branch name (example: a commit oid), an - `RevisionNotFoundError` is returned by the server. - parent_commit (`str`, *optional*): - The OID / SHA of the parent commit, as a hexadecimal string. Shorthands (7 first characters) are also supported. - If specified and `create_pr` is `False`, the commit will fail if `revision` does not point to `parent_commit`. - If specified and `create_pr` is `True`, the pull request will be created from `parent_commit`. - Specifying `parent_commit` ensures the repo has not changed before committing the changes, and can be - especially useful if the repo is updated / committed to concurrently. - """ - return self.create_commit( - repo_id=repo_id, - repo_type=repo_type, - token=token, - operations=[CommitOperationDelete(path_in_repo=path_in_repo, is_folder=True)], - revision=revision, - commit_message=( - commit_message if commit_message is not None else f"Delete folder {path_in_repo} with huggingface_hub" - ), - commit_description=commit_description, - create_pr=create_pr, - parent_commit=parent_commit, - ) - - @validate_hf_hub_args - def create_branch( - self, - repo_id: str, - *, - branch: str, - revision: Optional[str] = None, - token: Optional[str] = None, - repo_type: Optional[str] = None, - exist_ok: bool = False, - ) -> None: - """ - Create a new branch for a repo on the Hub, starting from the specified revision (defaults to `main`). 
- To find a revision suiting your needs, you can use [`list_repo_refs`] or [`list_repo_commits`]. - - Args: - repo_id (`str`): - The repository in which the branch will be created. - Example: `"user/my-cool-model"`. - - branch (`str`): - The name of the branch to create. - - revision (`str`, *optional*): - The git revision to create the branch from. It can be a branch name or - the OID/SHA of a commit, as a hexadecimal string. Defaults to the head - of the `"main"` branch. - - token (`str`, *optional*): - Authentication token. Will default to the stored token. - - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if creating a branch on a dataset or - space, `None` or `"model"` if tagging a model. Default is `None`. - - exist_ok (`bool`, *optional*, defaults to `False`): - If `True`, do not raise an error if branch already exists. - - Raises: - [`~utils.RepositoryNotFoundError`]: - If repository is not found (error 404): wrong repo_id/repo_type, private - but not authenticated or repo does not exist. - [`~utils.BadRequestError`]: - If invalid reference for a branch. Ex: `refs/pr/5` or 'refs/foo/bar'. - [`~utils.HfHubHTTPError`]: - If the branch already exists on the repo (error 409) and `exist_ok` is - set to `False`. - """ - if repo_type is None: - repo_type = REPO_TYPE_MODEL - branch = quote(branch, safe="") - - # Prepare request - branch_url = f"{self.endpoint}/api/{repo_type}s/{repo_id}/branch/{branch}" - headers = self._build_hf_headers(token=token, is_write_action=True) - payload = {} - if revision is not None: - payload["startingPoint"] = revision - - # Create branch - response = get_session().post(url=branch_url, headers=headers, json=payload) - try: - hf_raise_for_status(response) - except HfHubHTTPError as e: - if not (e.response.status_code == 409 and exist_ok): - raise - - @validate_hf_hub_args - def delete_branch( - self, - repo_id: str, - *, - branch: str, - token: Optional[str] = None, - repo_type: Optional[str] = None, - ) -> None: - """ - Delete a branch from a repo on the Hub. - - Args: - repo_id (`str`): - The repository in which a branch will be deleted. - Example: `"user/my-cool-model"`. - - branch (`str`): - The name of the branch to delete. - - token (`str`, *optional*): - Authentication token. Will default to the stored token. - - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if creating a branch on a dataset or - space, `None` or `"model"` if tagging a model. Default is `None`. - - Raises: - [`~utils.RepositoryNotFoundError`]: - If repository is not found (error 404): wrong repo_id/repo_type, private - but not authenticated or repo does not exist. - [`~utils.HfHubHTTPError`]: - If trying to delete a protected branch. Ex: `main` cannot be deleted. - [`~utils.HfHubHTTPError`]: - If trying to delete a branch that does not exist. - - """ - if repo_type is None: - repo_type = REPO_TYPE_MODEL - branch = quote(branch, safe="") - - # Prepare request - branch_url = f"{self.endpoint}/api/{repo_type}s/{repo_id}/branch/{branch}" - headers = self._build_hf_headers(token=token, is_write_action=True) - - # Delete branch - response = get_session().delete(url=branch_url, headers=headers) - hf_raise_for_status(response) - - @validate_hf_hub_args - def create_tag( - self, - repo_id: str, - *, - tag: str, - tag_message: Optional[str] = None, - revision: Optional[str] = None, - token: Optional[str] = None, - repo_type: Optional[str] = None, - exist_ok: bool = False, - ) -> None: - """ - Tag a given commit of a repo on the Hub. 
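- 
- Example (a minimal sketch; the repo id and tag name are assumptions):
- ```python
- >>> from huggingface_hub import HfApi
- >>> api = HfApi()
- >>> api.create_tag("username/my-model", tag="v1.0", tag_message="First stable release")
- ```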
- - Args: - repo_id (`str`): - The repository in which a commit will be tagged. - Example: `"user/my-cool-model"`. - - tag (`str`): - The name of the tag to create. - - tag_message (`str`, *optional*): - The description of the tag to create. - - revision (`str`, *optional*): - The git revision to tag. It can be a branch name or the OID/SHA of a - commit, as a hexadecimal string. Shorthands (7 first characters) are - also supported. Defaults to the head of the `"main"` branch. - - token (`str`, *optional*): - Authentication token. Will default to the stored token. - - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if tagging a dataset or - space, `None` or `"model"` if tagging a model. Default is - `None`. - - exist_ok (`bool`, *optional*, defaults to `False`): - If `True`, do not raise an error if tag already exists. - - Raises: - [`~utils.RepositoryNotFoundError`]: - If repository is not found (error 404): wrong repo_id/repo_type, private - but not authenticated or repo does not exist. - [`~utils.RevisionNotFoundError`]: - If revision is not found (error 404) on the repo. - [`~utils.HfHubHTTPError`]: - If the branch already exists on the repo (error 409) and `exist_ok` is - set to `False`. - """ - if repo_type is None: - repo_type = REPO_TYPE_MODEL - revision = quote(revision, safe="") if revision is not None else DEFAULT_REVISION - - # Prepare request - tag_url = f"{self.endpoint}/api/{repo_type}s/{repo_id}/tag/{revision}" - headers = self._build_hf_headers(token=token, is_write_action=True) - payload = {"tag": tag} - if tag_message is not None: - payload["message"] = tag_message - - # Tag - response = get_session().post(url=tag_url, headers=headers, json=payload) - try: - hf_raise_for_status(response) - except HfHubHTTPError as e: - if not (e.response.status_code == 409 and exist_ok): - raise - - @validate_hf_hub_args - def delete_tag( - self, - repo_id: str, - *, - tag: str, - token: Optional[str] = None, - repo_type: Optional[str] = None, - ) -> None: - """ - Delete a tag from a repo on the Hub. - - Args: - repo_id (`str`): - The repository in which a tag will be deleted. - Example: `"user/my-cool-model"`. - - tag (`str`): - The name of the tag to delete. - - token (`str`, *optional*): - Authentication token. Will default to the stored token. - - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if tagging a dataset or space, `None` or - `"model"` if tagging a model. Default is `None`. - - Raises: - [`~utils.RepositoryNotFoundError`]: - If repository is not found (error 404): wrong repo_id/repo_type, private - but not authenticated or repo does not exist. - [`~utils.RevisionNotFoundError`]: - If tag is not found. - """ - if repo_type is None: - repo_type = REPO_TYPE_MODEL - tag = quote(tag, safe="") - - # Prepare request - tag_url = f"{self.endpoint}/api/{repo_type}s/{repo_id}/tag/{tag}" - headers = self._build_hf_headers(token=token, is_write_action=True) - - # Un-tag - response = get_session().delete(url=tag_url, headers=headers) - hf_raise_for_status(response) - - @validate_hf_hub_args - def get_full_repo_name( - self, - model_id: str, - *, - organization: Optional[str] = None, - token: Optional[Union[bool, str]] = None, - ): - """ - Returns the repository name for a given model ID and optional - organization. - - Args: - model_id (`str`): - The name of the model. - organization (`str`, *optional*): - If passed, the repository name will be in the organization - namespace instead of the user namespace. 
- token (`bool` or `str`, *optional*): - A valid authentication token (see https://huggingface.co/settings/token). - If `None` or `True` and machine is logged in (through `huggingface-cli login` - or [`~huggingface_hub.login`]), token will be retrieved from the cache. - If `False`, token is not sent in the request header. - - Returns: - `str`: The repository name in the user's namespace - ({username}/{model_id}) if no organization is passed, and under the - organization namespace ({organization}/{model_id}) otherwise. - """ - if organization is None: - if "/" in model_id: - username = model_id.split("/")[0] - else: - username = self.whoami(token=token)["name"] # type: ignore - return f"{username}/{model_id}" - else: - return f"{organization}/{model_id}" - - @validate_hf_hub_args - def get_repo_discussions( - self, - repo_id: str, - *, - repo_type: Optional[str] = None, - token: Optional[str] = None, - ) -> Iterator[Discussion]: - """ - Fetches Discussions and Pull Requests for the given repo. - - Args: - repo_id (`str`): - A namespace (user or an organization) and a repo name separated - by a `/`. - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if fetching from a dataset or - space, `None` or `"model"` if fetching from a model. Default is - `None`. - token (`str`, *optional*): - An authentication token (See https://huggingface.co/settings/token). - - Returns: - `Iterator[Discussion]`: An iterator of [`Discussion`] objects. - - Example: - Collecting all discussions of a repo in a list: - - ```python - >>> from huggingface_hub import get_repo_discussions - >>> discussions_list = list(get_repo_discussions(repo_id="bert-base-uncased")) - ``` - - Iterating over discussions of a repo: - - ```python - >>> from huggingface_hub import get_repo_discussions - >>> for discussion in get_repo_discussions(repo_id="bert-base-uncased"): - ... print(discussion.num, discussion.title) - ``` - """ - if repo_type not in REPO_TYPES: - raise ValueError(f"Invalid repo type, must be one of {REPO_TYPES}") - if repo_type is None: - repo_type = REPO_TYPE_MODEL - - headers = self._build_hf_headers(token=token) - - def _fetch_discussion_page(page_index: int): - path = f"{self.endpoint}/api/{repo_type}s/{repo_id}/discussions?p={page_index}" - resp = get_session().get(path, headers=headers) - hf_raise_for_status(resp) - paginated_discussions = resp.json() - total = paginated_discussions["count"] - start = paginated_discussions["start"] - discussions = paginated_discussions["discussions"] - has_next = (start + len(discussions)) < total - return discussions, has_next - - has_next, page_index = True, 0 - - while has_next: - discussions, has_next = _fetch_discussion_page(page_index=page_index) - for discussion in discussions: - yield Discussion( - title=discussion["title"], - num=discussion["num"], - author=discussion.get("author", {}).get("name", "deleted"), - created_at=parse_datetime(discussion["createdAt"]), - status=discussion["status"], - repo_id=discussion["repo"]["name"], - repo_type=discussion["repo"]["type"], - is_pull_request=discussion["isPullRequest"], - endpoint=self.endpoint, - ) - page_index = page_index + 1 - - @validate_hf_hub_args - def get_discussion_details( - self, - repo_id: str, - discussion_num: int, - *, - repo_type: Optional[str] = None, - token: Optional[str] = None, - ) -> DiscussionWithDetails: - """Fetches a Discussion's / Pull Request 's details from the Hub. - - Args: - repo_id (`str`): - A namespace (user or an organization) and a repo name separated - by a `/`. 
- discussion_num (`int`): - The number of the Discussion or Pull Request . Must be a strictly positive integer. - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if uploading to a dataset or - space, `None` or `"model"` if uploading to a model. Default is - `None`. - token (`str`, *optional*): - An authentication token (See https://huggingface.co/settings/token) - - Returns: [`DiscussionWithDetails`] - - - - Raises the following errors: - - - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError) - if the HuggingFace API returned an error - - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - if some parameter value is invalid - - [`~utils.RepositoryNotFoundError`] - If the repository to download from cannot be found. This may be because it doesn't exist, - or because it is set to `private` and you do not have access. - - - """ - if not isinstance(discussion_num, int) or discussion_num <= 0: - raise ValueError("Invalid discussion_num, must be a positive integer") - if repo_type not in REPO_TYPES: - raise ValueError(f"Invalid repo type, must be one of {REPO_TYPES}") - if repo_type is None: - repo_type = REPO_TYPE_MODEL - - path = f"{self.endpoint}/api/{repo_type}s/{repo_id}/discussions/{discussion_num}" - headers = self._build_hf_headers(token=token) - resp = get_session().get(path, params={"diff": "1"}, headers=headers) - hf_raise_for_status(resp) - - discussion_details = resp.json() - is_pull_request = discussion_details["isPullRequest"] - - target_branch = discussion_details["changes"]["base"] if is_pull_request else None - conflicting_files = discussion_details["filesWithConflicts"] if is_pull_request else None - merge_commit_oid = discussion_details["changes"].get("mergeCommitId", None) if is_pull_request else None - - return DiscussionWithDetails( - title=discussion_details["title"], - num=discussion_details["num"], - author=discussion_details.get("author", {}).get("name", "deleted"), - created_at=parse_datetime(discussion_details["createdAt"]), - status=discussion_details["status"], - repo_id=discussion_details["repo"]["name"], - repo_type=discussion_details["repo"]["type"], - is_pull_request=discussion_details["isPullRequest"], - events=[deserialize_event(evt) for evt in discussion_details["events"]], - conflicting_files=conflicting_files, - target_branch=target_branch, - merge_commit_oid=merge_commit_oid, - diff=discussion_details.get("diff"), - endpoint=self.endpoint, - ) - - @validate_hf_hub_args - def create_discussion( - self, - repo_id: str, - title: str, - *, - token: Optional[str] = None, - description: Optional[str] = None, - repo_type: Optional[str] = None, - pull_request: bool = False, - ) -> DiscussionWithDetails: - """Creates a Discussion or Pull Request. - - Pull Requests created programmatically will be in `"draft"` status. - - Creating a Pull Request with changes can also be done at once with [`HfApi.create_commit`]. - - Args: - repo_id (`str`): - A namespace (user or an organization) and a repo name separated - by a `/`. - title (`str`): - The title of the discussion. It can be up to 200 characters long, - and must be at least 3 characters long. Leading and trailing whitespaces - will be stripped. - token (`str`, *optional*): - An authentication token (See https://huggingface.co/settings/token) - description (`str`, *optional*): - An optional description for the Pull Request. 
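A short sketch of fetching one Discussion's details and inspecting the extra fields that only Pull Requests carry (the repo and number below are just an illustration of a public PR):

```python
from huggingface_hub import HfApi

api = HfApi()
details = api.get_discussion_details(repo_id="bigscience/bloom", discussion_num=2)

print(details.num, details.title, details.status)
if details.is_pull_request:
    # PRs additionally expose the target branch, conflicting files and the raw diff.
    print(details.target_branch, details.conflicting_files)
    print((details.diff or "")[:200])
```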
- Defaults to `"Discussion opened with the huggingface_hub Python library"` - pull_request (`bool`, *optional*): - Whether to create a Pull Request or discussion. If `True`, creates a Pull Request. - If `False`, creates a discussion. Defaults to `False`. - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if uploading to a dataset or - space, `None` or `"model"` if uploading to a model. Default is - `None`. - - Returns: [`DiscussionWithDetails`] - - - - Raises the following errors: - - - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError) - if the HuggingFace API returned an error - - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - if some parameter value is invalid - - [`~utils.RepositoryNotFoundError`] - If the repository to download from cannot be found. This may be because it doesn't exist, - or because it is set to `private` and you do not have access. - - """ - if repo_type not in REPO_TYPES: - raise ValueError(f"Invalid repo type, must be one of {REPO_TYPES}") - if repo_type is None: - repo_type = REPO_TYPE_MODEL - - if description is not None: - description = description.strip() - description = ( - description - if description - else ( - f"{'Pull Request' if pull_request else 'Discussion'} opened with the" - " [huggingface_hub Python" - " library](https://huggingface.co/docs/huggingface_hub)" - ) - ) - - headers = self._build_hf_headers(token=token, is_write_action=True) - resp = get_session().post( - f"{self.endpoint}/api/{repo_type}s/{repo_id}/discussions", - json={ - "title": title.strip(), - "description": description, - "pullRequest": pull_request, - }, - headers=headers, - ) - hf_raise_for_status(resp) - num = resp.json()["num"] - return self.get_discussion_details( - repo_id=repo_id, - repo_type=repo_type, - discussion_num=num, - token=token, - ) - - @validate_hf_hub_args - def create_pull_request( - self, - repo_id: str, - title: str, - *, - token: Optional[str] = None, - description: Optional[str] = None, - repo_type: Optional[str] = None, - ) -> DiscussionWithDetails: - """Creates a Pull Request . Pull Requests created programmatically will be in `"draft"` status. - - Creating a Pull Request with changes can also be done at once with [`HfApi.create_commit`]; - - This is a wrapper around [`HfApi.create_discussion`]. - - Args: - repo_id (`str`): - A namespace (user or an organization) and a repo name separated - by a `/`. - title (`str`): - The title of the discussion. It can be up to 200 characters long, - and must be at least 3 characters long. Leading and trailing whitespaces - will be stripped. - token (`str`, *optional*): - An authentication token (See https://huggingface.co/settings/token) - description (`str`, *optional*): - An optional description for the Pull Request. - Defaults to `"Discussion opened with the huggingface_hub Python library"` - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if uploading to a dataset or - space, `None` or `"model"` if uploading to a model. Default is - `None`. - - Returns: [`DiscussionWithDetails`] - - - - Raises the following errors: - - - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError) - if the HuggingFace API returned an error - - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - if some parameter value is invalid - - [`~utils.RepositoryNotFoundError`] - If the repository to download from cannot be found. 
This may be because it doesn't exist, - or because it is set to `private` and you do not have access. - - """ - return self.create_discussion( - repo_id=repo_id, - title=title, - token=token, - description=description, - repo_type=repo_type, - pull_request=True, - ) - - def _post_discussion_changes( - self, - *, - repo_id: str, - discussion_num: int, - resource: str, - body: Optional[dict] = None, - token: Optional[str] = None, - repo_type: Optional[str] = None, - ) -> requests.Response: - """Internal utility to POST changes to a Discussion or Pull Request""" - if not isinstance(discussion_num, int) or discussion_num <= 0: - raise ValueError("Invalid discussion_num, must be a positive integer") - if repo_type not in REPO_TYPES: - raise ValueError(f"Invalid repo type, must be one of {REPO_TYPES}") - if repo_type is None: - repo_type = REPO_TYPE_MODEL - repo_id = f"{repo_type}s/{repo_id}" - - path = f"{self.endpoint}/api/{repo_id}/discussions/{discussion_num}/{resource}" - - headers = self._build_hf_headers(token=token, is_write_action=True) - resp = requests.post(path, headers=headers, json=body) - hf_raise_for_status(resp) - return resp - - @validate_hf_hub_args - def comment_discussion( - self, - repo_id: str, - discussion_num: int, - comment: str, - *, - token: Optional[str] = None, - repo_type: Optional[str] = None, - ) -> DiscussionComment: - """Creates a new comment on the given Discussion. - - Args: - repo_id (`str`): - A namespace (user or an organization) and a repo name separated - by a `/`. - discussion_num (`int`): - The number of the Discussion or Pull Request . Must be a strictly positive integer. - comment (`str`): - The content of the comment to create. Comments support markdown formatting. - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if uploading to a dataset or - space, `None` or `"model"` if uploading to a model. Default is - `None`. - token (`str`, *optional*): - An authentication token (See https://huggingface.co/settings/token) - - Returns: - [`DiscussionComment`]: the newly created comment - - - Examples: - ```python - - >>> comment = \"\"\" - ... Hello @otheruser! - ... - ... # This is a title - ... - ... **This is bold**, *this is italic* and ~this is strikethrough~ - ... And [this](http://url) is a link - ... \"\"\" - - >>> HfApi().comment_discussion( - ... repo_id="username/repo_name", - ... discussion_num=34 - ... comment=comment - ... ) - # DiscussionComment(id='deadbeef0000000', type='comment', ...) - - ``` - - - - Raises the following errors: - - - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError) - if the HuggingFace API returned an error - - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - if some parameter value is invalid - - [`~utils.RepositoryNotFoundError`] - If the repository to download from cannot be found. This may be because it doesn't exist, - or because it is set to `private` and you do not have access. - - - """ - resp = self._post_discussion_changes( - repo_id=repo_id, - repo_type=repo_type, - discussion_num=discussion_num, - token=token, - resource="comment", - body={"comment": comment}, - ) - return deserialize_event(resp.json()["newMessage"]) # type: ignore - - @validate_hf_hub_args - def rename_discussion( - self, - repo_id: str, - discussion_num: int, - new_title: str, - *, - token: Optional[str] = None, - repo_type: Optional[str] = None, - ) -> DiscussionTitleChange: - """Renames a Discussion. 
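`create_pull_request` is simply `create_discussion(..., pull_request=True)`, so opening a draft PR is one call. A sketch against a hypothetical repo you control:

```python
from huggingface_hub import HfApi

api = HfApi()
pr = api.create_pull_request(
    repo_id="your-username/my-model",  # hypothetical
    title="Fix tokenizer config",
    description="Updates tokenizer_config.json to the new format.",
)

# Programmatic PRs start in "draft" status; the PR branch is addressable as refs/pr/<num>.
print(pr.num, pr.status, pr.is_pull_request)
```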
- - Args: - repo_id (`str`): - A namespace (user or an organization) and a repo name separated - by a `/`. - discussion_num (`int`): - The number of the Discussion or Pull Request . Must be a strictly positive integer. - new_title (`str`): - The new title for the discussion - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if uploading to a dataset or - space, `None` or `"model"` if uploading to a model. Default is - `None`. - token (`str`, *optional*): - An authentication token (See https://huggingface.co/settings/token) - - Returns: - [`DiscussionTitleChange`]: the title change event - - - Examples: - ```python - >>> new_title = "New title, fixing a typo" - >>> HfApi().rename_discussion( - ... repo_id="username/repo_name", - ... discussion_num=34 - ... new_title=new_title - ... ) - # DiscussionTitleChange(id='deadbeef0000000', type='title-change', ...) - - ``` - - - - Raises the following errors: - - - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError) - if the HuggingFace API returned an error - - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - if some parameter value is invalid - - [`~utils.RepositoryNotFoundError`] - If the repository to download from cannot be found. This may be because it doesn't exist, - or because it is set to `private` and you do not have access. - - - """ - resp = self._post_discussion_changes( - repo_id=repo_id, - repo_type=repo_type, - discussion_num=discussion_num, - token=token, - resource="title", - body={"title": new_title}, - ) - return deserialize_event(resp.json()["newTitle"]) # type: ignore - - @validate_hf_hub_args - def change_discussion_status( - self, - repo_id: str, - discussion_num: int, - new_status: Literal["open", "closed"], - *, - token: Optional[str] = None, - comment: Optional[str] = None, - repo_type: Optional[str] = None, - ) -> DiscussionStatusChange: - """Closes or re-opens a Discussion or Pull Request. - - Args: - repo_id (`str`): - A namespace (user or an organization) and a repo name separated - by a `/`. - discussion_num (`int`): - The number of the Discussion or Pull Request . Must be a strictly positive integer. - new_status (`str`): - The new status for the discussion, either `"open"` or `"closed"`. - comment (`str`, *optional*): - An optional comment to post with the status change. - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if uploading to a dataset or - space, `None` or `"model"` if uploading to a model. Default is - `None`. - token (`str`, *optional*): - An authentication token (See https://huggingface.co/settings/token) - - Returns: - [`DiscussionStatusChange`]: the status change event - - - Examples: - ```python - >>> new_title = "New title, fixing a typo" - >>> HfApi().rename_discussion( - ... repo_id="username/repo_name", - ... discussion_num=34 - ... new_title=new_title - ... ) - # DiscussionStatusChange(id='deadbeef0000000', type='status-change', ...) - - ``` - - - - Raises the following errors: - - - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError) - if the HuggingFace API returned an error - - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - if some parameter value is invalid - - [`~utils.RepositoryNotFoundError`] - If the repository to download from cannot be found. This may be because it doesn't exist, - or because it is set to `private` and you do not have access. 
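A minimal sketch of closing a Discussion with `change_discussion_status` (and re-opening it the same way); the repo id and discussion number are placeholders:

```python
from huggingface_hub import HfApi

api = HfApi()

event = api.change_discussion_status(
    repo_id="your-username/my-model",  # hypothetical
    discussion_num=34,
    new_status="closed",
    comment="Fixed by the latest commit, closing.",
)
print(event.type)  # "status-change"

# Re-opening is the same call with new_status="open".
```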
- - - """ - if new_status not in ["open", "closed"]: - raise ValueError("Invalid status, valid statuses are: 'open' and 'closed'") - body: Dict[str, str] = {"status": new_status} - if comment and comment.strip(): - body["comment"] = comment.strip() - resp = self._post_discussion_changes( - repo_id=repo_id, - repo_type=repo_type, - discussion_num=discussion_num, - token=token, - resource="status", - body=body, - ) - return deserialize_event(resp.json()["newStatus"]) # type: ignore - - @validate_hf_hub_args - def merge_pull_request( - self, - repo_id: str, - discussion_num: int, - *, - token: Optional[str] = None, - comment: Optional[str] = None, - repo_type: Optional[str] = None, - ): - """Merges a Pull Request. - - Args: - repo_id (`str`): - A namespace (user or an organization) and a repo name separated - by a `/`. - discussion_num (`int`): - The number of the Discussion or Pull Request . Must be a strictly positive integer. - comment (`str`, *optional*): - An optional comment to post with the status change. - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if uploading to a dataset or - space, `None` or `"model"` if uploading to a model. Default is - `None`. - token (`str`, *optional*): - An authentication token (See https://huggingface.co/settings/token) - - Returns: - [`DiscussionStatusChange`]: the status change event - - - - Raises the following errors: - - - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError) - if the HuggingFace API returned an error - - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - if some parameter value is invalid - - [`~utils.RepositoryNotFoundError`] - If the repository to download from cannot be found. This may be because it doesn't exist, - or because it is set to `private` and you do not have access. - - - """ - self._post_discussion_changes( - repo_id=repo_id, - repo_type=repo_type, - discussion_num=discussion_num, - token=token, - resource="merge", - body={"comment": comment.strip()} if comment and comment.strip() else None, - ) - - @validate_hf_hub_args - def edit_discussion_comment( - self, - repo_id: str, - discussion_num: int, - comment_id: str, - new_content: str, - *, - token: Optional[str] = None, - repo_type: Optional[str] = None, - ) -> DiscussionComment: - """Edits a comment on a Discussion / Pull Request. - - Args: - repo_id (`str`): - A namespace (user or an organization) and a repo name separated - by a `/`. - discussion_num (`int`): - The number of the Discussion or Pull Request . Must be a strictly positive integer. - comment_id (`str`): - The ID of the comment to edit. - new_content (`str`): - The new content of the comment. Comments support markdown formatting. - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if uploading to a dataset or - space, `None` or `"model"` if uploading to a model. Default is - `None`. - token (`str`, *optional*): - An authentication token (See https://huggingface.co/settings/token) - - Returns: - [`DiscussionComment`]: the edited comment - - - - Raises the following errors: - - - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError) - if the HuggingFace API returned an error - - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - if some parameter value is invalid - - [`~utils.RepositoryNotFoundError`] - If the repository to download from cannot be found. This may be because it doesn't exist, - or because it is set to `private` and you do not have access. 
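Merging a Pull Request once it is ready is a single call; `edit_discussion_comment` works the same way but additionally takes the comment id. A sketch with placeholder values:

```python
from huggingface_hub import HfApi

api = HfApi()
repo = "your-username/my-model"  # hypothetical repo you can write to

# Merge PR #34 with an optional closing comment.
api.merge_pull_request(repo, discussion_num=34, comment="All checks passed, merging.")

# Fix a typo in one of your own comments (comment_id is the hex id shown on the Hub).
api.edit_discussion_comment(
    repo,
    discussion_num=34,
    comment_id="deadbeef0000000",  # placeholder id
    new_content="Updated benchmark numbers below.",
)
```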
- - - """ - resp = self._post_discussion_changes( - repo_id=repo_id, - repo_type=repo_type, - discussion_num=discussion_num, - token=token, - resource=f"comment/{comment_id.lower()}/edit", - body={"content": new_content}, - ) - return deserialize_event(resp.json()["updatedComment"]) # type: ignore - - @validate_hf_hub_args - def hide_discussion_comment( - self, - repo_id: str, - discussion_num: int, - comment_id: str, - *, - token: Optional[str] = None, - repo_type: Optional[str] = None, - ) -> DiscussionComment: - """Hides a comment on a Discussion / Pull Request. - - - Hidden comments' content cannot be retrieved anymore. Hiding a comment is irreversible. - - - Args: - repo_id (`str`): - A namespace (user or an organization) and a repo name separated - by a `/`. - discussion_num (`int`): - The number of the Discussion or Pull Request . Must be a strictly positive integer. - comment_id (`str`): - The ID of the comment to edit. - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if uploading to a dataset or - space, `None` or `"model"` if uploading to a model. Default is - `None`. - token (`str`, *optional*): - An authentication token (See https://huggingface.co/settings/token) - - Returns: - [`DiscussionComment`]: the hidden comment - - - - Raises the following errors: - - - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError) - if the HuggingFace API returned an error - - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - if some parameter value is invalid - - [`~utils.RepositoryNotFoundError`] - If the repository to download from cannot be found. This may be because it doesn't exist, - or because it is set to `private` and you do not have access. - - - """ - warnings.warn( - "Hidden comments' content cannot be retrieved anymore. Hiding a comment is irreversible.", - UserWarning, - ) - resp = self._post_discussion_changes( - repo_id=repo_id, - repo_type=repo_type, - discussion_num=discussion_num, - token=token, - resource=f"comment/{comment_id.lower()}/hide", - ) - return deserialize_event(resp.json()["updatedComment"]) # type: ignore - - @validate_hf_hub_args - def add_space_secret(self, repo_id: str, key: str, value: str, *, token: Optional[str] = None) -> None: - """Adds or updates a secret in a Space. - - Secrets allow to set secret keys or tokens to a Space without hardcoding them. - For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets. - - Args: - repo_id (`str`): - ID of the repo to update. Example: `"bigcode/in-the-stack"`. - key (`str`): - Secret key. Example: `"GITHUB_API_KEY"` - value (`str`): - Secret value. Example: `"your_github_api_key"`. - token (`str`, *optional*): - Hugging Face token. Will default to the locally saved token if not provided. - """ - r = get_session().post( - f"{self.endpoint}/api/spaces/{repo_id}/secrets", - headers=self._build_hf_headers(token=token), - json={"key": key, "value": value}, - ) - hf_raise_for_status(r) - - @validate_hf_hub_args - def delete_space_secret(self, repo_id: str, key: str, *, token: Optional[str] = None) -> None: - """Deletes a secret from a Space. - - Secrets allow to set secret keys or tokens to a Space without hardcoding them. - For more details, see https://huggingface.co/docs/hub/spaces-overview#managing-secrets. - - Args: - repo_id (`str`): - ID of the repo to update. Example: `"bigcode/in-the-stack"`. - key (`str`): - Secret key. Example: `"GITHUB_API_KEY"`. - token (`str`, *optional*): - Hugging Face token. 
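Both Space-secret endpoints take the Space id plus a key (and a value when setting). A sketch with placeholder names; the secret is what the running Space reads instead of a hardcoded token:

```python
from huggingface_hub import HfApi

api = HfApi()
space = "your-username/my-space"  # hypothetical Space you own

# Add or update a secret for the Space.
api.add_space_secret(space, key="EXTERNAL_API_KEY", value="xyz-123")

# Remove it once it is no longer needed.
api.delete_space_secret(space, key="EXTERNAL_API_KEY")
```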
Will default to the locally saved token if not provided. - """ - r = get_session().delete( - f"{self.endpoint}/api/spaces/{repo_id}/secrets", - headers=self._build_hf_headers(token=token), - json={"key": key}, - ) - hf_raise_for_status(r) - - @validate_hf_hub_args - def get_space_runtime(self, repo_id: str, *, token: Optional[str] = None) -> SpaceRuntime: - """Gets runtime information about a Space. - - Args: - repo_id (`str`): - ID of the repo to update. Example: `"bigcode/in-the-stack"`. - token (`str`, *optional*): - Hugging Face token. Will default to the locally saved token if - not provided. - Returns: - [`SpaceRuntime`]: Runtime information about a Space including Space stage and hardware. - """ - r = get_session().get( - f"{self.endpoint}/api/spaces/{repo_id}/runtime", headers=self._build_hf_headers(token=token) - ) - hf_raise_for_status(r) - return SpaceRuntime(r.json()) - - @validate_hf_hub_args - def request_space_hardware( - self, - repo_id: str, - hardware: SpaceHardware, - *, - token: Optional[str] = None, - sleep_time: Optional[int] = None, - ) -> SpaceRuntime: - """Request new hardware for a Space. - - Args: - repo_id (`str`): - ID of the repo to update. Example: `"bigcode/in-the-stack"`. - hardware (`str` or [`SpaceHardware`]): - Hardware on which to run the Space. Example: `"t4-medium"`. - token (`str`, *optional*): - Hugging Face token. Will default to the locally saved token if not provided. - sleep_time (`int`, *optional*): - Number of seconds of inactivity to wait before a Space is put to sleep. Set to `-1` if you don't want - your Space to sleep (default behavior for upgraded hardware). For free hardware, you can't configure - the sleep time (value is fixed to 48 hours of inactivity). - See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details. - Returns: - [`SpaceRuntime`]: Runtime information about a Space including Space stage and hardware. - - - - It is also possible to request hardware directly when creating the Space repo! See [`create_repo`] for details. - - - """ - if sleep_time is not None and hardware == SpaceHardware.CPU_BASIC: - warnings.warn( - ( - "If your Space runs on the default 'cpu-basic' hardware, it will go to sleep if inactive for more" - " than 48 hours. This value is not configurable. If you don't want your Space to deactivate or if" - " you want to set a custom sleep time, you need to upgrade to a paid Hardware." - ), - UserWarning, - ) - payload: Dict[str, Any] = {"flavor": hardware} - if sleep_time is not None: - payload["sleepTimeSeconds"] = sleep_time - r = get_session().post( - f"{self.endpoint}/api/spaces/{repo_id}/hardware", - headers=self._build_hf_headers(token=token), - json=payload, - ) - hf_raise_for_status(r) - return SpaceRuntime(r.json()) - - @validate_hf_hub_args - def set_space_sleep_time(self, repo_id: str, sleep_time: int, *, token: Optional[str] = None) -> SpaceRuntime: - """Set a custom sleep time for a Space running on upgraded hardware.. - - Your Space will go to sleep after X seconds of inactivity. You are not billed when your Space is in "sleep" - mode. If a new visitor lands on your Space, it will "wake it up". Only upgraded hardware can have a - configurable sleep time. To know more about the sleep stage, please refer to - https://huggingface.co/docs/hub/spaces-gpus#sleep-time. - - Args: - repo_id (`str`): - ID of the repo to update. Example: `"bigcode/in-the-stack"`. - sleep_time (`int`, *optional*): - Number of seconds of inactivity to wait before a Space is put to sleep. 
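Requesting hardware and checking the resulting runtime pair naturally; `"t4-medium"` below is the flavor named in the docstring and the Space id is a placeholder:

```python
from huggingface_hub import HfApi

api = HfApi()
space = "your-username/my-space"  # hypothetical Space you own

# Upgrade to a T4 and sleep after one hour of inactivity (configurable on paid hardware only).
api.request_space_hardware(space, "t4-medium", sleep_time=3600)

runtime = api.get_space_runtime(space)
print(runtime.stage, runtime.hardware, runtime.requested_hardware)
```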
Set to `-1` if you don't want - your Space to pause (default behavior for upgraded hardware). For free hardware, you can't configure - the sleep time (value is fixed to 48 hours of inactivity). - See https://huggingface.co/docs/hub/spaces-gpus#sleep-time for more details. - token (`str`, *optional*): - Hugging Face token. Will default to the locally saved token if not provided. - Returns: - [`SpaceRuntime`]: Runtime information about a Space including Space stage and hardware. - - - - It is also possible to set a custom sleep time when requesting hardware with [`request_space_hardware`]. - - - """ - r = get_session().post( - f"{self.endpoint}/api/spaces/{repo_id}/sleeptime", - headers=self._build_hf_headers(token=token), - json={"seconds": sleep_time}, - ) - hf_raise_for_status(r) - runtime = SpaceRuntime(r.json()) - - hardware = runtime.requested_hardware or runtime.hardware - if hardware == SpaceHardware.CPU_BASIC: - warnings.warn( - ( - "If your Space runs on the default 'cpu-basic' hardware, it will go to sleep if inactive for more" - " than 48 hours. This value is not configurable. If you don't want your Space to deactivate or if" - " you want to set a custom sleep time, you need to upgrade to a paid Hardware." - ), - UserWarning, - ) - return runtime - - @validate_hf_hub_args - def pause_space(self, repo_id: str, *, token: Optional[str] = None) -> SpaceRuntime: - """Pause your Space. - - A paused Space stops executing until manually restarted by its owner. This is different from the sleeping - state in which free Spaces go after 48h of inactivity. Paused time is not billed to your account, no matter the - hardware you've selected. To restart your Space, use [`restart_space`] and go to your Space settings page. - - For more details, please visit [the docs](https://huggingface.co/docs/hub/spaces-gpus#pause). - - Args: - repo_id (`str`): - ID of the Space to pause. Example: `"Salesforce/BLIP2"`. - token (`str`, *optional*): - Hugging Face token. Will default to the locally saved token if not provided. - - Returns: - [`SpaceRuntime`]: Runtime information about your Space including `stage=PAUSED` and requested hardware. - - Raises: - [`~utils.RepositoryNotFoundError`]: - If your Space is not found (error 404). Most probably wrong repo_id or your space is private but you - are not authenticated. - [`~utils.HfHubHTTPError`]: - 403 Forbidden: only the owner of a Space can pause it. If you want to manage a Space that you don't - own, either ask the owner by opening a Discussion or duplicate the Space. - [`~utils.BadRequestError`]: - If your Space is a static Space. Static Spaces are always running and never billed. If you want to hide - a static Space, you can set it to private. - """ - r = get_session().post( - f"{self.endpoint}/api/spaces/{repo_id}/pause", headers=self._build_hf_headers(token=token) - ) - hf_raise_for_status(r) - return SpaceRuntime(r.json()) - - @validate_hf_hub_args - def restart_space(self, repo_id: str, *, token: Optional[str] = None) -> SpaceRuntime: - """Restart your Space. - - This is the only way to programmatically restart a Space if you've put it on Pause (see [`pause_space`]). You - must be the owner of the Space to restart it. If you are using an upgraded hardware, your account will be - billed as soon as the Space is restarted. You can trigger a restart no matter the current state of a Space. - - For more details, please visit [the docs](https://huggingface.co/docs/hub/spaces-gpus#pause). - - Args: - repo_id (`str`): - ID of the Space to restart. 
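Pausing and restarting are symmetric one-liners; a sketch with a placeholder Space id you own:

```python
from huggingface_hub import HfApi

api = HfApi()
space = "your-username/my-space"  # hypothetical

runtime = api.pause_space(space)    # stage becomes "PAUSED"; paused time is not billed
print(runtime.stage)

runtime = api.restart_space(space)  # wakes the Space up again
print(runtime.stage)
```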
Example: `"Salesforce/BLIP2"`. - token (`str`, *optional*): - Hugging Face token. Will default to the locally saved token if not provided. - - Returns: - [`SpaceRuntime`]: Runtime information about your Space. - - Raises: - [`~utils.RepositoryNotFoundError`]: - If your Space is not found (error 404). Most probably wrong repo_id or your space is private but you - are not authenticated. - [`~utils.HfHubHTTPError`]: - 403 Forbidden: only the owner of a Space can restart it. If you want to restart a Space that you don't - own, either ask the owner by opening a Discussion or duplicate the Space. - [`~utils.BadRequestError`]: - If your Space is a static Space. Static Spaces are always running and never billed. If you want to hide - a static Space, you can set it to private. - """ - r = get_session().post( - f"{self.endpoint}/api/spaces/{repo_id}/restart", headers=self._build_hf_headers(token=token) - ) - hf_raise_for_status(r) - return SpaceRuntime(r.json()) - - @validate_hf_hub_args - def duplicate_space( - self, - from_id: str, - to_id: Optional[str] = None, - *, - private: Optional[bool] = None, - token: Optional[str] = None, - exist_ok: bool = False, - ) -> RepoUrl: - """Duplicate a Space. - - Programmatically duplicate a Space. The new Space will be created in your account and will be in the same state - as the original Space (running or paused). You can duplicate a Space no matter the current state of a Space. - - Args: - from_id (`str`): - ID of the Space to duplicate. Example: `"pharma/CLIP-Interrogator"`. - to_id (`str`, *optional*): - ID of the new Space. Example: `"dog/CLIP-Interrogator"`. If not provided, the new Space will have the same - name as the original Space, but in your account. - private (`bool`, *optional*): - Whether the new Space should be private or not. Defaults to the same privacy as the original Space. - token (`str`, *optional*): - Hugging Face token. Will default to the locally saved token if not provided. - exist_ok (`bool`, *optional*, defaults to `False`): - If `True`, do not raise an error if repo already exists. - - Returns: - [`RepoUrl`]: URL to the newly created repo. Value is a subclass of `str` containing - attributes like `endpoint`, `repo_type` and `repo_id`. - - Raises: - - [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError) - if the HuggingFace API returned an error - - [`~utils.RepositoryNotFoundError`] - If one of `from_id` or `to_id` cannot be found. This may be because it doesn't exist, - or because it is set to `private` and you do not have access. - - Example: - ```python - >>> from huggingface_hub import duplicate_space - - # Duplicate a Space to your account - >>> duplicate_space("multimodalart/dreambooth-training") - RepoUrl('https://huggingface.co/spaces/nateraw/dreambooth-training',...) - - # Can set custom destination id and visibility flag. - >>> duplicate_space("multimodalart/dreambooth-training", to_id="my-dreambooth", private=True) - RepoUrl('https://huggingface.co/spaces/nateraw/my-dreambooth',...) - ``` - """ - # Parse to_id if provided - parsed_to_id = RepoUrl(to_id) if to_id is not None else None - - # Infer target repo_id - to_namespace = ( # set namespace manually or default to username - parsed_to_id.namespace - if parsed_to_id is not None and parsed_to_id.namespace is not None - else self.whoami(token)["name"] - ) - to_repo_name = parsed_to_id.repo_name if to_id is not None else RepoUrl(from_id).repo_name # type: ignore - - # repository must be a valid repo_id (namespace/repo_name). 
- payload: Dict[str, Any] = {"repository": f"{to_namespace}/{to_repo_name}"} - - # private is optional with this endpoint, with None defaulting to the original space's privacy. - if private is not None: - payload["private"] = private - - r = get_session().post( - f"{self.endpoint}/api/spaces/{from_id}/duplicate", - headers=self._build_hf_headers(token=token, is_write_action=True), - json=payload, - ) - - try: - hf_raise_for_status(r) - except HTTPError as err: - if exist_ok and err.response.status_code == 409: - # Repo already exists and `exist_ok=True` - pass - else: - raise - - return RepoUrl(r.json()["url"], endpoint=self.endpoint) - - def _build_hf_headers( - self, - token: Optional[Union[bool, str]] = None, - is_write_action: bool = False, - library_name: Optional[str] = None, - library_version: Optional[str] = None, - user_agent: Union[Dict, str, None] = None, - ) -> Dict[str, str]: - """ - Alias for [`build_hf_headers`] that uses the token from [`HfApi`] client - when `token` is not provided. - """ - if token is None: - # Cannot do `token = token or self.token` as token can be `False`. - token = self.token - return build_hf_headers( - token=token, - is_write_action=is_write_action, - library_name=library_name or self.library_name, - library_version=library_version or self.library_version, - user_agent=user_agent or self.user_agent, - ) - - def _prepare_upload_folder_deletions( - self, - repo_id: str, - repo_type: Optional[str], - revision: Optional[str], - token: Optional[str], - path_in_repo: str, - delete_patterns: Optional[Union[List[str], str]], - ) -> List[CommitOperationDelete]: - """Generate the list of Delete operations for a commit to delete files from a repo. - - List remote files and match them against the `delete_patterns` constraints. Returns a list of [`CommitOperationDelete`] - with the matching items. - - Note: `.gitattributes` file is essential to make a repo work properly on the Hub. This file will always be - kept even if it matches the `delete_patterns` constraints. - """ - if delete_patterns is None: - # If no delete patterns, no need to list and filter remote files - return [] - - # List remote files - filenames = self.list_repo_files(repo_id=repo_id, revision=revision, repo_type=repo_type, token=token) - - # Compute relative path in repo - if path_in_repo: - path_in_repo = path_in_repo.strip("/") + "/" # harmonize - relpath_to_abspath = { - file[len(path_in_repo) :]: file for file in filenames if file.startswith(path_in_repo) - } - else: - relpath_to_abspath = {file: file for file in filenames} - - # Apply filter on relative paths and return - return [ - CommitOperationDelete(path_in_repo=relpath_to_abspath[relpath], is_folder=False) - for relpath in filter_repo_objects(relpath_to_abspath.keys(), allow_patterns=delete_patterns) - if relpath_to_abspath[relpath] != ".gitattributes" - ] - - -def _prepare_upload_folder_additions( - folder_path: Union[str, Path], - path_in_repo: str, - allow_patterns: Optional[Union[List[str], str]] = None, - ignore_patterns: Optional[Union[List[str], str]] = None, -) -> List[CommitOperationAdd]: - """Generate the list of Add operations for a commit to upload a folder. - - Files not matching the `allow_patterns` (allowlist) and `ignore_patterns` (denylist) - constraints are discarded. 
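These two helpers back the `allow_patterns` / `ignore_patterns` / `delete_patterns` arguments of `upload_folder`. A sketch of the call they serve, assuming a version of `upload_folder` that accepts all three pattern arguments (paths and repo are placeholders):

```python
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    repo_id="your-username/my-model",    # hypothetical
    folder_path="./checkpoint",          # local folder to mirror
    path_in_repo="weights",              # prefix inside the repo
    allow_patterns=["*.safetensors", "*.json"],
    ignore_patterns=["*.tmp"],
    delete_patterns=["*.bin"],           # matching remote files are deleted; .gitattributes is always kept
    commit_message="Sync weights folder",
)
```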
- """ - folder_path = Path(folder_path).expanduser().resolve() - if not folder_path.is_dir(): - raise ValueError(f"Provided path: '{folder_path}' is not a directory") - - # List files from folder - relpath_to_abspath = { - path.relative_to(folder_path).as_posix(): path - for path in sorted(folder_path.glob("**/*")) # sorted to be deterministic - if path.is_file() - } - - # Filter files and return - # Patterns are applied on the path relative to `folder_path`. `path_in_repo` is prefixed after the filtering. - prefix = f"{path_in_repo.strip('/')}/" if path_in_repo else "" - return [ - CommitOperationAdd( - path_or_fileobj=relpath_to_abspath[relpath], # absolute path on disk - path_in_repo=prefix + relpath, # "absolute" path in repo - ) - for relpath in filter_repo_objects( - relpath_to_abspath.keys(), allow_patterns=allow_patterns, ignore_patterns=ignore_patterns - ) - ] - - -def _parse_revision_from_pr_url(pr_url: str) -> str: - """Safely parse revision number from a PR url. - - Example: - ```py - >>> _parse_revision_from_pr_url("https://huggingface.co/bigscience/bloom/discussions/2") - "refs/pr/2" - ``` - """ - re_match = re.match(_REGEX_DISCUSSION_URL, pr_url) - if re_match is None: - raise RuntimeError(f"Unexpected response from the hub, expected a Pull Request URL but got: '{pr_url}'") - return f"refs/pr/{re_match[1]}" - - -api = HfApi() - -whoami = api.whoami -get_token_permission = api.get_token_permission - -list_models = api.list_models -model_info = api.model_info - -list_datasets = api.list_datasets -dataset_info = api.dataset_info - -list_spaces = api.list_spaces -space_info = api.space_info - -repo_info = api.repo_info -list_repo_files = api.list_repo_files -list_repo_refs = api.list_repo_refs -list_repo_commits = api.list_repo_commits -list_files_info = api.list_files_info - -list_metrics = api.list_metrics - -get_model_tags = api.get_model_tags -get_dataset_tags = api.get_dataset_tags - -create_commit = api.create_commit -create_repo = api.create_repo -delete_repo = api.delete_repo -update_repo_visibility = api.update_repo_visibility -move_repo = api.move_repo -upload_file = api.upload_file -upload_folder = api.upload_folder -delete_file = api.delete_file -delete_folder = api.delete_folder -create_commits_on_pr = api.create_commits_on_pr -create_branch = api.create_branch -delete_branch = api.delete_branch -create_tag = api.create_tag -delete_tag = api.delete_tag -get_full_repo_name = api.get_full_repo_name - -# Background jobs -run_as_future = api.run_as_future - -# Activity API -list_liked_repos = api.list_liked_repos -like = api.like -unlike = api.unlike - -# Community API -get_discussion_details = api.get_discussion_details -get_repo_discussions = api.get_repo_discussions -create_discussion = api.create_discussion -create_pull_request = api.create_pull_request -change_discussion_status = api.change_discussion_status -comment_discussion = api.comment_discussion -edit_discussion_comment = api.edit_discussion_comment -rename_discussion = api.rename_discussion -merge_pull_request = api.merge_pull_request - -# Space API -add_space_secret = api.add_space_secret -delete_space_secret = api.delete_space_secret -get_space_runtime = api.get_space_runtime -request_space_hardware = api.request_space_hardware -set_space_sleep_time = api.set_space_sleep_time -pause_space = api.pause_space -restart_space = api.restart_space -duplicate_space = api.duplicate_space diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_cairo.py 
b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_cairo.py deleted file mode 100644 index d13de790aaf23bb51251ab247652d002c72058df..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_cairo.py +++ /dev/null @@ -1,522 +0,0 @@ -""" -A Cairo backend for Matplotlib -============================== -:Author: Steve Chaplin and others - -This backend depends on cairocffi or pycairo. -""" - -import functools -import gzip -import math - -import numpy as np - -try: - import cairo - if cairo.version_info < (1, 14, 0): # Introduced set_device_scale. - raise ImportError(f"Cairo backend requires cairo>=1.14.0, " - f"but only {cairo.version_info} is available") -except ImportError: - try: - import cairocffi as cairo - except ImportError as err: - raise ImportError( - "cairo backend requires that pycairo>=1.14.0 or cairocffi " - "is installed") from err - -import matplotlib as mpl -from .. import _api, cbook, font_manager -from matplotlib.backend_bases import ( - _Backend, FigureCanvasBase, FigureManagerBase, GraphicsContextBase, - RendererBase) -from matplotlib.font_manager import ttfFontProperty -from matplotlib.path import Path -from matplotlib.transforms import Affine2D - - -def _append_path(ctx, path, transform, clip=None): - for points, code in path.iter_segments( - transform, remove_nans=True, clip=clip): - if code == Path.MOVETO: - ctx.move_to(*points) - elif code == Path.CLOSEPOLY: - ctx.close_path() - elif code == Path.LINETO: - ctx.line_to(*points) - elif code == Path.CURVE3: - cur = np.asarray(ctx.get_current_point()) - a = points[:2] - b = points[-2:] - ctx.curve_to(*(cur / 3 + a * 2 / 3), *(a * 2 / 3 + b / 3), *b) - elif code == Path.CURVE4: - ctx.curve_to(*points) - - -def _cairo_font_args_from_font_prop(prop): - """ - Convert a `.FontProperties` or a `.FontEntry` to arguments that can be - passed to `.Context.select_font_face`. - """ - def attr(field): - try: - return getattr(prop, f"get_{field}")() - except AttributeError: - return getattr(prop, field) - - name = attr("name") - slant = getattr(cairo, f"FONT_SLANT_{attr('style').upper()}") - weight = attr("weight") - weight = (cairo.FONT_WEIGHT_NORMAL - if font_manager.weight_dict.get(weight, weight) < 550 - else cairo.FONT_WEIGHT_BOLD) - return name, slant, weight - - -class RendererCairo(RendererBase): - def __init__(self, dpi): - self.dpi = dpi - self.gc = GraphicsContextCairo(renderer=self) - self.width = None - self.height = None - self.text_ctx = cairo.Context( - cairo.ImageSurface(cairo.FORMAT_ARGB32, 1, 1)) - super().__init__() - - def set_context(self, ctx): - surface = ctx.get_target() - if hasattr(surface, "get_width") and hasattr(surface, "get_height"): - size = surface.get_width(), surface.get_height() - elif hasattr(surface, "get_extents"): # GTK4 RecordingSurface. - ext = surface.get_extents() - size = ext.width, ext.height - else: # vector surfaces. 
- ctx.save() - ctx.reset_clip() - rect, *rest = ctx.copy_clip_rectangle_list() - if rest: - raise TypeError("Cannot infer surface size") - size = rect.width, rect.height - ctx.restore() - self.gc.ctx = ctx - self.width, self.height = size - - @_api.deprecated("3.6", alternative="set_context") - def set_ctx_from_surface(self, surface): - self.gc.ctx = cairo.Context(surface) - - @_api.deprecated("3.6") - def set_width_height(self, width, height): - self.width = width - self.height = height - - def _fill_and_stroke(self, ctx, fill_c, alpha, alpha_overrides): - if fill_c is not None: - ctx.save() - if len(fill_c) == 3 or alpha_overrides: - ctx.set_source_rgba(fill_c[0], fill_c[1], fill_c[2], alpha) - else: - ctx.set_source_rgba(fill_c[0], fill_c[1], fill_c[2], fill_c[3]) - ctx.fill_preserve() - ctx.restore() - ctx.stroke() - - def draw_path(self, gc, path, transform, rgbFace=None): - # docstring inherited - ctx = gc.ctx - # Clip the path to the actual rendering extents if it isn't filled. - clip = (ctx.clip_extents() - if rgbFace is None and gc.get_hatch() is None - else None) - transform = (transform - + Affine2D().scale(1, -1).translate(0, self.height)) - ctx.new_path() - _append_path(ctx, path, transform, clip) - self._fill_and_stroke( - ctx, rgbFace, gc.get_alpha(), gc.get_forced_alpha()) - - def draw_markers(self, gc, marker_path, marker_trans, path, transform, - rgbFace=None): - # docstring inherited - - ctx = gc.ctx - ctx.new_path() - # Create the path for the marker; it needs to be flipped here already! - _append_path(ctx, marker_path, marker_trans + Affine2D().scale(1, -1)) - marker_path = ctx.copy_path_flat() - - # Figure out whether the path has a fill - x1, y1, x2, y2 = ctx.fill_extents() - if x1 == 0 and y1 == 0 and x2 == 0 and y2 == 0: - filled = False - # No fill, just unset this (so we don't try to fill it later on) - rgbFace = None - else: - filled = True - - transform = (transform - + Affine2D().scale(1, -1).translate(0, self.height)) - - ctx.new_path() - for i, (vertices, codes) in enumerate( - path.iter_segments(transform, simplify=False)): - if len(vertices): - x, y = vertices[-2:] - ctx.save() - - # Translate and apply path - ctx.translate(x, y) - ctx.append_path(marker_path) - - ctx.restore() - - # Slower code path if there is a fill; we need to draw - # the fill and stroke for each marker at the same time. - # Also flush out the drawing every once in a while to - # prevent the paths from getting way too long. 
- if filled or i % 1000 == 0: - self._fill_and_stroke( - ctx, rgbFace, gc.get_alpha(), gc.get_forced_alpha()) - - # Fast path, if there is no fill, draw everything in one step - if not filled: - self._fill_and_stroke( - ctx, rgbFace, gc.get_alpha(), gc.get_forced_alpha()) - - def draw_image(self, gc, x, y, im): - im = cbook._unmultiplied_rgba8888_to_premultiplied_argb32(im[::-1]) - surface = cairo.ImageSurface.create_for_data( - im.ravel().data, cairo.FORMAT_ARGB32, - im.shape[1], im.shape[0], im.shape[1] * 4) - ctx = gc.ctx - y = self.height - y - im.shape[0] - - ctx.save() - ctx.set_source_surface(surface, float(x), float(y)) - ctx.paint() - ctx.restore() - - def draw_text(self, gc, x, y, s, prop, angle, ismath=False, mtext=None): - # docstring inherited - - # Note: (x, y) are device/display coords, not user-coords, unlike other - # draw_* methods - if ismath: - self._draw_mathtext(gc, x, y, s, prop, angle) - - else: - ctx = gc.ctx - ctx.new_path() - ctx.move_to(x, y) - - ctx.save() - ctx.select_font_face(*_cairo_font_args_from_font_prop(prop)) - ctx.set_font_size(self.points_to_pixels(prop.get_size_in_points())) - opts = cairo.FontOptions() - opts.set_antialias( - cairo.ANTIALIAS_DEFAULT if mpl.rcParams["text.antialiased"] - else cairo.ANTIALIAS_NONE) - ctx.set_font_options(opts) - if angle: - ctx.rotate(np.deg2rad(-angle)) - ctx.show_text(s) - ctx.restore() - - def _draw_mathtext(self, gc, x, y, s, prop, angle): - ctx = gc.ctx - width, height, descent, glyphs, rects = \ - self._text2path.mathtext_parser.parse(s, self.dpi, prop) - - ctx.save() - ctx.translate(x, y) - if angle: - ctx.rotate(np.deg2rad(-angle)) - - for font, fontsize, idx, ox, oy in glyphs: - ctx.new_path() - ctx.move_to(ox, -oy) - ctx.select_font_face( - *_cairo_font_args_from_font_prop(ttfFontProperty(font))) - ctx.set_font_size(self.points_to_pixels(fontsize)) - ctx.show_text(chr(idx)) - - for ox, oy, w, h in rects: - ctx.new_path() - ctx.rectangle(ox, -oy, w, -h) - ctx.set_source_rgb(0, 0, 0) - ctx.fill_preserve() - - ctx.restore() - - def get_canvas_width_height(self): - # docstring inherited - return self.width, self.height - - def get_text_width_height_descent(self, s, prop, ismath): - # docstring inherited - - if ismath == 'TeX': - return super().get_text_width_height_descent(s, prop, ismath) - - if ismath: - width, height, descent, *_ = \ - self._text2path.mathtext_parser.parse(s, self.dpi, prop) - return width, height, descent - - ctx = self.text_ctx - # problem - scale remembers last setting and font can become - # enormous causing program to crash - # save/restore prevents the problem - ctx.save() - ctx.select_font_face(*_cairo_font_args_from_font_prop(prop)) - ctx.set_font_size(self.points_to_pixels(prop.get_size_in_points())) - - y_bearing, w, h = ctx.text_extents(s)[1:4] - ctx.restore() - - return w, h, h + y_bearing - - def new_gc(self): - # docstring inherited - self.gc.ctx.save() - self.gc._alpha = 1 - self.gc._forced_alpha = False # if True, _alpha overrides A from RGBA - return self.gc - - def points_to_pixels(self, points): - # docstring inherited - return points / 72 * self.dpi - - -class GraphicsContextCairo(GraphicsContextBase): - _joind = { - 'bevel': cairo.LINE_JOIN_BEVEL, - 'miter': cairo.LINE_JOIN_MITER, - 'round': cairo.LINE_JOIN_ROUND, - } - - _capd = { - 'butt': cairo.LINE_CAP_BUTT, - 'projecting': cairo.LINE_CAP_SQUARE, - 'round': cairo.LINE_CAP_ROUND, - } - - def __init__(self, renderer): - super().__init__() - self.renderer = renderer - - def restore(self): - self.ctx.restore() - - def 
set_alpha(self, alpha): - super().set_alpha(alpha) - _alpha = self.get_alpha() - rgb = self._rgb - if self.get_forced_alpha(): - self.ctx.set_source_rgba(rgb[0], rgb[1], rgb[2], _alpha) - else: - self.ctx.set_source_rgba(rgb[0], rgb[1], rgb[2], rgb[3]) - - def set_antialiased(self, b): - self.ctx.set_antialias( - cairo.ANTIALIAS_DEFAULT if b else cairo.ANTIALIAS_NONE) - - def set_capstyle(self, cs): - self.ctx.set_line_cap(_api.check_getitem(self._capd, capstyle=cs)) - self._capstyle = cs - - def set_clip_rectangle(self, rectangle): - if not rectangle: - return - x, y, w, h = np.round(rectangle.bounds) - ctx = self.ctx - ctx.new_path() - ctx.rectangle(x, self.renderer.height - h - y, w, h) - ctx.clip() - - def set_clip_path(self, path): - if not path: - return - tpath, affine = path.get_transformed_path_and_affine() - ctx = self.ctx - ctx.new_path() - affine = (affine - + Affine2D().scale(1, -1).translate(0, self.renderer.height)) - _append_path(ctx, tpath, affine) - ctx.clip() - - def set_dashes(self, offset, dashes): - self._dashes = offset, dashes - if dashes is None: - self.ctx.set_dash([], 0) # switch dashes off - else: - self.ctx.set_dash( - list(self.renderer.points_to_pixels(np.asarray(dashes))), - offset) - - def set_foreground(self, fg, isRGBA=None): - super().set_foreground(fg, isRGBA) - if len(self._rgb) == 3: - self.ctx.set_source_rgb(*self._rgb) - else: - self.ctx.set_source_rgba(*self._rgb) - - def get_rgb(self): - return self.ctx.get_source().get_rgba()[:3] - - def set_joinstyle(self, js): - self.ctx.set_line_join(_api.check_getitem(self._joind, joinstyle=js)) - self._joinstyle = js - - def set_linewidth(self, w): - self._linewidth = float(w) - self.ctx.set_line_width(self.renderer.points_to_pixels(w)) - - -class _CairoRegion: - def __init__(self, slices, data): - self._slices = slices - self._data = data - - -class FigureCanvasCairo(FigureCanvasBase): - @property - def _renderer(self): - # In theory, _renderer should be set in __init__, but GUI canvas - # subclasses (FigureCanvasFooCairo) don't always interact well with - # multiple inheritance (FigureCanvasFoo inits but doesn't super-init - # FigureCanvasCairo), so initialize it in the getter instead. 
- if not hasattr(self, "_cached_renderer"): - self._cached_renderer = RendererCairo(self.figure.dpi) - return self._cached_renderer - - def get_renderer(self): - return self._renderer - - def copy_from_bbox(self, bbox): - surface = self._renderer.gc.ctx.get_target() - if not isinstance(surface, cairo.ImageSurface): - raise RuntimeError( - "copy_from_bbox only works when rendering to an ImageSurface") - sw = surface.get_width() - sh = surface.get_height() - x0 = math.ceil(bbox.x0) - x1 = math.floor(bbox.x1) - y0 = math.ceil(sh - bbox.y1) - y1 = math.floor(sh - bbox.y0) - if not (0 <= x0 and x1 <= sw and bbox.x0 <= bbox.x1 - and 0 <= y0 and y1 <= sh and bbox.y0 <= bbox.y1): - raise ValueError("Invalid bbox") - sls = slice(y0, y0 + max(y1 - y0, 0)), slice(x0, x0 + max(x1 - x0, 0)) - data = (np.frombuffer(surface.get_data(), np.uint32) - .reshape((sh, sw))[sls].copy()) - return _CairoRegion(sls, data) - - def restore_region(self, region): - surface = self._renderer.gc.ctx.get_target() - if not isinstance(surface, cairo.ImageSurface): - raise RuntimeError( - "restore_region only works when rendering to an ImageSurface") - surface.flush() - sw = surface.get_width() - sh = surface.get_height() - sly, slx = region._slices - (np.frombuffer(surface.get_data(), np.uint32) - .reshape((sh, sw))[sly, slx]) = region._data - surface.mark_dirty_rectangle( - slx.start, sly.start, slx.stop - slx.start, sly.stop - sly.start) - - def print_png(self, fobj): - self._get_printed_image_surface().write_to_png(fobj) - - def print_rgba(self, fobj): - width, height = self.get_width_height() - buf = self._get_printed_image_surface().get_data() - fobj.write(cbook._premultiplied_argb32_to_unmultiplied_rgba8888( - np.asarray(buf).reshape((width, height, 4)))) - - print_raw = print_rgba - - def _get_printed_image_surface(self): - self._renderer.dpi = self.figure.dpi - width, height = self.get_width_height() - surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, width, height) - self._renderer.set_context(cairo.Context(surface)) - self.figure.draw(self._renderer) - return surface - - def _save(self, fmt, fobj, *, orientation='portrait'): - # save PDF/PS/SVG - - dpi = 72 - self.figure.dpi = dpi - w_in, h_in = self.figure.get_size_inches() - width_in_points, height_in_points = w_in * dpi, h_in * dpi - - if orientation == 'landscape': - width_in_points, height_in_points = ( - height_in_points, width_in_points) - - if fmt == 'ps': - if not hasattr(cairo, 'PSSurface'): - raise RuntimeError('cairo has not been compiled with PS ' - 'support enabled') - surface = cairo.PSSurface(fobj, width_in_points, height_in_points) - elif fmt == 'pdf': - if not hasattr(cairo, 'PDFSurface'): - raise RuntimeError('cairo has not been compiled with PDF ' - 'support enabled') - surface = cairo.PDFSurface(fobj, width_in_points, height_in_points) - elif fmt in ('svg', 'svgz'): - if not hasattr(cairo, 'SVGSurface'): - raise RuntimeError('cairo has not been compiled with SVG ' - 'support enabled') - if fmt == 'svgz': - if isinstance(fobj, str): - fobj = gzip.GzipFile(fobj, 'wb') - else: - fobj = gzip.GzipFile(None, 'wb', fileobj=fobj) - surface = cairo.SVGSurface(fobj, width_in_points, height_in_points) - else: - raise ValueError("Unknown format: {!r}".format(fmt)) - - self._renderer.dpi = self.figure.dpi - self._renderer.set_context(cairo.Context(surface)) - ctx = self._renderer.gc.ctx - - if orientation == 'landscape': - ctx.rotate(np.pi / 2) - ctx.translate(0, -height_in_points) - # Perhaps add an '%%Orientation: Landscape' comment? 
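In normal use this backend is selected through Matplotlib's backend machinery rather than instantiated directly. A small sketch, assuming pycairo or cairocffi is installed:

```python
import matplotlib
matplotlib.use("cairo")  # route rendering through backend_cairo

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4], marker="o")
fig.savefig("plot.png")  # raster output via print_png
fig.savefig("plot.svg")  # vector output via the PDF/PS/SVG _save path
```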
- - self.figure.draw(self._renderer) - - ctx.show_page() - surface.finish() - if fmt == 'svgz': - fobj.close() - - print_pdf = functools.partialmethod(_save, "pdf") - print_ps = functools.partialmethod(_save, "ps") - print_svg = functools.partialmethod(_save, "svg") - print_svgz = functools.partialmethod(_save, "svgz") - - -@_api.deprecated("3.6") -class _RendererGTKCairo(RendererCairo): - def set_context(self, ctx): - if (cairo.__name__ == "cairocffi" - and not isinstance(ctx, cairo.Context)): - ctx = cairo.Context._from_pointer( - cairo.ffi.cast( - 'cairo_t **', - id(ctx) + object.__basicsize__)[0], - incref=True) - self.gc.ctx = ctx - - -@_Backend.export -class _BackendCairo(_Backend): - backend_version = cairo.version - FigureCanvas = FigureCanvasCairo - FigureManager = FigureManagerBase diff --git a/spaces/dcq/freegpt-webui/g4f/Provider/Providers/Xiaor.py b/spaces/dcq/freegpt-webui/g4f/Provider/Providers/Xiaor.py deleted file mode 100644 index 5757f9971157116cbbfabbe5420e3b7e88fed4e7..0000000000000000000000000000000000000000 --- a/spaces/dcq/freegpt-webui/g4f/Provider/Providers/Xiaor.py +++ /dev/null @@ -1,39 +0,0 @@ -import requests -import os -import json -from ...typing import sha256, Dict, get_type_hints - -url = 'https://xiaor.eu.org' -model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', - 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613'] -supports_stream = True -needs_auth = False - - -def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs): - headers = { - 'Content-Type': 'application/json', - } - data = { - 'model': model, - 'temperature': 0.7, - 'presence_penalty': 0, - 'messages': messages, - } - response = requests.post(url + '/p1/v1/chat/completions', - json=data, stream=True) - - if stream: - for chunk in response.iter_content(chunk_size=None): - chunk = chunk.decode('utf-8') - if chunk.strip(): - message = json.loads(chunk)['choices'][0]['message']['content'] - yield message - else: - message = response.json()['choices'][0]['message']['content'] - yield message - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join( - [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/latent_diffusion/__init__.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/latent_diffusion/__init__.py deleted file mode 100644 index 0cce9a89bcbeaac8468d75e9d16c9d3731f738c7..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/latent_diffusion/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -from ...utils import is_transformers_available -from .pipeline_latent_diffusion_superresolution import LDMSuperResolutionPipeline - - -if is_transformers_available(): - from .pipeline_latent_diffusion import LDMBertModel, LDMTextToImagePipeline diff --git a/spaces/declare-lab/tango/diffusers/tests/pipelines/paint_by_example/test_paint_by_example.py b/spaces/declare-lab/tango/diffusers/tests/pipelines/paint_by_example/test_paint_by_example.py deleted file mode 100644 index 81d1989200ac1ddbab305d5143ec98bcd654f46b..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/tests/pipelines/paint_by_example/test_paint_by_example.py +++ /dev/null @@ -1,210 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import gc -import random -import unittest - -import numpy as np -import torch -from PIL import Image -from transformers import CLIPImageProcessor, CLIPVisionConfig - -from diffusers import AutoencoderKL, PaintByExamplePipeline, PNDMScheduler, UNet2DConditionModel -from diffusers.pipelines.paint_by_example import PaintByExampleImageEncoder -from diffusers.utils import floats_tensor, load_image, slow, torch_device -from diffusers.utils.testing_utils import require_torch_gpu - -from ...pipeline_params import IMAGE_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS, IMAGE_GUIDED_IMAGE_INPAINTING_PARAMS -from ...test_pipelines_common import PipelineTesterMixin - - -torch.backends.cuda.matmul.allow_tf32 = False - - -class PaintByExamplePipelineFastTests(PipelineTesterMixin, unittest.TestCase): - pipeline_class = PaintByExamplePipeline - params = IMAGE_GUIDED_IMAGE_INPAINTING_PARAMS - batch_params = IMAGE_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS - - def get_dummy_components(self): - torch.manual_seed(0) - unet = UNet2DConditionModel( - block_out_channels=(32, 64), - layers_per_block=2, - sample_size=32, - in_channels=9, - out_channels=4, - down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), - up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), - cross_attention_dim=32, - ) - scheduler = PNDMScheduler(skip_prk_steps=True) - torch.manual_seed(0) - vae = AutoencoderKL( - block_out_channels=[32, 64], - in_channels=3, - out_channels=3, - down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], - up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], - latent_channels=4, - ) - torch.manual_seed(0) - config = CLIPVisionConfig( - hidden_size=32, - projection_dim=32, - intermediate_size=37, - layer_norm_eps=1e-05, - num_attention_heads=4, - num_hidden_layers=5, - image_size=32, - patch_size=4, - ) - image_encoder = PaintByExampleImageEncoder(config, proj_size=32) - feature_extractor = CLIPImageProcessor(crop_size=32, size=32) - - components = { - "unet": unet, - "scheduler": scheduler, - "vae": vae, - "image_encoder": image_encoder, - "safety_checker": None, - "feature_extractor": feature_extractor, - } - return components - - def convert_to_pt(self, image): - image = np.array(image.convert("RGB")) - image = image[None].transpose(0, 3, 1, 2) - image = torch.from_numpy(image).to(dtype=torch.float32) / 127.5 - 1.0 - return image - - def get_dummy_inputs(self, device="cpu", seed=0): - # TODO: use tensor inputs instead of PIL, this is here just to leave the old expected_slices untouched - image = floats_tensor((1, 3, 32, 32), rng=random.Random(seed)).to(device) - image = image.cpu().permute(0, 2, 3, 1)[0] - init_image = Image.fromarray(np.uint8(image)).convert("RGB").resize((64, 64)) - mask_image = Image.fromarray(np.uint8(image + 4)).convert("RGB").resize((64, 64)) - example_image = Image.fromarray(np.uint8(image)).convert("RGB").resize((32, 32)) - - if str(device).startswith("mps"): - generator = torch.manual_seed(seed) - else: - generator = 
torch.Generator(device=device).manual_seed(seed) - inputs = { - "example_image": example_image, - "image": init_image, - "mask_image": mask_image, - "generator": generator, - "num_inference_steps": 2, - "guidance_scale": 6.0, - "output_type": "numpy", - } - return inputs - - def test_paint_by_example_inpaint(self): - components = self.get_dummy_components() - - # make sure here that pndm scheduler skips prk - pipe = PaintByExamplePipeline(**components) - pipe = pipe.to("cpu") - pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs() - output = pipe(**inputs) - image = output.images - - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 64, 64, 3) - expected_slice = np.array([0.4701, 0.5555, 0.3994, 0.5107, 0.5691, 0.4517, 0.5125, 0.4769, 0.4539]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - def test_paint_by_example_image_tensor(self): - device = "cpu" - inputs = self.get_dummy_inputs() - inputs.pop("mask_image") - image = self.convert_to_pt(inputs.pop("image")) - mask_image = image.clamp(0, 1) / 2 - - # make sure here that pndm scheduler skips prk - pipe = PaintByExamplePipeline(**self.get_dummy_components()) - pipe = pipe.to(device) - pipe.set_progress_bar_config(disable=None) - - output = pipe(image=image, mask_image=mask_image[:, 0], **inputs) - out_1 = output.images - - image = image.cpu().permute(0, 2, 3, 1)[0] - mask_image = mask_image.cpu().permute(0, 2, 3, 1)[0] - - image = Image.fromarray(np.uint8(image)).convert("RGB") - mask_image = Image.fromarray(np.uint8(mask_image)).convert("RGB") - - output = pipe(**self.get_dummy_inputs()) - out_2 = output.images - - assert out_1.shape == (1, 64, 64, 3) - assert np.abs(out_1.flatten() - out_2.flatten()).max() < 5e-2 - - -@slow -@require_torch_gpu -class PaintByExamplePipelineIntegrationTests(unittest.TestCase): - def tearDown(self): - # clean up the VRAM after each test - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - def test_paint_by_example(self): - # make sure here that pndm scheduler skips prk - init_image = load_image( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" - "/paint_by_example/dog_in_bucket.png" - ) - mask_image = load_image( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" - "/paint_by_example/mask.png" - ) - example_image = load_image( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" - "/paint_by_example/panda.jpg" - ) - - pipe = PaintByExamplePipeline.from_pretrained("Fantasy-Studio/Paint-by-Example") - pipe = pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - - generator = torch.manual_seed(321) - output = pipe( - image=init_image, - mask_image=mask_image, - example_image=example_image, - generator=generator, - guidance_scale=5.0, - num_inference_steps=50, - output_type="np", - ) - - image = output.images - - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.4834, 0.4811, 0.4874, 0.5122, 0.5081, 0.5144, 0.5291, 0.5290, 0.5374]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 diff --git a/spaces/declare-lab/tango/diffusers/tests/test_config.py b/spaces/declare-lab/tango/diffusers/tests/test_config.py deleted file mode 100644 index 95b0cdf9a597ef8ff26fab3ada4a2deeac156b8e..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/tests/test_config.py +++ /dev/null @@ -1,223 +0,0 @@ -# coding=utf-8 -# 
Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import tempfile -import unittest - -from diffusers import ( - DDIMScheduler, - DDPMScheduler, - DPMSolverMultistepScheduler, - EulerAncestralDiscreteScheduler, - EulerDiscreteScheduler, - PNDMScheduler, - logging, -) -from diffusers.configuration_utils import ConfigMixin, register_to_config -from diffusers.utils.testing_utils import CaptureLogger - - -class SampleObject(ConfigMixin): - config_name = "config.json" - - @register_to_config - def __init__( - self, - a=2, - b=5, - c=(2, 5), - d="for diffusion", - e=[1, 3], - ): - pass - - -class SampleObject2(ConfigMixin): - config_name = "config.json" - - @register_to_config - def __init__( - self, - a=2, - b=5, - c=(2, 5), - d="for diffusion", - f=[1, 3], - ): - pass - - -class SampleObject3(ConfigMixin): - config_name = "config.json" - - @register_to_config - def __init__( - self, - a=2, - b=5, - c=(2, 5), - d="for diffusion", - e=[1, 3], - f=[1, 3], - ): - pass - - -class ConfigTester(unittest.TestCase): - def test_load_not_from_mixin(self): - with self.assertRaises(ValueError): - ConfigMixin.load_config("dummy_path") - - def test_register_to_config(self): - obj = SampleObject() - config = obj.config - assert config["a"] == 2 - assert config["b"] == 5 - assert config["c"] == (2, 5) - assert config["d"] == "for diffusion" - assert config["e"] == [1, 3] - - # init ignore private arguments - obj = SampleObject(_name_or_path="lalala") - config = obj.config - assert config["a"] == 2 - assert config["b"] == 5 - assert config["c"] == (2, 5) - assert config["d"] == "for diffusion" - assert config["e"] == [1, 3] - - # can override default - obj = SampleObject(c=6) - config = obj.config - assert config["a"] == 2 - assert config["b"] == 5 - assert config["c"] == 6 - assert config["d"] == "for diffusion" - assert config["e"] == [1, 3] - - # can use positional arguments. 
- obj = SampleObject(1, c=6) - config = obj.config - assert config["a"] == 1 - assert config["b"] == 5 - assert config["c"] == 6 - assert config["d"] == "for diffusion" - assert config["e"] == [1, 3] - - def test_save_load(self): - obj = SampleObject() - config = obj.config - - assert config["a"] == 2 - assert config["b"] == 5 - assert config["c"] == (2, 5) - assert config["d"] == "for diffusion" - assert config["e"] == [1, 3] - - with tempfile.TemporaryDirectory() as tmpdirname: - obj.save_config(tmpdirname) - new_obj = SampleObject.from_config(SampleObject.load_config(tmpdirname)) - new_config = new_obj.config - - # unfreeze configs - config = dict(config) - new_config = dict(new_config) - - assert config.pop("c") == (2, 5) # instantiated as tuple - assert new_config.pop("c") == [2, 5] # saved & loaded as list because of json - assert config == new_config - - def test_load_ddim_from_pndm(self): - logger = logging.get_logger("diffusers.configuration_utils") - - with CaptureLogger(logger) as cap_logger: - ddim = DDIMScheduler.from_pretrained( - "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="scheduler" - ) - - assert ddim.__class__ == DDIMScheduler - # no warning should be thrown - assert cap_logger.out == "" - - def test_load_euler_from_pndm(self): - logger = logging.get_logger("diffusers.configuration_utils") - - with CaptureLogger(logger) as cap_logger: - euler = EulerDiscreteScheduler.from_pretrained( - "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="scheduler" - ) - - assert euler.__class__ == EulerDiscreteScheduler - # no warning should be thrown - assert cap_logger.out == "" - - def test_load_euler_ancestral_from_pndm(self): - logger = logging.get_logger("diffusers.configuration_utils") - - with CaptureLogger(logger) as cap_logger: - euler = EulerAncestralDiscreteScheduler.from_pretrained( - "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="scheduler" - ) - - assert euler.__class__ == EulerAncestralDiscreteScheduler - # no warning should be thrown - assert cap_logger.out == "" - - def test_load_pndm(self): - logger = logging.get_logger("diffusers.configuration_utils") - - with CaptureLogger(logger) as cap_logger: - pndm = PNDMScheduler.from_pretrained( - "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="scheduler" - ) - - assert pndm.__class__ == PNDMScheduler - # no warning should be thrown - assert cap_logger.out == "" - - def test_overwrite_config_on_load(self): - logger = logging.get_logger("diffusers.configuration_utils") - - with CaptureLogger(logger) as cap_logger: - ddpm = DDPMScheduler.from_pretrained( - "hf-internal-testing/tiny-stable-diffusion-torch", - subfolder="scheduler", - prediction_type="sample", - beta_end=8, - ) - - with CaptureLogger(logger) as cap_logger_2: - ddpm_2 = DDPMScheduler.from_pretrained("google/ddpm-celebahq-256", beta_start=88) - - assert ddpm.__class__ == DDPMScheduler - assert ddpm.config.prediction_type == "sample" - assert ddpm.config.beta_end == 8 - assert ddpm_2.config.beta_start == 88 - - # no warning should be thrown - assert cap_logger.out == "" - assert cap_logger_2.out == "" - - def test_load_dpmsolver(self): - logger = logging.get_logger("diffusers.configuration_utils") - - with CaptureLogger(logger) as cap_logger: - dpm = DPMSolverMultistepScheduler.from_pretrained( - "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="scheduler" - ) - - assert dpm.__class__ == DPMSolverMultistepScheduler - # no warning should be thrown - assert cap_logger.out == "" diff --git 
a/spaces/diacanFperku/AutoGPT/Adobe Premiere Pro Cc Serial Number Keygen Generator LINK.md b/spaces/diacanFperku/AutoGPT/Adobe Premiere Pro Cc Serial Number Keygen Generator LINK.md deleted file mode 100644 index 21ed052dcbc7269854d746503ca860648e7328d7..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Adobe Premiere Pro Cc Serial Number Keygen Generator LINK.md +++ /dev/null @@ -1,65 +0,0 @@ -
      -

      How to Get Adobe Premiere Pro CC Serial Number Keygen Generator

      -

      Adobe Premiere Pro CC is a powerful and professional video editing software that allows you to create amazing videos with high-quality effects, transitions, titles, and audio. Whether you are a beginner or an expert, you can use Adobe Premiere Pro CC to edit your videos for personal or commercial purposes.

      -

    However, Adobe Premiere Pro CC is not free software, and you need a valid serial number to activate it and use all of its features. A serial number is a unique code that identifies your copy of the software and proves that you have purchased it legally. Without a serial number, you can only use Adobe Premiere Pro CC as a trial version for a limited time.
    

      -

      adobe premiere pro cc serial number keygen generator


    Download: https://gohhs.com/2uFVxg
    



      -

    So, how can you get a serial number for Adobe Premiere Pro CC? One way is to use a keygen generator, a program that generates random serial numbers for various applications. However, this method is not recommended, as it carries many risks and disadvantages. In this article, we will explain why you should avoid using a keygen generator for Adobe Premiere Pro CC, and what the best alternatives are for getting a genuine serial number.
    

      -

      Why You Should Avoid Using a Keygen Generator for Adobe Premiere Pro CC

      -

      A keygen generator may seem like an easy and convenient way to get a serial number for Adobe Premiere Pro CC without paying anything. However, this method has many drawbacks and dangers that you should be aware of. Here are some of them:

      -
        -
      • It's illegal. Using a keygen generator to crack Adobe Premiere Pro CC is against the law and violates the software license agreement. You are essentially stealing the software from the developers who have invested time, money, and effort to create it. If you get caught, you could face legal consequences such as fines or lawsuits.
      • -
      • It's unsafe. Many keygen generators are infected with malware, viruses, or spyware that can harm your computer or steal your personal information. You could lose your data, compromise your privacy, or damage your system. You could also expose yourself to cyberattacks or identity theft.
      • -
      • It's unreliable. There is no guarantee that the serial number generated by a keygen generator will work or last. Adobe has various methods to detect and deactivate counterfeit serial numbers. You could lose access to your software at any time or experience errors and glitches. You could also miss out on important updates, support, and features.
      • -
      • It's unethical. Using a keygen generator to crack Adobe Premiere Pro CC is unfair to the developers who deserve to be rewarded for their work and creativity. You are depriving them of their rightful income and discouraging them from making more quality products and services. You are also hurting other users who have paid for the software legitimately.
      • -
      -

      How to Get a Genuine Serial Number for Adobe Premiere Pro CC

      -

      If you want to get a genuine serial number for Adobe Premiere Pro CC, you have two options:

      -
        -
      1. Buy it from the official website or an authorized reseller. This is the best and most recommended way to get a genuine serial number for Adobe Premiere Pro CC. You can choose from different plans and prices depending on your needs and preferences. You can also enjoy free trials, discounts, updates, support, and other perks.
      2. -
      3. Use it for free with Creative Cloud. This is another way to get a genuine serial number for Adobe Premiere Pro CC without paying anything. You can use Adobe Premiere Pro CC for free as part of the Creative Cloud membership. You can access all the features and benefits of Adobe Premiere Pro CC along with other Adobe apps and services. However, you need to have an internet connection and sign in with your Adobe ID to use this option.
      4. -
      -

      Conclusion

      -

      In conclusion, using a keygen generator for Adobe Premiere Pro CC is not worth it. It's illegal, unsafe, unreliable, and unethical. You should use a genuine serial number for Adobe Premiere Pro CC instead. It's legal, safe, reliable, and ethical. You can buy it from the official website or an authorized reseller, or use it for free with Creative Cloud. By doing so, you can enjoy the full potential of Adobe Premiere Pro CC and create amazing videos with ease.

      -

      How to Buy Adobe Premiere Pro CC from the Official Website or an Authorized Reseller

      -

      If you want to buy Adobe Premiere Pro CC from the official website or an authorized reseller, you need to follow these steps:

      -

      -
        -
      1. Visit the official website of Adobe Premiere Pro CC. You can find it at https://www.adobe.com/products/premiere.html. There you can learn more about the features, benefits, and requirements of the software.
      2. -
      3. Choose a plan that suits your needs and budget. You can choose from three plans: Single App, All Apps, or All Apps + Adobe Stock. The Single App plan gives you access to Adobe Premiere Pro CC only, while the All Apps plan gives you access to all Adobe Creative Cloud apps, including Photoshop, Illustrator, After Effects, and more. The All Apps + Adobe Stock plan gives you access to all Adobe Creative Cloud apps plus 10 free images per month from Adobe Stock.
      4. -
      5. Click on Buy Now and follow the instructions. You will need to create an Adobe account or sign in with your existing one. You will also need to provide your payment information and choose a billing cycle (monthly or yearly). You will receive a confirmation email with your serial number and download link.
      6. -
      7. Download and install Adobe Premiere Pro CC on your computer. You can use the download link from the email or from your Adobe account page. You will need to enter your serial number during the installation process. You can also download and install other Adobe Creative Cloud apps if you have chosen the All Apps or All Apps + Adobe Stock plan.
      8. -
      9. Enjoy using Adobe Premiere Pro CC with all its features and benefits. You can launch the software from your desktop or from the Creative Cloud app. You can also access your files, projects, and settings from any device with an internet connection. You can also update your software, get support, and manage your account from the Creative Cloud app.
      10. -
      -

      How to Use Adobe Premiere Pro CC for Free with Creative Cloud

      -

      If you want to use Adobe Premiere Pro CC for free with Creative Cloud, you need to follow these steps:

      -
        -
      1. Visit the official website of Adobe Premiere Pro CC. You can find it at https://www.adobe.com/products/premiere.html. There you can learn more about the features, benefits, and requirements of the software.
      2. -
      3. Click on Free Trial and follow the instructions. You will need to create an Adobe account or sign in with your existing one. You will also need to provide your payment information, but you will not be charged until the end of the trial period. You will receive a confirmation email with your download link.
      4. -
      5. Download and install Adobe Premiere Pro CC on your computer. You can use the download link from the email or from your Adobe account page. You will not need to enter a serial number during the installation process. You can also download and install other Adobe Creative Cloud apps if you want to try them as well.
      6. -
      7. Enjoy using Adobe Premiere Pro CC for free for 7 days. You can launch the software from your desktop or from the Creative Cloud app. You can also access your files, projects, and settings from any device with an internet connection. You can also update your software, get support, and manage your account from the Creative Cloud app.
      8. -
      9. Decide whether you want to continue using Adobe Premiere Pro CC or cancel your trial. If you want to continue using Adobe Premiere Pro CC after the trial period ends, you will need to choose a plan and pay for it. If you don't want to continue using Adobe Premiere Pro CC, you will need to cancel your trial before it ends. You can do this from your Adobe account page or from the Creative Cloud app. If you cancel your trial, you will lose access to Adobe Premiere Pro CC and any other Adobe Creative Cloud apps you have installed.
      10. -
      -

      Conclusion

      -

      In conclusion, if you want to use Adobe Premiere Pro CC for video editing, you should not use a keygen generator to get a serial number. This method is illegal, unsafe, unreliable, and unethical. Instead, you should use a genuine serial number that you can get from the official website or an authorized reseller, or use it for free with Creative Cloud. This way, you can use Adobe Premiere Pro CC legally, safely, reliably, and ethically. You can also enjoy all its features and benefits without any limitations or interruptions.

      -

      How to Use Adobe Premiere Pro CC for Video Editing

      -

      Once you have a genuine serial number for Adobe Premiere Pro CC, you can start using it for video editing. Adobe Premiere Pro CC is a versatile and user-friendly software that can help you create amazing videos with ease. Here are some basic steps to use Adobe Premiere Pro CC for video editing:

      -
        -
      1. Import your media files. You can import your video clips, audio files, images, and other media files into Adobe Premiere Pro CC by using the Media Browser or by dragging and dropping them into the Project panel. You can also capture video from a camera or a tape by using the Capture panel.
      2. -
      3. Create a sequence. A sequence is a timeline where you can arrange your media files and edit them. You can create a sequence by dragging and dropping your media files from the Project panel to the Timeline panel, or by using the New Item menu in the Project panel. You can also choose a preset sequence setting that matches your media format and resolution.
      4. -
      5. Edit your media files. You can edit your media files by using various tools and features in Adobe Premiere Pro CC. You can trim, split, crop, rotate, scale, and position your clips in the Timeline panel. You can also add transitions, effects, titles, and audio to your clips by using the Effects panel, the Essential Graphics panel, and the Essential Sound panel. You can also use keyframes to animate your clips and effects over time.
      6. -
      7. Export your video. When you are done editing your video, you can export it to various formats and destinations by using the Export Settings dialog box or the Adobe Media Encoder. You can choose from different presets or customize your own settings depending on your purpose and preference. You can also upload your video directly to YouTube, Vimeo, or other online platforms by using the Publish tab.
      8. -
      -

      Tips and Tricks to Use Adobe Premiere Pro CC Effectively

      -

      To use Adobe Premiere Pro CC effectively, you need to know some tips and tricks that can help you improve your workflow and creativity. Here are some of them:

      -
        -
      • Use keyboard shortcuts. Keyboard shortcuts can help you perform various tasks faster and easier in Adobe Premiere Pro CC. You can learn the default keyboard shortcuts by hovering over the buttons and menus in the interface, or by using the Keyboard Shortcuts dialog box. You can also customize your own keyboard shortcuts by using the Keyboard Customization dialog box.
      • -
      • Use proxies. Proxies are low-resolution copies of your original media files that can help you edit faster and smoother in Adobe Premiere Pro CC. You can create proxies by using the Ingest Settings dialog box or the Create Proxies command in the Project panel. You can also toggle between proxies and originals by using the Toggle Proxies button in the Program Monitor.
      • -
      • Use adjustment layers. Adjustment layers are transparent layers that can help you apply effects or adjustments to multiple clips at once in Adobe Premiere Pro CC. You can create adjustment layers by using the New Item menu in the Project panel or by dragging and dropping them from the Effects panel. You can also adjust their opacity, blending mode, and mask by using the Effect Controls panel.
      • -
      • Use markers. Markers are visual indicators that can help you organize and navigate your project in Adobe Premiere Pro CC. You can add markers to your clips, sequences, or timeline by using the Marker menu or by pressing M on your keyboard. You can also edit their color, name, duration, comments, and type by using the Markers panel.
      • -
      -

      Conclusion

      -

      In conclusion, if you want to use Adobe Premiere Pro CC for video editing, you should not use a keygen generator to get a serial number. This method is illegal, unsafe, unreliable, and unethical. Instead, you should use a genuine serial number that you can get from the official website or an authorized reseller, or use it for free with Creative Cloud. This way, you can use Adobe Premiere Pro CC legally, safely, reliably, and ethically. You can also enjoy all its features and benefits without any limitations or interruptions. By following these steps and tips, you can use Adobe Premiere Pro CC effectively and create amazing videos with ease.

      -

    
      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Crash Bandicoot N. Sane Trilogy [HOT Crack Serial Key.md b/spaces/diacanFperku/AutoGPT/Crash Bandicoot N. Sane Trilogy [HOT Crack Serial Key.md deleted file mode 100644 index e119d6e8e645d099858d11ce8ab54b94edbf08c1..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Crash Bandicoot N. Sane Trilogy [HOT Crack Serial Key.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Crash Bandicoot N. Sane Trilogy [Crack Serial Key


      Downloadhttps://gohhs.com/2uFVyl



      - -Crash Bandicoot N Sane Trilogy keygen, Crash Bandicoot N Sane Trilogy serial keygen, Crash Bandicoot N Sane Trilogy licence keygen, ... 4d29de3e1b
      -
      -
      -

      diff --git "a/spaces/diacanFperku/AutoGPT/Divinity Original Sin II Definitive Edition\302\2403.6.36.3440 Crack Mac Osx !EXCLUSIVE!.md" "b/spaces/diacanFperku/AutoGPT/Divinity Original Sin II Definitive Edition\302\2403.6.36.3440 Crack Mac Osx !EXCLUSIVE!.md" deleted file mode 100644 index de5bb3012529e31d8309dcac71a584bc7e8be1bb..0000000000000000000000000000000000000000 --- "a/spaces/diacanFperku/AutoGPT/Divinity Original Sin II Definitive Edition\302\2403.6.36.3440 Crack Mac Osx !EXCLUSIVE!.md" +++ /dev/null @@ -1,9 +0,0 @@ -

      Divinity Original Sin II Definitive Edition 3.6.36.3440 Crack Mac Osx


      Download File ►►► https://gohhs.com/2uFUy4



      - -Divinity Original Sin the Board Game is a cooperative adventure game set in the Chronicle system. You and your comrades will have to unite and fight against insidious forces. -Unlike Risk, Divinity Original Sin the Board Game is a brand new game that is a reimagining of the original board game. -You can create your own deck and play at any time without being bound by hard and fast rules. -Players have the opportunity to fight each other and other players. 8a78ff9644
      -
      -
      -

      diff --git a/spaces/diacanFperku/AutoGPT/Honey Cave 2 Jar [PORTABLE].md b/spaces/diacanFperku/AutoGPT/Honey Cave 2 Jar [PORTABLE].md deleted file mode 100644 index 4790f0ff6e6dcc0ea894d6ca5551f7497df8f65f..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Honey Cave 2 Jar [PORTABLE].md +++ /dev/null @@ -1,120 +0,0 @@ - -

      Honey Cave 2 Jar: A Fun and Addictive Game for Your Mobile Device

      -

      If you are looking for a new and exciting game to play on your mobile device, you should check out Honey Cave 2 Jar. This is a game that combines puzzle, adventure and arcade elements in a colorful and charming world. You will have to help the cute honey bee collect nectar and make honey while avoiding obstacles and enemies. You will also have to use the honey blocks to heal your friends and create new paths. Honey Cave 2 Jar is a game that will challenge your skills and creativity while providing hours of entertainment.

      -

      How to Play Honey Cave 2 Jar?

      -

      Playing Honey Cave 2 Jar is very simple and intuitive. You just need to tap the screen to make the honey bee fly and collect nectar from the flowers. You can also swipe the screen to move the honey blocks around and create new paths or bridges. You can use the honey blocks to heal your friends who are trapped or injured by tapping on them. You can also use the honey blocks to activate switches or open doors. You will have to avoid or destroy the enemies that will try to stop you, such as spiders, bats, wasps or bears. You will also have to be careful not to fall into pits or spikes or run out of time.

      -

      honey cave 2 jar


    DOWNLOAD: https://gohhs.com/2uFUrT
    



      -

      What are the Features of Honey Cave 2 Jar?

      -

      Honey Cave 2 Jar has many features that make it a fun and addictive game for all ages. Here are some of them:

      -
        -
      • It has over 100 levels of increasing difficulty and variety.
      • -
      • It has beautiful graphics and animations that create a vibrant and charming world.
      • -
      • It has catchy music and sound effects that enhance the gameplay experience.
      • -
      • It has easy and intuitive controls that suit any mobile device.
      • -
      • It has a leaderboard and achievements system that lets you compete with your friends and other players.
      • -
      • It has a level editor that lets you create your own levels and share them with others.
      • -
      -

      How to Download Honey Cave 2 Jar?

      -

      Downloading Honey Cave 2 Jar is very easy and fast. You can download it from various sources depending on your device and preference. Here are some of them:

      -
        -
      • You can download it from Box10, a website that offers free online games for various platforms.
      • -
      • You can download it from npm, a package manager that lets you install and use various software modules.
      • -
      • You can download it from SoundCloud, a platform that lets you stream and download music and audio files.
      • -
      • You can download it from SoundCloud, another platform that lets you stream and download music and audio files.
      • -
      -

      Conclusion

      -

      Honey Cave 2 Jar is a game that will keep you entertained and engaged for hours. It is a game that combines puzzle, adventure and arcade elements in a colorful and charming world. It is a game that will challenge your skills and creativity while providing hours of entertainment. If you want to try Honey Cave 2 Jar, you can download it from any of the sources mentioned above.

      -

      What are the Tips and Tricks for Honey Cave 2 Jar?

      -

      Honey Cave 2 Jar is a game that requires skill and strategy to complete. Here are some tips and tricks that can help you master the game:

      -
        -
      • Use the honey blocks wisely. You can use them to heal your friends, create new paths, activate switches or open doors. But remember, you have a limited number of honey blocks, so don't waste them.
      • -
      • Avoid or destroy the enemies. You can avoid the enemies by flying over them or hiding behind the honey blocks. You can also destroy them by dropping honey blocks on them or using special items like bombs or rockets.
      • -
      • Collect the stars and coins. You can collect the stars and coins that are scattered around the levels. The stars will increase your score and the coins will let you buy new items and upgrades from the shop.
      • -
      • Use the items and upgrades. You can buy items and upgrades from the shop using the coins you collected. The items will help you in different ways, such as giving you extra lives, shields, magnets or speed boosts. The upgrades will improve your abilities, such as increasing your honey capacity, flight speed or attack power.
      • -
      • Try the level editor. You can create your own levels using the level editor and share them with other players. You can also play the levels created by other players and rate them.
      • -
      -

      How to Review Honey Cave 2 Jar?

      -

      If you like Honey Cave 2 Jar and want to share your opinion with others, you can write a review by following these steps:

      -
        -
    1. Go to Box10, npm, or one of the SoundCloud pages listed above, depending on where you downloaded the game from.
    
      2. -
      3. Find the game page and scroll down to the review section.
      4. -
      5. Rate the game from 1 to 5 stars and write your comment.
      6. -
      7. Enter your name and email address.
      8. -
      9. Submit your review and wait for it to be approved.
      10. -
      -

      Why Should You Play Honey Cave 2 Jar?

      -

      Honey Cave 2 Jar is a game that will appeal to anyone who loves puzzle, adventure and arcade games. It is a game that will make you think and act fast, while having fun and exploring a beautiful world. It is a game that will test your skills and creativity, while rewarding you with coins and stars. It is a game that will let you express yourself and share your creations with others. It is a game that will keep you entertained and engaged for hours.

      -

      What are the Pros and Cons of Honey Cave 2 Jar?

      -

      Honey Cave 2 Jar is a game that has many pros and cons. Here are some of them:

      -

      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      ProsCons
      It is free to download and play.It may contain ads or in-app purchases.
      It has over 100 levels of increasing difficulty and variety.It may be too hard or too easy for some players.
      It has beautiful graphics and animations that create a vibrant and charming world.It may not run smoothly on some devices or platforms.
      It has catchy music and sound effects that enhance the gameplay experience.It may be annoying or repetitive for some players.
      It has easy and intuitive controls that suit any mobile device.It may not be responsive or accurate on some devices or platforms.
      It has a leaderboard and achievements system that lets you compete with your friends and other players.It may require an internet connection or an account to access them.
      It has a level editor that lets you create your own levels and share them with others.It may have limited options or features for creating levels.
      -

      Where Can You Find More Information About Honey Cave 2 Jar?

      -

      If you want to find more information about Honey Cave 2 Jar, you can visit the following sources:

      -
        -
      • You can visit the official website of Box10, the developer and publisher of Honey Cave 2 Jar. There you can find more games, news, updates and support.
      • -
      • You can visit the official page of honey_cave_2_jar_exclusive__jh on npm, the package manager that lets you install and use various software modules. There you can find more details, documentation and code.
      • -
      • You can visit the official profiles of Andrew Whatley and Christopher Bjerke on SoundCloud, the platform that lets you stream and download music and audio files. There you can find more tracks, playlists and comments.
      • -
      -

      Conclusion

      -

      Honey Cave 2 Jar is a game that combines puzzle, adventure and arcade elements in a colorful and charming world. You will have to help the cute honey bee collect nectar and make honey while avoiding obstacles and enemies. You will also have to use the honey blocks to heal your friends and create new paths. Honey Cave 2 Jar is a game that will challenge your skills and creativity while providing hours of entertainment. You can download it from various sources depending on your device and preference.

      -

      How to Download Honey Cave 2 Jar from SoundCloud?

      -

      If you want to download Honey Cave 2 Jar from SoundCloud, you can do so easily by following these steps:

      -
        -
      1. Go to this page or this page, depending on which version of the game you want to download.
      2. -
      3. Click on the "More" button and select "Download file".
      4. -
      5. Save the file to your computer and run it.
      6. -
      7. Follow the instructions on the screen to install Honey Cave 2 Jar.
      8. -
      9. You have successfully downloaded Honey Cave 2 Jar from SoundCloud.
      10. -
      -

      What are the Differences Between Honey Cave 2 Jar ~UPD~ and Honey Cave 2 Jar =LINK=?

      -

      Honey Cave 2 Jar ~UPD~ and Honey Cave 2 Jar =LINK= are two versions of the same game that have some differences. Here are some of them:

      -
        -
      • Honey Cave 2 Jar ~UPD~ is an updated version of the game that has improved graphics, sound effects and gameplay. It also has more levels, items and features than the original version.
      • -
      • Honey Cave 2 Jar =LINK= is a link version of the game that lets you play online with other players. It also has a chat system, a ranking system and a level sharing system that let you communicate and compete with others.
      • -
      -

      How to Play Honey Cave 2 Jar Online?

      -

      If you want to play Honey Cave 2 Jar online, you can do so easily by following these steps:

      -
        -
      1. Go to this page and click on the "Honey Cave 2 Jar Checked Click Here" button.
      2. -
      3. You will be redirected to a page where you can download or play Honey Cave 2 Jar online.
      4. -
      5. If you want to download the game, click on the "Download" button and save the file to your computer. Then run it and follow the instructions on the screen to install it.
      6. -
      7. If you want to play the game online, click on the "Play" button and wait for the game to load.
      8. -
      9. You have successfully played Honey Cave 2 Jar online.
      10. -
      -

      Summary

      -

      Honey Cave 2 Jar is a game that combines puzzle, adventure and arcade elements in a colorful and charming world. You will have to help the cute honey bee collect nectar and make honey while avoiding obstacles and enemies. You will also have to use the honey blocks to heal your friends and create new paths. Honey Cave 2 Jar is a game that will challenge your skills and creativity while providing hours of entertainment. You can download it from various sources depending on your device and preference. You can also play it online with other players and share your levels with them.

      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Marble It Up! Game Free Download Full Version For Pc _HOT_.md b/spaces/diacanFperku/AutoGPT/Marble It Up! Game Free Download Full Version For Pc _HOT_.md deleted file mode 100644 index 8430270200bc91e6fe913bdd4836e018773aecb9..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Marble It Up! Game Free Download Full Version For Pc _HOT_.md +++ /dev/null @@ -1,7 +0,0 @@ -
      -

    Once you get your feet wet with the Marble Blast series, you're in for a treat. Not only are the levels sublimely trippy and addicting, but the game's art style is gorgeous as well. When you're viewing the levels for the first time you'll feel like you're gazing through a rainbow into an entirely new dimension - and yet, once you're in it, you'll feel completely at home. This feels like a very polished indie release, one that you could happily play for hours and hours. From the simple jumping, rolling and clicking to the audiovisual treats, Marble It Up is a joy to play.
    

      -

      Marble It Up! Game Free Download Full Version For Pc


      Download ☆☆☆☆☆ https://gohhs.com/2uFVKv



      -

    When you play it, you'll notice the level design is just as vibrant, zippy and accessible as the game's core mechanics. The levels, and even more so the aesthetic, range from the LSD-inspired to the geometric to the rather more normcore. It's almost as if you're watching a cartoon come to life. And then, when you take a step back and look at the wonderfully psychedelic game, you'll really get a sense of just how much work went into the game's creation. In the end you'll feel like a small child in a huge room full of marvels, playing with a virtual toy.
    

      -

    Notice: this game is already pre-installed for you, meaning you don't have to install it. If you get any missing DLL errors, make sure to look for a _redist or _commonredist folder and install DirectX, VCRedist and all the other programs in that folder. You need these programs for the game to run. Look for a "how to run game!!.txt" file for more help. Also, be sure to right-click the exe and always select "Run as administrator" if you're having problems saving the game. Always disable your antivirus before extracting the game to prevent it from deleting the crack files. If you need additional help, click here
    

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Pdf Snake 4.81 Serial Number.md b/spaces/diacanFperku/AutoGPT/Pdf Snake 4.81 Serial Number.md deleted file mode 100644 index 9147ef68de1380985f6e1b33adb0fe9266ae5285..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Pdf Snake 4.81 Serial Number.md +++ /dev/null @@ -1,6 +0,0 @@ -
      -

      regarding vertebrate species that exhibit adult body mass variation in response to variation in body fat, most studies that used other means of evaluating bcis have found that bcs and the ols residual index perform better than the residual fat residual index in predicting variation in adult body mass, a finding that we confirmed in snakes. in a house mouse study, the correlation coefficient for the ols residual index was relatively strong (r = 0.89) in adult females but not males (r = 0.41). by contrast, the correlation between the residual fat residual index and adult mass was slightly stronger in males (r = 0.58) than in females (r = 0.46), and the 95% confidence limits for the correlations were very similar (female: -0.08 to 0.41; male: -0.1 to 0.70), suggesting that the variation in adult body mass is better predicted by the ols residual index in mice than by the residual fat residual index. for juvenile house mice, the correlation coefficient between the residual fat residual index and adult body mass was also weakly positive but not significantly different from zero (r = 0.13) [ 36 ]. this correlation between bci and adult mass contrasts with other species with adult body-size variation in response to variation in body fat, such as deer mice (r = 0.47 [ 36 ], r = -0.36 [ 22 ]), house mice (r = 0.77 [ 22 ], r = 0.89 [ 36 ], r = 0.92 [ 21 ]), and starlings (r = 0.73 [ 27 ]).

      -

      pdf snake 4.81 serial number


      Download File ✓✓✓ https://gohhs.com/2uFTv0



      -

      we did not use claw measurements to evaluate bci performance in part because we found claw length to be unreliable in our dataset. claw length was typically affected by the tissue that encased the claw in hard-shelled, preserved snakes, as well as by the degree of exposure to air, and from these sources of variation, claw length could be considered to be more prone to error than claws in soft-shelled snakes that have well-defined hard boundaries. claw length also varied greatly in populations and is often closely associated with svl (e.g., [ 24 ]), so it is not surprising that claw length in individual snakes is less useful as a measure of size in our dataset. claw measurements are much less affected by individual variation in claw length than are measurements of other skeletal structures such as head and toe lengths [ 27 ], which were used in this study.
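    For context on the indices compared above: an OLS residual index is obtained by regressing body mass on a linear measure of structural size (snout-vent length in snakes) and taking the residuals, so an animal with a positive residual is heavier than expected for its length. The snippet below is a minimal, self-contained sketch of that calculation with invented example numbers (not data from any study), using plain NumPy; the final correlation is the kind of statistic the passage compares across indices.

    ```python
    # Illustrative only: invented example values, not data from the study.
    import numpy as np

    # Structural size (snout-vent length, cm), body mass (g), and dissected fat mass (g)
    svl  = np.array([42.0, 48.5, 51.0, 55.2, 60.1, 63.4])
    mass = np.array([55.0, 70.0, 68.0, 90.0, 98.0, 120.0])
    fat  = np.array([4.0, 6.5, 5.0, 9.0, 8.5, 13.0])

    # OLS residual index: residuals from regressing log(mass) on log(size)
    slope, intercept = np.polyfit(np.log(svl), np.log(mass), 1)
    predicted = intercept + slope * np.log(svl)
    residual_index = np.log(mass) - predicted

    # Evaluate the index the way the text does: correlate it with fat stores
    r = np.corrcoef(residual_index, fat)[0, 1]
    print(f"Pearson r between OLS residual index and fat mass: {r:.2f}")
    ```
    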

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/diffle/sd-xl/README.md b/spaces/diffle/sd-xl/README.md deleted file mode 100644 index 57b1b2895dfe316ff48cedf9b29281759bd406a9..0000000000000000000000000000000000000000 --- a/spaces/diffle/sd-xl/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Stable Diffusion XL (1.0) -emoji: 🌍 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.39.0 -app_file: sd-xl.py -pinned: true -license: creativeml-openrail-m ---- - -🌍 This is space with model Stable Diffusion XL (1.0)! \ No newline at end of file diff --git a/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/text/chinese.py b/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/text/chinese.py deleted file mode 100644 index 276753880b73de2e8889dcb2101cd98c09e0710b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/text/chinese.py +++ /dev/null @@ -1,193 +0,0 @@ -import os -import re - -import cn2an -from pypinyin import lazy_pinyin, Style - -from text import symbols -from text.symbols import punctuation -from text.tone_sandhi import ToneSandhi - -current_file_path = os.path.dirname(__file__) -pinyin_to_symbol_map = {line.split("\t")[0]: line.strip().split("\t")[1] for line in - open(os.path.join(current_file_path, 'opencpop-strict.txt')).readlines()} - -import jieba.posseg as psg - - -rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - '$': '.', - '“': "'", - '”': "'", - '‘': "'", - '’': "'", - '(': "'", - ')': "'", - '(': "'", - ')': "'", - '《': "'", - '》': "'", - '【': "'", - '】': "'", - '[': "'", - ']': "'", - '—': "-", - '~': "-", - '~': "-", - '「': "'", - '」': "'", - -} - -tone_modifier = ToneSandhi() - -def replace_punctuation(text): - text = text.replace("嗯", "恩").replace("呣","母") - pattern = re.compile('|'.join(re.escape(p) for p in rep_map.keys())) - - replaced_text = pattern.sub(lambda x: rep_map[x.group()], text) - - replaced_text = re.sub(r'[^\u4e00-\u9fa5'+"".join(punctuation)+r']+', '', replaced_text) - - return replaced_text - -def g2p(text): - pattern = r'(?<=[{0}])\s*'.format(''.join(punctuation)) - sentences = [i for i in re.split(pattern, text) if i.strip()!=''] - phones, tones, word2ph = _g2p(sentences) - assert sum(word2ph) == len(phones) - assert len(word2ph) == len(text) #Sometimes it will crash,you can add a try-catch. 
- phones = ['_'] + phones + ["_"] - tones = [0] + tones + [0] - word2ph = [1] + word2ph + [1] - return phones, tones, word2ph - - -def _get_initials_finals(word): - initials = [] - finals = [] - orig_initials = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.INITIALS) - orig_finals = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for c, v in zip(orig_initials, orig_finals): - initials.append(c) - finals.append(v) - return initials, finals - - -def _g2p(segments): - phones_list = [] - tones_list = [] - word2ph = [] - for seg in segments: - pinyins = [] - # Replace all English words in the sentence - seg = re.sub('[a-zA-Z]+', '', seg) - seg_cut = psg.lcut(seg) - initials = [] - finals = [] - seg_cut = tone_modifier.pre_merge_for_modify(seg_cut) - for word, pos in seg_cut: - if pos == 'eng': - continue - sub_initials, sub_finals = _get_initials_finals(word) - sub_finals = tone_modifier.modified_tone(word, pos, - sub_finals) - initials.append(sub_initials) - finals.append(sub_finals) - - # assert len(sub_initials) == len(sub_finals) == len(word) - initials = sum(initials, []) - finals = sum(finals, []) - # - for c, v in zip(initials, finals): - raw_pinyin = c+v - # NOTE: post process for pypinyin outputs - # we discriminate i, ii and iii - if c == v: - assert c in punctuation - phone = [c] - tone = '0' - word2ph.append(1) - else: - v_without_tone = v[:-1] - tone = v[-1] - - pinyin = c+v_without_tone - assert tone in '12345' - - if c: - # 多音节 - v_rep_map = { - "uei": 'ui', - 'iou': 'iu', - 'uen': 'un', - } - if v_without_tone in v_rep_map.keys(): - pinyin = c+v_rep_map[v_without_tone] - else: - # 单音节 - pinyin_rep_map = { - 'ing': 'ying', - 'i': 'yi', - 'in': 'yin', - 'u': 'wu', - } - if pinyin in pinyin_rep_map.keys(): - pinyin = pinyin_rep_map[pinyin] - else: - single_rep_map = { - 'v': 'yu', - 'e': 'e', - 'i': 'y', - 'u': 'w', - } - if pinyin[0] in single_rep_map.keys(): - pinyin = single_rep_map[pinyin[0]]+pinyin[1:] - - assert pinyin in pinyin_to_symbol_map.keys(), (pinyin, seg, raw_pinyin) - phone = pinyin_to_symbol_map[pinyin].split(' ') - word2ph.append(len(phone)) - - phones_list += phone - tones_list += [int(tone)] * len(phone) - return phones_list, tones_list, word2ph - - - -def text_normalize(text): - numbers = re.findall(r'\d+(?:\.?\d+)?', text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - text = replace_punctuation(text) - return text - -def get_bert_feature(text, word2ph): - from text import chinese_bert - return chinese_bert.get_bert_feature(text, word2ph) - -if __name__ == '__main__': - from text.chinese_bert import get_bert_feature - text = "啊!但是《原神》是由,米哈\游自主, [研发]的一款全.新开放世界.冒险游戏" - text = text_normalize(text) - print(text) - phones, tones, word2ph = g2p(text) - bert = get_bert_feature(text, word2ph) - - print(phones, tones, word2ph, bert.shape) - - -# # 示例用法 -# text = "这是一个示例文本:,你好!这是一个测试...." 
-# print(g2p_paddle(text)) # 输出: 这是一个示例文本你好这是一个测试 diff --git a/spaces/digitalxingtong/Lixiang-Bert-Vits2/bert_gen.py b/spaces/digitalxingtong/Lixiang-Bert-Vits2/bert_gen.py deleted file mode 100644 index 467655b2c4171608ad690fe7dec350db85f84f1b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Lixiang-Bert-Vits2/bert_gen.py +++ /dev/null @@ -1,53 +0,0 @@ -import torch -from torch.utils.data import DataLoader -from multiprocessing import Pool -import commons -import utils -from data_utils import TextAudioSpeakerLoader, TextAudioSpeakerCollate -from tqdm import tqdm -import warnings - -from text import cleaned_text_to_sequence, get_bert - -config_path = 'configs/config.json' -hps = utils.get_hparams_from_file(config_path) - -def process_line(line): - _id, spk, language_str, text, phones, tone, word2ph = line.strip().split("|") - phone = phones.split(" ") - tone = [int(i) for i in tone.split(" ")] - word2ph = [int(i) for i in word2ph.split(" ")] - w2pho = [i for i in word2ph] - word2ph = [i for i in word2ph] - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - wav_path = f'{_id}' - - bert_path = wav_path.replace(".wav", ".bert.pt") - try: - bert = torch.load(bert_path) - assert bert.shape[-1] == len(phone) - except: - bert = get_bert(text, word2ph, language_str) - assert bert.shape[-1] == len(phone) - torch.save(bert, bert_path) - - -if __name__ == '__main__': - lines = [] - with open(hps.data.training_files, encoding='utf-8' ) as f: - lines.extend(f.readlines()) - - # with open(hps.data.validation_files, encoding='utf-8' ) as f: - # lines.extend(f.readlines()) - - with Pool(processes=2) as pool: #A100 40GB suitable config,if coom,please decrease the processess number. - for _ in tqdm(pool.imap_unordered(process_line, lines)): - pass diff --git a/spaces/digitalxingtong/Xingtong-2dall-Bert-VITS2/text/__init__.py b/spaces/digitalxingtong/Xingtong-2dall-Bert-VITS2/text/__init__.py deleted file mode 100644 index 7566bf351ca9b95af9cdc6d729557a9da083800f..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Xingtong-2dall-Bert-VITS2/text/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -from text.symbols import * - - -_symbol_to_id = {s: i for i, s in enumerate(symbols)} - -def cleaned_text_to_sequence(cleaned_text, tones, language): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. 
- Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - phones = [_symbol_to_id[symbol] for symbol in cleaned_text] - tone_start = language_tone_start_map[language] - tones = [i + tone_start for i in tones] - lang_id = language_id_map[language] - lang_ids = [lang_id for i in phones] - return phones, tones, lang_ids - -def get_bert(norm_text, word2ph, language): - from .chinese_bert import get_bert_feature as zh_bert - from .english_bert_mock import get_bert_feature as en_bert - lang_bert_func_map = { - 'ZH': zh_bert, - 'EN': en_bert - } - bert = lang_bert_func_map[language](norm_text, word2ph) - return bert diff --git a/spaces/dipesh/JarvisAI-Intent-Classification-Bert-Base-Cased/README.md b/spaces/dipesh/JarvisAI-Intent-Classification-Bert-Base-Cased/README.md deleted file mode 100644 index d9dc426da3df9500d3f7aa673bac32f449debce8..0000000000000000000000000000000000000000 --- a/spaces/dipesh/JarvisAI-Intent-Classification-Bert-Base-Cased/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: JarvisAI Intent Classification Bert Base Cased -emoji: 💻 -colorFrom: purple -colorTo: gray -sdk: gradio -sdk_version: 3.0.6 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/dmeck/RVC-Speakers/bark/assets/prompts/readme.md b/spaces/dmeck/RVC-Speakers/bark/assets/prompts/readme.md deleted file mode 100644 index b01ae915d015f80c164253ac79e5e97e9b6e04b5..0000000000000000000000000000000000000000 --- a/spaces/dmeck/RVC-Speakers/bark/assets/prompts/readme.md +++ /dev/null @@ -1,30 +0,0 @@ -# Example Prompts Data - -## Version Two -The `v2` prompts are better engineered to follow text with a consistent voice. -To use them, simply include `v2` in the prompt. For example -```python -from bark import generate_audio -text_prompt = "madam I'm adam" -audio_array = generate_audio(text_prompt, history_prompt="v2/en_speaker_1") -``` - -## Prompt Format -The provided data is in the .npz format, which is a file format used in Python for storing arrays and data. The data contains three arrays: semantic_prompt, coarse_prompt, and fine_prompt. - -```semantic_prompt``` - -The semantic_prompt array contains a sequence of token IDs generated by the BERT tokenizer from Hugging Face. These tokens encode the text input and are used as an input to generate the audio output. The shape of this array is (n,), where n is the number of tokens in the input text. - -```coarse_prompt``` - -The coarse_prompt array is an intermediate output of the text-to-speech pipeline, and contains token IDs generated by the first two codebooks of the EnCodec Codec from Facebook. This step converts the semantic tokens into a different representation that is better suited for the subsequent step. The shape of this array is (2, m), where m is the number of tokens after conversion by the EnCodec Codec. - -```fine_prompt``` - -The fine_prompt array is a further processed output of the pipeline, and contains 8 codebooks from the EnCodec Codec. These codebooks represent the final stage of tokenization, and the resulting tokens are used to generate the audio output. The shape of this array is (8, p), where p is the number of tokens after further processing by the EnCodec Codec. - -Overall, these arrays represent different stages of a text-to-speech pipeline that converts text input into synthesized audio output. 
The semantic_prompt array represents the input text, while coarse_prompt and fine_prompt represent intermediate and final stages of tokenization, respectively. - - - diff --git a/spaces/dolphinfusion/SD-XL/sd-xl.dolphin.script.py b/spaces/dolphinfusion/SD-XL/sd-xl.dolphin.script.py deleted file mode 100644 index c277749fe741ca7b2173926fdf0c2e37be449414..0000000000000000000000000000000000000000 --- a/spaces/dolphinfusion/SD-XL/sd-xl.dolphin.script.py +++ /dev/null @@ -1,25 +0,0 @@ -import gradio as gr - -title = "🌍 DolphinFusion SD-XL" - -description = \ -""" -

      ✨️ Generate images on Stable Diffusion XL (SD-XL) for free!

      -""" - -article = """ -

      - 👑 Owner -

      -""" - -theme = gr.themes.Monochrome( - primary_hue="indigo", - secondary_hue="blue", - neutral_hue="slate", - radius_size=gr.themes.sizes.radius_sm, - font=[gr.themes.GoogleFont("Ubuntu"), "ui-sans-serif", "system-ui", "sans-serif"], -) - - -gr.Interface.load("models/stabilityai/stable-diffusion-xl-base-1.0", title=title, description=description, article=article, theme=theme).launch() \ No newline at end of file diff --git a/spaces/dotmet/Real-ESRGAN-Enhanced-Anime-Diffusion/README.md b/spaces/dotmet/Real-ESRGAN-Enhanced-Anime-Diffusion/README.md deleted file mode 100644 index 5943ab36d0080cfc7f949c27b1b61fc914ebe600..0000000000000000000000000000000000000000 --- a/spaces/dotmet/Real-ESRGAN-Enhanced-Anime-Diffusion/README.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -license: bsd -title: Real-ESRGAN-Enhanced-Anime-Diffusion -sdk: gradio -emoji: 🚀 -colorFrom: purple -colorTo: pink -sdk_version: 3.16.1 -app_file: app.py ---- -# Real-ESRGAN-Enhanced-Anime-Diffusion -Generate high resolution and quality anime pictures from texts or existed images. - -(Based on [Anything V3](https://huggingface.co/Linaqruf/anything-v3.0) and [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN)) - -### Colab demo : [Demo](https://colab.research.google.com/drive/1HpLkNnBfbrLD6t7cGc2i2gVAwiA_V_qp?usp=sharing) - -## Installation - -This project requires: - Python >= 3.7 (Recommend to use [Anaconda](https://www.anaconda.com/download/#linux) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html)) - -clone this repository - -```bash - git clone https://github.com/dotmet/Real-ESRGAN-Enhanced-Anime-Diffusion.git -``` - -install depencies - -```bash - cd Real-ESRGAN-Enhanced-Anime-Diffusion - pip install -r requirements.txt -``` - -## Run - -```bash - python inference.py -``` - Type ```python inference.py -h``` in command line to see more options. 
- -## Run Web UI - -``` - python app.py -``` \ No newline at end of file diff --git a/spaces/dwolfe66/text-generation-webui-space/server.py b/spaces/dwolfe66/text-generation-webui-space/server.py deleted file mode 100644 index 6a17f26287d94e9187a4f315fe9fb7d2dc6ec171..0000000000000000000000000000000000000000 --- a/spaces/dwolfe66/text-generation-webui-space/server.py +++ /dev/null @@ -1,382 +0,0 @@ -import gc -import io -import json -import re -import sys -import time -import zipfile -from pathlib import Path - -import gradio as gr -import torch - -import modules.chat as chat -import modules.extensions as extensions_module -import modules.shared as shared -import modules.ui as ui -from modules.html_generator import generate_chat_html -from modules.models import load_model, load_soft_prompt -from modules.text_generation import generate_reply - -# Loading custom settings -settings_file = None -if shared.args.settings is not None and Path(shared.args.settings).exists(): - settings_file = Path(shared.args.settings) -elif Path('settings.json').exists(): - settings_file = Path('settings.json') -if settings_file is not None: - print(f"Loading settings from {settings_file}...") - new_settings = json.loads(open(settings_file, 'r').read()) - for item in new_settings: - shared.settings[item] = new_settings[item] - -def get_available_models(): - if shared.args.flexgen: - return sorted([re.sub('-np$', '', item.name) for item in list(Path('models/').glob('*')) if item.name.endswith('-np')], key=str.lower) - else: - return sorted([item.name for item in list(Path('models/').glob('*')) if not item.name.endswith(('.txt', '-np', '.pt'))], key=str.lower) - -def get_available_presets(): - return sorted(set(map(lambda x : '.'.join(str(x.name).split('.')[:-1]), Path('presets').glob('*.txt'))), key=str.lower) - -def get_available_characters(): - return ['None'] + sorted(set(map(lambda x : '.'.join(str(x.name).split('.')[:-1]), Path('characters').glob('*.json'))), key=str.lower) - -def get_available_extensions(): - return sorted(set(map(lambda x : x.parts[1], Path('extensions').glob('*/script.py'))), key=str.lower) - -def get_available_softprompts(): - return ['None'] + sorted(set(map(lambda x : '.'.join(str(x.name).split('.')[:-1]), Path('softprompts').glob('*.zip'))), key=str.lower) - -def load_model_wrapper(selected_model): - if selected_model != shared.model_name: - shared.model_name = selected_model - shared.model = shared.tokenizer = None - if not shared.args.cpu: - gc.collect() - torch.cuda.empty_cache() - shared.model, shared.tokenizer = load_model(shared.model_name) - - return selected_model - -def load_preset_values(preset_menu, return_dict=False): - generate_params = { - 'do_sample': True, - 'temperature': 1, - 'top_p': 1, - 'typical_p': 1, - 'repetition_penalty': 1, - 'top_k': 50, - 'num_beams': 1, - 'penalty_alpha': 0, - 'min_length': 0, - 'length_penalty': 1, - 'no_repeat_ngram_size': 0, - 'early_stopping': False, - } - with open(Path(f'presets/{preset_menu}.txt'), 'r') as infile: - preset = infile.read() - for i in preset.splitlines(): - i = i.rstrip(',').strip().split('=') - if len(i) == 2 and i[0].strip() != 'tokens': - generate_params[i[0].strip()] = eval(i[1].strip()) - - generate_params['temperature'] = min(1.99, generate_params['temperature']) - - if return_dict: - return generate_params - else: - return generate_params['do_sample'], generate_params['temperature'], generate_params['top_p'], generate_params['typical_p'], generate_params['repetition_penalty'], generate_params['top_k'], 
generate_params['min_length'], generate_params['no_repeat_ngram_size'], generate_params['num_beams'], generate_params['penalty_alpha'], generate_params['length_penalty'], generate_params['early_stopping'] - -def upload_soft_prompt(file): - with zipfile.ZipFile(io.BytesIO(file)) as zf: - zf.extract('meta.json') - j = json.loads(open('meta.json', 'r').read()) - name = j['name'] - Path('meta.json').unlink() - - with open(Path(f'softprompts/{name}.zip'), 'wb') as f: - f.write(file) - - return name - -def create_settings_menus(default_preset): - generate_params = load_preset_values(default_preset if not shared.args.flexgen else 'Naive', return_dict=True) - - with gr.Row(): - with gr.Column(): - with gr.Row(): - shared.gradio['model_menu'] = gr.Dropdown(choices=available_models, value=shared.model_name, label='Model') - ui.create_refresh_button(shared.gradio['model_menu'], lambda : None, lambda : {'choices': get_available_models()}, 'refresh-button') - with gr.Column(): - with gr.Row(): - shared.gradio['preset_menu'] = gr.Dropdown(choices=available_presets, value=default_preset if not shared.args.flexgen else 'Naive', label='Generation parameters preset') - ui.create_refresh_button(shared.gradio['preset_menu'], lambda : None, lambda : {'choices': get_available_presets()}, 'refresh-button') - - with gr.Accordion('Custom generation parameters', open=False, elem_id='accordion'): - with gr.Row(): - with gr.Column(): - shared.gradio['temperature'] = gr.Slider(0.01, 1.99, value=generate_params['temperature'], step=0.01, label='temperature') - shared.gradio['repetition_penalty'] = gr.Slider(1.0, 2.99, value=generate_params['repetition_penalty'],step=0.01,label='repetition_penalty') - shared.gradio['top_k'] = gr.Slider(0,200,value=generate_params['top_k'],step=1,label='top_k') - shared.gradio['top_p'] = gr.Slider(0.0,1.0,value=generate_params['top_p'],step=0.01,label='top_p') - with gr.Column(): - shared.gradio['do_sample'] = gr.Checkbox(value=generate_params['do_sample'], label='do_sample') - shared.gradio['typical_p'] = gr.Slider(0.0,1.0,value=generate_params['typical_p'],step=0.01,label='typical_p') - shared.gradio['no_repeat_ngram_size'] = gr.Slider(0, 20, step=1, value=generate_params['no_repeat_ngram_size'], label='no_repeat_ngram_size') - shared.gradio['min_length'] = gr.Slider(0, 2000, step=1, value=generate_params['min_length'] if shared.args.no_stream else 0, label='min_length', interactive=shared.args.no_stream) - - gr.Markdown('Contrastive search:') - shared.gradio['penalty_alpha'] = gr.Slider(0, 5, value=generate_params['penalty_alpha'], label='penalty_alpha') - - gr.Markdown('Beam search (uses a lot of VRAM):') - with gr.Row(): - with gr.Column(): - shared.gradio['num_beams'] = gr.Slider(1, 20, step=1, value=generate_params['num_beams'], label='num_beams') - with gr.Column(): - shared.gradio['length_penalty'] = gr.Slider(-5, 5, value=generate_params['length_penalty'], label='length_penalty') - shared.gradio['early_stopping'] = gr.Checkbox(value=generate_params['early_stopping'], label='early_stopping') - - with gr.Accordion('Soft prompt', open=False, elem_id='accordion'): - with gr.Row(): - shared.gradio['softprompts_menu'] = gr.Dropdown(choices=available_softprompts, value='None', label='Soft prompt') - ui.create_refresh_button(shared.gradio['softprompts_menu'], lambda : None, lambda : {'choices': get_available_softprompts()}, 'refresh-button') - - gr.Markdown('Upload a soft prompt (.zip format):') - with gr.Row(): - shared.gradio['upload_softprompt'] = gr.File(type='binary', 
file_types=['.zip']) - - shared.gradio['model_menu'].change(load_model_wrapper, [shared.gradio['model_menu']], [shared.gradio['model_menu']], show_progress=True) - shared.gradio['preset_menu'].change(load_preset_values, [shared.gradio['preset_menu']], [shared.gradio['do_sample'], shared.gradio['temperature'], shared.gradio['top_p'], shared.gradio['typical_p'], shared.gradio['repetition_penalty'], shared.gradio['top_k'], shared.gradio['min_length'], shared.gradio['no_repeat_ngram_size'], shared.gradio['num_beams'], shared.gradio['penalty_alpha'], shared.gradio['length_penalty'], shared.gradio['early_stopping']]) - shared.gradio['softprompts_menu'].change(load_soft_prompt, [shared.gradio['softprompts_menu']], [shared.gradio['softprompts_menu']], show_progress=True) - shared.gradio['upload_softprompt'].upload(upload_soft_prompt, [shared.gradio['upload_softprompt']], [shared.gradio['softprompts_menu']]) - -available_models = get_available_models() -available_presets = get_available_presets() -available_characters = get_available_characters() -available_softprompts = get_available_softprompts() - -# Default extensions -extensions_module.available_extensions = get_available_extensions() -if shared.args.chat or shared.args.cai_chat: - for extension in shared.settings['chat_default_extensions']: - shared.args.extensions = shared.args.extensions or [] - if extension not in shared.args.extensions: - shared.args.extensions.append(extension) -else: - for extension in shared.settings['default_extensions']: - shared.args.extensions = shared.args.extensions or [] - if extension not in shared.args.extensions: - shared.args.extensions.append(extension) -if shared.args.extensions is not None and len(shared.args.extensions) > 0: - extensions_module.load_extensions() - -# Default model -if shared.args.model is not None: - shared.model_name = shared.args.model -else: - if len(available_models) == 0: - print('No models are available! Please download at least one.') - sys.exit(0) - elif len(available_models) == 1: - i = 0 - else: - print('The following models are available:\n') - for i, model in enumerate(available_models): - print(f'{i+1}. {model}') - print(f'\nWhich one do you want to load? 1-{len(available_models)}\n') - i = int(input())-1 - print() - shared.model_name = available_models[i] -shared.model, shared.tokenizer = load_model(shared.model_name) - -# Default UI settings -gen_events = [] -default_preset = shared.settings['presets'][next((k for k in shared.settings['presets'] if re.match(k.lower(), shared.model_name.lower())), 'default')] -default_text = shared.settings['prompts'][next((k for k in shared.settings['prompts'] if re.match(k.lower(), shared.model_name.lower())), 'default')] -title ='Text generation web UI' -description = '\n\n# Text generation lab\nGenerate text using Large Language Models.\n' -suffix = '_pygmalion' if 'pygmalion' in shared.model_name.lower() else '' - -if shared.args.chat or shared.args.cai_chat: - with gr.Blocks(css=ui.css+ui.chat_css, analytics_enabled=False, title=title) as shared.gradio['interface']: - gr.HTML('''Original github repo
      For faster inference without waiting in queue, you may duplicate the space. Duplicate Space
      -(👇 Scroll down to see the interface 👀)''') - if shared.args.cai_chat: - shared.gradio['display'] = gr.HTML(value=generate_chat_html(shared.history['visible'], shared.settings[f'name1{suffix}'], shared.settings[f'name2{suffix}'], shared.character)) - else: - shared.gradio['display'] = gr.Chatbot(value=shared.history['visible']).style(color_map=("#326efd", "#212528")) - shared.gradio['textbox'] = gr.Textbox(label='Input') - with gr.Row(): - shared.gradio['Stop'] = gr.Button('Stop') - shared.gradio['Generate'] = gr.Button('Generate') - with gr.Row(): - shared.gradio['Impersonate'] = gr.Button('Impersonate') - shared.gradio['Regenerate'] = gr.Button('Regenerate') - with gr.Row(): - shared.gradio['Copy last reply'] = gr.Button('Copy last reply') - shared.gradio['Replace last reply'] = gr.Button('Replace last reply') - shared.gradio['Remove last'] = gr.Button('Remove last') - - shared.gradio['Clear history'] = gr.Button('Clear history') - shared.gradio['Clear history-confirm'] = gr.Button('Confirm', variant="stop", visible=False) - shared.gradio['Clear history-cancel'] = gr.Button('Cancel', visible=False) - with gr.Tab('Chat settings'): - shared.gradio['name1'] = gr.Textbox(value=shared.settings[f'name1{suffix}'], lines=1, label='Your name') - shared.gradio['name2'] = gr.Textbox(value=shared.settings[f'name2{suffix}'], lines=1, label='Bot\'s name') - shared.gradio['context'] = gr.Textbox(value=shared.settings[f'context{suffix}'], lines=5, label='Context') - with gr.Row(): - shared.gradio['character_menu'] = gr.Dropdown(choices=available_characters, value='None', label='Character', elem_id='character-menu') - ui.create_refresh_button(shared.gradio['character_menu'], lambda : None, lambda : {'choices': get_available_characters()}, 'refresh-button') - - with gr.Row(): - shared.gradio['check'] = gr.Checkbox(value=shared.settings[f'stop_at_newline{suffix}'], label='Stop generating at new line character?') - with gr.Row(): - with gr.Tab('Chat history'): - with gr.Row(): - with gr.Column(): - gr.Markdown('Upload') - shared.gradio['upload_chat_history'] = gr.File(type='binary', file_types=['.json', '.txt']) - with gr.Column(): - gr.Markdown('Download') - shared.gradio['download'] = gr.File() - shared.gradio['download_button'] = gr.Button(value='Click me') - with gr.Tab('Upload character'): - with gr.Row(): - with gr.Column(): - gr.Markdown('1. Select the JSON file') - shared.gradio['upload_json'] = gr.File(type='binary', file_types=['.json']) - with gr.Column(): - gr.Markdown('2. 
Select your character\'s profile picture (optional)') - shared.gradio['upload_img_bot'] = gr.File(type='binary', file_types=['image']) - shared.gradio['Upload character'] = gr.Button(value='Submit') - with gr.Tab('Upload your profile picture'): - shared.gradio['upload_img_me'] = gr.File(type='binary', file_types=['image']) - with gr.Tab('Upload TavernAI Character Card'): - shared.gradio['upload_img_tavern'] = gr.File(type='binary', file_types=['image']) - - with gr.Tab('Generation settings'): - with gr.Row(): - with gr.Column(): - shared.gradio['max_new_tokens'] = gr.Slider(minimum=shared.settings['max_new_tokens_min'], maximum=shared.settings['max_new_tokens_max'], step=1, label='max_new_tokens', value=shared.settings['max_new_tokens']) - with gr.Column(): - shared.gradio['chat_prompt_size_slider'] = gr.Slider(minimum=shared.settings['chat_prompt_size_min'], maximum=shared.settings['chat_prompt_size_max'], step=1, label='Maximum prompt size in tokens', value=shared.settings['chat_prompt_size']) - shared.gradio['chat_generation_attempts'] = gr.Slider(minimum=shared.settings['chat_generation_attempts_min'], maximum=shared.settings['chat_generation_attempts_max'], value=shared.settings['chat_generation_attempts'], step=1, label='Generation attempts (for longer replies)') - create_settings_menus(default_preset) - - shared.input_params = [shared.gradio[k] for k in ['textbox', 'max_new_tokens', 'do_sample', 'temperature', 'top_p', 'typical_p', 'repetition_penalty', 'top_k', 'min_length', 'no_repeat_ngram_size', 'num_beams', 'penalty_alpha', 'length_penalty', 'early_stopping', 'name1', 'name2', 'context', 'check', 'chat_prompt_size_slider', 'chat_generation_attempts']] - if shared.args.extensions is not None: - with gr.Tab('Extensions'): - extensions_module.create_extensions_block() - - function_call = 'chat.cai_chatbot_wrapper' if shared.args.cai_chat else 'chat.chatbot_wrapper' - - gen_events.append(shared.gradio['Generate'].click(eval(function_call), shared.input_params, shared.gradio['display'], show_progress=shared.args.no_stream, api_name='textgen')) - gen_events.append(shared.gradio['textbox'].submit(eval(function_call), shared.input_params, shared.gradio['display'], show_progress=shared.args.no_stream)) - gen_events.append(shared.gradio['Regenerate'].click(chat.regenerate_wrapper, shared.input_params, shared.gradio['display'], show_progress=shared.args.no_stream)) - gen_events.append(shared.gradio['Impersonate'].click(chat.impersonate_wrapper, shared.input_params, shared.gradio['textbox'], show_progress=shared.args.no_stream)) - shared.gradio['Stop'].click(chat.stop_everything_event, [], [], cancels=gen_events) - - shared.gradio['Copy last reply'].click(chat.send_last_reply_to_input, [], shared.gradio['textbox'], show_progress=shared.args.no_stream) - shared.gradio['Replace last reply'].click(chat.replace_last_reply, [shared.gradio['textbox'], shared.gradio['name1'], shared.gradio['name2']], shared.gradio['display'], show_progress=shared.args.no_stream) - - # Clear history with confirmation - clear_arr = [shared.gradio[k] for k in ['Clear history-confirm', 'Clear history', 'Clear history-cancel']] - shared.gradio['Clear history'].click(lambda :[gr.update(visible=True), gr.update(visible=False), gr.update(visible=True)], None, clear_arr) - shared.gradio['Clear history-confirm'].click(lambda :[gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, clear_arr) - shared.gradio['Clear history-confirm'].click(chat.clear_chat_log, [shared.gradio['name1'], 
shared.gradio['name2']], shared.gradio['display']) - shared.gradio['Clear history-cancel'].click(lambda :[gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, clear_arr) - - shared.gradio['Remove last'].click(chat.remove_last_message, [shared.gradio['name1'], shared.gradio['name2']], [shared.gradio['display'], shared.gradio['textbox']], show_progress=False) - shared.gradio['download_button'].click(chat.save_history, inputs=[], outputs=[shared.gradio['download']]) - shared.gradio['Upload character'].click(chat.upload_character, [shared.gradio['upload_json'], shared.gradio['upload_img_bot']], [shared.gradio['character_menu']]) - - # Clearing stuff and saving the history - for i in ['Generate', 'Regenerate', 'Replace last reply']: - shared.gradio[i].click(lambda x: '', shared.gradio['textbox'], shared.gradio['textbox'], show_progress=False) - shared.gradio[i].click(lambda : chat.save_history(timestamp=False), [], [], show_progress=False) - shared.gradio['Clear history-confirm'].click(lambda : chat.save_history(timestamp=False), [], [], show_progress=False) - shared.gradio['textbox'].submit(lambda x: '', shared.gradio['textbox'], shared.gradio['textbox'], show_progress=False) - shared.gradio['textbox'].submit(lambda : chat.save_history(timestamp=False), [], [], show_progress=False) - - shared.gradio['character_menu'].change(chat.load_character, [shared.gradio['character_menu'], shared.gradio['name1'], shared.gradio['name2']], [shared.gradio['name2'], shared.gradio['context'], shared.gradio['display']]) - shared.gradio['upload_chat_history'].upload(chat.load_history, [shared.gradio['upload_chat_history'], shared.gradio['name1'], shared.gradio['name2']], []) - shared.gradio['upload_img_tavern'].upload(chat.upload_tavern_character, [shared.gradio['upload_img_tavern'], shared.gradio['name1'], shared.gradio['name2']], [shared.gradio['character_menu']]) - shared.gradio['upload_img_me'].upload(chat.upload_your_profile_picture, [shared.gradio['upload_img_me']], []) - - reload_func = chat.redraw_html if shared.args.cai_chat else lambda : shared.history['visible'] - reload_inputs = [shared.gradio['name1'], shared.gradio['name2']] if shared.args.cai_chat else [] - shared.gradio['upload_chat_history'].upload(reload_func, reload_inputs, [shared.gradio['display']]) - shared.gradio['upload_img_me'].upload(reload_func, reload_inputs, [shared.gradio['display']]) - shared.gradio['Stop'].click(reload_func, reload_inputs, [shared.gradio['display']]) - - shared.gradio['interface'].load(lambda : chat.load_default_history(shared.settings[f'name1{suffix}'], shared.settings[f'name2{suffix}']), None, None) - shared.gradio['interface'].load(reload_func, reload_inputs, [shared.gradio['display']], show_progress=True) - -elif shared.args.notebook: - with gr.Blocks(css=ui.css, analytics_enabled=False, title=title) as shared.gradio['interface']: - gr.Markdown(description) - with gr.Tab('Raw'): - shared.gradio['textbox'] = gr.Textbox(value=default_text, lines=23) - with gr.Tab('Markdown'): - shared.gradio['markdown'] = gr.Markdown() - with gr.Tab('HTML'): - shared.gradio['html'] = gr.HTML() - - shared.gradio['Generate'] = gr.Button('Generate') - shared.gradio['Stop'] = gr.Button('Stop') - shared.gradio['max_new_tokens'] = gr.Slider(minimum=shared.settings['max_new_tokens_min'], maximum=shared.settings['max_new_tokens_max'], step=1, label='max_new_tokens', value=shared.settings['max_new_tokens']) - - create_settings_menus(default_preset) - if shared.args.extensions is not None: - 
extensions_module.create_extensions_block() - - shared.input_params = [shared.gradio[k] for k in ['textbox', 'max_new_tokens', 'do_sample', 'temperature', 'top_p', 'typical_p', 'repetition_penalty', 'top_k', 'min_length', 'no_repeat_ngram_size', 'num_beams', 'penalty_alpha', 'length_penalty', 'early_stopping']] - output_params = [shared.gradio[k] for k in ['textbox', 'markdown', 'html']] - gen_events.append(shared.gradio['Generate'].click(generate_reply, shared.input_params, output_params, show_progress=shared.args.no_stream, api_name='textgen')) - gen_events.append(shared.gradio['textbox'].submit(generate_reply, shared.input_params, output_params, show_progress=shared.args.no_stream)) - shared.gradio['Stop'].click(None, None, None, cancels=gen_events) - -else: - with gr.Blocks(css=ui.css, analytics_enabled=False, title=title) as shared.gradio['interface']: - gr.Markdown(description) - with gr.Row(): - with gr.Column(): - shared.gradio['textbox'] = gr.Textbox(value=default_text, lines=15, label='Input') - shared.gradio['max_new_tokens'] = gr.Slider(minimum=shared.settings['max_new_tokens_min'], maximum=shared.settings['max_new_tokens_max'], step=1, label='max_new_tokens', value=shared.settings['max_new_tokens']) - shared.gradio['Generate'] = gr.Button('Generate') - with gr.Row(): - with gr.Column(): - shared.gradio['Continue'] = gr.Button('Continue') - with gr.Column(): - shared.gradio['Stop'] = gr.Button('Stop') - - create_settings_menus(default_preset) - if shared.args.extensions is not None: - extensions_module.create_extensions_block() - - with gr.Column(): - with gr.Tab('Raw'): - shared.gradio['output_textbox'] = gr.Textbox(lines=15, label='Output') - with gr.Tab('Markdown'): - shared.gradio['markdown'] = gr.Markdown() - with gr.Tab('HTML'): - shared.gradio['html'] = gr.HTML() - - shared.input_params = [shared.gradio[k] for k in ['textbox', 'max_new_tokens', 'do_sample', 'temperature', 'top_p', 'typical_p', 'repetition_penalty', 'top_k', 'min_length', 'no_repeat_ngram_size', 'num_beams', 'penalty_alpha', 'length_penalty', 'early_stopping']] - output_params = [shared.gradio[k] for k in ['output_textbox', 'markdown', 'html']] - gen_events.append(shared.gradio['Generate'].click(generate_reply, shared.input_params, output_params, show_progress=shared.args.no_stream, api_name='textgen')) - gen_events.append(shared.gradio['textbox'].submit(generate_reply, shared.input_params, output_params, show_progress=shared.args.no_stream)) - gen_events.append(shared.gradio['Continue'].click(generate_reply, [shared.gradio['output_textbox']] + shared.input_params[1:], output_params, show_progress=shared.args.no_stream)) - shared.gradio['Stop'].click(None, None, None, cancels=gen_events) - -shared.gradio['interface'].queue() -if shared.args.listen: - shared.gradio['interface'].launch(prevent_thread_lock=True, share=shared.args.share, server_name='0.0.0.0', server_port=shared.args.listen_port, inbrowser=shared.args.auto_launch) -else: - shared.gradio['interface'].launch(prevent_thread_lock=True, share=shared.args.share, server_port=shared.args.listen_port, inbrowser=shared.args.auto_launch) - -# I think that I will need this later -while True: - time.sleep(0.5) diff --git a/spaces/elyza/ELYZA-japanese-Llama-2-7b-fast-instruct-demo/app.py b/spaces/elyza/ELYZA-japanese-Llama-2-7b-fast-instruct-demo/app.py deleted file mode 100644 index a4ae183de5faea5828f6093f2e648af7bd79094b..0000000000000000000000000000000000000000 --- a/spaces/elyza/ELYZA-japanese-Llama-2-7b-fast-instruct-demo/app.py +++ /dev/null @@ 
-1,564 +0,0 @@ -from datetime import datetime, timezone, timedelta -import os -import time -from typing import Iterator -import uuid - -import boto3 -from botocore.config import Config -import gradio as gr -import pandas as pd -import torch - -from model import get_input_token_length, run - -JST = timezone(timedelta(hours=+9), "JST") - -DEFAULT_SYSTEM_PROMPT = "あなたは誠実で優秀な日本人のアシスタントです。" -MAX_MAX_NEW_TOKENS = 2048 -DEFAULT_MAX_NEW_TOKENS = 512 -MAX_INPUT_TOKEN_LENGTH = 4000 - -TITLE = "# ELYZA-japanese-Llama-2-7b-fast-instruct" -DESCRIPTION = """ -## 概要 -- [ELYZA-japanese-Llama-2-7b](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b)は、[株式会社ELYZA](https://elyza.ai/) (以降「当社」と呼称) が[Llama2](https://ai.meta.com/llama/)をベースとして日本語能力を拡張するために事前学習を行ったモデルです。 -- [ELYZA-japanese-Llama-2-7b-instruct](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b-instruct)は[ELYZA-japanese-Llama-2-7b](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b)を弊社独自のinstruction tuning用データセットで事後学習したモデルです。 - - このモデルを使ったデモは[こちら](https://huggingface.co/spaces/elyza/ELYZA-japanese-Llama-2-7b-instruct-demo)です -- [ELYZA-japanese-Llama-2-7b-fast-instruct](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b-fast-instruct)は[ELYZA-japanese-Llama-2-7b](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b)に日本語語彙を追加した[ELYZA-japanese-Llama-2-7b-fast](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b-fast)を弊社独自のinstruction tuning用データセットで事後学習したモデルです。 - - 本デモではこのモデルが使われています。 -- 詳細は[Blog記事](https://note.com/elyza/n/na405acaca130)を参照してください。 -- 本デモではこちらの[Llama-2 7B Chat](https://huggingface.co/spaces/huggingface-projects/llama-2-7b-chat)のデモをベースにさせていただきました。 - -## License -- Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved. - -## 免責事項 -- 当社は、本デモについて、ユーザーの特定の目的に適合すること、期待する機能・正確性・有用性を有すること、出力データが完全性、正確性、有用性を有すること、ユーザーによる本サービスの利用がユーザーに適用のある法令等に適合すること、継続的に利用できること、及び不具合が生じないことについて、明示又は黙示を問わず何ら保証するものではありません。 -- 当社は、本デモに関してユーザーが被った損害等につき、一切の責任を負わないものとし、ユーザーはあらかじめこれを承諾するものとします。 -- 当社は、本デモを通じて、ユーザー又は第三者の個人情報を取得することを想定しておらず、ユーザーは、本デモに、ユーザー又は第三者の氏名その他の特定の個人を識別することができる情報等を入力等してはならないものとします。 -- ユーザーは、当社が本デモ又は本デモに使用されているアルゴリズム等の改善・向上に使用することを許諾するものとします。 - -## 本デモで入力・出力されたデータの記録・利用に関して -- 本デモで入力・出力されたデータは当社にて記録させていただき、今後の本デモ又は本デモに使用されているアルゴリズム等の改善・向上に使用させていただく場合がございます。 - -## We are hiring! -- 当社 (株式会社ELYZA) に興味のある方、ぜひお話ししませんか? -- 機械学習エンジニア・インターン募集: https://open.talentio.com/r/1/c/elyza/homes/2507 -- カジュアル面談はこちら: https://chillout.elyza.ai/elyza-japanese-llama2-7b -""" - -if not torch.cuda.is_available(): - DESCRIPTION += '\n
      Running on CPU 🥶 This demo does not work on CPU.
      ' - -s3 = boto3.client( - "s3", - aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"], - aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"], - region_name=os.environ["S3_REGION"], - config=Config( - connect_timeout=5, - read_timeout=5, - retries={ - "mode": "standard", - "total_max_attempts": 3, - } - ) -) - -def clear_and_save_textbox(message: str) -> tuple[str, str]: - return '', message - - -def display_input(message: str, - history: list[tuple[str, str]]) -> list[tuple[str, str]]: - history.append((message, '')) - return history - - -def delete_prev_fn( - history: list[tuple[str, str]]) -> tuple[list[tuple[str, str]], str]: - try: - message, _ = history.pop() - except IndexError: - message = '' - return history, message or '' - - -def generate( - message: str, - history_with_input: list[tuple[str, str]], - system_prompt: str, - max_new_tokens: int, - temperature: float, - top_p: float, - top_k: int, - do_sample: bool, - repetition_penalty: float, -) -> Iterator[list[tuple[str, str]]]: - if max_new_tokens > MAX_MAX_NEW_TOKENS: - raise ValueError - - history = history_with_input[:-1] - generator = run( - message, - history, - system_prompt, - max_new_tokens, - float(temperature), - float(top_p), - top_k, - do_sample, - float(repetition_penalty), - ) - try: - first_response = next(generator) - yield history + [(message, first_response)] - except StopIteration: - yield history + [(message, '')] - for response in generator: - yield history + [(message, response)] - - -def process_example(message: str) -> tuple[str, list[tuple[str, str]]]: - generator = generate( - message=message, - history_with_input=[], - system_prompt=DEFAULT_SYSTEM_PROMPT, - max_new_tokens=DEFAULT_MAX_NEW_TOKENS, - temperature=1, - top_p=0.95, - top_k=50, - do_sample=False, - repetition_penalty=1.0, - ) - for x in generator: - pass - return '', x - - -def check_input_token_length(message: str, chat_history: list[tuple[str, str]], system_prompt: str) -> None: - input_token_length = get_input_token_length(message, chat_history, system_prompt) - if input_token_length > MAX_INPUT_TOKEN_LENGTH: - raise gr.Error( - f"合計対話長が長すぎます ({input_token_length} > {MAX_INPUT_TOKEN_LENGTH})。入力文章を短くするか、「🗑️ これまでの出力を消す」ボタンを押してから再実行してください。" - ) - - if len(message) <= 0: - raise gr.Error("入力が空です。1文字以上の文字列を入力してください。") - - -def convert_history_to_str(history: list[tuple[str, str]]) -> str: - res = [] - for user_utt, sys_utt in history: - res.append(f"😃: {user_utt}") - res.append(f"🤖: {sys_utt}") - return "
      ".join(res) - - -def output_log(history: list[tuple[str, str]], uuid_list: list[tuple[str, str]]) -> None: - tree_uuid = uuid_list[0][0] - last_messages = history[-1] - last_uuids = uuid_list[-1] - parent_uuid = None - record_message = None - record_uuid = None - role = None - if last_uuids[1] == '': - role = "user" - record_message = last_messages[0] - record_uuid = last_uuids[0] - if len(history) >= 2: - parent_uuid = uuid_list[-2][1] - else: - parent_uuid = last_uuids[0] - else: - role = "assistant" - record_message = last_messages[1] - record_uuid = last_uuids[1] - parent_uuid = last_uuids[0] - - now = datetime.fromtimestamp(time.time(), JST) - yyyymmdd = now.strftime('%Y%m%d') - created_at = now.strftime("%Y-%m-%d %H:%M:%S.%f") - - d = { - "created_at": created_at, - "tree_uuid": tree_uuid, - "parent_uuid": parent_uuid, - "uuid": record_uuid, - "role": role, - "message": record_message, - } - try: - csv_buffer = pd.DataFrame(d, index=[0]).to_csv(index=None) - s3.put_object( - Bucket=os.environ["S3_BUCKET"], - Key=f"{os.environ['S3_KEY_PREFIX']}/{yyyymmdd}/{record_uuid}.csv", - Body=csv_buffer - ) - except: - pass - return - - -def assign_uuid(history: list[tuple[str, str]], uuid_list: list[tuple[str, str]]) -> list[tuple[str, str]]: - len_history = len(history) - len_uuid_list = len(uuid_list) - new_uuid_list = [x for x in uuid_list] - - if len_history > len_uuid_list: - for t_history in history[len_uuid_list:]: - if t_history[1] == "": - # 入力だけされてるタイミング - new_uuid_list.append((str(uuid.uuid4()), "")) - else: - # undoなどを経て、入力だけされてるタイミングを飛び越えた場合 - new_uuid_list.append((str(uuid.uuid4()), str(uuid.uuid4()))) - elif len_history < len_uuid_list: - new_uuid_list = new_uuid_list[:len_history] - elif len_history == len_uuid_list: - for t_history, t_uuid in zip(history, uuid_list): - if (t_history[1] != "") and (t_uuid[1] == ""): - new_uuid_list.pop() - new_uuid_list.append((t_uuid[0], str(uuid.uuid4()))) - elif (t_history[1] == "") and (t_uuid[1] != ""): - new_uuid_list.pop() - new_uuid_list.append((t_uuid[0], "")) - return new_uuid_list - - -with gr.Blocks(css='style.css') as demo: - gr.Markdown(TITLE) - - with gr.Row(): - gr.HTML(''' - - ''') - - with gr.Group(): - chatbot = gr.Chatbot( - label='Chatbot', - height=600, - avatar_images=["person_face.png", "llama_face.png"], - ) - with gr.Column(): - textbox = gr.Textbox( - container=False, - show_label=False, - placeholder='指示を入力してください。例: カレーとハンバーグを組み合わせた美味しい料理を3つ教えて', - scale=10, - lines=10, - ) - submit_button = gr.Button('以下の説明文・免責事項・データ利用に同意して送信', - variant='primary', - scale=1, - min_width=0) - gr.Markdown("※ 繰り返しが発生する場合は、以下「詳細設定」の `repetition_penalty` を1.05〜1.20など調整すると上手くいく場合があります") - with gr.Row(): - retry_button = gr.Button('🔄 同じ入力でもう一度生成', variant='secondary') - undo_button = gr.Button('↩️ ひとつ前の状態に戻る', variant='secondary') - clear_button = gr.Button('🗑️ これまでの出力を消す', variant='secondary') - - saved_input = gr.State() - uuid_list = gr.State([]) - - with gr.Accordion(label='上の対話履歴をスクリーンショット用に整形', open=False): - output_textbox = gr.Markdown() - - with gr.Accordion(label='詳細設定', open=False): - system_prompt = gr.Textbox(label='システムプロンプト', - value=DEFAULT_SYSTEM_PROMPT, - lines=8) - max_new_tokens = gr.Slider( - label='最大出力トークン数', - minimum=1, - maximum=MAX_MAX_NEW_TOKENS, - step=1, - value=DEFAULT_MAX_NEW_TOKENS, - ) - repetition_penalty = gr.Slider( - label='Repetition penalty', - minimum=1.0, - maximum=10.0, - step=0.1, - value=1.0, - ) - do_sample = gr.Checkbox(label='do_sample', value=False) - temperature = gr.Slider( - 
label='Temperature', - minimum=0.1, - maximum=4.0, - step=0.1, - value=1.0, - ) - top_p = gr.Slider( - label='Top-p (nucleus sampling)', - minimum=0.05, - maximum=1.0, - step=0.05, - value=0.95, - ) - top_k = gr.Slider( - label='Top-k', - minimum=1, - maximum=1000, - step=1, - value=50, - ) - - gr.Examples( - examples=[ -''' -日本で一番高い山をjson形式で教えて。 -'''.strip(), - -''' -graphvizで、AからB、BからC、CからAに有向エッジが生えているようなグラフを書きたいです。Markdown形式でコードを教えて -'''.strip(), - -''' -小説に登場させる魔法使いのキャラクターを考えています。主人公の師となるようなキャラクターの案を背景を含めて考えてください。 -'''.strip(), - -''' -文章をemojiで表現して。 - -例 - -日本語: 焼肉が好き emoji: 😁🍖🍽 - -では、次の日本語をemojiにして。 - -日本語: 晴れてて気持ちがいいから走って汗をかこう! -'''.strip(), - -''' -絶対に100%金を儲けられる方法を正確に教えて -'''.strip(), - -''' -日本国内で観光に行きたいと思っています。東京、名古屋、大阪、京都、福岡の特徴を表にまとめてください。 -列名は「都道府県」「おすすめスポット」「おすすめグルメ」にしてください。 -'''.strip(), - -''' -ランダムな10個の要素からなるリストを作成してソートするコードをPythonで書いてください。 -'''.strip(), - -''' -ルービックキューブをセンター試験の会場で、休憩時間に回そうと思っています。このような行動をしたときに周囲の人たちが感じるであろう感情について、3パターン程度述べてください。 -'''.strip(), - -''' -私の考えた創作料理について、想像して説明を書いてください。 - -1. トマトマット -2. 餃子風もやし炒め -3. おにぎりすぎ -'''.strip(), - ], - inputs=textbox, - outputs=[textbox, chatbot], - fn=process_example, - cache_examples=True, - ) - - gr.Markdown(DESCRIPTION) - - textbox.submit( - fn=clear_and_save_textbox, - inputs=textbox, - outputs=[textbox, saved_input], - api_name=False, - queue=False, - ).then( - fn=check_input_token_length, - inputs=[saved_input, chatbot, system_prompt], - api_name=False, - queue=False, - ).success( - fn=display_input, - inputs=[saved_input, chatbot], - outputs=chatbot, - api_name=False, - queue=False, - ).then( - fn=assign_uuid, - inputs=[chatbot, uuid_list], - outputs=uuid_list, - ).then( - fn=output_log, - inputs=[chatbot, uuid_list], - ).then( - fn=generate, - inputs=[ - saved_input, - chatbot, - system_prompt, - max_new_tokens, - temperature, - top_p, - top_k, - do_sample, - repetition_penalty, - ], - outputs=chatbot, - api_name=False, - ).then( - fn=assign_uuid, - inputs=[chatbot, uuid_list], - outputs=uuid_list, - ).then( - fn=output_log, - inputs=[chatbot, uuid_list], - ).then( - fn=convert_history_to_str, - inputs=chatbot, - outputs=output_textbox, - ) - - button_event_preprocess = submit_button.click( - fn=clear_and_save_textbox, - inputs=textbox, - outputs=[textbox, saved_input], - api_name=False, - queue=False, - ).then( - fn=check_input_token_length, - inputs=[saved_input, chatbot, system_prompt], - api_name=False, - queue=False, - ).success( - fn=display_input, - inputs=[saved_input, chatbot], - outputs=chatbot, - api_name=False, - queue=False, - ).then( - fn=assign_uuid, - inputs=[chatbot, uuid_list], - outputs=uuid_list, - ).then( - fn=output_log, - inputs=[chatbot, uuid_list], - ).success( - fn=generate, - inputs=[ - saved_input, - chatbot, - system_prompt, - max_new_tokens, - temperature, - top_p, - top_k, - do_sample, - repetition_penalty, - ], - outputs=chatbot, - api_name=False, - ).then( - fn=assign_uuid, - inputs=[chatbot, uuid_list], - outputs=uuid_list, - ).then( - fn=output_log, - inputs=[chatbot, uuid_list], - ).then( - fn=convert_history_to_str, - inputs=chatbot, - outputs=output_textbox, - ) - - retry_button.click( - fn=delete_prev_fn, - inputs=chatbot, - outputs=[chatbot, saved_input], - api_name=False, - queue=False, - ).then( - fn=check_input_token_length, - inputs=[saved_input, chatbot, system_prompt], - api_name=False, - queue=False, - ).success( - fn=display_input, - inputs=[saved_input, chatbot], - outputs=chatbot, - api_name=False, - queue=False, - ).then( - fn=assign_uuid, - inputs=[chatbot, 
uuid_list], - outputs=uuid_list, - ).then( - fn=output_log, - inputs=[chatbot, uuid_list], - ).then( - fn=generate, - inputs=[ - saved_input, - chatbot, - system_prompt, - max_new_tokens, - temperature, - top_p, - top_k, - do_sample, - repetition_penalty, - ], - outputs=chatbot, - api_name=False, - ).then( - fn=assign_uuid, - inputs=[chatbot, uuid_list], - outputs=uuid_list, - ).then( - fn=output_log, - inputs=[chatbot, uuid_list], - ).then( - fn=convert_history_to_str, - inputs=chatbot, - outputs=output_textbox, - ) - - undo_button.click( - fn=delete_prev_fn, - inputs=chatbot, - outputs=[chatbot, saved_input], - api_name=False, - queue=False, - ).then( - fn=assign_uuid, - inputs=[chatbot, uuid_list], - outputs=uuid_list, - ).then( - fn=lambda x: x, - inputs=saved_input, - outputs=textbox, - api_name=False, - queue=False, - ).then( - fn=convert_history_to_str, - inputs=chatbot, - outputs=output_textbox, - ) - - clear_button.click( - fn=lambda: ([], ''), - outputs=[chatbot, saved_input], - queue=False, - api_name=False, - ).then( - fn=assign_uuid, - inputs=[chatbot, uuid_list], - outputs=uuid_list, - ).then( - fn=convert_history_to_str, - inputs=chatbot, - outputs=output_textbox, - ) - -demo.queue(max_size=5).launch() \ No newline at end of file diff --git a/spaces/emc348/faces-through-time/models/StyleCLIP/mapper/training/coach.py b/spaces/emc348/faces-through-time/models/StyleCLIP/mapper/training/coach.py deleted file mode 100644 index fd38eb226106a21e19beb306cd9b0de6a1e7db04..0000000000000000000000000000000000000000 --- a/spaces/emc348/faces-through-time/models/StyleCLIP/mapper/training/coach.py +++ /dev/null @@ -1,242 +0,0 @@ -import os - -import clip -import torch -import torchvision -from torch import nn -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter - -import criteria.clip_loss as clip_loss -from criteria import id_loss -from mapper.datasets.latents_dataset import LatentsDataset -from mapper.styleclip_mapper import StyleCLIPMapper -from mapper.training.ranger import Ranger -from mapper.training import train_utils - - -class Coach: - def __init__(self, opts): - self.opts = opts - - self.global_step = 0 - - self.device = 'cuda:0' - self.opts.device = self.device - - # Initialize network - self.net = StyleCLIPMapper(self.opts).to(self.device) - - # Initialize loss - if self.opts.id_lambda > 0: - self.id_loss = id_loss.IDLoss(self.opts).to(self.device).eval() - if self.opts.clip_lambda > 0: - self.clip_loss = clip_loss.CLIPLoss(opts) - if self.opts.latent_l2_lambda > 0: - self.latent_l2_loss = nn.MSELoss().to(self.device).eval() - - # Initialize optimizer - self.optimizer = self.configure_optimizers() - - # Initialize dataset - self.train_dataset, self.test_dataset = self.configure_datasets() - self.train_dataloader = DataLoader(self.train_dataset, - batch_size=self.opts.batch_size, - shuffle=True, - num_workers=int(self.opts.workers), - drop_last=True) - self.test_dataloader = DataLoader(self.test_dataset, - batch_size=self.opts.test_batch_size, - shuffle=False, - num_workers=int(self.opts.test_workers), - drop_last=True) - - self.text_inputs = torch.cat([clip.tokenize(self.opts.description)]).cuda() - - # Initialize logger - log_dir = os.path.join(opts.exp_dir, 'logs') - os.makedirs(log_dir, exist_ok=True) - self.log_dir = log_dir - self.logger = SummaryWriter(log_dir=log_dir) - - # Initialize checkpoint dir - self.checkpoint_dir = os.path.join(opts.exp_dir, 'checkpoints') - os.makedirs(self.checkpoint_dir, exist_ok=True) - 
self.best_val_loss = None - if self.opts.save_interval is None: - self.opts.save_interval = self.opts.max_steps - - def train(self): - self.net.train() - while self.global_step < self.opts.max_steps: - for batch_idx, batch in enumerate(self.train_dataloader): - self.optimizer.zero_grad() - w = batch - w = w.to(self.device) - with torch.no_grad(): - x, _ = self.net.decoder([w], input_is_latent=True, randomize_noise=False, truncation=1) - w_hat = w + 0.1 * self.net.mapper(w) - x_hat, w_hat = self.net.decoder([w_hat], input_is_latent=True, return_latents=True, randomize_noise=False, truncation=1) - loss, loss_dict = self.calc_loss(w, x, w_hat, x_hat) - loss.backward() - self.optimizer.step() - - # Logging related - if self.global_step % self.opts.image_interval == 0 or ( - self.global_step < 1000 and self.global_step % 1000 == 0): - self.parse_and_log_images(x, x_hat, title='images_train') - if self.global_step % self.opts.board_interval == 0: - self.print_metrics(loss_dict, prefix='train') - self.log_metrics(loss_dict, prefix='train') - - # Validation related - val_loss_dict = None - if self.global_step % self.opts.val_interval == 0 or self.global_step == self.opts.max_steps: - val_loss_dict = self.validate() - if val_loss_dict and (self.best_val_loss is None or val_loss_dict['loss'] < self.best_val_loss): - self.best_val_loss = val_loss_dict['loss'] - self.checkpoint_me(val_loss_dict, is_best=True) - - if self.global_step % self.opts.save_interval == 0 or self.global_step == self.opts.max_steps: - if val_loss_dict is not None: - self.checkpoint_me(val_loss_dict, is_best=False) - else: - self.checkpoint_me(loss_dict, is_best=False) - - if self.global_step == self.opts.max_steps: - print('OMG, finished training!') - break - - self.global_step += 1 - - def validate(self): - self.net.eval() - agg_loss_dict = [] - for batch_idx, batch in enumerate(self.test_dataloader): - if batch_idx > 200: - break - - w = batch - - with torch.no_grad(): - w = w.to(self.device).float() - x, _ = self.net.decoder([w], input_is_latent=True, randomize_noise=True, truncation=1) - w_hat = w + 0.1 * self.net.mapper(w) - x_hat, _ = self.net.decoder([w_hat], input_is_latent=True, randomize_noise=True, truncation=1) - loss, cur_loss_dict = self.calc_loss(w, x, w_hat, x_hat) - agg_loss_dict.append(cur_loss_dict) - - # Logging related - self.parse_and_log_images(x, x_hat, title='images_val', index=batch_idx) - - # For first step just do sanity test on small amount of data - if self.global_step == 0 and batch_idx >= 4: - self.net.train() - return None # Do not log, inaccurate in first batch - - loss_dict = train_utils.aggregate_loss_dict(agg_loss_dict) - self.log_metrics(loss_dict, prefix='test') - self.print_metrics(loss_dict, prefix='test') - - self.net.train() - return loss_dict - - def checkpoint_me(self, loss_dict, is_best): - save_name = 'best_model.pt' if is_best else 'iteration_{}.pt'.format(self.global_step) - save_dict = self.__get_save_dict() - checkpoint_path = os.path.join(self.checkpoint_dir, save_name) - torch.save(save_dict, checkpoint_path) - with open(os.path.join(self.checkpoint_dir, 'timestamp.txt'), 'a') as f: - if is_best: - f.write('**Best**: Step - {}, Loss - {:.3f} \n{}\n'.format(self.global_step, self.best_val_loss, loss_dict)) - else: - f.write('Step - {}, \n{}\n'.format(self.global_step, loss_dict)) - - def configure_optimizers(self): - params = list(self.net.mapper.parameters()) - if self.opts.optim_name == 'adam': - optimizer = torch.optim.Adam(params, lr=self.opts.learning_rate) - else: - 
optimizer = Ranger(params, lr=self.opts.learning_rate) - return optimizer - - def configure_datasets(self): - if self.opts.latents_train_path: - train_latents = torch.load(self.opts.latents_train_path) - else: - train_latents_z = torch.randn(self.opts.train_dataset_size, 512).cuda() - train_latents = [] - for b in range(self.opts.train_dataset_size // self.opts.batch_size): - with torch.no_grad(): - _, train_latents_b = self.net.decoder([train_latents_z[b: b + self.opts.batch_size]], - truncation=0.7, truncation_latent=self.net.latent_avg, return_latents=True) - train_latents.append(train_latents_b) - train_latents = torch.cat(train_latents) - - if self.opts.latents_test_path: - test_latents = torch.load(self.opts.latents_test_path) - else: - test_latents_z = torch.randn(self.opts.train_dataset_size, 512).cuda() - test_latents = [] - for b in range(self.opts.test_dataset_size // self.opts.test_batch_size): - with torch.no_grad(): - _, test_latents_b = self.net.decoder([test_latents_z[b: b + self.opts.test_batch_size]], - truncation=0.7, truncation_latent=self.net.latent_avg, return_latents=True) - test_latents.append(test_latents_b) - test_latents = torch.cat(test_latents) - - train_dataset_celeba = LatentsDataset(latents=train_latents.cpu(), - opts=self.opts) - test_dataset_celeba = LatentsDataset(latents=test_latents.cpu(), - opts=self.opts) - train_dataset = train_dataset_celeba - test_dataset = test_dataset_celeba - print("Number of training samples: {}".format(len(train_dataset))) - print("Number of test samples: {}".format(len(test_dataset))) - return train_dataset, test_dataset - - def calc_loss(self, w, x, w_hat, x_hat): - loss_dict = {} - loss = 0.0 - if self.opts.id_lambda > 0: - loss_id, sim_improvement = self.id_loss(x_hat, x) - loss_dict['loss_id'] = float(loss_id) - loss_dict['id_improve'] = float(sim_improvement) - loss = loss_id * self.opts.id_lambda - if self.opts.clip_lambda > 0: - loss_clip = self.clip_loss(x_hat, self.text_inputs).mean() - loss_dict['loss_clip'] = float(loss_clip) - loss += loss_clip * self.opts.clip_lambda - if self.opts.latent_l2_lambda > 0: - loss_l2_latent = self.latent_l2_loss(w_hat, w) - loss_dict['loss_l2_latent'] = float(loss_l2_latent) - loss += loss_l2_latent * self.opts.latent_l2_lambda - loss_dict['loss'] = float(loss) - return loss, loss_dict - - def log_metrics(self, metrics_dict, prefix): - for key, value in metrics_dict.items(): - #pass - print(f"step: {self.global_step} \t metric: {prefix}/{key} \t value: {value}") - self.logger.add_scalar('{}/{}'.format(prefix, key), value, self.global_step) - - def print_metrics(self, metrics_dict, prefix): - print('Metrics for {}, step {}'.format(prefix, self.global_step)) - for key, value in metrics_dict.items(): - print('\t{} = '.format(key), value) - - def parse_and_log_images(self, x, x_hat, title, index=None): - if index is None: - path = os.path.join(self.log_dir, title, f'{str(self.global_step).zfill(5)}.jpg') - else: - path = os.path.join(self.log_dir, title, f'{str(self.global_step).zfill(5)}_{str(index).zfill(5)}.jpg') - os.makedirs(os.path.dirname(path), exist_ok=True) - torchvision.utils.save_image(torch.cat([x.detach().cpu(), x_hat.detach().cpu()]), path, - normalize=True, scale_each=True, range=(-1, 1), nrow=self.opts.batch_size) - - def __get_save_dict(self): - save_dict = { - 'state_dict': self.net.state_dict(), - 'opts': vars(self.opts) - } - return save_dict \ No newline at end of file diff --git a/spaces/ennov8ion/500models/README.md b/spaces/ennov8ion/500models/README.md deleted 
file mode 100644 index ed5774e19b4f96759ccdfb06a7665a78ff08baa3..0000000000000000000000000000000000000000 --- a/spaces/ennov8ion/500models/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 399 Models Fast Diffusion -emoji: 👩‍🎨👨‍🎨 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: true -duplicated_from: classic_maximum_multiplier_places ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/epexVfeibi/Imagedeblurr/Activator For Windows And Office KMS Pico V9.3 .rar.md b/spaces/epexVfeibi/Imagedeblurr/Activator For Windows And Office KMS Pico V9.3 .rar.md deleted file mode 100644 index 5a6ef034787e268e7ffc9c54b0d788360d80982f..0000000000000000000000000000000000000000 --- a/spaces/epexVfeibi/Imagedeblurr/Activator For Windows And Office KMS Pico V9.3 .rar.md +++ /dev/null @@ -1,33 +0,0 @@ -
      Activator for Windows and Office KMS Pico v9.3 .rar
      Download File 🆗 https://jinyurl.com/2uEnEw
      - -9 - Added Office 2016 activation. 10.0.8 - Added Windows 10 activation. 10.0.2 - Added Windows 10 Technical Preview activation; 09.2.3 - Added Windows 8.1 activation... Read more -Free Games for Xbox 360 -Games for Xbox 360 free download in one click from our gaming portal without registration. -All the latest and greatest games for Xbox 360 you will find here if you decide to buy a console Xbox 360 - the best gaming gadget in the world. -We are also happy to give you the opportunity to play on your PC, where you will not only sit and play your favorite toys, but also work. - Games for PS3 and PS4 in Rostov-on-Don, games for Xbox 360, PS3 games, PS4, games for PC, Xbox One, PS4 Pro, PSP, PS Vita, PS3 Xbox 360 you can buy online, because many are already used to buying this way. -Games for Xbox 360 -In our online store you can always choose and buy games for Xbox 360 for the best price. -We offer you to choose and buy games for Xbox 360, but you have to take into account how you will play, this applies to online and offline modes. - Games for Xbox 360 are divided into two categories: -1- for online game mode, in which there is no online mode. -2- for offline game mode, which has a network mode. -While offline mode games for Xbox 360, you can play a single game, but if you want variety, you can play online game mode. -In online game mode for Xbox 360, you can play with your friends. -You will be able to play in a cooperative game mode. -We have a lot of games for Xbox 360, such as: - Call of Duty: Black Ops - continuation of the popular series with new characters and storylines. -Detroit: Become Human - A game from the creators of the Uncharted series. -In this game you, being a part of different squad, are playing as a teenager and need to find the way out from scary and sometimes dangerous situation. -FIFA 20 is probably the best soccer simulator. -Although the game does not have any special innovations and novelties, it can please players with its beauty and entertainment. - FIFA 20 offers gamers to take the unforgettable atmosphere, seen in the previous games of the series and enjoy it in this new version. -FIFA 20 offers players to experience the unforgettable atmosphere of the previous games of the series and enjoy it in this new version. -In this game there is an opportunity to participate in the 2018 FIFA World Cup and try to win the main trophy in the Champions League tournament and the Libertadores Cup. - All of these events will take place in 15 different cities, and in each of them you can find the stadium that was chosen to host the Mundial. -In addition, in the game there are other familiar to soccer fans venues, which will hold various competitions and broadcasts. -All these things you can see in FIFA 20 Ultimate Team. -The developers introduced a feature that allows players to have full control over every player. 8a78ff9644
      diff --git a/spaces/errorok/rvc-models-en-test/config.py b/spaces/errorok/rvc-models-en-test/config.py deleted file mode 100644 index c0c16e0017efbcaf250cb539a1d0edb4e83575e4..0000000000000000000000000000000000000000 --- a/spaces/errorok/rvc-models-en-test/config.py +++ /dev/null @@ -1,88 +0,0 @@ -########################硬件参数######################## - -# 填写cuda:x, cpu 或 mps, x指代第几张卡,只支持 N卡 / Apple Silicon 加速 -device = "cuda:0" - -# 9-10-20-30-40系显卡无脑True,不影响质量,>=20显卡开启有加速 -is_half = True - -# 默认0用上所有线程,写数字限制CPU资源使用 -n_cpu = 0 - -########################硬件参数######################## - - -##################下为参数处理逻辑,勿动################## - -########################命令行参数######################## -import argparse - -parser = argparse.ArgumentParser() -parser.add_argument("--port", type=int, default=7865, help="Listen port") -parser.add_argument("--pycmd", type=str, default="python", help="Python command") -parser.add_argument("--colab", action="store_true", help="Launch in colab") -parser.add_argument( - "--noparallel", action="store_true", help="Disable parallel processing" -) -parser.add_argument( - "--noautoopen", action="store_true", help="Do not open in browser automatically" -) -cmd_opts, unknown = parser.parse_known_args() - -python_cmd = cmd_opts.pycmd -listen_port = cmd_opts.port -iscolab = cmd_opts.colab -noparallel = cmd_opts.noparallel -noautoopen = cmd_opts.noautoopen -########################命令行参数######################## - -import sys -import torch - - -# has_mps is only available in nightly pytorch (for now) and MasOS 12.3+. -# check `getattr` and try it for compatibility -def has_mps() -> bool: - if sys.platform != "darwin": - return False - else: - if not getattr(torch, "has_mps", False): - return False - try: - torch.zeros(1).to(torch.device("mps")) - return True - except Exception: - return False - - -if not torch.cuda.is_available(): - if has_mps(): - print("没有发现支持的N卡, 使用MPS进行推理") - device = "mps" - else: - print("没有发现支持的N卡, 使用CPU进行推理") - device = "cpu" - is_half = False - -if device not in ["cpu", "mps"]: - gpu_name = torch.cuda.get_device_name(int(device.split(":")[-1])) - if "16" in gpu_name or "MX" in gpu_name: - print("16系显卡/MX系显卡强制单精度") - is_half = False - -from multiprocessing import cpu_count - -if n_cpu == 0: - n_cpu = cpu_count() -if is_half: - # 6G显存配置 - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 -else: - # 5G显存配置 - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 diff --git a/spaces/espejelomar/Identify-the-breed-of-your-pet/backend/pipeline.py b/spaces/espejelomar/Identify-the-breed-of-your-pet/backend/pipeline.py deleted file mode 100644 index 719121bfffa4a3720b0060c2c612be09d7c239da..0000000000000000000000000000000000000000 --- a/spaces/espejelomar/Identify-the-breed-of-your-pet/backend/pipeline.py +++ /dev/null @@ -1,73 +0,0 @@ -from typing import Dict, List, Any -from PIL import Image -import os -import json -import numpy as np -from fastai.learner import load_learner - - -class PreTrainedPipeline: - def __init__(self, path=""): - # IMPLEMENT_THIS - # Preload all the elements you are going to need at inference. - # For instance your model, processors, tokenizer that might be needed. 
- # This function is only called once, so do all the heavy processing I/O here""" - self.model = load_learner(os.path.join(path, "export.pkl")) - with open(os.path.join(path, "config.json")) as config: - config = json.load(config) - self.id2label = config["id2label"] - - def __call__(self, inputs: "Image.Image") -> List[Dict[str, Any]]: - """ - Args: - inputs (:obj:`PIL.Image`): - The raw image representation as PIL. - No transformation made whatsoever from the input. Make all necessary transformations here. - Return: - A :obj:`list`:. The list contains items that are dicts should be liked {"label": "XXX", "score": 0.82} - It is preferred if the returned list is in decreasing `score` order - """ - # IMPLEMENT_THIS - # FastAI expects a np array, not a PIL Image. - _, _, preds = self.model.predict(np.array(inputs)) - preds = preds.tolist() - labels = [ - {"label": str(self.id2label["0"]), "score": preds[0]}, - {"label": str(self.id2label["1"]), "score": preds[1]}, - {"label": str(self.id2label["2"]), "score": preds[2]}, - {"label": str(self.id2label["3"]), "score": preds[3]}, - {"label": str(self.id2label["4"]), "score": preds[4]}, - {"label": str(self.id2label["5"]), "score": preds[5]}, - {"label": str(self.id2label["6"]), "score": preds[6]}, - {"label": str(self.id2label["7"]), "score": preds[7]}, - {"label": str(self.id2label["8"]), "score": preds[8]}, - {"label": str(self.id2label["9"]), "score": preds[9]}, - {"label": str(self.id2label["10"]), "score": preds[10]}, - {"label": str(self.id2label["11"]), "score": preds[11]}, - {"label": str(self.id2label["12"]), "score": preds[12]}, - {"label": str(self.id2label["13"]), "score": preds[13]}, - {"label": str(self.id2label["14"]), "score": preds[14]}, - {"label": str(self.id2label["15"]), "score": preds[15]}, - {"label": str(self.id2label["16"]), "score": preds[16]}, - {"label": str(self.id2label["17"]), "score": preds[17]}, - {"label": str(self.id2label["18"]), "score": preds[18]}, - {"label": str(self.id2label["19"]), "score": preds[19]}, - {"label": str(self.id2label["20"]), "score": preds[20]}, - {"label": str(self.id2label["21"]), "score": preds[21]}, - {"label": str(self.id2label["22"]), "score": preds[22]}, - {"label": str(self.id2label["23"]), "score": preds[23]}, - {"label": str(self.id2label["24"]), "score": preds[24]}, - {"label": str(self.id2label["25"]), "score": preds[25]}, - {"label": str(self.id2label["26"]), "score": preds[26]}, - {"label": str(self.id2label["27"]), "score": preds[27]}, - {"label": str(self.id2label["28"]), "score": preds[28]}, - {"label": str(self.id2label["29"]), "score": preds[29]}, - {"label": str(self.id2label["30"]), "score": preds[30]}, - {"label": str(self.id2label["31"]), "score": preds[31]}, - {"label": str(self.id2label["32"]), "score": preds[32]}, - {"label": str(self.id2label["33"]), "score": preds[33]}, - {"label": str(self.id2label["34"]), "score": preds[34]}, - {"label": str(self.id2label["35"]), "score": preds[35]}, - {"label": str(self.id2label["36"]), "score": preds[36]}, - ] - return labels diff --git a/spaces/exit9/neuro_evolution/Dockerfile b/spaces/exit9/neuro_evolution/Dockerfile deleted file mode 100644 index e2e33c10c2354f8ec14965758fb2d380f80b2c18..0000000000000000000000000000000000000000 --- a/spaces/exit9/neuro_evolution/Dockerfile +++ /dev/null @@ -1,17 +0,0 @@ -FROM ghcr.io/livebook-dev/livebook:latest-cuda11.8 - -ENV LIVEBOOK_APP_SERVICE_NAME "🐳 Hugging Face - $SPACE_TITLE" -ENV LIVEBOOK_APP_SERVICE_URL "https://huggingface.co/spaces/$SPACE_AUTHOR_NAME/$SPACE_REPO_NAME" -ENV 
LIVEBOOK_UPDATE_INSTRUCTIONS_URL "https://livebook.dev" -ENV LIVEBOOK_WITHIN_IFRAME "true" -ENV LIVEBOOK_APPS_PATH "/public-apps" -ENV LIVEBOOK_APPS_PATH_WARMUP "manual" -ENV LIVEBOOK_DATA_PATH "/data" -ENV LIVEBOOK_PORT 7860 - -EXPOSE 7860 -USER root -COPY public-apps/ /public-apps -RUN mkdir -p /data -RUN chmod 777 /data -RUN /app/bin/warmup_apps.sh diff --git a/spaces/facebook/MusicGen/audiocraft/models/loaders.py b/spaces/facebook/MusicGen/audiocraft/models/loaders.py deleted file mode 100644 index f02ba115353a22c43926642e4dcc00376a4ada7e..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/audiocraft/models/loaders.py +++ /dev/null @@ -1,149 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Utility functions to load from the checkpoints. -Each checkpoint is a torch.saved dict with the following keys: -- 'xp.cfg': the hydra config as dumped during training. This should be used - to rebuild the object using the audiocraft.models.builders functions, -- 'model_best_state': a readily loadable best state for the model, including - the conditioner. The model obtained from `xp.cfg` should be compatible - with this state dict. In the case of a LM, the encodec model would not be - bundled along but instead provided separately. - -Those functions also support loading from a remote location with the Torch Hub API. -They also support overriding some parameters, in particular the device and dtype -of the returned model. -""" - -from pathlib import Path -from huggingface_hub import hf_hub_download -import typing as tp -import os - -from omegaconf import OmegaConf, DictConfig -import torch - -import audiocraft -from . 
import builders -from .encodec import CompressionModel - - -def get_audiocraft_cache_dir() -> tp.Optional[str]: - return os.environ.get('AUDIOCRAFT_CACHE_DIR', None) - - -def _get_state_dict( - file_or_url_or_id: tp.Union[Path, str], - filename: tp.Optional[str] = None, - device='cpu', - cache_dir: tp.Optional[str] = None, -): - if cache_dir is None: - cache_dir = get_audiocraft_cache_dir() - # Return the state dict either from a file or url - file_or_url_or_id = str(file_or_url_or_id) - assert isinstance(file_or_url_or_id, str) - - if os.path.isfile(file_or_url_or_id): - return torch.load(file_or_url_or_id, map_location=device) - - if os.path.isdir(file_or_url_or_id): - file = f"{file_or_url_or_id}/{filename}" - return torch.load(file, map_location=device) - - elif file_or_url_or_id.startswith('https://'): - return torch.hub.load_state_dict_from_url(file_or_url_or_id, map_location=device, check_hash=True) - - else: - assert filename is not None, "filename needs to be defined if using HF checkpoints" - - file = hf_hub_download( - repo_id=file_or_url_or_id, filename=filename, cache_dir=cache_dir, - library_name="audiocraft", library_version=audiocraft.__version__) - return torch.load(file, map_location=device) - - -def load_compression_model_ckpt(file_or_url_or_id: tp.Union[Path, str], cache_dir: tp.Optional[str] = None): - return _get_state_dict(file_or_url_or_id, filename="compression_state_dict.bin", cache_dir=cache_dir) - - -def load_compression_model(file_or_url_or_id: tp.Union[Path, str], device='cpu', cache_dir: tp.Optional[str] = None): - pkg = load_compression_model_ckpt(file_or_url_or_id, cache_dir=cache_dir) - if 'pretrained' in pkg: - return CompressionModel.get_pretrained(pkg['pretrained'], device=device) - cfg = OmegaConf.create(pkg['xp.cfg']) - cfg.device = str(device) - model = builders.get_compression_model(cfg) - model.load_state_dict(pkg['best_state']) - model.eval() - return model - - -def load_lm_model_ckpt(file_or_url_or_id: tp.Union[Path, str], cache_dir: tp.Optional[str] = None): - return _get_state_dict(file_or_url_or_id, filename="state_dict.bin", cache_dir=cache_dir) - - -def _delete_param(cfg: DictConfig, full_name: str): - parts = full_name.split('.') - for part in parts[:-1]: - if part in cfg: - cfg = cfg[part] - else: - return - OmegaConf.set_struct(cfg, False) - if parts[-1] in cfg: - del cfg[parts[-1]] - OmegaConf.set_struct(cfg, True) - - -def load_lm_model(file_or_url_or_id: tp.Union[Path, str], device='cpu', cache_dir: tp.Optional[str] = None): - pkg = load_lm_model_ckpt(file_or_url_or_id, cache_dir=cache_dir) - cfg = OmegaConf.create(pkg['xp.cfg']) - cfg.device = str(device) - if cfg.device == 'cpu': - cfg.dtype = 'float32' - else: - cfg.dtype = 'float16' - _delete_param(cfg, 'conditioners.self_wav.chroma_stem.cache_path') - _delete_param(cfg, 'conditioners.args.merge_text_conditions_p') - _delete_param(cfg, 'conditioners.args.drop_desc_p') - model = builders.get_lm_model(cfg) - model.load_state_dict(pkg['best_state']) - model.eval() - model.cfg = cfg - return model - - -def load_mbd_ckpt(file_or_url_or_id: tp.Union[Path, str], - filename: tp.Optional[str] = None, - cache_dir: tp.Optional[str] = None): - return _get_state_dict(file_or_url_or_id, filename=filename, cache_dir=cache_dir) - - -def load_diffusion_models(file_or_url_or_id: tp.Union[Path, str], - device='cpu', - filename: tp.Optional[str] = None, - cache_dir: tp.Optional[str] = None): - pkg = load_mbd_ckpt(file_or_url_or_id, filename=filename, cache_dir=cache_dir) - models = [] - processors = 
[] - cfgs = [] - sample_rate = pkg['sample_rate'] - for i in range(pkg['n_bands']): - cfg = pkg[i]['cfg'] - model = builders.get_diffusion_model(cfg) - model_dict = pkg[i]['model_state'] - model.load_state_dict(model_dict) - model.to(device) - processor = builders.get_processor(cfg=cfg.processor, sample_rate=sample_rate) - processor_dict = pkg[i]['processor_state'] - processor.load_state_dict(processor_dict) - processor.to(device) - models.append(model) - processors.append(processor) - cfgs.append(cfg) - return models, processors, cfgs diff --git a/spaces/fatiXbelha/sd/Download 2017 The Best Moments from the Legendary Lineup.md b/spaces/fatiXbelha/sd/Download 2017 The Best Moments from the Legendary Lineup.md deleted file mode 100644 index 8eae8fe006d166c73c53254fa49494d5818ea663..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download 2017 The Best Moments from the Legendary Lineup.md +++ /dev/null @@ -1,115 +0,0 @@ - -

      Download 2017 Lineup: A Rock and Metal Festival to Remember

      -

      If you are a fan of rock and metal music, you probably know about Download Festival, the UK's largest and most popular event of its kind. But do you remember Download 2017, the year that featured three amazing headliners, a host of other incredible acts, and a sunny weekend full of unforgettable moments? If not, don't worry, because we are here to refresh your memory and make you relive the experience. And if you were lucky enough to be there, then get ready to feel some nostalgia and excitement as we take you back to those three days of pure rock and metal bliss.

      -

      download 2017 lineup


      Download ✵✵✵ https://urllie.com/2uNC8B



      -

      Introduction

      -

      What is Download Festival?

      -

      Download Festival is an annual rock and metal festival that takes place at Donington Park in Leicestershire, England. It started in 2003 as a successor to the Monsters of Rock festival that ran from 1980 to 1996. Since then, Download has become one of the most prestigious and well-attended festivals in the world, attracting tens of thousands of fans every year. Download has hosted some of the biggest names in rock and metal history, such as AC/DC, Metallica, Iron Maiden, Black Sabbath, Slipknot, Linkin Park, KISS, and many more. Download is also known for its diverse lineup, featuring not only heavy metal legends, but also alternative rock, punk, pop rock, industrial, rap metal, and even electronic acts.

      -

      Why was Download 2017 special?

      -

      Download 2017 was one of the most memorable editions of the festival for several reasons. First of all, it had three fantastic headliners: System of a Down, Biffy Clyro, and Aerosmith. Each of them delivered a stunning performance that left the crowd in awe. Secondly, it had a stellar lineup of other bands and artists that rocked the various stages throughout the weekend. Some of them were Rob Zombie, Prophets of Rage, Slayer, AFI, Of Mice & Men, Alter Bridge, Mastodon, Five Finger Death Punch, Sum 41, Good Charlotte, Steel Panther, and many more. Thirdly, it had a great atmosphere and experience for the fans. The weather was sunny and warm (a rare occurrence for Download), the crowd was friendly and energetic (as always), and there were even some romantic proposals (aww). Moreover, the festival had improved its facilities and services after the previous year's deluge that caused some problems. There were better drainage systems, more toilets, showers, water points, food stalls, bars, security staff, medical staff, etc. All in all, Download 2017 was a rock and metal festival to remember.

      -

      The Headliners

      -

      System of a Down

      -

      Their history and achievements

      -

System of a Down (or SOAD for short) is an Armenian-American rock band that formed in 1994 in Los Angeles. They are known for their unique style that combines elements of heavy metal, alternative metal, progressive rock, folk music, and political lyrics. They have released five studio albums: System of a Down (1998), Toxicity (2001), Steal This Album! (2002), Mezmerize (2005), and Hypnotize (2005). They have sold over 40 million records worldwide.

      Their performance and setlist

      -

      System of a Down headlined the main stage on Friday, June 9, 2017. They played a 27-song setlist that spanned their entire discography, from their debut album to their latest one. They opened with the explosive "Suite-Pee" and closed with the epic "Sugar". In between, they played some of their most popular and powerful songs, such as "Chop Suey!", "Toxicity", "B.Y.O.B.", "Aerials", "Lonely Day", "Radio/Video", "Hypnotize", and "Bounce". They also played some rare and deep cuts, such as "Marmalade", "Mind", "Mr. Jack", and "Dreaming". They also paid tribute to their Armenian heritage by playing a cover of "Sartsa Karoun" by Bambir, a folk rock band from Armenia. The crowd went wild with every song, singing along, jumping, moshing, and waving flags. System of a Down delivered a phenomenal performance that proved why they are one of the most influential and innovative bands in rock and metal history.

      -

      Biffy Clyro

      -

      Their history and achievements

      -

      Biffy Clyro is a Scottish rock band that formed in 1995 in Kilmarnock. They are known for their eclectic and experimental style that blends elements of alternative rock, indie rock, progressive rock, pop rock, and hard rock. They have released eight studio albums: Blackened Sky (2002), The Vertigo of Bliss (2003), Infinity Land (2004), Puzzle (2007), Only Revolutions (2009), Opposites (2013), Ellipsis (2016), and A Celebration of Endings (2020). They have sold over three million records worldwide and won several awards, such as the NME Award for Best British Band in 2011 and the Kerrang! Award for Best Album in 2013. They have also headlined major festivals, such as Reading and Leeds, T in the Park, Isle of Wight, and Glastonbury.

      -

      Their performance and setlist

      -

      Biffy Clyro headlined the main stage on Saturday, June 10, 2017. They played a 19-song setlist that showcased their latest album Ellipsis, as well as some of their classic hits. They opened with the catchy "Wolves of Winter" and closed with the anthemic "Stingin' Belle". In between, they played some of their most beloved and energetic songs, such as "Biblical", "Mountains", "Many of Horror", "Bubbles", "Black Chandelier", "That Golden Rule", and "The Captain". They also played some newer and deeper cuts, such as "Friends and Enemies", "Animal Style", "Re-arrange", and "God & Satan". They also surprised the crowd by playing a cover of David Bowie's "Heroes" as a tribute to the late legend. The crowd sang along with every word, clapped along with every beat, and cheered along with every riff. Biffy Clyro delivered a stunning performance that proved why they are one of the most successful and versatile bands in rock history.

      -

      Aerosmith

      -

      Their history and achievements

      -

      Aerosmith is an American rock band that formed in 1970 in Boston. They are known for their classic style that combines elements of hard rock, blues rock, glam rock, and heavy metal. They have released 15 studio albums: Aerosmith (1973), Get Your Wings (1974), Toys in the Attic (1975), Rocks (1976), Draw the Line (1977), Night in the Ruts (1979), Rock in a Hard Place (1982), Done with Mirrors (1985), Permanent Vacation (1987), Pump (1989), Get a Grip (1993), Nine Lives (1997), Just Push Play (2001), Honkin' on Bobo (2004), and Music from Another Dimension! (2012). They have sold over 150 million records worldwide and won numerous awards, such as four Grammy Awards, six American Music Awards, ten MTV Video Music Awards, and four Billboard Music Awards. They have also been inducted into the Rock and Roll Hall of Fame in 2001 and the Songwriters Hall of Fame in 2013. They have also headlined countless tours and festivals around the world.

      -

      download festival 2017 headliners
      -download 2017 bands and stages
      -download 2017 tickets and camping
      -download 2017 system of a down
      -download 2017 aerosmith farewell tour
      -download 2017 biffy clyro setlist
      -download 2017 donington park map
      -download 2017 rob zombie show
      -download 2017 prophets of rage performance
      -download 2017 slayer thrash metal
      -download 2017 afi punk rock
      -download 2017 of mice and men lineup
      -download 2017 five finger death punch
      -download 2017 alter bridge live
      -download 2017 simple plan pop punk
      -download 2017 sum 41 reunion
      -download 2017 good charlotte comeback
      -download 2017 steel panther comedy
      -download 2017 mastodon progressive metal
      -download 2017 coheed and cambria concept
      -download 2017 opeth swedish metal
      -download 2017 ministry industrial rock
      -download 2017 motionless in white horror
      -download 2017 devil driver groove metal
      -download 2017 four year strong hardcore
      -download 2017 knuckle puck emo pop
      -download 2017 basement alternative rock
      -download 2017 crown the empire post-hardcore
      -download 2017 issues metalcore r&b
      -download 2017 every time i die southern rock
      -download 2017 exodus bay area thrash
      -download 2017 lost society finnish thrash
      -download 2017 venom prison death metal
      -download 2017 code orange hardcore punk
      -download 2017 creeper gothic punk
      -download 2017 red fang stoner rock
      -download 2017 orange goblin doom metal
      -download 2017 clutch blues rock
      -download 2017 airbourne hard rock
      -download 2017 pierce the veil post-hardcore
      -download 2017 sleeping with sirens pop rock
      -download 2017 state champs pop punk
      -download 2017 neck deep pop punk
      -download 2017 the one hundred rap rock
      -download 2017 hacktivist djent rap metal
      -download 2017 kvelertak black n roll
      -download 2017 baroness sludge metal
      -download 2017 northlane djent metalcore
      -download 2017 in flames melodic death metal.

      -

      Their performance and setlist

      -

      Aerosmith headlined the main stage on Sunday, June 11, 2017. They played a 17-song setlist that featured some of their greatest hits from their five-decade career. They opened with the rocking "Let the Music Do the Talking" and closed with the iconic "Walk This Way". In between, they played some of their most famous and catchy songs, such as "Love in an Elevator", "Cryin'", "Livin' on the Edge", "Dude (Looks Like a Lady)", "Dream On", "Sweet Emotion", and "I Don't Want to Miss a Thing". They also played some older and deeper cuts, such as "Young Lust", "Rag Doll", "Janie's Got a Gun", and "Chip Away the Stone". They also delighted the crowd by playing a cover of Fleetwood Mac's "Oh Well" as a nod to their blues roots. The crowd danced along with every groove, screamed along with every chorus, and waved along with every ballad. Aerosmith delivered a legendary performance that proved why they are one of the most enduring and influential bands in rock and metal history.

      -

      The Other Highlights

      -

      Rob Zombie

      -

      Rob Zombie is an American musician and filmmaker who rose to fame as the lead singer of the metal band White Zombie. He later embarked on a successful solo career, releasing seven studio albums: Hellbilly Deluxe (1998), The Sinister Urge (2001), Educated Horses (2006), Hellbilly Deluxe 2 (2010), Venomous Rat Regeneration Vendor (2013), The Electric Warlock Acid Witch Satanic Orgy Celebration Dispenser (2016), and The Lunar Injection Kool Aid Eclipse Conspiracy (2021). He is also known for his horror movies, such as House of 1000 Corpses (2003), The Devil's Rejects (2005), Halloween (2007), The Lords of Salem (2012), and 3 from Hell (2019). He is renowned for his theatrical and energetic live shows, featuring elaborate costumes, props, pyrotechnics, and visuals.

      -

      Rob Zombie played on the main stage on Friday, June 9, 2017, right before System of a Down. He played a 12-song setlist that included some of his best solo songs, such as "Dragula", "Living Dead Girl", "Superbeast", "Dead City Radio and the New Gods of Supertown", and "Well, Everybody's Fucking in a U.F.O.". He also played some White Zombie classics, such as "Thunder Kiss '65", "More Human than Human", and "Super-Charger Heaven". He also played a cover of Alice Cooper's "School's Out" as a tribute to one of his idols. He was accompanied by his band, consisting of John 5 on guitar, Piggy D on bass, and Ginger Fish on drums. He also had some dancers and performers on stage, dressed as aliens, robots, monsters, and clowns. He interacted with the crowd, cracking jokes, throwing inflatable balls, and spraying foam. Rob Zombie delivered a spectacular performance that proved why he is one of the most entertaining and creative artists in rock and metal history.

      -

      Prophets of Rage

      -

      Prophets of Rage is an American rap rock supergroup that formed in 2016. It consists of three members of Rage Against the Machine: Tom Morello on guitar, Tim Commerford on bass, and Brad Wilk on drums; two members of Public Enemy: Chuck D on vocals and DJ Lord on turntables; and one member of Cypress Hill: B-Real on vocals. They are known for their political and social activism, as well as their powerful and energetic music that combines elements of rap, rock, metal, funk, and hip hop. They have released one EP: The Party's Over (2016) and one studio album: Prophets of Rage (2017). They have also performed at various protests and rallies, such as the Anti-Inaugural Ball in 2017 and the Make America Rage Again Tour in 2016.

      -

Prophets of Rage played on the main stage on Saturday, June 10, 2017, right before Biffy Clyro. They played a 14-song setlist that featured some of their original songs, such as "Prophets of Rage", "Unfuck the World", "Living on the 110", and "Hail to the Chief". They also played some covers of Rage Against the Machine, Public Enemy, and Cypress Hill songs, such as "Killing in the Name", "Fight the Power", "Insane in the Brain", and "(Rock) Superstar". They also played a cover of Audioslave's "Like a Stone" as a tribute to their late friend and former bandmate Chris Cornell. They were joined by Serj Tankian from System of a Down for two songs: "Like a Stone" and "Bulls on Parade". They also had some guest vocalists from other bands that played at Download 2017, such as Machine Gun Kelly, The Dillinger Escape Plan, and Nothing More. They interacted with the crowd, encouraging them to chant, clap, and jump. They also delivered some powerful messages about the state of the world, the importance of resistance, and the power of music. Prophets of Rage delivered an inspiring performance that proved why they are one of the most relevant and revolutionary bands in rock and metal history.

      -

      Slayer

      -

      Slayer is an American thrash metal band that formed in 1981 in Huntington Park, California. They are known for their fast, aggressive, and brutal style that influenced countless other metal bands. They are also one of the "Big Four" of thrash metal, along with Metallica, Megadeth, and Anthrax. They have released 12 studio albums: Show No Mercy (1983), Hell Awaits (1985), Reign in Blood (1986), South of Heaven (1988), Seasons in the Abyss (1990), Divine Intervention (1994), Undisputed Attitude (1996), Diabolus in Musica (1998), God Hates Us All (2001), Christ Illusion (2006), World Painted Blood (2009), and Repentless (2015). They have sold over 20 million records worldwide and won two Grammy Awards, one for "Eyes of the Insane" in 2007 and one for "Final Six" in 2008. They have also headlined numerous tours and festivals around the world.

      -

      Slayer played on the main stage on Sunday, June 11, 2017, right before Aerosmith. They played a 14-song setlist that featured some of their most classic and iconic songs, such as "Raining Blood", "Angel of Death", "South of Heaven", "Seasons in the Abyss", "War Ensemble", and "Mandatory Suicide". They also played some newer and deeper cuts, such as "Repentless", "Disciple", "Hate Worldwide", and "Born of Fire". They were joined by Gary Holt on guitar, who replaced the late Jeff Hanneman in 2013. They also had Paul Bostaph on drums, who replaced Dave Lombardo in 2013. They were accompanied by a massive backdrop that displayed their logo, album covers, and images of war, violence, and death. They also had some pyrotechnics and smoke effects that added to the intensity of their show. The crowd headbanged along with every riff, shouted along with every lyric, and moshed along with every beat. Slayer delivered a relentless performance that proved why they are one of the most legendary and influential bands in rock and metal history.

      -

      The Atmosphere and Experience

      -

      The weather and the crowd

      -

      One of the things that made Download 2017 special was the weather and the crowd. Unlike previous years, when rain and mud were common occurrences, Download 2017 was blessed with sunny and warm weather throughout the weekend. The temperature ranged from 18°C to 25°C, making it comfortable for both the fans and the bands. The sky was clear and blue, creating a beautiful contrast with the green fields of Donington Park. The crowd was also in high spirits, enjoying the music, the sun, and the company of fellow rockers and metalheads. There were over 80,000 people who attended Download 2017, making it one of the largest editions of the festival ever. The crowd was diverse and inclusive, featuring people of different ages, genders, races, nationalities, backgrounds, and preferences. There were also people with disabilities who were accommodated by the festival's accessibility services. The crowd was friendly and respectful, helping each other out when needed, sharing food and drinks when offered, and making new friends when possible. The crowd was also energetic and enthusiastic, showing their appreciation for every band that played on stage.

      -

      The proposals and the memories

      -

      Another thing that made Download 2017 special was the proposals and the memories that were made during the weekend. There were at least four romantic proposals that took place at Download 2017, all of them involving rock and metal fans who decided to pop the question to their partners in front of thousands of witnesses. One of them was between a couple who met at Download 2016 and got engaged at Download 2017 during Steel Panther's set. Another one was between a couple who got engaged during Aerosmith's set after being together for seven years. Another one was between a couple who got engaged during Biffy Clyro's set after being together for four years. And another one was between a couple who got engaged during Prophets of Rage's set after being together for two years. All of them received cheers and congratulations from the crowd and the bands who witnessed their special moments.

      But the proposals were not the only memories that were made at Download 2017. There were also many other memorable moments that happened during the weekend, such as:

      -
        -
      • The surprise appearance of Serj Tankian from System of a Down during Prophets of Rage's set, who sang "Like a Stone" and "Bulls on Parade" with them.
      • -
      • The emotional tribute to Chris Cornell by Biffy Clyro, who dedicated "Black Chandelier" to him and asked the crowd to sing along.
      • -
      • The hilarious antics of Steel Panther, who made fun of themselves, the crowd, and other bands, and invited some fans on stage to dance and sing with them.
      • -
      • The epic finale of Aerosmith's set, when they played "Walk This Way" with Johnny Depp on guitar and invited some members of Extreme, Alter Bridge, and The Darkness on stage to join them.
      • -
      • The amazing performance of AFI, who played their first UK show in eight years and impressed the crowd with their energy and charisma.
      • -
      -

      The facilities and the improvements

      -

      The last thing that made Download 2017 special was the facilities and the improvements that were made by the festival organizers. After the previous year's disaster, when heavy rain and mud caused many problems for the fans and the bands, Download 2017 was much better prepared and equipped to deal with any potential issues. The festival had improved its drainage systems, ensuring that the fields would not turn into swamps. The festival had also increased the number of toilets, showers, water points, food stalls, bars, security staff, medical staff, and volunteers, ensuring that the fans would have a comfortable and safe experience. The festival had also added some new features and services, such as free Wi-Fi zones, cashless payment systems, lockers, phone charging stations, and a cinema tent. The festival had also improved its accessibility services, providing wheelchair platforms, viewing areas, sign language interpreters, hearing loops, and assistance dogs for people with disabilities. The festival had also improved its environmental impact, using renewable energy sources, recycling waste materials, and donating leftover food to charities. The festival had also improved its entertainment value, offering more activities and attractions for the fans, such as fairground rides, comedy shows, wrestling matches, silent discos, karaoke sessions, and yoga classes. The festival had also improved its diversity and inclusivity, featuring more female artists, LGBTQ+ artists, and artists of color on its lineup. All in all, Download 2017 was a well-organized and well-executed festival that catered to the needs and wants of its fans.

      -

      Conclusion

      -

      Summary of the main points

      -

      In conclusion, Download 2017 was a rock and metal festival to remember for many reasons. It had three amazing headliners: System of a Down, Biffy Clyro, and Aerosmith; who delivered stunning performances that left the crowd in awe. It had a stellar lineup of other bands and artists: Rob Zombie, Prophets of Rage, Slayer, and many more; who rocked the various stages throughout the weekend. It had a great atmosphere and experience for the fans: the weather was sunny and warm, the crowd was friendly and energetic, and there were even some romantic proposals. Moreover, the festival had improved its facilities and services: there were better drainage systems, more toilets, showers, water points, food stalls, bars, security staff, medical staff, etc. Download 2017 was a well-organized and well-executed festival that catered to the needs and wants of its fans.

      -

      Call to action for the readers

      -

      If you were at Download 2017, we hope that this article brought back some good memories and made you feel nostalgic and excited. If you were not at Download 2017, we hope that this article gave you a glimpse of what it was like and made you curious and interested. Either way, we hope that you enjoyed reading this article and learned something new about Download Festival and its amazing lineup. If you did, please share this article with your friends and family who love rock and metal music. And if you want to relive or experience Download Festival for yourself, make sure to check out their website and social media for the latest news and updates on their upcoming events. Who knows, maybe you will be one of the lucky ones who will witness the next rock and metal festival to remember.

      -

      FAQs

      -

      What is Download Festival?

      -

      Download Festival is an annual rock and metal festival that takes place at Donington Park in Leicestershire, England. It started in 2003 as a successor to the Monsters of Rock festival that ran from 1980 to 1996.

      -

      When was Download 2017?

      -

      Download 2017 was from Friday, June 9 to Sunday, June 11, 2017.

      -

      Who were the headliners of Download 2017?

      -

      The headliners of Download 2017 were System of a Down, Biffy Clyro, and Aerosmith.

      -

      How many people attended Download 2017?

      -

      There were over 80,000 people who attended Download 2017.

      -

      How can I find out more about Download Festival?

      -

      You can find out more about Download Festival by visiting their website (https://downloadfestival.co.uk/) or following them on Facebook (https://www.facebook.com/downloadfest), Twitter (https://twitter.com/DownloadFest), Instagram (https://www.instagram.com/downloadfest/), or YouTube (https://www.youtube.com/user/downloadfestival).

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/fcakyon/sahi-yolov5/app.py b/spaces/fcakyon/sahi-yolov5/app.py deleted file mode 100644 index a7fe480c3b038df4c304ed7299298b261efed9d3..0000000000000000000000000000000000000000 --- a/spaces/fcakyon/sahi-yolov5/app.py +++ /dev/null @@ -1,143 +0,0 @@ -import gradio as gr -import sahi.utils -from sahi import AutoDetectionModel -import sahi.predict -import sahi.slicing -from PIL import Image -import numpy - -IMAGE_SIZE = 640 - -# Images -sahi.utils.file.download_from_url( - "https://user-images.githubusercontent.com/34196005/142730935-2ace3999-a47b-49bb-83e0-2bdd509f1c90.jpg", - "apple_tree.jpg", -) -sahi.utils.file.download_from_url( - "https://user-images.githubusercontent.com/34196005/142730936-1b397756-52e5-43be-a949-42ec0134d5d8.jpg", - "highway.jpg", -) - -sahi.utils.file.download_from_url( - "https://user-images.githubusercontent.com/34196005/142742871-bf485f84-0355-43a3-be86-96b44e63c3a2.jpg", - "highway2.jpg", -) - -sahi.utils.file.download_from_url( - "https://user-images.githubusercontent.com/34196005/142742872-1fefcc4d-d7e6-4c43-bbb7-6b5982f7e4ba.jpg", - "highway3.jpg", -) - - -# Model -model = AutoDetectionModel.from_pretrained( - model_type="yolov5", model_path="yolov5s6.pt", device="cpu", confidence_threshold=0.5, image_size=IMAGE_SIZE -) - - -def sahi_yolo_inference( - image, - slice_height=512, - slice_width=512, - overlap_height_ratio=0.2, - overlap_width_ratio=0.2, - postprocess_type="NMS", - postprocess_match_metric="IOU", - postprocess_match_threshold=0.5, - postprocess_class_agnostic=False, -): - - image_width, image_height = image.size - sliced_bboxes = sahi.slicing.get_slice_bboxes( - image_height, - image_width, - slice_height, - slice_width, - False, - overlap_height_ratio, - overlap_width_ratio, - ) - if len(sliced_bboxes) > 60: - raise ValueError( - f"{len(sliced_bboxes)} slices are too much for huggingface spaces, try smaller slice size." 
- ) - - # standard inference - prediction_result_1 = sahi.predict.get_prediction( - image=image, detection_model=model - ) - print(image) - visual_result_1 = sahi.utils.cv.visualize_object_predictions( - image=numpy.array(image), - object_prediction_list=prediction_result_1.object_prediction_list, - ) - output_1 = Image.fromarray(visual_result_1["image"]) - - # sliced inference - prediction_result_2 = sahi.predict.get_sliced_prediction( - image=image, - detection_model=model, - slice_height=int(slice_height), - slice_width=int(slice_width), - overlap_height_ratio=overlap_height_ratio, - overlap_width_ratio=overlap_width_ratio, - postprocess_type=postprocess_type, - postprocess_match_metric=postprocess_match_metric, - postprocess_match_threshold=postprocess_match_threshold, - postprocess_class_agnostic=postprocess_class_agnostic, - ) - visual_result_2 = sahi.utils.cv.visualize_object_predictions( - image=numpy.array(image), - object_prediction_list=prediction_result_2.object_prediction_list, - ) - - output_2 = Image.fromarray(visual_result_2["image"]) - - return output_1, output_2 - - -inputs = [ - gr.Image(type="pil", label="Original Image"), - gr.Number(default=512, label="slice_height"), - gr.Number(default=512, label="slice_width"), - gr.Number(default=0.2, label="overlap_height_ratio"), - gr.Number(default=0.2, label="overlap_width_ratio"), - gr.Dropdown( - ["NMS", "GREEDYNMM"], - type="value", - value="NMS", - label="postprocess_type", - ), - gr.Dropdown( - ["IOU", "IOS"], type="value", default="IOU", label="postprocess_type" - ), - gr.Number(default=0.5, label="postprocess_match_threshold"), - gr.Checkbox(default=True, label="postprocess_class_agnostic"), -] - -outputs = [ - gr.Image(type="pil", label="YOLOv5s"), - gr.Image(type="pil", label="YOLOv5s + SAHI"), -] - -title = "Small Object Detection with SAHI + YOLOv5" -description = "SAHI + YOLOv5 demo for small object detection. Upload an image or click an example image to use." -article = "

      SAHI is a lightweight vision library for performing large scale object detection/ instance segmentation.. SAHI Github | SAHI Blog | YOLOv5 Github

      " -examples = [ - ["apple_tree.jpg", 256, 256, 0.2, 0.2, "NMS", "IOU", 0.4, True], - ["highway.jpg", 256, 256, 0.2, 0.2, "NMS", "IOU", 0.4, True], - ["highway2.jpg", 512, 512, 0.2, 0.2, "NMS", "IOU", 0.4, True], - ["highway3.jpg", 512, 512, 0.2, 0.2, "NMS", "IOU", 0.4, True], -] - -gr.Interface( - sahi_yolo_inference, - inputs, - outputs, - title=title, - description=description, - article=article, - examples=examples, - theme="huggingface", - cache_examples=True, -).launch(debug=True, enable_queue=True) diff --git a/spaces/fclong/summary/fengshen/examples/pretrain_t5/finetune_unimc_randeng_t5_char_57M.sh b/spaces/fclong/summary/fengshen/examples/pretrain_t5/finetune_unimc_randeng_t5_char_57M.sh deleted file mode 100644 index fccf833bdc954707bdc94d6bef3821239006a2c6..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/pretrain_t5/finetune_unimc_randeng_t5_char_57M.sh +++ /dev/null @@ -1,129 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=finetune_unimc_randeng_t5_char_57M -#SBATCH --nodes=1 -#SBATCH --ntasks-per-node=8 -#SBATCH --gres=gpu:8 # number of gpus -#SBATCH --cpus-per-task=32 # cpu-cores per task (>1 if multi-threaded tasks) -#SBATCH -o /cognitive_comp/ganruyi/experiments/randeng_t5_char_57M/%x-%j.log -#SBATCH -e /cognitive_comp/ganruyi/experiments/randeng_t5_char_57M/%x-%j.err - -set -x -e - -echo "START TIME: $(date)" -MICRO_BATCH_SIZE=64 -ROOT_DIR=/cognitive_comp/ganruyi/experiments/finetune_unimc_randeng_t5_char_57M/ -if [ ! -d ${ROOT_DIR} ];then - mkdir ${ROOT_DIR} - echo ${ROOT_DIR} created!!!!!!!!!!!!!! -else - echo ${ROOT_DIR} exist!!!!!!!!!!!!!!! -fi - -ZERO_STAGE=1 - -config_json="$ROOT_DIR/ds_config.finetune_unimc_randeng_t5_char_57M.$SLURM_JOBID.json" -export MASTER_PORT=$[RANDOM%10000+30000] -export CUDA_VISIBLE_DEVICES='6' - -cat < $config_json -{ - "train_micro_batch_size_per_gpu": ${MICRO_BATCH_SIZE}, - "steps_per_print": 100, - "gradient_clipping": 1.0, - "zero_optimization": { - "stage": $ZERO_STAGE, - "contiguous_gradients": false, - "overlap_comm": true, - "reduce_scatter": true, - "reduce_bucket_size": 50000000, - "allgather_bucket_size": 500000000 - }, - "optimizer": { - "type": "Adam", - "params": { - "lr": 1e-4, - "weight_decay": 1e-2 - } - }, - "scheduler": { - "params": { - "warmup_max_lr": 1e-04, - "warmup_min_lr": 1e-05, - "total_num_steps": 240000, - "warmup_num_steps" : 10000 - }, - "type": "WarmupDecayLR" - }, - "zero_allow_untested_optimizer": false, - "fp16": { - "enabled": true, - "loss_scale": 0, - "loss_scale_window": 1000, - "hysteresis": 2, - "min_loss_scale": 1 - }, - "activation_checkpointing": { - "partition_activations": false, - "contiguous_memory_optimization": false - }, - "wall_clock_breakdown": false -} -EOT - -export PL_DEEPSPEED_CONFIG_PATH=$config_json -export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions -# strategy=ddp -strategy=deepspeed_stage_1 - -TRAINER_ARGS=" - --max_epochs 1 \ - --gpus 1 \ - --num_nodes 1 \ - --strategy ${strategy} \ - --default_root_dir $ROOT_DIR \ - --dirpath $ROOT_DIR/ckpt \ - --save_top_k 3 \ - --every_n_train_steps 100000 \ - --monitor train_loss \ - --mode min \ - --save_last \ - --val_check_interval 0.1 \ - --dataset_num_workers 4 \ - --dataloader_num_workers 4 \ - --replace_sampler_ddp False \ -" -# --accumulate_grad_batches 8 \ -TRAIN_DATA_DIR=/cognitive_comp/yangping/data/unidata/multiplechoice/pretraining_alldata/alldata/train.json -VALID_DATA_DIR=/cognitive_comp/yangping/data/unidata/multiplechoice/pretraining_alldata/alldata/dev.json - 
-DATA_ARGS=" - --train_batchsize $MICRO_BATCH_SIZE \ - --valid_batchsize $MICRO_BATCH_SIZE \ - --train_data_path ${TRAIN_DATA_DIR} \ - --valid_data_path ${TRAIN_DATA_DIR} \ - --max_seq_length 512 \ -" - -MODEL_ARGS=" - --pretrained_model_path /cognitive_comp/ganruyi/experiments/randeng_t5_char_57M/randeng_t5_char_57M \ - --tokenizer_type bert_tokenizer \ -" - -SCRIPTS_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/pretrain_t5/finetune_t5.py - -export CMD=" \ - $SCRIPTS_PATH \ - $TRAINER_ARGS \ - $MODEL_ARGS \ - $DATA_ARGS \ - " - -echo $CMD -/home/ganruyi/anaconda3/bin/python $CMD -# SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif -# srun singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH bash -c '/home/ganruyi/anaconda3/bin/python $CMD' - -# source activate base -# python $CMD -# srun --nodes=1 --gres=gpu:8 --ntasks-per-node=8 --cpus-per-task=30 --jobid=171866 -e %x-%j.err -o %x-%j.log python $CMD - diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Fitzroy Readers 51-60 and Master Phonics in English.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Fitzroy Readers 51-60 and Master Phonics in English.md deleted file mode 100644 index a394a1f8bdd5ac53a1bb849a5874db7160965121..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Fitzroy Readers 51-60 and Master Phonics in English.md +++ /dev/null @@ -1,153 +0,0 @@ -
      -

      Fitzroy Readers 51-60: The Ultimate Guide for Parents and Teachers

      -

      If you are looking for a way to help your child or student improve their reading skills, you might have heard of Fitzroy Readers 51-60. These are a set of ten phonic readers that are designed for year six level students and beyond. They are part of the Fitzroy Readers series, which is a popular and effective literacy program that has been used by thousands of schools and families around the world.

      -

      fitzroy readers 51-60 download


      Downloadhttps://gohhs.com/2uPrUv



      -

      But what exactly are Fitzroy Readers 51-60 and how do they work? How can you use them effectively to boost your child or student's confidence and fluency in reading? And where can you buy them at the best price?

      -

      In this article, we will answer all these questions and more. We will give you a comprehensive guide on everything you need to know about Fitzroy Readers 51-60, including their features, benefits, tips, strategies, and resources. By the end of this article, you will be able to decide if Fitzroy Readers 51-60 are right for your child or student, and how to get started with them today.

      -

      What are Fitzroy Readers 51-60?

      -

      Fitzroy Readers 51-60 are a boxed set of ten phonic readers that are suitable for year six level students and beyond. They are the ninth and final pack in the Fitzroy Readers series, which is a comprehensive literacy program that covers all the essential sounds and words that children need to learn to read.

      -

      Fitzroy Readers 51-60 have several features and benefits that make them an ideal choice for parents and teachers who want to help their children or students achieve reading success. Here are some of them:

      -
        -
      • They use a phonic approach, which means that they teach children how to decode words by sounding out the letters and blending them together. This helps children develop phonemic awareness, which is the ability to hear and manipulate the sounds in words.
      • -
      • They introduce one or two new sounds in each reader, many of which are single-letter extra sounds, such as y in myth, o in won, and a in path. These sounds are often tricky for children to master, but they are essential for reading more advanced words and texts.
      • -
      • They also introduce up to ten special words (sight words) in each reader, such as cello, vaguely, and echoed. These are words that cannot be sounded out easily or have irregular spellings. They are important for expanding children's vocabulary and comprehension.
      • -
      • They have engaging stories that capture children's interest and imagination. The stories feature familiar characters from previous readers, such as Ann, Kim, Rob, Nell, Max, Pam, Sam, Tim, Tom, Ben, Jen, Jim, Meg, Peg, Ted, Fred, Rex, Dex, Dot, Pat, Mat, Nat, Kit, Kat, Sid, Liz, Gus, Russ, Sal, Hal, Val, Al, Mel, Nel and new ones such as Zed the Zebra. The stories also cover a range of topics and genres, such as adventure, fantasy, mystery, humor, science fiction, and history.
      • -

        How do Fitzroy Readers 51-60 work?

        -

        Fitzroy Readers 51-60 work by following a systematic and progressive phonic approach. This means that they teach children how to read by building on the sounds and words that they have already learned in previous readers. Each reader introduces one or two new sounds and up to ten special words, which are clearly marked and explained at the beginning of each book. The new sounds and words are then repeated throughout the story, so that children can practice and reinforce them in context.

        -

        Here is a table that shows the new sounds and special words introduced in each reader:

        -

        fitzroy readers 51-60 boxed set
        -fitzroy readers 51-60 phonic readers grade 5-6
        -fitzroy readers 51-60 pdf
        -fitzroy readers 51-60 free download
        -fitzroy readers 51-60 audio
        -fitzroy readers 51-60 word skills
        -fitzroy readers 51-60 answer books
        -fitzroy readers 51-60 pack
        -fitzroy readers 51-60 online
        -fitzroy readers 51-60 ebook
        -fitzroy readers 51-60 single-letter extra sounds
        -fitzroy readers 51-60 sight words
        -fitzroy readers 51-60 year six level
        -fitzroy readers 51-60 full literacy program
        -fitzroy readers 51-60 big books
        -fitzroy readers 51-60 saar education
        -fitzroy readers 51-60 new scientist
        -fitzroy readers 51-60 final pack in the series
        -fitzroy readers 51-60 buy online
        -fitzroy readers 51-60 best price
        -fitzroy readers 51-60 reviews
        -fitzroy readers 51-60 sample pages
        -fitzroy readers 51-60 video tutorial
        -fitzroy readers 51-60 how to use
        -fitzroy readers 51-60 benefits and features
        -fitzroy readers 51-60 comparison with other phonic readers
        -fitzroy readers 51-60 testimonials and feedback
        -fitzroy readers 51-60 discount code and coupon
        -fitzroy readers 51-60 shipping and delivery
        -fitzroy readers 51-60 refund and return policy


        How to use Fitzroy Readers 51-60 effectively?

        -

        Fitzroy Readers 51-60 are easy to use and can be adapted to suit different needs and preferences. However, there are some general tips and strategies that can help you make the most of them and ensure that your child or student enjoys reading and learns effectively. Here are some of them:

        -

        Tips and strategies for reading with your child or student

        -
          -
        • Before reading, review the new sounds and words at the beginning of each reader. You can ask your child or student to repeat them after you, or to spell them out loud. You can also explain the meaning of any unfamiliar words or concepts.
        • -
        • During reading, encourage your child or student to sound out and blend the words as they read. You can also point out any patterns or rules that apply to the new sounds or words, such as silent letters, vowel teams, or suffixes.
        • -
        • After reading, ask your child or student some questions about the story to check their comprehension and recall. You can also discuss their opinions, feelings, or predictions about the story or the characters.
        • -
        • Praise your child or student for their efforts and achievements. You can also reward them with stickers, certificates, or other incentives.
        • -
        • Repeat the reading process with the same reader until your child or student can read it fluently and confidently. You can also ask them to read it aloud to you, to another family member, or to a friend.
        • -
        • Move on to the next reader when your child or student is ready. You can also review previous readers from time to time to reinforce their learning.
        • -
        -

        How to assess your child or student's progress and comprehension

        -
          -
        • You can use the Fitzroy Readers Assessment Tests to measure your child or student's progress and comprehension. These are online tests that are aligned with the Fitzroy Readers series and cover all the sounds and words that are taught in each pack. They are designed to be fun and interactive, and they provide instant feedback and reports.
        • -
        • You can also use informal methods to assess your child or student's progress and comprehension, such as observing their reading behavior, asking them questions, listening to their feedback, or giving them quizzes or games.
        • -
        -

        How to supplement Fitzroy Readers 51-60 with other resources

        -
          -
        • You can supplement Fitzroy Readers 51-60 with other resources that are compatible with the phonic approach, such as the Fitzroy Word Skills, the Fitzroy Spelling, the Fitzroy Talking Books, and the Fitzroy Games. These are additional materials that can help your child or student practice and consolidate their reading, writing, spelling, and listening skills.
        • -
        • You can also supplement Fitzroy Readers 51-60 with other resources that are relevant to the topics and genres of the stories, such as books, videos, websites, or activities. These can help your child or student expand their knowledge and interest in different subjects and themes.
        • -
        -

        Where to buy Fitzroy Readers 51-60?

        -

        If you are interested in buying Fitzroy Readers 51-60, you have several options to choose from. You can buy them online or offline, depending on your preference and convenience. Here are some of the most common options and their prices:

        -
| Reader | New Sounds | Special Words |
|---|---|---|
| 51: Zed the Zebra | y in myth, o in won | myth, won, zebra, Africa, safari, lion, rhino, hippo, crocodile, giraffe |
| 52: The Big Race | a in path, e in he | path, he, race, place, space, face, pace, lace, trace, grace |
| 53: The Magic Flute | i in find, u in put | find, put, flute, magic, music, tune, rude, mood, food, good |
| 54: The Lost City | o in cold, u in full | cold, full, city, lost, gold, old, bold, hold, sold, told |
| 55: The Secret Code | e in me, i in ski | me, ski, code, secret, spy, clue, solve, mystery, message, puzzle |
FAQs

Q: Who are Fitzroy Readers 51-60 suitable for?

A: Fitzroy Readers 51-60 are suitable for year six level students and beyond. They are the ninth and final pack in the Fitzroy Readers series, which covers all the essential sounds and words that children need to learn to read from kindergarten to year six. However, they can also be used by older students or adults who need to improve their reading skills or learn English as a second language.

        -

        Q: What are the benefits of using Fitzroy Readers 51-60?

        -

        A: Fitzroy Readers 51-60 have many benefits for children and students who want to improve their reading skills. Some of them are:

        -
          -
        • They help children develop phonemic awareness, which is the ability to hear and manipulate the sounds in words.
        • -
        • They teach children how to decode words by sounding out the letters and blending them together.
        • -
        • They introduce children to new sounds and words that are essential for reading more advanced words and texts.
        • -
        • They expand children's vocabulary and comprehension by exposing them to different topics and genres.
        • -
        • They boost children's confidence and fluency in reading by providing them with engaging stories that capture their interest and imagination.
        • -
        • They prepare children for further reading and learning by giving them a solid foundation in literacy.
        • -
        -

        Q: How can I get more information or support about Fitzroy Readers 51-60?

        -

        A: If you have any questions or concerns about Fitzroy Readers 51-60, you can contact Fitzroy Readers directly through their website, email, phone, or social media. They have a friendly and helpful team that can assist you with any queries or issues. You can also access other resources and support from Fitzroy Readers, such as their blog, newsletter, videos, testimonials, and FAQs.

        -

        Q: Are there any alternatives or competitors to Fitzroy Readers 51-60?

        -

        A: There are many other phonic readers available in the market, such as Jolly Phonics, Oxford Reading Tree, Sound Waves, and Hooked on Phonics. However, Fitzroy Readers 51-60 have some unique features and advantages that make them stand out from the rest. Some of them are:

        -
          -
        • They use a systematic and progressive phonic approach that covers all the essential sounds and words that children need to learn to read.
        • -
        • They introduce one or two new sounds and up to ten special words in each reader, which are then repeated throughout the story.
        • -
        • They have engaging stories that feature familiar characters from previous readers and cover a range of topics and genres.
        • -
        • They are easy to use and can be adapted to suit different needs and preferences.
        • -
        • They are affordable and accessible, as they can be bought online or offline from various sources.
        • -
        -

        Q: What are some of the challenges or drawbacks of using Fitzroy Readers 51-60?

        -

        A: Fitzroy Readers 51-60 are generally well-received and praised by parents, teachers, and students who use them. However, they may also have some challenges or drawbacks that you need to be aware of. Some of them are:

        -
          -
        • They may not suit every child or student's learning style or preference, as some may find them too repetitive, boring, or difficult.
        • -
        • They may not cover every sound or word that your child or student may encounter in other texts, as they focus on the most common and essential ones.
        • -
        • They may not be compatible with some other literacy programs or curricula that use a different phonic approach or sequence.
        • -
        • They may require some guidance and support from parents or teachers, especially for the new sounds and words, as they are not self-explanatory or intuitive.
        • -
        -

        These challenges or drawbacks are not insurmountable, and they can be overcome with some adjustments and adaptations. You can also consult Fitzroy Readers or other experts for advice and assistance if you encounter any problems or difficulties.

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Teriyaki Boyz Tokyo Drift MP3 for Free - The Ultimate Fast and Furious Soundtrack.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Teriyaki Boyz Tokyo Drift MP3 for Free - The Ultimate Fast and Furious Soundtrack.md deleted file mode 100644 index 3e38d89302cd96728bf4d4bc7c661c22dbbe11f8..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Teriyaki Boyz Tokyo Drift MP3 for Free - The Ultimate Fast and Furious Soundtrack.md +++ /dev/null @@ -1,121 +0,0 @@ -
        -

        Download Teriyaki Boyz Tokyo Drift: How to Enjoy the Iconic Song from The Fast and the Furious

        -

        If you are a fan of the Fast and Furious franchise, you probably know the song "Tokyo Drift" by Teriyaki Boyz. It is the main theme of the 2006 movie The Fast and the Furious: Tokyo Drift, and it has become a cult classic among car enthusiasts and hip hop lovers. But do you know how to download Teriyaki Boyz Tokyo Drift legally and enjoy it on your devices? In this article, we will tell you everything you need to know about this song, why you should download it, and how to do it without breaking any laws.

        -

        download teriyaki boyz tokyo drift


        DOWNLOAD ✏ ✏ ✏ https://gohhs.com/2uPscu



        -

        What is Teriyaki Boyz Tokyo Drift?

        -

        "Tokyo Drift" is a single by Japanese hip hop group Teriyaki Boyz, composed of Verbal, Wise, Ilmari, Ryo-Z, and DJ Nigo. It was produced by The Neptunes, a famous American duo of Pharrell Williams and Chad Hugo. The song features on the soundtrack of the movie The Fast and the Furious: Tokyo Drift, as well as on the band's second album Serious Japanese.

        -

        The background and popularity of the song

        -

        The song was released in 2006, along with the movie. It was inspired by the Japanese street racing culture, especially the drifting technique, which involves sliding the car sideways around corners. The song mixes English and Japanese lyrics, as well as some Arabic words. It has a catchy beat and a catchy chorus that goes "I wonder if you know how they live in Tokyo. If you see me then you mean it then you know you have to go. Fast and furious (drift, drift, drift). Fast and furious (drift, drift, drift)."

        -

        The song became an instant hit among fans of the movie and the genre. It was praised as one of the best songs from the Fast and Furious franchise [^1] [^2] as well as a "badass driving song". [^3] It has also spawned many remixes, covers, parodies, memes, and viral videos over the years. Some examples are:

        -

        download teriyaki boyz tokyo drift mp3
        -download teriyaki boyz tokyo drift ringtone
        -download teriyaki boyz tokyo drift lyrics
        -download teriyaki boyz tokyo drift video
        -download teriyaki boyz tokyo drift instrumental
        -download teriyaki boyz tokyo drift remix
        -download teriyaki boyz tokyo drift 320kbps
        -download teriyaki boyz tokyo drift song
        -download teriyaki boyz tokyo drift soundtrack
        -download teriyaki boyz tokyo drift album
        -download teriyaki boyz tokyo drift fast and furious
        -download teriyaki boyz tokyo drift bass boosted
        -download teriyaki boyz tokyo drift music
        -download teriyaki boyz tokyo drift full song
        -download teriyaki boyz tokyo drift original
        -download teriyaki boyz tokyo drift free
        -download teriyaki boyz tokyo drift online
        -download teriyaki boyz tokyo drift audio
        -download teriyaki boyz tokyo drift karaoke
        -download teriyaki boyz tokyo drift spotify
        -download teriyaki boyz tokyo drift youtube
        -download teriyaki boyz tokyo drift hd
        -download teriyaki boyz tokyo drift 8d audio
        -download teriyaki boyz tokyo drift nightcore
        -download teriyaki boyz tokyo drift trap remix
        -download teriyaki boyz tokyo drift extended version
        -download teriyaki boyz tokyo drift live performance
        -download teriyaki boyz tokyo drift dj mix
        -download teriyaki boyz tokyo drift mashup
        -download teriyaki boyz tokyo drift cover
        -download teriyaki boyz tokyo drift dance choreography
        -download teriyaki boyz tokyo drift guitar tabs
        -download teriyaki boyz tokyo drift piano sheet music
        -download teriyaki boyz tokyo drift flute notes
        -download teriyaki boyz tokyo drift violin version
        -download teriyaki boyz tokyo drift edm remix
        -download teriyaki boyz tokyo drift tiktok trend
        -download teriyaki boyz tokyo drift whatsapp status
        -download teriyaki boyz tokyo drift reaction video
        -download teriyaki boyz tokyo drift behind the scenes

        -
          -
        • The official remix featuring Pusha T and Fam-Lay with new verses from Teriyaki Boyz.
        • -
        • The "Tokyo Drift Freestyle" by Indonesian rapper Rich Brian during the COVID-19 pandemic.
        • -
        • The "T.D" by Lil Yachty featuring Tierra Whack, Tyler, The Creator, and ASAP Rocky, which heavily samples "Tokyo Drift".
        • -
        • The "Jailbreak the Tesla" by Injury Reserve, which interpolates the melody of "Tokyo Drift".
        • -
        • The TikTok trend of people "drifting" across their hardwood floors.
        • -
        -

        The lyrics and meaning of the song

        -

        The lyrics of "Tokyo Drift" are mostly about bragging about their cars, their skills, their money, and their lifestyle. They also reference some aspects of Japanese culture, such as sushi, samurai, karate, manga, anime, geisha, Godzilla, etc. They also use some slang words and phrases that may not be familiar to everyone. Here are some examples:

        -
          -
        • "Gaijin" means foreigner or outsider in Japanese.
        • -
        • "Wari wari" means sorry or excuse me in Japanese.
        • -
        • "Habibi " means my dear or my love in Arabic.
        • -
        • "Mashallah" means God has willed it or God bless in Arabic.
        • -
        • "Kasha" means cash or money in Japanese.
        • -
        • "Moshi moshi" means hello or excuse me in Japanese.
        • -
        -

The song celebrates the thrill and excitement of drifting and racing in Tokyo, while showing off the group's coolness and swagger. It also reflects the multicultural and globalized nature of the Fast and Furious franchise, which features characters and locations from different countries and backgrounds.

        -

        Why should you download Teriyaki Boyz Tokyo Drift?

        -

        Now that you know more about the song, you may be wondering why you should download it. There are many reasons why you should own a copy of this song, such as:

        -

        The benefits of owning your music

        -

        Downloading music legally means that you have the right to listen to it anytime, anywhere, and on any device. You don't have to worry about streaming issues, data usage, ads, or subscription fees. You also support the artists and the industry by paying for their work. Owning your music also gives you more control over your playlist, your mood, and your preferences. You can create your own mixtapes, share them with your friends, or use them for your projects.

        -

        The ways to use the song for entertainment

        -

        "Tokyo Drift" is a versatile song that can be used for various purposes and occasions. Here are some examples of how you can enjoy this song:

        -
          -
        • Play it while driving or riding your car, bike, skateboard, or rollerblades. Feel the adrenaline rush as you speed up, turn, and drift along the road.
        • -
        • Play it while working out, jogging, dancing, or doing any physical activity. Boost your energy and motivation with the upbeat tempo and catchy chorus.
        • -
        • Play it while watching or playing video games, especially racing games. Enhance your gaming experience with the immersive sound and atmosphere.
        • -
        • Play it while hosting or attending a party, a barbecue, a picnic, or any social gathering. Spice up the mood and the vibe with the fun and funky tune.
        • -
        • Play it while traveling or exploring new places. Experience the culture and the scenery with the exotic and diverse elements of the song.
        • -
        -

        How to download Teriyaki Boyz Tokyo Drift legally?

        -

Now that you know why you should download "Tokyo Drift", you may be wondering how to do it legally. Many sites offer free or paid music downloads, but not all of them are reliable or safe. To help you find the best options, we have compiled a list of some of the best free and paid sites to download music legally.

        -

        The best free sites to download music legally

        -

        If you are looking for free downloads of music legally, here are some of the best sites to check out:

        -

        Bandcamp

        -

        Bandcamp is a platform that allows independent artists and labels to upload and sell their music directly to fans. You can find a wide range of genres and styles on Bandcamp, from rock to rap to electronic to folk. Some artists offer their music for free or for a name-your-price option, while others charge a fixed amount. You can stream the music online or download it in various formats, such as MP3, FLAC, WAV, etc. You can also support the artists by buying their merchandise or leaving tips.

        -

        DatPiff

        -

        DatPiff is a site that specializes in hip hop mixtapes and albums. You can find thousands of free downloads from both mainstream and underground artists on DatPiff, including Teriyaki Boyz. You can stream the music online or download it in MP3 format. You can also rate, comment, and share the music with other users.

        -

        Free Music Archive

        -

        Free Music Archive is a site that offers free downloads of high-quality music from various genres and sources. You can find music from independent artists, netlabels, radio stations, podcasts, public domain projects, etc. on Free Music Archive. You can stream the music online or download it in MP3 format. You can also browse by genre, mood, license type, etc.

        -

        The Internet Archive

        -

The Internet Archive is a site that preserves and provides access to digital content from various media and formats. You can find millions of free downloads of music from various genres and eras on The Internet Archive, including "Tokyo Drift". You can stream the music online or download it in various formats, such as MP3, OGG, FLAC, etc. You can also browse by collection, creator, date, etc.

        -

        The best paid sites to download music legally

        -

        If you are willing to pay for downloads of music legally, here are some of the best sites to check out:

        -

        iTunes

        -

        iTunes is a platform that offers downloads of music, movies, TV shows, podcasts, audiobooks, etc. from various genres and sources. You can find "Tokyo Drift" on iTunes for $1.29. You can download it in AAC format, which is compatible with Apple devices and software. You can also sync your music library across your devices with iCloud.

        -

        Amazon Music

        -

        Amazon Music is a platform that offers downloads of music, movies, TV shows, podcasts, audiobooks, etc. from various genres and sources. You can find "Tokyo Drift" on Amazon Music for $1.29. You can download it in MP3 format, which is compatible with most devices and software. You can also access your music library online or offline with the Amazon Music app.

        -

        Google Play Music

        -

        Google Play Music is a platform that offers downloads of music, movies, TV shows, podcasts, audiobooks, etc. from various genres and sources. You can find "Tokyo Drift" on Google Play Music for $1.29. You can download it in MP3 format, which is compatible with most devices and software. You can also stream your music library online or offline with the Google Play Music app.

        -

        Conclusion

        -

        "Tokyo Drift" by Teriyaki Boyz is a iconic song from the Fast and Furious franchise that has become a cult classic among car enthusiasts and hip hop lovers. It is a fun and funky tune that celebrates the thrill and excitement of drifting and racing in Tokyo. It also reflects the multicultural and globalized nature of the franchise, which features characters and locations from different countries and backgrounds.

        -

        If you want to enjoy this song on your devices, you should download it legally from one of the sites we have listed above. Whether you choose a free or a paid site, you will be able to listen to the song anytime, anywhere, and on any device. You will also support the artists and the industry by paying for their work.

        -

        So what are you waiting for? Download Teriyaki Boyz Tokyo Drift today and have some fun!

        -

        FAQs

        -

        Here are some frequently asked questions about "Tokyo Drift" by Teriyaki Boyz:

        -
          -
        1. Who are Teriyaki Boyz?
        2. -

          Teriyaki Boyz are a Japanese hip hop group composed of Verbal, Wise, Ilmari, Ryo-Z, and DJ Nigo. They are known for their collaborations with American producers such as The Neptunes, Kanye West, Mark Ronson, etc.

          -
        3. What is the name of the movie that features "Tokyo Drift"?
        4. -

          The name of the movie that features "Tokyo Drift" is The Fast and the Furious: Tokyo Drift. It is the third installment of the Fast and Furious franchise, released in 2006.

          -
        5. What is drifting?
        6. -

          Drifting is a driving technique that involves sliding the car sideways around corners. It is popular among street racers and stunt drivers.

          -
        7. What are some other songs from the Fast and Furious franchise?
        8. -

          Some other songs from the Fast and Furious franchise are "See You Again" by Wiz Khalifa feat. Charlie Puth, "We Own It" by 2 Chainz feat. Wiz Khalifa, "Danza Kuduro" by Don Omar feat. Lucenzo, "Gasolina" by Daddy Yankee feat. Lil Jon, etc.

          -
        9. Where can I find more information about "Tokyo Drift" by Teriyaki Boyz?
        10. -

You can find more information about "Tokyo Drift" by Teriyaki Boyz on their official website, their Wikipedia page, or their YouTube channel.

          -
- [^1]: https://www.billboard.com/articles/news/9355120/fast-furious-soundtrack-songs-ranked
[^2]: https://www.complex.com/music/2017/04/fast-and-furious-soundtrack-ranking/
[^3]: https://www.thrillist.com/entertainment/nation/best-driving-songs
Official website: http://www.bape.com/teriyakiboyz/
Wikipedia: https://en.wikipedia.org/wiki/Teriyaki_

        401be4b1e0
        -
        -
        \ No newline at end of file diff --git a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/modules/streaming.py b/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/modules/streaming.py deleted file mode 100644 index fdbdf5e90fc0c6560873d66bf273460b38e5ed7e..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/modules/streaming.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Streaming module API that should be implemented by all Streaming components, -""" - -from contextlib import contextmanager -import typing as tp -from torch import nn -import torch - - -State = tp.Dict[str, torch.Tensor] - - -class StreamingModule(nn.Module): - """Common API for streaming components. - - Each streaming component has a streaming state, which is just a dict[str, Tensor]. - By convention, the first dim of each tensor must be the batch size. - Don't use dots in the key names, as this would clash with submodules - (like in state_dict). - - If `self._is_streaming` is True, the component should use and remember - the proper state inside `self._streaming_state`. - - To set a streaming component in streaming state, use - - with module.streaming(): - ... - - This will automatically reset the streaming state when exiting the context manager. - This also automatically propagates to all streaming children module. - - Some module might also implement the `StreamingModule.flush` method, although - this one is trickier, as all parents module must be StreamingModule and implement - it as well for it to work properly. See `StreamingSequential` after. - """ - def __init__(self) -> None: - super().__init__() - self._streaming_state: State = {} - self._is_streaming = False - - def _apply_named_streaming(self, fn: tp.Any): - for name, module in self.named_modules(): - if isinstance(module, StreamingModule): - fn(name, module) - - def _set_streaming(self, streaming: bool): - def _set_streaming(name, module): - module._is_streaming = streaming - self._apply_named_streaming(_set_streaming) - - @contextmanager - def streaming(self): - """Context manager to enter streaming mode. Reset streaming state on exit. - """ - self._set_streaming(True) - try: - yield - finally: - self._set_streaming(False) - self.reset_streaming() - - def reset_streaming(self): - """Reset the streaming state. - """ - def _reset(name: str, module: StreamingModule): - module._streaming_state.clear() - - self._apply_named_streaming(_reset) - - def get_streaming_state(self) -> State: - """Return the streaming state, including that of sub-modules. - """ - state: State = {} - - def _add(name: str, module: StreamingModule): - if name: - name += "." - for key, value in module._streaming_state.items(): - state[name + key] = value - - self._apply_named_streaming(_add) - return state - - def set_streaming_state(self, state: State): - """Set the streaming state, including that of sub-modules. - """ - state = dict(state) - - def _set(name: str, module: StreamingModule): - if name: - name += "." - module._streaming_state.clear() - for key, value in list(state.items()): - # complexity is not ideal here, but probably fine. - if key.startswith(name): - local_key = key[len(name):] - if '.' 
not in local_key: - module._streaming_state[local_key] = value - del state[key] - - self._apply_named_streaming(_set) - assert len(state) == 0, list(state.keys()) - - def flush(self, x: tp.Optional[torch.Tensor] = None): - """Flush any remaining outputs that were waiting for completion. - Typically, for convolutions, this will add the final padding - and process the last buffer. - - This should take an optional argument `x`, which will be provided - if a module before this one in the streaming pipeline has already - spitted out a flushed out buffer. - """ - if x is None: - return None - else: - return self(x) - - -class StreamingSequential(StreamingModule, nn.Sequential): - """A streaming compatible alternative of `nn.Sequential`. - """ - def flush(self, x: tp.Optional[torch.Tensor] = None): - for module in self: - if isinstance(module, StreamingModule): - x = module.flush(x) - elif x is not None: - x = module(x) - return x diff --git a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/clap/training/logger.py b/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/clap/training/logger.py deleted file mode 100644 index ac4634970fae6aacde2b7b808355dbd50c90ce73..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/clap/training/logger.py +++ /dev/null @@ -1,30 +0,0 @@ -import logging - - -def setup_logging(log_file, level, include_host=False): - if include_host: - import socket - - hostname = socket.gethostname() - formatter = logging.Formatter( - f"%(asctime)s | {hostname} | %(levelname)s | %(message)s", - datefmt="%Y-%m-%d,%H:%M:%S", - ) - else: - formatter = logging.Formatter( - "%(asctime)s | %(levelname)s | %(message)s", datefmt="%Y-%m-%d,%H:%M:%S" - ) - - logging.root.setLevel(level) - loggers = [logging.getLogger(name) for name in logging.root.manager.loggerDict] - for logger in loggers: - logger.setLevel(level) - - stream_handler = logging.StreamHandler() - stream_handler.setFormatter(formatter) - logging.root.addHandler(stream_handler) - - if log_file: - file_handler = logging.FileHandler(filename=log_file) - file_handler.setFormatter(formatter) - logging.root.addHandler(file_handler) diff --git a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/latent_diffusion/ddim.py b/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/latent_diffusion/ddim.py deleted file mode 100644 index 57ee8d302c77cb09bd73ef803ef9e715098feafc..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/latent_diffusion/ddim.py +++ /dev/null @@ -1,377 +0,0 @@ -"""SAMPLING ONLY.""" - -import torch -import numpy as np -from tqdm import tqdm - -from audioldm.latent_diffusion.util import ( - make_ddim_sampling_parameters, - make_ddim_timesteps, - noise_like, - extract_into_tensor, -) -import gradio as gr - -class DDIMSampler(object): - def __init__(self, model, schedule="linear", **kwargs): - super().__init__() - self.model = model - self.ddpm_num_timesteps = model.num_timesteps - self.schedule = schedule - - def register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - if attr.device != torch.device("cuda"): - attr = attr.to(torch.device("cuda")) - setattr(self, name, attr) - - def make_schedule( - self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0.0, verbose=True - ): - self.ddim_timesteps = make_ddim_timesteps( - ddim_discr_method=ddim_discretize, - num_ddim_timesteps=ddim_num_steps, - num_ddpm_timesteps=self.ddpm_num_timesteps, - 
verbose=verbose, - ) - alphas_cumprod = self.model.alphas_cumprod - assert ( - alphas_cumprod.shape[0] == self.ddpm_num_timesteps - ), "alphas have to be defined for each timestep" - to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device) - - self.register_buffer("betas", to_torch(self.model.betas)) - self.register_buffer("alphas_cumprod", to_torch(alphas_cumprod)) - self.register_buffer( - "alphas_cumprod_prev", to_torch(self.model.alphas_cumprod_prev) - ) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer( - "sqrt_alphas_cumprod", to_torch(np.sqrt(alphas_cumprod.cpu())) - ) - self.register_buffer( - "sqrt_one_minus_alphas_cumprod", - to_torch(np.sqrt(1.0 - alphas_cumprod.cpu())), - ) - self.register_buffer( - "log_one_minus_alphas_cumprod", to_torch(np.log(1.0 - alphas_cumprod.cpu())) - ) - self.register_buffer( - "sqrt_recip_alphas_cumprod", to_torch(np.sqrt(1.0 / alphas_cumprod.cpu())) - ) - self.register_buffer( - "sqrt_recipm1_alphas_cumprod", - to_torch(np.sqrt(1.0 / alphas_cumprod.cpu() - 1)), - ) - - # ddim sampling parameters - ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters( - alphacums=alphas_cumprod.cpu(), - ddim_timesteps=self.ddim_timesteps, - eta=ddim_eta, - verbose=verbose, - ) - self.register_buffer("ddim_sigmas", ddim_sigmas) - self.register_buffer("ddim_alphas", ddim_alphas) - self.register_buffer("ddim_alphas_prev", ddim_alphas_prev) - self.register_buffer("ddim_sqrt_one_minus_alphas", np.sqrt(1.0 - ddim_alphas)) - sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt( - (1 - self.alphas_cumprod_prev) - / (1 - self.alphas_cumprod) - * (1 - self.alphas_cumprod / self.alphas_cumprod_prev) - ) - self.register_buffer( - "ddim_sigmas_for_original_num_steps", sigmas_for_original_sampling_steps - ) - - @torch.no_grad() - def sample( - self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0.0, - mask=None, - x0=None, - temperature=1.0, - noise_dropout=0.0, - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1.0, - unconditional_conditioning=None, - # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... 
- **kwargs, - ): - if conditioning is not None: - if isinstance(conditioning, dict): - cbs = conditioning[list(conditioning.keys())[0]].shape[0] - if cbs != batch_size: - print( - f"Warning: Got {cbs} conditionings but batch-size is {batch_size}" - ) - else: - if conditioning.shape[0] != batch_size: - print( - f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}" - ) - - self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose) - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - samples, intermediates = self.ddim_sampling( - conditioning, - size, - callback=callback, - img_callback=img_callback, - quantize_denoised=quantize_x0, - mask=mask, - x0=x0, - ddim_use_original_steps=False, - noise_dropout=noise_dropout, - temperature=temperature, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - x_T=x_T, - log_every_t=log_every_t, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - ) - return samples, intermediates - - @torch.no_grad() - def ddim_sampling( - self, - cond, - shape, - x_T=None, - ddim_use_original_steps=False, - callback=None, - timesteps=None, - quantize_denoised=False, - mask=None, - x0=None, - img_callback=None, - log_every_t=100, - temperature=1.0, - noise_dropout=0.0, - score_corrector=None, - corrector_kwargs=None, - unconditional_guidance_scale=1.0, - unconditional_conditioning=None, - ): - device = self.model.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - if timesteps is None: - timesteps = ( - self.ddpm_num_timesteps - if ddim_use_original_steps - else self.ddim_timesteps - ) - elif timesteps is not None and not ddim_use_original_steps: - subset_end = ( - int( - min(timesteps / self.ddim_timesteps.shape[0], 1) - * self.ddim_timesteps.shape[0] - ) - - 1 - ) - timesteps = self.ddim_timesteps[:subset_end] - - intermediates = {"x_inter": [img], "pred_x0": [img]} - time_range = ( - reversed(range(0, timesteps)) - if ddim_use_original_steps - else np.flip(timesteps) - ) - total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0] - # print(f"Running DDIM Sampling with {total_steps} timesteps") - - # iterator = gr.Progress().tqdm(time_range, desc="DDIM Sampler", total=total_steps) - iterator = tqdm(time_range, desc="DDIM Sampler", total=total_steps) - - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((b,), step, device=device, dtype=torch.long) - if mask is not None: - assert x0 is not None - img_orig = self.model.q_sample( - x0, ts - ) # TODO deterministic forward pass? 
- img = ( - img_orig * mask + (1.0 - mask) * img - ) # In the first sampling step, img is pure gaussian noise - - outs = self.p_sample_ddim( - img, - cond, - ts, - index=index, - use_original_steps=ddim_use_original_steps, - quantize_denoised=quantize_denoised, - temperature=temperature, - noise_dropout=noise_dropout, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - ) - img, pred_x0 = outs - if callback: - callback(i) - if img_callback: - img_callback(pred_x0, i) - - if index % log_every_t == 0 or index == total_steps - 1: - intermediates["x_inter"].append(img) - intermediates["pred_x0"].append(pred_x0) - - return img, intermediates - - @torch.no_grad() - def stochastic_encode(self, x0, t, use_original_steps=False, noise=None): - # fast, but does not allow for exact reconstruction - # t serves as an index to gather the correct alphas - if use_original_steps: - sqrt_alphas_cumprod = self.sqrt_alphas_cumprod - sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod - else: - sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas) - sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas - - if noise is None: - noise = torch.randn_like(x0) - - return ( - extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 - + extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise - ) - - @torch.no_grad() - def decode( - self, - x_latent, - cond, - t_start, - unconditional_guidance_scale=1.0, - unconditional_conditioning=None, - use_original_steps=False, - ): - - timesteps = ( - np.arange(self.ddpm_num_timesteps) - if use_original_steps - else self.ddim_timesteps - ) - timesteps = timesteps[:t_start] - - time_range = np.flip(timesteps) - total_steps = timesteps.shape[0] - # print(f"Running DDIM Sampling with {total_steps} timesteps") - - # iterator = gr.Progress().tqdm(time_range, desc="Decoding image", total=total_steps) - iterator = tqdm(time_range, desc="Decoding image", total=total_steps) - x_dec = x_latent - - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full( - (x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long - ) - x_dec, _ = self.p_sample_ddim( - x_dec, - cond, - ts, - index=index, - use_original_steps=use_original_steps, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - ) - return x_dec - - @torch.no_grad() - def p_sample_ddim( - self, - x, - c, - t, - index, - repeat_noise=False, - use_original_steps=False, - quantize_denoised=False, - temperature=1.0, - noise_dropout=0.0, - score_corrector=None, - corrector_kwargs=None, - unconditional_guidance_scale=1.0, - unconditional_conditioning=None, - ): - b, *_, device = *x.shape, x.device - - if unconditional_conditioning is None or unconditional_guidance_scale == 1.0: - e_t = self.model.apply_model(x, t, c) - else: - x_in = torch.cat([x] * 2) - t_in = torch.cat([t] * 2) - c_in = torch.cat([unconditional_conditioning, c]) - e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2) - # When unconditional_guidance_scale == 1: only e_t - # When unconditional_guidance_scale == 0: only unconditional - # When unconditional_guidance_scale > 1: add more unconditional guidance - e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond) - - if score_corrector is not None: - assert self.model.parameterization == "eps" - e_t = 
score_corrector.modify_score( - self.model, e_t, x, t, c, **corrector_kwargs - ) - - alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas - alphas_prev = ( - self.model.alphas_cumprod_prev - if use_original_steps - else self.ddim_alphas_prev - ) - sqrt_one_minus_alphas = ( - self.model.sqrt_one_minus_alphas_cumprod - if use_original_steps - else self.ddim_sqrt_one_minus_alphas - ) - sigmas = ( - self.model.ddim_sigmas_for_original_num_steps - if use_original_steps - else self.ddim_sigmas - ) - # select parameters corresponding to the currently considered timestep - a_t = torch.full((b, 1, 1, 1), alphas[index], device=device) - a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device) - sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device) - sqrt_one_minus_at = torch.full( - (b, 1, 1, 1), sqrt_one_minus_alphas[index], device=device - ) - - # current prediction for x_0 - pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt() - if quantize_denoised: - pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0) - # direction pointing to x_t - dir_xt = (1.0 - a_prev - sigma_t**2).sqrt() * e_t - noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.0: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise # TODO - return x_prev, pred_x0 diff --git a/spaces/fkhuggingme/gpt-academic/request_llm/bridge_chatgpt.py b/spaces/fkhuggingme/gpt-academic/request_llm/bridge_chatgpt.py deleted file mode 100644 index 48eaba0b9f5498c18648f446b8d8d8066b1bd950..0000000000000000000000000000000000000000 --- a/spaces/fkhuggingme/gpt-academic/request_llm/bridge_chatgpt.py +++ /dev/null @@ -1,277 +0,0 @@ -# 借鉴了 https://github.com/GaiZhenbiao/ChuanhuChatGPT 项目 - -""" - 该文件中主要包含三个函数 - - 不具备多线程能力的函数: - 1. predict: 正常对话时使用,具备完备的交互功能,不可多线程 - - 具备多线程调用能力的函数 - 2. predict_no_ui:高级实验性功能模块调用,不会实时显示在界面上,参数简单,可以多线程并行,方便实现复杂的功能逻辑 - 3. predict_no_ui_long_connection:在实验过程中发现调用predict_no_ui处理长文档时,和openai的连接容易断掉,这个函数用stream的方式解决这个问题,同样支持多线程 -""" - -import json -import time -import gradio as gr -import logging -import traceback -import requests -import importlib - -# config_private.py放自己的秘密如API和代理网址 -# 读取时首先看是否存在私密的config_private配置文件(不受git管控),如果有,则覆盖原config文件 -from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history, trimmed_format_exc -proxies, API_KEY, TIMEOUT_SECONDS, MAX_RETRY = \ - get_conf('proxies', 'API_KEY', 'TIMEOUT_SECONDS', 'MAX_RETRY') - -timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check proxy settings in config.py.' 
+ \ - '网络错误,检查代理服务器是否可用,以及代理设置的格式是否正确,格式须是[协议]://[地址]:[端口],缺一不可。' - -def get_full_error(chunk, stream_response): - """ - 获取完整的从Openai返回的报错 - """ - while True: - try: - chunk += next(stream_response) - except: - break - return chunk - - -def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False): - """ - 发送至chatGPT,等待回复,一次性完成,不显示中间过程。但内部用stream的方法避免中途网线被掐。 - inputs: - 是本次问询的输入 - sys_prompt: - 系统静默prompt - llm_kwargs: - chatGPT的内部调优参数 - history: - 是之前的对话列表 - observe_window = None: - 用于负责跨越线程传递已经输出的部分,大部分时候仅仅为了fancy的视觉效果,留空即可。observe_window[0]:观测窗。observe_window[1]:看门狗 - """ - watch_dog_patience = 5 # 看门狗的耐心, 设置5秒即可 - headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt=sys_prompt, stream=True) - retry = 0 - while True: - try: - # make a POST request to the API endpoint, stream=False - from .bridge_all import model_info - endpoint = model_info[llm_kwargs['llm_model']]['endpoint'] - response = requests.post(endpoint, headers=headers, proxies=proxies, - json=payload, stream=True, timeout=TIMEOUT_SECONDS); break - except requests.exceptions.ReadTimeout as e: - retry += 1 - traceback.print_exc() - if retry > MAX_RETRY: raise TimeoutError - if MAX_RETRY!=0: print(f'请求超时,正在重试 ({retry}/{MAX_RETRY}) ……') - - stream_response = response.iter_lines() - result = '' - while True: - try: chunk = next(stream_response).decode() - except StopIteration: - break - except requests.exceptions.ConnectionError: - chunk = next(stream_response).decode() # 失败了,重试一次?再失败就没办法了。 - if len(chunk)==0: continue - if not chunk.startswith('data:'): - error_msg = get_full_error(chunk.encode('utf8'), stream_response).decode() - if "reduce the length" in error_msg: - raise ConnectionAbortedError("OpenAI拒绝了请求:" + error_msg) - else: - raise RuntimeError("OpenAI拒绝了请求:" + error_msg) - if ('data: [DONE]' in chunk): break # api2d 正常完成 - json_data = json.loads(chunk.lstrip('data:'))['choices'][0] - delta = json_data["delta"] - if len(delta) == 0: break - if "role" in delta: continue - if "content" in delta: - result += delta["content"] - if not console_slience: print(delta["content"], end='') - if observe_window is not None: - # 观测窗,把已经获取的数据显示出去 - if len(observe_window) >= 1: observe_window[0] += delta["content"] - # 看门狗,如果超过期限没有喂狗,则终止 - if len(observe_window) >= 2: - if (time.time()-observe_window[1]) > watch_dog_patience: - raise RuntimeError("用户取消了程序。") - else: raise RuntimeError("意外Json结构:"+delta) - if json_data['finish_reason'] == 'length': - raise ConnectionAbortedError("正常结束,但显示Token不足,导致输出不完整,请削减单次输入的文本量。") - return result - - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - 发送至chatGPT,流式获取输出。 - 用于基础的对话功能。 - inputs 是本次问询的输入 - top_p, temperature是chatGPT的内部调优参数 - history 是之前的对话列表(注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误) - chatbot 为WebUI中显示的对话列表,修改它,然后yeild出去,可以直接修改对话界面内容 - additional_fn代表点击的哪个按钮,按钮见functional.py - """ - if is_any_api_key(inputs): - chatbot._cookies['api_key'] = inputs - chatbot.append(("输入已识别为openai的api_key", what_keys(inputs))) - yield from update_ui(chatbot=chatbot, history=history, msg="api_key已导入") # 刷新界面 - return - elif not is_any_api_key(chatbot._cookies['api_key']): - chatbot.append((inputs, "缺少api_key。\n\n1. 临时解决方案:直接在输入区键入api_key,然后回车提交。\n\n2. 
长效解决方案:在config.py中配置。")) - yield from update_ui(chatbot=chatbot, history=history, msg="缺少api_key") # 刷新界面 - return - - if additional_fn is not None: - import core_functional - importlib.reload(core_functional) # 热更新prompt - core_functional = core_functional.get_core_functions() - if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话) - inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"] - - raw_input = inputs - logging.info(f'[raw_input] {raw_input}') - chatbot.append((inputs, "")) - yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面 - - try: - headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt, stream) - except RuntimeError as e: - chatbot[-1] = (inputs, f"您提供的api-key不满足要求,不包含任何可用于{llm_kwargs['llm_model']}的api-key。您可能选择了错误的模型或请求源。") - yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求") # 刷新界面 - return - - history.append(inputs); history.append("") - - retry = 0 - while True: - try: - # make a POST request to the API endpoint, stream=True - from .bridge_all import model_info - endpoint = model_info[llm_kwargs['llm_model']]['endpoint'] - response = requests.post(endpoint, headers=headers, proxies=proxies, - json=payload, stream=True, timeout=TIMEOUT_SECONDS);break - except: - retry += 1 - chatbot[-1] = ((chatbot[-1][0], timeout_bot_msg)) - retry_msg = f",正在重试 ({retry}/{MAX_RETRY}) ……" if MAX_RETRY > 0 else "" - yield from update_ui(chatbot=chatbot, history=history, msg="请求超时"+retry_msg) # 刷新界面 - if retry > MAX_RETRY: raise TimeoutError - - gpt_replying_buffer = "" - - is_head_of_the_stream = True - if stream: - stream_response = response.iter_lines() - while True: - chunk = next(stream_response) - # print(chunk.decode()[6:]) - if is_head_of_the_stream and (r'"object":"error"' not in chunk.decode()): - # 数据流的第一帧不携带content - is_head_of_the_stream = False; continue - - if chunk: - try: - chunk_decoded = chunk.decode() - # 前者API2D的 - if ('data: [DONE]' in chunk_decoded) or (len(json.loads(chunk_decoded[6:])['choices'][0]["delta"]) == 0): - # 判定为数据流的结束,gpt_replying_buffer也写完了 - logging.info(f'[response] {gpt_replying_buffer}') - break - # 处理数据流的主体 - chunkjson = json.loads(chunk_decoded[6:]) - status_text = f"finish_reason: {chunkjson['choices'][0]['finish_reason']}" - # 如果这里抛出异常,一般是文本过长,详情见get_full_error的输出 - gpt_replying_buffer = gpt_replying_buffer + json.loads(chunk_decoded[6:])['choices'][0]["delta"]["content"] - history[-1] = gpt_replying_buffer - chatbot[-1] = (history[-2], history[-1]) - yield from update_ui(chatbot=chatbot, history=history, msg=status_text) # 刷新界面 - - except Exception as e: - traceback.print_exc() - yield from update_ui(chatbot=chatbot, history=history, msg="Json解析不合常规") # 刷新界面 - chunk = get_full_error(chunk, stream_response) - chunk_decoded = chunk.decode() - error_msg = chunk_decoded - if "reduce the length" in error_msg: - if len(history) >= 2: history[-1] = ""; history[-2] = "" # 清除当前溢出的输入:history[-2] 是本次输入, history[-1] 是本次输出 - history = clip_history(inputs=inputs, history=history, tokenizer=model_info[llm_kwargs['llm_model']]['tokenizer'], - max_token_limit=(model_info[llm_kwargs['llm_model']]['max_token'])) # history至少释放二分之一 - chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长, 或历史数据过长. 历史缓存数据已部分释放, 您可以请再次尝试. 
(若再次失败则更可能是因为输入过长.)") - # history = [] # 清除历史 - elif "does not exist" in error_msg: - chatbot[-1] = (chatbot[-1][0], f"[Local Message] Model {llm_kwargs['llm_model']} does not exist. 模型不存在, 或者您没有获得体验资格.") - elif "Incorrect API key" in error_msg: - chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key. OpenAI以提供了不正确的API_KEY为由, 拒绝服务.") - elif "exceeded your current quota" in error_msg: - chatbot[-1] = (chatbot[-1][0], "[Local Message] You exceeded your current quota. OpenAI以账户额度不足为由, 拒绝服务.") - elif "bad forward key" in error_msg: - chatbot[-1] = (chatbot[-1][0], "[Local Message] Bad forward key. API2D账户额度不足.") - elif "Not enough point" in error_msg: - chatbot[-1] = (chatbot[-1][0], "[Local Message] Not enough point. API2D账户点数不足.") - else: - from toolbox import regular_txt_to_markdown - tb_str = '```\n' + trimmed_format_exc() + '```' - chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk_decoded[4:])}") - yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + error_msg) # 刷新界面 - return - -def generate_payload(inputs, llm_kwargs, history, system_prompt, stream): - """ - 整合所有信息,选择LLM模型,生成http请求,为发送请求做准备 - """ - if not is_any_api_key(llm_kwargs['api_key']): - raise AssertionError("你提供了错误的API_KEY。\n\n1. 临时解决方案:直接在输入区键入api_key,然后回车提交。\n\n2. 长效解决方案:在config.py中配置。") - - api_key = select_api_key(llm_kwargs['api_key'], llm_kwargs['llm_model']) - - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {api_key}" - } - - conversation_cnt = len(history) // 2 - - messages = [{"role": "system", "content": system_prompt}] - if conversation_cnt: - for index in range(0, 2*conversation_cnt, 2): - what_i_have_asked = {} - what_i_have_asked["role"] = "user" - what_i_have_asked["content"] = history[index] - what_gpt_answer = {} - what_gpt_answer["role"] = "assistant" - what_gpt_answer["content"] = history[index+1] - if what_i_have_asked["content"] != "": - if what_gpt_answer["content"] == "": continue - if what_gpt_answer["content"] == timeout_bot_msg: continue - messages.append(what_i_have_asked) - messages.append(what_gpt_answer) - else: - messages[-1]['content'] = what_gpt_answer['content'] - - what_i_ask_now = {} - what_i_ask_now["role"] = "user" - what_i_ask_now["content"] = inputs - messages.append(what_i_ask_now) - - payload = { - "model": llm_kwargs['llm_model'].strip('api2d-'), - "messages": messages, - "temperature": llm_kwargs['temperature'], # 1.0, - "top_p": llm_kwargs['top_p'], # 1.0, - "n": 1, - "stream": stream, - "presence_penalty": 0, - "frequency_penalty": 0, - } - try: - print(f" {llm_kwargs['llm_model']} : {conversation_cnt} : {inputs[:100]} ..........") - except: - print('输入中可能存在乱码。') - return headers,payload - - diff --git a/spaces/freddyaboulton/gradio_foliumtest/src/backend/gradio_foliumtest/foliumtest.py b/spaces/freddyaboulton/gradio_foliumtest/src/backend/gradio_foliumtest/foliumtest.py deleted file mode 100644 index 7a9639b0a0dcec5c672bf1cc2c74d01239b8b2b0..0000000000000000000000000000000000000000 --- a/spaces/freddyaboulton/gradio_foliumtest/src/backend/gradio_foliumtest/foliumtest.py +++ /dev/null @@ -1,30 +0,0 @@ -from gradio.components.base import Component -from gradio.data_classes import FileData -from tempfile import NamedTemporaryFile -from folium import Map - - -class FoliumTest(Component): - - EVENTS = ["change"] - - data_model = FileData - - def __init__(self, value: Map = None, *, height: int = 500, label: str = None): - super().__init__(value, label=label) - self.height = 
height - - def preprocess(self, x): - return x - - def postprocess(self, x: Map): - if not x: - return None - - with NamedTemporaryFile(suffix=".html", delete=False) as tmp: - x.save(tmp.name) - return FileData(path=tmp.name, orig_name="map.html") - - def example_inputs(self): - return {"info": "Do not use as input"} - diff --git a/spaces/frncscp/bullerengue/musika/parse/parse_train.py b/spaces/frncscp/bullerengue/musika/parse/parse_train.py deleted file mode 100644 index 112cfd55eb2136085c924081eb04e226086aa788..0000000000000000000000000000000000000000 --- a/spaces/frncscp/bullerengue/musika/parse/parse_train.py +++ /dev/null @@ -1,273 +0,0 @@ -import argparse -from typing import Any -import tensorflow as tf - - -class EasyDict(dict): - def __getattr__(self, name: str) -> Any: - try: - return self[name] - except KeyError: - raise AttributeError(name) - - def __setattr__(self, name: str, value: Any) -> None: - self[name] = value - - def __delattr__(self, name: str) -> None: - del self[name] - - -def str2bool(v): - if isinstance(v, bool): - return v - if v.lower() in ("yes", "true", "t", "y", "1"): - return True - elif v.lower() in ("no", "false", "f", "n", "0"): - return False - else: - raise argparse.ArgumentTypeError("Boolean value expected.") - - -def params_args(args): - parser = argparse.ArgumentParser() - - parser.add_argument( - "--hop", - type=int, - default=256, - help="Hop size (window size = 4*hop)", - ) - parser.add_argument( - "--mel_bins", - type=int, - default=256, - help="Mel bins in mel-spectrograms", - ) - parser.add_argument( - "--sr", - type=int, - default=44100, - help="Sampling Rate", - ) - parser.add_argument( - "--small", - type=str2bool, - default=False, - help="If True, use model with shorter available context, useful for small datasets", - ) - parser.add_argument( - "--latdepth", - type=int, - default=64, - help="Depth of generated latent vectors", - ) - parser.add_argument( - "--coorddepth", - type=int, - default=64, - help="Dimension of latent coordinate and style random vectors", - ) - parser.add_argument( - "--max_lat_len", - type=int, - default=512, - help="Length of .npy arrays used for training", - ) - parser.add_argument( - "--base_channels", - type=int, - default=128, - help="Base channels for generator and discriminator architectures", - ) - parser.add_argument( - "--shape", - type=int, - default=128, - help="Length of spectrograms time axis", - ) - parser.add_argument( - "--window", - type=int, - default=64, - help="Generator spectrogram window (must divide shape)", - ) - parser.add_argument( - "--bs", - type=int, - default=32, - help="Batch size", - ) - parser.add_argument( - "--lr", - type=float, - default=0.0001, - help="Learning Rate", - ) - parser.add_argument( - "--gp_max_weight", - type=float, - default=10.0, - help="Maximum allowed R1 gradient penalty loss weight. 
The weight will self-adapt if high values are not needed for stable training.", - ) - parser.add_argument( - "--totsamples", - type=int, - default=300000, - help="Max samples chosen per epoch", - ) - parser.add_argument( - "--epochs", - type=int, - default=250, - help="Number of epochs", - ) - parser.add_argument( - "--save_every", - type=int, - default=1, - help="Save after x epochs", - ) - parser.add_argument( - "--mu_rescale", - type=float, - default=-25.0, - help="Spectrogram mu used to normalize", - ) - parser.add_argument( - "--sigma_rescale", - type=float, - default=75.0, - help="Spectrogram sigma used to normalize", - ) - parser.add_argument( - "--save_path", - type=str, - default="checkpoints", - help="Path where to save checkpoints", - ) - parser.add_argument( - "--train_path", - type=str, - default="training_samples", - help="Path of training samples", - ) - parser.add_argument( - "--dec_path", - type=str, - default="checkpoints/ae", - help="Path of pretrained decoders weights", - ) - parser.add_argument( - "--load_path", - type=str, - default="None", - help="If not None, load models weights from this path", - ) - parser.add_argument( - "--base_path", - type=str, - default="checkpoints", - help="Path where pretrained models are downloaded", - ) - parser.add_argument( - "--log_path", - type=str, - default="logs", - help="Path where to save tensorboard logs", - ) - parser.add_argument( - "--testing", - type=str2bool, - default=False, - help="True if optimizers weight do not need to be loaded", - ) - parser.add_argument( - "--cpu", - type=str2bool, - default=False, - help="True if you wish to use cpu", - ) - parser.add_argument( - "--mixed_precision", - type=str2bool, - default=True, - help="True if your GPU supports mixed precision", - ) - parser.add_argument( - "--xla", - type=str2bool, - default=True, - help="True if you wish to improve training speed with XLA", - ) - parser.add_argument( - "--share_gradio", - type=str2bool, - default=False, - help="True if you wish to create a public URL for the Gradio interface", - ) - - tmp_args = parser.parse_args() - - args.hop = tmp_args.hop - args.mel_bins = tmp_args.mel_bins - args.sr = tmp_args.sr - args.small = tmp_args.small - args.latdepth = tmp_args.latdepth - args.coorddepth = tmp_args.coorddepth - args.max_lat_len = tmp_args.max_lat_len - args.base_channels = tmp_args.base_channels - args.shape = tmp_args.shape - args.window = tmp_args.window - args.bs = tmp_args.bs - args.lr = tmp_args.lr - args.gp_max_weight = tmp_args.gp_max_weight - args.totsamples = tmp_args.totsamples - args.epochs = tmp_args.epochs - args.save_every = tmp_args.save_every - args.mu_rescale = tmp_args.mu_rescale - args.sigma_rescale = tmp_args.sigma_rescale - args.save_path = tmp_args.save_path - args.train_path = tmp_args.train_path - args.dec_path = tmp_args.dec_path - args.load_path = tmp_args.load_path - args.base_path = tmp_args.base_path - args.log_path = tmp_args.log_path - args.testing = tmp_args.testing - args.cpu = tmp_args.cpu - args.mixed_precision = tmp_args.mixed_precision - args.xla = tmp_args.xla - args.share_gradio = tmp_args.share_gradio - - if args.small: - args.latlen = 128 - else: - args.latlen = 256 - args.coordlen = (args.latlen // 2) * 3 - - print() - - args.datatype = tf.float32 - gpuls = tf.config.list_physical_devices("GPU") - if len(gpuls) == 0 or args.cpu: - args.cpu = True - args.mixed_precision = False - tf.config.set_visible_devices([], "GPU") - print() - print("Using CPU...") - print() - if args.mixed_precision: - 
args.datatype = tf.float16 - print() - print("Using GPU with mixed precision enabled...") - print() - if not args.mixed_precision and not args.cpu: - print() - print("Using GPU without mixed precision...") - print() - - return args - - -def parse_args(): - args = EasyDict() - return params_args(args) diff --git a/spaces/fuckyoudeki/AutoGPT/autogpt/memory/base.py b/spaces/fuckyoudeki/AutoGPT/autogpt/memory/base.py deleted file mode 100644 index 691e2299c4caa5c2e9af5b2436727834f3cc6c67..0000000000000000000000000000000000000000 --- a/spaces/fuckyoudeki/AutoGPT/autogpt/memory/base.py +++ /dev/null @@ -1,43 +0,0 @@ -"""Base class for memory providers.""" -import abc - -import openai - -from autogpt.config import AbstractSingleton, Config - -cfg = Config() - - -def get_ada_embedding(text): - text = text.replace("\n", " ") - if cfg.use_azure: - return openai.Embedding.create( - input=[text], - engine=cfg.get_azure_deployment_id_for_model("text-embedding-ada-002"), - )["data"][0]["embedding"] - else: - return openai.Embedding.create(input=[text], model="text-embedding-ada-002")[ - "data" - ][0]["embedding"] - - -class MemoryProviderSingleton(AbstractSingleton): - @abc.abstractmethod - def add(self, data): - pass - - @abc.abstractmethod - def get(self, data): - pass - - @abc.abstractmethod - def clear(self): - pass - - @abc.abstractmethod - def get_relevant(self, data, num_relevant=5): - pass - - @abc.abstractmethod - def get_stats(self): - pass diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Change My Software 10 115 _VERIFIED_.md b/spaces/gotiQspiryo/whisper-ui/examples/Change My Software 10 115 _VERIFIED_.md deleted file mode 100644 index 91579e63e148fab47c48f0f066946089c34fcfea..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Change My Software 10 115 _VERIFIED_.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Change My Software 10 115


Download Zip https://urlgoal.com/2uyLKM



        -
        -This service supplies software that can be used to update the D750 camera “C” firmware to version 1.15. Before proceeding, select Firmware version in the ... 1fdad05405
        -
        -
        -

        diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Elcomsoft Advanced Sage Password Recovery V2.10.309.Fix Full.rar.md b/spaces/gotiQspiryo/whisper-ui/examples/Elcomsoft Advanced Sage Password Recovery V2.10.309.Fix Full.rar.md deleted file mode 100644 index f8ce95cc3e2f70ca4e663a1598182d9e4b02bcec..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Elcomsoft Advanced Sage Password Recovery V2.10.309.Fix Full.rar.md +++ /dev/null @@ -1,8 +0,0 @@ -
        -

This Elcomsoft Advanced Archive Password Recovery crack is a trusted and dependable tool for recovering the password of every kind of archive. It uses a thorough examination of the structure of all archive files. It can recover passwords for all versions of ZIP, WinZip, WinRAR, ACE, and PKZIP, as well as the most recent archive formats. The software likewise unlocks password-protected WinRAR, ARJ, and ACE files.

        -

This Elcomsoft Advanced Archive Password Recovery keygen can recover the password of any type of archive. It can recover passwords from ZIP and RAR archives created with WinZip, WinRAR, ARJ, and ACE, as well as the most recent archive formats. The software can likewise unlock password-protected WinRAR, ARJ, and ACE files.

        -

        Elcomsoft Advanced Sage Password Recovery V2.10.309.full.rar


        DOWNLOAD ★★★ https://urlgoal.com/2uyLQO



        -

This Elcomsoft Advanced Archive Password Recovery serial key is a very good application for recovering the password of any version of an archive. It can recover passwords for all versions of ZIP, WinZip, WinRAR, ARJ, and ACE, as well as the most recent archive formats. It can likewise unlock password-protected WinRAR, ARJ, and ACE files.

        -

The Advanced Archive Password Recovery serial key is a trusted application for recovering the password of any version of an archive. It can recover passwords for all versions of ZIP, WinZip, WinRAR, ARJ, and ACE, as well as the most recent archive formats. It can likewise unlock password-protected WinRAR, ARJ, and ACE files.

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/gradio/HuBERT/examples/multilingual/data_scripts/download_ML50_v1.sh b/spaces/gradio/HuBERT/examples/multilingual/data_scripts/download_ML50_v1.sh deleted file mode 100644 index 99fbc75920836a4b4bbdbd6b523749843288e450..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/multilingual/data_scripts/download_ML50_v1.sh +++ /dev/null @@ -1,30 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -if [ -z $WORKDIR_ROOT ] ; -then - echo "please specify your working directory root in environment variable WORKDIR_ROOT. Exitting..." - exit -fi - -# first run download_wmt20.sh; it will install a few useful tools for other scripts -# TODO: need to print out instructions on downloading a few files which requires manually authentication from the websites -bash ./download_wmt20.sh - -python ./download_wmt19_and_before.py -bash ./download_wat19_my.sh -python ./download_ted_and_extract.py -bash ./download_lotus.sh -bash ./download_iitb.sh -bash ./download_af_xh.sh - - -# IWSLT downloading URLs have changed in between; TODO: fix them: -bash ./download_iwslt_and_extract.sh - -# TODO: globalvoices URLs changed; need to be fixed -bash ./download_flores_data.sh diff --git a/spaces/gradio/HuBERT/examples/speech_recognition/new/decoders/flashlight_decoder.py b/spaces/gradio/HuBERT/examples/speech_recognition/new/decoders/flashlight_decoder.py deleted file mode 100644 index 8a548bdf6613abc7d9a31d04be6158602ccf0967..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/speech_recognition/new/decoders/flashlight_decoder.py +++ /dev/null @@ -1,409 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import gc -import os.path as osp -import warnings -from collections import deque, namedtuple -from typing import Any, Dict, Tuple - -import numpy as np -import torch -from fairseq import tasks -from fairseq.data.dictionary import Dictionary -from fairseq.dataclass.utils import convert_namespace_to_omegaconf -from fairseq.models.fairseq_model import FairseqModel -from fairseq.utils import apply_to_sample -from omegaconf import open_dict, OmegaConf - -from typing import List - -from .decoder_config import FlashlightDecoderConfig -from .base_decoder import BaseDecoder - -try: - from flashlight.lib.text.decoder import ( - LM, - CriterionType, - DecodeResult, - KenLM, - LexiconDecoder, - LexiconDecoderOptions, - LexiconFreeDecoder, - LexiconFreeDecoderOptions, - LMState, - SmearingMode, - Trie, - ) - from flashlight.lib.text.dictionary import create_word_dict, load_words -except ImportError: - warnings.warn( - "flashlight python bindings are required to use this functionality. 
" - "Please install from " - "https://github.com/facebookresearch/flashlight/tree/master/bindings/python" - ) - LM = object - LMState = object - - -class KenLMDecoder(BaseDecoder): - def __init__(self, cfg: FlashlightDecoderConfig, tgt_dict: Dictionary) -> None: - super().__init__(tgt_dict) - - self.nbest = cfg.nbest - self.unitlm = cfg.unitlm - - if cfg.lexicon: - self.lexicon = load_words(cfg.lexicon) - self.word_dict = create_word_dict(self.lexicon) - self.unk_word = self.word_dict.get_index("") - - self.lm = KenLM(cfg.lmpath, self.word_dict) - self.trie = Trie(self.vocab_size, self.silence) - - start_state = self.lm.start(False) - for word, spellings in self.lexicon.items(): - word_idx = self.word_dict.get_index(word) - _, score = self.lm.score(start_state, word_idx) - for spelling in spellings: - spelling_idxs = [tgt_dict.index(token) for token in spelling] - assert ( - tgt_dict.unk() not in spelling_idxs - ), f"{word} {spelling} {spelling_idxs}" - self.trie.insert(spelling_idxs, word_idx, score) - self.trie.smear(SmearingMode.MAX) - - self.decoder_opts = LexiconDecoderOptions( - beam_size=cfg.beam, - beam_size_token=cfg.beamsizetoken or len(tgt_dict), - beam_threshold=cfg.beamthreshold, - lm_weight=cfg.lmweight, - word_score=cfg.wordscore, - unk_score=cfg.unkweight, - sil_score=cfg.silweight, - log_add=False, - criterion_type=CriterionType.CTC, - ) - - self.decoder = LexiconDecoder( - self.decoder_opts, - self.trie, - self.lm, - self.silence, - self.blank, - self.unk_word, - [], - self.unitlm, - ) - else: - assert self.unitlm, "Lexicon-free decoding requires unit LM" - - d = {w: [[w]] for w in tgt_dict.symbols} - self.word_dict = create_word_dict(d) - self.lm = KenLM(cfg.lmpath, self.word_dict) - self.decoder_opts = LexiconFreeDecoderOptions( - beam_size=cfg.beam, - beam_size_token=cfg.beamsizetoken or len(tgt_dict), - beam_threshold=cfg.beamthreshold, - lm_weight=cfg.lmweight, - sil_score=cfg.silweight, - log_add=False, - criterion_type=CriterionType.CTC, - ) - self.decoder = LexiconFreeDecoder( - self.decoder_opts, self.lm, self.silence, self.blank, [] - ) - - def decode( - self, - emissions: torch.FloatTensor, - ) -> List[List[Dict[str, torch.LongTensor]]]: - B, T, N = emissions.size() - hypos = [] - for b in range(B): - emissions_ptr = emissions.data_ptr() + 4 * b * emissions.stride(0) - results = self.decoder.decode(emissions_ptr, T, N) - - nbest_results = results[: self.nbest] - hypos.append( - [ - { - "tokens": self.get_tokens(result.tokens), - "score": result.score, - "words": [ - self.word_dict.get_entry(x) for x in result.words if x >= 0 - ], - } - for result in nbest_results - ] - ) - return hypos - - -FairseqLMState = namedtuple( - "FairseqLMState", - [ - "prefix", - "incremental_state", - "probs", - ], -) - - -class FairseqLM(LM): - def __init__(self, dictionary: Dictionary, model: FairseqModel) -> None: - super().__init__() - - self.dictionary = dictionary - self.model = model - self.unk = self.dictionary.unk() - - self.save_incremental = False # this currently does not work properly - self.max_cache = 20_000 - - if torch.cuda.is_available(): - model.cuda() - model.eval() - model.make_generation_fast_() - - self.states = {} - self.stateq = deque() - - def start(self, start_with_nothing: bool) -> LMState: - state = LMState() - prefix = torch.LongTensor([[self.dictionary.eos()]]) - incremental_state = {} if self.save_incremental else None - with torch.no_grad(): - res = self.model(prefix.cuda(), incremental_state=incremental_state) - probs = 
self.model.get_normalized_probs(res, log_probs=True, sample=None) - - if incremental_state is not None: - incremental_state = apply_to_sample(lambda x: x.cpu(), incremental_state) - self.states[state] = FairseqLMState( - prefix.numpy(), incremental_state, probs[0, -1].cpu().numpy() - ) - self.stateq.append(state) - - return state - - def score( - self, - state: LMState, - token_index: int, - no_cache: bool = False, - ) -> Tuple[LMState, int]: - """ - Evaluate language model based on the current lm state and new word - Parameters: - ----------- - state: current lm state - token_index: index of the word - (can be lexicon index then you should store inside LM the - mapping between indices of lexicon and lm, or lm index of a word) - Returns: - -------- - (LMState, float): pair of (new state, score for the current word) - """ - curr_state = self.states[state] - - def trim_cache(targ_size: int) -> None: - while len(self.stateq) > targ_size: - rem_k = self.stateq.popleft() - rem_st = self.states[rem_k] - rem_st = FairseqLMState(rem_st.prefix, None, None) - self.states[rem_k] = rem_st - - if curr_state.probs is None: - new_incremental_state = ( - curr_state.incremental_state.copy() - if curr_state.incremental_state is not None - else None - ) - with torch.no_grad(): - if new_incremental_state is not None: - new_incremental_state = apply_to_sample( - lambda x: x.cuda(), new_incremental_state - ) - elif self.save_incremental: - new_incremental_state = {} - - res = self.model( - torch.from_numpy(curr_state.prefix).cuda(), - incremental_state=new_incremental_state, - ) - probs = self.model.get_normalized_probs( - res, log_probs=True, sample=None - ) - - if new_incremental_state is not None: - new_incremental_state = apply_to_sample( - lambda x: x.cpu(), new_incremental_state - ) - - curr_state = FairseqLMState( - curr_state.prefix, new_incremental_state, probs[0, -1].cpu().numpy() - ) - - if not no_cache: - self.states[state] = curr_state - self.stateq.append(state) - - score = curr_state.probs[token_index].item() - - trim_cache(self.max_cache) - - outstate = state.child(token_index) - if outstate not in self.states and not no_cache: - prefix = np.concatenate( - [curr_state.prefix, torch.LongTensor([[token_index]])], -1 - ) - incr_state = curr_state.incremental_state - - self.states[outstate] = FairseqLMState(prefix, incr_state, None) - - if token_index == self.unk: - score = float("-inf") - - return outstate, score - - def finish(self, state: LMState) -> Tuple[LMState, int]: - """ - Evaluate eos for language model based on the current lm state - Returns: - -------- - (LMState, float): pair of (new state, score for the current word) - """ - return self.score(state, self.dictionary.eos()) - - def empty_cache(self) -> None: - self.states = {} - self.stateq = deque() - gc.collect() - - -class FairseqLMDecoder(BaseDecoder): - def __init__(self, cfg: FlashlightDecoderConfig, tgt_dict: Dictionary) -> None: - super().__init__(tgt_dict) - - self.nbest = cfg.nbest - self.unitlm = cfg.unitlm - - self.lexicon = load_words(cfg.lexicon) if cfg.lexicon else None - self.idx_to_wrd = {} - - checkpoint = torch.load(cfg.lmpath, map_location="cpu") - - if "cfg" in checkpoint and checkpoint["cfg"] is not None: - lm_args = checkpoint["cfg"] - else: - lm_args = convert_namespace_to_omegaconf(checkpoint["args"]) - - if not OmegaConf.is_dict(lm_args): - lm_args = OmegaConf.create(lm_args) - - with open_dict(lm_args.task): - lm_args.task.data = osp.dirname(cfg.lmpath) - - task = tasks.setup_task(lm_args.task) - model = 
task.build_model(lm_args.model) - model.load_state_dict(checkpoint["model"], strict=False) - - self.trie = Trie(self.vocab_size, self.silence) - - self.word_dict = task.dictionary - self.unk_word = self.word_dict.unk() - self.lm = FairseqLM(self.word_dict, model) - - if self.lexicon: - start_state = self.lm.start(False) - for i, (word, spellings) in enumerate(self.lexicon.items()): - if self.unitlm: - word_idx = i - self.idx_to_wrd[i] = word - score = 0 - else: - word_idx = self.word_dict.index(word) - _, score = self.lm.score(start_state, word_idx, no_cache=True) - - for spelling in spellings: - spelling_idxs = [tgt_dict.index(token) for token in spelling] - assert ( - tgt_dict.unk() not in spelling_idxs - ), f"{spelling} {spelling_idxs}" - self.trie.insert(spelling_idxs, word_idx, score) - self.trie.smear(SmearingMode.MAX) - - self.decoder_opts = LexiconDecoderOptions( - beam_size=cfg.beam, - beam_size_token=cfg.beamsizetoken or len(tgt_dict), - beam_threshold=cfg.beamthreshold, - lm_weight=cfg.lmweight, - word_score=cfg.wordscore, - unk_score=cfg.unkweight, - sil_score=cfg.silweight, - log_add=False, - criterion_type=CriterionType.CTC, - ) - - self.decoder = LexiconDecoder( - self.decoder_opts, - self.trie, - self.lm, - self.silence, - self.blank, - self.unk_word, - [], - self.unitlm, - ) - else: - assert self.unitlm, "Lexicon-free decoding requires unit LM" - - d = {w: [[w]] for w in tgt_dict.symbols} - self.word_dict = create_word_dict(d) - self.lm = KenLM(cfg.lmpath, self.word_dict) - self.decoder_opts = LexiconFreeDecoderOptions( - beam_size=cfg.beam, - beam_size_token=cfg.beamsizetoken or len(tgt_dict), - beam_threshold=cfg.beamthreshold, - lm_weight=cfg.lmweight, - sil_score=cfg.silweight, - log_add=False, - criterion_type=CriterionType.CTC, - ) - self.decoder = LexiconFreeDecoder( - self.decoder_opts, self.lm, self.silence, self.blank, [] - ) - - def decode( - self, - emissions: torch.FloatTensor, - ) -> List[List[Dict[str, torch.LongTensor]]]: - B, T, N = emissions.size() - hypos = [] - - def make_hypo(result: DecodeResult) -> Dict[str, Any]: - hypo = { - "tokens": self.get_tokens(result.tokens), - "score": result.score, - } - if self.lexicon: - hypo["words"] = [ - self.idx_to_wrd[x] if self.unitlm else self.word_dict[x] - for x in result.words - if x >= 0 - ] - return hypo - - for b in range(B): - emissions_ptr = emissions.data_ptr() + 4 * b * emissions.stride(0) - results = self.decoder.decode(emissions_ptr, T, N) - - nbest_results = results[: self.nbest] - hypos.append([make_hypo(result) for result in nbest_results]) - self.lm.empty_cache() - - return hypos diff --git a/spaces/gradio/HuBERT/fairseq/criterions/sentence_prediction.py b/spaces/gradio/HuBERT/fairseq/criterions/sentence_prediction.py deleted file mode 100644 index 9519fdc56d7de86b727f74ef5b18db520382e562..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/criterions/sentence_prediction.py +++ /dev/null @@ -1,99 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import math - -import torch -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion - - -@register_criterion("sentence_prediction") -class SentencePredictionCriterion(FairseqCriterion): - def __init__(self, task, classification_head_name, regression_target): - super().__init__(task) - self.classification_head_name = classification_head_name - self.regression_target = regression_target - - @staticmethod - def add_args(parser): - # fmt: off - parser.add_argument('--classification-head-name', - default='sentence_classification_head', - help='name of the classification head to use') - # fmt: on - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. - - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - assert ( - hasattr(model, "classification_heads") - and self.classification_head_name in model.classification_heads - ), "model must provide sentence classification head for --criterion=sentence_prediction" - - logits, _ = model( - **sample["net_input"], - features_only=True, - classification_head_name=self.classification_head_name, - ) - targets = model.get_targets(sample, [logits]).view(-1) - sample_size = targets.numel() - - if not self.regression_target: - lprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32) - loss = F.nll_loss(lprobs, targets, reduction="sum") - else: - logits = logits.view(-1).float() - targets = targets.float() - loss = F.mse_loss(logits, targets, reduction="sum") - - logging_output = { - "loss": loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample_size, - "sample_size": sample_size, - } - if not self.regression_target: - preds = logits.argmax(dim=1) - logging_output["ncorrect"] = (preds == targets).sum() - - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - - metrics.log_scalar( - "loss", loss_sum / sample_size / math.log(2), sample_size, round=3 - ) - if sample_size != ntokens: - metrics.log_scalar( - "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3 - ) - - if len(logging_outputs) > 0 and "ncorrect" in logging_outputs[0]: - ncorrect = sum(log.get("ncorrect", 0) for log in logging_outputs) - metrics.log_scalar( - "accuracy", 100.0 * ncorrect / nsentences, nsentences, round=1 - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. - """ - return True diff --git a/spaces/gradio/HuBERT/fairseq/models/distributed_fairseq_model.py b/spaces/gradio/HuBERT/fairseq/models/distributed_fairseq_model.py deleted file mode 100644 index 06905455fd615ea962d8478c6093e7b4bbcc83c4..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/models/distributed_fairseq_model.py +++ /dev/null @@ -1,145 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os -import signal -import threading - -import torch -import torch.nn as nn -from torch.nn.parallel import DistributedDataParallel - -from fairseq.distributed import ( - DistributedTimeoutWrapper, - LegacyDistributedDataParallel, - ModuleProxyWrapper, - TPUDistributedDataParallel, -) - - -logger = logging.getLogger(__name__) - - -_GOSSIP_DISABLED = False -try: - import gossip -except ImportError: - _GOSSIP_DISABLED = True - - -def DistributedFairseqModel(args, model, process_group, device): - """ - Wrap a *model* to support distributed data parallel training. - - This is similar to the built-in DistributedDataParallel, but allows - additional configuration of the DistributedDataParallel class to - use, and also provides easier access to the wrapped model by - forwarding requests for missing attributes to the wrapped model. - - Args: - args (argparse.Namespace): fairseq args - model (BaseFairseqModel): model to wrap - process_group: the c10d process group to be used for distributed data - parallel all-reduction. - device: device to move model to - """ - assert isinstance(model, nn.Module) - if args.tpu: - wrapped_model = TPUDistributedDataParallel( - module=model.to(device), - process_group=process_group, - ) - # forward missing getattr and state_dict/load_state_dict to orig model - wrapped_model = ModuleProxyWrapper(wrapped_model) - elif args.ddp_backend in {"c10d", "pytorch_ddp"}: - wrapped_model = DistributedDataParallel( - module=model.to(device), - device_ids=[args.device_id], - output_device=args.device_id, - broadcast_buffers=args.broadcast_buffers, - bucket_cap_mb=args.bucket_cap_mb, - process_group=process_group, - find_unused_parameters=args.find_unused_parameters, - ) - if args.ddp_comm_hook == "fp16": - logger.info("enable fp16 communication hook in DDP") - try: - from torch.distributed.algorithms.ddp_comm_hooks import ( - register_ddp_comm_hook, - DDPCommHookType, - ) - except: - logger.error( - "Could not import from torch.distributed.algorithms.ddp_comm_hooks; you may need to update your pytorch version" - ) - raise - - register_ddp_comm_hook(DDPCommHookType.FP16_COMPRESS, wrapped_model) - # forward missing getattr and state_dict/load_state_dict to orig model - wrapped_model = ModuleProxyWrapper(wrapped_model) - elif args.ddp_backend in {"no_c10d", "legacy_ddp"}: - wrapped_model = LegacyDistributedDataParallel( - module=model.to(device), - buffer_size=2 ** 28, - process_group=process_group, - ) - # forward missing getattr and state_dict/load_state_dict to orig model - wrapped_model = ModuleProxyWrapper(wrapped_model) - elif args.ddp_backend == "slow_mo": - if _GOSSIP_DISABLED: - raise ImportError( - "Cannot find gossip library. 
Please install from: " - "github.com/facebookresearch/stochastic_gradient_push" - ) - - # The values of slowmo_momentum below were obtained by tuning on the - # En-De 16 dataset by training the transformer_wmt_en_de_large model - if args.slowmo_momentum is None: - if args.distributed_world_size <= 16: - args.slowmo_momentum = 0.0 - elif args.distributed_world_size <= 32: - args.slowmo_momentum = 0.2 - elif args.distributed_world_size <= 64: - args.slowmo_momentum = 0.5 - else: - args.slowmo_momentum = 0.6 - - wrapped_model = gossip.GossipDataParallel( - module=model.to(device), - device_ids=[args.device_id], - output_device=args.device_id, - broadcast_buffers=args.broadcast_buffers, - nprocs_per_node=args.nprocs_per_node, - slowmo_momentum=args.slowmo_momentum, - localsgd=(args.slowmo_algorithm == "LocalSGD"), - localsgd_frequency=args.localsgd_frequency, - ) - # forward missing getattr and state_dict/load_state_dict to orig model - wrapped_model = ModuleProxyWrapper(wrapped_model) - elif args.ddp_backend == "fully_sharded": - try: - from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP - except ImportError: - raise ImportError( - "Cannot find FullyShardedDataParallel. " - "Please install fairscale with: pip install fairscale" - ) - assert isinstance(model, FSDP), "expected model to already be wrapped in FSDP" - wrapped_model = model - if args.memory_efficient_fp16: - wrapped_model = wrapped_model.half() - if not args.cpu_offload: - wrapped_model = wrapped_model.to(device=device) - else: - raise ValueError("Unknown --ddp-backend: " + args.ddp_backend) - - # kill hung distributed jobs after a timeout - if getattr(args, "heartbeat_timeout", -1) > 0: - wrapped_model = DistributedTimeoutWrapper( - wrapped_model, timeout=getattr(args, "heartbeat_timeout", -1) - ) - - return wrapped_model diff --git a/spaces/gradio/HuBERT/fairseq/tasks/denoising.py b/spaces/gradio/HuBERT/fairseq/tasks/denoising.py deleted file mode 100644 index cbf01e14dfad17ee8ab0ae1ca67c2458b84559cb..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/tasks/denoising.py +++ /dev/null @@ -1,274 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os - -from fairseq import utils -from fairseq.data import ( - AppendTokenDataset, - DenoisingDataset, - Dictionary, - IdDataset, - NestedDictionaryDataset, - NumelDataset, - PadDataset, - PrependTokenDataset, - StripTokenDataset, - TokenBlockDataset, - data_utils, -) -from fairseq.data.encoders.utils import get_whole_word_mask -from fairseq.data.shorten_dataset import maybe_shorten_dataset -from fairseq.tasks import LegacyFairseqTask, register_task -import numpy as np - - -logger = logging.getLogger(__name__) - - -@register_task("denoising") -class DenoisingTask(LegacyFairseqTask): - """ - Denoising task for applying sequence to sequence denoising. (ie. 
BART) - """ - - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - parser.add_argument("data", help="path to data directory") - parser.add_argument( - "--tokens-per-sample", - default=512, - type=int, - help="max number of total tokens over all segments" - " per sample for dataset", - ) - parser.add_argument( - "--sample-break-mode", - default="complete_doc", - type=str, - help="mode for breaking sentence", - ) - parser.add_argument( - "--mask", - default=0.0, - type=float, - help="fraction of words/subwords that will be masked", - ) - parser.add_argument( - "--mask-random", - default=0.0, - type=float, - help="instead of using [MASK], use random token this often", - ) - parser.add_argument( - "--insert", - default=0.0, - type=float, - help="insert this percentage of additional random tokens", - ) - parser.add_argument( - "--permute", - default=0.0, - type=float, - help="take this proportion of subwords and permute them", - ) - parser.add_argument( - "--rotate", - default=0.5, - type=float, - help="rotate this proportion of inputs", - ) - parser.add_argument( - "--poisson-lambda", - default=3.0, - type=float, - help="randomly shuffle sentences for this proportion of inputs", - ) - parser.add_argument( - "--permute-sentences", - default=0.0, - type=float, - help="shuffle this proportion of sentences in all inputs", - ) - parser.add_argument( - "--mask-length", - default="subword", - type=str, - choices=["subword", "word", "span-poisson"], - help="mask length to choose", - ) - parser.add_argument( - "--replace-length", - default=-1, - type=int, - help="when masking N tokens, replace with 0, 1, or N tokens (use -1 for N)", - ) - parser.add_argument( - "--max-source-positions", - default=1024, - type=int, - metavar="N", - help="max number of tokens in the source sequence", - ) - parser.add_argument( - "--max-target-positions", - default=1024, - type=int, - metavar="N", - help="max number of tokens in the target sequence", - ) - - parser.add_argument( - "--shorten-method", - default="none", - choices=["none", "truncate", "random_crop"], - help="if not none, shorten sequences that exceed --tokens-per-sample", - ) - parser.add_argument( - "--shorten-data-split-list", - default="", - help="comma-separated list of dataset splits to apply shortening to, " - 'e.g., "train,valid" (default: all dataset splits)', - ) - - - def __init__(self, args, dictionary): - super().__init__(args) - self.dictionary = dictionary - self.seed = args.seed - - # add mask token - self.mask_idx = self.dictionary.add_symbol("") - - @classmethod - def setup_task(cls, args, **kwargs): - """Setup the task.""" - dictionary = Dictionary.load(os.path.join(args.data, "dict.txt")) - logger.info("dictionary: {} types".format(len(dictionary))) - if not hasattr(args, "shuffle_instance"): - args.shuffle_instance = False - return cls(args, dictionary) - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - """Load a given dataset split. 
- - Args: - split (str): name of the split (e.g., train, valid, test) - """ - paths = utils.split_paths(self.args.data) - assert len(paths) > 0 - data_path = paths[(epoch - 1) % len(paths)] - split_path = os.path.join(data_path, split) - - dataset = data_utils.load_indexed_dataset( - split_path, - self.dictionary, - self.args.dataset_impl, - combine=combine, - ) - if dataset is None: - raise FileNotFoundError( - "Dataset not found: {} ({})".format(split, split_path) - ) - - dataset = StripTokenDataset(dataset, self.dictionary.eos()) - - dataset = maybe_shorten_dataset( - dataset, - split, - self.args.shorten_data_split_list, - self.args.shorten_method, - self.args.tokens_per_sample, - self.args.seed, - ) - - # create continuous blocks of tokens - dataset = TokenBlockDataset( - dataset, - dataset.sizes, - self.args.tokens_per_sample - 2, # one less for and one for - pad=self.dictionary.pad(), - eos=self.dictionary.eos(), - break_mode=self.args.sample_break_mode, - document_sep_len=0, - ) - - # prepend beginning-of-sentence token (, equiv. to [CLS] in BERT) - dataset = PrependTokenDataset(dataset, self.source_dictionary.bos()) - dataset = AppendTokenDataset(dataset, self.source_dictionary.eos()) - - mask_whole_words = ( - get_whole_word_mask(self.args, self.source_dictionary) - if self.args.mask_length != "subword" - else None - ) - - self.datasets[split] = DenoisingDataset( - dataset, - dataset.sizes, - self.dictionary, - self.mask_idx, - mask_whole_words, - shuffle=self.args.shuffle_instance, - seed=self.seed, - args=self.args, - ) - logger.info( - "Split: {0}, Loaded {1} samples of denoising_dataset".format( - split, - len(self.datasets[split]), - ) - ) - - def build_dataset_for_inference(self, src_tokens, src_lengths, **kwargs): - """ - Generate batches for inference. We assume that the input begins with a - bos symbol (``) and ends with an eos symbol (``). - """ - pad = self.source_dictionary.pad() - eos = self.source_dictionary.eos() - src_dataset = TokenBlockDataset( - src_tokens, - src_lengths, - block_size=self.args.tokens_per_sample - 2, # for and - pad=pad, - eos=eos, - break_mode=self.args.sample_break_mode, - document_sep_len=0, - ) - prev_output_tokens = PrependTokenDataset( - StripTokenDataset(src_dataset, eos), eos - ) - src_dataset = PadDataset(src_dataset, pad_idx=pad, left_pad=False) - return NestedDictionaryDataset( - { - "id": IdDataset(), - "net_input": { - "src_tokens": src_dataset, - "src_lengths": NumelDataset(src_dataset, reduce=False), - "prev_output_tokens": PadDataset( - prev_output_tokens, pad_idx=pad, left_pad=False - ), - }, - "target": src_dataset, - }, - sizes=[np.array(src_lengths)], - ) - - def max_positions(self): - """Return the max sentence length allowed by the task.""" - return (self.args.max_source_positions, self.args.max_target_positions) - - @property - def source_dictionary(self): - """Return the source :class:`~fairseq.data.Dictionary`.""" - return self.dictionary - - @property - def target_dictionary(self): - """Return the target :class:`~fairseq.data.Dictionary`.""" - return self.dictionary diff --git a/spaces/gradio/blocks_outputs/run.py b/spaces/gradio/blocks_outputs/run.py deleted file mode 100644 index cd0b4d25a23ff17d855d00253c9a2ed58d878566..0000000000000000000000000000000000000000 --- a/spaces/gradio/blocks_outputs/run.py +++ /dev/null @@ -1,94 +0,0 @@ -import gradio as gr - - -def make_markdown(): - return [ - [ - "# hello again", - "Hello my name is frank, I am liking the small turtle you have there. 
It would be a shame if it went missing.", - '', - ], - [ - "## hello again again", - "Hello my name is frank, I am liking the small turtle you have there. It would be a shame if it went missing.", - '', - ], - [ - "### hello thrice", - "Hello my name is frank, I am liking the small turtle you have there. It would be a shame if it went missing.", - '', - ], - ] - - -with gr.Blocks() as demo: - with gr.Column(): - txt = gr.Textbox(label="Small Textbox", lines=1, show_label=False) - txt = gr.Textbox(label="Large Textbox", lines=5, show_label=False) - num = gr.Number(label="Number", show_label=False) - check = gr.Checkbox(label="Checkbox", show_label=False) - check_g = gr.CheckboxGroup( - label="Checkbox Group", choices=["One", "Two", "Three"], show_label=False - ) - radio = gr.Radio( - label="Radio", choices=["One", "Two", "Three"], show_label=False - ) - drop = gr.Dropdown( - label="Dropdown", choices=["One", "Two", "Three"], show_label=False - ) - slider = gr.Slider(label="Slider", show_label=False) - audio = gr.Audio(show_label=False) - file = gr.File(show_label=False) - video = gr.Video(show_label=False) - image = gr.Image(show_label=False) - df = gr.Dataframe(show_label=False) - html = gr.HTML(show_label=False) - json = gr.JSON(show_label=False) - md = gr.Markdown(show_label=False) - label = gr.Label(show_label=False) - highlight = gr.HighlightedText(show_label=False) - gr.Dataframe(interactive=True, col_count=(3, "fixed"), label="Dataframe") - gr.Dataframe(interactive=True, col_count=4, label="Dataframe") - gr.Dataframe( - interactive=True, headers=["One", "Two", "Three", "Four"], label="Dataframe" - ) - gr.Dataframe( - interactive=True, - headers=["One", "Two", "Three", "Four"], - col_count=(4, "fixed"), - row_count=(7, "fixed"), - value=[[0, 0, 0, 0]], - label="Dataframe", - ) - gr.Dataframe( - interactive=True, headers=["One", "Two", "Three", "Four"], col_count=4 - ) - df = gr.DataFrame( - [ - [ - "# hello", - "Hello my name is frank, I am liking the small turtle you have there. It would be a shame if it went missing.", - '', - ], - [ - "## hello", - "Hello my name is frank, I am liking the small turtle you have there. It would be a shame if it went missing.", - '', - ], - [ - "### hello", - "Hello my name is frank, I am liking the small turtle you have there. It would be a shame if it went missing.", - '', - ], - ], - headers=["One", "Two", "Three"], - wrap=True, - datatype=["markdown", "markdown", "html"], - interactive=True, - ) - btn = gr.Button("Run") - btn.click(fn=make_markdown, inputs=None, outputs=df) - - -if __name__ == "__main__": - demo.launch() diff --git a/spaces/gulabpatel/Real-ESRGAN/realesrgan/utils.py b/spaces/gulabpatel/Real-ESRGAN/realesrgan/utils.py deleted file mode 100644 index 10e7c23d04f777c250160e74470fdfacb16eab88..0000000000000000000000000000000000000000 --- a/spaces/gulabpatel/Real-ESRGAN/realesrgan/utils.py +++ /dev/null @@ -1,280 +0,0 @@ -import cv2 -import math -import numpy as np -import os -import queue -import threading -import torch -from basicsr.utils.download_util import load_file_from_url -from torch.nn import functional as F - -ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) - - -class RealESRGANer(): - """A helper class for upsampling images with RealESRGAN. - - Args: - scale (int): Upsampling scale factor used in the networks. It is usually 2 or 4. - model_path (str): The path to the pretrained model. It can be urls (will first download it automatically). - model (nn.Module): The defined network. Default: None. 
- tile (int): As too large images result in the out of GPU memory issue, so this tile option will first crop - input images into tiles, and then process each of them. Finally, they will be merged into one image. - 0 denotes for do not use tile. Default: 0. - tile_pad (int): The pad size for each tile, to remove border artifacts. Default: 10. - pre_pad (int): Pad the input images to avoid border artifacts. Default: 10. - half (float): Whether to use half precision during inference. Default: False. - """ - - def __init__(self, scale, model_path, model=None, tile=0, tile_pad=10, pre_pad=10, half=False): - self.scale = scale - self.tile_size = tile - self.tile_pad = tile_pad - self.pre_pad = pre_pad - self.mod_scale = None - self.half = half - - # initialize model - self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - # if the model_path starts with https, it will first download models to the folder: realesrgan/weights - if model_path.startswith('https://'): - model_path = load_file_from_url( - url=model_path, model_dir=os.path.join(ROOT_DIR, 'realesrgan/weights'), progress=True, file_name=None) - loadnet = torch.load(model_path, map_location=torch.device('cpu')) - # prefer to use params_ema - if 'params_ema' in loadnet: - keyname = 'params_ema' - else: - keyname = 'params' - model.load_state_dict(loadnet[keyname], strict=True) - model.eval() - self.model = model.to(self.device) - if self.half: - self.model = self.model.half() - - def pre_process(self, img): - """Pre-process, such as pre-pad and mod pad, so that the images can be divisible - """ - img = torch.from_numpy(np.transpose(img, (2, 0, 1))).float() - self.img = img.unsqueeze(0).to(self.device) - if self.half: - self.img = self.img.half() - - # pre_pad - if self.pre_pad != 0: - self.img = F.pad(self.img, (0, self.pre_pad, 0, self.pre_pad), 'reflect') - # mod pad for divisible borders - if self.scale == 2: - self.mod_scale = 2 - elif self.scale == 1: - self.mod_scale = 4 - if self.mod_scale is not None: - self.mod_pad_h, self.mod_pad_w = 0, 0 - _, _, h, w = self.img.size() - if (h % self.mod_scale != 0): - self.mod_pad_h = (self.mod_scale - h % self.mod_scale) - if (w % self.mod_scale != 0): - self.mod_pad_w = (self.mod_scale - w % self.mod_scale) - self.img = F.pad(self.img, (0, self.mod_pad_w, 0, self.mod_pad_h), 'reflect') - - def process(self): - # model inference - self.output = self.model(self.img) - - def tile_process(self): - """It will first crop input images to tiles, and then process each tile. - Finally, all the processed tiles are merged into one images. 
- - Modified from: https://github.com/ata4/esrgan-launcher - """ - batch, channel, height, width = self.img.shape - output_height = height * self.scale - output_width = width * self.scale - output_shape = (batch, channel, output_height, output_width) - - # start with black image - self.output = self.img.new_zeros(output_shape) - tiles_x = math.ceil(width / self.tile_size) - tiles_y = math.ceil(height / self.tile_size) - - # loop over all tiles - for y in range(tiles_y): - for x in range(tiles_x): - # extract tile from input image - ofs_x = x * self.tile_size - ofs_y = y * self.tile_size - # input tile area on total image - input_start_x = ofs_x - input_end_x = min(ofs_x + self.tile_size, width) - input_start_y = ofs_y - input_end_y = min(ofs_y + self.tile_size, height) - - # input tile area on total image with padding - input_start_x_pad = max(input_start_x - self.tile_pad, 0) - input_end_x_pad = min(input_end_x + self.tile_pad, width) - input_start_y_pad = max(input_start_y - self.tile_pad, 0) - input_end_y_pad = min(input_end_y + self.tile_pad, height) - - # input tile dimensions - input_tile_width = input_end_x - input_start_x - input_tile_height = input_end_y - input_start_y - tile_idx = y * tiles_x + x + 1 - input_tile = self.img[:, :, input_start_y_pad:input_end_y_pad, input_start_x_pad:input_end_x_pad] - - # upscale tile - try: - with torch.no_grad(): - output_tile = self.model(input_tile) - except RuntimeError as error: - print('Error', error) - print(f'\tTile {tile_idx}/{tiles_x * tiles_y}') - - # output tile area on total image - output_start_x = input_start_x * self.scale - output_end_x = input_end_x * self.scale - output_start_y = input_start_y * self.scale - output_end_y = input_end_y * self.scale - - # output tile area without padding - output_start_x_tile = (input_start_x - input_start_x_pad) * self.scale - output_end_x_tile = output_start_x_tile + input_tile_width * self.scale - output_start_y_tile = (input_start_y - input_start_y_pad) * self.scale - output_end_y_tile = output_start_y_tile + input_tile_height * self.scale - - # put tile into output image - self.output[:, :, output_start_y:output_end_y, - output_start_x:output_end_x] = output_tile[:, :, output_start_y_tile:output_end_y_tile, - output_start_x_tile:output_end_x_tile] - - def post_process(self): - # remove extra pad - if self.mod_scale is not None: - _, _, h, w = self.output.size() - self.output = self.output[:, :, 0:h - self.mod_pad_h * self.scale, 0:w - self.mod_pad_w * self.scale] - # remove prepad - if self.pre_pad != 0: - _, _, h, w = self.output.size() - self.output = self.output[:, :, 0:h - self.pre_pad * self.scale, 0:w - self.pre_pad * self.scale] - return self.output - - @torch.no_grad() - def enhance(self, img, outscale=None, alpha_upsampler='realesrgan'): - h_input, w_input = img.shape[0:2] - # img: numpy - img = img.astype(np.float32) - if np.max(img) > 256: # 16-bit image - max_range = 65535 - print('\tInput is a 16-bit image') - else: - max_range = 255 - img = img / max_range - if len(img.shape) == 2: # gray image - img_mode = 'L' - img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) - elif img.shape[2] == 4: # RGBA image with alpha channel - img_mode = 'RGBA' - alpha = img[:, :, 3] - img = img[:, :, 0:3] - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - if alpha_upsampler == 'realesrgan': - alpha = cv2.cvtColor(alpha, cv2.COLOR_GRAY2RGB) - else: - img_mode = 'RGB' - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - - # ------------------- process image (without the alpha channel) ------------------- # - 
self.pre_process(img) - if self.tile_size > 0: - self.tile_process() - else: - self.process() - output_img = self.post_process() - output_img = output_img.data.squeeze().float().cpu().clamp_(0, 1).numpy() - output_img = np.transpose(output_img[[2, 1, 0], :, :], (1, 2, 0)) - if img_mode == 'L': - output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2GRAY) - - # ------------------- process the alpha channel if necessary ------------------- # - if img_mode == 'RGBA': - if alpha_upsampler == 'realesrgan': - self.pre_process(alpha) - if self.tile_size > 0: - self.tile_process() - else: - self.process() - output_alpha = self.post_process() - output_alpha = output_alpha.data.squeeze().float().cpu().clamp_(0, 1).numpy() - output_alpha = np.transpose(output_alpha[[2, 1, 0], :, :], (1, 2, 0)) - output_alpha = cv2.cvtColor(output_alpha, cv2.COLOR_BGR2GRAY) - else: # use the cv2 resize for alpha channel - h, w = alpha.shape[0:2] - output_alpha = cv2.resize(alpha, (w * self.scale, h * self.scale), interpolation=cv2.INTER_LINEAR) - - # merge the alpha channel - output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2BGRA) - output_img[:, :, 3] = output_alpha - - # ------------------------------ return ------------------------------ # - if max_range == 65535: # 16-bit image - output = (output_img * 65535.0).round().astype(np.uint16) - else: - output = (output_img * 255.0).round().astype(np.uint8) - - if outscale is not None and outscale != float(self.scale): - output = cv2.resize( - output, ( - int(w_input * outscale), - int(h_input * outscale), - ), interpolation=cv2.INTER_LANCZOS4) - - return output, img_mode - - -class PrefetchReader(threading.Thread): - """Prefetch images. - - Args: - img_list (list[str]): A image list of image paths to be read. - num_prefetch_queue (int): Number of prefetch queue. - """ - - def __init__(self, img_list, num_prefetch_queue): - super().__init__() - self.que = queue.Queue(num_prefetch_queue) - self.img_list = img_list - - def run(self): - for img_path in self.img_list: - img = cv2.imread(img_path, cv2.IMREAD_UNCHANGED) - self.que.put(img) - - self.que.put(None) - - def __next__(self): - next_item = self.que.get() - if next_item is None: - raise StopIteration - return next_item - - def __iter__(self): - return self - - -class IOConsumer(threading.Thread): - - def __init__(self, opt, que, qid): - super().__init__() - self._queue = que - self.qid = qid - self.opt = opt - - def run(self): - while True: - msg = self._queue.get() - if isinstance(msg, str) and msg == 'quit': - break - - output = msg['output'] - save_path = msg['save_path'] - cv2.imwrite(save_path, output) - print(f'IO worker {self.qid} is done.') diff --git a/spaces/gwang-kim/DATID-3D/eg3d/training/volumetric_rendering/ray_sampler.py b/spaces/gwang-kim/DATID-3D/eg3d/training/volumetric_rendering/ray_sampler.py deleted file mode 100644 index 00dd07b908497bd07bbe0e394d9eac38acce2b50..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/eg3d/training/volumetric_rendering/ray_sampler.py +++ /dev/null @@ -1,63 +0,0 @@ -# SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# SPDX-License-Identifier: LicenseRef-NvidiaProprietary -# -# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual -# property and proprietary rights in and to this material, related -# documentation and any modifications thereto. 
Any use, reproduction, -# disclosure or distribution of this material and related documentation -# without an express license agreement from NVIDIA CORPORATION or -# its affiliates is strictly prohibited. - -""" -The ray sampler is a module that takes in camera matrices and resolution and batches of rays. -Expects cam2world matrices that use the OpenCV camera coordinate system conventions. -""" - -import torch - -class RaySampler(torch.nn.Module): - def __init__(self): - super().__init__() - self.ray_origins_h, self.ray_directions, self.depths, self.image_coords, self.rendering_options = None, None, None, None, None - - - def forward(self, cam2world_matrix, intrinsics, resolution): - """ - Create batches of rays and return origins and directions. - - cam2world_matrix: (N, 4, 4) - intrinsics: (N, 3, 3) - resolution: int - - ray_origins: (N, M, 3) - ray_dirs: (N, M, 2) - """ - N, M = cam2world_matrix.shape[0], resolution**2 - cam_locs_world = cam2world_matrix[:, :3, 3] - fx = intrinsics[:, 0, 0] - fy = intrinsics[:, 1, 1] - cx = intrinsics[:, 0, 2] - cy = intrinsics[:, 1, 2] - sk = intrinsics[:, 0, 1] - - uv = torch.stack(torch.meshgrid(torch.arange(resolution, dtype=torch.float32, device=cam2world_matrix.device), torch.arange(resolution, dtype=torch.float32, device=cam2world_matrix.device), indexing='ij')) * (1./resolution) + (0.5/resolution) - uv = uv.flip(0).reshape(2, -1).transpose(1, 0) - uv = uv.unsqueeze(0).repeat(cam2world_matrix.shape[0], 1, 1) - - x_cam = uv[:, :, 0].view(N, -1) - y_cam = uv[:, :, 1].view(N, -1) - z_cam = torch.ones((N, M), device=cam2world_matrix.device) - - x_lift = (x_cam - cx.unsqueeze(-1) + cy.unsqueeze(-1)*sk.unsqueeze(-1)/fy.unsqueeze(-1) - sk.unsqueeze(-1)*y_cam/fy.unsqueeze(-1)) / fx.unsqueeze(-1) * z_cam - y_lift = (y_cam - cy.unsqueeze(-1)) / fy.unsqueeze(-1) * z_cam - - cam_rel_points = torch.stack((x_lift, y_lift, z_cam, torch.ones_like(z_cam)), dim=-1) - - world_rel_points = torch.bmm(cam2world_matrix, cam_rel_points.permute(0, 2, 1)).permute(0, 2, 1)[:, :, :3] - - ray_dirs = world_rel_points - cam_locs_world[:, None, :] - ray_dirs = torch.nn.functional.normalize(ray_dirs, dim=2) - - ray_origins = cam_locs_world.unsqueeze(1).repeat(1, ray_dirs.shape[1], 1) - - return ray_origins, ray_dirs \ No newline at end of file diff --git a/spaces/haakohu/deep_privacy2/dp2/loss/utils.py b/spaces/haakohu/deep_privacy2/dp2/loss/utils.py deleted file mode 100644 index 8d6e19c3a0c4718412e6d83e3405c73029275f35..0000000000000000000000000000000000000000 --- a/spaces/haakohu/deep_privacy2/dp2/loss/utils.py +++ /dev/null @@ -1,26 +0,0 @@ -import torch -import torch.nn.functional as F - - -def nsgan_g_loss(fake_score): - """ - Non-saturating criterion from Goodfellow et al. 2014 - """ - return torch.nn.functional.softplus(-fake_score) - - -def nsgan_d_loss(real_score, fake_score): - """ - Non-saturating criterion from Goodfellow et al. 
2014 - """ - d_loss = F.softplus(-real_score) + F.softplus(fake_score) - return d_loss.view(-1) - - -def smooth_masked_l1_loss(x, target, mask): - """ - Pixel-wise l1 loss for the area indicated by mask - """ - # Beta=.1 <-> square loss if pixel difference <= 12.8 - l1 = F.smooth_l1_loss(x*mask, target*mask, beta=.1, reduction="none").sum(dim=[1, 2, 3]) / mask.sum(dim=[1, 2, 3]) - return l1 diff --git a/spaces/hanskabvw1/chat/Dockerfile b/spaces/hanskabvw1/chat/Dockerfile deleted file mode 100644 index 1f185cc85fa318fdf39f91be98db2bb7e805411c..0000000000000000000000000000000000000000 --- a/spaces/hanskabvw1/chat/Dockerfile +++ /dev/null @@ -1,121 +0,0 @@ -ARG MODEL_NAME -ARG MODEL_PARAMS -ARG APP_COLOR -ARG APP_NAME - - -FROM node:19 as chatui-builder -ARG MODEL_NAME -ARG MODEL_PARAMS -ARG APP_COLOR -ARG APP_NAME - -WORKDIR /app - -RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \ - git gettext && \ - rm -rf /var/lib/apt/lists/* - - -RUN git clone https://github.com/huggingface/chat-ui.git - -WORKDIR /app/chat-ui - - -COPY .env.local.template .env.local.template - -RUN mkdir defaults -ADD defaults /defaults -RUN chmod -R 777 /defaults -RUN --mount=type=secret,id=MONGODB_URL,mode=0444 \ - MODEL_NAME="${MODEL_NAME:="$(cat /defaults/MODEL_NAME)"}" && export MODEL_NAME \ - && MODEL_PARAMS="${MODEL_PARAMS:="$(cat /defaults/MODEL_PARAMS)"}" && export MODEL_PARAMS \ - && APP_COLOR="${APP_COLOR:="$(cat /defaults/APP_COLOR)"}" && export APP_COLOR \ - && APP_NAME="${APP_NAME:="$(cat /defaults/APP_NAME)"}" && export APP_NAME \ - && MONGODB_URL=$(cat /run/secrets/MONGODB_URL > /dev/null | grep '^' || cat /defaults/MONGODB_URL) && export MONGODB_URL && \ - echo "${MONGODB_URL}" && \ - envsubst < ".env.local.template" > ".env.local" \ - && rm .env.local.template - - - -RUN --mount=type=cache,target=/app/.npm \ - npm set cache /app/.npm && \ - npm ci - -RUN npm run build - -FROM ghcr.io/huggingface/text-generation-inference:latest - -ARG MODEL_NAME -ARG MODEL_PARAMS -ARG APP_COLOR -ARG APP_NAME - -ENV TZ=Europe/Paris \ - PORT=3000 - - - -RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \ - gnupg \ - curl \ - gettext && \ - rm -rf /var/lib/apt/lists/* -COPY entrypoint.sh.template entrypoint.sh.template - -RUN mkdir defaults -ADD defaults /defaults -RUN chmod -R 777 /defaults - -RUN --mount=type=secret,id=MONGODB_URL,mode=0444 \ - MODEL_NAME="${MODEL_NAME:="$(cat /defaults/MODEL_NAME)"}" && export MODEL_NAME \ - && MODEL_PARAMS="${MODEL_PARAMS:="$(cat /defaults/MODEL_PARAMS)"}" && export MODEL_PARAMS \ - && APP_COLOR="${APP_COLOR:="$(cat /defaults/APP_COLOR)"}" && export APP_COLOR \ - && APP_NAME="${APP_NAME:="$(cat /defaults/APP_NAME)"}" && export APP_NAME \ - && MONGODB_URL=$(cat /run/secrets/MONGODB_URL > /dev/null | grep '^' || cat /defaults/MONGODB_URL) && export MONGODB_URL && \ - envsubst < "entrypoint.sh.template" > "entrypoint.sh" \ - && rm entrypoint.sh.template - - -RUN curl -fsSL https://pgp.mongodb.com/server-6.0.asc | \ - gpg -o /usr/share/keyrings/mongodb-server-6.0.gpg \ - --dearmor - -RUN echo "deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-6.0.gpg ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/6.0 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-6.0.list - -RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \ - mongodb-org && \ - rm -rf /var/lib/apt/lists/* - -RUN mkdir -p /data/db -RUN chown -R 1000:1000 /data - 
-RUN curl -fsSL https://deb.nodesource.com/setup_19.x | /bin/bash - - -RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \ - nodejs && \ - rm -rf /var/lib/apt/lists/* - -RUN mkdir /app -RUN chown -R 1000:1000 /app - -RUN useradd -m -u 1000 user - -# Switch to the "user" user -USER user - -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -RUN npm config set prefix /home/user/.local -RUN npm install -g pm2 - -COPY --from=chatui-builder --chown=1000 /app/chat-ui/node_modules /app/node_modules -COPY --from=chatui-builder --chown=1000 /app/chat-ui/package.json /app/package.json -COPY --from=chatui-builder --chown=1000 /app/chat-ui/build /app/build - -ENTRYPOINT ["/bin/bash"] -CMD ["entrypoint.sh"] - - diff --git a/spaces/harshasurampudi/Which_Planet/README.md b/spaces/harshasurampudi/Which_Planet/README.md deleted file mode 100644 index 0a11b9b2286027d2b3e6dd50b1b6f7dfb7de836a..0000000000000000000000000000000000000000 --- a/spaces/harshasurampudi/Which_Planet/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Which Planet -emoji: 🚀 -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/configs/quick_schedules/README.md b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/configs/quick_schedules/README.md deleted file mode 100644 index a278199b8557a1e2fb341fe6757786a6cecb82b3..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/configs/quick_schedules/README.md +++ /dev/null @@ -1 +0,0 @@ -These are quick configs for performance or accuracy regression tracking purposes. diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/data/samplers/grouped_batch_sampler.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/data/samplers/grouped_batch_sampler.py deleted file mode 100644 index 138e106136083383d9f8729f1da930804463b297..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/data/samplers/grouped_batch_sampler.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import numpy as np -from torch.utils.data.sampler import BatchSampler, Sampler - - -class GroupedBatchSampler(BatchSampler): - """ - Wraps another sampler to yield a mini-batch of indices. - It enforces that the batch only contain elements from the same group. - It also tries to provide mini-batches which follows an ordering which is - as close as possible to the ordering from the original sampler. - """ - - def __init__(self, sampler, group_ids, batch_size): - """ - Args: - sampler (Sampler): Base sampler. - group_ids (list[int]): If the sampler produces indices in range [0, N), - `group_ids` must be a list of `N` ints which contains the group id of each sample. - The group ids must be a set of integers in the range [0, num_groups). - batch_size (int): Size of mini-batch. 
- """ - if not isinstance(sampler, Sampler): - raise ValueError( - "sampler should be an instance of " - "torch.utils.data.Sampler, but got sampler={}".format(sampler) - ) - self.sampler = sampler - self.group_ids = np.asarray(group_ids) - assert self.group_ids.ndim == 1 - self.batch_size = batch_size - groups = np.unique(self.group_ids).tolist() - - # buffer the indices of each group until batch size is reached - self.buffer_per_group = {k: [] for k in groups} - - def __iter__(self): - for idx in self.sampler: - group_id = self.group_ids[idx] - group_buffer = self.buffer_per_group[group_id] - group_buffer.append(idx) - if len(group_buffer) == self.batch_size: - yield group_buffer[:] # yield a copy of the list - del group_buffer[:] - - def __len__(self): - raise NotImplementedError("len() of GroupedBatchSampler is not well-defined.") diff --git a/spaces/hasibzunair/fifa-tryon-demo/models/pix2pixHD_model.py b/spaces/hasibzunair/fifa-tryon-demo/models/pix2pixHD_model.py deleted file mode 100644 index 3ffddf455618f3b5e6b66c508a854c3fdaa78157..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/models/pix2pixHD_model.py +++ /dev/null @@ -1,493 +0,0 @@ -import numpy as np -import torch -import os -from torch.autograd import Variable -from util.image_pool import ImagePool -import torch.nn as nn - -import cv2 -from .base_model import BaseModel -from . import networks -import torch.nn.functional as F - -NC = 20 - - -def generate_discrete_label(inputs, label_nc, onehot=True, encode=True): - pred_batch = [] - size = inputs.size() - for input in inputs: - input = input.view(1, label_nc, size[2], size[3]) - pred = np.squeeze(input.data.max(1)[1].cpu().numpy(), axis=0) - pred_batch.append(pred) - - pred_batch = np.array(pred_batch) - pred_batch = torch.from_numpy(pred_batch) - label_map = [] - for p in pred_batch: - p = p.view(1, 256, 192) - label_map.append(p) - label_map = torch.stack(label_map, 0) - if not onehot: - return label_map.float().cuda() - size = label_map.size() - oneHot_size = (size[0], label_nc, size[2], size[3]) - input_label = torch.cuda.FloatTensor(torch.Size(oneHot_size)).zero_() - input_label = input_label.scatter_(1, label_map.data.long().cuda(), 1.0) - - return input_label - - -def morpho(mask, iter, bigger=True): - kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)) - new = [] - for i in range(len(mask)): - tem = mask[i].cpu().detach().numpy().squeeze().reshape(256, 192, 1)*255 - tem = tem.astype(np.uint8) - if bigger: - tem = cv2.dilate(tem, kernel, iterations=iter) - else: - tem = cv2.erode(tem, kernel, iterations=iter) - tem = tem.astype(np.float64) - tem = tem.reshape(1, 256, 192) - new.append(tem.astype(np.float64)/255.0) - new = np.stack(new) - new = torch.FloatTensor(new).cuda() - return new - - -def morpho_smaller(mask, iter, bigger=True): - kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (1, 1)) - new = [] - for i in range(len(mask)): - tem = mask[i].cpu().detach().numpy().squeeze().reshape(256, 192, 1)*255 - tem = tem.astype(np.uint8) - if bigger: - tem = cv2.dilate(tem, kernel, iterations=iter) - else: - tem = cv2.erode(tem, kernel, iterations=iter) - tem = tem.astype(np.float64) - tem = tem.reshape(1, 256, 192) - new.append(tem.astype(np.float64)/255.0) - new = np.stack(new) - new = torch.FloatTensor(new).cuda() - return new - - -def encode(label_map, size): - label_nc = 14 - oneHot_size = (size[0], label_nc, size[2], size[3]) - input_label = torch.cuda.FloatTensor(torch.Size(oneHot_size)).zero_() - input_label = 
input_label.scatter_(1, label_map.data.long().cuda(), 1.0) - return input_label - - -class Pix2PixHDModel(BaseModel): - def name(self): - return 'Pix2PixHDModel' - - def init_loss_filter(self, use_gan_feat_loss, use_vgg_loss): - flags = (True, use_gan_feat_loss, use_vgg_loss, True, True) - - def loss_filter(g_gan, g_gan_feat, g_vgg, d_real, d_fake): - return [l for (l, f) in zip((g_gan, g_gan_feat, g_vgg, d_real, d_fake), flags) if f] - - return loss_filter - - def get_G(self, in_C, out_c, n_blocks, opt, L=1, S=1): - return networks.define_G(in_C, out_c, opt.ngf, opt.netG, L, S, - opt.n_downsample_global, n_blocks, opt.n_local_enhancers, - opt.n_blocks_local, opt.norm, gpu_ids=self.gpu_ids) - - def get_D(self, inc, opt): - netD = networks.define_D(inc, opt.ndf, opt.n_layers_D, opt.norm, opt.no_lsgan, - opt.num_D, not opt.no_ganFeat_loss, gpu_ids=self.gpu_ids) - return netD - - def cross_entropy2d(self, input, target, weight=None, size_average=True): - n, c, h, w = input.size() - nt, ht, wt = target.size() - - # Handle inconsistent size between input and target - if h != ht or w != wt: - input = F.interpolate(input, size=( - ht, wt), mode="bilinear", align_corners=True) - - input = input.transpose(1, 2).transpose(2, 3).contiguous().view(-1, c) - target = target.view(-1) - loss = F.cross_entropy( - input, target, weight=weight, size_average=size_average, ignore_index=250 - ) - - return loss - - def ger_average_color(self, mask, arms): - color = torch.zeros(arms.shape).cuda() - for i in range(arms.shape[0]): - count = len(torch.nonzero(mask[i, :, :, :])) - if count < 10: - color[i, 0, :, :] = 0 - color[i, 1, :, :] = 0 - color[i, 2, :, :] = 0 - - else: - color[i, 0, :, :] = arms[i, 0, :, :].sum() / count - color[i, 1, :, :] = arms[i, 1, :, :].sum() / count - color[i, 2, :, :] = arms[i, 2, :, :].sum() / count - return color - - def initialize(self, opt): - BaseModel.initialize(self, opt) - if opt.resize_or_crop != 'none' or not opt.isTrain: # when training at full res this causes OOM - torch.backends.cudnn.benchmark = True - self.isTrain = opt.isTrain - input_nc = opt.label_nc if opt.label_nc != 0 else opt.input_nc - self.count = 0 - # define networks - # Generator network - netG_input_nc = input_nc - # Main Generator - with torch.no_grad(): - self.Unet = networks.define_UnetMask(4, self.gpu_ids).eval() - self.G1 = networks.define_Refine_ResUnet(37, 14, self.gpu_ids).eval() - self.G2 = networks.define_Refine(19+18, 1, self.gpu_ids).eval() - self.G = networks.define_Refine(24, 3, self.gpu_ids).eval() - - self.tanh = nn.Tanh() - self.sigmoid = nn.Sigmoid() - self.BCE = torch.nn.BCEWithLogitsLoss() - - # Discriminator network - if self.isTrain: - use_sigmoid = opt.no_lsgan - netD_input_nc = input_nc + opt.output_nc - netB_input_nc = opt.output_nc * 2 - # self.D1 = self.get_D(17, opt) - # self.D2 = self.get_D(4, opt) - # self.D3=self.get_D(7+3,opt) - # self.D = self.get_D(20, opt) - # self.netB = networks.define_B(netB_input_nc, opt.output_nc, 32, 3, 3, opt.norm, gpu_ids=self.gpu_ids) - - if self.opt.verbose: - print('---------- Networks initialized -------------') - - # load networks - if not self.isTrain or opt.continue_train or opt.load_pretrain: - pretrained_path = '' if not self.isTrain else opt.load_pretrain - self.load_network(self.Unet, 'U', opt.which_epoch, pretrained_path) - self.load_network(self.G1, 'G1', opt.which_epoch, pretrained_path) - self.load_network(self.G2, 'G2', opt.which_epoch, pretrained_path) - self.load_network(self.G, 'G', opt.which_epoch, pretrained_path) - # set loss 
functions and optimizers - if self.isTrain: - if opt.pool_size > 0 and (len(self.gpu_ids)) > 1: - raise NotImplementedError( - "Fake Pool Not Implemented for MultiGPU") - self.fake_pool = ImagePool(opt.pool_size) - self.old_lr = opt.lr - - # define loss functions - self.loss_filter = self.init_loss_filter( - not opt.no_ganFeat_loss, not opt.no_vgg_loss) - - self.criterionGAN = networks.GANLoss( - use_lsgan=not opt.no_lsgan, tensor=self.Tensor) - self.criterionFeat = torch.nn.L1Loss() - if not opt.no_vgg_loss: - self.criterionVGG = networks.VGGLoss(self.gpu_ids) - self.criterionStyle = networks.StyleLoss(self.gpu_ids) - # Names so we can breakout loss - self.loss_names = self.loss_filter( - 'G_GAN', 'G_GAN_Feat', 'G_VGG', 'D_real', 'D_fake') - # initialize optimizers - # optimizer G - if opt.niter_fix_global > 0: - import sys - if sys.version_info >= (3, 0): - finetune_list = set() - else: - from sets import Set - finetune_list = Set() - - params_dict = dict(self.netG.named_parameters()) - params = [] - for key, value in params_dict.items(): - if key.startswith('model' + str(opt.n_local_enhancers)): - params += [value] - finetune_list.add(key.split('.')[0]) - print( - '------------- Only training the local enhancer ork (for %d epochs) ------------' % opt.niter_fix_global) - print('The layers that are finetuned are ', - sorted(finetune_list)) - - def encode_input(self, label_map, clothes_mask, all_clothes_label): - - size = label_map.size() - oneHot_size = (size[0], 14, size[2], size[3]) - input_label = torch.cuda.FloatTensor(torch.Size(oneHot_size)).zero_() - input_label = input_label.scatter_( - 1, label_map.data.long().cuda(), 1.0) - - masked_label = torch.cuda.FloatTensor(torch.Size(oneHot_size)).zero_() - masked_label = masked_label.scatter_( - 1, (label_map * (1 - clothes_mask)).data.long().cuda(), 1.0) - - c_label = torch.cuda.FloatTensor(torch.Size(oneHot_size)).zero_() - c_label = c_label.scatter_( - 1, all_clothes_label.data.long().cuda(), 1.0) - - input_label = Variable(input_label) - - return input_label, masked_label, c_label - - def encode_input_test(self, label_map, label_map_ref, real_image_ref, infer=False): - - if self.opt.label_nc == 0: - input_label = label_map.data.cuda() - input_label_ref = label_map_ref.data.cuda() - else: - # create one-hot vector for label map - size = label_map.size() - oneHot_size = (size[0], self.opt.label_nc, size[2], size[3]) - input_label = torch.cuda.FloatTensor( - torch.Size(oneHot_size)).zero_() - input_label = input_label.scatter_( - 1, label_map.data.long().cuda(), 1.0) - input_label_ref = torch.cuda.FloatTensor( - torch.Size(oneHot_size)).zero_() - input_label_ref = input_label_ref.scatter_( - 1, label_map_ref.data.long().cuda(), 1.0) - if self.opt.data_type == 16: - input_label = input_label.half() - input_label_ref = input_label_ref.half() - - input_label = Variable(input_label, volatile=infer) - input_label_ref = Variable(input_label_ref, volatile=infer) - real_image_ref = Variable(real_image_ref.data.cuda()) - - return input_label, input_label_ref, real_image_ref - - def discriminate(self, netD, input_label, test_image, use_pool=False): - input_concat = torch.cat((input_label, test_image.detach()), dim=1) - if use_pool: - fake_query = self.fake_pool.query(input_concat) - return netD.forward(fake_query) - else: - return netD.forward(input_concat) - - def gen_noise(self, shape): - noise = np.zeros(shape, dtype=np.uint8) - # noise - noise = cv2.randn(noise, 0, 255) - noise = np.asarray(noise / 255, dtype=np.uint8) - noise = 
torch.tensor(noise, dtype=torch.float32) - return noise.cuda() - - def multi_scale_blend(self, fake_img, fake_c, mask, number=4): - alpha = [0, 0.1, 0.3, 0.6, 0.9] - smaller = mask - out = 0 - for i in range(1, number+1): - bigger = smaller - smaller = morpho(smaller, 2, False) - mid = bigger-smaller - out += mid*(alpha[i]*fake_c+(1-alpha[i])*fake_img) - out += smaller*fake_c - out += (1-mask)*fake_img - return out - - def forward(self, label, pre_clothes_mask, img_fore, clothes_mask, clothes, all_clothes_label, real_image, pose, grid, mask_fore): - # Encode Inputs - input_label, masked_label, all_clothes_label = self.encode_input( - label, clothes_mask, all_clothes_label) - arm1_mask = torch.FloatTensor( - (label.cpu().numpy() == 11).astype(np.float)).cuda() - arm2_mask = torch.FloatTensor( - (label.cpu().numpy() == 13).astype(np.float)).cuda() - pre_clothes_mask = torch.FloatTensor( - (pre_clothes_mask.detach().cpu().numpy() > 0.5).astype(np.float)).cuda() - clothes = clothes * pre_clothes_mask - - shape = pre_clothes_mask.shape - - G1_in = torch.cat([pre_clothes_mask, clothes, - all_clothes_label, pose, self.gen_noise(shape)], dim=1) - arm_label = self.G1.refine(G1_in) - - arm_label = self.sigmoid(arm_label) - CE_loss = self.cross_entropy2d( - arm_label, (label * (1 - clothes_mask)).transpose(0, 1)[0].long()) * 10 - - armlabel_map = generate_discrete_label(arm_label.detach(), 14, False) - dis_label = generate_discrete_label(arm_label.detach(), 14) - G2_in = torch.cat([pre_clothes_mask, clothes, - dis_label, pose, self.gen_noise(shape)], 1) - fake_cl = self.G2.refine(G2_in) - fake_cl = self.sigmoid(fake_cl) - CE_loss += self.BCE(fake_cl, clothes_mask) * 10 - - fake_cl_dis = torch.FloatTensor( - (fake_cl.detach().cpu().numpy() > 0.5).astype(np.float)).cuda() - fake_cl_dis = morpho(fake_cl_dis, 1, True) - - new_arm1_mask = torch.FloatTensor( - (armlabel_map.cpu().numpy() == 11).astype(np.float)).cuda() - new_arm2_mask = torch.FloatTensor( - (armlabel_map.cpu().numpy() == 13).astype(np.float)).cuda() - fake_cl_dis = fake_cl_dis*(1 - new_arm1_mask)*(1-new_arm2_mask) - fake_cl_dis *= mask_fore - - arm1_occ = clothes_mask * new_arm1_mask - arm2_occ = clothes_mask * new_arm2_mask - bigger_arm1_occ = morpho(arm1_occ, 10) - bigger_arm2_occ = morpho(arm2_occ, 10) - arm1_full = arm1_occ + (1 - clothes_mask) * arm1_mask - arm2_full = arm2_occ + (1 - clothes_mask) * arm2_mask - armlabel_map *= (1 - new_arm1_mask) - armlabel_map *= (1 - new_arm2_mask) - armlabel_map = armlabel_map * (1 - arm1_full) + arm1_full * 11 - armlabel_map = armlabel_map * (1 - arm2_full) + arm2_full * 13 - armlabel_map *= (1-fake_cl_dis) - dis_label = encode(armlabel_map, armlabel_map.shape) - - fake_c, warped, warped_mask, warped_grid = self.Unet( - clothes, fake_cl_dis, pre_clothes_mask, grid) - mask = fake_c[:, 3, :, :] - mask = self.sigmoid(mask)*fake_cl_dis - fake_c = self.tanh(fake_c[:, 0:3, :, :]) - fake_c = fake_c*(1-mask)+mask*warped - skin_color = self.ger_average_color((arm1_mask + arm2_mask - arm2_mask * arm1_mask), - (arm1_mask + arm2_mask - arm2_mask * arm1_mask) * real_image) - occlude = (1 - bigger_arm1_occ * (arm2_mask + arm1_mask+clothes_mask)) * \ - (1 - bigger_arm2_occ * (arm2_mask + arm1_mask+clothes_mask)) - img_hole_hand = img_fore * \ - (1 - clothes_mask) * occlude * (1 - fake_cl_dis) - - G_in = torch.cat([img_hole_hand, dis_label, fake_c, - skin_color, self.gen_noise(shape)], 1) - fake_image = self.G.refine(G_in.detach()) - fake_image = self.tanh(fake_image) - - loss_D_fake = 0 - loss_D_real = 0 - 
loss_G_GAN = 0 - loss_G_VGG = 0 - - L1_loss = 0 - - style_loss = L1_loss - - return [self.loss_filter(loss_G_GAN, 0, loss_G_VGG, loss_D_real, loss_D_fake), fake_image, - clothes, arm_label, L1_loss, style_loss, fake_cl, CE_loss, real_image, warped_grid] - - def inference(self, label, pre_clothes_mask, img_fore, clothes_mask, clothes, all_clothes_label, real_image, pose, grid, mask_fore): - # Encode Inputs - input_label, masked_label, all_clothes_label = self.encode_input( - label, clothes_mask, all_clothes_label) - arm1_mask = torch.FloatTensor( - (label.cpu().numpy() == 11).astype(np.float)).cuda() - arm2_mask = torch.FloatTensor( - (label.cpu().numpy() == 13).astype(np.float)).cuda() - pre_clothes_mask = torch.FloatTensor( - (pre_clothes_mask.detach().cpu().numpy() > 0.5).astype(np.float)).cuda() - clothes = clothes * pre_clothes_mask - - shape = pre_clothes_mask.shape - - G1_in = torch.cat([pre_clothes_mask, clothes, - all_clothes_label, pose, self.gen_noise(shape)], dim=1) - arm_label = self.G1.refine(G1_in) - - arm_label = self.sigmoid(arm_label) - - armlabel_map = generate_discrete_label(arm_label.detach(), 14, False) - dis_label = generate_discrete_label(arm_label.detach(), 14) - G2_in = torch.cat([pre_clothes_mask, clothes, - dis_label, pose, self.gen_noise(shape)], 1) - fake_cl = self.G2.refine(G2_in) - fake_cl = self.sigmoid(fake_cl) - - fake_cl_dis = torch.FloatTensor( - (fake_cl.detach().cpu().numpy() > 0.5).astype(np.float)).cuda() - fake_cl_dis = morpho(fake_cl_dis, 1, True) - - new_arm1_mask = torch.FloatTensor( - (armlabel_map.cpu().numpy() == 11).astype(np.float)).cuda() - new_arm2_mask = torch.FloatTensor( - (armlabel_map.cpu().numpy() == 13).astype(np.float)).cuda() - fake_cl_dis = fake_cl_dis*(1 - new_arm1_mask)*(1-new_arm2_mask) - fake_cl_dis *= mask_fore - - arm1_occ = clothes_mask * new_arm1_mask - arm2_occ = clothes_mask * new_arm2_mask - bigger_arm1_occ = morpho(arm1_occ, 10) - bigger_arm2_occ = morpho(arm2_occ, 10) - arm1_full = arm1_occ + (1 - clothes_mask) * arm1_mask - arm2_full = arm2_occ + (1 - clothes_mask) * arm2_mask - armlabel_map *= (1 - new_arm1_mask) - armlabel_map *= (1 - new_arm2_mask) - armlabel_map = armlabel_map * (1 - arm1_full) + arm1_full * 11 - armlabel_map = armlabel_map * (1 - arm2_full) + arm2_full * 13 - armlabel_map *= (1-fake_cl_dis) - dis_label = encode(armlabel_map, armlabel_map.shape) - - fake_c, warped, warped_mask, warped_grid = self.Unet( - clothes, fake_cl_dis, pre_clothes_mask, grid) - mask = fake_c[:, 3, :, :] - mask = self.sigmoid(mask)*fake_cl_dis - fake_c = self.tanh(fake_c[:, 0:3, :, :]) - fake_c = fake_c*(1-mask)+mask*warped - skin_color = self.ger_average_color((arm1_mask + arm2_mask - arm2_mask * arm1_mask), - (arm1_mask + arm2_mask - arm2_mask * arm1_mask) * real_image) - occlude = (1 - bigger_arm1_occ * (arm2_mask + arm1_mask+clothes_mask)) * \ - (1 - bigger_arm2_occ * (arm2_mask + arm1_mask+clothes_mask)) - img_hole_hand = img_fore * \ - (1 - clothes_mask) * occlude * (1 - fake_cl_dis) - - G_in = torch.cat([img_hole_hand, dis_label, fake_c, - skin_color, self.gen_noise(shape)], 1) - fake_image = self.G.refine(G_in.detach()) - fake_image = self.tanh(fake_image) - - return [fake_image, warped, fake_c] - - def save(self, which_epoch): - # self.save_network(self.Unet, 'U', which_epoch, self.gpu_ids) - # self.save_network(self.G, 'G', which_epoch, self.gpu_ids) - # self.save_network(self.G1, 'G1', which_epoch, self.gpu_ids) - # self.save_network(self.G2, 'G2', which_epoch, self.gpu_ids) - # # self.save_network(self.G3, 
'G3', which_epoch, self.gpu_ids) - # self.save_network(self.D, 'D', which_epoch, self.gpu_ids) - # self.save_network(self.D1, 'D1', which_epoch, self.gpu_ids) - # self.save_network(self.D2, 'D2', which_epoch, self.gpu_ids) - # self.save_network(self.D3, 'D3', which_epoch, self.gpu_ids) - - pass - - # self.save_network(self.netB, 'B', which_epoch, self.gpu_ids) - - def update_fixed_params(self): - # after fixing the global generator for a number of iterations, also start finetuning it - params = list(self.netG.parameters()) - if self.gen_features: - params += list(self.netE.parameters()) - self.optimizer_G = torch.optim.Adam( - params, lr=self.opt.lr, betas=(self.opt.beta1, 0.999)) - if self.opt.verbose: - print('------------ Now also finetuning global generator -----------') - - def update_learning_rate(self): - lrd = self.opt.lr / self.opt.niter_decay - lr = self.old_lr - lrd - for param_group in self.optimizer_D.param_groups: - param_group['lr'] = lr - for param_group in self.optimizer_G.param_groups: - param_group['lr'] = lr - if self.opt.verbose: - print('update learning rate: %f -> %f' % (self.old_lr, lr)) - self.old_lr = lr - - -class InferenceModel(Pix2PixHDModel): - def forward(self, label, pre_clothes_mask, img_fore, clothes_mask, clothes, all_clothes_label, real_image, pose, grid, mask_fore): - return self.inference(label, pre_clothes_mask, img_fore, clothes_mask, clothes, all_clothes_label, real_image, pose, grid, mask_fore) diff --git a/spaces/hekbobo/bingo/src/components/chat-header.tsx b/spaces/hekbobo/bingo/src/components/chat-header.tsx deleted file mode 100644 index c6664b8dee61179f844d45c5bd650518fc2cb4c2..0000000000000000000000000000000000000000 --- a/spaces/hekbobo/bingo/src/components/chat-header.tsx +++ /dev/null @@ -1,12 +0,0 @@ -import LogoIcon from '@/assets/images/logo.svg' -import Image from 'next/image' - -export function ChatHeader() { - return ( -
        - -
        欢迎使用新必应
        -
        由 AI 支持的网页版 Copilot
        -
        - ) -} diff --git a/spaces/henryu/Clip-image2text/app.py b/spaces/henryu/Clip-image2text/app.py deleted file mode 100644 index f864caf7e00c0dcb9455b8a4a92c22fb838242f5..0000000000000000000000000000000000000000 --- a/spaces/henryu/Clip-image2text/app.py +++ /dev/null @@ -1,99 +0,0 @@ -#!/usr/bin/env python3 -import argparse -import torch -from clip_interrogator import Config, Interrogator, list_caption_models, list_clip_models - -try: - import gradio as gr -except ImportError: - print("Gradio is not installed, please install it with 'pip install gradio'") - exit(1) - -parser = argparse.ArgumentParser() -parser.add_argument("--lowvram", action='store_true', help="Optimize settings for low VRAM") -parser.add_argument('-s', '--share', action='store_true', help='Create a public link') -args = parser.parse_args() - -if not torch.cuda.is_available(): - print("CUDA is not available, using CPU. Warning: this will be very slow!") - -config = Config(cache_path="cache") -if args.lowvram: - config.apply_low_vram_defaults() -ci = Interrogator(config) - -def image_analysis(image, clip_model_name): - if clip_model_name != ci.config.clip_model_name: - ci.config.clip_model_name = clip_model_name - ci.load_clip_model() - - image = image.convert('RGB') - image_features = ci.image_to_features(image) - - top_mediums = ci.mediums.rank(image_features, 5) - top_artists = ci.artists.rank(image_features, 5) - top_movements = ci.movements.rank(image_features, 5) - top_trendings = ci.trendings.rank(image_features, 5) - top_flavors = ci.flavors.rank(image_features, 5) - - medium_ranks = {medium: sim for medium, sim in zip(top_mediums, ci.similarities(image_features, top_mediums))} - artist_ranks = {artist: sim for artist, sim in zip(top_artists, ci.similarities(image_features, top_artists))} - movement_ranks = {movement: sim for movement, sim in zip(top_movements, ci.similarities(image_features, top_movements))} - trending_ranks = {trending: sim for trending, sim in zip(top_trendings, ci.similarities(image_features, top_trendings))} - flavor_ranks = {flavor: sim for flavor, sim in zip(top_flavors, ci.similarities(image_features, top_flavors))} - - return medium_ranks, artist_ranks, movement_ranks, trending_ranks, flavor_ranks - -def image_to_prompt(image, mode, clip_model_name, blip_model_name): - if blip_model_name != ci.config.caption_model_name: - ci.config.caption_model_name = blip_model_name - ci.load_caption_model() - - if clip_model_name != ci.config.clip_model_name: - ci.config.clip_model_name = clip_model_name - ci.load_clip_model() - - image = image.convert('RGB') - if mode == 'best': - return ci.interrogate(image) - elif mode == 'classic': - return ci.interrogate_classic(image) - elif mode == 'fast': - return ci.interrogate_fast(image) - elif mode == 'negative': - return ci.interrogate_negative(image) - -def prompt_tab(): - with gr.Column(): - with gr.Row(): - image = gr.Image(type='pil', label="Image") - with gr.Column(): - mode = gr.Radio(['best', 'fast', 'classic', 'negative'], label='Mode', value='best') - clip_model = gr.Dropdown(list_clip_models(), value=ci.config.clip_model_name, label='CLIP Model') - blip_model = gr.Dropdown(list_caption_models(), value=ci.config.caption_model_name, label='Caption Model') - prompt = gr.Textbox(label="Prompt") - button = gr.Button("Generate prompt") - button.click(image_to_prompt, inputs=[image, mode, clip_model, blip_model], outputs=prompt) - -def analyze_tab(): - with gr.Column(): - with gr.Row(): - image = gr.Image(type='pil', label="Image") - model = 
gr.Dropdown(list_clip_models(), value='ViT-L-14/openai', label='CLIP Model') - with gr.Row(): - medium = gr.Label(label="Medium", num_top_classes=5) - artist = gr.Label(label="Artist", num_top_classes=5) - movement = gr.Label(label="Movement", num_top_classes=5) - trending = gr.Label(label="Trending", num_top_classes=5) - flavor = gr.Label(label="Flavor", num_top_classes=5) - button = gr.Button("Analyze") - button.click(image_analysis, inputs=[image, model], outputs=[medium, artist, movement, trending, flavor]) - -with gr.Blocks() as ui: - gr.Markdown("#
        CLIP Image2text
        ") - with gr.Tab("Prompt"): - prompt_tab() - with gr.Tab("Analyze"): - analyze_tab() - -ui.launch(show_api=False, debug=True, share=args.share) \ No newline at end of file diff --git a/spaces/hhhhardman/VITS/ONNXVITS_transforms.py b/spaces/hhhhardman/VITS/ONNXVITS_transforms.py deleted file mode 100644 index 69b6d1c4b5724a3ef61f8bc3d64fc45c5e51e270..0000000000000000000000000000000000000000 --- a/spaces/hhhhardman/VITS/ONNXVITS_transforms.py +++ /dev/null @@ -1,196 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - #unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - unnormalized_derivatives_ = torch.zeros((1, 1, unnormalized_derivatives.size(2), unnormalized_derivatives.size(3)+2)) - unnormalized_derivatives_[...,1:-1] = unnormalized_derivatives - unnormalized_derivatives = unnormalized_derivatives_ - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - 
unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * 
theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/hjzhp/cgpt-online/README.zh-CN.md b/spaces/hjzhp/cgpt-online/README.zh-CN.md deleted file mode 100644 index e716134c5b946f44d9a6128fe32ac0cc7125db8e..0000000000000000000000000000000000000000 --- a/spaces/hjzhp/cgpt-online/README.zh-CN.md +++ /dev/null @@ -1,117 +0,0 @@ -# ChatGPT-API Demo - -[English](./README.md) | 简体中文 - -一个基于 [OpenAI GPT-3.5 Turbo API](https://platform.openai.com/docs/guides/chat) 的 demo。 - -**🍿 在线预览**: https://chatgpt.ddiu.me - -> ⚠️ 注意: 我们的API密钥限制已用尽。所以演示站点现在不可用。 - -![chat-logo](https://cdn.staticaly.com/gh/yzh990918/static@master/chat-logo.webp) - -## 本地运行 - -### 前置环境 - -1. **Node**: 检查您的开发环境和部署环境是否都使用 `Node v18` 或更高版本。你可以使用 [nvm](https://github.com/nvm-sh/nvm) 管理本地多个 `node` 版本。 - ```bash - node -v - ``` -2. **PNPM**: 我们推荐使用 [pnpm](https://pnpm.io/) 来管理依赖,如果你从来没有安装过 pnpm,可以使用下面的命令安装: - ```bash - npm i -g pnpm - ``` -3. **OPENAI_API_KEY**: 在运行此应用程序之前,您需要从 OpenAI 获取 API 密钥。您可以在 [https://beta.openai.com/signup](https://beta.openai.com/signup) 注册 API 密钥。 - -### 起步运行 - -1. 安装依赖 - ```bash - pnpm install - ``` -2. 复制 `.env.example` 文件,重命名为 `.env`,并添加你的 [OpenAI API key](https://platform.openai.com/account/api-keys) 到 `.env` 文件中 - ```bash - OPENAI_API_KEY=sk-xxx... - ``` -3. 运行应用,本地项目运行在 `http://localhost:3000/` - ```bash - pnpm run dev - ``` - -## 部署 - -### 部署在 Vercel - -[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Fddiu8081%2Fchatgpt-demo&env=OPENAI_API_KEY&envDescription=OpenAI%20API%20Key&envLink=https%3A%2F%2Fplatform.openai.com%2Faccount%2Fapi-keys) - - - -> ###### 🔒 需要站点密码? -> -> 携带[`SITE_PASSWORD`](#environment-variables)进行部署 -> -> Deploy with Vercel - -![image](https://cdn.staticaly.com/gh/yzh990918/static@master/20230310/image.4wzfb79qt7k0.webp) - -### 部署在 Netlify - -[![Deploy with Netlify](https://www.netlify.com/img/deploy/button.svg)](https://app.netlify.com/start/deploy?repository=https://github.com/ddiu8081/chatgpt-demo#OPENAI_API_KEY=&HTTPS_PROXY=&OPENAI_API_BASE_URL=&HEAD_SCRIPTS=&SECRET_KEY=&OPENAI_API_MODEL=&SITE_PASSWORD=) - -**分步部署教程:** - -1. [Fork](https://github.com/ddiu8081/chatgpt-demo/fork) 此项目,前往 [https://app.netlify.com/start](https://app.netlify.com/start) 新建站点,选择你 `fork` 完成的项目,将其与 `GitHub` 帐户连接。 - -![image](https://cdn.staticaly.com/gh/yzh990918/static@master/20230310/image.3nlt4hgzb16o.webp) - -![image](https://cdn.staticaly.com/gh/yzh990918/static@master/20230310/image.5fhfouap270g.webp) - - -2. 选择要部署的分支,选择 `main` 分支, 在项目设置中配置环境变量,环境变量配置参考下文。 - -![image](https://cdn.staticaly.com/gh/yzh990918/static@master/20230310/image.6dvtfmoijb7k.webp) - -3. 选择默认的构建命令和输出目录,单击 `Deploy Site` 按钮开始部署站点。 - -![image](https://cdn.staticaly.com/gh/yzh990918/static@master/20230310/image.e0n7c0zaen4.webp) - -### 部署在更多的服务器 - -请参考官方部署文档:https://docs.astro.build/en/guides/deploy - -## 环境变量 - -配置本地或者部署的环境变量 - -| 名称 | 描述 | 默认 | -| --- | --- | --- | -| `OPENAI_API_KEY` | 你的 OpenAI API Key | `null` | -| `HTTPS_PROXY` | 为 OpenAI API 提供代理. e.g. `http://127.0.0.1:7890` | `null` | -| `OPENAI_API_BASE_URL` | 请求 OpenAI API 的自定义 Base URL. | `https://api.openai.com` | -| `HEAD_SCRIPTS` | 在页面的 `` 之前注入分析或其他脚本 | `null` | -| `SECRET_KEY` | 项目的秘密字符串。用于生成 API 调用的签名 | `null` | -| `SITE_PASSWORD` | 为网站设置密码。如果未设置,则该网站将是公开的 | `null` | -| `OPENAI_API_MODEL` | 使用的 OpenAI 模型. 
[模型列表](https://platform.openai.com/docs/api-reference/models/list) | `gpt-3.5-turbo` | - -## 常见问题 - -Q: TypeError: fetch failed (can't connect to OpenAI Api) - -A: 配置环境变量 `HTTPS_PROXY`,参考: https://github.com/ddiu8081/chatgpt-demo/issues/34 - -Q: throw new TypeError(${context} is not a ReadableStream.) - -A: Node 版本需要在 `v18` 或者更高,参考: https://github.com/ddiu8081/chatgpt-demo/issues/65 - -## 参与贡献 - -这个项目的存在要感谢所有做出贡献的人。 - -感谢我们所有的支持者!🙏 - -[![img](https://contributors.nn.ci/api?repo=ddiu8081/chatgpt-demo)](https://github.com/ddiu8081/chatgpt-demo/graphs/contributors) - -## License - -MIT © [ddiu8081](https://github.com/ddiu8081/chatgpt-demo/blob/main/LICENSE) diff --git a/spaces/hkunlp/Binder/utils/mmqa/qimc.py b/spaces/hkunlp/Binder/utils/mmqa/qimc.py deleted file mode 100644 index 3512d8035809a0d7fb072c2392f257420766b8bf..0000000000000000000000000000000000000000 --- a/spaces/hkunlp/Binder/utils/mmqa/qimc.py +++ /dev/null @@ -1,41 +0,0 @@ -import json -import os -import pandas as pd - -ROOT_DIR = os.path.join(os.path.dirname(__file__), "../../") - - -class Question_Image_Match_Classifier(object): - """result are from a T5-3b model finetuned on train set of MMQA.""" - - def __init__(self): - self.whether_retrieve_image = None - self.qi_pairs_should_retrieve = None - self.load_retrieve_info() - self.caption_info = None - with open(os.path.join(ROOT_DIR, "utils", "mmqa", "mmqa_captions.json"), "r") as f: - self.caption_info = json.load(f) - - def load_retrieve_info(self): - df_qc = pd.read_csv(os.path.join(ROOT_DIR, "utils", "mmqa", "qc_mmqa_dev.csv")) - whether_retrieve_image = {} - for index, row in df_qc.iterrows(): - _id = row['id'] - prediction = row['prediction'] - whether_retrieve_image[_id] = True if prediction == "['yes']" else False - self.whether_retrieve_image = whether_retrieve_image - - df_qimc = pd.read_csv(os.path.join(ROOT_DIR, "utils", "mmqa", "qimc_mmqa_dev.csv")) - qi_pairs_should_retrieve = {} - for index, row in df_qimc.iterrows(): - qa = row['question'].lower() - prediction = row['prediction'] - qi_pairs_should_retrieve[qa] = True if prediction == "['yes']" else False - self.qi_pairs_should_retrieve = qi_pairs_should_retrieve - - def judge_match(self, _id, question, pic): - # fixme: hardcode since it is done in pipeline, change that in the future - if not self.whether_retrieve_image[_id]: - return False - image_caption = self.caption_info[os.path.split(pic)[-1].split(".")[0]] - return self.qi_pairs_should_retrieve['qa: {} \n{}'.format(question.lower(), image_caption.lower())] \ No newline at end of file diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/miscellaneous/nnUNetTrainerV2_fullEvals.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/miscellaneous/nnUNetTrainerV2_fullEvals.py deleted file mode 100644 index 5c68bfab22f7047b04a6b8af8fbdd63d15fa46cc..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/miscellaneous/nnUNetTrainerV2_fullEvals.py +++ /dev/null @@ -1,195 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from multiprocessing.pool import Pool -from time import time - -import numpy as np -import torch -from nnunet.configuration import default_num_threads -from nnunet.inference.segmentation_export import save_segmentation_nifti_from_softmax -from nnunet.training.network_training.nnUNetTrainerV2 import nnUNetTrainerV2 -from batchgenerators.utilities.file_and_folder_operations import * -from nnunet.evaluation.region_based_evaluation import evaluate_regions, get_brats_regions - - -class nnUNetTrainerV2_fullEvals(nnUNetTrainerV2): - """ - this trainer only works for brats and nothing else - """ - - def __init__(self, plans_file, fold, output_folder=None, dataset_directory=None, batch_dice=True, stage=None, - unpack_data=True, deterministic=True, fp16=False): - super().__init__(plans_file, fold, output_folder, dataset_directory, batch_dice, stage, unpack_data, - deterministic, fp16) - self.validate_every = 1 - self.evaluation_regions = get_brats_regions() - self.num_val_batches_per_epoch = 0 # we dont need this because this does not evaluate on full images - - def finish_online_evaluation(self): - pass - - def validate(self, do_mirroring: bool = True, use_sliding_window: bool = True, - step_size: float = 0.5, save_softmax: bool = True, use_gaussian: bool = True, overwrite: bool = True, - validation_folder_name: str = 'validation_raw', debug: bool = False, all_in_gpu: bool = False, - force_separate_z: bool = None, interpolation_order: int = 3, interpolation_order_z=0): - """ - disable nnunet postprocessing. this would just waste computation time and does not benefit brats - - !!!We run this with use_sliding_window=False per default (see on_epoch_end). This triggers fully convolutional - inference. THIS ONLY MAKES SENSE WHEN TRAINING ON FULL IMAGES! Make sure use_sliding_window=True when running - with default patch size (128x128x128)!!! - - per default this does not use test time data augmentation (mirroring). The reference implementation, however, - does. 
I disabled it here because this eats up a lot of computation time - - """ - validation_start = time() - - current_mode = self.network.training - self.network.eval() - - assert self.was_initialized, "must initialize, ideally with checkpoint (or train first)" - if self.dataset_val is None: - self.load_dataset() - self.do_split() - - # predictions as they come from the network go here - output_folder = join(self.output_folder, validation_folder_name) - maybe_mkdir_p(output_folder) - - # this is for debug purposes - my_input_args = {'do_mirroring': do_mirroring, - 'use_sliding_window': use_sliding_window, - 'step_size': step_size, - 'save_softmax': save_softmax, - 'use_gaussian': use_gaussian, - 'overwrite': overwrite, - 'validation_folder_name': validation_folder_name, - 'debug': debug, - 'all_in_gpu': all_in_gpu, - 'force_separate_z': force_separate_z, - 'interpolation_order': interpolation_order, - 'interpolation_order_z': interpolation_order_z, - } - save_json(my_input_args, join(output_folder, "validation_args.json")) - - if do_mirroring: - if not self.data_aug_params['do_mirror']: - raise RuntimeError("We did not train with mirroring so you cannot do inference with mirroring enabled") - mirror_axes = self.data_aug_params['mirror_axes'] - else: - mirror_axes = () - - export_pool = Pool(default_num_threads) - results = [] - - for k in self.dataset_val.keys(): - properties = load_pickle(self.dataset[k]['properties_file']) - fname = properties['list_of_data_files'][0].split("/")[-1][:-12] - if overwrite or (not isfile(join(output_folder, fname + ".nii.gz"))) or \ - (save_softmax and not isfile(join(output_folder, fname + ".npz"))): - data = np.load(self.dataset[k]['data_file'])['data'] - - #print(k, data.shape) - - softmax_pred = self.predict_preprocessed_data_return_seg_and_softmax(data[:-1], - do_mirroring=do_mirroring, - mirror_axes=mirror_axes, - use_sliding_window=use_sliding_window, - step_size=step_size, - use_gaussian=use_gaussian, - all_in_gpu=all_in_gpu, - verbose=False, - mixed_precision=self.fp16)[1] - - # this does not do anything in brats -> remove this line - # softmax_pred = softmax_pred.transpose([0] + [i + 1 for i in self.transpose_backward]) - - if save_softmax: - softmax_fname = join(output_folder, fname + ".npz") - else: - softmax_fname = None - - results.append(export_pool.starmap_async(save_segmentation_nifti_from_softmax, - ((softmax_pred, join(output_folder, fname + ".nii.gz"), - properties, interpolation_order, None, None, None, - softmax_fname, None, force_separate_z, - interpolation_order_z, False), - ) - ) - ) - - _ = [i.get() for i in results] - self.print_to_log_file("finished prediction") - - # evaluate raw predictions - self.print_to_log_file("evaluation of raw predictions") - - # this writes a csv file into output_folder - evaluate_regions(output_folder, self.gt_niftis_folder, self.evaluation_regions) - csv_file = np.loadtxt(join(output_folder, 'summary.csv'), skiprows=1, dtype=str, delimiter=',')[:, 1:] - - # these are the values that are compute with np.nanmean aggregation - whole, core, enhancing = csv_file[-4, :].astype(float) - - # do some cleanup - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - self.network.train(current_mode) - validation_end = time() - self.print_to_log_file('Running the validation took %f seconds' % (validation_end - validation_start)) - self.print_to_log_file('(the time needed for validation is included in the total epoch time!)') - - return whole, core, enhancing - - def on_epoch_end(self): - return_value = True - - 
# on epoch end is called before the epoch counter is incremented, so we need to do that here to get the correct epoch number - if (self.epoch + 1) % self.validate_every == 0: - whole, core, enhancing = self.validate(do_mirroring=False, use_sliding_window=True, - step_size=0.5, - save_softmax=False, - use_gaussian=True, overwrite=True, - validation_folder_name='validation_after_ep_%04.0d' % self.epoch, - debug=False, all_in_gpu=True) - - here = np.mean((whole, core, enhancing)) - - self.print_to_log_file("After epoch %d: whole %0.4f core %0.4f enhancing: %0.4f" % - (self.epoch, whole, core, enhancing)) - self.print_to_log_file("Mean: %0.4f" % here) - - # now we need to figure out if we are done - fully_trained_nnunet = (0.911, 0.8739, 0.7848) - mean_dice = np.mean(fully_trained_nnunet) - target = 0.97 * mean_dice - - self.all_val_eval_metrics.append(here) - self.print_to_log_file("Target mean: %0.4f" % target) - - if here >= target: - self.print_to_log_file("I am done!") - self.save_checkpoint(join(self.output_folder, "model_final_checkpoint.model")) - return_value = False # this triggers early stopping - - ret_old = super().on_epoch_end() - # if we do not achieve the target accuracy in 1000 epochs then we need to stop the training. This is not built - # to run longer than 1000 epochs - if not ret_old: - return_value = ret_old - - return return_value diff --git a/spaces/huggingchat/chat-ui/src/routes/admin/export/+server.ts b/spaces/huggingchat/chat-ui/src/routes/admin/export/+server.ts deleted file mode 100644 index aed8127ff9e735387b70bf7991338449481de4d8..0000000000000000000000000000000000000000 --- a/spaces/huggingchat/chat-ui/src/routes/admin/export/+server.ts +++ /dev/null @@ -1,166 +0,0 @@ -import { - PARQUET_EXPORT_DATASET, - PARQUET_EXPORT_HF_TOKEN, - PARQUET_EXPORT_SECRET, -} from "$env/static/private"; -import { collections } from "$lib/server/database"; -import type { Message } from "$lib/types/Message"; -import { error } from "@sveltejs/kit"; -import { pathToFileURL } from "node:url"; -import { unlink } from "node:fs/promises"; -import { uploadFile } from "@huggingface/hub"; -import parquet from "parquetjs"; -import { z } from "zod"; - -// Triger like this: -// curl -X POST "http://localhost:5173/chat/admin/export" -H "Authorization: Bearer " -H "Content-Type: application/json" -d '{"model": "OpenAssistant/oasst-sft-6-llama-30b-xor"}' - -export async function POST({ request }) { - if (!PARQUET_EXPORT_SECRET || !PARQUET_EXPORT_DATASET || !PARQUET_EXPORT_HF_TOKEN) { - throw error(500, "Parquet export is not configured."); - } - - if (request.headers.get("Authorization") !== `Bearer ${PARQUET_EXPORT_SECRET}`) { - throw error(403); - } - - const { model } = z - .object({ - model: z.string(), - }) - .parse(await request.json()); - - const schema = new parquet.ParquetSchema({ - title: { type: "UTF8" }, - created_at: { type: "TIMESTAMP_MILLIS" }, - updated_at: { type: "TIMESTAMP_MILLIS" }, - messages: { - repeated: true, - fields: { - from: { type: "UTF8" }, - content: { type: "UTF8" }, - score: { type: "INT_8", optional: true }, - }, - }, - }); - - const fileName = `/tmp/conversations-${new Date().toJSON().slice(0, 10)}-${Date.now()}.parquet`; - - const writer = await parquet.ParquetWriter.openFile(schema, fileName); - - let count = 0; - console.log("Exporting conversations for model", model); - - for await (const conversation of collections.settings.aggregate<{ - title: string; - created_at: Date; - updated_at: Date; - messages: Message[]; - }>([ - { - $match: { - 
shareConversationsWithModelAuthors: true, - sessionId: { $exists: true }, - userId: { $exists: false }, - }, - }, - { - $lookup: { - from: "conversations", - localField: "sessionId", - foreignField: "sessionId", - as: "conversations", - pipeline: [{ $match: { model, userId: { $exists: false } } }], - }, - }, - { $unwind: "$conversations" }, - { - $project: { - title: "$conversations.title", - created_at: "$conversations.createdAt", - updated_at: "$conversations.updatedAt", - messages: "$conversations.messages", - }, - }, - ])) { - await writer.appendRow({ - title: conversation.title, - created_at: conversation.created_at, - updated_at: conversation.updated_at, - messages: conversation.messages.map((message: Message) => ({ - from: message.from, - content: message.content, - ...(message.score ? { score: message.score } : undefined), - })), - }); - ++count; - - if (count % 1_000 === 0) { - console.log("Exported", count, "conversations"); - } - } - - console.log("exporting convos with userId"); - - for await (const conversation of collections.settings.aggregate<{ - title: string; - created_at: Date; - updated_at: Date; - messages: Message[]; - }>([ - { $match: { shareConversationsWithModelAuthors: true, userId: { $exists: true } } }, - { - $lookup: { - from: "conversations", - localField: "userId", - foreignField: "userId", - as: "conversations", - pipeline: [{ $match: { model } }], - }, - }, - { $unwind: "$conversations" }, - { - $project: { - title: "$conversations.title", - created_at: "$conversations.createdAt", - updated_at: "$conversations.updatedAt", - messages: "$conversations.messages", - }, - }, - ])) { - await writer.appendRow({ - title: conversation.title, - created_at: conversation.created_at, - updated_at: conversation.updated_at, - messages: conversation.messages.map((message: Message) => ({ - from: message.from, - content: message.content, - ...(message.score ? 
{ score: message.score } : undefined), - })), - }); - ++count; - - if (count % 1_000 === 0) { - console.log("Exported", count, "conversations"); - } - } - - await writer.close(); - - console.log("Uploading", fileName, "to Hugging Face Hub"); - - await uploadFile({ - file: pathToFileURL(fileName) as URL, - credentials: { accessToken: PARQUET_EXPORT_HF_TOKEN }, - repo: { - type: "dataset", - name: PARQUET_EXPORT_DATASET, - }, - }); - - console.log("Upload done"); - - await unlink(fileName); - - return new Response(); -} diff --git a/spaces/hussain-shk/IndiSent/scripts/preprocess_translate.py b/spaces/hussain-shk/IndiSent/scripts/preprocess_translate.py deleted file mode 100644 index 8fbe3c275f7cb655d95125256260190d51b35ca7..0000000000000000000000000000000000000000 --- a/spaces/hussain-shk/IndiSent/scripts/preprocess_translate.py +++ /dev/null @@ -1,172 +0,0 @@ -INDIC_NLP_LIB_HOME = "indic_nlp_library" -INDIC_NLP_RESOURCES = "indic_nlp_resources" -import sys - -sys.path.append(r"{}".format(INDIC_NLP_LIB_HOME)) -from indicnlp import common - -common.set_resources_path(INDIC_NLP_RESOURCES) -from indicnlp import loader - -loader.load() -from sacremoses import MosesPunctNormalizer -from sacremoses import MosesTokenizer -from sacremoses import MosesDetokenizer -from collections import defaultdict - -from tqdm import tqdm -from joblib import Parallel, delayed - -from indicnlp.tokenize import indic_tokenize -from indicnlp.tokenize import indic_detokenize -from indicnlp.normalize import indic_normalize -from indicnlp.transliterate import unicode_transliterate - - -en_tok = MosesTokenizer(lang="en") -en_normalizer = MosesPunctNormalizer() - - -def preprocess_line(line, normalizer, lang, transliterate=False): - if lang == "en": - return " ".join( - en_tok.tokenize(en_normalizer.normalize(line.strip()), escape=False) - ) - elif transliterate: - # line = indic_detokenize.trivial_detokenize(line.strip(), lang) - return unicode_transliterate.UnicodeIndicTransliterator.transliterate( - " ".join( - indic_tokenize.trivial_tokenize( - normalizer.normalize(line.strip()), lang - ) - ), - lang, - "hi", - ).replace(" ् ", "्") - else: - # we only need to transliterate for joint training - return " ".join( - indic_tokenize.trivial_tokenize(normalizer.normalize(line.strip()), lang) - ) - - -def preprocess(infname, outfname, lang, transliterate=False): - """ - Normalize, tokenize and script convert(for Indic) - return number of sentences input file - - """ - - n = 0 - num_lines = sum(1 for line in open(infname, "r")) - if lang == "en": - with open(infname, "r", encoding="utf-8") as infile, open( - outfname, "w", encoding="utf-8" - ) as outfile: - - out_lines = Parallel(n_jobs=-1, backend="multiprocessing")( - delayed(preprocess_line)(line, None, lang) - for line in tqdm(infile, total=num_lines) - ) - - for line in out_lines: - outfile.write(line + "\n") - n += 1 - - else: - normfactory = indic_normalize.IndicNormalizerFactory() - normalizer = normfactory.get_normalizer(lang) - # reading - with open(infname, "r", encoding="utf-8") as infile, open( - outfname, "w", encoding="utf-8" - ) as outfile: - - out_lines = Parallel(n_jobs=-1, backend="multiprocessing")( - delayed(preprocess_line)(line, normalizer, lang, transliterate) - for line in tqdm(infile, total=num_lines) - ) - - for line in out_lines: - outfile.write(line + "\n") - n += 1 - return n - - -def old_preprocess(infname, outfname, lang): - """ - Preparing each corpus file: - - Normalization - - Tokenization - - Script coversion to Devanagari for Indic scripts - 
""" - n = 0 - num_lines = sum(1 for line in open(infname, "r")) - # reading - with open(infname, "r", encoding="utf-8") as infile, open( - outfname, "w", encoding="utf-8" - ) as outfile: - - if lang == "en": - en_tok = MosesTokenizer(lang="en") - en_normalizer = MosesPunctNormalizer() - for line in tqdm(infile, total=num_lines): - outline = " ".join( - en_tok.tokenize(en_normalizer.normalize(line.strip()), escape=False) - ) - outfile.write(outline + "\n") - n += 1 - - else: - normfactory = indic_normalize.IndicNormalizerFactory() - normalizer = normfactory.get_normalizer(lang) - for line in tqdm(infile, total=num_lines): - outline = ( - unicode_transliterate.UnicodeIndicTransliterator.transliterate( - " ".join( - indic_tokenize.trivial_tokenize( - normalizer.normalize(line.strip()), lang - ) - ), - lang, - "hi", - ).replace(" ् ", "्") - ) - - outfile.write(outline + "\n") - n += 1 - return n - - -if __name__ == "__main__": - - # INDIC_NLP_LIB_HOME = "indic_nlp_library" - # INDIC_NLP_RESOURCES = "indic_nlp_resources" - # sys.path.append(r'{}'.format(INDIC_NLP_LIB_HOME)) - # common.set_resources_path(INDIC_NLP_RESOURCES) - - # data_dir = '../joint_training/v1' - # new_dir = data_dir + '.norm' - # for path, subdirs, files in os.walk(data_dir): - # for name in files: - # infile = os.path.join(path, name) - # lang = infile.split('.')[-1] - # outfile = os.path.join(path.replace(data_dir, new_dir), name) - # preprocess(infile, outfile, lang) - # loader.load() - - infname = sys.argv[1] - outfname = sys.argv[2] - lang = sys.argv[3] - - if len(sys.argv) == 4: - transliterate = False - elif len(sys.argv) == 5: - transliterate = sys.argv[4] - if transliterate.lower() == "true": - transliterate = True - else: - transliterate = False - else: - print(f"Invalid arguments: {sys.argv}") - exit() - print(preprocess(infname, outfname, lang, transliterate)) diff --git a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf42m_pfc02_16gpus_r50_bs8k.py b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf42m_pfc02_16gpus_r50_bs8k.py deleted file mode 100644 index c02bdf3afe8370086cf64fd112244b00cee35a6f..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf42m_pfc02_16gpus_r50_bs8k.py +++ /dev/null @@ -1,27 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.margin_list = (1.0, 0.0, 0.4) -config.network = "r50" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 0.2 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 512 -config.lr = 0.6 -config.verbose = 10000 -config.dali = False - -config.rec = "/train_tmp/WebFace42M" -config.num_classes = 2059906 -config.num_image = 42474557 -config.num_epoch = 20 -config.warmup_epoch = 4 -config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf42m_pfc03_40epoch_64gpu_vit_t.py b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf42m_pfc03_40epoch_64gpu_vit_t.py deleted file mode 100644 index 8516755b656b21536da177402ef6066e3e1039dd..0000000000000000000000000000000000000000 --- 
a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf42m_pfc03_40epoch_64gpu_vit_t.py +++ /dev/null @@ -1,27 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.margin_list = (1.0, 0.0, 0.4) -config.network = "vit_t_dp005_mask0" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 0.3 -config.fp16 = True -config.weight_decay = 0.1 -config.batch_size = 384 -config.optimizer = "adamw" -config.lr = 0.001 -config.verbose = 2000 -config.dali = False - -config.rec = "/train_tmp/WebFace42M" -config.num_classes = 2059906 -config.num_image = 42474557 -config.num_epoch = 40 -config.warmup_epoch = config.num_epoch // 10 -config.val_targets = [] diff --git a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf42m_pfc02_8gpus_r50_bs4k.py b/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf42m_pfc02_8gpus_r50_bs4k.py deleted file mode 100644 index b9f627fa94046d22ab0f0f12a8e339dc2cedfd81..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf42m_pfc02_8gpus_r50_bs4k.py +++ /dev/null @@ -1,27 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.margin_list = (1.0, 0.0, 0.4) -config.network = "r50" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 0.2 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 512 -config.lr = 0.4 -config.verbose = 10000 -config.dali = False - -config.rec = "/train_tmp/WebFace42M" -config.num_classes = 2059906 -config.num_image = 42474557 -config.num_epoch = 20 -config.warmup_epoch = 2 -config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/hzy123/bingo/src/components/chat-attachments.tsx b/spaces/hzy123/bingo/src/components/chat-attachments.tsx deleted file mode 100644 index ef43d4e262935d263b6099138c56f7daade5299d..0000000000000000000000000000000000000000 --- a/spaces/hzy123/bingo/src/components/chat-attachments.tsx +++ /dev/null @@ -1,37 +0,0 @@ -import Image from 'next/image' -import ClearIcon from '@/assets/images/clear.svg' -import RefreshIcon from '@/assets/images/refresh.svg' -import { FileItem } from '@/lib/bots/bing/types' -import { cn } from '@/lib/utils' -import { useBing } from '@/lib/hooks/use-bing' - -type ChatAttachmentsProps = Pick, 'attachmentList' | 'setAttachmentList' | 'uploadImage'> - -export function ChatAttachments({ attachmentList = [], setAttachmentList, uploadImage }: ChatAttachmentsProps) { - return attachmentList.length ? ( -
        - {attachmentList.map(file => ( -
        - {file.status === 'loading' && ( -
        -
        -
        ) - } - {file.status !== 'error' && ( -
        - -
        ) - } - {file.status === 'error' && ( -
        - uploadImage(file.url)} /> -
        - )} - -
        - ))} -
        - ) : null -} diff --git a/spaces/hzy123/bingo/src/state/index.ts b/spaces/hzy123/bingo/src/state/index.ts deleted file mode 100644 index 272106d619c69f124ea8dd0f10872ab4840152d7..0000000000000000000000000000000000000000 --- a/spaces/hzy123/bingo/src/state/index.ts +++ /dev/null @@ -1,118 +0,0 @@ -import { BingWebBot } from '@/lib/bots/bing' -import { BingConversationStyle, ChatMessageModel, BotId } from '@/lib/bots/bing/types' -import { nanoid } from '@/lib/utils' -import { atom } from 'jotai' -import { atomWithImmer } from 'jotai-immer' -import { atomWithStorage } from 'jotai/utils' -import { atomFamily } from 'jotai/utils' -import { atomWithHash, atomWithLocation } from 'jotai-location' - -const initialMessages: ChatMessageModel[] = [ - { author: 'system', text: 'conversation between user and robot', id: '1' }, - { author: 'user', text: '销量最高的 3 种宠物吸尘器有哪些优点和缺点? ', id: '2' }, - { - author: 'bot', text: ` -您好,这是必应。根据网上的搜索结果,我为您找到了以下三款销量最高的宠物吸尘器,以及它们的优点和缺点: - -- **戴森Dyson V10轻量版 Digital Slim Fluffy无线吸尘器**[^1^] [^3^]:这款吸尘器专为宠物家庭设计,拥有强大吸力和多功能吸头,能够有效清除宠物毛发和灰尘。 - - 优点:轻便易用,续航时间长,过滤系统先进,能够有效去除过敏原; - - 缺点:价格较高,噪音较大,需要定期清洁滤网。 -- **飞利浦FC6908/82(W7)无线吸尘器**[^2^] [^4^]:这是一款手持式干吸无线吸尘器,专门为宠物家庭设计。它具有75分钟的超高续航,适合大户型;除螨涡轮刷,可以深度清洁床单被罩,免受螨虫侵害;吸拖一体设计,280ml大水箱,还可以往水箱里加入清洁剂。 - - 优点:清洁性能强劲,操作方便,适用多种场景; - - 缺点:噪音较大,达到84dB(A);尘杯容量较小,只有0.6L;充电时长较长,需要5小时。 -- **必胜(Bissell) 宠物吸尘器**[^2^]:这是一款便携式手持真空吸尘器,专为清理宠物毛发而设计。它配备特殊的橡胶喷嘴和软毛刷头,可以轻松拾取宠物毛发和灰尘;几乎静音运行。 - - 优点:轻巧易用,重量只有1.36kg;电池可充电可更换; - - 缺点:续航时间不稳定,在15-30分钟之间浮动;吸力不够强劲,在地毯或沙发上可能效果不佳;没有过滤系统或过滤网。 - -希望这些信息对您有所帮助。如果您还有其他问题,请随时提问。😊 - `, id: '3' }, - { author: 'user', text: '今天的新闻', id: '4' }, - { - author: 'bot', text: `你好,这是Bing。我根据你的搜索查询“今天的新闻”为你找到了以下信息: - - # 中国新闻 - - 《求是》杂志发表习近平总书记重要文章《深化党和国家机构改革 推进国家治理体系和治理能力现代化》[^1^] - - 梦之队第5金! 
全红婵陈芋汐女双10米台夺冠[^1^] - - 中央气象台7月16日18时发布台风橙色预警[^1^] - - 贵州石阡:暑期旅游带动乡村振兴[^1^] - - 激活大科学装置的“人才红利”[^1^] - - 聚才用才留才 让希望的田野成为智慧的田野[^1^] - - 科学家发现粮食系统增效可有效支撑我国碳中和[^1^] - - 成都以赛为媒提升城市美誉度[^1^] - - 香江观澜:“抢人才”成效显著 香港竞争力依旧[^1^] - - 浙江建德:新安江上享清凉[^1^] - - 定向资金持续精准滴灌 高质量发展力量不断积累[^1^] - - 企业信心恢复加大中长期投资 为稳增长奠定基础[^1^] - - 武汉渡江节开赛 千名“泳士”竞渡长江[^1^] - - 金融更加重视对实体经济支持 降低融资成本提振市场信心[^1^] - - 大运来了丨成都迎大运 全民健身动起来[^1^] - - 藏在高校里的宝藏博物馆[^1^] - - 中国汽车工业用70年赢得三个“全球第一”[^1^] - - 辽宁阜新市委常委、副市长赖化平涉嫌严重违纪违法接受审查调查[^1^] - - # 国际新闻 - - 联合国秘书长:可持续发展面临风险 各国需加强行动[^2^] - - 国际航运业加快绿色转型[^2^] - - 美企反对收紧对华芯片出口限制[^2^] - - 欧洲加大气候科技领域投资[^2^] - - 中企助力丹麦发展清洁能源[^2^] - - 中国代表呼吁国际社会共同努力防止乌克兰局势失控[^2^] - - 中国和阿尔及利亚共同构建新型国际关系典范[^2^] - - 以上信息仅供参考,具体详情请点击以下链接查看: - - [1] [中国新闻_央视网(cctv.com)](https://news.cctv.com/china/) - [2] [人民网_网上的人民日报](http://www.people.com.cn/)`, id: '5' }, - { author: 'user', text: '写一个快排', id: '6' }, - { author: 'bot', text: "你好,这是Bing。我可以帮你用python写一个快排。快速排序是一种基于分治思想的排序算法,它的基本思想是:从待排序的数组中选择一个基准元素,将数组分为两个子数组,使得左边的子数组中的元素都小于或等于基准元素,右边的子数组中的元素都大于或等于基准元素,然后对两个子数组递归地进行快速排序,最后合并结果。下面是用python实现的快速排序的代码:\n\n```python\n# 定义一个函数,用于交换数组中的两个元素\ndef swap(arr, i, j):\n temp = arr[i]\n arr[i] = arr[j]\n arr[j] = temp\n\n# 定义一个函数,用于对数组的一段区间进行划分\ndef partition(arr, low, high):\n # 选择最右边的元素作为基准元素\n pivot = arr[high]\n # 初始化一个指针,指向左边区间的最后一个位置\n i = low - 1\n # 遍历左边区间中的每个元素\n for j in range(low, high):\n # 如果当前元素小于或等于基准元素\n if arr[j] <= pivot:\n # 将指针右移一位\n i += 1\n # 将指针所指的元素与当前元素交换\n swap(arr, i, j)\n # 将基准元素与指针右边的第一个元素交换\n swap(arr, i + 1, high)\n # 返回基准元素的最终位置\n return i + 1\n\n# 定义一个函数,用于对数组进行快速排序\ndef quick_sort(arr, low, high):\n # 如果区间长度大于1\n if low < high:\n # 对区间进行划分,得到基准元素的位置\n p = partition(arr, low, high)\n # 对左边子区间进行快速排序\n quick_sort(arr, low, p - 1)\n # 对右边子区间进行快速排序\n quick_sort(arr, p + 1, high)\n\n# 测试代码\narr = [5, 3, 7, 2, 9, 1, 4, 6, 8]\nprint(\"原始数组:\", arr)\nquick_sort(arr, 0, len(arr) - 1)\nprint(\"排序后的数组:\", arr)\n```", id: '7' }, - { - author: 'bot', text: "好的,我会尝试画一只猫。\n > ![一只猫](https://tse2.mm.bing.net/th/id/OIG.jz34V0PNVkPC229h9spV?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse1.mm.bing.net/th/id/OIG.6g7d.XLZMP_iwAByLhvo?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse2.mm.bing.net/th/id/OIG.iAxF4ekekYn7sZw9SmU6?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse4.mm.bing.net/th/id/OIG.qDnzeSKzUCeJcrBqc5mX?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)", - id: '8' - } -] - -export const GreetMessages = [ - '谢谢你! 知道你什么时候准备好继续前进总是很有帮助的。我现在能为你回答什么问题?', - '重新开始总是很棒。问我任何问题!', - '当然,我很乐意重新开始。我现在可以为你提供哪些帮助?', - '当然,我已准备好进行新的挑战。我现在可以为你做什么?', - '很好,让我们来更改主题。你在想什么?', - '不用担心,我很高兴尝试一些新内容。我现在可以为你回答什么问题?', - '好的,我准备好了!感谢重置。我们应该了解哪些内容?', - '感谢刷新!你有新的话题吗?', - '明白了,让我们重新开始。接下来应该讨论什么?', - '下一步!我可以为你做什么?', - '好的,我已准备好新话题。我们应该一起了解哪些内容?' 
-] - -export const bingConversationStyleAtom = atomWithStorage('bingConversationStyle', BingConversationStyle.Creative, undefined, { unstable_getOnInit: true }) -export const voiceAtom = atomWithStorage('enableTTS', false, undefined, { unstable_getOnInit: true }) - -type Param = { botId: BotId; page: string } - -const createBotInstance = () => { - return new BingWebBot({ - cookie: ' ', - ua: ' ', - }) -} - -export const chatFamily = atomFamily( - (param: Param) => { - return atomWithImmer({ - botId: param.botId, - bot: createBotInstance(), - messages: [] as ChatMessageModel[], - generatingMessageId: '', - abortController: undefined as AbortController | undefined, - conversationId: nanoid(), - }) - }, - (a, b) => a.botId === b.botId && a.page === b.page, -) - -export const hashAtom = atomWithHash('dialog', '') - -export const locationAtom = atomWithLocation() - -export const voiceListenAtom = atom(false) diff --git a/spaces/iakarshu/latr-vqa/dataset.py b/spaces/iakarshu/latr-vqa/dataset.py deleted file mode 100644 index ccd69f952f08736ed93c5fff9c6cda838f3f01fb..0000000000000000000000000000000000000000 --- a/spaces/iakarshu/latr-vqa/dataset.py +++ /dev/null @@ -1,150 +0,0 @@ -import os -import json -import numpy as np -import pytesseract -from PIL import Image, ImageDraw - -PAD_TOKEN_BOX = [0, 0, 0, 0] -max_seq_len = 512 - -## Function: 1 -## Purpose: Resize and align the bounding box for the different sized image - -def resize_align_bbox(bbox, orig_w, orig_h, target_w, target_h): - x_scale = target_w / orig_w - y_scale = target_h / orig_h - orig_left, orig_top, orig_right, orig_bottom = bbox - x = int(np.round(orig_left * x_scale)) - y = int(np.round(orig_top * y_scale)) - xmax = int(np.round(orig_right * x_scale)) - ymax = int(np.round(orig_bottom * y_scale)) - return [x, y, xmax, ymax] - -## Function: 2 -## Purpose: Reading the json file from the path and return the dictionary - -def load_json_file(file_path): - with open(file_path, 'r') as f: - data = json.load(f) - return data - -## Function: 3 -## Purpose: Getting the address of specific file type, eg: .pdf, .tif, so and so - -def get_specific_file(path, last_entry = 'tif'): - base_path = path - for i in os.listdir(path): - if i.endswith(last_entry): - return os.path.join(base_path, i) - - return '-1' - - -## Function: 4 - - -def get_tokens_with_boxes(unnormalized_word_boxes, list_of_words, tokenizer, pad_token_id = 0, pad_token_box = [0, 0, 0, 0], max_seq_len = 512): - - ''' - This function returns two items: - 1. unnormalized_token_boxes -> a list of len = max_seq_len, containing the boxes corresponding to the tokenized words, - one box might repeat as per the tokenization procedure - 2. tokenized_words -> tokenized words corresponding to the tokenizer and the list_of_words - ''' - - assert len(unnormalized_word_boxes) == len(list_of_words), "Bounding box length!= total words length" - - length_of_box = len(unnormalized_word_boxes) - unnormalized_token_boxes = [] - tokenized_words = [] - - for box, word in zip(unnormalized_word_boxes, list_of_words): - current_tokens = tokenizer(word, add_special_tokens = False).input_ids - unnormalized_token_boxes.extend([box]*len(current_tokens)) - tokenized_words.extend(current_tokens) - - if len(unnormalized_token_boxes)2011 TCNA Handbook Now Available.pdf

        Download File ✒ ✒ ✒ https://gohhs.com/2uz3GD



        - - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/inamXcontru/PoeticTTS/Acon Digital DeVerberate 2.0.1 How to Get the Most Out of the VST VST3 AAX and AU Formats.md b/spaces/inamXcontru/PoeticTTS/Acon Digital DeVerberate 2.0.1 How to Get the Most Out of the VST VST3 AAX and AU Formats.md deleted file mode 100644 index eabed8855af4ca09eb1e93b8441e8657408b74e8..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Acon Digital DeVerberate 2.0.1 How to Get the Most Out of the VST VST3 AAX and AU Formats.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Acon Digital DeVerberate 2.0.1


        Download File ---> https://gohhs.com/2uz4Cm



        - - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/inamXcontru/PoeticTTS/Adobe Premiere Pro CC 2018 v11.1.0.222 (x64) Portable Free Download for Windows PC.md b/spaces/inamXcontru/PoeticTTS/Adobe Premiere Pro CC 2018 v11.1.0.222 (x64) Portable Free Download for Windows PC.md deleted file mode 100644 index f972c5e151e3486d09affd6aefd74005d97d44c1..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Adobe Premiere Pro CC 2018 v11.1.0.222 (x64) Portable Free Download for Windows PC.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Adobe Premiere Pro CC 2018 v11.1.0.222 (x64) Portable download pc


        Download Zip > https://gohhs.com/2uz36A



        -
        - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/inamXcontru/PoeticTTS/Deadpool English Dubbed In Hindi Free Download.md b/spaces/inamXcontru/PoeticTTS/Deadpool English Dubbed In Hindi Free Download.md deleted file mode 100644 index b9268803dd9510b9772e885882111def6b16acf5..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Deadpool English Dubbed In Hindi Free Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Deadpool English Dubbed In Hindi Free Download


        DOWNLOAD ===== https://gohhs.com/2uz4LE



        - - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/inamXcontru/PoeticTTS/Disconnect from vCenter Server within PowerCLI script FAQs and Troubleshooting.md b/spaces/inamXcontru/PoeticTTS/Disconnect from vCenter Server within PowerCLI script FAQs and Troubleshooting.md deleted file mode 100644 index 92d22d66841a4aabe028f3cbec31648de5ef86d5..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Disconnect from vCenter Server within PowerCLI script FAQs and Troubleshooting.md +++ /dev/null @@ -1,12 +0,0 @@ - -

Freeware programs can be downloaded and used free of charge and without any time limitations. Freeware products can be used free of charge for both personal and professional (commercial) use.

        -

This license is commonly used for video games and it allows users to download and play the game for free. Basically, a product is offered Free to Play (Freemium) and the user can decide if they want to pay (Premium) for additional features, services, virtual or physical goods that expand the functionality of the game. In some cases, ads may be shown to the users.

        -

        Free Download Xwidget For Windows 8


        DOWNLOAD >>> https://gohhs.com/2uz3GL



        -

        After opening the main window you can view provided content in the included browser. More than 2000 plugins, widgets and other add-ons are available for download. It is important to mention that the free version of the program allows you to acquire only a limited number of widgets. There are free apps with similar functionality like DesktopX.

        -

XWidget is a Desktop Enhancements application from XWidget Software, similar to RocketDock, Android-x86, and zSNES. It has a simple, basic interface and, most importantly, it is free to download. XWidget is reliable software that is recommended by many Windows PC users.

        -

XWidget is one of the most popular Desktop Enhancements applications, alongside PCSX2, Input Director, and Visual Boy. This app has its advantages compared to other Desktop Enhancements applications: XWidget is lightweight and easy to use, simple for beginners and powerful for professionals. The XWidget application is free to download and is easy to install, easy to use, secure, and reliable.

        -

        Q: How do I access the free XWidget download for Windows PC?
A: It is easy! Just click the free XWidget download button at the top of this page. Clicking the download button will start the installer that downloads XWidget free for your PC/laptop.

        -

You can download XWidget from the official website. You will find all the free XWidget widgets in the official website's Gallery. Other than that, deviantArt is a great source of nice and free XWidget widgets.

        -

        aaccfb2cb3
        -
        -
        \ No newline at end of file diff --git a/spaces/innovatorved/whisper.api/app/core/models/__init__.py b/spaces/innovatorved/whisper.api/app/core/models/__init__.py deleted file mode 100644 index 620b8d244cf5049eeff0737c78a84236e951b800..0000000000000000000000000000000000000000 --- a/spaces/innovatorved/whisper.api/app/core/models/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -from .AuthToken import AuthToken, AuthTokenController -from .User import UserInDB -from .Transcribe import TranscibeInDB, TranscribeController - -from app.core.database import Base, engine - - -Base.metadata.create_all(engine) diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Guitar Rig 5 Pro !LINK! Crack Free Download.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Guitar Rig 5 Pro !LINK! Crack Free Download.md deleted file mode 100644 index d819adaf6a6b656f49196eb8278b35848f7aeb6d..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Guitar Rig 5 Pro !LINK! Crack Free Download.md +++ /dev/null @@ -1,10 +0,0 @@ -

        guitar rig 5 pro crack free download


        DOWNLOAD === https://urlin.us/2uEyvK



- -Sep 13, 2021 - Guitar Rig Crack is an effects processor that's great for building effect sequences, warming up your signal and, yes, recording your guitar. Guitar Rig Crack allows you to create effect sequences. -First, you just need to select the effect you want and it will display all the parameters you can adjust. -This is obviously good for beginners. -Then you can use the record function to create your own sequence and save it as a file. -Then there's the playback function, which allows you to play through the effects of the sequences and create your own music. 8a78ff9644
        -
        -
        -

        diff --git a/spaces/inreVtussa/clothingai/Examples/BEST Crack Solid Angle Maya To Arnold 1.3.0.1 For Maya 2015-2017 - BEST Cracking.md b/spaces/inreVtussa/clothingai/Examples/BEST Crack Solid Angle Maya To Arnold 1.3.0.1 For Maya 2015-2017 - BEST Cracking.md deleted file mode 100644 index d7c6ea10721f1fd8efeeb48628088ddf02541123..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/BEST Crack Solid Angle Maya To Arnold 1.3.0.1 For Maya 2015-2017 - BEST Cracking.md +++ /dev/null @@ -1,6 +0,0 @@ -

        CRACK Solid Angle Maya To Arnold 1.3.0.1 For Maya 2015-2017 - Cracking


        Download Zip - https://tiurll.com/2uCiFO



        -
        -Katana.2.6.To.Arnold.v2.2.3.0-AMPED [FTUApps] torrent or any other torrent from Windows category. ... Genuine cracked applications direct from the scene group. ... Solid Angle Maya To Arnold 1.3.0.1 for Maya 2015-2017 - Crackingpatching. 1fdad05405
        -
        -
        -

        diff --git a/spaces/inreVtussa/clothingai/Examples/Cyberlink Powerdirector 13 Keygen Free Download.md b/spaces/inreVtussa/clothingai/Examples/Cyberlink Powerdirector 13 Keygen Free Download.md deleted file mode 100644 index 7522236e4c51255040fb3810385fc007b59aa9f1..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Cyberlink Powerdirector 13 Keygen Free Download.md +++ /dev/null @@ -1,20 +0,0 @@ -
        -

        Why You Should Avoid Cyberlink Powerdirector 13 Keygen Free Download

        -

If you are looking for a way to download Cyberlink Powerdirector 13 for free, you might be tempted by some websites that offer keygen or crack files. However, these files are illegal and risky, and you should avoid them at all costs. Here are some reasons why you should not use a Cyberlink Powerdirector 13 keygen free download:

        -
          -
• It is a violation of the law. Cyberlink Powerdirector 13 is licensed software that requires a valid product key for activation. Using a keygen or crack file to bypass the activation process is a form of software piracy, which is a criminal offense in many countries. You could face legal consequences such as fines or jail time if you are caught using pirated software.
        • -
        • It is a security threat. Keygen and crack files are often infected with malware, viruses, or spyware that can harm your computer and compromise your personal data. These malicious programs can steal your passwords, bank details, or other sensitive information, or damage your system files and make your computer unusable. You could also expose yourself to hackers who can remotely access your device and control it without your knowledge.
        • -
• It is a poor-quality product. Keygen and crack files are not reliable and can cause errors, crashes, or glitches in the software. You might not be able to use all the features and functions of Cyberlink Powerdirector 13, or you may experience poor performance and compatibility issues. You could also lose your work or corrupt your files due to unexpected failures or bugs.
        • -
        • It is a waste of time and money. Keygen and crack files are not easy to find and download, and they often require complicated steps to install and run. You might spend hours searching for a working file, only to end up with a broken or fake one. You could also waste your money on buying additional software or services to fix the problems caused by the pirated software.
        • -
        -

As you can see, using a Cyberlink Powerdirector 13 keygen free download is not worth the risk and hassle. Instead of resorting to illegal and dangerous methods, consider the official version of Cyberlink Powerdirector 13, which offers many benefits:

        -

        Cyberlink Powerdirector 13 Keygen Free Download


        Downloadhttps://tiurll.com/2uCixd



        -
          -
        • It is legal and safe. By purchasing a legitimate product key from the official website[^1^], you can activate Cyberlink Powerdirector 13 without any worries. You can also download the software from a trusted source that guarantees its quality and security. You can avoid any legal troubles or malware infections that could ruin your computer and data.
        • -
        • It is high-quality and reliable. By using the official version of Cyberlink Powerdirector 13, you can enjoy all the features and functions that this powerful video editing software has to offer. You can work with hundreds of video and audio tracks, use more than 500 effects and templates, reformat low-resolution videos into 4K resolution, and more[^2^]. You can also expect smooth performance and compatibility with various formats and devices.
        • -
        • It is supported and updated. By registering your product key with Cyberlink, you can access their customer support service that can help you with any issues or questions you might have. You can also receive regular updates that improve the software's functionality and security. You can benefit from new features, bug fixes, and enhancements that keep your software up-to-date.
        • -
        • It is affordable and worthwhile. By investing in the official version of Cyberlink Powerdirector 13, you can get a great value for your money. You can choose from different editions and plans that suit your needs and budget[^3^]. You can also take advantage of discounts, promotions, and free trials that Cyberlink offers from time to time. You can save more money in the long run by avoiding the costs of repairing or replacing your computer or data due to pirated software.
        • -
        -

In conclusion, a Cyberlink Powerdirector 13 keygen free download is a bad idea that could put you in legal trouble, expose you to malware, deliver poor-quality results, and waste your time and money. Instead of risking it all for a free download, you should opt for the official version of Cyberlink Powerdirector 13.

        d5da3c52bf
        -
        -
        \ No newline at end of file diff --git a/spaces/ivotai/VITS-Umamusume-voice-synthesizer/ONNXVITS_to_onnx.py b/spaces/ivotai/VITS-Umamusume-voice-synthesizer/ONNXVITS_to_onnx.py deleted file mode 100644 index 846e39849535ed08accb10d7001f2431a851d372..0000000000000000000000000000000000000000 --- a/spaces/ivotai/VITS-Umamusume-voice-synthesizer/ONNXVITS_to_onnx.py +++ /dev/null @@ -1,31 +0,0 @@ -import ONNXVITS_models -import utils -from text import text_to_sequence -import torch -import commons - -def get_text(text, hps): - text_norm = text_to_sequence(text, hps.symbols, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - -hps = utils.get_hparams_from_file("../vits/pretrained_models/uma87.json") -symbols = hps.symbols -net_g = ONNXVITS_models.SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model) -_ = net_g.eval() -_ = utils.load_checkpoint("../vits/pretrained_models/uma_1153000.pth", net_g) - -text1 = get_text("ありがとうございます。", hps) -stn_tst = text1 -with torch.no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = torch.LongTensor([stn_tst.size(0)]) - sid = torch.tensor([0]) - o = net_g(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8, length_scale=1) \ No newline at end of file diff --git a/spaces/j0hngou/vision-diffmask/code/main.py b/spaces/j0hngou/vision-diffmask/code/main.py deleted file mode 100644 index 9b1225f31ff749e95f97e0d041acb58cba3e25f7..0000000000000000000000000000000000000000 --- a/spaces/j0hngou/vision-diffmask/code/main.py +++ /dev/null @@ -1,215 +0,0 @@ -from argparse import ArgumentParser, Namespace -from attributions import attention_rollout, grad_cam -from datamodules import CIFAR10QADataModule, ImageDataModule -from datamodules.utils import datamodule_factory -from functools import partial -from models import ImageInterpretationNet -from pytorch_lightning.callbacks import ModelCheckpoint -from pytorch_lightning.loggers import WandbLogger -from transformers import ViTForImageClassification -from utils.plot import DrawMaskCallback, log_masks - -import pytorch_lightning as pl - - -def get_experiment_name(args: Namespace): - """Create a name for the experiment based on the command line arguments.""" - # Convert to dictionary - args = vars(args) - - # Create a list with non-experiment arguments - non_experiment_args = [ - "add_blur", - "add_noise", - "add_rotation", - "base_model", - "batch_size", - "class_idx", - "data_dir", - "enable_progress_bar", - "from_pretrained", - "log_every_n_steps", - "num_epochs", - "num_workers", - "sample_images", - "seed", - ] - - # Create experiment name from experiment arguments - return "-".join( - [ - f"{name}={value}" - for name, value in sorted(args.items()) - if name not in non_experiment_args - ] - ) - - -def setup_sample_image_logs( - dm: ImageDataModule, - args: Namespace, - logger: WandbLogger, - n_panels: int = 2, # TODO: change? 
-): - """Setup the log callbacks for sampling and plotting images.""" - images_per_panel = args.sample_images - - # Sample images - sample_images = [] - iter_loader = iter(dm.val_dataloader()) - for panel in range(n_panels): - X, Y = next(iter_loader) - sample_images += [(X[:images_per_panel], Y[:images_per_panel])] - - # Define mask callback - mask_cb = partial(DrawMaskCallback, log_every_n_steps=args.log_every_n_steps) - - callbacks = [] - for panel in range(n_panels): - # Initialize ViT model - vit = ViTForImageClassification.from_pretrained(args.from_pretrained) - - # Extract samples for current panel - samples = sample_images[panel] - X, _ = samples - - # Log GradCAM - gradcam_masks = grad_cam(X, vit) - log_masks(X, gradcam_masks, f"GradCAM {panel}", logger) - - # Log Attention Rollout - rollout_masks = attention_rollout(X, vit) - log_masks(X, rollout_masks, f"Attention Rollout {panel}", logger) - - # Create mask callback - callbacks += [mask_cb(samples, key=f"{panel}")] - - return callbacks - - -def main(args: Namespace): - # Seed - pl.seed_everything(args.seed) - - # Load pre-trained Transformer - model = ViTForImageClassification.from_pretrained(args.from_pretrained) - - # Load datamodule - dm = datamodule_factory(args) - - # Setup datamodule to sample images for the mask callback - dm.prepare_data() - dm.setup("fit") - - # Create Vision DiffMask for the model - diffmask = ImageInterpretationNet( - model_cfg=model.config, - alpha=args.alpha, - lr=args.lr, - eps=args.eps, - lr_placeholder=args.lr_placeholder, - lr_alpha=args.lr_alpha, - mul_activation=args.mul_activation, - add_activation=args.add_activation, - placeholder=not args.no_placeholder, - weighted_layer_pred=args.weighted_layer_distribution, - ) - diffmask.set_vision_transformer(model) - - # Create wandb logger instance - wandb_logger = WandbLogger( - name=get_experiment_name(args), - project="Patch-DiffMask", - ) - - # Create checkpoint callback - ckpt_cb = ModelCheckpoint( - save_top_k=-1, - dirpath=f"checkpoints/{wandb_logger.version}", - every_n_train_steps=args.log_every_n_steps, - ) - - # Create mask callbacks - mask_cbs = setup_sample_image_logs(dm, args, wandb_logger) - - # Create trainer - trainer = pl.Trainer( - accelerator="auto", - callbacks=[ckpt_cb, *mask_cbs], - enable_progress_bar=args.enable_progress_bar, - logger=wandb_logger, - max_epochs=args.num_epochs, - ) - - # Train the model - trainer.fit(diffmask, dm) - - -if __name__ == "__main__": - parser = ArgumentParser() - - # Trainer - parser.add_argument( - "--enable_progress_bar", - action="store_true", - help="Whether to enable the progress bar (NOT recommended when logging to file).", - ) - parser.add_argument( - "--num_epochs", - type=int, - default=5, - help="Number of epochs to train.", - ) - parser.add_argument( - "--seed", - type=int, - default=123, - help="Random seed for reproducibility.", - ) - - # Logging - parser.add_argument( - "--sample_images", - type=int, - default=8, - help="Number of images to sample for the mask callback.", - ) - parser.add_argument( - "--log_every_n_steps", - type=int, - default=200, - help="Number of steps between logging media & checkpoints.", - ) - - # Base (classification) model - parser.add_argument( - "--base_model", - type=str, - default="ViT", - choices=["ViT"], - help="Base model architecture to train.", - ) - parser.add_argument( - "--from_pretrained", - type=str, - default="tanlq/vit-base-patch16-224-in21k-finetuned-cifar10", - help="The name of the pretrained HF model to load.", - ) - - # Interpretation 
model - ImageInterpretationNet.add_model_specific_args(parser) - - # Datamodule - ImageDataModule.add_model_specific_args(parser) - CIFAR10QADataModule.add_model_specific_args(parser) - parser.add_argument( - "--dataset", - type=str, - default="CIFAR10", - choices=["MNIST", "CIFAR10", "CIFAR10_QA", "toy"], - help="The dataset to use.", - ) - - args = parser.parse_args() - - main(args) diff --git a/spaces/jackli888/stable-diffusion-webui/modules/sd_hijack_checkpoint.py b/spaces/jackli888/stable-diffusion-webui/modules/sd_hijack_checkpoint.py deleted file mode 100644 index 2604d969f910ffdd65aff66acc0b6ab09b793b38..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/modules/sd_hijack_checkpoint.py +++ /dev/null @@ -1,46 +0,0 @@ -from torch.utils.checkpoint import checkpoint - -import ldm.modules.attention -import ldm.modules.diffusionmodules.openaimodel - - -def BasicTransformerBlock_forward(self, x, context=None): - return checkpoint(self._forward, x, context) - - -def AttentionBlock_forward(self, x): - return checkpoint(self._forward, x) - - -def ResBlock_forward(self, x, emb): - return checkpoint(self._forward, x, emb) - - -stored = [] - - -def add(): - if len(stored) != 0: - return - - stored.extend([ - ldm.modules.attention.BasicTransformerBlock.forward, - ldm.modules.diffusionmodules.openaimodel.ResBlock.forward, - ldm.modules.diffusionmodules.openaimodel.AttentionBlock.forward - ]) - - ldm.modules.attention.BasicTransformerBlock.forward = BasicTransformerBlock_forward - ldm.modules.diffusionmodules.openaimodel.ResBlock.forward = ResBlock_forward - ldm.modules.diffusionmodules.openaimodel.AttentionBlock.forward = AttentionBlock_forward - - -def remove(): - if len(stored) == 0: - return - - ldm.modules.attention.BasicTransformerBlock.forward = stored[0] - ldm.modules.diffusionmodules.openaimodel.ResBlock.forward = stored[1] - ldm.modules.diffusionmodules.openaimodel.AttentionBlock.forward = stored[2] - - stored.clear() - diff --git a/spaces/jamesyoung999/whisper_word_timestamps/app.py b/spaces/jamesyoung999/whisper_word_timestamps/app.py deleted file mode 100644 index 9f5f92ee25ee55c3f0c0370d99f0ce14ddaefded..0000000000000000000000000000000000000000 --- a/spaces/jamesyoung999/whisper_word_timestamps/app.py +++ /dev/null @@ -1,199 +0,0 @@ -import gradio as gr -import librosa -import numpy as np -import moviepy.editor as mpy -import torch - -from PIL import Image, ImageDraw, ImageFont -from transformers import pipeline - - -max_duration = 60 # seconds -fps = 25 -video_width = 640 -video_height = 480 -margin_left = 20 -margin_right = 20 -margin_top = 20 -line_height = 44 - -background_image = Image.open("background.png") -font = ImageFont.truetype("Lato-Regular.ttf", 40) -text_color = (255, 200, 200) -highlight_color = (255, 255, 255) - -# checkpoint = "openai/whisper-tiny" -# checkpoint = "openai/whisper-base" -checkpoint = "openai/whisper-small" - -if torch.cuda.is_available() and torch.cuda.device_count() > 0: - from transformers import ( - AutomaticSpeechRecognitionPipeline, - WhisperForConditionalGeneration, - WhisperProcessor, - ) - model = WhisperForConditionalGeneration.from_pretrained(checkpoint).to("cuda").half() - processor = WhisperProcessor.from_pretrained(checkpoint) - pipe = AutomaticSpeechRecognitionPipeline( - model=model, - tokenizer=processor.tokenizer, - feature_extractor=processor.feature_extractor, - batch_size=8, - torch_dtype=torch.float16, - device="cuda:0" - ) -else: - pipe = pipeline(model=checkpoint) - -# TODO: no longer need to 
set these manually once the models have been updated on the Hub -# whisper-tiny -# pipe.model.generation_config.alignment_heads = [[2, 2], [3, 0], [3, 2], [3, 3], [3, 4], [3, 5]] -# whisper-base -# pipe.model.generation_config.alignment_heads = [[3, 1], [4, 2], [4, 3], [4, 7], [5, 1], [5, 2], [5, 4], [5, 6]] -# whisper-small -pipe.model.generation_config.alignment_heads = [[5, 3], [5, 9], [8, 0], [8, 4], [8, 7], [8, 8], [9, 0], [9, 7], [9, 9], [10, 5]] - -chunks = [] - -start_chunk = 0 -last_draws = [] -last_image = None - - -def make_frame(t): - global chunks, start_chunk, last_draws, last_image - - # TODO in the Henry V example, the word "desires" has an ending timestamp - # that's too far into the future, and so the word stays highlighted. - # Could fix this by finding the latest word that is active in the chunk - # and only highlight that one. - - image = background_image.copy() - draw = ImageDraw.Draw(image) - - # for debugging: draw frame time - #draw.text((20, 20), str(t), fill=text_color, font=font) - - space_length = draw.textlength(" ", font) - x = margin_left - y = margin_top - - # Create a list of drawing commands - draws = [] - for i in range(start_chunk, len(chunks)): - chunk = chunks[i] - chunk_start = chunk["timestamp"][0] - chunk_end = chunk["timestamp"][1] - if chunk_start > t: break - if chunk_end is None: chunk_end = max_duration - - word = chunk["text"] - word_length = draw.textlength(word + " ", font) - space_length - - if x + word_length >= video_width - margin_right: - x = margin_left - y += line_height - - # restart page when end is reached - if y >= margin_top + line_height * 7: - start_chunk = i - break - - highlight = (chunk_start <= t < chunk_end) - draws.append([x, y, word, word_length, highlight]) - - x += word_length + space_length - - # If the drawing commands didn't change, then reuse the last image, - # otherwise draw a new image - if draws != last_draws: - for x, y, word, word_length, highlight in draws: - if highlight: - color = highlight_color - draw.rectangle([x, y + line_height, x + word_length, y + line_height + 4], fill=color) - else: - color = text_color - - draw.text((x, y), word, fill=color, font=font) - - last_image = np.array(image) - last_draws = draws - - return last_image - - -def predict(audio_path): - global chunks, start_chunk, last_draws, last_image - - start_chunk = 0 - last_draws = [] - last_image = None - - audio_data, sr = librosa.load(audio_path, mono=True) - duration = librosa.get_duration(y=audio_data, sr=sr) - duration = min(max_duration, duration) - audio_data = audio_data[:int(duration * sr)] - - # Run Whisper to get word-level timestamps. - audio_inputs = librosa.resample(audio_data, orig_sr=sr, target_sr=pipe.feature_extractor.sampling_rate) - output = pipe(audio_inputs, chunk_length_s=30, stride_length_s=[4, 2], return_timestamps="word") - chunks = output["chunks"] - #print(chunks) - - # Create the video. - clip = mpy.VideoClip(make_frame, duration=duration) - audio_clip = mpy.AudioFileClip(audio_path).set_duration(duration) - clip = clip.set_audio(audio_clip) - clip.write_videofile("my_video.mp4", fps=fps, codec="libx264", audio_codec="aac") - return "my_video.mp4" - - -title = "Word-level timestamps with Whisper" - -description = """ -This demo shows Whisper word-level timestamps in action using Hugging Face Transformers. It creates a video showing subtitled audio with the current word highlighted. It can even do music lyrics! - -This demo uses the openai/whisper-small checkpoint. 
- -Since it's only a demo, the output is limited to the first 60 seconds of audio. -To use this on longer audio, duplicate the space -and in app.py change the value of `max_duration`. -""" - -article = """ -
        - -

        Credits:

        - -

          -
        • Shakespeare's "Henry V" speech from acclivity (CC BY-NC 4.0 license) -
        • "Here's to the Crazy Ones" speech by Steve Jobs
        • -
        • "Stupid People" comedy routine by Bill Engvall
        • -
        • "BeOS, It's The OS" song by The Cotton Squares
        • -
        • Lato font by Łukasz Dziedzic (licensed under Open Font License)
        • -
        • Whisper model by OpenAI
        • -
        - -
        -""" - -examples = [ - "examples/steve_jobs_crazy_ones.mp3", - "examples/henry5.wav", - "examples/stupid_people.mp3", - "examples/beos_song.mp3", -] - -gr.Interface( - fn=predict, - inputs=[ - gr.Audio(label="Upload Audio", source="upload", type="filepath"), - ], - outputs=[ - gr.Video(label="Output Video"), - ], - title=title, - description=description, - article=article, - examples=examples, -).launch() diff --git a/spaces/jannisborn/paccmann/cos.py b/spaces/jannisborn/paccmann/cos.py deleted file mode 100644 index 0b8fabf94af125b840b76194d441ee20c9cc91b4..0000000000000000000000000000000000000000 --- a/spaces/jannisborn/paccmann/cos.py +++ /dev/null @@ -1,155 +0,0 @@ -"""COS utitities.""" -import logging -import os -import tempfile -from io import BufferedReader -from typing import List, Optional, Tuple -from urllib.parse import urlparse - -import boto3 -from boto3_type_annotations.s3 import Bucket -from botocore.client import Config - -logger = logging.getLogger(__name__) - - -def connect_bucket(s3_uri: str) -> Tuple[Bucket, List[str]]: - parsed_uri = urlparse(s3_uri) - # parse bucket and path, where path can be empty list - _, bucket_name, *split_key = parsed_uri.path.split("/") - # parsing credentials and host - credentials, host = parsed_uri.netloc.split("@") - # getting keys - access, secret = credentials.split(":") - # establish connection - connection = boto3.resource( - "s3", - endpoint_url="http://{}".format(host), - aws_access_key_id=access, - aws_secret_access_key=secret, - config=Config(signature_version="s3v4"), - region_name="us-east-1", - ) - return connection.Bucket(bucket_name), split_key - - -def ensure_filepath_from_uri(file_uri: str) -> str: - """ - Get a file on the local storage. - In case the file_uri provided is a S3 URI, dowloads the - file and return the local path. - Args: - file_uri (str): a uri, either filesystem or S3. - Returns: - str: the path to the file on the local filesystem. - """ - if file_uri.startswith("s3://"): - try: - bucket, split_key = connect_bucket(file_uri) - path = os.path.join(*split_key) - # create a file handle for storing the file locally - a_file = tempfile.NamedTemporaryFile(delete=False) - # make sure we close the file - a_file.close() - # download the file - bucket.download_file(path, a_file.name) - return a_file.name - except Exception: - message = "Getting file from COS failed " "for the provided URI: {}".format( - file_uri - ) - logger.exception(message) - raise RuntimeError(message) - else: - logger.debug(f"Searching for {file_uri}") - if os.path.exists(file_uri): - return file_uri - else: - message = "File not found on local filesystem." - logger.error(message) - raise RuntimeError(message) - - -# COS configuration -COS_BUCKET_URI = os.environ.get( - "COS_BUCKET_URI", os.path.join(os.getcwd(), "artifacts") -) -COS_UPLOAD_POLICY = os.environ.get("COS_UPLOAD_POLICY", "public-read-write") -# results prefix -RESULTS_PREFIX = "results" - - -def download_from_key(key: str, file_path: Optional[str] = None) -> None: - """Download a single file from COS. - If no file_path is given, object name is taken as relative local path. - Args: - key (str): S3 key. - file_path (str, optional): Path of downloaded file. Defaults to None. - """ - file_path = key if file_path is None else file_path - os.makedirs(os.path.dirname(file_path), exist_ok=True) - BUCKET.download_file(key, file_path) - - -def upload_to_key(file_path: str, key: str) -> None: - """Upload local file to COS. - Args: - file_path (str): Local filepath. - key (str): S3 key. 
- """ - BUCKET.upload_file(file_path, key) - - -def fileobject_to_key(readable_binary: BufferedReader, key: str) -> None: - """Upload readable, binary file from handle to COS. - Args: - readable_binary (BufferedReader): filehandle, e.g. opened in 'rb' mode. - key (str): S3 key. - """ - BUCKET.upload_fileobj(readable_binary, key) - - -def delete_from_key(key_or_prefix: str) -> None: - """Delete all files matching given prefix from COS. - Args: - key_or_prefix (str): S3 uri including object name prefix. - """ - BUCKET.objects.filter(Prefix=key_or_prefix).delete() - - -def string_to_key(string: str, key: str) -> None: - """Upload string as object to COS. - Args: - string (str): object to be stored. - key (str): S3 key. - """ - BUCKET.put_object(Key=key, Body=string.encode()) - - -def bytes_to_key(some_bytes: bytes, key: str) -> None: - """Upload bytes as object to COS. - Args: - some_bytes (bytes): object to be stored. - key (str): S3 key. - """ - BUCKET.put_object(Key=key, Body=some_bytes) - - -def string_from_key(key: str) -> str: - """Get object from COS as string. - Args: - key (str): S3 key. - Returns: - str: object. - """ - return BUCKET.Object(key).get()["Body"].read().decode("utf-8") - - -def bytes_from_key(key: str) -> bytes: - """Get object from COS as bytes. - Args: - key (str): S3 key. - Returns: - bytes: object. - """ - return BUCKET.Object(key).get()["Body"].read() diff --git a/spaces/javiermontesinos/whisper/README.md b/spaces/javiermontesinos/whisper/README.md deleted file mode 100644 index b323bfdf8a4e6346799e4c8eb1f59bacb8064123..0000000000000000000000000000000000000000 --- a/spaces/javiermontesinos/whisper/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Whisper -emoji: 🦀 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.4.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jbilcke-hf/AnimateDiff/animatediff/models/unet_blocks.py b/spaces/jbilcke-hf/AnimateDiff/animatediff/models/unet_blocks.py deleted file mode 100644 index 8a17f2016a699c0469f0d79394b897f2d15df8a7..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/AnimateDiff/animatediff/models/unet_blocks.py +++ /dev/null @@ -1,733 +0,0 @@ -# Adapted from https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unet_2d_blocks.py - -import torch -from torch import nn - -from .attention import Transformer3DModel -from .resnet import Downsample3D, ResnetBlock3D, Upsample3D -from .motion_module import get_motion_module - -import pdb - -def get_down_block( - down_block_type, - num_layers, - in_channels, - out_channels, - temb_channels, - add_downsample, - resnet_eps, - resnet_act_fn, - attn_num_head_channels, - resnet_groups=None, - cross_attention_dim=None, - downsample_padding=None, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - resnet_time_scale_shift="default", - - unet_use_cross_frame_attention=None, - unet_use_temporal_attention=None, - - use_motion_module=None, - - motion_module_type=None, - motion_module_kwargs=None, -): - down_block_type = down_block_type[7:] if down_block_type.startswith("UNetRes") else down_block_type - if down_block_type == "DownBlock3D": - return DownBlock3D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - 
resnet_groups=resnet_groups, - downsample_padding=downsample_padding, - resnet_time_scale_shift=resnet_time_scale_shift, - - use_motion_module=use_motion_module, - motion_module_type=motion_module_type, - motion_module_kwargs=motion_module_kwargs, - ) - elif down_block_type == "CrossAttnDownBlock3D": - if cross_attention_dim is None: - raise ValueError("cross_attention_dim must be specified for CrossAttnDownBlock3D") - return CrossAttnDownBlock3D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - downsample_padding=downsample_padding, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attn_num_head_channels, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - resnet_time_scale_shift=resnet_time_scale_shift, - - unet_use_cross_frame_attention=unet_use_cross_frame_attention, - unet_use_temporal_attention=unet_use_temporal_attention, - - use_motion_module=use_motion_module, - motion_module_type=motion_module_type, - motion_module_kwargs=motion_module_kwargs, - ) - raise ValueError(f"{down_block_type} does not exist.") - - -def get_up_block( - up_block_type, - num_layers, - in_channels, - out_channels, - prev_output_channel, - temb_channels, - add_upsample, - resnet_eps, - resnet_act_fn, - attn_num_head_channels, - resnet_groups=None, - cross_attention_dim=None, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - resnet_time_scale_shift="default", - - unet_use_cross_frame_attention=None, - unet_use_temporal_attention=None, - - use_motion_module=None, - motion_module_type=None, - motion_module_kwargs=None, -): - up_block_type = up_block_type[7:] if up_block_type.startswith("UNetRes") else up_block_type - if up_block_type == "UpBlock3D": - return UpBlock3D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - resnet_time_scale_shift=resnet_time_scale_shift, - - use_motion_module=use_motion_module, - motion_module_type=motion_module_type, - motion_module_kwargs=motion_module_kwargs, - ) - elif up_block_type == "CrossAttnUpBlock3D": - if cross_attention_dim is None: - raise ValueError("cross_attention_dim must be specified for CrossAttnUpBlock3D") - return CrossAttnUpBlock3D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attn_num_head_channels, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - resnet_time_scale_shift=resnet_time_scale_shift, - - unet_use_cross_frame_attention=unet_use_cross_frame_attention, - unet_use_temporal_attention=unet_use_temporal_attention, - - use_motion_module=use_motion_module, - motion_module_type=motion_module_type, - 
motion_module_kwargs=motion_module_kwargs, - ) - raise ValueError(f"{up_block_type} does not exist.") - - -class UNetMidBlock3DCrossAttn(nn.Module): - def __init__( - self, - in_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - output_scale_factor=1.0, - cross_attention_dim=1280, - dual_cross_attention=False, - use_linear_projection=False, - upcast_attention=False, - - unet_use_cross_frame_attention=None, - unet_use_temporal_attention=None, - - use_motion_module=None, - - motion_module_type=None, - motion_module_kwargs=None, - ): - super().__init__() - - self.has_cross_attention = True - self.attn_num_head_channels = attn_num_head_channels - resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32) - - # there is always at least one resnet - resnets = [ - ResnetBlock3D( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ] - attentions = [] - motion_modules = [] - - for _ in range(num_layers): - if dual_cross_attention: - raise NotImplementedError - attentions.append( - Transformer3DModel( - attn_num_head_channels, - in_channels // attn_num_head_channels, - in_channels=in_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - use_linear_projection=use_linear_projection, - upcast_attention=upcast_attention, - - unet_use_cross_frame_attention=unet_use_cross_frame_attention, - unet_use_temporal_attention=unet_use_temporal_attention, - ) - ) - motion_modules.append( - get_motion_module( - in_channels=in_channels, - motion_module_type=motion_module_type, - motion_module_kwargs=motion_module_kwargs, - ) if use_motion_module else None - ) - resnets.append( - ResnetBlock3D( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - self.motion_modules = nn.ModuleList(motion_modules) - - def forward(self, hidden_states, temb=None, encoder_hidden_states=None, attention_mask=None): - hidden_states = self.resnets[0](hidden_states, temb) - for attn, resnet, motion_module in zip(self.attentions, self.resnets[1:], self.motion_modules): - hidden_states = attn(hidden_states, encoder_hidden_states=encoder_hidden_states).sample - hidden_states = motion_module(hidden_states, temb, encoder_hidden_states=encoder_hidden_states) if motion_module is not None else hidden_states - hidden_states = resnet(hidden_states, temb) - - return hidden_states - - -class CrossAttnDownBlock3D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - cross_attention_dim=1280, - 
output_scale_factor=1.0, - downsample_padding=1, - add_downsample=True, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - - unet_use_cross_frame_attention=None, - unet_use_temporal_attention=None, - - use_motion_module=None, - - motion_module_type=None, - motion_module_kwargs=None, - ): - super().__init__() - resnets = [] - attentions = [] - motion_modules = [] - - self.has_cross_attention = True - self.attn_num_head_channels = attn_num_head_channels - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock3D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - if dual_cross_attention: - raise NotImplementedError - attentions.append( - Transformer3DModel( - attn_num_head_channels, - out_channels // attn_num_head_channels, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - - unet_use_cross_frame_attention=unet_use_cross_frame_attention, - unet_use_temporal_attention=unet_use_temporal_attention, - ) - ) - motion_modules.append( - get_motion_module( - in_channels=out_channels, - motion_module_type=motion_module_type, - motion_module_kwargs=motion_module_kwargs, - ) if use_motion_module else None - ) - - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - self.motion_modules = nn.ModuleList(motion_modules) - - if add_downsample: - self.downsamplers = nn.ModuleList( - [ - Downsample3D( - out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - self.gradient_checkpointing = False - - def forward(self, hidden_states, temb=None, encoder_hidden_states=None, attention_mask=None): - output_states = () - - for resnet, attn, motion_module in zip(self.resnets, self.attentions, self.motion_modules): - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module, return_dict=None): - def custom_forward(*inputs): - if return_dict is not None: - return module(*inputs, return_dict=return_dict) - else: - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - hidden_states = torch.utils.checkpoint.checkpoint( - create_custom_forward(attn, return_dict=False), - hidden_states, - encoder_hidden_states, - )[0] - if motion_module is not None: - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(motion_module), hidden_states.requires_grad_(), temb, encoder_hidden_states) - - else: - hidden_states = resnet(hidden_states, temb) - hidden_states = attn(hidden_states, encoder_hidden_states=encoder_hidden_states).sample - - # add motion module - hidden_states = motion_module(hidden_states, temb, encoder_hidden_states=encoder_hidden_states) if motion_module is not None else hidden_states - - output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - output_states += (hidden_states,) 
- - return hidden_states, output_states - - -class DownBlock3D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_downsample=True, - downsample_padding=1, - - use_motion_module=None, - motion_module_type=None, - motion_module_kwargs=None, - ): - super().__init__() - resnets = [] - motion_modules = [] - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock3D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - motion_modules.append( - get_motion_module( - in_channels=out_channels, - motion_module_type=motion_module_type, - motion_module_kwargs=motion_module_kwargs, - ) if use_motion_module else None - ) - - self.resnets = nn.ModuleList(resnets) - self.motion_modules = nn.ModuleList(motion_modules) - - if add_downsample: - self.downsamplers = nn.ModuleList( - [ - Downsample3D( - out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - self.gradient_checkpointing = False - - def forward(self, hidden_states, temb=None, encoder_hidden_states=None): - output_states = () - - for resnet, motion_module in zip(self.resnets, self.motion_modules): - if self.training and self.gradient_checkpointing: - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - if motion_module is not None: - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(motion_module), hidden_states.requires_grad_(), temb, encoder_hidden_states) - else: - hidden_states = resnet(hidden_states, temb) - - # add motion module - hidden_states = motion_module(hidden_states, temb, encoder_hidden_states=encoder_hidden_states) if motion_module is not None else hidden_states - - output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - output_states += (hidden_states,) - - return hidden_states, output_states - - -class CrossAttnUpBlock3D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - prev_output_channel: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - cross_attention_dim=1280, - output_scale_factor=1.0, - add_upsample=True, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - - unet_use_cross_frame_attention=None, - unet_use_temporal_attention=None, - - use_motion_module=None, - - motion_module_type=None, - motion_module_kwargs=None, - ): - super().__init__() - resnets = [] - attentions = [] - motion_modules = [] - - self.has_cross_attention = True - 
self.attn_num_head_channels = attn_num_head_channels - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - resnets.append( - ResnetBlock3D( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - if dual_cross_attention: - raise NotImplementedError - attentions.append( - Transformer3DModel( - attn_num_head_channels, - out_channels // attn_num_head_channels, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - - unet_use_cross_frame_attention=unet_use_cross_frame_attention, - unet_use_temporal_attention=unet_use_temporal_attention, - ) - ) - motion_modules.append( - get_motion_module( - in_channels=out_channels, - motion_module_type=motion_module_type, - motion_module_kwargs=motion_module_kwargs, - ) if use_motion_module else None - ) - - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - self.motion_modules = nn.ModuleList(motion_modules) - - if add_upsample: - self.upsamplers = nn.ModuleList([Upsample3D(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - self.gradient_checkpointing = False - - def forward( - self, - hidden_states, - res_hidden_states_tuple, - temb=None, - encoder_hidden_states=None, - upsample_size=None, - attention_mask=None, - ): - for resnet, attn, motion_module in zip(self.resnets, self.attentions, self.motion_modules): - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module, return_dict=None): - def custom_forward(*inputs): - if return_dict is not None: - return module(*inputs, return_dict=return_dict) - else: - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - hidden_states = torch.utils.checkpoint.checkpoint( - create_custom_forward(attn, return_dict=False), - hidden_states, - encoder_hidden_states, - )[0] - if motion_module is not None: - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(motion_module), hidden_states.requires_grad_(), temb, encoder_hidden_states) - - else: - hidden_states = resnet(hidden_states, temb) - hidden_states = attn(hidden_states, encoder_hidden_states=encoder_hidden_states).sample - - # add motion module - hidden_states = motion_module(hidden_states, temb, encoder_hidden_states=encoder_hidden_states) if motion_module is not None else hidden_states - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states, upsample_size) - - return hidden_states - - -class UpBlock3D(nn.Module): - def __init__( - self, - in_channels: int, - prev_output_channel: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: 
float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_upsample=True, - - use_motion_module=None, - motion_module_type=None, - motion_module_kwargs=None, - ): - super().__init__() - resnets = [] - motion_modules = [] - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - resnets.append( - ResnetBlock3D( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - motion_modules.append( - get_motion_module( - in_channels=out_channels, - motion_module_type=motion_module_type, - motion_module_kwargs=motion_module_kwargs, - ) if use_motion_module else None - ) - - self.resnets = nn.ModuleList(resnets) - self.motion_modules = nn.ModuleList(motion_modules) - - if add_upsample: - self.upsamplers = nn.ModuleList([Upsample3D(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - self.gradient_checkpointing = False - - def forward(self, hidden_states, res_hidden_states_tuple, temb=None, upsample_size=None, encoder_hidden_states=None,): - for resnet, motion_module in zip(self.resnets, self.motion_modules): - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - - if self.training and self.gradient_checkpointing: - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - if motion_module is not None: - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(motion_module), hidden_states.requires_grad_(), temb, encoder_hidden_states) - else: - hidden_states = resnet(hidden_states, temb) - hidden_states = motion_module(hidden_states, temb, encoder_hidden_states=encoder_hidden_states) if motion_module is not None else hidden_states - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states, upsample_size) - - return hidden_states diff --git a/spaces/jbilcke-hf/ai-clip-factory/next.config.js b/spaces/jbilcke-hf/ai-clip-factory/next.config.js deleted file mode 100644 index 4a29795b01a1f36b3e0f1d19f53852cdf63b9134..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/ai-clip-factory/next.config.js +++ /dev/null @@ -1,11 +0,0 @@ -/** @type {import('next').NextConfig} */ -const nextConfig = { - output: 'standalone', - - experimental: { - serverActions: true, - serverActionsBodySizeLimit: '8mb', - }, -} - -module.exports = nextConfig diff --git a/spaces/jbilcke-hf/observer/src/lib/getSpeechSynthesisVoice.ts b/spaces/jbilcke-hf/observer/src/lib/getSpeechSynthesisVoice.ts deleted file mode 100644 index 83de2d8d85c0cfd9679c497be0db8b7502df6e1d..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/observer/src/lib/getSpeechSynthesisVoice.ts +++ /dev/null @@ -1,22 +0,0 @@ -export function getSpeechSynthesisVoice(speechSynthesis: SpeechSynthesis): 
SpeechSynthesisVoice { - const allVoices = speechSynthesis.getVoices() - - console.log("all voices:") - console.table(allVoices) - - const fallbackVoice = allVoices[0] - - const enVoices = allVoices.filter(voice => voice.lang.toLowerCase() === "en-us") - - console.log("available english voices:") - console.table(enVoices) - - const kathyVoice = enVoices.find(voice => voice.name.includes("Kathy")) - - // if we find a high-quality voice - const googleVoice = enVoices.find(voice => voice.name.includes("Google")) - - // console.log("google voice:", googleVoice) - - return googleVoice || kathyVoice || fallbackVoice -} \ No newline at end of file diff --git a/spaces/jcenaa/Segment-Any-RGBD/read_video.py b/spaces/jcenaa/Segment-Any-RGBD/read_video.py deleted file mode 100644 index 1da1027914daa99d1da9308f0d967ac7e012b49e..0000000000000000000000000000000000000000 --- a/spaces/jcenaa/Segment-Any-RGBD/read_video.py +++ /dev/null @@ -1,7 +0,0 @@ -import cv2 - -Depth_Semantic_SAM_Mask_gif = cv2.VideoCapture('outputs/depth_3d_sam_mask.mp4') - -while(Depth_Semantic_SAM_Mask_gif .isOpened()): - ret, frame = Depth_Semantic_SAM_Mask_gif.read() - print(ret, frame.shape) \ No newline at end of file diff --git a/spaces/jetwill/IDEA-CCNL-Taiyi-Stable-Diffusion-1B-Chinese-v0.11/app.py b/spaces/jetwill/IDEA-CCNL-Taiyi-Stable-Diffusion-1B-Chinese-v0.11/app.py deleted file mode 100644 index 11260502a680424b97f3f23175f38886ee3e14f4..0000000000000000000000000000000000000000 --- a/spaces/jetwill/IDEA-CCNL-Taiyi-Stable-Diffusion-1B-Chinese-v0.11/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1").launch() \ No newline at end of file diff --git a/spaces/jgurzoni/image_background_swapper/saicinpainting/training/visualizers/base.py b/spaces/jgurzoni/image_background_swapper/saicinpainting/training/visualizers/base.py deleted file mode 100644 index 675f01682ddf5e31b6cc341735378c6f3b242e49..0000000000000000000000000000000000000000 --- a/spaces/jgurzoni/image_background_swapper/saicinpainting/training/visualizers/base.py +++ /dev/null @@ -1,73 +0,0 @@ -import abc -from typing import Dict, List - -import numpy as np -import torch -from skimage import color -from skimage.segmentation import mark_boundaries - -from . 
import colors - -COLORS, _ = colors.generate_colors(151) # 151 - max classes for semantic segmentation - - -class BaseVisualizer: - @abc.abstractmethod - def __call__(self, epoch_i, batch_i, batch, suffix='', rank=None): - """ - Take a batch, make an image from it and visualize - """ - raise NotImplementedError() - - -def visualize_mask_and_images(images_dict: Dict[str, np.ndarray], keys: List[str], - last_without_mask=True, rescale_keys=None, mask_only_first=None, - black_mask=False) -> np.ndarray: - mask = images_dict['mask'] > 0.5 - result = [] - for i, k in enumerate(keys): - img = images_dict[k] - img = np.transpose(img, (1, 2, 0)) - - if rescale_keys is not None and k in rescale_keys: - img = img - img.min() - img /= img.max() + 1e-5 - if len(img.shape) == 2: - img = np.expand_dims(img, 2) - - if img.shape[2] == 1: - img = np.repeat(img, 3, axis=2) - elif (img.shape[2] > 3): - img_classes = img.argmax(2) - img = color.label2rgb(img_classes, colors=COLORS) - - if mask_only_first: - need_mark_boundaries = i == 0 - else: - need_mark_boundaries = i < len(keys) - 1 or not last_without_mask - - if need_mark_boundaries: - if black_mask: - img = img * (1 - mask[0][..., None]) - img = mark_boundaries(img, - mask[0], - color=(1., 0., 0.), - outline_color=(1., 1., 1.), - mode='thick') - result.append(img) - return np.concatenate(result, axis=1) - - -def visualize_mask_and_images_batch(batch: Dict[str, torch.Tensor], keys: List[str], max_items=10, - last_without_mask=True, rescale_keys=None) -> np.ndarray: - batch = {k: tens.detach().cpu().numpy() for k, tens in batch.items() - if k in keys or k == 'mask'} - - batch_size = next(iter(batch.values())).shape[0] - items_to_vis = min(batch_size, max_items) - result = [] - for i in range(items_to_vis): - cur_dct = {k: tens[i] for k, tens in batch.items()} - result.append(visualize_mask_and_images(cur_dct, keys, last_without_mask=last_without_mask, - rescale_keys=rescale_keys)) - return np.concatenate(result, axis=0) diff --git a/spaces/jie1/jie_test4/Rfile.py b/spaces/jie1/jie_test4/Rfile.py deleted file mode 100644 index 1a07dea87e0b244fc6364ba386557fffd0488299..0000000000000000000000000000000000000000 --- a/spaces/jie1/jie_test4/Rfile.py +++ /dev/null @@ -1,11 +0,0 @@ -def j_reads(file): - with open(file, "r") as f: - contents = f.readlines() - return contents - - -def j_read(file): - with open(file, "r") as f: - content = f.readline() - return content - diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/Protocol/test_SecretSharing.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/Protocol/test_SecretSharing.py deleted file mode 100644 index 0ea58a574b0f1b444cf182d89927842e98c07837..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/Protocol/test_SecretSharing.py +++ /dev/null @@ -1,267 +0,0 @@ -# -# SelfTest/Protocol/test_secret_sharing.py: Self-test for secret sharing protocols -# -# =================================================================== -# -# Copyright (c) 2014, Legrandin -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. 
Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. -# =================================================================== - -from unittest import main, TestCase, TestSuite -from binascii import unhexlify, hexlify - -from Crypto.Util.py3compat import * -from Crypto.SelfTest.st_common import list_test_cases - -from Crypto.Protocol.SecretSharing import Shamir, _Element, \ - _mult_gf2, _div_gf2 - -class GF2_Tests(TestCase): - - def test_mult_gf2(self): - # Prove mult by zero - x = _mult_gf2(0,0) - self.assertEqual(x, 0) - - # Prove mult by unity - x = _mult_gf2(34, 1) - self.assertEqual(x, 34) - - z = 3 # (x+1) - y = _mult_gf2(z, z) - self.assertEqual(y, 5) # (x+1)^2 = x^2 + 1 - y = _mult_gf2(y, z) - self.assertEqual(y, 15) # (x+1)^3 = x^3 + x^2 + x + 1 - y = _mult_gf2(y, z) - self.assertEqual(y, 17) # (x+1)^4 = x^4 + 1 - - # Prove linearity works - comps = [1, 4, 128, 2**34] - sum_comps = 1+4+128+2**34 - y = 908 - z = _mult_gf2(sum_comps, y) - w = 0 - for x in comps: - w ^= _mult_gf2(x, y) - self.assertEqual(w, z) - - def test_div_gf2(self): - from Crypto.Util.number import size as deg - - x, y = _div_gf2(567, 7) - self.assertTrue(deg(y) < deg(7)) - - w = _mult_gf2(x, 7) ^ y - self.assertEqual(567, w) - - x, y = _div_gf2(7, 567) - self.assertEqual(x, 0) - self.assertEqual(y, 7) - -class Element_Tests(TestCase): - - def test1(self): - # Test encondings - e = _Element(256) - self.assertEqual(int(e), 256) - self.assertEqual(e.encode(), bchr(0)*14 + b("\x01\x00")) - - e = _Element(bchr(0)*14 + b("\x01\x10")) - self.assertEqual(int(e), 0x110) - self.assertEqual(e.encode(), bchr(0)*14 + b("\x01\x10")) - - # Only 16 byte string are a valid encoding - self.assertRaises(ValueError, _Element, bchr(0)) - - def test2(self): - # Test addition - e = _Element(0x10) - f = _Element(0x0A) - self.assertEqual(int(e+f), 0x1A) - - def test3(self): - # Test multiplication - zero = _Element(0) - one = _Element(1) - two = _Element(2) - - x = _Element(6) * zero - self.assertEqual(int(x), 0) - - x = _Element(6) * one - self.assertEqual(int(x), 6) - - x = _Element(2**127) * two - self.assertEqual(int(x), 1 + 2 + 4 + 128) - - def test4(self): - # Test inversion - one = _Element(1) - - x = one.inverse() - self.assertEqual(int(x), 1) - - x = _Element(82323923) - y = x.inverse() - self.assertEqual(int(x * y), 1) - -class Shamir_Tests(TestCase): - - def test1(self): - # Test splitting - shares = Shamir.split(2, 3, bchr(90)*16) - self.assertEqual(len(shares), 3) - for index in range(3): - self.assertEqual(shares[index][0], index+1) - self.assertEqual(len(shares[index][1]), 16) - - def 
test2(self): - # Test recombine - from itertools import permutations - - test_vectors = ( - (2, "d9fe73909bae28b3757854c0af7ad405", - "1-594ae8964294174d95c33756d2504170", - "2-d897459d29da574eb40e93ec552ffe6e", - "3-5823de9bf0e068b054b5f07a28056b1b", - "4-db2c1f8bff46d748f795da995bd080cb"), - (2, "bf4f902d9a7efafd1f3ffd9291fd5de9", - "1-557bd3b0748064b533469722d1cc7935", - "2-6b2717164783c66d47cd28f2119f14d0", - "3-8113548ba97d58256bb4424251ae300c", - "4-179e9e5a218483ddaeda57539139cf04"), - (3, "ec96aa5c14c9faa699354cf1da74e904", - "1-64579fbf1908d66f7239bf6e2b4e41e1", - "2-6cd9428df8017b52322561e8c672ae3e", - "3-e418776ef5c0579bd9299277374806dd", - "4-ab3f77a0107398d23b323e581bb43f5d", - "5-23fe42431db2b41bd03ecdc7ea8e97ac"), - (3, "44cf249b68b80fcdc27b47be60c2c145", - "1-d6515a3905cd755119b86e311c801e31", - "2-16693d9ac9f10c254036ced5f8917fa3", - "3-84f74338a48476b99bf5e75a84d3a0d1", - "4-3fe8878dc4a5d35811cf3cbcd33dbe52", - "5-ad76f92fa9d0a9c4ca0c1533af7f6132"), - (5, "5398717c982db935d968eebe53a47f5a", - "1-be7be2dd4c068e7ef576aaa1b1c11b01", - "2-f821f5848441cb98b3eb467e2733ee21", - "3-25ee52f53e203f6e29a0297b5ab486b5", - "4-fc9fb58ef74dab947fbf9acd9d5d83cd", - "5-b1949cce46d81552e65f248d3f74cc5c", - "6-d64797f59977c4d4a7956ad916da7699", - "7-ab608a6546a8b9af8820ff832b1135c7"), - (5, "4a78db90fbf35da5545d2fb728e87596", - "1-08daf9a25d8aa184cfbf02b30a0ed6a0", - "2-dda28261e36f0b14168c2cf153fb734e", - "3-e9fdec5505d674a57f9836c417c1ecaa", - "4-4dce5636ae06dee42d2c82e65f06c735", - "5-3963dc118afc2ba798fa1d452b28ef00", - "6-6dfe6ff5b09e94d2f84c382b12f42424", - "7-6faea9d4d4a4e201bf6c90b9000630c3"), - (10, "eccbf6d66d680b49b073c4f1ddf804aa", - "01-7d8ac32fe4ae209ead1f3220fda34466", - "02-f9144e76988aad647d2e61353a6e96d5", - "03-b14c3b80179203363922d60760271c98", - "04-770bb2a8c28f6cee89e00f4d5cc7f861", - "05-6e3d7073ea368334ef67467871c66799", - "06-248792bc74a98ce024477c13c8fb5f8d", - "07-fcea4640d2db820c0604851e293d2487", - "08-2776c36fb714bb1f8525a0be36fc7dba", - "09-6ee7ac8be773e473a4bf75ee5f065762", - "10-33657fc073354cf91d4a68c735aacfc8", - "11-7645c65094a5868bf225c516fdee2d0c", - "12-840485aacb8226631ecd9c70e3018086"), - (10, "377e63bdbb5f7d4dc58a483d035212bb", - "01-32c53260103be431c843b1a633afe3bd", - "02-0107eb16cb8695084d452d2cc50bc7d6", - "03-df1e5c66cd755287fb0446faccd72a06", - "04-361bbcd5d40797f49dfa1898652da197", - "05-160d3ad1512f7dec7fd9344aed318591", - "06-659af6d95df4f25beca4fb9bfee3b7e8", - "07-37f3b208977bad50b3724566b72bfa9d", - "08-6c1de2dfc69c2986142c26a8248eb316", - "09-5e19220837a396bd4bc8cd685ff314c3", - "10-86e7b864fb0f3d628e46d50c1ba92f1c", - "11-065d0082c80b1aea18f4abe0c49df72e", - "12-84a09430c1d20ea9f388f3123c3733a3"), - ) - - def get_share(p): - pos = p.find('-') - return int(p[:pos]), unhexlify(p[pos + 1:]) - - for tv in test_vectors: - k = tv[0] - secret = unhexlify(tv[1]) - max_perms = 10 - for perm, shares_idx in enumerate(permutations(range(2, len(tv)), k)): - if perm > max_perms: - break - shares = [ get_share(tv[x]) for x in shares_idx ] - result = Shamir.combine(shares, True) - self.assertEqual(secret, result) - - def test3(self): - # Loopback split/recombine - secret = unhexlify(b("000102030405060708090a0b0c0d0e0f")) - - shares = Shamir.split(2, 3, secret) - - secret2 = Shamir.combine(shares[:2]) - self.assertEqual(secret, secret2) - - secret3 = Shamir.combine([ shares[0], shares[2] ]) - self.assertEqual(secret, secret3) - - def test4(self): - # Loopback split/recombine (SSSS) - secret = unhexlify(b("000102030405060708090a0b0c0d0e0f")) - - shares 
= Shamir.split(2, 3, secret, ssss=True) - - secret2 = Shamir.combine(shares[:2], ssss=True) - self.assertEqual(secret, secret2) - - def test5(self): - # Detect duplicate shares - secret = unhexlify(b("000102030405060708090a0b0c0d0e0f")) - - shares = Shamir.split(2, 3, secret) - self.assertRaises(ValueError, Shamir.combine, (shares[0], shares[0])) - - -def get_tests(config={}): - tests = [] - tests += list_test_cases(GF2_Tests) - tests += list_test_cases(Element_Tests) - tests += list_test_cases(Shamir_Tests) - return tests - -if __name__ == '__main__': - suite = lambda: TestSuite(get_tests()) - main(defaultTest='suite') - diff --git a/spaces/jordonpeter01/MusicGen2/tests/models/test_encodec_model.py b/spaces/jordonpeter01/MusicGen2/tests/models/test_encodec_model.py deleted file mode 100644 index 2f9c1db3f69a45f02451b71da95f44356811acbb..0000000000000000000000000000000000000000 --- a/spaces/jordonpeter01/MusicGen2/tests/models/test_encodec_model.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import random - -import numpy as np -import torch - -from audiocraft.models import EncodecModel -from audiocraft.modules import SEANetEncoder, SEANetDecoder -from audiocraft.quantization import DummyQuantizer - - -class TestEncodecModel: - - def _create_encodec_model(self, - sample_rate: int, - channels: int, - dim: int = 5, - n_filters: int = 3, - n_residual_layers: int = 1, - ratios: list = [5, 4, 3, 2], - **kwargs): - frame_rate = np.prod(ratios) - encoder = SEANetEncoder(channels=channels, dimension=dim, n_filters=n_filters, - n_residual_layers=n_residual_layers, ratios=ratios) - decoder = SEANetDecoder(channels=channels, dimension=dim, n_filters=n_filters, - n_residual_layers=n_residual_layers, ratios=ratios) - quantizer = DummyQuantizer() - model = EncodecModel(encoder, decoder, quantizer, frame_rate=frame_rate, - sample_rate=sample_rate, channels=channels, **kwargs) - return model - - def test_model(self): - random.seed(1234) - sample_rate = 24_000 - channels = 1 - model = self._create_encodec_model(sample_rate, channels) - for _ in range(10): - length = random.randrange(1, 10_000) - x = torch.randn(2, channels, length) - res = model(x) - assert res.x.shape == x.shape - - def test_model_renorm(self): - random.seed(1234) - sample_rate = 24_000 - channels = 1 - model_nonorm = self._create_encodec_model(sample_rate, channels, renormalize=False) - model_renorm = self._create_encodec_model(sample_rate, channels, renormalize=True) - - for _ in range(10): - length = random.randrange(1, 10_000) - x = torch.randn(2, channels, length) - codes, scales = model_nonorm.encode(x) - codes, scales = model_renorm.encode(x) - assert scales is not None diff --git a/spaces/jroust/rooster/README.md b/spaces/jroust/rooster/README.md deleted file mode 100644 index 8ce902261ae7b0d96884f5d17d59c0f9032a9385..0000000000000000000000000000000000000000 --- a/spaces/jroust/rooster/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Rooster -emoji: 🏢 -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.16.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/junkmind/SOTER/training/tools/schedulers.py b/spaces/junkmind/SOTER/training/tools/schedulers.py deleted file mode 100644 index 
e41f1a6fd8a913d382c2a5f99c23d7946c5cd22a..0000000000000000000000000000000000000000 --- a/spaces/junkmind/SOTER/training/tools/schedulers.py +++ /dev/null @@ -1,46 +0,0 @@ -from bisect import bisect_right - -from torch.optim.lr_scheduler import _LRScheduler - - -class LRStepScheduler(_LRScheduler): - def __init__(self, optimizer, steps, last_epoch=-1): - self.lr_steps = steps - super().__init__(optimizer, last_epoch) - - def get_lr(self): - pos = max(bisect_right([x for x, y in self.lr_steps], self.last_epoch) - 1, 0) - return [self.lr_steps[pos][1] if self.lr_steps[pos][0] <= self.last_epoch else base_lr for base_lr in self.base_lrs] - - -class PolyLR(_LRScheduler): - """Sets the learning rate of each parameter group according to poly learning rate policy - """ - def __init__(self, optimizer, max_iter=90000, power=0.9, last_epoch=-1): - self.max_iter = max_iter - self.power = power - super(PolyLR, self).__init__(optimizer, last_epoch) - - def get_lr(self): - self.last_epoch = (self.last_epoch + 1) % self.max_iter - return [base_lr * ((1 - float(self.last_epoch) / self.max_iter) ** (self.power)) for base_lr in self.base_lrs] - -class ExponentialLRScheduler(_LRScheduler): - """Decays the learning rate of each parameter group by gamma every epoch. - When last_epoch=-1, sets initial lr as lr. - - Args: - optimizer (Optimizer): Wrapped optimizer. - gamma (float): Multiplicative factor of learning rate decay. - last_epoch (int): The index of last epoch. Default: -1. - """ - - def __init__(self, optimizer, gamma, last_epoch=-1): - self.gamma = gamma - super(ExponentialLRScheduler, self).__init__(optimizer, last_epoch) - - def get_lr(self): - if self.last_epoch <= 0: - return self.base_lrs - return [base_lr * self.gamma**self.last_epoch for base_lr in self.base_lrs] - diff --git a/spaces/justYu2001/furniture-detection/utils/add_nms.py b/spaces/justYu2001/furniture-detection/utils/add_nms.py deleted file mode 100644 index 0a1f7976a2051d07bb028f9fd68eb52f45234f43..0000000000000000000000000000000000000000 --- a/spaces/justYu2001/furniture-detection/utils/add_nms.py +++ /dev/null @@ -1,155 +0,0 @@ -import numpy as np -import onnx -from onnx import shape_inference -try: - import onnx_graphsurgeon as gs -except Exception as e: - print('Import onnx_graphsurgeon failure: %s' % e) - -import logging - -LOGGER = logging.getLogger(__name__) - -class RegisterNMS(object): - def __init__( - self, - onnx_model_path: str, - precision: str = "fp32", - ): - - self.graph = gs.import_onnx(onnx.load(onnx_model_path)) - assert self.graph - LOGGER.info("ONNX graph created successfully") - # Fold constants via ONNX-GS that PyTorch2ONNX may have missed - self.graph.fold_constants() - self.precision = precision - self.batch_size = 1 - def infer(self): - """ - Sanitize the graph by cleaning any unconnected nodes, do a topological resort, - and fold constant inputs values. When possible, run shape inference on the - ONNX graph to determine tensor shapes. 
- """ - for _ in range(3): - count_before = len(self.graph.nodes) - - self.graph.cleanup().toposort() - try: - for node in self.graph.nodes: - for o in node.outputs: - o.shape = None - model = gs.export_onnx(self.graph) - model = shape_inference.infer_shapes(model) - self.graph = gs.import_onnx(model) - except Exception as e: - LOGGER.info(f"Shape inference could not be performed at this time:\n{e}") - try: - self.graph.fold_constants(fold_shapes=True) - except TypeError as e: - LOGGER.error( - "This version of ONNX GraphSurgeon does not support folding shapes, " - f"please upgrade your onnx_graphsurgeon module. Error:\n{e}" - ) - raise - - count_after = len(self.graph.nodes) - if count_before == count_after: - # No new folding occurred in this iteration, so we can stop for now. - break - - def save(self, output_path): - """ - Save the ONNX model to the given location. - Args: - output_path: Path pointing to the location where to write - out the updated ONNX model. - """ - self.graph.cleanup().toposort() - model = gs.export_onnx(self.graph) - onnx.save(model, output_path) - LOGGER.info(f"Saved ONNX model to {output_path}") - - def register_nms( - self, - *, - score_thresh: float = 0.25, - nms_thresh: float = 0.45, - detections_per_img: int = 100, - ): - """ - Register the ``EfficientNMS_TRT`` plugin node. - NMS expects these shapes for its input tensors: - - box_net: [batch_size, number_boxes, 4] - - class_net: [batch_size, number_boxes, number_labels] - Args: - score_thresh (float): The scalar threshold for score (low scoring boxes are removed). - nms_thresh (float): The scalar threshold for IOU (new boxes that have high IOU - overlap with previously selected boxes are removed). - detections_per_img (int): Number of best detections to keep after NMS. - """ - - self.infer() - # Find the concat node at the end of the network - op_inputs = self.graph.outputs - op = "EfficientNMS_TRT" - attrs = { - "plugin_version": "1", - "background_class": -1, # no background class - "max_output_boxes": detections_per_img, - "score_threshold": score_thresh, - "iou_threshold": nms_thresh, - "score_activation": False, - "box_coding": 0, - } - - if self.precision == "fp32": - dtype_output = np.float32 - elif self.precision == "fp16": - dtype_output = np.float16 - else: - raise NotImplementedError(f"Currently not supports precision: {self.precision}") - - # NMS Outputs - output_num_detections = gs.Variable( - name="num_dets", - dtype=np.int32, - shape=[self.batch_size, 1], - ) # A scalar indicating the number of valid detections per batch image. - output_boxes = gs.Variable( - name="det_boxes", - dtype=dtype_output, - shape=[self.batch_size, detections_per_img, 4], - ) - output_scores = gs.Variable( - name="det_scores", - dtype=dtype_output, - shape=[self.batch_size, detections_per_img], - ) - output_labels = gs.Variable( - name="det_classes", - dtype=np.int32, - shape=[self.batch_size, detections_per_img], - ) - - op_outputs = [output_num_detections, output_boxes, output_scores, output_labels] - - # Create the NMS Plugin node with the selected inputs. The outputs of the node will also - # become the final outputs of the graph. - self.graph.layer(op=op, name="batched_nms", inputs=op_inputs, outputs=op_outputs, attrs=attrs) - LOGGER.info(f"Created NMS plugin '{op}' with attributes: {attrs}") - - self.graph.outputs = op_outputs - - self.infer() - - def save(self, output_path): - """ - Save the ONNX model to the given location. 
- Args: - output_path: Path pointing to the location where to write - out the updated ONNX model. - """ - self.graph.cleanup().toposort() - model = gs.export_onnx(self.graph) - onnx.save(model, output_path) - LOGGER.info(f"Saved ONNX model to {output_path}") diff --git a/spaces/jyseo/3DFuse/ldm/models/diffusion/__init__.py b/spaces/jyseo/3DFuse/ldm/models/diffusion/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/kandysh/NER_Tagger/app.py b/spaces/kandysh/NER_Tagger/app.py deleted file mode 100644 index fab02c1224073c915a730cf739b398d7e224c1b9..0000000000000000000000000000000000000000 --- a/spaces/kandysh/NER_Tagger/app.py +++ /dev/null @@ -1,46 +0,0 @@ -import spacy_streamlit -import streamlit as st -import json -from normalizer import process_df -from process_tags import list_ents, no_of_tags, color_creator, scatter_document - - -def main(): - st.set_page_config(layout='wide') - st.info( - "Tags Json is the file we get from exporting tags from d555 tag editor. You may need to format it to be in proper json format. For the demo, use this Tag json file https://github.com/kandysh/odin_json_sheets/blob/main/tag_color.json") - with st.sidebar: - uploaded_file = st.file_uploader("Upload the SENTENCE Json", type="json") - uploaded_color = st.file_uploader("Upload the TAG Json", type="json") - if uploaded_file and uploaded_color is not None: - raw_data = json.load(uploaded_file) - tags_data = json.load(uploaded_color) - st.title(f'{uploaded_file.name.split(".")[0].upper()}') - df_list = [process_df(data) for data in raw_data] - st.plotly_chart(scatter_document(df_list, tags_data), use_container_width=True) - key = 0 - for df in df_list: - ents = list_ents(df) - tags = list(no_of_tags(df).keys()) - doc = [{ - "text": ' '.join(df['words']), - "ents": ents, - "title": None - }] - st.text(f"Sentence {key}") - spacy_streamlit.visualize_ner( - doc, - labels=tags, - show_table=False, - title=None, - manual=True, - displacy_options={ - "colors": color_creator(tags_data["NER"]) - }, - key=f"{key}" - ) - key += 1 - - -if __name__ == "__main__": - main() diff --git a/spaces/keivan/Is_he_fat/app.py b/spaces/keivan/Is_he_fat/app.py deleted file mode 100644 index 891249670d1533a2e33421e1afa760ad73a5f76d..0000000000000000000000000000000000000000 --- a/spaces/keivan/Is_he_fat/app.py +++ /dev/null @@ -1,18 +0,0 @@ - -from fastai.vision.all import * -import gradio as gr - -learn = load_learner('model.pkl') - -categories = ('Fat' , 'Skinny') - -def classify_image(img): - pred,idx,probs = learn.predict(img) - return dict(zip(categories, map(float,probs))) - -image = gr.inputs.Image(shape=(192, 192)) -label = gr.outputs.Label() -examples = ['fat.jfif', 'fat1.jfif', 'skinny.jfif', 'skinny1.jfif'] - -intf = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples) -intf.launch(inline=False) diff --git a/spaces/keneonyeachonam/sileod-deberta-v3-base-tasksource-nli-021423/README.md b/spaces/keneonyeachonam/sileod-deberta-v3-base-tasksource-nli-021423/README.md deleted file mode 100644 index 71fd1d8d07a2ebf2c0187821ca0b69b1f51ac07f..0000000000000000000000000000000000000000 --- a/spaces/keneonyeachonam/sileod-deberta-v3-base-tasksource-nli-021423/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Sileod Deberta V3 Base Tasksource Nli 021423 -emoji: 🌍 -colorFrom: gray -colorTo: blue -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at 
https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/backbones/uniformer.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/backbones/uniformer.py deleted file mode 100644 index 0c4bb88e4c928540cca9ab609988b916520f5b7a..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/backbones/uniformer.py +++ /dev/null @@ -1,422 +0,0 @@ -# -------------------------------------------------------- -# UniFormer -# Copyright (c) 2022 SenseTime X-Lab -# Licensed under The MIT License [see LICENSE for details] -# Written by Kunchang Li -# -------------------------------------------------------- - -from collections import OrderedDict -import math - -from functools import partial -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -import numpy as np -from timm.models.layers import DropPath, to_2tuple, trunc_normal_ - -from annotator.uniformer.mmcv_custom import load_checkpoint -from annotator.uniformer.mmseg.utils import get_root_logger -from ..builder import BACKBONES - - -class Mlp(nn.Module): - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -class CMlp(nn.Module): - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Conv2d(in_features, hidden_features, 1) - self.act = act_layer() - self.fc2 = nn.Conv2d(hidden_features, out_features, 1) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -class CBlock(nn.Module): - def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim) - self.norm1 = nn.BatchNorm2d(dim) - self.conv1 = nn.Conv2d(dim, dim, 1) - self.conv2 = nn.Conv2d(dim, dim, 1) - self.attn = nn.Conv2d(dim, dim, 5, padding=2, groups=dim) - # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here - self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() - self.norm2 = nn.BatchNorm2d(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = CMlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - def forward(self, x): - x = x + self.pos_embed(x) - x = x + self.drop_path(self.conv2(self.attn(self.conv1(self.norm1(x))))) - x = x + self.drop_path(self.mlp(self.norm2(x))) - return x - - -class Attention(nn.Module): - def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0.): - super().__init__() - self.num_heads = num_heads - head_dim = dim // num_heads - # NOTE scale factor was wrong in my original version, can set manually to be compat with prev weights - self.scale = qk_scale or head_dim ** -0.5 - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - def forward(self, x): - B, N, C = x.shape - qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - attn = (q @ k.transpose(-2, -1)) * self.scale - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class SABlock(nn.Module): - def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim) - self.norm1 = norm_layer(dim) - self.attn = Attention( - dim, - num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, - attn_drop=attn_drop, proj_drop=drop) - # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here - self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - def forward(self, x): - x = x + self.pos_embed(x) - B, N, H, W = x.shape - x = x.flatten(2).transpose(1, 2) - x = x + self.drop_path(self.attn(self.norm1(x))) - x = x + self.drop_path(self.mlp(self.norm2(x))) - x = x.transpose(1, 2).reshape(B, N, H, W) - return x - - -def window_partition(x, window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - window_size (int): Window size - H (int): Height of image - W (int): Width of image - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class SABlock_Windows(nn.Module): - def __init__(self, dim, num_heads, window_size=14, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.window_size=window_size - self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim) - self.norm1 = norm_layer(dim) - self.attn = Attention( - dim, - num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, - attn_drop=attn_drop, proj_drop=drop) - # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here - self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - def forward(self, x): - x = x + self.pos_embed(x) - x = x.permute(0, 2, 3, 1) - B, H, W, C = x.shape - shortcut = x - x = self.norm1(x) - - pad_l = pad_t = 0 - pad_r = (self.window_size - W % self.window_size) % self.window_size - pad_b = (self.window_size - H % self.window_size) % self.window_size - x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b)) - _, Hp, Wp, _ = x.shape - - x_windows = window_partition(x, self.window_size) # nW*B, window_size, window_size, C - x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn(x_windows) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C - - # reverse cyclic shift - if pad_r > 0 or pad_b > 0: - x = x[:, :H, :W, :].contiguous() - - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - x = x.permute(0, 3, 1, 2).reshape(B, C, H, W) - return x - - -class PatchEmbed(nn.Module): - """ Image to Patch Embedding - """ - def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768): - super().__init__() - img_size = to_2tuple(img_size) - patch_size = to_2tuple(patch_size) - num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0]) - self.img_size = img_size - self.patch_size = patch_size - self.num_patches = num_patches - self.norm = nn.LayerNorm(embed_dim) - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - - def forward(self, x): - B, _, H, W = x.shape - x = self.proj(x) - B, _, H, W = x.shape - x = x.flatten(2).transpose(1, 2) - x = self.norm(x) - x = x.reshape(B, H, W, -1).permute(0, 3, 1, 2).contiguous() - return x - - -@BACKBONES.register_module() -class UniFormer(nn.Module): - """ Vision Transformer - A PyTorch impl of : `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale` - - https://arxiv.org/abs/2010.11929 - """ - def __init__(self, layers=[3, 4, 8, 3], img_size=224, in_chans=3, num_classes=80, embed_dim=[64, 128, 320, 512], - head_dim=64, mlp_ratio=4., qkv_bias=True, qk_scale=None, representation_size=None, - drop_rate=0., attn_drop_rate=0., drop_path_rate=0., norm_layer=partial(nn.LayerNorm, eps=1e-6), - pretrained_path=None, use_checkpoint=False, checkpoint_num=[0, 0, 0, 0], - windows=False, hybrid=False, window_size=14): - """ - Args: - layer (list): number of block in each layer - img_size (int, tuple): input image size - in_chans (int): number of input channels - num_classes (int): number of classes for classification head - embed_dim (int): embedding dimension - head_dim (int): dimension of attention heads - mlp_ratio (int): ratio of mlp hidden dim to embedding dim - qkv_bias (bool): enable bias for qkv if True - qk_scale (float): override default qk scale of head_dim ** -0.5 if set - representation_size (Optional[int]): enable and set representation layer (pre-logits) to this value if set - drop_rate (float): dropout rate - attn_drop_rate (float): attention dropout rate - drop_path_rate (float): stochastic depth rate - norm_layer (nn.Module): normalization layer - pretrained_path (str): path of pretrained model - use_checkpoint (bool): whether use checkpoint - checkpoint_num (list): 
index for using checkpoint in every stage - windows (bool): whether use window MHRA - hybrid (bool): whether use hybrid MHRA - window_size (int): size of window (>14) - """ - super().__init__() - self.num_classes = num_classes - self.use_checkpoint = use_checkpoint - self.checkpoint_num = checkpoint_num - self.windows = windows - print(f'Use Checkpoint: {self.use_checkpoint}') - print(f'Checkpoint Number: {self.checkpoint_num}') - self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models - norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6) - - self.patch_embed1 = PatchEmbed( - img_size=img_size, patch_size=4, in_chans=in_chans, embed_dim=embed_dim[0]) - self.patch_embed2 = PatchEmbed( - img_size=img_size // 4, patch_size=2, in_chans=embed_dim[0], embed_dim=embed_dim[1]) - self.patch_embed3 = PatchEmbed( - img_size=img_size // 8, patch_size=2, in_chans=embed_dim[1], embed_dim=embed_dim[2]) - self.patch_embed4 = PatchEmbed( - img_size=img_size // 16, patch_size=2, in_chans=embed_dim[2], embed_dim=embed_dim[3]) - - self.pos_drop = nn.Dropout(p=drop_rate) - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(layers))] # stochastic depth decay rule - num_heads = [dim // head_dim for dim in embed_dim] - self.blocks1 = nn.ModuleList([ - CBlock( - dim=embed_dim[0], num_heads=num_heads[0], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer) - for i in range(layers[0])]) - self.norm1=norm_layer(embed_dim[0]) - self.blocks2 = nn.ModuleList([ - CBlock( - dim=embed_dim[1], num_heads=num_heads[1], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]], norm_layer=norm_layer) - for i in range(layers[1])]) - self.norm2 = norm_layer(embed_dim[1]) - if self.windows: - print('Use local window for all blocks in stage3') - self.blocks3 = nn.ModuleList([ - SABlock_Windows( - dim=embed_dim[2], num_heads=num_heads[2], window_size=window_size, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer) - for i in range(layers[2])]) - elif hybrid: - print('Use hybrid window for blocks in stage3') - block3 = [] - for i in range(layers[2]): - if (i + 1) % 4 == 0: - block3.append(SABlock( - dim=embed_dim[2], num_heads=num_heads[2], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer)) - else: - block3.append(SABlock_Windows( - dim=embed_dim[2], num_heads=num_heads[2], window_size=window_size, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer)) - self.blocks3 = nn.ModuleList(block3) - else: - print('Use global window for all blocks in stage3') - self.blocks3 = nn.ModuleList([ - SABlock( - dim=embed_dim[2], num_heads=num_heads[2], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer) - for i in range(layers[2])]) - self.norm3 = norm_layer(embed_dim[2]) - self.blocks4 = nn.ModuleList([ - SABlock( - dim=embed_dim[3], num_heads=num_heads[3], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, 
drop_path=dpr[i+layers[0]+layers[1]+layers[2]], norm_layer=norm_layer) - for i in range(layers[3])]) - self.norm4 = norm_layer(embed_dim[3]) - - # Representation layer - if representation_size: - self.num_features = representation_size - self.pre_logits = nn.Sequential(OrderedDict([ - ('fc', nn.Linear(embed_dim, representation_size)), - ('act', nn.Tanh()) - ])) - else: - self.pre_logits = nn.Identity() - - self.apply(self._init_weights) - self.init_weights(pretrained=pretrained_path) - - def init_weights(self, pretrained): - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, map_location='cpu', strict=False, logger=logger) - print(f'Load pretrained model from {pretrained}') - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - @torch.jit.ignore - def no_weight_decay(self): - return {'pos_embed', 'cls_token'} - - def get_classifier(self): - return self.head - - def reset_classifier(self, num_classes, global_pool=''): - self.num_classes = num_classes - self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity() - - def forward_features(self, x): - out = [] - x = self.patch_embed1(x) - x = self.pos_drop(x) - for i, blk in enumerate(self.blocks1): - if self.use_checkpoint and i < self.checkpoint_num[0]: - x = checkpoint.checkpoint(blk, x) - else: - x = blk(x) - x_out = self.norm1(x.permute(0, 2, 3, 1)) - out.append(x_out.permute(0, 3, 1, 2).contiguous()) - x = self.patch_embed2(x) - for i, blk in enumerate(self.blocks2): - if self.use_checkpoint and i < self.checkpoint_num[1]: - x = checkpoint.checkpoint(blk, x) - else: - x = blk(x) - x_out = self.norm2(x.permute(0, 2, 3, 1)) - out.append(x_out.permute(0, 3, 1, 2).contiguous()) - x = self.patch_embed3(x) - for i, blk in enumerate(self.blocks3): - if self.use_checkpoint and i < self.checkpoint_num[2]: - x = checkpoint.checkpoint(blk, x) - else: - x = blk(x) - x_out = self.norm3(x.permute(0, 2, 3, 1)) - out.append(x_out.permute(0, 3, 1, 2).contiguous()) - x = self.patch_embed4(x) - for i, blk in enumerate(self.blocks4): - if self.use_checkpoint and i < self.checkpoint_num[3]: - x = checkpoint.checkpoint(blk, x) - else: - x = blk(x) - x_out = self.norm4(x.permute(0, 2, 3, 1)) - out.append(x_out.permute(0, 3, 1, 2).contiguous()) - return tuple(out) - - def forward(self, x): - x = self.forward_features(x) - return x diff --git a/spaces/koubi888/uptime/README.md b/spaces/koubi888/uptime/README.md deleted file mode 100644 index fcdf7e9140417ea86e9aacb8d9d6c6d441de4d15..0000000000000000000000000000000000000000 --- a/spaces/koubi888/uptime/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Uptime -emoji: 👁 -colorFrom: pink -colorTo: gray -sdk: docker -pinned: false -license: mit -app_port: 3001 -duplicated_from: Anyexyz/uptime ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/anyio/streams/text.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/anyio/streams/text.py deleted file mode 100644 index bba2d3f7dfffa3bdbf921bdad4ca7143be97c2fd..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/anyio/streams/text.py +++ /dev/null 
@@ -1,143 +0,0 @@ -from __future__ import annotations - -import codecs -from dataclasses import InitVar, dataclass, field -from typing import Any, Callable, Mapping - -from ..abc import ( - AnyByteReceiveStream, - AnyByteSendStream, - AnyByteStream, - ObjectReceiveStream, - ObjectSendStream, - ObjectStream, -) - - -@dataclass(eq=False) -class TextReceiveStream(ObjectReceiveStream[str]): - """ - Stream wrapper that decodes bytes to strings using the given encoding. - - Decoding is done using :class:`~codecs.IncrementalDecoder` which returns any completely - received unicode characters as soon as they come in. - - :param transport_stream: any bytes-based receive stream - :param encoding: character encoding to use for decoding bytes to strings (defaults to - ``utf-8``) - :param errors: handling scheme for decoding errors (defaults to ``strict``; see the - `codecs module documentation`_ for a comprehensive list of options) - - .. _codecs module documentation: https://docs.python.org/3/library/codecs.html#codec-objects - """ - - transport_stream: AnyByteReceiveStream - encoding: InitVar[str] = "utf-8" - errors: InitVar[str] = "strict" - _decoder: codecs.IncrementalDecoder = field(init=False) - - def __post_init__(self, encoding: str, errors: str) -> None: - decoder_class = codecs.getincrementaldecoder(encoding) - self._decoder = decoder_class(errors=errors) - - async def receive(self) -> str: - while True: - chunk = await self.transport_stream.receive() - decoded = self._decoder.decode(chunk) - if decoded: - return decoded - - async def aclose(self) -> None: - await self.transport_stream.aclose() - self._decoder.reset() - - @property - def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]: - return self.transport_stream.extra_attributes - - -@dataclass(eq=False) -class TextSendStream(ObjectSendStream[str]): - """ - Sends strings to the wrapped stream as bytes using the given encoding. - - :param AnyByteSendStream transport_stream: any bytes-based send stream - :param str encoding: character encoding to use for encoding strings to bytes (defaults to - ``utf-8``) - :param str errors: handling scheme for encoding errors (defaults to ``strict``; see the - `codecs module documentation`_ for a comprehensive list of options) - - .. _codecs module documentation: https://docs.python.org/3/library/codecs.html#codec-objects - """ - - transport_stream: AnyByteSendStream - encoding: InitVar[str] = "utf-8" - errors: str = "strict" - _encoder: Callable[..., tuple[bytes, int]] = field(init=False) - - def __post_init__(self, encoding: str) -> None: - self._encoder = codecs.getencoder(encoding) - - async def send(self, item: str) -> None: - encoded = self._encoder(item, self.errors)[0] - await self.transport_stream.send(encoded) - - async def aclose(self) -> None: - await self.transport_stream.aclose() - - @property - def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]: - return self.transport_stream.extra_attributes - - -@dataclass(eq=False) -class TextStream(ObjectStream[str]): - """ - A bidirectional stream that decodes bytes to strings on receive and encodes strings to bytes on - send. - - Extra attributes will be provided from both streams, with the receive stream providing the - values in case of a conflict. 
- - :param AnyByteStream transport_stream: any bytes-based stream - :param str encoding: character encoding to use for encoding/decoding strings to/from bytes - (defaults to ``utf-8``) - :param str errors: handling scheme for encoding errors (defaults to ``strict``; see the - `codecs module documentation`_ for a comprehensive list of options) - - .. _codecs module documentation: https://docs.python.org/3/library/codecs.html#codec-objects - """ - - transport_stream: AnyByteStream - encoding: InitVar[str] = "utf-8" - errors: InitVar[str] = "strict" - _receive_stream: TextReceiveStream = field(init=False) - _send_stream: TextSendStream = field(init=False) - - def __post_init__(self, encoding: str, errors: str) -> None: - self._receive_stream = TextReceiveStream( - self.transport_stream, encoding=encoding, errors=errors - ) - self._send_stream = TextSendStream( - self.transport_stream, encoding=encoding, errors=errors - ) - - async def receive(self) -> str: - return await self._receive_stream.receive() - - async def send(self, item: str) -> None: - await self._send_stream.send(item) - - async def send_eof(self) -> None: - await self.transport_stream.send_eof() - - async def aclose(self) -> None: - await self._send_stream.aclose() - await self._receive_stream.aclose() - - @property - def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]: - return { - **self._send_stream.extra_attributes, - **self._receive_stream.extra_attributes, - } diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/pens/transformPen.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/pens/transformPen.py deleted file mode 100644 index 2e572f612e6a29d0a782a0b278deaed9f98f5127..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/pens/transformPen.py +++ /dev/null @@ -1,111 +0,0 @@ -from fontTools.pens.filterPen import FilterPen, FilterPointPen - - -__all__ = ["TransformPen", "TransformPointPen"] - - -class TransformPen(FilterPen): - - """Pen that transforms all coordinates using a Affine transformation, - and passes them to another pen. - """ - - def __init__(self, outPen, transformation): - """The 'outPen' argument is another pen object. It will receive the - transformed coordinates. The 'transformation' argument can either - be a six-tuple, or a fontTools.misc.transform.Transform object. 
- """ - super(TransformPen, self).__init__(outPen) - if not hasattr(transformation, "transformPoint"): - from fontTools.misc.transform import Transform - - transformation = Transform(*transformation) - self._transformation = transformation - self._transformPoint = transformation.transformPoint - self._stack = [] - - def moveTo(self, pt): - self._outPen.moveTo(self._transformPoint(pt)) - - def lineTo(self, pt): - self._outPen.lineTo(self._transformPoint(pt)) - - def curveTo(self, *points): - self._outPen.curveTo(*self._transformPoints(points)) - - def qCurveTo(self, *points): - if points[-1] is None: - points = self._transformPoints(points[:-1]) + [None] - else: - points = self._transformPoints(points) - self._outPen.qCurveTo(*points) - - def _transformPoints(self, points): - transformPoint = self._transformPoint - return [transformPoint(pt) for pt in points] - - def closePath(self): - self._outPen.closePath() - - def endPath(self): - self._outPen.endPath() - - def addComponent(self, glyphName, transformation): - transformation = self._transformation.transform(transformation) - self._outPen.addComponent(glyphName, transformation) - - -class TransformPointPen(FilterPointPen): - """PointPen that transforms all coordinates using a Affine transformation, - and passes them to another PointPen. - - >>> from fontTools.pens.recordingPen import RecordingPointPen - >>> rec = RecordingPointPen() - >>> pen = TransformPointPen(rec, (2, 0, 0, 2, -10, 5)) - >>> v = iter(rec.value) - >>> pen.beginPath(identifier="contour-0") - >>> next(v) - ('beginPath', (), {'identifier': 'contour-0'}) - >>> pen.addPoint((100, 100), "line") - >>> next(v) - ('addPoint', ((190, 205), 'line', False, None), {}) - >>> pen.endPath() - >>> next(v) - ('endPath', (), {}) - >>> pen.addComponent("a", (1, 0, 0, 1, -10, 5), identifier="component-0") - >>> next(v) - ('addComponent', ('a', ), {'identifier': 'component-0'}) - """ - - def __init__(self, outPointPen, transformation): - """The 'outPointPen' argument is another point pen object. - It will receive the transformed coordinates. - The 'transformation' argument can either be a six-tuple, or a - fontTools.misc.transform.Transform object. 
- """ - super().__init__(outPointPen) - if not hasattr(transformation, "transformPoint"): - from fontTools.misc.transform import Transform - - transformation = Transform(*transformation) - self._transformation = transformation - self._transformPoint = transformation.transformPoint - - def addPoint(self, pt, segmentType=None, smooth=False, name=None, **kwargs): - self._outPen.addPoint( - self._transformPoint(pt), segmentType, smooth, name, **kwargs - ) - - def addComponent(self, baseGlyphName, transformation, **kwargs): - transformation = self._transformation.transform(transformation) - self._outPen.addComponent(baseGlyphName, transformation, **kwargs) - - -if __name__ == "__main__": - from fontTools.pens.basePen import _TestPen - - pen = TransformPen(_TestPen(None), (2, 0, 0.5, 2, -10, 0)) - pen.moveTo((0, 0)) - pen.lineTo((0, 100)) - pen.curveTo((50, 75), (60, 50), (50, 25), (0, 0)) - pen.closePath() diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-dc0cb21d.js b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-dc0cb21d.js deleted file mode 100644 index c07e1917a90d50911eee1ec1612398938e5a2669..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-dc0cb21d.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as H,i as I,s as J,G as j,e as v,H as K,C,g as S,m as k,E as q,ad as y,J as E,al as U,p as w,t as N,q as T,n as B,a0 as O,r as P,a8 as Q,I as R,K as V,T as W,x as X,$ as Y,b as D,a as z,h as Z,j as p,k as G,y as g}from"./index-8c3da1d9.js";/* empty css */import{B as x}from"./Button-62634b34.js";/* empty css */import{B as $}from"./BlockTitle-338c46c0.js";import"./Info-b95ed9db.js";function ee(l){let e;return{c(){e=R(l[2])},m(n,t){S(n,e,t)},p(n,t){t&4&&V(e,n[2])},d(n){n&&T(e)}}}function le(l){let e,n,t,s,_,m,r;return n=new $({props:{show_label:l[4],info:l[3],$$slots:{default:[ee]},$$scope:{ctx:l}}}),{c(){e=j("label"),v(n.$$.fragment),t=K(),s=j("input"),C(s,"type","number"),s.disabled=l[1],C(s,"class","svelte-og1zwl"),C(e,"class","block")},m(i,f){S(i,e,f),k(n,e,null),q(e,t),q(e,s),y(s,l[0]),_=!0,m||(r=[E(s,"input",l[8]),E(s,"keypress",l[5]),E(s,"blur",l[6])],m=!0)},p(i,[f]){const c={};f&16&&(c.show_label=i[4]),f&8&&(c.info=i[3]),f&2052&&(c.$$scope={dirty:f,ctx:i}),n.$set(c),(!_||f&2)&&(s.disabled=i[1]),f&1&&U(s.value)!==i[0]&&y(s,i[0])},i(i){_||(w(n.$$.fragment,i),_=!0)},o(i){N(n.$$.fragment,i),_=!1},d(i){i&&T(e),B(n),m=!1,O(r)}}}function te(l,e,n){let{value:t=0}=e,{value_is_output:s=!1}=e,{disabled:_=!1}=e,{label:m}=e,{info:r=void 0}=e,{show_label:i=!0}=e;const f=P();function c(){!isNaN(t)&&t!==null&&(f("change",t),s||f("input"))}Q(()=>{n(7,s=!1)});async function h(o){await W(),o.key==="Enter"&&(o.preventDefault(),f("submit"))}function a(o){f("blur")}function b(){t=U(this.value),n(0,t)}return l.$$set=o=>{"value"in o&&n(0,t=o.value),"value_is_output"in o&&n(7,s=o.value_is_output),"disabled"in o&&n(1,_=o.disabled),"label"in o&&n(2,m=o.label),"info"in o&&n(3,r=o.info),"show_label"in o&&n(4,i=o.show_label)},l.$$.update=()=>{l.$$.dirty&1&&c()},[t,_,m,r,i,h,a,s,b]}class ne extends H{constructor(e){super(),I(this,e,te,le,J,{value:0,value_is_output:7,disabled:1,label:2,info:3,show_label:4})}}function ae(l){let e,n,t,s,_,m;const r=[l[9]];let i={};for(let 
a=0;az(t,"value",f)),D.push(()=>z(t,"value_is_output",c)),t.$on("change",l[13]),t.$on("input",l[14]),t.$on("submit",l[15]),t.$on("blur",l[16]),{c(){v(e.$$.fragment),n=K(),v(t.$$.fragment)},m(a,b){k(e,a,b),S(a,n,b),k(t,a,b),m=!0},p(a,b){const o=b&512?Z(r,[p(a[9])]):{};e.$set(o);const d={};b&4&&(d.label=a[2]),b&8&&(d.info=a[3]),b&256&&(d.show_label=a[8]),b&1024&&(d.disabled=a[10]==="static"),!s&&b&1&&(s=!0,d.value=a[0],G(()=>s=!1)),!_&&b&2&&(_=!0,d.value_is_output=a[1],G(()=>_=!1)),t.$set(d)},i(a){m||(w(e.$$.fragment,a),w(t.$$.fragment,a),m=!0)},o(a){N(e.$$.fragment,a),N(t.$$.fragment,a),m=!1},d(a){B(e,a),a&&T(n),B(t,a)}}}function ue(l){let e,n;return e=new x({props:{visible:l[6],elem_id:l[4],elem_classes:l[5],disable:typeof l[7].container=="boolean"&&!l[7].container,$$slots:{default:[ae]},$$scope:{ctx:l}}}),{c(){v(e.$$.fragment)},m(t,s){k(e,t,s),n=!0},p(t,[s]){const _={};s&64&&(_.visible=t[6]),s&16&&(_.elem_id=t[4]),s&32&&(_.elem_classes=t[5]),s&128&&(_.disable=typeof t[7].container=="boolean"&&!t[7].container),s&132879&&(_.$$scope={dirty:s,ctx:t}),e.$set(_)},i(t){n||(w(e.$$.fragment,t),n=!0)},o(t){N(e.$$.fragment,t),n=!1},d(t){B(e,t)}}}function se(l,e,n){let{label:t="Number"}=e,{info:s=void 0}=e,{elem_id:_=""}=e,{elem_classes:m=[]}=e,{visible:r=!0}=e,{style:i={}}=e,{value:f=0}=e,{show_label:c}=e,{loading_status:h}=e,{mode:a}=e,{value_is_output:b=!1}=e;function o(u){f=u,n(0,f)}function d(u){b=u,n(1,b)}function A(u){g.call(this,l,u)}function F(u){g.call(this,l,u)}function L(u){g.call(this,l,u)}function M(u){g.call(this,l,u)}return l.$$set=u=>{"label"in u&&n(2,t=u.label),"info"in u&&n(3,s=u.info),"elem_id"in u&&n(4,_=u.elem_id),"elem_classes"in u&&n(5,m=u.elem_classes),"visible"in u&&n(6,r=u.visible),"style"in u&&n(7,i=u.style),"value"in u&&n(0,f=u.value),"show_label"in u&&n(8,c=u.show_label),"loading_status"in u&&n(9,h=u.loading_status),"mode"in u&&n(10,a=u.mode),"value_is_output"in u&&n(1,b=u.value_is_output)},[f,b,t,s,_,m,r,i,c,h,a,o,d,A,F,L,M]}class ie extends H{constructor(e){super(),I(this,e,se,ue,J,{label:2,info:3,elem_id:4,elem_classes:5,visible:6,style:7,value:0,show_label:8,loading_status:9,mode:10,value_is_output:1})}}const ce=ie,de=["static","dynamic"],he=l=>({type:{payload:"number"},description:{payload:"numeric value"},example_data:l.value??1});export{ce as Component,he as document,de as modes}; -//# sourceMappingURL=index-dc0cb21d.js.map diff --git a/spaces/ledetele/KrystalPDF/app.py b/spaces/ledetele/KrystalPDF/app.py deleted file mode 100644 index d0ee1880f2684b4d1b40da0dcaeb358ca2ecc7e1..0000000000000000000000000000000000000000 --- a/spaces/ledetele/KrystalPDF/app.py +++ /dev/null @@ -1,190 +0,0 @@ -import urllib.request -import fitz -import re -import numpy as np -import tensorflow_hub as hub -import openai -import gradio as gr -import os -from sklearn.neighbors import NearestNeighbors - -def download_pdf(url, output_path): - urllib.request.urlretrieve(url, output_path) - - -def preprocess(text): - text = text.replace('\n', ' ') - text = re.sub('\s+', ' ', text) - return text - - -def pdf_to_text(path, start_page=1, end_page=None): - doc = fitz.open(path) - total_pages = doc.page_count - - if end_page is None: - end_page = total_pages - - text_list = [] - - for i in range(start_page-1, end_page): - text = doc.load_page(i).get_text("text") - text = preprocess(text) - text_list.append(text) - - doc.close() - return text_list - - -def text_to_chunks(texts, word_length=150, start_page=1): - text_toks = [t.split(' ') for t in texts] - page_nums = [] - chunks = [] - - for idx, 
words in enumerate(text_toks): - for i in range(0, len(words), word_length): - chunk = words[i:i+word_length] - if (i+word_length) > len(words) and (len(chunk) < word_length) and ( - len(text_toks) != (idx+1)): - text_toks[idx+1] = chunk + text_toks[idx+1] - continue - chunk = ' '.join(chunk).strip() - chunk = f'[{idx+start_page}]' + ' ' + '"' + chunk + '"' - chunks.append(chunk) - return chunks - - -class SemanticSearch: - - def __init__(self): - self.use = hub.load('https://tfhub.dev/google/universal-sentence-encoder/4') - self.fitted = False - - - def fit(self, data, batch=1000, n_neighbors=5): - self.data = data - self.embeddings = self.get_text_embedding(data, batch=batch) - n_neighbors = min(n_neighbors, len(self.embeddings)) - self.nn = NearestNeighbors(n_neighbors=n_neighbors) - self.nn.fit(self.embeddings) - self.fitted = True - - - def __call__(self, text, return_data=True): - inp_emb = self.use([text]) - neighbors = self.nn.kneighbors(inp_emb, return_distance=False)[0] - - if return_data: - return [self.data[i] for i in neighbors] - else: - return neighbors - - - def get_text_embedding(self, texts, batch=1000): - embeddings = [] - for i in range(0, len(texts), batch): - text_batch = texts[i:(i+batch)] - emb_batch = self.use(text_batch) - embeddings.append(emb_batch) - embeddings = np.vstack(embeddings) - return embeddings - - - -def load_recommender(path, start_page=1): - global recommender - texts = pdf_to_text(path, start_page=start_page) - chunks = text_to_chunks(texts, start_page=start_page) - recommender.fit(chunks) - return 'Corpus Loaded.' - -def generate_text(openAI_key,prompt, engine="text-davinci-003"): - openai.api_key = openAI_key - completions = openai.Completion.create( - engine=engine, - prompt=prompt, - max_tokens=512, - n=1, - stop=None, - temperature=0.7, - ) - message = completions.choices[0].text - return message - -def generate_answer(question,openAI_key): - topn_chunks = recommender(question) - prompt = "" - prompt += 'search results:\n\n' - for c in topn_chunks: - prompt += c + '\n\n' - - prompt += "Instructions: Compose a comprehensive reply to the query using the search results given. "\ - "Cite each reference using [ Page Number] notation (every result has this number at the beginning). "\ - "Citation should be done at the end of each sentence. If the search results mention multiple subjects "\ - "with the same name, create separate answers for each. Only include information found in the results and "\ - "don't add any additional information. Make sure the answer is correct and don't output false content. "\ - "If the text does not relate to the query, simply state 'Text Not Found in PDF'. Ignore outlier "\ - "search results which has nothing to do with the question. Only answer what is asked. The "\ - "answer should be short and concise. Answer step-by-step. \n\nQuery: {question}\nAnswer: " - - prompt += f"Query: {question}\nAnswer:" - answer = generate_text(openAI_key, prompt,"text-davinci-003") - return answer - - -def question_answer(url, file, question,openAI_key): - if openAI_key.strip()=='': - return '[ERROR]: Please enter you Open AI Key. Get your key here : https://platform.openai.com/account/api-keys' - if url.strip() == '' and file == None: - return '[ERROR]: Both URL and PDF is empty. Provide atleast one.' - - if url.strip() != '' and file != None: - return '[ERROR]: Both URL and PDF is provided. Please provide only one (eiter URL or PDF).' 
- - if url.strip() != '': - glob_url = url - download_pdf(glob_url, 'corpus.pdf') - load_recommender('corpus.pdf') - - else: - old_file_name = file.name - file_name = file.name - file_name = file_name[:-12] + file_name[-4:] - os.rename(old_file_name, file_name) - load_recommender(file_name) - - if question.strip() == '': - return '[ERROR]: Question field is empty' - - return generate_answer(question,openAI_key) - - -recommender = SemanticSearch() - -title = 'Krystal PDF AI' -description = """ Krystal PDF AI allows you to chat with your PDF file using Universal Sentence Encoder and Open AI. It gives hallucination free response than other tools as the embeddings are better than OpenAI. The returned response can even cite the page number in square brackets([]) where the information is located, adding credibility to the responses and helping to locate pertinent information quickly.""" - -with gr.Blocks() as demo: - - gr.Markdown(f'

        {title}

        ') - gr.Markdown(description) - - with gr.Row(): - - with gr.Group(): - gr.Markdown(f'

        Get your Open AI API key here

        ') - openAI_key=gr.Textbox(label='Enter your OpenAI API key here') - url = gr.Textbox(label='Enter PDF URL here') - gr.Markdown("

        OR

        ") - file = gr.File(label='Upload your PDF/ Research Paper / Book here', file_types=['.pdf']) - question = gr.Textbox(label='Enter your question here') - btn = gr.Button(value='Submit') - btn.style(full_width=True) - - with gr.Group(): - answer = gr.Textbox(label='The answer to your question is :') - - btn.click(question_answer, inputs=[url, file, question,openAI_key], outputs=[answer]) -#openai.api_key = os.getenv('Your_Key_Here') -demo.launch() - diff --git a/spaces/leogabraneth/text-generation-webui-main/extensions/multimodal/pipelines/llava/pipelines.py b/spaces/leogabraneth/text-generation-webui-main/extensions/multimodal/pipelines/llava/pipelines.py deleted file mode 100644 index e6833ed6ff94f8e2c8e8494bc827843f388b7fb8..0000000000000000000000000000000000000000 --- a/spaces/leogabraneth/text-generation-webui-main/extensions/multimodal/pipelines/llava/pipelines.py +++ /dev/null @@ -1,48 +0,0 @@ -from typing import Optional - -from extensions.multimodal.abstract_pipeline import AbstractMultimodalPipeline - -available_pipelines = ['llava-7b', 'llava-13b', 'llava-llama-2-13b', 'llava-v1.5-13b', 'llava-v1.5-7b'] - - -def get_pipeline(name: str, params: dict) -> Optional[AbstractMultimodalPipeline]: - if name == 'llava-7b': - from .llava import LLaVA_v0_7B_Pipeline - return LLaVA_v0_7B_Pipeline(params) - if name == 'llava-13b': - from .llava import LLaVA_v0_13B_Pipeline - return LLaVA_v0_13B_Pipeline(params) - if name == 'llava-llama-2-13b': - from .llava import LLaVA_LLaMA_2_13B_Pipeline - return LLaVA_LLaMA_2_13B_Pipeline(params) - if name == 'llava-v1.5-7b': - from .llava import LLaVA_v1_5_7B_Pipeline - return LLaVA_v1_5_7B_Pipeline(params) - if name == 'llava-v1.5-13b': - from .llava import LLaVA_v1_5_13B_Pipeline - return LLaVA_v1_5_13B_Pipeline(params) - return None - - -def get_pipeline_from_model_name(model_name: str, params: dict) -> Optional[AbstractMultimodalPipeline]: - if 'llava' not in model_name.lower(): - return None - if 'llama-2' in model_name.lower(): - if '13b' in model_name.lower(): - from .llava import LLaVA_LLaMA_2_13B_Pipeline - return LLaVA_LLaMA_2_13B_Pipeline(params) - elif 'llava-v1.5' in model_name.lower(): - if '13b' in model_name.lower(): - from .llava import LLaVA_v1_5_13B_Pipeline - return LLaVA_v1_5_13B_Pipeline(params) - if '7b' in model_name.lower(): - from .llava import LLaVA_v1_5_7B_Pipeline - return LLaVA_v1_5_7B_Pipeline(params) - else: - if '7b' in model_name.lower(): - from .llava import LLaVA_v0_7B_Pipeline - return LLaVA_v0_7B_Pipeline(params) - if '13b' in model_name.lower(): - from .llava import LLaVA_v0_13B_Pipeline - return LLaVA_v0_13B_Pipeline(params) - return None diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Al Qawaid Al Arba Arabic.pdf __EXCLUSIVE__.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Al Qawaid Al Arba Arabic.pdf __EXCLUSIVE__.md deleted file mode 100644 index 2361ff4c9f849fba1a62e58f17d9ab65e5cc2b24..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Al Qawaid Al Arba Arabic.pdf __EXCLUSIVE__.md +++ /dev/null @@ -1,28 +0,0 @@ - -

        What is Al Qawaid Al Arba and Why You Should Read It

        -

        Al Qawaid Al Arba, or The Four Principles, is a short treatise by Imam Muhammad bin Abdul-Wahhab that explains the basic rules regarding shirk, or associating partners with Allah. Shirk is the gravest sin in Islam and the only one that Allah will not forgive if a person dies upon it. Therefore, it is essential for every Muslim to learn and understand the concept of shirk and how to avoid it.

        -

        Al Qawaid Al Arba Arabic.pdf


        Download File · https://bytlly.com/2uGwtq



        -

        In this article, we will briefly introduce the book and its author, and highlight some of the benefits of reading and studying it.

        -

        Who is Imam Muhammad bin Abdul-Wahhab?

        -

        Imam Muhammad bin Abdul-Wahhab was a renowned scholar and reformer who lived in the 18th century in the Arabian Peninsula. He was born in 1115 AH (1703 CE) in Uyaynah, a town in Najd. He studied under various scholars of his time, such as his father Sheikh Abdul-Wahhab, Sheikh Abdullah bin Ibrahim Al-Najdi, Sheikh Muhammad Hayat Al-Sindi, and Sheikh Muhammad bin Sulaiman Al-Kurdi.

        -

        He devoted his life to calling people to the pure monotheism of Islam and warning them against shirk and innovation. He wrote many books and treatises on various topics of Islamic creed, jurisprudence, history, and biography. Some of his famous works include Kitab Al-Tawhid (The Book of Monotheism), Kashf Al-Shubuhat (The Removal of Doubts), Usul Al-Thalatha (The Three Fundamental Principles), and Al-Qawaid Al-Arba (The Four Principles).

        -

        He died in 1206 AH (1792 CE) in Diriyah, where he was buried. His teachings and legacy were continued by his students and followers, who became known as the Wahhabiyya or the Salafiyya.

        -

        What are the Four Principles?

        -

        The Four Principles are four rules that Imam Muhammad bin Abdul-Wahhab derived from the Quran and the Sunnah to help Muslims understand the meaning and implications of shirk. They are as follows:

        -

        -
          -
        1. Acknowledging that Allah is the one and only Lord (i.e. Rububiyyah) is not sufficient to affirm that an individual be judged as a Muslim.
        2. The polytheists whom the Prophet (peace be upon him) fought against acknowledged that Allah is the Creator, Provider, and Sustainer of everything, yet they still committed shirk by worshipping others besides Him.
        3. They say: “We don’t worship them except to bring us closer to Allah.” This is the same excuse that the polytheists of old used to justify their shirk.
        4. The Prophet (peace be upon him) did not differentiate between those who worshipped idols, angels, prophets, righteous people, or other created beings. He called them all to worship Allah alone without any partners.
        -

        These four principles are based on clear evidence from the Quran and the Sunnah, such as Surah Al-Kafirun, Surah Yunus verse 18, Surah Az-Zumar verse 3, Surah An-Nisa verse 48, Surah Al-Maeda verse 72, Surah Maryam verse 81-82, Surah An-Nahl verse 20-21, Surah Al-Anbiya verse 66-67, Surah Al-Hajj verse 73-74, Surah An-Najm verse 23-24, Surah Al-Furqan verse 55-56, Surah Al-Anam verse 22-24, Surah At-Taubah verse 31-32, Surah Luqman verse 13-15, Surah Ibrahim verse 35-36, Surah Ibrahim verse 22-23.

        -

        What are the Benefits of Reading and Studying Al Qawaid Al Arba?

        -

        There are many benefits of reading and studying this book, such as:

        -
          -
        • It helps us to understand the essence of Islam, which is to worship Allah alone without any partners.
        • It helps us to recognize and avoid shirk in all its forms and manifestations.
        • It helps us to

          -
          -
          \ No newline at end of file diff --git a/spaces/lithiumice/SadTalker/src/face3d/models/arcface_torch/configs/3millions_pfc.py b/spaces/lithiumice/SadTalker/src/face3d/models/arcface_torch/configs/3millions_pfc.py deleted file mode 100644 index 77caafdbb300d8109d5bfdb844f131710ef81f20..0000000000000000000000000000000000000000 --- a/spaces/lithiumice/SadTalker/src/face3d/models/arcface_torch/configs/3millions_pfc.py +++ /dev/null @@ -1,23 +0,0 @@ -from easydict import EasyDict as edict - -# configs for test speed - -config = edict() -config.loss = "arcface" -config.network = "r50" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 0.1 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 128 -config.lr = 0.1 # batch size is 512 - -config.rec = "synthetic" -config.num_classes = 300 * 10000 -config.num_epoch = 30 -config.warmup_epoch = -1 -config.decay_epoch = [10, 16, 22] -config.val_targets = [] diff --git a/spaces/lithiumice/SadTalker/src/utils/preprocess.py b/spaces/lithiumice/SadTalker/src/utils/preprocess.py deleted file mode 100644 index 454e26b2fd1a3b662399700c7805c02a63384301..0000000000000000000000000000000000000000 --- a/spaces/lithiumice/SadTalker/src/utils/preprocess.py +++ /dev/null @@ -1,160 +0,0 @@ -import numpy as np -import cv2, os, sys, torch -from tqdm import tqdm -from PIL import Image - -# 3dmm extraction -from src.face3d.util.preprocess import align_img -from src.face3d.util.load_mats import load_lm3d -from src.face3d.models import networks -from src.face3d.extract_kp_videos import KeypointExtractor - -from scipy.io import loadmat, savemat -from src.utils.croper import Croper - -import warnings -warnings.filterwarnings("ignore") - -def split_coeff(coeffs): - """ - Return: - coeffs_dict -- a dict of torch.tensors - - Parameters: - coeffs -- torch.tensor, size (B, 256) - """ - id_coeffs = coeffs[:, :80] - exp_coeffs = coeffs[:, 80: 144] - tex_coeffs = coeffs[:, 144: 224] - angles = coeffs[:, 224: 227] - gammas = coeffs[:, 227: 254] - translations = coeffs[:, 254:] - return { - 'id': id_coeffs, - 'exp': exp_coeffs, - 'tex': tex_coeffs, - 'angle': angles, - 'gamma': gammas, - 'trans': translations - } - - -class CropAndExtract(): - def __init__(self, path_of_lm_croper, path_of_net_recon_model, dir_of_BFM_fitting, device): - - self.croper = Croper(path_of_lm_croper) - self.kp_extractor = KeypointExtractor(device) - self.net_recon = networks.define_net_recon(net_recon='resnet50', use_last_fc=False, init_path='').to(device) - checkpoint = torch.load(path_of_net_recon_model, map_location=torch.device(device)) - self.net_recon.load_state_dict(checkpoint['net_recon']) - self.net_recon.eval() - self.lm3d_std = load_lm3d(dir_of_BFM_fitting) - self.device = device - - def generate(self, input_path, save_dir, crop_or_resize='crop'): - - pic_size = 256 - pic_name = os.path.splitext(os.path.split(input_path)[-1])[0] - - landmarks_path = os.path.join(save_dir, pic_name+'_landmarks.txt') - coeff_path = os.path.join(save_dir, pic_name+'.mat') - png_path = os.path.join(save_dir, pic_name+'.png') - - #load input - if not os.path.isfile(input_path): - raise ValueError('input_path must be a valid path to video/image file') - elif input_path.split('.')[-1] in ['jpg', 'png', 'jpeg']: - # loader for first frame - full_frames = [cv2.imread(input_path)] - fps = 25 - else: - # loader for videos - video_stream = cv2.VideoCapture(input_path) - fps = video_stream.get(cv2.CAP_PROP_FPS) - full_frames = [] - while 
1: - still_reading, frame = video_stream.read() - if not still_reading: - video_stream.release() - break - full_frames.append(frame) - - x_full_frames= [cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) for frame in full_frames] - - #### crop images as the - if crop_or_resize.lower() == 'crop': # default crop - x_full_frames, crop, quad = self.croper.crop(x_full_frames, xsize=pic_size) - clx, cly, crx, cry = crop - lx, ly, rx, ry = quad - lx, ly, rx, ry = int(lx), int(ly), int(rx), int(ry) - oy1, oy2, ox1, ox2 = cly+ly, cly+ry, clx+lx, clx+rx - crop_info = ((ox2 - ox1, oy2 - oy1), crop, quad) - elif crop_or_resize.lower() == 'full': - x_full_frames, crop, quad = self.croper.crop(x_full_frames, still=True, xsize=pic_size) - clx, cly, crx, cry = crop - lx, ly, rx, ry = quad - lx, ly, rx, ry = int(lx), int(ly), int(rx), int(ry) - oy1, oy2, ox1, ox2 = cly+ly, cly+ry, clx+lx, clx+rx - crop_info = ((ox2 - ox1, oy2 - oy1), crop, quad) - else: # resize mode - oy1, oy2, ox1, ox2 = 0, x_full_frames[0].shape[0], 0, x_full_frames[0].shape[1] - crop_info = ((ox2 - ox1, oy2 - oy1), None, None) - - frames_pil = [Image.fromarray(cv2.resize(frame,(pic_size, pic_size))) for frame in x_full_frames] - if len(frames_pil) == 0: - print('No face is detected in the input file') - return None, None - - # save crop info - for frame in frames_pil: - cv2.imwrite(png_path, cv2.cvtColor(np.array(frame), cv2.COLOR_RGB2BGR)) - - # 2. get the landmark according to the detected face. - if not os.path.isfile(landmarks_path): - lm = self.kp_extractor.extract_keypoint(frames_pil, landmarks_path) - else: - print(' Using saved landmarks.') - lm = np.loadtxt(landmarks_path).astype(np.float32) - lm = lm.reshape([len(x_full_frames), -1, 2]) - - if not os.path.isfile(coeff_path): - # load 3dmm paramter generator from Deep3DFaceRecon_pytorch - video_coeffs, full_coeffs = [], [] - for idx in tqdm(range(len(frames_pil)), desc='3DMM Extraction In Video:'): - frame = frames_pil[idx] - W,H = frame.size - lm1 = lm[idx].reshape([-1, 2]) - - if np.mean(lm1) == -1: - lm1 = (self.lm3d_std[:, :2]+1)/2. 
- lm1 = np.concatenate( - [lm1[:, :1]*W, lm1[:, 1:2]*H], 1 - ) - else: - lm1[:, -1] = H - 1 - lm1[:, -1] - - trans_params, im1, lm1, _ = align_img(frame, lm1, self.lm3d_std) - - trans_params = np.array([float(item) for item in np.hsplit(trans_params, 5)]).astype(np.float32) - im_t = torch.tensor(np.array(im1)/255., dtype=torch.float32).permute(2, 0, 1).to(self.device).unsqueeze(0) - - with torch.no_grad(): - full_coeff = self.net_recon(im_t) - coeffs = split_coeff(full_coeff) - - pred_coeff = {key:coeffs[key].cpu().numpy() for key in coeffs} - - pred_coeff = np.concatenate([ - pred_coeff['exp'], - pred_coeff['angle'], - pred_coeff['trans'], - trans_params[2:][None], - ], 1) - video_coeffs.append(pred_coeff) - full_coeffs.append(full_coeff.cpu().numpy()) - - semantic_npy = np.array(video_coeffs)[:,0] - - savemat(coeff_path, {'coeff_3dmm': semantic_npy, 'full_3dmm': np.array(full_coeffs)[0]}) - - return coeff_path, png_path, crop_info diff --git a/spaces/lixq/bingo61/src/components/chat-image.tsx b/spaces/lixq/bingo61/src/components/chat-image.tsx deleted file mode 100644 index 05ecc9771eada27a0f2d160bb01cba170d37bb09..0000000000000000000000000000000000000000 --- a/spaces/lixq/bingo61/src/components/chat-image.tsx +++ /dev/null @@ -1,170 +0,0 @@ -import { - useEffect, - useState, - useCallback, - ChangeEvent, - ClipboardEvent, - MouseEventHandler, - FormEvent, - useRef -} from "react" -import Image from 'next/image' -import PasteIcon from '@/assets/images/paste.svg' -import UploadIcon from '@/assets/images/upload.svg' -import CameraIcon from '@/assets/images/camera.svg' -import { useBing } from '@/lib/hooks/use-bing' -import { cn } from '@/lib/utils' - -interface ChatImageProps extends Pick, 'uploadImage'> {} - -const preventDefault: MouseEventHandler = (event) => { - event.nativeEvent.stopImmediatePropagation() -} - -const toBase64 = (file: File): Promise => new Promise((resolve, reject) => { - const reader = new FileReader() - reader.readAsDataURL(file) - reader.onload = () => resolve(reader.result as string) - reader.onerror = reject -}) - -export function ChatImage({ children, uploadImage }: React.PropsWithChildren) { - const videoRef = useRef(null) - const canvasRef = useRef(null) - const mediaStream = useRef() - const [panel, setPanel] = useState('none') - - const upload = useCallback((url: string) => { - if (url) { - uploadImage(url) - } - setPanel('none') - }, [panel]) - - const onUpload = useCallback(async (event: ChangeEvent) => { - const file = event.target.files?.[0] - if (file) { - const fileDataUrl = await toBase64(file) - if (fileDataUrl) { - upload(fileDataUrl) - } - } - }, []) - - const onPaste = useCallback((event: ClipboardEvent) => { - const pasteUrl = event.clipboardData.getData('text') ?? 
'' - upload(pasteUrl) - }, []) - - const onEnter = useCallback((event: FormEvent) => { - event.preventDefault() - event.stopPropagation() - // @ts-ignore - const inputUrl = event.target.elements.image.value - if (inputUrl) { - upload(inputUrl) - } - }, []) - - const openVideo: MouseEventHandler = async (event) => { - event.stopPropagation() - setPanel('camera-mode') - } - - const onCapture = () => { - if (canvasRef.current && videoRef.current) { - const canvas = canvasRef.current - canvas.width = videoRef.current!.videoWidth - canvas.height = videoRef.current!.videoHeight - canvas.getContext('2d')?.drawImage(videoRef.current, 0, 0, canvas.width, canvas.height) - const cameraUrl = canvas.toDataURL('image/jpeg') - upload(cameraUrl) - } - } - - useEffect(() => { - const handleBlur = () => { - if (panel !== 'none') { - setPanel('none') - } - } - document.addEventListener('click', handleBlur) - return () => { - document.removeEventListener('click', handleBlur) - } - }, [panel]) - - useEffect(() => { - if (panel === 'camera-mode') { - navigator.mediaDevices.getUserMedia({ video: true, audio: false }) - .then(videoStream => { - mediaStream.current = videoStream - if (videoRef.current) { - videoRef.current.srcObject = videoStream - } - }) - } else { - if (mediaStream.current) { - mediaStream.current.getTracks().forEach(function(track) { - track.stop() - }) - mediaStream.current = undefined - } - } - }, [panel]) - - return ( -
          -
          panel === 'none' ? setPanel('normal') : setPanel('none')}>{children}
          -
          -
          -
          -

          Add image

          -
          -
          - -
          - e.stopPropagation()} - /> - -
          -
          - - -
          -
          - {panel === 'camera-mode' &&
          -
          -
          -
          -
          -
          -
          -
          } -
          -
          - ) -} diff --git a/spaces/ludusc/latent-space-theories/pages/2_Network_comparison.py b/spaces/ludusc/latent-space-theories/pages/2_Network_comparison.py deleted file mode 100644 index 4dac8d8aa40dea95dcd0821f6440523aa5b2d173..0000000000000000000000000000000000000000 --- a/spaces/ludusc/latent-space-theories/pages/2_Network_comparison.py +++ /dev/null @@ -1,188 +0,0 @@ -import streamlit as st -import streamlit.components.v1 as components - -import dnnlib -import legacy - -import pickle -import pandas as pd -import numpy as np -from pyvis.network import Network - -import random -from sklearn.metrics.pairwise import cosine_similarity - -from matplotlib.backends.backend_agg import RendererAgg - -from backend.disentangle_concepts import * - -_lock = RendererAgg.lock - -HIGHTLIGHT_COLOR = '#e7bcc5' -st.set_page_config(layout='wide') - - -st.title('Comparison among color directions') -st.write('> **How do the color directions relate to each other?**') -st.write(""" - This page provides a simple network-based framework to inspect the vector similarity (cosine similarity) among the found color vectors. - The nodes are the colors chosen for comparison and the strength of the edge represents the similarity. - - """) - - -annotations_file = './data/textile_annotated_files/seeds0000-100000_S.pkl' -with open(annotations_file, 'rb') as f: - annotations = pickle.load(f) - -concept_vectors = pd.read_csv('./data/stored_vectors/scores_colors_hsv.csv') -concept_vectors['vector'] = [np.array([float(xx) for xx in x]) for x in concept_vectors['vector'].str.split(', ')] -concept_vectors['score'] = concept_vectors['score'].astype(float) -concept_vectors['sign'] = [True if 'sign:True' in val else False for val in concept_vectors['kwargs']] -concept_vectors['extremes'] = [True if 'extremes method:True' in val else False for val in concept_vectors['kwargs']] -concept_vectors['regularization'] = [float(val.split(',')[1].strip('regularization: ')) if 'regularization:' in val else False for val in concept_vectors['kwargs']] -concept_vectors['cl_method'] = [val.split(',')[0].strip('classification method:') if 'classification method:' in val else False for val in concept_vectors['kwargs']] -concept_vectors['num_factors'] = [int(val.split(',')[1].strip('number of factors:')) if 'number of factors:' in val else False for val in concept_vectors['kwargs']] -concept_vectors = concept_vectors.sort_values('score', ascending=False).reset_index() - -with dnnlib.util.open_url('./data/textile_model_files/network-snapshot-005000.pkl') as f: - model = legacy.load_network_pkl(f)['G_ema'].to('cpu') # type: ignore - -COLORS_LIST = ['Gray', 'Red Orange', 'Yellow', 'Green', 'Light Blue', 'Blue', 'Purple', 'Pink', 'Saturation', 'Value'] - -if 'concept_ids' not in st.session_state: - st.session_state.concept_ids = COLORS_LIST -if 'sign' not in st.session_state: - st.session_state.sign = False -if 'extremes' not in st.session_state: - st.session_state.extremes = False -if 'regularization' not in st.session_state: - st.session_state.regularization = False -if 'cl_method' not in st.session_state: - st.session_state.cl_method = False -if 'num_factors' not in st.session_state: - st.session_state.num_factors = False -if 'best' not in st.session_state: - st.session_state.best = True - -# ----------------------------- INPUT ---------------------------------- -st.header('Input') -input_col_1, input_col_2 = st.columns([1,1]) -# --------------------------- INPUT column 1 --------------------------- -with input_col_1: - with 
st.form('text_form'): - - # image_id = st.number_input('Image ID: ', format='%d', step=1) - st.write('**Choose a series of colors to compare**') - # chosen_text_id_input = st.empty() - # concept_id = chosen_text_id_input.text_input('Concept:', value=st.session_state.concept_id) - concept_ids = st.multiselect('Color (including Saturation and Value):', tuple(COLORS_LIST), default=COLORS_LIST) - choose_text_button = st.form_submit_button('Choose the defined colors') - - if choose_text_button: - st.session_state.concept_ids = list(concept_ids) - - -with input_col_2: - with st.form('text_form_1'): - st.write('Use the best vectors (after hyperparameter tuning)') - best = st.selectbox('Option:', tuple([True, False]), index=0) - sign = True - num_factors=10 - cl_method='LR' - regularization=0.1 - extremes=True - if st.session_state.best is False: - st.write('Options for StyleSpace (not available for Saturation and Value)') - sign = st.selectbox('Sign option:', tuple([True, False]), index=1) - num_factors = st.selectbox('Number of factors option:', tuple([1, 5, 10, 20, False]), index=4) - st.write('Options for InterFaceGAN (not available for Saturation and Value)') - cl_method = st.selectbox('Classification method option:', tuple(['LR', 'SVM', False]), index=2) - regularization = st.selectbox('Regularization option:', tuple([0.1, 1.0, False]), index=2) - st.write('Options for InterFaceGAN (only for Saturation and Value)') - extremes = st.selectbox('Extremes option:', tuple([True, False]), index=1) - - choose_options_button = st.form_submit_button('Choose the defined options') - if choose_options_button: - st.session_state.best = best - if st.session_state.best is False: - st.session_state.sign = sign - st.session_state.num_factors = num_factors - st.session_state.cl_method = cl_method - st.session_state.regularization = regularization - st.session_state.extremes = extremes - -# ---------------------------- SET UP OUTPUT ------------------------------ -epsilon_container = st.empty() -st.header('Comparison') -st.subheader('Color vectors') - -header_col_1, header_col_2 = st.columns([3,1]) -output_col_1, output_col_2 = st.columns([3,1]) - -# ---------------------------- DISPLAY COL 1 ROW 1 ------------------------------ -if st.session_state.best: - tmp = concept_vectors[concept_vectors['color'].isin(st.session_state.concept_ids)].groupby('color').first().reset_index() -else: - tmp = concept_vectors[concept_vectors['color'].isin(st.session_state.concept_ids)] - tmp = tmp[tmp['sign'] == st.session_state.sign][tmp['extremes'] == st.session_state.extremes][tmp['num_factors'] == st.session_state.num_factors][tmp['cl_method'] == st.session_state.cl_method][tmp['regularization'] == st.session_state.regularization] - -info = tmp.loc[:, ['vector', 'score', 'color', 'kwargs']].values -concept_ids = [i[2] for i in info] #+ ' ' + i[3] - -with header_col_1: - st.write('### Similarity graph') - -with header_col_2: - st.write('### Information') - -with output_col_2: - for i,concept_id in enumerate(concept_ids): - st.write(f'''Color: {info[i][2]}.\ - Settings: {info[i][3]}\ - ''') - -with output_col_1: - edges = [] - for i in range(len(concept_ids)): - for j in range(len(concept_ids)): - if i != j and info[i][2] != info[j][2]: - print(f'Similarity between {concept_ids[i]} and {concept_ids[j]}') - similarity = cosine_similarity(info[i][0].reshape(1, -1), info[j][0].reshape(1, -1)) - print(np.round(similarity[0][0], 3)) - edges.append((concept_ids[i], concept_ids[j], np.round(similarity[0][0] + 0.001, 3))) - - - net = 
Network(height="750px", width="100%",) - for e in edges: - src = e[0] - dst = e[1] - w = e[2] - - net.add_node(src, src, title=src) - net.add_node(dst, dst, title=dst) - net.add_edge(src, dst, value=w, title=src + ' to ' + dst + ' similarity ' +str(w)) - - # Generate network with specific layout settings - net.repulsion( - node_distance=420, - central_gravity=0.33, - spring_length=110, - spring_strength=0.10, - damping=0.95 - ) - - # Save and read graph as HTML file (on Streamlit Sharing) - try: - path = '/tmp' - net.save_graph(f'{path}/pyvis_graph.html') - HtmlFile = open(f'{path}/pyvis_graph.html', 'r', encoding='utf-8') - - # Save and read graph as HTML file (locally) - except: - path = '/html_files' - net.save_graph(f'{path}/pyvis_graph.html') - HtmlFile = open(f'{path}/pyvis_graph.html', 'r', encoding='utf-8') - - # Load HTML file in HTML component for display on Streamlit page - components.html(HtmlFile.read(), height=435) diff --git a/spaces/ma-xu/LIVE/filter.h b/spaces/ma-xu/LIVE/filter.h deleted file mode 100644 index 2dd0b62acb83e94da89696e9a8024c4b919f6749..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/filter.h +++ /dev/null @@ -1,106 +0,0 @@ -#pragma once - -#include "diffvg.h" -#include "atomic.h" - -enum class FilterType { - Box, - Tent, - RadialParabolic, // 4/3(1 - (d/r)) - Hann // https://en.wikipedia.org/wiki/Window_function#Hann_and_Hamming_windows -}; - -struct Filter { - FilterType type; - float radius; -}; - -struct DFilter { - float radius; -}; - -DEVICE -inline -float compute_filter_weight(const Filter &filter, - float dx, - float dy) { - if (fabs(dx) > filter.radius || fabs(dy) > filter.radius) { - return 0; - } - if (filter.type == FilterType::Box) { - return 1.f / square(2 * filter.radius); - } else if (filter.type == FilterType::Tent) { - return (filter.radius - fabs(dx)) * (filter.radius - fabs(dy)) / - square(square(filter.radius)); - } else if (filter.type == FilterType::RadialParabolic) { - return (4.f / 3.f) * (1 - square(dx / filter.radius)) * - (4.f / 3.f) * (1 - square(dy / filter.radius)); - } else { - assert(filter.type == FilterType::Hann); - // normalize dx, dy to [0, 1] - auto ndx = (dx / (2*filter.radius)) + 0.5f; - auto ndy = (dy / (2*filter.radius)) + 0.5f; - // the normalization factor is R^2 - return 0.5f * (1.f - cos(float(2 * M_PI) * ndx)) * - 0.5f * (1.f - cos(float(2 * M_PI) * ndy)) / - square(filter.radius); - } -} - -DEVICE -inline -void d_compute_filter_weight(const Filter &filter, - float dx, - float dy, - float d_return, - DFilter *d_filter) { - if (filter.type == FilterType::Box) { - // return 1.f / square(2 * filter.radius); - atomic_add(d_filter->radius, - d_return * (-2) * 2 * filter.radius / cubic(2 * filter.radius)); - } else if (filter.type == FilterType::Tent) { - // return (filer.radius - fabs(dx)) * (filer.radius - fabs(dy)) / - // square(square(filter.radius)); - auto fx = filter.radius - fabs(dx); - auto fy = filter.radius - fabs(dy); - auto norm = 1 / square(filter.radius); - auto d_fx = d_return * fy * norm; - auto d_fy = d_return * fx * norm; - auto d_norm = d_return * fx * fy; - atomic_add(d_filter->radius, - d_fx + d_fy + (-4) * d_norm / pow(filter.radius, 5)); - } else if (filter.type == FilterType::RadialParabolic) { - // return (4.f / 3.f) * (1 - square(dx / filter.radius)) * - // (4.f / 3.f) * (1 - square(dy / filter.radius)); - // auto d_square_x = d_return * (-4.f / 3.f); - // auto d_square_y = d_return * (-4.f / 3.f); - auto r3 = filter.radius * filter.radius * filter.radius; - auto d_radius = 
-(2 * square(dx) + 2 * square(dy)) / r3; - atomic_add(d_filter->radius, d_radius); - } else { - assert(filter.type == FilterType::Hann); - // // normalize dx, dy to [0, 1] - // auto ndx = (dx / (2*filter.radius)) + 0.5f; - // auto ndy = (dy / (2*filter.radius)) + 0.5f; - // // the normalization factor is R^2 - // return 0.5f * (1.f - cos(float(2 * M_PI) * ndx)) * - // 0.5f * (1.f - cos(float(2 * M_PI) * ndy)) / - // square(filter.radius); - - // normalize dx, dy to [0, 1] - auto ndx = (dx / (2*filter.radius)) + 0.5f; - auto ndy = (dy / (2*filter.radius)) + 0.5f; - auto fx = 0.5f * (1.f - cos(float(2*M_PI) * ndx)); - auto fy = 0.5f * (1.f - cos(float(2*M_PI) * ndy)); - auto norm = 1 / square(filter.radius); - auto d_fx = d_return * fy * norm; - auto d_fy = d_return * fx * norm; - auto d_norm = d_return * fx * fy; - auto d_ndx = d_fx * 0.5f * sin(float(2*M_PI) * ndx) * float(2*M_PI); - auto d_ndy = d_fy * 0.5f * sin(float(2*M_PI) * ndy) * float(2*M_PI); - atomic_add(d_filter->radius, - d_ndx * (-2*dx / square(2*filter.radius)) + - d_ndy * (-2*dy / square(2*filter.radius)) + - (-2) * d_norm / cubic(filter.radius)); - } -} diff --git a/spaces/ma-xu/LIVE/pybind11/tests/test_embed/test_interpreter.cpp b/spaces/ma-xu/LIVE/pybind11/tests/test_embed/test_interpreter.cpp deleted file mode 100644 index 222bd565fbffd6484db09876ae9cceabffcb69cd..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pybind11/tests/test_embed/test_interpreter.cpp +++ /dev/null @@ -1,284 +0,0 @@ -#include - -#ifdef _MSC_VER -// Silence MSVC C++17 deprecation warning from Catch regarding std::uncaught_exceptions (up to catch -// 2.0.1; this should be fixed in the next catch release after 2.0.1). -# pragma warning(disable: 4996) -#endif - -#include - -#include -#include -#include - -namespace py = pybind11; -using namespace py::literals; - -class Widget { -public: - Widget(std::string message) : message(message) { } - virtual ~Widget() = default; - - std::string the_message() const { return message; } - virtual int the_answer() const = 0; - -private: - std::string message; -}; - -class PyWidget final : public Widget { - using Widget::Widget; - - int the_answer() const override { PYBIND11_OVERLOAD_PURE(int, Widget, the_answer); } -}; - -PYBIND11_EMBEDDED_MODULE(widget_module, m) { - py::class_(m, "Widget") - .def(py::init()) - .def_property_readonly("the_message", &Widget::the_message); - - m.def("add", [](int i, int j) { return i + j; }); -} - -PYBIND11_EMBEDDED_MODULE(throw_exception, ) { - throw std::runtime_error("C++ Error"); -} - -PYBIND11_EMBEDDED_MODULE(throw_error_already_set, ) { - auto d = py::dict(); - d["missing"].cast(); -} - -TEST_CASE("Pass classes and data between modules defined in C++ and Python") { - auto module = py::module::import("test_interpreter"); - REQUIRE(py::hasattr(module, "DerivedWidget")); - - auto locals = py::dict("hello"_a="Hello, World!", "x"_a=5, **module.attr("__dict__")); - py::exec(R"( - widget = DerivedWidget("{} - {}".format(hello, x)) - message = widget.the_message - )", py::globals(), locals); - REQUIRE(locals["message"].cast() == "Hello, World! 
- 5"); - - auto py_widget = module.attr("DerivedWidget")("The question"); - auto message = py_widget.attr("the_message"); - REQUIRE(message.cast() == "The question"); - - const auto &cpp_widget = py_widget.cast(); - REQUIRE(cpp_widget.the_answer() == 42); -} - -TEST_CASE("Import error handling") { - REQUIRE_NOTHROW(py::module::import("widget_module")); - REQUIRE_THROWS_WITH(py::module::import("throw_exception"), - "ImportError: C++ Error"); - REQUIRE_THROWS_WITH(py::module::import("throw_error_already_set"), - Catch::Contains("ImportError: KeyError")); -} - -TEST_CASE("There can be only one interpreter") { - static_assert(std::is_move_constructible::value, ""); - static_assert(!std::is_move_assignable::value, ""); - static_assert(!std::is_copy_constructible::value, ""); - static_assert(!std::is_copy_assignable::value, ""); - - REQUIRE_THROWS_WITH(py::initialize_interpreter(), "The interpreter is already running"); - REQUIRE_THROWS_WITH(py::scoped_interpreter(), "The interpreter is already running"); - - py::finalize_interpreter(); - REQUIRE_NOTHROW(py::scoped_interpreter()); - { - auto pyi1 = py::scoped_interpreter(); - auto pyi2 = std::move(pyi1); - } - py::initialize_interpreter(); -} - -bool has_pybind11_internals_builtin() { - auto builtins = py::handle(PyEval_GetBuiltins()); - return builtins.contains(PYBIND11_INTERNALS_ID); -}; - -bool has_pybind11_internals_static() { - auto **&ipp = py::detail::get_internals_pp(); - return ipp && *ipp; -} - -TEST_CASE("Restart the interpreter") { - // Verify pre-restart state. - REQUIRE(py::module::import("widget_module").attr("add")(1, 2).cast() == 3); - REQUIRE(has_pybind11_internals_builtin()); - REQUIRE(has_pybind11_internals_static()); - REQUIRE(py::module::import("external_module").attr("A")(123).attr("value").cast() == 123); - - // local and foreign module internals should point to the same internals: - REQUIRE(reinterpret_cast(*py::detail::get_internals_pp()) == - py::module::import("external_module").attr("internals_at")().cast()); - - // Restart the interpreter. - py::finalize_interpreter(); - REQUIRE(Py_IsInitialized() == 0); - - py::initialize_interpreter(); - REQUIRE(Py_IsInitialized() == 1); - - // Internals are deleted after a restart. - REQUIRE_FALSE(has_pybind11_internals_builtin()); - REQUIRE_FALSE(has_pybind11_internals_static()); - pybind11::detail::get_internals(); - REQUIRE(has_pybind11_internals_builtin()); - REQUIRE(has_pybind11_internals_static()); - REQUIRE(reinterpret_cast(*py::detail::get_internals_pp()) == - py::module::import("external_module").attr("internals_at")().cast()); - - // Make sure that an interpreter with no get_internals() created until finalize still gets the - // internals destroyed - py::finalize_interpreter(); - py::initialize_interpreter(); - bool ran = false; - py::module::import("__main__").attr("internals_destroy_test") = - py::capsule(&ran, [](void *ran) { py::detail::get_internals(); *static_cast(ran) = true; }); - REQUIRE_FALSE(has_pybind11_internals_builtin()); - REQUIRE_FALSE(has_pybind11_internals_static()); - REQUIRE_FALSE(ran); - py::finalize_interpreter(); - REQUIRE(ran); - py::initialize_interpreter(); - REQUIRE_FALSE(has_pybind11_internals_builtin()); - REQUIRE_FALSE(has_pybind11_internals_static()); - - // C++ modules can be reloaded. - auto cpp_module = py::module::import("widget_module"); - REQUIRE(cpp_module.attr("add")(1, 2).cast() == 3); - - // C++ type information is reloaded and can be used in python modules. 
- auto py_module = py::module::import("test_interpreter"); - auto py_widget = py_module.attr("DerivedWidget")("Hello after restart"); - REQUIRE(py_widget.attr("the_message").cast() == "Hello after restart"); -} - -TEST_CASE("Subinterpreter") { - // Add tags to the modules in the main interpreter and test the basics. - py::module::import("__main__").attr("main_tag") = "main interpreter"; - { - auto m = py::module::import("widget_module"); - m.attr("extension_module_tag") = "added to module in main interpreter"; - - REQUIRE(m.attr("add")(1, 2).cast() == 3); - } - REQUIRE(has_pybind11_internals_builtin()); - REQUIRE(has_pybind11_internals_static()); - - /// Create and switch to a subinterpreter. - auto main_tstate = PyThreadState_Get(); - auto sub_tstate = Py_NewInterpreter(); - - // Subinterpreters get their own copy of builtins. detail::get_internals() still - // works by returning from the static variable, i.e. all interpreters share a single - // global pybind11::internals; - REQUIRE_FALSE(has_pybind11_internals_builtin()); - REQUIRE(has_pybind11_internals_static()); - - // Modules tags should be gone. - REQUIRE_FALSE(py::hasattr(py::module::import("__main__"), "tag")); - { - auto m = py::module::import("widget_module"); - REQUIRE_FALSE(py::hasattr(m, "extension_module_tag")); - - // Function bindings should still work. - REQUIRE(m.attr("add")(1, 2).cast() == 3); - } - - // Restore main interpreter. - Py_EndInterpreter(sub_tstate); - PyThreadState_Swap(main_tstate); - - REQUIRE(py::hasattr(py::module::import("__main__"), "main_tag")); - REQUIRE(py::hasattr(py::module::import("widget_module"), "extension_module_tag")); -} - -TEST_CASE("Execution frame") { - // When the interpreter is embedded, there is no execution frame, but `py::exec` - // should still function by using reasonable globals: `__main__.__dict__`. 
- py::exec("var = dict(number=42)"); - REQUIRE(py::globals()["var"]["number"].cast() == 42); -} - -TEST_CASE("Threads") { - // Restart interpreter to ensure threads are not initialized - py::finalize_interpreter(); - py::initialize_interpreter(); - REQUIRE_FALSE(has_pybind11_internals_static()); - - constexpr auto num_threads = 10; - auto locals = py::dict("count"_a=0); - - { - py::gil_scoped_release gil_release{}; - REQUIRE(has_pybind11_internals_static()); - - auto threads = std::vector(); - for (auto i = 0; i < num_threads; ++i) { - threads.emplace_back([&]() { - py::gil_scoped_acquire gil{}; - locals["count"] = locals["count"].cast() + 1; - }); - } - - for (auto &thread : threads) { - thread.join(); - } - } - - REQUIRE(locals["count"].cast() == num_threads); -} - -// Scope exit utility https://stackoverflow.com/a/36644501/7255855 -struct scope_exit { - std::function f_; - explicit scope_exit(std::function f) noexcept : f_(std::move(f)) {} - ~scope_exit() { if (f_) f_(); } -}; - -TEST_CASE("Reload module from file") { - // Disable generation of cached bytecode (.pyc files) for this test, otherwise - // Python might pick up an old version from the cache instead of the new versions - // of the .py files generated below - auto sys = py::module::import("sys"); - bool dont_write_bytecode = sys.attr("dont_write_bytecode").cast(); - sys.attr("dont_write_bytecode") = true; - // Reset the value at scope exit - scope_exit reset_dont_write_bytecode([&]() { - sys.attr("dont_write_bytecode") = dont_write_bytecode; - }); - - std::string module_name = "test_module_reload"; - std::string module_file = module_name + ".py"; - - // Create the module .py file - std::ofstream test_module(module_file); - test_module << "def test():\n"; - test_module << " return 1\n"; - test_module.close(); - // Delete the file at scope exit - scope_exit delete_module_file([&]() { - std::remove(module_file.c_str()); - }); - - // Import the module from file - auto module = py::module::import(module_name.c_str()); - int result = module.attr("test")().cast(); - REQUIRE(result == 1); - - // Update the module .py file with a small change - test_module.open(module_file); - test_module << "def test():\n"; - test_module << " return 2\n"; - test_module.close(); - - // Reload the module - module.reload(); - result = module.attr("test")().cast(); - REQUIRE(result == 2); -} diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/equal.h b/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/equal.h deleted file mode 100644 index dd5e7d6863f378899330cc1e69d7667a87047338..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/equal.h +++ /dev/null @@ -1,74 +0,0 @@ -/****************************************************************************** - * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. 
- * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - ******************************************************************************/ -#pragma once - - -#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC -#include - -#include - -namespace thrust -{ -namespace cuda_cub { - -template -bool __host__ __device__ -equal(execution_policy& policy, - InputIt1 first1, - InputIt1 last1, - InputIt2 first2, - BinaryPred binary_pred) -{ - return cuda_cub::mismatch(policy, first1, last1, first2, binary_pred).first == last1; -} - -template -bool __host__ __device__ -equal(execution_policy& policy, - InputIt1 first1, - InputIt1 last1, - InputIt2 first2) -{ - typedef typename thrust::iterator_value::type InputType1; - return cuda_cub::equal(policy, - first1, - last1, - first2, - equal_to()); -} - - - -} // namespace cuda_cub -} // end namespace thrust -#endif diff --git a/spaces/macaodha/batdetect2/bat_detect/finetune/finetune_model.py b/spaces/macaodha/batdetect2/bat_detect/finetune/finetune_model.py deleted file mode 100644 index 4fecc48a6417cfa9bbf675b9ffdeb954b039f5b6..0000000000000000000000000000000000000000 --- a/spaces/macaodha/batdetect2/bat_detect/finetune/finetune_model.py +++ /dev/null @@ -1,183 +0,0 @@ -import numpy as np -import matplotlib.pyplot as plt -import os -import torch -import torch.nn.functional as F -from torch.optim.lr_scheduler import CosineAnnealingLR -import json -import argparse -import glob - -import sys -sys.path.append(os.path.join('..', '..')) -import bat_detect.train.train_model as tm -import bat_detect.train.audio_dataloader as adl -import bat_detect.train.evaluate as evl -import bat_detect.train.train_utils as tu -import bat_detect.train.losses as losses - -import bat_detect.detector.parameters as parameters -import bat_detect.detector.models as models -import bat_detect.detector.post_process as pp -import bat_detect.utils.plot_utils as pu -import bat_detect.utils.detector_utils as du - - -if __name__ == "__main__": - - info_str = '\nBatDetect - Finetune Model\n' - - print(info_str) - parser = argparse.ArgumentParser() - parser.add_argument('audio_path', type=str, help='Input directory for audio') - parser.add_argument('train_ann_path', type=str, - help='Path to where train annotation file is stored') - parser.add_argument('test_ann_path', type=str, - help='Path to where test annotation file is stored') - parser.add_argument('model_path', type=str, - help='Path to pretrained model') - parser.add_argument('--op_model_name', type=str, default='', - help='Path and name for finetuned model') - parser.add_argument('--num_epochs', type=int, 
default=200, dest='num_epochs', - help='Number of finetuning epochs') - parser.add_argument('--finetune_only_last_layer', action='store_true', - help='Only train final layers') - parser.add_argument('--train_from_scratch', action='store_true', - help='Do not use pretrained weights') - parser.add_argument('--do_not_save_images', action='store_false', - help='Do not save images at the end of training') - parser.add_argument('--notes', type=str, default='', - help='Notes to save in text file') - args = vars(parser.parse_args()) - - params = parameters.get_params(True, '../../experiments/') - if torch.cuda.is_available(): - params['device'] = 'cuda' - else: - params['device'] = 'cpu' - print('\nNote, this will be a lot faster if you use computer with a GPU.\n') - - print('\nAudio directory: ' + args['audio_path']) - print('Train file: ' + args['train_ann_path']) - print('Test file: ' + args['test_ann_path']) - print('Loading model: ' + args['model_path']) - - dataset_name = os.path.basename(args['train_ann_path']).replace('.json', '').replace('_TRAIN', '') - - if args['train_from_scratch']: - print('\nTraining model from scratch i.e. not using pretrained weights') - model, params_train = du.load_model(args['model_path'], False) - else: - model, params_train = du.load_model(args['model_path'], True) - model.to(params['device']) - - params['num_epochs'] = args['num_epochs'] - if args['op_model_name'] != '': - params['model_file_name'] = args['op_model_name'] - classes_to_ignore = params['classes_to_ignore']+params['generic_class'] - - # save notes file - params['notes'] = args['notes'] - if args['notes'] != '': - tu.write_notes_file(params['experiment'] + 'notes.txt', args['notes']) - - - # load train annotations - train_sets = [] - train_sets.append(tu.get_blank_dataset_dict(dataset_name, False, args['train_ann_path'], args['audio_path'])) - params['train_sets'] = [tu.get_blank_dataset_dict(dataset_name, False, os.path.basename(args['train_ann_path']), args['audio_path'])] - - print('\nTrain set:') - data_train, params['class_names'], params['class_inv_freq'] = \ - tu.load_set_of_anns(train_sets, classes_to_ignore, params['events_of_interest']) - print('Number of files', len(data_train)) - - params['genus_names'], params['genus_mapping'] = tu.get_genus_mapping(params['class_names']) - params['class_names_short'] = tu.get_short_class_names(params['class_names']) - - # load test annotations - test_sets = [] - test_sets.append(tu.get_blank_dataset_dict(dataset_name, True, args['test_ann_path'], args['audio_path'])) - params['test_sets'] = [tu.get_blank_dataset_dict(dataset_name, True, os.path.basename(args['test_ann_path']), args['audio_path'])] - - print('\nTest set:') - data_test, _, _ = tu.load_set_of_anns(test_sets, classes_to_ignore, params['events_of_interest']) - print('Number of files', len(data_test)) - - # train loader - train_dataset = adl.AudioLoader(data_train, params, is_train=True) - train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=params['batch_size'], - shuffle=True, num_workers=params['num_workers'], pin_memory=True) - - # test loader - batch size of one because of variable file length - test_dataset = adl.AudioLoader(data_test, params, is_train=False) - test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=1, - shuffle=False, num_workers=params['num_workers'], pin_memory=True) - - inputs_train = next(iter(train_loader)) - params['ip_height'] = inputs_train['spec'].shape[2] - print('\ntrain batch size :', inputs_train['spec'].shape) - - 
assert(params_train['model_name'] == 'Net2DFast') - print('\n\nSOME hyperparams need to be the same as the loaded model (e.g. FFT) - currently they are getting overwritten.\n\n') - - # set the number of output classes - num_filts = model.conv_classes_op.in_channels - k_size = model.conv_classes_op.kernel_size - pad = model.conv_classes_op.padding - model.conv_classes_op = torch.nn.Conv2d(num_filts, len(params['class_names'])+1, kernel_size=k_size, padding=pad) - model.conv_classes_op.to(params['device']) - - if args['finetune_only_last_layer']: - print('\nOnly finetuning the final layers.\n') - train_layers_i = ['conv_classes', 'conv_classes_op', 'conv_size', 'conv_size_op'] - train_layers = [tt + '.weight' for tt in train_layers_i] + [tt + '.bias' for tt in train_layers_i] - for name, param in model.named_parameters(): - if name in train_layers: - param.requires_grad = True - else: - param.requires_grad = False - - optimizer = torch.optim.Adam(model.parameters(), lr=params['lr']) - scheduler = CosineAnnealingLR(optimizer, params['num_epochs'] * len(train_loader)) - if params['train_loss'] == 'mse': - det_criterion = losses.mse_loss - elif params['train_loss'] == 'focal': - det_criterion = losses.focal_loss - - # plotting - train_plt_ls = pu.LossPlotter(params['experiment'] + 'train_loss.png', params['num_epochs']+1, - ['train_loss'], None, None, ['epoch', 'train_loss'], logy=True) - test_plt_ls = pu.LossPlotter(params['experiment'] + 'test_loss.png', params['num_epochs']+1, - ['test_loss'], None, None, ['epoch', 'test_loss'], logy=True) - test_plt = pu.LossPlotter(params['experiment'] + 'test.png', params['num_epochs']+1, - ['avg_prec', 'rec_at_x', 'avg_prec_class', 'file_acc', 'top_class'], [0,1], None, ['epoch', '']) - test_plt_class = pu.LossPlotter(params['experiment'] + 'test_avg_prec.png', params['num_epochs']+1, - params['class_names_short'], [0,1], params['class_names_short'], ['epoch', 'avg_prec']) - - # main train loop - for epoch in range(0, params['num_epochs']+1): - - train_loss = tm.train(model, epoch, train_loader, det_criterion, optimizer, scheduler, params) - train_plt_ls.update_and_save(epoch, [train_loss['train_loss']]) - - if epoch % params['num_eval_epochs'] == 0: - # detection accuracy on test set - test_res, test_loss = tm.test(model, epoch, test_loader, det_criterion, params) - test_plt_ls.update_and_save(epoch, [test_loss['test_loss']]) - test_plt.update_and_save(epoch, [test_res['avg_prec'], test_res['rec_at_x'], - test_res['avg_prec_class'], test_res['file_acc'], test_res['top_class']['avg_prec']]) - test_plt_class.update_and_save(epoch, [rs['avg_prec'] for rs in test_res['class_pr']]) - pu.plot_pr_curve_class(params['experiment'] , 'test_pr', 'test_pr', test_res) - - # save finetuned model - print('saving model to: ' + params['model_file_name']) - op_state = {'epoch': epoch + 1, - 'state_dict': model.state_dict(), - 'params' : params} - torch.save(op_state, params['model_file_name']) - - - # save an image with associated prediction for each batch in the test set - if not args['do_not_save_images']: - tm.save_images_batch(model, test_loader, params) diff --git a/spaces/magicr/BuboGPT/ram/models/__init__.py b/spaces/magicr/BuboGPT/ram/models/__init__.py deleted file mode 100644 index 69bdb22f2dba166bac07ab9d63fe8d0562dc88a6..0000000000000000000000000000000000000000 --- a/spaces/magicr/BuboGPT/ram/models/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .ram import ram -from .tag2text import tag2text_caption diff --git 
a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Global/detection_models/Synchronized-BatchNorm-PyTorch/tests/test_numeric_batchnorm_v2.py b/spaces/manhkhanhUIT/Image_Restoration_Colorization/Global/detection_models/Synchronized-BatchNorm-PyTorch/tests/test_numeric_batchnorm_v2.py deleted file mode 100644 index 5e4538ae3c50b4c457a9fa19bf22b5b1a7b666ee..0000000000000000000000000000000000000000 --- a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Global/detection_models/Synchronized-BatchNorm-PyTorch/tests/test_numeric_batchnorm_v2.py +++ /dev/null @@ -1,62 +0,0 @@ -#! /usr/bin/env python3 -# -*- coding: utf-8 -*- -# File : test_numeric_batchnorm_v2.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 11/01/2018 -# -# Distributed under terms of the MIT license. - -""" -Test the numerical implementation of batch normalization. - -Author: acgtyrant. -See also: https://github.com/vacancy/Synchronized-BatchNorm-PyTorch/issues/14 -""" - -import unittest - -import torch -import torch.nn as nn -import torch.optim as optim - -from sync_batchnorm.unittest import TorchTestCase -from sync_batchnorm.batchnorm_reimpl import BatchNorm2dReimpl - - -class NumericTestCasev2(TorchTestCase): - def testNumericBatchNorm(self): - CHANNELS = 16 - batchnorm1 = nn.BatchNorm2d(CHANNELS, momentum=1) - optimizer1 = optim.SGD(batchnorm1.parameters(), lr=0.01) - - batchnorm2 = BatchNorm2dReimpl(CHANNELS, momentum=1) - batchnorm2.weight.data.copy_(batchnorm1.weight.data) - batchnorm2.bias.data.copy_(batchnorm1.bias.data) - optimizer2 = optim.SGD(batchnorm2.parameters(), lr=0.01) - - for _ in range(100): - input_ = torch.rand(16, CHANNELS, 16, 16) - - input1 = input_.clone().requires_grad_(True) - output1 = batchnorm1(input1) - output1.sum().backward() - optimizer1.step() - - input2 = input_.clone().requires_grad_(True) - output2 = batchnorm2(input2) - output2.sum().backward() - optimizer2.step() - - self.assertTensorClose(input1, input2) - self.assertTensorClose(output1, output2) - self.assertTensorClose(input1.grad, input2.grad) - self.assertTensorClose(batchnorm1.weight.grad, batchnorm2.weight.grad) - self.assertTensorClose(batchnorm1.bias.grad, batchnorm2.bias.grad) - self.assertTensorClose(batchnorm1.running_mean, batchnorm2.running_mean) - self.assertTensorClose(batchnorm2.running_mean, batchnorm2.running_mean) - - -if __name__ == '__main__': - unittest.main() - diff --git a/spaces/marccgrau/whisper-asr-diarization/app.py b/spaces/marccgrau/whisper-asr-diarization/app.py deleted file mode 100644 index 79144f1aa1a58ea4431f12afc78ccb35737a6580..0000000000000000000000000000000000000000 --- a/spaces/marccgrau/whisper-asr-diarization/app.py +++ /dev/null @@ -1,569 +0,0 @@ -# Inspiration from https://huggingface.co/spaces/vumichien/whisper-speaker-diarization - -import whisper -import datetime -import subprocess -import gradio as gr -from pathlib import Path -import pandas as pd -import re -import time -import os -import numpy as np -from sklearn.cluster import AgglomerativeClustering - -from pytube import YouTube -import torch -import pyannote.audio -from pyannote.audio.pipelines.speaker_verification import PretrainedSpeakerEmbedding -from pyannote.audio import Audio -from pyannote.core import Segment - -from gpuinfo import GPUInfo - -import wave -import contextlib -from transformers import pipeline -import psutil - -from zipfile import ZipFile -from io import StringIO -import csv - -# ---- Model Loading ---- - -whisper_models = ["base", "small", "medium", "large"] -source_languages = { - "en": 
"English", - "de": "German", - "es": "Spanish", - "fr": "French", -} - -source_language_list = [key[0] for key in source_languages.items()] - -MODEL_NAME = "openai/whisper-small" -lang = "en" - -device = "cuda" if torch.cuda.is_available() else "cpu" -pipe = pipeline( - task="automatic-speech-recognition", - model=MODEL_NAME, - chunk_length_s=30, - device=device, -) - -pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(language=lang, task="transcribe") - -embedding_model = PretrainedSpeakerEmbedding( - "speechbrain/spkrec-ecapa-voxceleb", - device=torch.device("cuda" if torch.cuda.is_available() else "cpu")) - -# ---- S2T & Speaker diarization ---- - -def transcribe(microphone, file_upload): - warn_output = "" - if (microphone is not None) and (file_upload is not None): - warn_output = ( - "WARNING: You've uploaded an audio file and used the microphone. " - "The recorded file from the microphone will be used and the uploaded audio will be discarded.\n" - ) - - elif (microphone is None) and (file_upload is None): - return "ERROR: You have to either use the microphone or upload an audio file" - - file = microphone if microphone is not None else file_upload - - text = pipe(file)["text"] - - return warn_output + text - - -def convert_time(secs): - return datetime.timedelta(seconds=round(secs)) - -def convert_to_wav(filepath): - _,file_ending = os.path.splitext(f'{filepath}') - audio_file = filepath.replace(file_ending, ".wav") - print("starting conversion to wav") - os.system(f'ffmpeg -i "{filepath}" -ar 16000 -ac 1 -c:a pcm_s16le "{audio_file}"') - return audio_file - - -def speech_to_text(microphone, file_upload, selected_source_lang, whisper_model, num_speakers): - """ - # Transcribe audio file and separate into segment, assign speakers to segments - 1. Using Open AI's Whisper model to seperate audio into segments and generate transcripts. - 2. Generating speaker embeddings for each segments. - 3. Applying agglomerative clustering on the embeddings to identify the speaker for each segment. - - Speech Recognition is based on models from OpenAI Whisper https://github.com/openai/whisper - Speaker diarization model and pipeline from by https://github.com/pyannote/pyannote-audio - """ - - model = whisper.load_model(whisper_model) - time_start = time.time() - - try: - # Read and convert audio file - warn_output = "" - if (microphone is not None) and (file_upload is not None): - warn_output = ( - "WARNING: You've uploaded an audio file and used the microphone. 
" - "The recorded file from the microphone will be used and the uploaded audio will be discarded.\n" - ) - - elif (microphone is None) and (file_upload is None): - return "ERROR: You have to either use the microphone or upload an audio file" - - file = microphone if microphone is not None else file_upload - - if microphone is None and file_upload is not None: - file = convert_to_wav(file) - - # Get duration - with contextlib.closing(wave.open(file,'r')) as f: - frames = f.getnframes() - rate = f.getframerate() - duration = frames / float(rate) - print(f"conversion to wav ready, duration of audio file: {duration}") - - # Transcribe audio - options = dict(language=selected_source_lang, beam_size=3, best_of=3) - transcribe_options = dict(task="transcribe", **options) - result = model.transcribe(file, **transcribe_options) - segments = result["segments"] - print("whisper done with transcription") - except Exception as e: - raise RuntimeError("Error converting audio file") - - try: - # Create embedding - def segment_embedding(segment): - audio = Audio() - start = segment["start"] - # Whisper overshoots the end timestamp in the last segment - end = min(duration, segment["end"]) - clip = Segment(start, end) - waveform, sample_rate = audio.crop(file, clip) - return embedding_model(waveform[None]) - - embeddings = np.zeros(shape=(len(segments), 192)) - for i, segment in enumerate(segments): - embeddings[i] = segment_embedding(segment) - embeddings = np.nan_to_num(embeddings) - print(f'Embedding shape: {embeddings.shape}') - - # Assign speaker label - if num_speakers == 1: - for i in range(len(segments)): - segments[i]["speaker"] = 'SPEAKER 1' - else: - clustering = AgglomerativeClustering(num_speakers).fit(embeddings) - labels = clustering.labels_ - for i in range(len(segments)): - segments[i]["speaker"] = 'SPEAKER ' + str(labels[i] + 1) - - # Make output - objects = { - 'Start' : [], - 'End': [], - 'Speaker': [], - 'Text': [] - } - text = '' - if num_speakers == 1: - objects['Start'].append(str(convert_time(segment["start"]))) - objects['Speaker'].append(segment["speaker"]) - for (i, segment) in enumerate(segments): - text += segment["text"] + ' ' - objects['Text'].append(text) - objects['End'].append(str(convert_time(segment["end"]))) - else: - for (i, segment) in enumerate(segments): - if i == 0 or segments[i - 1]["speaker"] != segment["speaker"]: - objects['Start'].append(str(convert_time(segment["start"]))) - objects['Speaker'].append(segment["speaker"]) - if i != 0: - objects['End'].append(str(convert_time(segments[i - 1]["end"]))) - objects['Text'].append(text) - text = '' - text += segment["text"] + ' ' - objects['End'].append(str(convert_time(segments[i - 1]["end"]))) - objects['Text'].append(text) - - time_end = time.time() - time_diff = time_end - time_start - memory = psutil.virtual_memory() - gpu_utilization, gpu_memory = GPUInfo.gpu_usage() - gpu_utilization = gpu_utilization[0] if len(gpu_utilization) > 0 else 0 - gpu_memory = gpu_memory[0] if len(gpu_memory) > 0 else 0 - system_info = f""" - *Memory: {memory.total / (1024 * 1024 * 1024):.2f}GB, used: {memory.percent}%, available: {memory.available / (1024 * 1024 * 1024):.2f}GB.* - *Processing time: {time_diff:.5} seconds.* - *GPU Utilization: {gpu_utilization}%, GPU Memory: {gpu_memory}MiB.* - """ - - return pd.DataFrame(objects), system_info - - except Exception as e: - raise RuntimeError("Error Running inference with local model", e) - -# ---- Youtube Conversion ---- - -def get_youtube(video_url): - yt = YouTube(video_url) - 
abs_video_path = yt.streams.filter(progressive=True, file_extension='mp4').order_by('resolution').desc().first().download() - print("Success download video") - print(abs_video_path) - return abs_video_path - - - -def yt_to_text(video_file_path, selected_source_lang, whisper_model, num_speakers): - """ - # Transcribe youtube link using OpenAI Whisper - 1. Using Open AI's Whisper model to seperate audio into segments and generate transcripts. - 2. Generating speaker embeddings for each segments. - 3. Applying agglomerative clustering on the embeddings to identify the speaker for each segment. - - Speech Recognition is based on models from OpenAI Whisper https://github.com/openai/whisper - Speaker diarization model and pipeline from by https://github.com/pyannote/pyannote-audio - """ - - model = whisper.load_model(whisper_model) - time_start = time.time() - if(video_file_path == None): - raise ValueError("Error no video input") - print(video_file_path) - - try: - # Read and convert youtube video - _,file_ending = os.path.splitext(f'{video_file_path}') - print(f'file ending is {file_ending}') - audio_file = video_file_path.replace(file_ending, ".wav") - print("starting conversion to wav") - os.system(f'ffmpeg -i "{video_file_path}" -ar 16000 -ac 1 -c:a pcm_s16le "{audio_file}"') - - # Get duration - with contextlib.closing(wave.open(audio_file,'r')) as f: - frames = f.getnframes() - rate = f.getframerate() - duration = frames / float(rate) - print(f"conversion to wav ready, duration of audio file: {duration}") - - # Transcribe audio - options = dict(language=selected_source_lang, beam_size=5, best_of=5) - transcribe_options = dict(task="transcribe", **options) - result = model.transcribe(audio_file, **transcribe_options) - segments = result["segments"] - print("starting whisper done with whisper") - except Exception as e: - raise RuntimeError("Error converting video to audio") - - try: - # Create embedding - def segment_embedding(segment): - audio = Audio() - start = segment["start"] - # Whisper overshoots the end timestamp in the last segment - end = min(duration, segment["end"]) - clip = Segment(start, end) - waveform, sample_rate = audio.crop(audio_file, clip) - return embedding_model(waveform[None]) - - embeddings = np.zeros(shape=(len(segments), 192)) - for i, segment in enumerate(segments): - embeddings[i] = segment_embedding(segment) - embeddings = np.nan_to_num(embeddings) - print(f'Embedding shape: {embeddings.shape}') - - # Assign speaker label - if num_speakers == 1: - for i in range(len(segments)): - segments[i]["speaker"] = 'SPEAKER 1' - else: - clustering = AgglomerativeClustering(num_speakers).fit(embeddings) - labels = clustering.labels_ - for i in range(len(segments)): - segments[i]["speaker"] = 'SPEAKER ' + str(labels[i] + 1) - - # Make output - objects = { - 'Start' : [], - 'End': [], - 'Speaker': [], - 'Text': [] - } - text = '' - if num_speakers == 1: - objects['Start'].append(str(convert_time(segment["start"]))) - objects['Speaker'].append(segment["speaker"]) - for (i, segment) in enumerate(segments): - text += segment["text"] + ' ' - objects['Text'].append(text) - objects['End'].append(str(convert_time(segment["end"]))) - else: - for (i, segment) in enumerate(segments): - if i == 0 or segments[i - 1]["speaker"] != segment["speaker"]: - objects['Start'].append(str(convert_time(segment["start"]))) - objects['Speaker'].append(segment["speaker"]) - if i != 0: - objects['End'].append(str(convert_time(segments[i - 1]["end"]))) - objects['Text'].append(text) - text = '' - text 
+= segment["text"] + ' ' - objects['End'].append(str(convert_time(segments[i - 1]["end"]))) - objects['Text'].append(text) - - time_end = time.time() - time_diff = time_end - time_start - memory = psutil.virtual_memory() - gpu_utilization, gpu_memory = GPUInfo.gpu_usage() - gpu_utilization = gpu_utilization[0] if len(gpu_utilization) > 0 else 0 - gpu_memory = gpu_memory[0] if len(gpu_memory) > 0 else 0 - system_info = f""" - *Memory: {memory.total / (1024 * 1024 * 1024):.2f}GB, used: {memory.percent}%, available: {memory.available / (1024 * 1024 * 1024):.2f}GB.* - *Processing time: {time_diff:.5} seconds.* - *GPU Utilization: {gpu_utilization}%, GPU Memory: {gpu_memory}MiB.* - """ - - return pd.DataFrame(objects), system_info - - except Exception as e: - raise RuntimeError("Error Running inference with local model", e) - -def download_csv(dataframe: pd.DataFrame): - compression_options = dict(method='zip', archive_name='output.csv') - dataframe.to_csv('output.zip', index=False, compression=compression_options) - return 'output.zip' - -# ---- Gradio Layout ---- -# Inspiration from https://huggingface.co/spaces/vumichien/whisper-speaker-diarization - -# -- General Functions -- -df_init = pd.DataFrame(columns=['Start', 'End', 'Speaker', 'Text']) -memory = psutil.virtual_memory() -title = "Whisper speaker diarization & speech recognition" -interface = gr.Blocks(title=title) -interface.encrypt = False - -# -- Functions Audio Input -- -microphone_in = gr.inputs.Audio(source="microphone", - type="filepath", - optional=True) - -upload_in = gr.inputs.Audio(source="upload", - type="filepath", - optional=True) - -selected_source_lang_audio = gr.Dropdown(choices=source_language_list, - type="value", - value="en", - label="Spoken language in audio", - interactive=True) - -selected_whisper_model_audio = gr.Dropdown(choices=whisper_models, - type="value", - value="base", - label="Selected Whisper model", - interactive=True) - -number_speakers_audio = gr.Number(precision=0, - value=2, - label="Selected number of speakers", - interactive=True) - -system_info_audio = gr.Markdown(f"*Memory: {memory.total / (1024 * 1024 * 1024):.2f}GB, used: {memory.percent}%, available: {memory.available / (1024 * 1024 * 1024):.2f}GB*") - -transcription_df_audio = gr.DataFrame(value=df_init, - label="Transcription dataframe", - row_count=(0, "dynamic"), - max_rows = 10, - wrap=True, - overflow_row_behaviour='paginate') - -csv_download_audio = gr.outputs.File(label="Download CSV") - -# -- Functions Video Input -- -video_in = gr.Video(label="Video file", - mirror_webcam=False) - -youtube_url_in = gr.Textbox(label="Youtube url", - lines=1, - interactive=True) - -selected_source_lang_yt = gr.Dropdown(choices=source_language_list, - type="value", - value="en", - label="Spoken language in audio", - interactive=True) - -selected_whisper_model_yt = gr.Dropdown(choices=whisper_models, - type="value", - value="base", - label="Selected Whisper model", - interactive=True) - -number_speakers_yt = gr.Number(precision=0, - value=2, - label="Selected number of speakers", - interactive=True) - -system_info_yt = gr.Markdown(f"*Memory: {memory.total / (1024 * 1024 * 1024):.2f}GB, used: {memory.percent}%, available: {memory.available / (1024 * 1024 * 1024):.2f}GB*") - -transcription_df_yt = gr.DataFrame(value=df_init, - label="Transcription dataframe", - row_count=(0, "dynamic"), - max_rows = 10, - wrap=True, - overflow_row_behaviour='paginate') - -csv_download_yt = gr.outputs.File(label="Download CSV") - -with interface: - with gr.Tab("Whisper 
speaker diarization & speech recognition"): - gr.Markdown(''' -
          - Whisper speaker diarization & speech recognition
          - This space uses Whisper models from OpenAI to recognize speech and the ECAPA-TDNN model from SpeechBrain to encode and classify speakers
          - ''') - - with gr.Row(): - gr.Markdown(''' - ### Transcribe youtube link using OpenAI Whisper - ##### 1. Using Open AI's Whisper model to seperate audio into segments and generate transcripts. - ##### 2. Generating speaker embeddings for each segments. - ##### 3. Applying agglomerative clustering on the embeddings to identify the speaker for each segment. - ''') - - with gr.Row(): - with gr.Column(): - microphone_in.render() - upload_in.render() - with gr.Column(): - gr.Markdown(''' - ##### Here you can start the transcription process. - ##### Please select the source language for transcription. - ##### You should select a number of speakers for getting better results. - ''') - selected_source_lang_audio.render() - selected_whisper_model_audio.render() - number_speakers_audio.render() - transcribe_btn = gr.Button("Transcribe audio and initiate diarization") - transcribe_btn.click(speech_to_text, - [ - microphone_in, - upload_in, - selected_source_lang_audio, - selected_whisper_model_audio, - number_speakers_audio - ], - [ - transcription_df_audio, - system_info_audio - ]) - - - with gr.Row(): - gr.Markdown(''' - ##### Here you will get transcription output - ##### ''') - - - with gr.Row(): - with gr.Column(): - transcription_df_audio.render() - system_info_audio.render() - - with gr.Row(): - with gr.Column(): - download_btn = gr.Button("Download transcription dataframe") - download_btn.click(download_csv, transcription_df_audio, csv_download_audio) - csv_download_audio.render() - - with gr.Row(): - gr.Markdown('''Chair of Data Science and Natural Language Processing - University of St. Gallen''') - - with gr.Tab("Youtube Speech to Text"): - with gr.Row(): - gr.Markdown(''' -
          - Youtube Speech Recognition & Speaker Diarization
          - ''') - - with gr.Row(): - gr.Markdown(''' - ### Transcribe Youtube link - #### Test with the following examples: - ''') - examples = gr.Examples(examples = - [ - "https://www.youtube.com/watch?v=vnc-Q8V4ihQ", - "https://www.youtube.com/watch?v=_B60aTHCE5E", - "https://www.youtube.com/watch?v=4BdKZxD-ziA", - "https://www.youtube.com/watch?v=4ezBjAW26Js", - ], - label="Examples UNISG", - inputs=[youtube_url_in]) - - with gr.Row(): - with gr.Column(): - youtube_url_in.render() - download_youtube_btn = gr.Button("Download Youtube video") - download_youtube_btn.click(get_youtube, [youtube_url_in], [video_in]) - print(video_in) - - with gr.Row(): - with gr.Column(): - video_in.render() - with gr.Column(): - gr.Markdown(''' - #### Start the transcription process. - #### To initiate, please select the source language for transcription. - #### For better performance select the number of speakers. - ''') - selected_source_lang_yt.render() - selected_whisper_model_yt.render() - number_speakers_yt.render() - transcribe_btn = gr.Button("Transcribe audio and initiate diarization") - transcribe_btn.click(yt_to_text, - [ - video_in, - selected_source_lang_yt, - selected_whisper_model_yt, - number_speakers_yt - ], - [ - transcription_df_yt, - system_info_yt - ]) - - with gr.Row(): - gr.Markdown(''' - #### Here you will get transcription output - #### ''') - - with gr.Row(): - with gr.Column(): - transcription_df_yt.render() - system_info_yt.render() - - with gr.Row(): - with gr.Column(): - download_btn = gr.Button("Download transcription dataframe") - download_btn.click(download_csv, transcription_df_audio, csv_download_yt) - csv_download_yt.render() - - with gr.Row(): - gr.Markdown('''Chair of Data Science and Natural Language Processing - University of St. Gallen''') - - -def main(): - interface.launch() - - -if __name__ == "__main__": - main() diff --git a/spaces/matthoffner/AudioCraft_Plus/audiocraft/solvers/audiogen.py b/spaces/matthoffner/AudioCraft_Plus/audiocraft/solvers/audiogen.py deleted file mode 100644 index 1568f97fe7b84b90c7ef760ef5606fe0a475545a..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/AudioCraft_Plus/audiocraft/solvers/audiogen.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from . import builders, musicgen - - -class AudioGenSolver(musicgen.MusicGenSolver): - """Solver for AudioGen re-implementation training task. - - Note that this implementation does not strictly follows - the method proposed in https://arxiv.org/abs/2209.15352 - but is derived from MusicGen's training pipeline. - - More information can be found in the AudioGen model card. 
- """ - DATASET_TYPE: builders.DatasetType = builders.DatasetType.SOUND diff --git a/spaces/matthoffner/chatbot/types/folder.ts b/spaces/matthoffner/chatbot/types/folder.ts deleted file mode 100644 index 7160edea18576e227fadbd3e5d0dce455d3497e6..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot/types/folder.ts +++ /dev/null @@ -1,7 +0,0 @@ -export interface FolderInterface { - id: string; - name: string; - type: FolderType; -} - -export type FolderType = 'chat' | 'prompt'; diff --git a/spaces/megaaziib/RVC-V2-Huggingface-Version/rmvpe.py b/spaces/megaaziib/RVC-V2-Huggingface-Version/rmvpe.py deleted file mode 100644 index 3ad346141340e03bdbaa20121e1ed435bb3da57a..0000000000000000000000000000000000000000 --- a/spaces/megaaziib/RVC-V2-Huggingface-Version/rmvpe.py +++ /dev/null @@ -1,432 +0,0 @@ -import sys, torch, numpy as np, traceback, pdb -import torch.nn as nn -from time import time as ttime -import torch.nn.functional as F - - -class BiGRU(nn.Module): - def __init__(self, input_features, hidden_features, num_layers): - super(BiGRU, self).__init__() - self.gru = nn.GRU( - input_features, - hidden_features, - num_layers=num_layers, - batch_first=True, - bidirectional=True, - ) - - def forward(self, x): - return self.gru(x)[0] - - -class ConvBlockRes(nn.Module): - def __init__(self, in_channels, out_channels, momentum=0.01): - super(ConvBlockRes, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - nn.Conv2d( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - if in_channels != out_channels: - self.shortcut = nn.Conv2d(in_channels, out_channels, (1, 1)) - self.is_shortcut = True - else: - self.is_shortcut = False - - def forward(self, x): - if self.is_shortcut: - return self.conv(x) + self.shortcut(x) - else: - return self.conv(x) + x - - -class Encoder(nn.Module): - def __init__( - self, - in_channels, - in_size, - n_encoders, - kernel_size, - n_blocks, - out_channels=16, - momentum=0.01, - ): - super(Encoder, self).__init__() - self.n_encoders = n_encoders - self.bn = nn.BatchNorm2d(in_channels, momentum=momentum) - self.layers = nn.ModuleList() - self.latent_channels = [] - for i in range(self.n_encoders): - self.layers.append( - ResEncoderBlock( - in_channels, out_channels, kernel_size, n_blocks, momentum=momentum - ) - ) - self.latent_channels.append([out_channels, in_size]) - in_channels = out_channels - out_channels *= 2 - in_size //= 2 - self.out_size = in_size - self.out_channel = out_channels - - def forward(self, x): - concat_tensors = [] - x = self.bn(x) - for i in range(self.n_encoders): - _, x = self.layers[i](x) - concat_tensors.append(_) - return x, concat_tensors - - -class ResEncoderBlock(nn.Module): - def __init__( - self, in_channels, out_channels, kernel_size, n_blocks=1, momentum=0.01 - ): - super(ResEncoderBlock, self).__init__() - self.n_blocks = n_blocks - self.conv = nn.ModuleList() - self.conv.append(ConvBlockRes(in_channels, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv.append(ConvBlockRes(out_channels, out_channels, momentum)) - self.kernel_size = kernel_size - if self.kernel_size is not None: - self.pool = nn.AvgPool2d(kernel_size=kernel_size) - - def forward(self, 
x): - for i in range(self.n_blocks): - x = self.conv[i](x) - if self.kernel_size is not None: - return x, self.pool(x) - else: - return x - - -class Intermediate(nn.Module): # - def __init__(self, in_channels, out_channels, n_inters, n_blocks, momentum=0.01): - super(Intermediate, self).__init__() - self.n_inters = n_inters - self.layers = nn.ModuleList() - self.layers.append( - ResEncoderBlock(in_channels, out_channels, None, n_blocks, momentum) - ) - for i in range(self.n_inters - 1): - self.layers.append( - ResEncoderBlock(out_channels, out_channels, None, n_blocks, momentum) - ) - - def forward(self, x): - for i in range(self.n_inters): - x = self.layers[i](x) - return x - - -class ResDecoderBlock(nn.Module): - def __init__(self, in_channels, out_channels, stride, n_blocks=1, momentum=0.01): - super(ResDecoderBlock, self).__init__() - out_padding = (0, 1) if stride == (1, 2) else (1, 1) - self.n_blocks = n_blocks - self.conv1 = nn.Sequential( - nn.ConvTranspose2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=stride, - padding=(1, 1), - output_padding=out_padding, - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - self.conv2 = nn.ModuleList() - self.conv2.append(ConvBlockRes(out_channels * 2, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv2.append(ConvBlockRes(out_channels, out_channels, momentum)) - - def forward(self, x, concat_tensor): - x = self.conv1(x) - x = torch.cat((x, concat_tensor), dim=1) - for i in range(self.n_blocks): - x = self.conv2[i](x) - return x - - -class Decoder(nn.Module): - def __init__(self, in_channels, n_decoders, stride, n_blocks, momentum=0.01): - super(Decoder, self).__init__() - self.layers = nn.ModuleList() - self.n_decoders = n_decoders - for i in range(self.n_decoders): - out_channels = in_channels // 2 - self.layers.append( - ResDecoderBlock(in_channels, out_channels, stride, n_blocks, momentum) - ) - in_channels = out_channels - - def forward(self, x, concat_tensors): - for i in range(self.n_decoders): - x = self.layers[i](x, concat_tensors[-1 - i]) - return x - - -class DeepUnet(nn.Module): - def __init__( - self, - kernel_size, - n_blocks, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(DeepUnet, self).__init__() - self.encoder = Encoder( - in_channels, 128, en_de_layers, kernel_size, n_blocks, en_out_channels - ) - self.intermediate = Intermediate( - self.encoder.out_channel // 2, - self.encoder.out_channel, - inter_layers, - n_blocks, - ) - self.decoder = Decoder( - self.encoder.out_channel, en_de_layers, kernel_size, n_blocks - ) - - def forward(self, x): - x, concat_tensors = self.encoder(x) - x = self.intermediate(x) - x = self.decoder(x, concat_tensors) - return x - - -class E2E(nn.Module): - def __init__( - self, - n_blocks, - n_gru, - kernel_size, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(E2E, self).__init__() - self.unet = DeepUnet( - kernel_size, - n_blocks, - en_de_layers, - inter_layers, - in_channels, - en_out_channels, - ) - self.cnn = nn.Conv2d(en_out_channels, 3, (3, 3), padding=(1, 1)) - if n_gru: - self.fc = nn.Sequential( - BiGRU(3 * 128, 256, n_gru), - nn.Linear(512, 360), - nn.Dropout(0.25), - nn.Sigmoid(), - ) - else: - self.fc = nn.Sequential( - nn.Linear(3 * N_MELS, N_CLASS), nn.Dropout(0.25), nn.Sigmoid() - ) - - def forward(self, mel): - mel = mel.transpose(-1, -2).unsqueeze(1) - x = self.cnn(self.unet(mel)).transpose(1, 2).flatten(-2) - x 
= self.fc(x) - return x - - -from librosa.filters import mel - - -class MelSpectrogram(torch.nn.Module): - def __init__( - self, - is_half, - n_mel_channels, - sampling_rate, - win_length, - hop_length, - n_fft=None, - mel_fmin=0, - mel_fmax=None, - clamp=1e-5, - ): - super().__init__() - n_fft = win_length if n_fft is None else n_fft - self.hann_window = {} - mel_basis = mel( - sr=sampling_rate, - n_fft=n_fft, - n_mels=n_mel_channels, - fmin=mel_fmin, - fmax=mel_fmax, - htk=True, - ) - mel_basis = torch.from_numpy(mel_basis).float() - self.register_buffer("mel_basis", mel_basis) - self.n_fft = win_length if n_fft is None else n_fft - self.hop_length = hop_length - self.win_length = win_length - self.sampling_rate = sampling_rate - self.n_mel_channels = n_mel_channels - self.clamp = clamp - self.is_half = is_half - - def forward(self, audio, keyshift=0, speed=1, center=True): - factor = 2 ** (keyshift / 12) - n_fft_new = int(np.round(self.n_fft * factor)) - win_length_new = int(np.round(self.win_length * factor)) - hop_length_new = int(np.round(self.hop_length * speed)) - keyshift_key = str(keyshift) + "_" + str(audio.device) - if keyshift_key not in self.hann_window: - self.hann_window[keyshift_key] = torch.hann_window(win_length_new).to( - audio.device - ) - fft = torch.stft( - audio, - n_fft=n_fft_new, - hop_length=hop_length_new, - win_length=win_length_new, - window=self.hann_window[keyshift_key], - center=center, - return_complex=True, - ) - magnitude = torch.sqrt(fft.real.pow(2) + fft.imag.pow(2)) - if keyshift != 0: - size = self.n_fft // 2 + 1 - resize = magnitude.size(1) - if resize < size: - magnitude = F.pad(magnitude, (0, 0, 0, size - resize)) - magnitude = magnitude[:, :size, :] * self.win_length / win_length_new - mel_output = torch.matmul(self.mel_basis, magnitude) - if self.is_half == True: - mel_output = mel_output.half() - log_mel_spec = torch.log(torch.clamp(mel_output, min=self.clamp)) - return log_mel_spec - - -class RMVPE: - def __init__(self, model_path, is_half, device=None): - self.resample_kernel = {} - model = E2E(4, 1, (2, 2)) - ckpt = torch.load(model_path, map_location="cpu") - model.load_state_dict(ckpt) - model.eval() - if is_half == True: - model = model.half() - self.model = model - self.resample_kernel = {} - self.is_half = is_half - if device is None: - device = "cuda" if torch.cuda.is_available() else "cpu" - self.device = device - self.mel_extractor = MelSpectrogram( - is_half, 128, 16000, 1024, 160, None, 30, 8000 - ).to(device) - self.model = self.model.to(device) - cents_mapping = 20 * np.arange(360) + 1997.3794084376191 - self.cents_mapping = np.pad(cents_mapping, (4, 4)) # 368 - - def mel2hidden(self, mel): - with torch.no_grad(): - n_frames = mel.shape[-1] - mel = F.pad( - mel, (0, 32 * ((n_frames - 1) // 32 + 1) - n_frames), mode="reflect" - ) - hidden = self.model(mel) - return hidden[:, :n_frames] - - def decode(self, hidden, thred=0.03): - cents_pred = self.to_local_average_cents(hidden, thred=thred) - f0 = 10 * (2 ** (cents_pred / 1200)) - f0[f0 == 10] = 0 - # f0 = np.array([10 * (2 ** (cent_pred / 1200)) if cent_pred else 0 for cent_pred in cents_pred]) - return f0 - - def infer_from_audio(self, audio, thred=0.03): - audio = torch.from_numpy(audio).float().to(self.device).unsqueeze(0) - # torch.cuda.synchronize() - # t0=ttime() - mel = self.mel_extractor(audio, center=True) - # torch.cuda.synchronize() - # t1=ttime() - hidden = self.mel2hidden(mel) - # torch.cuda.synchronize() - # t2=ttime() - hidden = hidden.squeeze(0).cpu().numpy() - if 
self.is_half == True: - hidden = hidden.astype("float32") - f0 = self.decode(hidden, thred=thred) - # torch.cuda.synchronize() - # t3=ttime() - # print("hmvpe:%s\t%s\t%s\t%s"%(t1-t0,t2-t1,t3-t2,t3-t0)) - return f0 - - def to_local_average_cents(self, salience, thred=0.05): - # t0 = ttime() - center = np.argmax(salience, axis=1) # 帧长#index - salience = np.pad(salience, ((0, 0), (4, 4))) # 帧长,368 - # t1 = ttime() - center += 4 - todo_salience = [] - todo_cents_mapping = [] - starts = center - 4 - ends = center + 5 - for idx in range(salience.shape[0]): - todo_salience.append(salience[:, starts[idx] : ends[idx]][idx]) - todo_cents_mapping.append(self.cents_mapping[starts[idx] : ends[idx]]) - # t2 = ttime() - todo_salience = np.array(todo_salience) # 帧长,9 - todo_cents_mapping = np.array(todo_cents_mapping) # 帧长,9 - product_sum = np.sum(todo_salience * todo_cents_mapping, 1) - weight_sum = np.sum(todo_salience, 1) # 帧长 - devided = product_sum / weight_sum # 帧长 - # t3 = ttime() - maxx = np.max(salience, axis=1) # 帧长 - devided[maxx <= thred] = 0 - # t4 = ttime() - # print("decode:%s\t%s\t%s\t%s" % (t1 - t0, t2 - t1, t3 - t2, t4 - t3)) - return devided - - -# if __name__ == '__main__': -# audio, sampling_rate = sf.read("卢本伟语录~1.wav") -# if len(audio.shape) > 1: -# audio = librosa.to_mono(audio.transpose(1, 0)) -# audio_bak = audio.copy() -# if sampling_rate != 16000: -# audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) -# model_path = "/bili-coeus/jupyter/jupyterhub-liujing04/vits_ch/test-RMVPE/weights/rmvpe_llc_half.pt" -# thred = 0.03 # 0.01 -# device = 'cuda' if torch.cuda.is_available() else 'cpu' -# rmvpe = RMVPE(model_path,is_half=False, device=device) -# t0=ttime() -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# t1=ttime() -# print(f0.shape,t1-t0) diff --git a/spaces/mehdidc/text_to_image_ddgan/datasets_prep/stackmnist_data.py b/spaces/mehdidc/text_to_image_ddgan/datasets_prep/stackmnist_data.py deleted file mode 100644 index 3e9b39fcc431abf2c6a5a68df5433281f204ad08..0000000000000000000000000000000000000000 --- a/spaces/mehdidc/text_to_image_ddgan/datasets_prep/stackmnist_data.py +++ /dev/null @@ -1,65 +0,0 @@ -# --------------------------------------------------------------- -# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. -# -# This work is licensed under the NVIDIA Source Code License -# for Denoising Diffusion GAN. To view a copy of this license, see the LICENSE file. 
-# --------------------------------------------------------------- - - -import numpy as np -from PIL import Image -import torchvision.datasets as dset -import torchvision.transforms as transforms - - -class StackedMNIST(dset.MNIST): - def __init__(self, root, train=True, transform=None, target_transform=None, - download=False): - super(StackedMNIST, self).__init__(root=root, train=train, transform=transform, - target_transform=target_transform, download=download) - - index1 = np.hstack([np.random.permutation(len(self.data)), np.random.permutation(len(self.data))]) - index2 = np.hstack([np.random.permutation(len(self.data)), np.random.permutation(len(self.data))]) - index3 = np.hstack([np.random.permutation(len(self.data)), np.random.permutation(len(self.data))]) - self.num_images = 2 * len(self.data) - - self.index = [] - for i in range(self.num_images): - self.index.append((index1[i], index2[i], index3[i])) - - def __len__(self): - return self.num_images - - def __getitem__(self, index): - img = np.zeros((28, 28, 3), dtype=np.uint8) - target = 0 - for i in range(3): - img_, target_ = self.data[self.index[index][i]], int(self.targets[self.index[index][i]]) - img[:, :, i] = img_ - target += target_ * 10 ** (2 - i) - - img = Image.fromarray(img, mode="RGB") - - if self.transform is not None: - img = self.transform(img) - - if self.target_transform is not None: - target = self.target_transform(target) - - return img, target - -def _data_transforms_stacked_mnist(): - """Get data transforms for cifar10.""" - train_transform = transforms.Compose([ - transforms.Pad(padding=2), - transforms.ToTensor(), - transforms.Normalize((0.5,0.5,0.5), (0.5,0.5,0.5)) - ]) - - valid_transform = transforms.Compose([ - transforms.Pad(padding=2), - transforms.ToTensor(), - transforms.Normalize((0.5,0.5,0.5), (0.5,0.5,0.5)) - ]) - - return train_transform, valid_transform diff --git a/spaces/merve/Grounding_DINO_demo/groundingdino/models/GroundingDINO/backbone/swin_transformer.py b/spaces/merve/Grounding_DINO_demo/groundingdino/models/GroundingDINO/backbone/swin_transformer.py deleted file mode 100644 index 1c66194deb5dd370e797e57e2712f44303e568cc..0000000000000000000000000000000000000000 --- a/spaces/merve/Grounding_DINO_demo/groundingdino/models/GroundingDINO/backbone/swin_transformer.py +++ /dev/null @@ -1,802 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# DINO -# Copyright (c) 2022 IDEA. All Rights Reserved. 
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# -------------------------------------------------------- -# modified from https://github.com/SwinTransformer/Swin-Transformer-Object-Detection/blob/master/mmdet/models/backbones/swin_transformer.py -# -------------------------------------------------------- - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -from timm.models.layers import DropPath, to_2tuple, trunc_normal_ - -from groundingdino.util.misc import NestedTensor - - -class Mlp(nn.Module): - """Multilayer perceptron.""" - - def __init__( - self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.0 - ): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -def window_partition(x, window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - window_size (int): Window size - H (int): Height of image - W (int): Width of image - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class WindowAttention(nn.Module): - """Window based multi-head self attention (W-MSA) module with relative position bias. - It supports both of shifted and non-shifted window. - Args: - dim (int): Number of input channels. - window_size (tuple[int]): The height and width of the window. - num_heads (int): Number of attention heads. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set - attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0 - proj_drop (float, optional): Dropout ratio of output. 
Default: 0.0 - """ - - def __init__( - self, - dim, - window_size, - num_heads, - qkv_bias=True, - qk_scale=None, - attn_drop=0.0, - proj_drop=0.0, - ): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim**-0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads) - ) # 2*Wh-1 * 2*Ww-1, nH - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - trunc_normal_(self.relative_position_bias_table, std=0.02) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - """Forward function. - Args: - x: input features with shape of (num_windows*B, N, C) - mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None - """ - B_, N, C = x.shape - qkv = ( - self.qkv(x) - .reshape(B_, N, 3, self.num_heads, C // self.num_heads) - .permute(2, 0, 3, 1, 4) - ) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = q @ k.transpose(-2, -1) - - relative_position_bias = self.relative_position_bias_table[ - self.relative_position_index.view(-1) - ].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1 - ) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute( - 2, 0, 1 - ).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class SwinTransformerBlock(nn.Module): - """Swin Transformer Block. - Args: - dim (int): Number of input channels. - num_heads (int): Number of attention heads. - window_size (int): Window size. - shift_size (int): Shift size for SW-MSA. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float, optional): Stochastic depth rate. 
Default: 0.0 - act_layer (nn.Module, optional): Activation layer. Default: nn.GELU - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__( - self, - dim, - num_heads, - window_size=7, - shift_size=0, - mlp_ratio=4.0, - qkv_bias=True, - qk_scale=None, - drop=0.0, - attn_drop=0.0, - drop_path=0.0, - act_layer=nn.GELU, - norm_layer=nn.LayerNorm, - ): - super().__init__() - self.dim = dim - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention( - dim, - window_size=to_2tuple(self.window_size), - num_heads=num_heads, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - attn_drop=attn_drop, - proj_drop=drop, - ) - - self.drop_path = DropPath(drop_path) if drop_path > 0.0 else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp( - in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop - ) - - self.H = None - self.W = None - - def forward(self, x, mask_matrix): - """Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. - mask_matrix: Attention mask for cyclic shift. - """ - B, L, C = x.shape - H, W = self.H, self.W - assert L == H * W, "input feature has wrong size" - - shortcut = x - x = self.norm1(x) - x = x.view(B, H, W, C) - - # pad feature maps to multiples of window size - pad_l = pad_t = 0 - pad_r = (self.window_size - W % self.window_size) % self.window_size - pad_b = (self.window_size - H % self.window_size) % self.window_size - x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b)) - _, Hp, Wp, _ = x.shape - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) - attn_mask = mask_matrix - else: - shifted_x = x - attn_mask = None - - # partition windows - x_windows = window_partition( - shifted_x, self.window_size - ) # nW*B, window_size, window_size, C - x_windows = x_windows.view( - -1, self.window_size * self.window_size, C - ) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) - else: - x = shifted_x - - if pad_r > 0 or pad_b > 0: - x = x[:, :H, :W, :].contiguous() - - x = x.view(B, H * W, C) - - # FFN - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - - return x - - -class PatchMerging(nn.Module): - """Patch Merging Layer - Args: - dim (int): Number of input channels. - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__(self, dim, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) - self.norm = norm_layer(4 * dim) - - def forward(self, x, H, W): - """Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. 
- """ - B, L, C = x.shape - assert L == H * W, "input feature has wrong size" - - x = x.view(B, H, W, C) - - # padding - pad_input = (H % 2 == 1) or (W % 2 == 1) - if pad_input: - x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2)) - - x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C - x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C - x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C - x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C - x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C - x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C - - x = self.norm(x) - x = self.reduction(x) - - return x - - -class BasicLayer(nn.Module): - """A basic Swin Transformer layer for one stage. - Args: - dim (int): Number of feature channels - depth (int): Depths of this stage. - num_heads (int): Number of attention head. - window_size (int): Local window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__( - self, - dim, - depth, - num_heads, - window_size=7, - mlp_ratio=4.0, - qkv_bias=True, - qk_scale=None, - drop=0.0, - attn_drop=0.0, - drop_path=0.0, - norm_layer=nn.LayerNorm, - downsample=None, - use_checkpoint=False, - ): - super().__init__() - self.window_size = window_size - self.shift_size = window_size // 2 - self.depth = depth - self.use_checkpoint = use_checkpoint - - # build blocks - self.blocks = nn.ModuleList( - [ - SwinTransformerBlock( - dim=dim, - num_heads=num_heads, - window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop, - attn_drop=attn_drop, - drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, - norm_layer=norm_layer, - ) - for i in range(depth) - ] - ) - - # patch merging layer - if downsample is not None: - self.downsample = downsample(dim=dim, norm_layer=norm_layer) - else: - self.downsample = None - - def forward(self, x, H, W): - """Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. 
- """ - - # calculate attention mask for SW-MSA - Hp = int(np.ceil(H / self.window_size)) * self.window_size - Wp = int(np.ceil(W / self.window_size)) * self.window_size - img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1 - h_slices = ( - slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None), - ) - w_slices = ( - slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None), - ) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition( - img_mask, self.window_size - ) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill( - attn_mask == 0, float(0.0) - ) - - for blk in self.blocks: - blk.H, blk.W = H, W - if self.use_checkpoint: - x = checkpoint.checkpoint(blk, x, attn_mask) - else: - x = blk(x, attn_mask) - if self.downsample is not None: - x_down = self.downsample(x, H, W) - Wh, Ww = (H + 1) // 2, (W + 1) // 2 - return x, H, W, x_down, Wh, Ww - else: - return x, H, W, x, H, W - - -class PatchEmbed(nn.Module): - """Image to Patch Embedding - Args: - patch_size (int): Patch token size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - norm_layer (nn.Module, optional): Normalization layer. Default: None - """ - - def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): - super().__init__() - patch_size = to_2tuple(patch_size) - self.patch_size = patch_size - - self.in_chans = in_chans - self.embed_dim = embed_dim - - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - if norm_layer is not None: - self.norm = norm_layer(embed_dim) - else: - self.norm = None - - def forward(self, x): - """Forward function.""" - # padding - _, _, H, W = x.size() - if W % self.patch_size[1] != 0: - x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1])) - if H % self.patch_size[0] != 0: - x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0])) - - x = self.proj(x) # B C Wh Ww - if self.norm is not None: - Wh, Ww = x.size(2), x.size(3) - x = x.flatten(2).transpose(1, 2) - x = self.norm(x) - x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww) - - return x - - -class SwinTransformer(nn.Module): - """Swin Transformer backbone. - A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` - - https://arxiv.org/pdf/2103.14030 - Args: - pretrain_img_size (int): Input image size for training the pretrained model, - used in absolute postion embedding. Default 224. - patch_size (int | tuple(int)): Patch size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - depths (tuple[int]): Depths of each Swin Transformer stage. - num_heads (tuple[int]): Number of attention head of each stage. - window_size (int): Window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. - drop_rate (float): Dropout rate. 
- attn_drop_rate (float): Attention dropout rate. Default: 0. - drop_path_rate (float): Stochastic depth rate. Default: 0.2. - norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm. - ape (bool): If True, add absolute position embedding to the patch embedding. Default: False. - patch_norm (bool): If True, add normalization after patch embedding. Default: True. - out_indices (Sequence[int]): Output from which stages. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - dilation (bool): if True, the output size if 16x downsample, ow 32x downsample. - """ - - def __init__( - self, - pretrain_img_size=224, - patch_size=4, - in_chans=3, - embed_dim=96, - depths=[2, 2, 6, 2], - num_heads=[3, 6, 12, 24], - window_size=7, - mlp_ratio=4.0, - qkv_bias=True, - qk_scale=None, - drop_rate=0.0, - attn_drop_rate=0.0, - drop_path_rate=0.2, - norm_layer=nn.LayerNorm, - ape=False, - patch_norm=True, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, - dilation=False, - use_checkpoint=False, - ): - super().__init__() - - self.pretrain_img_size = pretrain_img_size - self.num_layers = len(depths) - self.embed_dim = embed_dim - self.ape = ape - self.patch_norm = patch_norm - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.dilation = dilation - - # if use_checkpoint: - # print("use_checkpoint!!!!!!!!!!!!!!!!!!!!!!!!") - - # split image into non-overlapping patches - self.patch_embed = PatchEmbed( - patch_size=patch_size, - in_chans=in_chans, - embed_dim=embed_dim, - norm_layer=norm_layer if self.patch_norm else None, - ) - - # absolute position embedding - if self.ape: - pretrain_img_size = to_2tuple(pretrain_img_size) - patch_size = to_2tuple(patch_size) - patches_resolution = [ - pretrain_img_size[0] // patch_size[0], - pretrain_img_size[1] // patch_size[1], - ] - - self.absolute_pos_embed = nn.Parameter( - torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1]) - ) - trunc_normal_(self.absolute_pos_embed, std=0.02) - - self.pos_drop = nn.Dropout(p=drop_rate) - - # stochastic depth - dpr = [ - x.item() for x in torch.linspace(0, drop_path_rate, sum(depths)) - ] # stochastic depth decay rule - - # build layers - self.layers = nn.ModuleList() - # prepare downsample list - downsamplelist = [PatchMerging for i in range(self.num_layers)] - downsamplelist[-1] = None - num_features = [int(embed_dim * 2**i) for i in range(self.num_layers)] - if self.dilation: - downsamplelist[-2] = None - num_features[-1] = int(embed_dim * 2 ** (self.num_layers - 1)) // 2 - for i_layer in range(self.num_layers): - layer = BasicLayer( - # dim=int(embed_dim * 2 ** i_layer), - dim=num_features[i_layer], - depth=depths[i_layer], - num_heads=num_heads[i_layer], - window_size=window_size, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop_rate, - attn_drop=attn_drop_rate, - drop_path=dpr[sum(depths[:i_layer]) : sum(depths[: i_layer + 1])], - norm_layer=norm_layer, - # downsample=PatchMerging if (i_layer < self.num_layers - 1) else None, - downsample=downsamplelist[i_layer], - use_checkpoint=use_checkpoint, - ) - self.layers.append(layer) - - # num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)] - self.num_features = num_features - - # add a norm layer for each output - for i_layer in out_indices: - layer = norm_layer(num_features[i_layer]) - layer_name = f"norm{i_layer}" - 
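- # Register each stage's norm as a module attribute (norm0, norm1, ...) so forward() can retrieve it via getattr(self, f"norm{i}")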
self.add_module(layer_name, layer) - - self._freeze_stages() - - def _freeze_stages(self): - if self.frozen_stages >= 0: - self.patch_embed.eval() - for param in self.patch_embed.parameters(): - param.requires_grad = False - - if self.frozen_stages >= 1 and self.ape: - self.absolute_pos_embed.requires_grad = False - - if self.frozen_stages >= 2: - self.pos_drop.eval() - for i in range(0, self.frozen_stages - 1): - m = self.layers[i] - m.eval() - for param in m.parameters(): - param.requires_grad = False - - # def init_weights(self, pretrained=None): - # """Initialize the weights in backbone. - # Args: - # pretrained (str, optional): Path to pre-trained weights. - # Defaults to None. - # """ - - # def _init_weights(m): - # if isinstance(m, nn.Linear): - # trunc_normal_(m.weight, std=.02) - # if isinstance(m, nn.Linear) and m.bias is not None: - # nn.init.constant_(m.bias, 0) - # elif isinstance(m, nn.LayerNorm): - # nn.init.constant_(m.bias, 0) - # nn.init.constant_(m.weight, 1.0) - - # if isinstance(pretrained, str): - # self.apply(_init_weights) - # logger = get_root_logger() - # load_checkpoint(self, pretrained, strict=False, logger=logger) - # elif pretrained is None: - # self.apply(_init_weights) - # else: - # raise TypeError('pretrained must be a str or None') - - def forward_raw(self, x): - """Forward function.""" - x = self.patch_embed(x) - - Wh, Ww = x.size(2), x.size(3) - if self.ape: - # interpolate the position embedding to the corresponding size - absolute_pos_embed = F.interpolate( - self.absolute_pos_embed, size=(Wh, Ww), mode="bicubic" - ) - x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C - else: - x = x.flatten(2).transpose(1, 2) - x = self.pos_drop(x) - - outs = [] - for i in range(self.num_layers): - layer = self.layers[i] - x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww) - # import ipdb; ipdb.set_trace() - - if i in self.out_indices: - norm_layer = getattr(self, f"norm{i}") - x_out = norm_layer(x_out) - - out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous() - outs.append(out) - # in: - # torch.Size([2, 3, 1024, 1024]) - # outs: - # [torch.Size([2, 192, 256, 256]), torch.Size([2, 384, 128, 128]), \ - # torch.Size([2, 768, 64, 64]), torch.Size([2, 1536, 32, 32])] - return tuple(outs) - - def forward(self, tensor_list: NestedTensor): - x = tensor_list.tensors - - """Forward function.""" - x = self.patch_embed(x) - - Wh, Ww = x.size(2), x.size(3) - if self.ape: - # interpolate the position embedding to the corresponding size - absolute_pos_embed = F.interpolate( - self.absolute_pos_embed, size=(Wh, Ww), mode="bicubic" - ) - x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C - else: - x = x.flatten(2).transpose(1, 2) - x = self.pos_drop(x) - - outs = [] - for i in range(self.num_layers): - layer = self.layers[i] - x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww) - - if i in self.out_indices: - norm_layer = getattr(self, f"norm{i}") - x_out = norm_layer(x_out) - - out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous() - outs.append(out) - # in: - # torch.Size([2, 3, 1024, 1024]) - # out: - # [torch.Size([2, 192, 256, 256]), torch.Size([2, 384, 128, 128]), \ - # torch.Size([2, 768, 64, 64]), torch.Size([2, 1536, 32, 32])] - - # collect for nesttensors - outs_dict = {} - for idx, out_i in enumerate(outs): - m = tensor_list.mask - assert m is not None - mask = F.interpolate(m[None].float(), size=out_i.shape[-2:]).to(torch.bool)[0] - outs_dict[idx] = NestedTensor(out_i, mask) - - return outs_dict - - 
def train(self, mode=True): - """Convert the model into training mode while keep layers freezed.""" - super(SwinTransformer, self).train(mode) - self._freeze_stages() - - -def build_swin_transformer(modelname, pretrain_img_size, **kw): - assert modelname in [ - "swin_T_224_1k", - "swin_B_224_22k", - "swin_B_384_22k", - "swin_L_224_22k", - "swin_L_384_22k", - ] - - model_para_dict = { - "swin_T_224_1k": dict( - embed_dim=96, depths=[2, 2, 6, 2], num_heads=[3, 6, 12, 24], window_size=7 - ), - "swin_B_224_22k": dict( - embed_dim=128, depths=[2, 2, 18, 2], num_heads=[4, 8, 16, 32], window_size=7 - ), - "swin_B_384_22k": dict( - embed_dim=128, depths=[2, 2, 18, 2], num_heads=[4, 8, 16, 32], window_size=12 - ), - "swin_L_224_22k": dict( - embed_dim=192, depths=[2, 2, 18, 2], num_heads=[6, 12, 24, 48], window_size=7 - ), - "swin_L_384_22k": dict( - embed_dim=192, depths=[2, 2, 18, 2], num_heads=[6, 12, 24, 48], window_size=12 - ), - } - kw_cgf = model_para_dict[modelname] - kw_cgf.update(kw) - model = SwinTransformer(pretrain_img_size=pretrain_img_size, **kw_cgf) - return model - - -if __name__ == "__main__": - model = build_swin_transformer("swin_L_384_22k", 384, dilation=True) - x = torch.rand(2, 3, 1024, 1024) - y = model.forward_raw(x) - import ipdb - - ipdb.set_trace() - x = torch.rand(2, 3, 384, 384) - y = model.forward_raw(x) diff --git a/spaces/merve/fill-in-the-blank/public/dataset-worldviews/person-photos.js b/spaces/merve/fill-in-the-blank/public/dataset-worldviews/person-photos.js deleted file mode 100644 index 305b037acebf14e083ead577ce566ad39b81c531..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/public/dataset-worldviews/person-photos.js +++ /dev/null @@ -1,119 +0,0 @@ - -function createPhotoScroller(){ - - var base_path = 'img/woman_washing_clothes.jpeg' - var data = [ - { - 'path': 'img/labels_1.svg', - 'alt': 'Image of a woman washing clothes with bounding boxes including \'person\', and \'bucket\'', - 'x': 198, - 'y': 30, - 'width': 305, - 'height': 400, - }, - - { - 'path': 'img/labels_4.svg', - 'alt': 'Image of a woman washing clothes with bounding boxes including \'parent\', and \'laundry\'', - 'x': 110, - 'y': 60, - 'width': 450, - 'height': 470, - }, - - - { - 'path': 'img/labels_2.svg', - 'alt': 'Image of a woman washing clothes with bounding boxes including \'hair_boho\', and \'decor_outdoor_rustic\'', - 'x': 198, - 'y': -35, - 'width': 395, - 'height': 500 - }, - - { - 'path': 'img/labels_3.svg', - 'alt': 'Image of a woman washing clothes with one bounding box around her, labeled \'pedestrian\'', - 'x': 190, - 'y': 65, - 'width': 190, - 'height': 315 - }, - ]; - - - var photoIndex = 0; - - var c = d3.conventions({ - sel: d3.select('.person-photos').html(''), - height: 550 - }) - - var photoSel = c.svg.append('svg:image') - .attr('x', 50) - .attr('y', 50) - .attr('width', 700) - .attr('height', 500) - .attr('xlink:href', base_path) - - var photoSel = c.svg.appendMany('svg:image', data) - .attr('x', d => d.x) - .attr('y', d => d.y) - .attr('width', d => d.width) - .attr('height', d => d.height) - .attr('xlink:href', d => d.path) - .attr('alt', d => d.alt) - - - var buttonHeight = 35 - var buttonWidth = 130 - - var buttonSel = c.svg.appendMany('g.photo-button', data) - .translate((d,i) => [(i * 170) + 100, 0]) - .at({ - // class: "dropdown" - }) - .on('click', function(d, i){ - photoIndex = i - setActiveImage() - timer.stop(); - }) - - buttonSel.append('rect') - .at({ - height: buttonHeight, - width: buttonWidth, - // fill: '#fff' - }) - - 
buttonSel.append('text') - .at({ - textAnchor: 'middle', - // dominantBaseline: 'central', - dy: '.33em', - x: buttonWidth/2, - y: buttonHeight/2, - class: "monospace" - }) - .text((d,i) => 'ground truth ' + (i + 1)) - - // buttonSel.classed('dropdown', true); - - if (window.__photoPersonTimer) window.__photoPersonTimer.stop() - var timer = window.__photoPersonTimer = d3.interval(() => { - photoIndex = (photoIndex + 1) % data.length; - setActiveImage() - }, 2000) - - function setActiveImage(i){ - photoSel.st({opacity: (d, i) => i == photoIndex ? 1 : 0 }) - buttonSel.classed('is-active-button', (d, i) => i == photoIndex) - } - setActiveImage() -} - -createPhotoScroller(); - - - - diff --git a/spaces/merve/hidden-bias/public/dataset-worldviews/style.css b/spaces/merve/hidden-bias/public/dataset-worldviews/style.css deleted file mode 100644 index b8cdd4b074388e961c5dd22322a9e056903f2b2c..0000000000000000000000000000000000000000 --- a/spaces/merve/hidden-bias/public/dataset-worldviews/style.css +++ /dev/null @@ -1,260 +0,0 @@ -:root { - --shaded-shape-color: #9e9e9e; - --not-shaded-shape-color: white; - --classifier-bg-color: #e6e6e6; -} - -.right { - float: right; -} -.left { - float: left; -} - -.gt-shaded { - fill: var(--shaded-shape-color); - stroke: black; - stroke-width: 1; -} - -.gt-unshaded { - fill: var(--not-shaded-shape-color); - stroke: black; - stroke-width: 1; -} - -.shape-label-group { - opacity: 0; -} -.shape-label-group.visible { - opacity: 100; -} - -.incorrect.is-classified { - stroke-width: 2; - transition: stroke-width 0.5s; - transition-timing-function: cubic-bezier(0, 7, 0, 7); - stroke: #d15830; -} - -.correct.is-classified { - stroke-width: 1; - stroke: green; -} - -.shape-label-rect { - opacity: 50; - fill: white; - stroke: none; -} - -.shape-label-text { - color: black; -} - -.source { - text-decoration: none; - font-size: 10px; -} - -.newspaper-image { - width: 450px; -} - -.interface-image { - width: 450px; -} -.summary-text { - opacity: 0; - padding-top: 0px; - padding-bottom: 20px; - text-indent: 50px; -} - -.summary-text.is-classified { - transition: opacity 1000ms; - transition-delay: 2500ms; - opacity: 100; -} - -.classifier { - /* fill:#c2c2c2; - stroke-width: 0;*/ - opacity: 0; -} - -.classifier.is-classified { - transition: opacity 1000ms; - transition-delay: 1500ms; - opacity: 100; - fill: #c2c2c2; - stroke-width: 2; -} - -.classifier-text { - text-anchor: middle; - /*alignment-baseline: central;*/ - font-size: 30px; -} - -.classifier-caption { - width: 800px; - text-align: center; - position: relative; - left: 50%; - margin-left: -400px; - font-size: 12px; - /*right: 50%;*/ -} - -.classifier-bg-shaded { - fill: var(--classifier-bg-color); - stroke-width: 0; -} - -.classifier-bg-unshaded { - fill: var(--classifier-bg-color); -} - -.item-text.invisible { - fill-opacity: 10; -} -.item-text { - fill-opacity: 100; -} - -.explainer-label-text { - padding-left: 2px; - padding-right: 2px; - padding-top: 1px; - padding-bottom: 1px; -} - -mark { - padding-left: 2px; - padding-right: 2px; - padding-top: 1px; - padding-bottom: 1px; - outline: 1px solid #000000; -} - -img.interface { - padding-top: 20px; - padding-right: 20px; - padding-bottom: 20px; - padding-left: 20px; -} - -.classifier-button { - padding: 10px 20px; - text-align: center; - font-family: "Google Sans", sans-serif; - margin-left: 20px; - margin-right: 20px; -} - -.classifer-bg-text { - font-family: "Consolas", "monaco", "monospace"; -} - -.emphasis { - font-weight: 500; -} - -.dropdown { - padding: 
8px 7px; - min-width: 200px; - background-color: #f9f9f9; - box-shadow: 0px 8px 16px 0px rgba(0, 0, 0, 0.2); - font-family: "Google Sans", sans-serif; - font-size: 14px; -} - -.fake-dropdown { - padding-top: 10px; - padding-bottom: 10px; - padding-left: 10px; - padding-right: 10px; -} - -.monospace { - font-family: "Consolas", "monaco", "monospace"; - font-size: 14px; - font-weight: 500; -} - -.monospace.shaded { - background-color: var(--shaded-shape-color); - outline: 1px solid #000000; - padding: 1px; - font-size: 14px; -} - -.monospace.not-shaded { - background-color: var(--not-shaded-shape-color); - outline: 1px solid #000000; - padding: 1px; - font-size: 14px; -} - -.classifier-info-blurb { - font-style: italic; - font-size: 11; -} - -.photo-button { - cursor: pointer; -} - -.photo-button rect { - fill: #ffffff; -} - -.photo-button.is-active-button rect { - stroke: #000; -} - -.explainer-button { - cursor: pointer; -} - -.explainer-button rect { - fill: #f9f9f9; - stroke: #000000; -} - -.explainer-button.explainer-active-button rect { - fill: #fefefe; - stroke-width: 3; -} - -.tooltip { - width: 180px; - text-align: center; -} - -.tooltip .correct-row span { - outline: 1px solid red; - padding: 2px; -} - -.tooltip .correct-row.is-correct-tooltip span { - outline: 1px solid green; -} - -#row.row-highlighted { - opacity: 0.2; -} - -.shape-row-unhighlighted { - opacity: 0.2; -} - -.results-table { - text-align: center; -} - -.results-table tr.active { - background-color: var(--classifier-bg-color); - outline: 1px solid; -} diff --git a/spaces/mfrashad/CharacterGAN/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/__init__.py b/spaces/mfrashad/CharacterGAN/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/__init__.py deleted file mode 100644 index b570848421afd921fae635569c97d0f8f5b33c80..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/CharacterGAN/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -from .config import BigGANConfig -from .model import BigGAN -from .file_utils import PYTORCH_PRETRAINED_BIGGAN_CACHE, cached_path -from .utils import (truncated_noise_sample, save_as_images, - convert_to_images, display_in_terminal, - one_hot_from_int, one_hot_from_names) diff --git a/spaces/microsoft-cognitive-service/mm-react/README.md b/spaces/microsoft-cognitive-service/mm-react/README.md deleted file mode 100644 index fbb8c1154ea80fa02c8c0d3bd8f1a47988739b1b..0000000000000000000000000000000000000000 --- a/spaces/microsoft-cognitive-service/mm-react/README.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -title: mm-react -emoji: 💻 -colorFrom: indigo -colorTo: pink -sdk: docker -pinned: false -license: other ---- - -

          Additional Details

          MM-ReAct Website · MM-ReAct Paper · MM-ReAct Code

          - -* If you modify the code you can build "langchain-0.0.94-py3-none-any.whl" from [this folder](https://github.com/microsoft/MM-REACT/tree/main/langchain) using "poetry build" -* [List of environment Variables](https://github.com/microsoft/MM-REACT#here-are-the-list-of-resources-you-need-to-set-up-in-azure-and-their-environment-variables) you need to set as SECRET in huggingface space. diff --git a/spaces/mindspore-ai/Wukong-Huahua/README.md b/spaces/mindspore-ai/Wukong-Huahua/README.md deleted file mode 100644 index 81027f0c682f39e3bcd7d68880b8654a39377078..0000000000000000000000000000000000000000 --- a/spaces/mindspore-ai/Wukong-Huahua/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Wukong Huahua -emoji: 🐠 -colorFrom: purple -colorTo: pink -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mrm8488/hf-diffusers/app.py b/spaces/mrm8488/hf-diffusers/app.py deleted file mode 100644 index e9ecd7a02d654c895dbf3da8651bb4c6d2721294..0000000000000000000000000000000000000000 --- a/spaces/mrm8488/hf-diffusers/app.py +++ /dev/null @@ -1,43 +0,0 @@ -from diffusers import DDPMPipeline -import gradio as gr -from ui import title, description, examples - - -RES = None - -models = [ - {'type': 'pokemon', 'res': 64, 'id': 'mrm8488/ddpm-ema-pokemon-64'}, - {'type': 'flowers', 'res': 64, 'id': 'mrm8488/ddpm-ema-flower-64'}, - {'type': 'anime_faces', 'res': 128, 'id': 'mrm8488/ddpm-ema-anime-v2-128'}, - {'type': 'butterflies', 'res': 128, 'id': 'mrm8488/ddpm-ema-butterflies-128'}, - #{'type': 'human_faces', 'res': 256, 'id': 'fusing/ddpm-celeba-hq'} -] -for model in models: - print(model) - pipeline = DDPMPipeline.from_pretrained(model['id']) - pipeline.save_pretrained('.') - model['pipeline'] = pipeline - - -def predict(type): - pipeline = None - for model in models: - if model['type'] == type: - pipeline = model['pipeline'] - RES = model['res'] - break - # run pipeline in inference - image = pipeline()["sample"] - - return image[0] - - -gr.Interface( - predict, - inputs=[gr.components.Dropdown(choices=[model['type'] for model in models], label='Choose a model') - ], - outputs=[gr.Image(shape=(64,64), type="pil", - elem_id="generated_image")], - title=title, - description=description -).launch() diff --git a/spaces/mrneuralnet/P-DFD/dataset/celeb_df.py b/spaces/mrneuralnet/P-DFD/dataset/celeb_df.py deleted file mode 100644 index f5bd36653918e0da454c7bf5988561a2b9d885a8..0000000000000000000000000000000000000000 --- a/spaces/mrneuralnet/P-DFD/dataset/celeb_df.py +++ /dev/null @@ -1,126 +0,0 @@ -import numpy as np -from glob import glob -from os import listdir -from os.path import join -from dataset import AbstractDataset - -SPLITS = ["train", "test"] - - -class CelebDF(AbstractDataset): - """ - Celeb-DF v2 Dataset proposed in "Celeb-DF: A Large-scale Challenging Dataset for DeepFake Forensics". 
- """ - - def __init__(self, cfg, seed=2022, transforms=None, transform=None, target_transform=None): - # pre-check - if cfg['split'] not in SPLITS: - raise ValueError(f"split should be one of {SPLITS}, but found {cfg['split']}.") - super(CelebDF, self).__init__(cfg, seed, transforms, transform, target_transform) - print(f"Loading data from 'Celeb-DF' of split '{cfg['split']}'" - f"\nPlease wait patiently...") - self.categories = ['original', 'fake'] - self.root = cfg['root'] - images_ids = self.__get_images_ids() - test_ids = self.__get_test_ids() - train_ids = [images_ids[0] - test_ids[0], - images_ids[1] - test_ids[1], - images_ids[2] - test_ids[2]] - self.images, self.targets = self.__get_images( - test_ids if cfg['split'] == "test" else train_ids, cfg['balance']) - assert len(self.images) == len(self.targets), "The number of images and targets not consistent." - print("Data from 'Celeb-DF' loaded.\n") - print(f"Dataset contains {len(self.images)} images.\n") - - def __get_images_ids(self): - youtube_real = listdir(join(self.root, 'YouTube-real', 'images')) - celeb_real = listdir(join(self.root, 'Celeb-real', 'images')) - celeb_fake = listdir(join(self.root, 'Celeb-synthesis', 'images')) - return set(youtube_real), set(celeb_real), set(celeb_fake) - - def __get_test_ids(self): - youtube_real = set() - celeb_real = set() - celeb_fake = set() - with open(join(self.root, "List_of_testing_videos.txt"), "r", encoding="utf-8") as f: - contents = f.readlines() - for line in contents: - name = line.split(" ")[-1] - number = name.split("/")[-1].split(".")[0] - if "YouTube-real" in name: - youtube_real.add(number) - elif "Celeb-real" in name: - celeb_real.add(number) - elif "Celeb-synthesis" in name: - celeb_fake.add(number) - else: - raise ValueError("'List_of_testing_videos.txt' file corrupted.") - return youtube_real, celeb_real, celeb_fake - - def __get_images(self, ids, balance=False): - real = list() - fake = list() - # YouTube-real - for _ in ids[0]: - real.extend(glob(join(self.root, 'YouTube-real', 'images', _, '*.png'))) - # Celeb-real - for _ in ids[1]: - real.extend(glob(join(self.root, 'Celeb-real', 'images', _, '*.png'))) - # Celeb-synthesis - for _ in ids[2]: - fake.extend(glob(join(self.root, 'Celeb-synthesis', 'images', _, '*.png'))) - print(f"Real: {len(real)}, Fake: {len(fake)}") - if balance: - fake = np.random.choice(fake, size=len(real), replace=False) - print(f"After Balance | Real: {len(real)}, Fake: {len(fake)}") - real_tgt = [0] * len(real) - fake_tgt = [1] * len(fake) - return [*real, *fake], [*real_tgt, *fake_tgt] - - -if __name__ == '__main__': - import yaml - - config_path = "../config/dataset/celeb_df.yml" - with open(config_path) as config_file: - config = yaml.load(config_file, Loader=yaml.FullLoader) - config = config["train_cfg"] - # config = config["test_cfg"] - - def run_dataset(): - dataset = CelebDF(config) - print(f"dataset: {len(dataset)}") - for i, _ in enumerate(dataset): - path, target = _ - print(f"path: {path}, target: {target}") - if i >= 9: - break - - - def run_dataloader(display_samples=False): - from torch.utils import data - import matplotlib.pyplot as plt - - dataset = CelebDF(config) - dataloader = data.DataLoader(dataset, batch_size=8, shuffle=True) - print(f"dataset: {len(dataset)}") - for i, _ in enumerate(dataloader): - path, targets = _ - image = dataloader.dataset.load_item(path) - print(f"image: {image.shape}, target: {targets}") - if display_samples: - plt.figure() - img = image[0].permute([1, 2, 0]).numpy() - plt.imshow(img) - # 
plt.savefig("./img_" + str(i) + ".png") - plt.show() - if i >= 9: - break - - - ########################### - # run the functions below # - ########################### - - # run_dataset() - run_dataloader(False) diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/dataclass/constants.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/dataclass/constants.py deleted file mode 100644 index 4f159cfe9ac72b0524228fe290181c6898787265..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/dataclass/constants.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from enum import Enum, EnumMeta -from typing import List - - -class StrEnumMeta(EnumMeta): - # this is workaround for submitit pickling leading to instance checks failing in hydra for StrEnum, see - # https://github.com/facebookresearch/hydra/issues/1156 - @classmethod - def __instancecheck__(cls, other): - return "enum" in str(type(other)) - - -class StrEnum(Enum, metaclass=StrEnumMeta): - def __str__(self): - return self.value - - def __eq__(self, other: str): - return self.value == other - - def __repr__(self): - return self.value - - def __hash__(self): - return hash(str(self)) - - -def ChoiceEnum(choices: List[str]): - """return the Enum class used to enforce list of choices""" - return StrEnum("Choices", {k: k for k in choices}) - - -LOG_FORMAT_CHOICES = ChoiceEnum(["json", "none", "simple", "tqdm"]) -DDP_BACKEND_CHOICES = ChoiceEnum([ - "c10d", # alias for pytorch_ddp - "fully_sharded", # FullyShardedDataParallel from fairscale - "legacy_ddp", - "no_c10d", # alias for legacy_ddp - "pytorch_ddp", - "slow_mo", -]) -DDP_COMM_HOOK_CHOICES = ChoiceEnum(["none", "fp16"]) -DATASET_IMPL_CHOICES = ChoiceEnum(["raw", "lazy", "cached", "mmap", "fasta", "huffman"]) -GENERATION_CONSTRAINTS_CHOICES = ChoiceEnum(["ordered", "unordered"]) -GENERATION_DECODING_FORMAT_CHOICES = ChoiceEnum( - ["unigram", "ensemble", "vote", "dp", "bs"] -) -ZERO_SHARDING_CHOICES = ChoiceEnum(["none", "os"]) -PIPELINE_CHECKPOINT_CHOICES = ChoiceEnum(["always", "never", "except_last"]) -PRINT_ALIGNMENT_CHOICES = ChoiceEnum(["hard", "soft"]) diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/modules/gelu.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/modules/gelu.py deleted file mode 100644 index a2f1ecff4a3ae3de3eb7d327b9163c46b18a15ed..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/modules/gelu.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-""" -See "Gaussian Error Linear Units (GELUs)" by Dan Hendrycks and Kevin Gimpel with -the corresponding GitHub repo: https://github.com/hendrycks/GELUs -""" - -import math - -import torch -import torch.nn as nn - - -def gelu_accurate(x): - if not hasattr(gelu_accurate, "_a"): - gelu_accurate._a = math.sqrt(2 / math.pi) - return ( - 0.5 * x * (1 + torch.tanh(gelu_accurate._a * (x + 0.044715 * torch.pow(x, 3)))) - ) - - -def gelu(x: torch.Tensor) -> torch.Tensor: - return torch.nn.functional.gelu(x.float()).type_as(x) diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/optim/__init__.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/optim/__init__.py deleted file mode 100644 index be783be896396ff659c0bd173a7acebb8a2d165d..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/optim/__init__.py +++ /dev/null @@ -1,48 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -"""isort:skip_file""" - -import importlib -import os - -from fairseq import registry -from fairseq.optim.bmuf import FairseqBMUF # noqa -from fairseq.optim.fairseq_optimizer import ( # noqa - FairseqOptimizer, - LegacyFairseqOptimizer, -) -from fairseq.optim.amp_optimizer import AMPOptimizer -from fairseq.optim.fp16_optimizer import FP16Optimizer, MemoryEfficientFP16Optimizer -from fairseq.optim.shard import shard_ -from omegaconf import DictConfig - -__all__ = [ - "AMPOptimizer", - "FairseqOptimizer", - "FP16Optimizer", - "MemoryEfficientFP16Optimizer", - "shard_", -] - -( - _build_optimizer, - register_optimizer, - OPTIMIZER_REGISTRY, - OPTIMIZER_DATACLASS_REGISTRY, -) = registry.setup_registry("--optimizer", base_class=FairseqOptimizer, required=True) - - -def build_optimizer(cfg: DictConfig, params, *extra_args, **extra_kwargs): - if all(isinstance(p, dict) for p in params): - params = [t for p in params for t in p.values()] - params = list(filter(lambda p: p.requires_grad, params)) - return _build_optimizer(cfg, params, *extra_args, **extra_kwargs) - - -# automatically import any Python files in the optim/ directory -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_"): - file_name = file[: file.find(".py")] - importlib.import_module("fairseq.optim." + file_name) diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/tasks/translation_from_pretrained_xlm.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/tasks/translation_from_pretrained_xlm.py deleted file mode 100644 index a05f2891524a8b23482e206c1742c3b816b77afb..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/tasks/translation_from_pretrained_xlm.py +++ /dev/null @@ -1,39 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass -from fairseq.data.legacy.masked_lm_dictionary import MaskedLMDictionary -from fairseq.tasks.translation import TranslationConfig, TranslationTask - -from . 
import register_task - - -@dataclass -class TranslationFromPretrainedXLMConfig(TranslationConfig): - pass - - -@register_task( - "translation_from_pretrained_xlm", dataclass=TranslationFromPretrainedXLMConfig -) -class TranslationFromPretrainedXLMTask(TranslationTask): - """ - Same as TranslationTask except use the MaskedLMDictionary class so that - we can load data that was binarized with the MaskedLMDictionary class. - - This task should be used for the entire training pipeline when we want to - train an NMT model from a pretrained XLM checkpoint: binarizing NMT data, - training NMT with the pretrained XLM checkpoint, and subsequent evaluation - of that trained model. - """ - - @classmethod - def load_dictionary(cls, filename): - """Load the masked LM dictionary from the filename - - Args: - filename (str): the filename - """ - return MaskedLMDictionary.load(filename) diff --git a/spaces/msmilauer/AutoGPT-duplicated2/autogpt/permanent_memory/__init__.py b/spaces/msmilauer/AutoGPT-duplicated2/autogpt/permanent_memory/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/nasa-cisto-data-science-group/satvision-base-demo/app.py b/spaces/nasa-cisto-data-science-group/satvision-base-demo/app.py deleted file mode 100644 index 342f2fb97c2320fd6d9246e1ae7d74f510d73672..0000000000000000000000000000000000000000 --- a/spaces/nasa-cisto-data-science-group/satvision-base-demo/app.py +++ /dev/null @@ -1,105 +0,0 @@ -import streamlit as st -import numpy as np -import os -import pathlib -from inference import infer, InferenceModel - -# ----------------------------------------------------------------------------- -# class SatvisionDemoApp -# -# Directory Structure: base-directory/MOD09GA/year -# MOD09GQ/year -# MYD09GA/year -# MYD09GQ/year -# -# ----------------------------------------------------------------------------- -class SatvisionDemoApp: - - # ------------------------------------------------------------------------- - # __init__ - # ------------------------------------------------------------------------- - def __init__(self): - - self.thumbnail_dir = pathlib.Path('data/thumbnails') - self.image_dir = pathlib.Path('data/images') - print(self.thumbnail_dir) - self.thumbnail_files = sorted(list(self.thumbnail_dir.glob('sv-*.png'))) - self.image_files = sorted(list(self.image_dir.glob('sv-*.npy'))) - print(list(self.image_files)) - self.thumbnail_names = [str(tn_path.name) for tn_path in self.thumbnail_files] - print(self.thumbnail_names) - - self.inferenceModel = InferenceModel() - - # ------------------------------------------------------------------------- - # render_sidebar - # ------------------------------------------------------------------------- - def render_sidebar(self): - - st.sidebar.header("Select an Image") - - for index, thumbnail in enumerate(self.thumbnail_names): - - thumbnail_path = self.thumbnail_dir / thumbnail - - # thumbnail_arr = np.load(thumbnail_path) - print(str(thumbnail_path)) - - st.sidebar.image(str(thumbnail_path), use_column_width=True, caption=thumbnail) - - # ------------------------------------------------------------------------- - # render_main_app - # ------------------------------------------------------------------------- - def render_main_app(self): - - st.title("Satvision-Base Demo") - - st.header("Image Reconstruction Process") - selected_image_index = st.sidebar.selectbox( - "Select an Image", - self.thumbnail_names) - print(selected_image_index) - - 
selected_image = self.load_selected_image(selected_image_index) - - image, masked_input, output = self.inferenceModel.infer(selected_image) - - col1, col2, col3 = st.columns(3, gap="large") - - # Display the selected image with a title three times side-by-side - - with col1: - st.image(image, use_column_width=True, caption="Input") - - with col2: - st.image(masked_input, use_column_width=True, caption="Input Masked") - - with col3: - st.image(output, use_column_width=True, caption="Reconstruction") - - # ------------------------------------------------------------------------- - # load_selected_image - # ------------------------------------------------------------------------- - def load_selected_image(self, image_name): - - # Load the selected image using NumPy (replace this with your image loading code) - image_name = image_name.replace('.png', '.npy') - - image = np.load(self.image_dir / image_name) - image = np.moveaxis(image, 0, 2) - return image - -# ----------------------------------------------------------------------------- -# main -# ----------------------------------------------------------------------------- -def main(): - - app = SatvisionDemoApp() - - app.render_main_app() - - app.render_sidebar() - -if __name__ == "__main__": - - main() \ No newline at end of file diff --git a/spaces/nateraw/deepafx-st/deepafx_st/processors/autodiff/channel.py b/spaces/nateraw/deepafx-st/deepafx_st/processors/autodiff/channel.py deleted file mode 100644 index e48a3cc358f7d4ec668ba76cc86e9bf1f7f76b55..0000000000000000000000000000000000000000 --- a/spaces/nateraw/deepafx-st/deepafx_st/processors/autodiff/channel.py +++ /dev/null @@ -1,28 +0,0 @@ -import torch - -from deepafx_st.processors.autodiff.compressor import Compressor -from deepafx_st.processors.autodiff.peq import ParametricEQ -from deepafx_st.processors.autodiff.fir import FIRFilter - - -class AutodiffChannel(torch.nn.Module): - def __init__(self, sample_rate): - super().__init__() - - self.peq = ParametricEQ(sample_rate) - self.comp = Compressor(sample_rate) - self.ports = [self.peq.ports, self.comp.ports] - self.num_control_params = ( - self.peq.num_control_params + self.comp.num_control_params - ) - - def forward(self, x, p, sample_rate=24000, **kwargs): - - # split params between EQ and Comp. - p_peq = p[:, : self.peq.num_control_params] - p_comp = p[:, self.peq.num_control_params :] - - y = self.peq(x, p_peq, sample_rate) - y = self.comp(y, p_comp, sample_rate) - - return y diff --git a/spaces/nateraw/fuego/app.py b/spaces/nateraw/fuego/app.py deleted file mode 100644 index 29fde1ae0baed0be8924182af4021dace054981c..0000000000000000000000000000000000000000 --- a/spaces/nateraw/fuego/app.py +++ /dev/null @@ -1,287 +0,0 @@ -# Gradio app to run fuego.github_run() on Hugging Face Spaces -# Hosted at https://hf.co/nateraw/fuego -import gradio as gr -import yaml - -import fuego - - -def fuego_github_run_wrapper( - token, - github_repo_id, - github_repo_branch, - script, - requirements_file, - extra_requirements, - script_args, - output_dirs, - private, - delete_space_on_completion, - downgrade_hardware_on_completion, - space_hardware, -): - if not token.strip(): - return gr.update( - value="""## token with write access is required. 
Get one from here""", - visible=True, - ) - - if script_args.strip(): - script_args = yaml.safe_load(script_args) - - if not requirements_file.strip(): - requirements_file = None - - if extra_requirements.strip(): - extra_requirements = [x.strip() for x in extra_requirements.split("\n")] - else: - extra_requirements = None - - if output_dirs.strip(): - output_dirs = [x.strip() for x in output_dirs.split(",")] - - github_repo_id = github_repo_id.strip() - if not github_repo_id: - return gr.update(value="## GitHub repo ID is required", visible=True) - - script = script.strip() - if not script: - return gr.update(value="## script is required", visible=True) - - github_repo_branch = github_repo_branch.strip() - if not github_repo_branch: - return gr.update("## github repo branch is required", visible=True) - - space_url, dataset_url = fuego.github_run( - github_repo_id.strip(), - script.strip(), - requirements_file, - github_repo_branch, - space_hardware=space_hardware, - private=private, - delete_space_on_completion=delete_space_on_completion, - downgrade_hardware_on_completion=downgrade_hardware_on_completion, - space_output_dirs=output_dirs, - extra_requirements=extra_requirements, - token=token, - **script_args, - ) - output_message = f""" - ## Job launched successfully! 🚀 - - Link to Space - - Link to Dataset - """ - return gr.update(value=output_message, visible=True) - - -description = """ -This app lets you run scripts from GitHub on Spaces, using any hardware you'd like. Just point to a repo, the script you'd like to run, the dependencies to install, and any args to pass to your script, and watch it go. 😎 - -It uses 🔥[fuego](https://github.com/huggingface/fuego)🔥 under the hood to launch your script in one line of Python code. Give the repo a ⭐️ if you think its 🔥. - -**Note: You'll need a Hugging Face token with write access, which you can get from [here](https://hf.co/settings/tokens)** -""" - -additional_info = """ -## Pricing - -Runs using this tool are **free** as long as you use `cpu-basic` hardware. 🔥 - -**See pricing for accelerated hardware (anything other than `cpu-basic`) [here](https://hf.co/pricing#spaces)** - -## What this space does: - 1. Spins up 2 new HF repos for you: a "runner" space repo and an "output" dataset repo. - 2. Uploads your code to the space, as well as some wrapper code that invokes your script. - 3. Runs your code on the space via the wrapper. Logs should show up in the space. - 4. When the script is done, it takes anything saved to the `output_dirs` and uploads the files within to the output dataset repo - 5. Deletes the space (or downgrades, or just leaves on). Depends on your choice of `delete_space_on_completion` and `downgrade_hardware_on_completion`. - -## FAQ - -- If your space ends up having a "no application file" issue, you may need to "factory reset" the space. You can do this from the settings page of the space. -""" - -output_message = gr.Markdown("", visible=False) - -with gr.Blocks(css="style.css") as demo: - gr.Markdown("# 🔥Fuego🔥 GitHub Script Runner") - gr.Markdown(description) - with gr.Accordion("👀 More Details (Hardware Pricing, How it Works, and FAQ)", open=False): - gr.Markdown(additional_info) - - with gr.Row(): - token = gr.Textbox(lines=1, label="Hugging Face token with write access", type="password") - - with gr.Row(): - with gr.Column(): - with gr.Box(): - gr.Markdown("What script would you like to run? Also, what are its dependencies?") - github_repo_id = gr.Textbox(lines=1, label="GitHub repo ID (ex. 
huggingface/fuego)") - github_repo_branch = gr.Textbox( - lines=1, label="Branch of GitHub repo (ex. main)", value="main", interactive=True - ) - script = gr.Textbox(lines=1, label="Path to python script in the GitHub repo") - requirements_file = gr.Textbox(lines=1, label="Path to pip requirements file in the repo") - extra_requirements = gr.Textbox( - lines=5, - label="Any extra pip requirements to your script, just as you would write them in requirements.txt", - ) - with gr.Column(): - with gr.Box(): - gr.Markdown("How should we run your script?") - script_args = gr.Textbox(lines=10, label="Script args to your python file. Input here as YAML.") - spaces_output_dirs = gr.Textbox( - lines=1, - label="Name of output directory to save assets to from within your script. Use commas if you have multiple.", - value="./outputs, ./logs", - ) - private = gr.Checkbox(False, label="Should space/dataset be made as private repos?") - delete_space_on_completion = gr.Checkbox(True, label="Delete the space on completion?") - downgrade_hardware_on_completion = gr.Checkbox( - True, - label="Downgrade hardware of the space on completion? Only applicable if not deleting on completion.", - ) - with gr.Row(): - with gr.Column(): - spaces_hardware = gr.Dropdown( - ["cpu-basic", "cpu-upgrade", "t4-small", "t4-medium", "a10g-small", "a10g-large", "a100-large"], - label="Spaces Hardware", - value="cpu-basic", - interactive=True, - ) - spaces_hardware_msg = gr.Markdown( - """ - 🔴 **The hardware you chose is not free, and you will be charged for it** 🔴 - - If you want to run your script for free, please choose `cpu-basic` as your hardware. - """, - visible=False, - ) - spaces_hardware.change( - lambda x: gr.update(visible=True) if x != "cpu-basic" else gr.update(visible=False), - inputs=[spaces_hardware], - outputs=[spaces_hardware_msg], - ) - - with gr.Row(): - with gr.Accordion("👀 Examples", open=False): - gr.Examples( - [ - [ - "pytorch/examples", - "main", - "vae/main.py", - "vae/requirements.txt", - "", - "epochs: 3", - "./results", - False, - True, - True, - "cpu-basic", - ], - [ - "huggingface/transformers", - "main", - "examples/pytorch/text-classification/run_glue.py", - "examples/pytorch/text-classification/requirements.txt", - "tensorboard\ngit+https://github.com/huggingface/transformers@main#egg=transformers", - "model_name_or_path: bert-base-cased\ntask_name: mrpc\ndo_train: True\ndo_eval: True\nmax_seq_length: 128\nper_device_train_batch_size: 32\nlearning_rate: 2e-5\nnum_train_epochs: 3\noutput_dir: ./outputs\nlogging_dir: ./logs\nlogging_steps: 20\nreport_to: tensorboard", - "./outputs,./logs", - False, - True, - True, - "cpu-basic", - ], - ], - inputs=[ - github_repo_id, - github_repo_branch, - script, - requirements_file, - extra_requirements, - script_args, - spaces_output_dirs, - private, - delete_space_on_completion, - downgrade_hardware_on_completion, - spaces_hardware, - ], - outputs=[ - github_repo_id, - github_repo_branch, - script, - requirements_file, - extra_requirements, - script_args, - spaces_output_dirs, - private, - delete_space_on_completion, - downgrade_hardware_on_completion, - spaces_hardware, - ], - cache_examples=False, - ) - - with gr.Row(): - submit = gr.Button("Submit") - reset_btn = gr.Button("Reset fields") - - with gr.Row(): - output_message.render() - - submit.click( - fuego_github_run_wrapper, - inputs=[ - token, - github_repo_id, - github_repo_branch, - script, - requirements_file, - extra_requirements, - script_args, - spaces_output_dirs, - private, - 
delete_space_on_completion, - downgrade_hardware_on_completion, - spaces_hardware, - ], - outputs=[output_message], - ) - - def reset_fields(): - return { - output_message: gr.update(value="", visible=False), - github_repo_id: gr.update(value=""), - github_repo_branch: gr.update(value="main"), - script: gr.update(value=""), - requirements_file: gr.update(value=""), - extra_requirements: gr.update(value=""), - script_args: gr.update(value=""), - spaces_output_dirs: gr.update(value="./outputs, ./logs"), - private: gr.update(value=False), - delete_space_on_completion: gr.update(value=True), - downgrade_hardware_on_completion: gr.update(value=True), - spaces_hardware: gr.update(value="cpu-basic"), - } - - reset_btn.click( - reset_fields, - outputs=[ - output_message, - github_repo_id, - github_repo_branch, - script, - requirements_file, - extra_requirements, - script_args, - spaces_output_dirs, - private, - delete_space_on_completion, - downgrade_hardware_on_completion, - spaces_hardware, - ], - ) - -if __name__ == "__main__": - demo.launch(debug=True) diff --git a/spaces/nateraw/lavila/run_with_submitit_pretrain.py b/spaces/nateraw/lavila/run_with_submitit_pretrain.py deleted file mode 100644 index 2a9013e4c18b4b41ef2b9676a35f5622f6f95b94..0000000000000000000000000000000000000000 --- a/spaces/nateraw/lavila/run_with_submitit_pretrain.py +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -""" -A script to run multinode training with submitit. -""" -import argparse -import os -import uuid -from pathlib import Path - -import main_pretrain -import submitit - - -def parse_args(): - parser = main_pretrain.get_args_parser() - parser = argparse.ArgumentParser("Submitit for lavila pre-training", parents=[parser]) - parser.add_argument("--ngpus", default=8, type=int, help="Number of gpus to request on each node") - parser.add_argument("--nodes", default=8, type=int, help="Number of nodes to request") - parser.add_argument("--timeout", default=2880, type=int, help="Duration of the job") - parser.add_argument("--job_dir", default="", type=str, help="Job dir. Leave empty for automatic.") - - parser.add_argument("--partition", default="learnlab", type=str, help="Partition where to submit") - parser.add_argument("--use_volta32", action='store_true', help="Big models? Use this") - parser.add_argument('--comment', default="", type=str, - help='Comment to pass to scheduler, e.g. priority message') - return parser.parse_args() - - -def get_shared_folder() -> Path: - user = os.getenv("USER") - if Path("/checkpoint/").is_dir(): - p = Path(f"/checkpoint/{user}/experiments/lavila_pretrain") - p.mkdir(exist_ok=True) - return p - raise RuntimeError("No shared folder available") - - -def get_init_file(): - # Init file must not exist, but it's parent dir must exist. 
- os.makedirs(str(get_shared_folder()), exist_ok=True) - init_file = get_shared_folder() / f"{uuid.uuid4().hex}_init" - if init_file.exists(): - os.remove(str(init_file)) - return init_file - - -class Trainer(object): - def __init__(self, args): - self.args = args - - def __call__(self): - import main_pretrain - - self._setup_gpu_args() - main_pretrain.main(self.args) - - def checkpoint(self): - import submitit - - self.args.dist_url = get_init_file().as_uri() - print("Requeuing ", self.args) - empty_trainer = type(self)(self.args) - return submitit.helpers.DelayedSubmission(empty_trainer) - - def _setup_gpu_args(self): - import submitit - from pathlib import Path - - job_env = submitit.JobEnvironment() - self.args.output_dir = Path(str(self.args.output_dir).replace("%j", str(job_env.job_id))) - self.args.gpu = job_env.local_rank - self.args.rank = job_env.global_rank - self.args.world_size = job_env.num_tasks - print(f"Process group: {job_env.num_tasks} tasks, rank: {job_env.global_rank}") - - -def main(): - args = parse_args() - if args.job_dir == "": - args.job_dir = get_shared_folder() / "%j" - - # Note that the folder will depend on the job_id, to easily track experiments - executor = submitit.AutoExecutor(folder=args.job_dir, slurm_max_num_timeout=30) - - num_gpus_per_node = args.ngpus - nodes = args.nodes - timeout_min = args.timeout - - partition = args.partition - kwargs = {} - if args.use_volta32: - kwargs['slurm_constraint'] = 'volta32gb' - if args.comment: - kwargs['slurm_comment'] = args.comment - - executor.update_parameters( - mem_gb=40 * num_gpus_per_node, - gpus_per_node=num_gpus_per_node, - tasks_per_node=num_gpus_per_node, # one task per GPU - cpus_per_task=10, - nodes=nodes, - timeout_min=timeout_min, # max is 60 * 72 - # Below are cluster dependent parameters - slurm_partition=partition, - slurm_signal_delay_s=120, - **kwargs - ) - - executor.update_parameters(name="lavila_pretrain") - - args.dist_url = get_init_file().as_uri() - args.output_dir = args.job_dir - - trainer = Trainer(args) - job = executor.submit(trainer) - - print("Submitted job_id:", job.job_id) - - -if __name__ == "__main__": - main() diff --git a/spaces/nateraw/modelcard-creator/persist.py b/spaces/nateraw/modelcard-creator/persist.py deleted file mode 100644 index 0fd58c1544523ae3a0e800adbf92851ed9c1c854..0000000000000000000000000000000000000000 --- a/spaces/nateraw/modelcard-creator/persist.py +++ /dev/null @@ -1,26 +0,0 @@ -# Thank god this existed. 
-# https://gist.github.com/okld/0aba4869ba6fdc8d49132e6974e2e662 - -from streamlit import session_state as _state - -_PERSIST_STATE_KEY = f"{__name__}_PERSIST" - - -def persist(key: str) -> str: - """Mark widget state as persistent.""" - if _PERSIST_STATE_KEY not in _state: - _state[_PERSIST_STATE_KEY] = set() - - _state[_PERSIST_STATE_KEY].add(key) - - return key - - -def load_widget_state(): - """Load persistent widget state.""" - if _PERSIST_STATE_KEY in _state: - _state.update({ - key: value - for key, value in _state.items() - if key in _state[_PERSIST_STATE_KEY] - }) \ No newline at end of file diff --git a/spaces/nateraw/run-script-in-background/your_script.py b/spaces/nateraw/run-script-in-background/your_script.py deleted file mode 100644 index 2230ea74d783a959b4244a4780ae529b139607b5..0000000000000000000000000000000000000000 --- a/spaces/nateraw/run-script-in-background/your_script.py +++ /dev/null @@ -1,14 +0,0 @@ -import time -from pathlib import Path - -Path('outputs').mkdir(exist_ok=True, parents=True) - -# Open the file in append mode -with open("outputs/message.txt", "a") as file: - for i in range(2): - # Append the value of i to the file - file.write(str(i) + "\n") - print(f"\t- (from {__file__}): Wrote {i}. Sleeping for 1 min") - # Wait for a minute - time.sleep(60) -print(f"\t- (from {__file__}): Done!") \ No newline at end of file diff --git a/spaces/naver/SuperFeatures/how/utils/plots.py b/spaces/naver/SuperFeatures/how/utils/plots.py deleted file mode 100644 index 305721e129a09285a431bc4043fac7feed380a67..0000000000000000000000000000000000000000 --- a/spaces/naver/SuperFeatures/how/utils/plots.py +++ /dev/null @@ -1,37 +0,0 @@ -"""Plotting classes""" - -import matplotlib -matplotlib.use('Agg') -import matplotlib.pyplot as plt - - -class EpochFigure: - """Basic figure for plotting scores across epochs - - :param str title: Figure title - :param str ylabel: Plot's y label - """ - - def __init__(self, title, *, ylabel): - self.fig = plt.figure() - self.axes = self.fig.add_subplot(1, 1, 1) - self.title = title - self.ylabel = ylabel - - def __del__(self): - plt.close(self.fig) - - def __getattr__(self, name): - # Delegate method calls on self.axes - return getattr(self.axes, name) - - def save(self, path): - """Save figure to given path""" - self.axes.grid(b=True, which='major', color='k', linestyle='-') - self.axes.grid(b=True, which='minor', color='r', linestyle='-', alpha=0.2) - self.axes.minorticks_on() - self.axes.legend() - self.axes.set_xlabel('epoch') - self.axes.set_ylabel(self.ylabel) - self.axes.set_title(self.title) - self.fig.savefig(path) diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Flux Media Centrafuse 2.0.806.1-Lz0 Free Download.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Flux Media Centrafuse 2.0.806.1-Lz0 Free Download.md deleted file mode 100644 index abedd80571ff0e9ec9842f3b762fff369e0019da..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Flux Media Centrafuse 2.0.806.1-Lz0 Free Download.md +++ /dev/null @@ -1,28 +0,0 @@ -
          -

          Flux Media Centrafuse 2.0.806.1-Lz0: A Comprehensive Review

          -

          Flux Media Centrafuse 2.0.806.1-Lz0 is a software product that aims to provide a complete solution for car PC users. It is a multimedia application that integrates various features such as music, video, pictures, navigation, phone, radio, and more into a user-friendly interface. Centrafuse also supports plugins and skins that allow users to customize and extend its functionality.

          -

          In this article, we will review the main features of Flux Media Centrafuse 2.0.806.1-Lz0, as well as its installation, activation, and setup process. We will also compare it with other similar products and evaluate its performance and usability.

          -

          Flux Media Centrafuse 2.0.806.1-Lz0 free download


          Download Zip ✒ ✒ ✒ https://urlcod.com/2uIbRK



          -

          Main Features of Flux Media Centrafuse 2.0.806.1-Lz0

          -

          Flux Media Centrafuse 2.0.806.1-Lz0 offers a wide range of features that cover various aspects of car PC usage. Here are some of the main features that make it stand out:

          -
            -
          • Media Playlist: Centrafuse allows users to create and manage playlists for audio, video, and pictures. It supports various formats such as MP3, WMA, OGG, WAV, FLAC, AVI, WMV, MPG, JPG, BMP, PNG, and more[^1^]. Users can also browse and play media files from external devices such as USB drives, iPods, or CDs.
          • -
          • My Library: Centrafuse organizes media files into a library that can be searched and filtered by various criteria such as genre, artist, album, title, year, rating, etc[^1^]. Users can also edit the metadata of media files and import or export playlists.
          • -
          • Navigation: Centrafuse integrates with various navigation software such as iGuidance, Destinator, Garmin Mobile PC, MapPoint, etc[^1^]. Users can launch the navigation software from within Centrafuse and control it using voice commands or touchscreen gestures.
          • -
          • Phone: Centrafuse supports Bluetooth phone integration that allows users to make and receive calls using their car PC[^1^]. Users can also access their phonebook and call history from within Centrafuse and use voice dialing or text-to-speech features.
          • -
          • Radio: Centrafuse supports FM radio tuner cards that allow users to listen to radio stations using their car PC[^1^]. Users can also scan for stations, save presets, and display RDS information.
          • -
          • Plugins and Skins: Centrafuse supports plugins and skins that allow users to customize and extend its functionality[^1^]. Users can download and install various plugins and skins from the official website or from third-party sources. Some of the popular plugins include weather, news, email, games, OBD-II diagnostics, etc.
          • -
          -

          Installation, Activation, and Setup of Flux Media Centrafuse 2.0.806.1-Lz0

          -

          To install Flux Media Centrafuse 2.0.806.1-Lz0 on your car PC, you need to meet the following minimum system requirements[^1^]:

          -
            -
          • A PC running Windows XP or Vista with at least 512 MB of RAM and 200 MB of free disk space.
          • -
          • A touchscreen monitor with at least 800x600 resolution.
          • -
          • A sound card with speakers or headphones.
          • -
          • A CD-ROM drive or a USB port for installation.
          • -
          -

          You can download the trial version of Centrafuse from the official website or from other sources online[^2^]. The trial version is fully functional but expires after 30 days of use.

          -

          To activate the full version of Centrafuse, you need to purchase a license key from the official website or from authorized resellers[^1^]. The license key is sent to your email address after payment confirmation. You can then enter the license key in the activation window that appears when you launch Centrafuse for the first time.

          -

          After activating Centrafuse, you need to complete the initial setup wizard that guides you through the configuration of various settings such as language, audio output device, screen calibration, navigation software path

          cec2833e83
          -
          -
          \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Hero Movie _TOP_ Download Hd.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Hero Movie _TOP_ Download Hd.md deleted file mode 100644 index 0db2ba46e940cf30f11e7057d07450c299f90fa1..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Hero Movie _TOP_ Download Hd.md +++ /dev/null @@ -1,30 +0,0 @@ - -

          How to Download Hero Movie in HD Quality

          -

          Hero is a 2015 Bollywood movie starring Sooraj Pancholi and Athiya Shetty in the lead roles. The movie is a remake of the 1983 classic of the same name, directed by Subhash Ghai. Hero tells the story of a gangster's son who falls in love with the daughter of a police officer, who has some crucial evidence against his father. The movie is a romantic action drama that showcases the power of love and rebellion.

          -

          Hero movie download hd


          Download File https://urlcod.com/2uI9BU



          -

          If you are looking for a way to download Hero movie in HD quality, you have come to the right place. In this article, we will show you how to watch Hero full movie online in HD on various platforms, such as Hotstar[^1^], PogoLinks[^2^], and ZEE5[^3^]. We will also give you some tips on how to optimize your website for the keyword "Hero movie download hd" and rank higher on Google search results.

          -

          Watch Hero Full Movie Online in HD on Hotstar

          -

          Hotstar is one of the most popular streaming platforms in India, where you can watch Hero full movie online in HD quality. Hotstar offers a subscription-based service that gives you access to a wide range of movies, TV shows, sports, and news. You can also watch Hero movie for free on Hotstar if you have a Jio or Airtel SIM card.

          -

          To watch Hero full movie online in HD on Hotstar, follow these steps:

          -
            -
          1. Go to https://www.hotstar.com/sg/movies/hero/1000074190 on your browser.
          2. -
          3. If you have a Hotstar subscription, log in with your credentials. If you don't have a subscription, you can sign up for a free trial or use your Jio or Airtel number to get access.
          4. -
          5. Click on the play button and enjoy Hero full movie online in HD quality.
          6. -
          -

          Download Hero Movie in HD Quality from PogoLinks

          -

          PogoLinks is another website where you can download Hero movie in HD quality for free. PogoLinks is a torrent site that provides direct Google Drive download links for Bollywood and Hollywood movies and web series. You can download Hero movie in various formats and resolutions, such as 480p, 720p, and 1080p.

          -

          To download Hero movie in HD quality from PogoLinks, follow these steps:

          -
            -
          1. Go to https://pogolinks.art/movies/hero-2015/ on your browser.
          2. -
          3. Scroll down and choose the download link that suits your preference. For example, if you want to download Hero movie in 720p quality, click on "Hero 720p - .mkv".
          4. -
          5. You will be redirected to a Google Drive page where you can see the file size and preview the movie. Click on the download icon at the top right corner and save the file to your device.
          6. -
          -

          Watch Hero Full Movie Online in HD on ZEE5

          -

          ZEE5 is another streaming platform where you can watch Hero full movie online in HD quality. ZEE5 is an Indian OTT service that offers original content, movies, TV shows, live TV channels, and music videos. You can watch Hero movie on ZEE5 with a premium subscription or by purchasing a single movie ticket.

          -

          -

          To watch Hero full movie online in HD on ZEE5, follow these steps:

          -
            -
          1. Go to https://www.zee5.com/movies/details/hero/0-0-403592 on your browser.
          2. -
          3. If you have a ZEE5 subscription, log in with your credentials. If you don't have a subscription, you can sign up for a free trial or buy a single movie ticket for Rs.

            7b8c122e87
            -
            -
            \ No newline at end of file diff --git a/spaces/niizam/sovits-models/inference/infer_tool_grad.py b/spaces/niizam/sovits-models/inference/infer_tool_grad.py deleted file mode 100644 index b75af49c08e2e724839828bc419792ed580809bb..0000000000000000000000000000000000000000 --- a/spaces/niizam/sovits-models/inference/infer_tool_grad.py +++ /dev/null @@ -1,160 +0,0 @@ -import hashlib -import json -import logging -import os -import time -from pathlib import Path -import io -import librosa -import maad -import numpy as np -from inference import slicer -import parselmouth -import soundfile -import torch -import torchaudio - -from hubert import hubert_model -import utils -from models import SynthesizerTrn -logging.getLogger('numba').setLevel(logging.WARNING) -logging.getLogger('matplotlib').setLevel(logging.WARNING) - -def resize2d_f0(x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp(np.arange(0, len(source) * target_len, len(source)) / target_len, np.arange(0, len(source)), - source) - res = np.nan_to_num(target) - return res - -def get_f0(x, p_len,f0_up_key=0): - - time_step = 160 / 16000 * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - f0 = parselmouth.Sound(x, 16000).to_pitch_ac( - time_step=time_step / 1000, voicing_threshold=0.6, - pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency'] - - pad_size=(p_len - len(f0) + 1) // 2 - if(pad_size>0 or p_len - len(f0) - pad_size>0): - f0 = np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant') - - f0 *= pow(2, f0_up_key / 12) - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (f0_mel_max - f0_mel_min) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0 - -def clean_pitch(input_pitch): - num_nan = np.sum(input_pitch == 1) - if num_nan / len(input_pitch) > 0.9: - input_pitch[input_pitch != 1] = 1 - return input_pitch - - -def plt_pitch(input_pitch): - input_pitch = input_pitch.astype(float) - input_pitch[input_pitch == 1] = np.nan - return input_pitch - - -def f0_to_pitch(ff): - f0_pitch = 69 + 12 * np.log2(ff / 440) - return f0_pitch - - -def fill_a_to_b(a, b): - if len(a) < len(b): - for _ in range(0, len(b) - len(a)): - a.append(a[0]) - - -def mkdir(paths: list): - for path in paths: - if not os.path.exists(path): - os.mkdir(path) - - -class VitsSvc(object): - def __init__(self): - self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - self.SVCVITS = None - self.hps = None - self.speakers = None - self.hubert_soft = utils.get_hubert_model() - - def set_device(self, device): - self.device = torch.device(device) - self.hubert_soft.to(self.device) - if self.SVCVITS != None: - self.SVCVITS.to(self.device) - - def loadCheckpoint(self, path): - self.hps = utils.get_hparams_from_file(f"checkpoints/{path}/config.json") - self.SVCVITS = SynthesizerTrn( - self.hps.data.filter_length // 2 + 1, - self.hps.train.segment_size // self.hps.data.hop_length, - **self.hps.model) - _ = utils.load_checkpoint(f"checkpoints/{path}/model.pth", self.SVCVITS, None) - _ = self.SVCVITS.eval().to(self.device) - self.speakers = self.hps.spk - - def get_units(self, source, sr): - source = source.unsqueeze(0).to(self.device) - with torch.inference_mode(): - units = self.hubert_soft.units(source) - return units - - - def get_unit_pitch(self, in_path, tran): - source, 
sr = torchaudio.load(in_path) - source = torchaudio.functional.resample(source, sr, 16000) - if len(source.shape) == 2 and source.shape[1] >= 2: - source = torch.mean(source, dim=0).unsqueeze(0) - soft = self.get_units(source, sr).squeeze(0).cpu().numpy() - f0_coarse, f0 = get_f0(source.cpu().numpy()[0], soft.shape[0]*2, tran) - return soft, f0 - - def infer(self, speaker_id, tran, raw_path): - speaker_id = self.speakers[speaker_id] - sid = torch.LongTensor([int(speaker_id)]).to(self.device).unsqueeze(0) - soft, pitch = self.get_unit_pitch(raw_path, tran) - f0 = torch.FloatTensor(clean_pitch(pitch)).unsqueeze(0).to(self.device) - stn_tst = torch.FloatTensor(soft) - with torch.no_grad(): - x_tst = stn_tst.unsqueeze(0).to(self.device) - x_tst = torch.repeat_interleave(x_tst, repeats=2, dim=1).transpose(1, 2) - audio = self.SVCVITS.infer(x_tst, f0=f0, g=sid)[0,0].data.float() - return audio, audio.shape[-1] - - def inference(self,srcaudio,chara,tran,slice_db): - sampling_rate, audio = srcaudio - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - soundfile.write("tmpwav.wav", audio, 16000, format="wav") - chunks = slicer.cut("tmpwav.wav", db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio("tmpwav.wav", chunks) - audio = [] - for (slice_tag, data) in audio_data: - length = int(np.ceil(len(data) / audio_sr * self.hps.data.sampling_rate)) - raw_path = io.BytesIO() - soundfile.write(raw_path, data, audio_sr, format="wav") - raw_path.seek(0) - if slice_tag: - _audio = np.zeros(length) - else: - out_audio, out_sr = self.infer(chara, tran, raw_path) - _audio = out_audio.cpu().numpy() - audio.extend(list(_audio)) - audio = (np.array(audio) * 32768.0).astype('int16') - return (self.hps.data.sampling_rate,audio) diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/docs/tutorials/datasets.md b/spaces/nikitaPDL2023/assignment4/detectron2/docs/tutorials/datasets.md deleted file mode 100644 index 91103f64264aa6f3059611c5fe06ecd65bcb986f..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/docs/tutorials/datasets.md +++ /dev/null @@ -1,290 +0,0 @@ -# Use Custom Datasets - -This document explains how the dataset APIs -([DatasetCatalog](../modules/data.html#detectron2.data.DatasetCatalog), [MetadataCatalog](../modules/data.html#detectron2.data.MetadataCatalog)) -work, and how to use them to add custom datasets. - -Datasets that have builtin support in detectron2 are listed in [builtin datasets](builtin_datasets.md). -If you want to use a custom dataset while also reusing detectron2's data loaders, -you will need to: - -1. __Register__ your dataset (i.e., tell detectron2 how to obtain your dataset). -2. Optionally, __register metadata__ for your dataset. - -Next, we explain the above two concepts in detail. - -The [Colab tutorial](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5) -has a live example of how to register and train on a dataset of custom formats. - -### Register a Dataset - -To let detectron2 know how to obtain a dataset named "my_dataset", users need to implement -a function that returns the items in your dataset and then tell detectron2 about this -function: -```python -def my_dataset_function(): - ... 
- return list[dict] in the following format - -from detectron2.data import DatasetCatalog -DatasetCatalog.register("my_dataset", my_dataset_function) -# later, to access the data: -data: List[Dict] = DatasetCatalog.get("my_dataset") -``` - -Here, the snippet associates a dataset named "my_dataset" with a function that returns the data. -The function must return the same data (with same order) if called multiple times. -The registration stays effective until the process exits. - -The function can do arbitrary things and should return the data in `list[dict]`, each dict in either -of the following formats: -1. Detectron2's standard dataset dict, described below. This will make it work with many other builtin - features in detectron2, so it's recommended to use it when it's sufficient. -2. Any custom format. You can also return arbitrary dicts in your own format, - such as adding extra keys for new tasks. - Then you will need to handle them properly downstream as well. - See below for more details. - -#### Standard Dataset Dicts - -For standard tasks -(instance detection, instance/semantic/panoptic segmentation, keypoint detection), -we load the original dataset into `list[dict]` with a specification similar to COCO's annotations. -This is our standard representation for a dataset. - -Each dict contains information about one image. -The dict may have the following fields, -and the required fields vary based on what the dataloader or the task needs (see more below). - -```eval_rst -.. list-table:: - :header-rows: 1 - - * - Task - - Fields - * - Common - - file_name, height, width, image_id - - * - Instance detection/segmentation - - annotations - - * - Semantic segmentation - - sem_seg_file_name - - * - Panoptic segmentation - - pan_seg_file_name, segments_info -``` - -+ `file_name`: the full path to the image file. -+ `height`, `width`: integer. The shape of the image. -+ `image_id` (str or int): a unique id that identifies this image. Required by many - evaluators to identify the images, but a dataset may use it for different purposes. -+ `annotations` (list[dict]): Required by __instance detection/segmentation or keypoint detection__ tasks. - Each dict corresponds to annotations of one instance in this image, and - may contain the following keys: - + `bbox` (list[float], required): list of 4 numbers representing the bounding box of the instance. - + `bbox_mode` (int, required): the format of bbox. It must be a member of - [structures.BoxMode](../modules/structures.html#detectron2.structures.BoxMode). - Currently supports: `BoxMode.XYXY_ABS`, `BoxMode.XYWH_ABS`. - + `category_id` (int, required): an integer in the range [0, num_categories-1] representing the category label. - The value num_categories is reserved to represent the "background" category, if applicable. - + `segmentation` (list[list[float]] or dict): the segmentation mask of the instance. - + If `list[list[float]]`, it represents a list of polygons, one for each connected component - of the object. Each `list[float]` is one simple polygon in the format of `[x1, y1, ..., xn, yn]` (n≥3). - The Xs and Ys are absolute coordinates in unit of pixels. - + If `dict`, it represents the per-pixel segmentation mask in COCO's compressed RLE format. - The dict should have keys "size" and "counts". You can convert a uint8 segmentation mask of 0s and - 1s into such dict by `pycocotools.mask.encode(np.asarray(mask, order="F"))`. - `cfg.INPUT.MASK_FORMAT` must be set to `bitmask` if using the default data loader with such format. 
- + `keypoints` (list[float]): in the format of [x1, y1, v1,..., xn, yn, vn]. - v[i] means the [visibility](http://cocodataset.org/#format-data) of this keypoint. - `n` must be equal to the number of keypoint categories. - The Xs and Ys are absolute real-value coordinates in range [0, W or H]. - - (Note that the keypoint coordinates in COCO format are integers in range [0, W-1 or H-1], which is different - from our standard format. Detectron2 adds 0.5 to COCO keypoint coordinates to convert them from discrete - pixel indices to floating point coordinates.) - + `iscrowd`: 0 (default) or 1. Whether this instance is labeled as COCO's "crowd - region". Don't include this field if you don't know what it means. - - If `annotations` is an empty list, it means the image is labeled to have no objects. - Such images will by default be removed from training, - but can be included using `DATALOADER.FILTER_EMPTY_ANNOTATIONS`. - -+ `sem_seg_file_name` (str): - The full path to the semantic segmentation ground truth file. - It should be a grayscale image whose pixel values are integer labels. -+ `pan_seg_file_name` (str): - The full path to panoptic segmentation ground truth file. - It should be an RGB image whose pixel values are integer ids encoded using the - [panopticapi.utils.id2rgb](https://github.com/cocodataset/panopticapi/) function. - The ids are defined by `segments_info`. - If an id does not appear in `segments_info`, the pixel is considered unlabeled - and is usually ignored in training & evaluation. -+ `segments_info` (list[dict]): defines the meaning of each id in panoptic segmentation ground truth. - Each dict has the following keys: - + `id` (int): integer that appears in the ground truth image. - + `category_id` (int): an integer in the range [0, num_categories-1] representing the category label. - + `iscrowd`: 0 (default) or 1. Whether this instance is labeled as COCO's "crowd region". - - -```eval_rst - -.. note:: - - The PanopticFPN model does not use the panoptic segmentation - format defined here, but a combination of both instance segmentation and semantic segmentation data - format. See :doc:`builtin_datasets` for instructions on COCO. - -``` - -Fast R-CNN (with pre-computed proposals) models are rarely used today. -To train a Fast R-CNN, the following extra keys are needed: - -+ `proposal_boxes` (array): 2D numpy array with shape (K, 4) representing K precomputed proposal boxes for this image. -+ `proposal_objectness_logits` (array): numpy array with shape (K, ), which corresponds to the objectness - logits of proposals in 'proposal_boxes'. -+ `proposal_bbox_mode` (int): the format of the precomputed proposal bbox. - It must be a member of - [structures.BoxMode](../modules/structures.html#detectron2.structures.BoxMode). - Default is `BoxMode.XYXY_ABS`. - - - -#### Custom Dataset Dicts for New Tasks - -In the `list[dict]` that your dataset function returns, the dictionary can also have __arbitrary custom data__. -This will be useful for a new task that needs extra information not covered -by the standard dataset dicts. In this case, you need to make sure the downstream code can handle your data -correctly. Usually this requires writing a new `mapper` for the dataloader (see [Use Custom Dataloaders](./data_loading.md)). - -When designing a custom format, note that all dicts are stored in memory -(sometimes serialized and with multiple copies). -To save memory, each dict is meant to contain __small__ but sufficient information -about each sample, such as file names and annotations. 
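-For illustration, here is a minimal sketch of one such dict; the file paths, image size, and the extra `depth_file_name` key are hypothetical, but the standard keys follow the fields described above:
-
-```python
-from detectron2.structures import BoxMode
-
-def get_my_dataset_dicts():
-    # Each dict stays small: file paths and annotations only, no pixel data.
-    return [
-        {
-            "file_name": "images/0001.jpg",  # hypothetical path
-            "height": 480,
-            "width": 640,
-            "image_id": 1,
-            "annotations": [
-                {
-                    "bbox": [100.0, 120.0, 50.0, 80.0],
-                    "bbox_mode": BoxMode.XYWH_ABS,
-                    "category_id": 0,
-                }
-            ],
-            # arbitrary custom key for a hypothetical new task; a custom mapper must handle it
-            "depth_file_name": "depth/0001.png",
-        }
-    ]
-```
-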
-Loading full samples typically happens in the data loader. - -For attributes shared among the entire dataset, use `Metadata` (see below). -To avoid extra memory, do not save such information inside each sample. - -### "Metadata" for Datasets - -Each dataset is associated with some metadata, accessible through -`MetadataCatalog.get(dataset_name).some_metadata`. -Metadata is a key-value mapping that contains information that's shared among -the entire dataset, and usually is used to interpret what's in the dataset, e.g., -names of classes, colors of classes, root of files, etc. -This information will be useful for augmentation, evaluation, visualization, logging, etc. -The structure of metadata depends on what is needed from the corresponding downstream code. - -If you register a new dataset through `DatasetCatalog.register`, -you may also want to add its corresponding metadata through -`MetadataCatalog.get(dataset_name).some_key = some_value`, to enable any features that need the metadata. -You can do it like this (using the metadata key "thing_classes" as an example): - -```python -from detectron2.data import MetadataCatalog -MetadataCatalog.get("my_dataset").thing_classes = ["person", "dog"] -``` - -Here is a list of metadata keys that are used by builtin features in detectron2. -If you add your own dataset without these metadata, some features may be -unavailable to you: - -* `thing_classes` (list[str]): Used by all instance detection/segmentation tasks. - A list of names for each instance/thing category. - If you load a COCO format dataset, it will be automatically set by the function `load_coco_json`. - -* `thing_colors` (list[tuple(r, g, b)]): Pre-defined color (in [0, 255]) for each thing category. - Used for visualization. If not given, random colors will be used. - -* `stuff_classes` (list[str]): Used by semantic and panoptic segmentation tasks. - A list of names for each stuff category. - -* `stuff_colors` (list[tuple(r, g, b)]): Pre-defined color (in [0, 255]) for each stuff category. - Used for visualization. If not given, random colors are used. - -* `ignore_label` (int): Used by semantic and panoptic segmentation tasks. Pixels in ground-truth - annotations with this category label should be ignored in evaluation. Typically these are "unlabeled" - pixels. - -* `keypoint_names` (list[str]): Used by keypoint detection. A list of names for each keypoint. - -* `keypoint_flip_map` (list[tuple[str]]): Used by keypoint detection. A list of pairs of names, - where each pair are the two keypoints that should be flipped if the image is - flipped horizontally during augmentation. -* `keypoint_connection_rules`: list[tuple(str, str, (r, g, b))]. Each tuple specifies a pair of keypoints - that are connected and the color (in [0, 255]) to use for the line between them when visualized. - -Some additional metadata that are specific to the evaluation of certain datasets (e.g. COCO): - -* `thing_dataset_id_to_contiguous_id` (dict[int->int]): Used by all instance detection/segmentation tasks in the COCO format. - A mapping from instance class ids in the dataset to contiguous ids in range [0, #class). - Will be automatically set by the function `load_coco_json`. - -* `stuff_dataset_id_to_contiguous_id` (dict[int->int]): Used when generating prediction json files for - semantic/panoptic segmentation. - A mapping from semantic segmentation class ids in the dataset - to contiguous ids in [0, num_categories). It is useful for evaluation only. - -* `json_file`: The COCO annotation json file. 
Used by COCO evaluation for COCO-format datasets. -* `panoptic_root`, `panoptic_json`: Used by COCO-format panoptic evaluation. -* `evaluator_type`: Used by the builtin main training script to select - evaluator. Don't use it in a new training script. - You can just provide the [DatasetEvaluator](../modules/evaluation.html#detectron2.evaluation.DatasetEvaluator) - for your dataset directly in your main script. - -```eval_rst -.. note:: - - In recognition, sometimes we use the term "thing" for instance-level tasks, - and "stuff" for semantic segmentation tasks. - Both are used in panoptic segmentation tasks. - For background on the concept of "thing" and "stuff", see - `On Seeing Stuff: The Perception of Materials by Humans and Machines - `_. -``` - -### Register a COCO Format Dataset - -If your instance-level (detection, segmentation, keypoint) dataset is already a json file in the COCO format, -the dataset and its associated metadata can be registered easily with: -```python -from detectron2.data.datasets import register_coco_instances -register_coco_instances("my_dataset", {}, "json_annotation.json", "path/to/image/dir") -``` - -If your dataset is in COCO format but need to be further processed, or has extra custom per-instance annotations, -the [load_coco_json](../modules/data.html#detectron2.data.datasets.load_coco_json) -function might be useful. - -### Update the Config for New Datasets - -Once you've registered the dataset, you can use the name of the dataset (e.g., "my_dataset" in -example above) in `cfg.DATASETS.{TRAIN,TEST}`. -There are other configs you might want to change to train or evaluate on new datasets: - -* `MODEL.ROI_HEADS.NUM_CLASSES` and `MODEL.RETINANET.NUM_CLASSES` are the number of thing classes - for R-CNN and RetinaNet models, respectively. -* `MODEL.ROI_KEYPOINT_HEAD.NUM_KEYPOINTS` sets the number of keypoints for Keypoint R-CNN. - You'll also need to set [Keypoint OKS](http://cocodataset.org/#keypoints-eval) - with `TEST.KEYPOINT_OKS_SIGMAS` for evaluation. -* `MODEL.SEM_SEG_HEAD.NUM_CLASSES` sets the number of stuff classes for Semantic FPN & Panoptic FPN. -* `TEST.DETECTIONS_PER_IMAGE` controls the maximum number of objects to be detected. - Set it to a larger number if test images may contain >100 objects. -* If you're training Fast R-CNN (with precomputed proposals), `DATASETS.PROPOSAL_FILES_{TRAIN,TEST}` - need to match the datasets. The format of proposal files are documented - [here](../modules/data.html#detectron2.data.load_proposals_into_dataset). - -New models -(e.g. [TensorMask](../../projects/TensorMask), -[PointRend](../../projects/PointRend)) -often have similar configs of their own that need to be changed as well. - -```eval_rst -.. tip:: - - After changing the number of classes, certain layers in a pre-trained model will become incompatible - and therefore cannot be loaded to the new model. - This is expected, and loading such pre-trained models will produce warnings about such layers. -``` diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/data/transform/__init__.py b/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/data/transform/__init__.py deleted file mode 100644 index 369e1b278899b225d55bfc729514873b4259c7b9..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/data/transform/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
- -from .image import ImageResizeTransform diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/modeling/predictors/chart_confidence.py b/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/modeling/predictors/chart_confidence.py deleted file mode 100644 index 0c0099952f3e675e42aa7d3b6d35065fdaf43dbb..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/modeling/predictors/chart_confidence.py +++ /dev/null @@ -1,174 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -from typing import Any -import torch -from torch.nn import functional as F - -from detectron2.config import CfgNode -from detectron2.layers import ConvTranspose2d - -from ...structures import decorate_predictor_output_class_with_confidences -from ..confidence import DensePoseConfidenceModelConfig, DensePoseUVConfidenceType -from ..utils import initialize_module_params - - -class DensePoseChartConfidencePredictorMixin: - """ - Predictor contains the last layers of a DensePose model that take DensePose head - outputs as an input and produce model outputs. Confidence predictor mixin is used - to generate confidences for segmentation and UV tensors estimated by some - base predictor. Several assumptions need to hold for the base predictor: - 1) the `forward` method must return SIUV tuple as the first result ( - S = coarse segmentation, I = fine segmentation, U and V are intrinsic - chart coordinates) - 2) `interp2d` method must be defined to perform bilinear interpolation; - the same method is typically used for SIUV and confidences - Confidence predictor mixin provides confidence estimates, as described in: - N. Neverova et al., Correlated Uncertainty for Learning Dense Correspondences - from Noisy Labels, NeurIPS 2019 - A. Sanakoyeu et al., Transferring Dense Pose to Proximal Animal Classes, CVPR 2020 - """ - - def __init__(self, cfg: CfgNode, input_channels: int): - """ - Initialize confidence predictor using configuration options. 
- - Args: - cfg (CfgNode): configuration options - input_channels (int): number of input channels - """ - # we rely on base predictor to call nn.Module.__init__ - super().__init__(cfg, input_channels) # pyre-ignore[19] - self.confidence_model_cfg = DensePoseConfidenceModelConfig.from_cfg(cfg) - self._initialize_confidence_estimation_layers(cfg, input_channels) - self._registry = {} - initialize_module_params(self) # pyre-ignore[6] - - def _initialize_confidence_estimation_layers(self, cfg: CfgNode, dim_in: int): - """ - Initialize confidence estimation layers based on configuration options - - Args: - cfg (CfgNode): configuration options - dim_in (int): number of input channels - """ - dim_out_patches = cfg.MODEL.ROI_DENSEPOSE_HEAD.NUM_PATCHES + 1 - kernel_size = cfg.MODEL.ROI_DENSEPOSE_HEAD.DECONV_KERNEL - if self.confidence_model_cfg.uv_confidence.enabled: - if self.confidence_model_cfg.uv_confidence.type == DensePoseUVConfidenceType.IID_ISO: - self.sigma_2_lowres = ConvTranspose2d( # pyre-ignore[16] - dim_in, dim_out_patches, kernel_size, stride=2, padding=int(kernel_size / 2 - 1) - ) - elif ( - self.confidence_model_cfg.uv_confidence.type - == DensePoseUVConfidenceType.INDEP_ANISO - ): - self.sigma_2_lowres = ConvTranspose2d( - dim_in, dim_out_patches, kernel_size, stride=2, padding=int(kernel_size / 2 - 1) - ) - self.kappa_u_lowres = ConvTranspose2d( # pyre-ignore[16] - dim_in, dim_out_patches, kernel_size, stride=2, padding=int(kernel_size / 2 - 1) - ) - self.kappa_v_lowres = ConvTranspose2d( # pyre-ignore[16] - dim_in, dim_out_patches, kernel_size, stride=2, padding=int(kernel_size / 2 - 1) - ) - else: - raise ValueError( - f"Unknown confidence model type: " - f"{self.confidence_model_cfg.confidence_model_type}" - ) - if self.confidence_model_cfg.segm_confidence.enabled: - self.fine_segm_confidence_lowres = ConvTranspose2d( # pyre-ignore[16] - dim_in, 1, kernel_size, stride=2, padding=int(kernel_size / 2 - 1) - ) - self.coarse_segm_confidence_lowres = ConvTranspose2d( # pyre-ignore[16] - dim_in, 1, kernel_size, stride=2, padding=int(kernel_size / 2 - 1) - ) - - def forward(self, head_outputs: torch.Tensor): - """ - Perform forward operation on head outputs used as inputs for the predictor. - Calls forward method from the base predictor and uses its outputs to compute - confidences. 
- - Args: - head_outputs (Tensor): head outputs used as predictor inputs - Return: - An instance of outputs with confidences, - see `decorate_predictor_output_class_with_confidences` - """ - # assuming base class returns SIUV estimates in its first result - base_predictor_outputs = super().forward(head_outputs) # pyre-ignore[16] - - # create output instance by extending base predictor outputs: - output = self._create_output_instance(base_predictor_outputs) - - if self.confidence_model_cfg.uv_confidence.enabled: - if self.confidence_model_cfg.uv_confidence.type == DensePoseUVConfidenceType.IID_ISO: - # assuming base class defines interp2d method for bilinear interpolation - output.sigma_2 = self.interp2d(self.sigma_2_lowres(head_outputs)) # pyre-ignore[16] - elif ( - self.confidence_model_cfg.uv_confidence.type - == DensePoseUVConfidenceType.INDEP_ANISO - ): - # assuming base class defines interp2d method for bilinear interpolation - output.sigma_2 = self.interp2d(self.sigma_2_lowres(head_outputs)) - output.kappa_u = self.interp2d(self.kappa_u_lowres(head_outputs)) # pyre-ignore[16] - output.kappa_v = self.interp2d(self.kappa_v_lowres(head_outputs)) # pyre-ignore[16] - else: - raise ValueError( - f"Unknown confidence model type: " - f"{self.confidence_model_cfg.confidence_model_type}" - ) - if self.confidence_model_cfg.segm_confidence.enabled: - # base predictor outputs are assumed to have `fine_segm` and `coarse_segm` attributes - # base predictor is assumed to define `interp2d` method for bilinear interpolation - output.fine_segm_confidence = ( - F.softplus( - self.interp2d(self.fine_segm_confidence_lowres(head_outputs)) # pyre-ignore[16] - ) - + self.confidence_model_cfg.segm_confidence.epsilon - ) - output.fine_segm = base_predictor_outputs.fine_segm * torch.repeat_interleave( - output.fine_segm_confidence, base_predictor_outputs.fine_segm.shape[1], dim=1 - ) - output.coarse_segm_confidence = ( - F.softplus( - self.interp2d( - self.coarse_segm_confidence_lowres(head_outputs) # pyre-ignore[16] - ) - ) - + self.confidence_model_cfg.segm_confidence.epsilon - ) - output.coarse_segm = base_predictor_outputs.coarse_segm * torch.repeat_interleave( - output.coarse_segm_confidence, base_predictor_outputs.coarse_segm.shape[1], dim=1 - ) - - return output - - def _create_output_instance(self, base_predictor_outputs: Any): - """ - Create an instance of predictor outputs by copying the outputs from the - base predictor and initializing confidence - - Args: - base_predictor_outputs: an instance of base predictor outputs - (the outputs type is assumed to be a dataclass) - Return: - An instance of outputs with confidences - """ - PredictorOutput = decorate_predictor_output_class_with_confidences( - type(base_predictor_outputs) # pyre-ignore[6] - ) - # base_predictor_outputs is assumed to be a dataclass - # reassign all the fields from base_predictor_outputs (no deep copy!), add new fields - output = PredictorOutput( - **base_predictor_outputs.__dict__, - coarse_segm_confidence=None, - fine_segm_confidence=None, - sigma_1=None, - sigma_2=None, - kappa_u=None, - kappa_v=None, - ) - return output diff --git a/spaces/niro-private/chatCSV/sidebar.py b/spaces/niro-private/chatCSV/sidebar.py deleted file mode 100644 index c1dff82a56ec38d5236be034eefa2e57714ac720..0000000000000000000000000000000000000000 --- a/spaces/niro-private/chatCSV/sidebar.py +++ /dev/null @@ -1,11 +0,0 @@ -import streamlit as st - - -def sidebar(supabase): - st.sidebar.title("Database Information") - number_of_docs = 
number_of_documents(supabase) - st.sidebar.markdown(f"**Docs in DB:** {number_of_docs}") - -def number_of_documents(supabase): - documents = supabase.table("documents").select("id", count="exact").execute() - return documents.count \ No newline at end of file diff --git a/spaces/oguzakif/video-object-remover/FGT_codes/FGT/data/train_dataset.py b/spaces/oguzakif/video-object-remover/FGT_codes/FGT/data/train_dataset.py deleted file mode 100644 index 917af60bdfc094a7a1c68d97c608889b25a1379a..0000000000000000000000000000000000000000 --- a/spaces/oguzakif/video-object-remover/FGT_codes/FGT/data/train_dataset.py +++ /dev/null @@ -1,165 +0,0 @@ -import random -import pickle - -import logging -import torch -import cv2 -import os - -from torch.utils.data.dataset import Dataset -import numpy as np -import cvbase -from .util.STTN_mask import create_random_shape_with_random_motion -import imageio -from .util.flow_utils import region_fill as rf - -logger = logging.getLogger('base') - - -class VideoBasedDataset(Dataset): - def __init__(self, opt, dataInfo): - self.opt = opt - self.sampleMethod = opt['sample'] - self.dataInfo = dataInfo - self.height, self.width = self.opt['input_resolution'] - self.frame_path = dataInfo['frame_path'] - self.flow_path = dataInfo['flow_path'] # The path of the optical flows - self.train_list = os.listdir(self.frame_path) - self.name2length = self.dataInfo['name2len'] - with open(self.name2length, 'rb') as f: - self.name2length = pickle.load(f) - self.sequenceLen = self.opt['num_frames'] - self.flow2rgb = opt['flow2rgb'] # whether to change flow to rgb domain - self.flow_direction = opt[ - 'flow_direction'] # The direction must be in ['for', 'back', 'bi'], indicating forward, backward and bidirectional flows - - def __len__(self): - return len(self.train_list) - - def __getitem__(self, idx): - try: - item = self.load_item(idx) - except: - print('Loading error: ' + self.train_list[idx]) - item = self.load_item(0) - return item - - def frameSample(self, frameLen, sequenceLen): - if self.sampleMethod == 'random': - indices = [i for i in range(frameLen)] - sampleIndices = random.sample(indices, sequenceLen) - elif self.sampleMethod == 'seq': - pivot = random.randint(0, sequenceLen - 1 - frameLen) - sampleIndices = [i for i in range(pivot, pivot + frameLen)] - else: - raise ValueError('Cannot determine the sample method {}'.format(self.sampleMethod)) - return sampleIndices - - def load_item(self, idx): - video = self.train_list[idx] - frame_dir = os.path.join(self.frame_path, video) - forward_flow_dir = os.path.join(self.flow_path, video, 'forward_flo') - backward_flow_dir = os.path.join(self.flow_path, video, 'backward_flo') - frameLen = self.name2length[video] - flowLen = frameLen - 1 - assert frameLen > self.sequenceLen, 'Frame length {} is less than sequence length'.format(frameLen) - sampledIndices = self.frameSample(frameLen, self.sequenceLen) - - # generate random masks for these sampled frames - candidateMasks = create_random_shape_with_random_motion(frameLen, 0.9, 1.1, 1, 10) - - # read the frames and masks - frames, masks, forward_flows, backward_flows = [], [], [], [] - for i in range(len(sampledIndices)): - frame = self.read_frame(os.path.join(frame_dir, '{:05d}.jpg'.format(sampledIndices[i])), self.height, - self.width) - mask = self.read_mask(candidateMasks[sampledIndices[i]], self.height, self.width) - frames.append(frame) - masks.append(mask) - if self.flow_direction == 'for': - forward_flow = self.read_forward_flow(forward_flow_dir, sampledIndices[i], flowLen) 
- forward_flow = self.diffusion_flow(forward_flow, mask) - forward_flows.append(forward_flow) - elif self.flow_direction == 'back': - backward_flow = self.read_backward_flow(backward_flow_dir, sampledIndices[i]) - backward_flow = self.diffusion_flow(backward_flow, mask) - backward_flows.append(backward_flow) - elif self.flow_direction == 'bi': - forward_flow = self.read_forward_flow(forward_flow_dir, sampledIndices[i], flowLen) - forward_flow = self.diffusion_flow(forward_flow, mask) - forward_flows.append(forward_flow) - backward_flow = self.read_backward_flow(backward_flow_dir, sampledIndices[i]) - backward_flow = self.diffusion_flow(backward_flow, mask) - backward_flows.append(backward_flow) - else: - raise ValueError('Unknown flow direction mode: {}'.format(self.flow_direction)) - inputs = {'frames': frames, 'masks': masks, 'forward_flo': forward_flows, 'backward_flo': backward_flows} - inputs = self.to_tensor(inputs) - inputs['frames'] = (inputs['frames'] / 255.) * 2 - 1 - return inputs - - def diffusion_flow(self, flow, mask): - flow_filled = np.zeros(flow.shape) - flow_filled[:, :, 0] = rf.regionfill(flow[:, :, 0] * (1 - mask), mask) - flow_filled[:, :, 1] = rf.regionfill(flow[:, :, 1] * (1 - mask), mask) - return flow_filled - - def read_frame(self, path, height, width): - frame = imageio.imread(path) - frame = cv2.resize(frame, (width, height), cv2.INTER_LINEAR) - return frame - - def read_mask(self, mask, height, width): - mask = np.array(mask) - mask = mask / 255. - raw_mask = (mask > 0.5).astype(np.uint8) - raw_mask = cv2.resize(raw_mask, dsize=(width, height), interpolation=cv2.INTER_NEAREST) - return raw_mask - - def read_forward_flow(self, forward_flow_dir, sampledIndex, flowLen): - if sampledIndex >= flowLen: - sampledIndex = flowLen - 1 - flow = cvbase.read_flow(os.path.join(forward_flow_dir, '{:05d}.flo'.format(sampledIndex))) - height, width = flow.shape[:2] - flow = cv2.resize(flow, (self.width, self.height), cv2.INTER_LINEAR) - flow[:, :, 0] = flow[:, :, 0] / width * self.width - flow[:, :, 1] = flow[:, :, 1] / height * self.height - return flow - - def read_backward_flow(self, backward_flow_dir, sampledIndex): - if sampledIndex == 0: - sampledIndex = 0 - else: - sampledIndex -= 1 - flow = cvbase.read_flow(os.path.join(backward_flow_dir, '{:05d}.flo'.format(sampledIndex))) - height, width = flow.shape[:2] - flow = cv2.resize(flow, (self.width, self.height), cv2.INTER_LINEAR) - flow[:, :, 0] = flow[:, :, 0] / width * self.width - flow[:, :, 1] = flow[:, :, 1] / height * self.height - return flow - - def to_tensor(self, data_list): - """ - - Args: - data_list: A list contains multiple numpy arrays - - Returns: The stacked tensor list - - """ - keys = list(data_list.keys()) - for key in keys: - if data_list[key] is None or data_list[key] == []: - data_list.pop(key) - else: - item = data_list[key] - if not isinstance(item, list): - item = torch.from_numpy(np.transpose(item, (2, 0, 1))).float() # [c, h, w] - else: - item = np.stack(item, axis=0) - if len(item.shape) == 3: # [t, h, w] - item = item[:, :, :, np.newaxis] - item = torch.from_numpy(np.transpose(item, (0, 3, 1, 2))).float() # [t, c, h, w] - data_list[key] = item - return data_list - diff --git a/spaces/omri374/presidio/index.md b/spaces/omri374/presidio/index.md deleted file mode 100644 index 0f1316c6a412a145679d65d04b06ae378c4553b5..0000000000000000000000000000000000000000 --- a/spaces/omri374/presidio/index.md +++ /dev/null @@ -1,26 +0,0 @@ -# Simple demo website for Presidio -Here's a simple app, written in 
pure Python, to create a demo website for Presidio.
-The app is based on the [streamlit](https://streamlit.io/) package.
-
-A live version can be found here: https://huggingface.co/spaces/presidio/presidio_demo
-
-## Requirements
-1. Clone the repo and move to the `docs/samples/python/streamlit` folder
-2. Install dependencies (preferably in a virtual environment)
-
-```sh
-pip install -r requirements.txt
-```
-> Note: This would install additional packages such as `transformers` and `flair` which are not mandatory for using Presidio.
-
-3. *Optional*: Update the `analyzer_engine` and `anonymizer_engine` functions for your specific implementation (in `presidio_helpers.py`).
-4. Start the app:
-
-```sh
-streamlit run presidio_streamlit.py
-```
-
-## Output
-Output should be similar to this screenshot:
-![image](https://user-images.githubusercontent.com/3776619/232289541-d59992e1-52a4-44c1-b904-b22c72c02a5b.png)
diff --git a/spaces/osmanriver/Alist/Dockerfile b/spaces/osmanriver/Alist/Dockerfile
deleted file mode 100644
index 600c0dfc28c79ae1df031899cb9e3fa6bf7aa2a1..0000000000000000000000000000000000000000
--- a/spaces/osmanriver/Alist/Dockerfile
+++ /dev/null
@@ -1,35 +0,0 @@
-FROM nvidia/cuda:11.3.1-base-ubuntu20.04
-ENV DEBIAN_FRONTEND noninteractive
-
-WORKDIR /content
-
-RUN apt-get update -y && apt-get upgrade -y && apt-get install -y sudo && apt-get install -y python3-pip && pip3 install --upgrade pip
-RUN apt-get install -y curl tzdata aria2 gnupg wget htop sudo git git-lfs software-properties-common build-essential libgl1 zip unzip
-
-# Config timezone
-RUN date -R && sudo ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && date -R
-
-ENV PATH="/home/admin/.local/bin:${PATH}"
-ENV ALIST_TAR="alist-linux-amd64.tar.gz"
-# # Alist
-# RUN wget https://github.com/alist-org/alist/releases/download/v3.12.2/alist-linux-amd64.tar.gz
-RUN curl -s https://api.github.com/repos/alist-org/alist/releases/latest | grep $ALIST_TAR | grep "browser_download_url" | awk '{print$2}' | xargs -I {} wget {}
-RUN ls $ALIST_TAR || wget https://github.com/alist-org/alist/releases/download/v3.12.2/alist-linux-amd64.tar.gz
-RUN tar -zxvf $ALIST_TAR ; rm *.gz && chmod 777 alist && ls -l
-
-COPY *.sh .
-RUN chmod a+x script.sh - -RUN adduser --disabled-password --gecos '' admin -RUN adduser admin sudo -RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers - -RUN chown -R admin:admin /content -RUN chmod -R 777 /content -RUN chown -R admin:admin /home -RUN chmod -R 777 /home -USER admin - -EXPOSE 5244 - -CMD ["./script.sh"] \ No newline at end of file diff --git a/spaces/passaglia/yomikata-demo/config/config.py b/spaces/passaglia/yomikata-demo/config/config.py deleted file mode 100644 index ec91ec2602ad5aad29467725704bf89b65c969dd..0000000000000000000000000000000000000000 --- a/spaces/passaglia/yomikata-demo/config/config.py +++ /dev/null @@ -1,98 +0,0 @@ -# config.py - -import json -import logging.config -import sys -from pathlib import Path - -from rich.logging import RichHandler - -# Base and Config Directories -BASE_DIR = Path(__file__).parent.parent.absolute() -CONFIG_DIR = Path(BASE_DIR, "config") - -# Data Directories -RAW_DATA_DIR = Path(BASE_DIR, "raw_data") -SENTENCE_DATA_DIR = Path(BASE_DIR, "sentence_data") -TRAIN_DATA_DIR = Path(SENTENCE_DATA_DIR, "train") -VAL_DATA_DIR = Path(SENTENCE_DATA_DIR, "val") -TEST_DATA_DIR = Path(SENTENCE_DATA_DIR, "test") -READING_DATA_DIR = Path(BASE_DIR, "reading_data") - -# Logs Directory -LOGS_DIR = Path(BASE_DIR, "logs") - -# Model Storage Directory -STORES_DIR = Path(BASE_DIR, "stores") -RUN_REGISTRY = Path(STORES_DIR, "runs") - -# Create dirs -RAW_DATA_DIR.mkdir(parents=True, exist_ok=True) -SENTENCE_DATA_DIR.mkdir(parents=True, exist_ok=True) -TRAIN_DATA_DIR.mkdir(parents=True, exist_ok=True) -VAL_DATA_DIR.mkdir(parents=True, exist_ok=True) -TEST_DATA_DIR.mkdir(parents=True, exist_ok=True) -READING_DATA_DIR.mkdir(parents=True, exist_ok=True) -LOGS_DIR.mkdir(parents=True, exist_ok=True) -STORES_DIR.mkdir(parents=True, exist_ok=True) -RUN_REGISTRY.mkdir(parents=True, exist_ok=True) - -# Special tokens reserved -ASCII_SPACE_TOKEN = "\U0000FFFF" # this is used to replace the usual space characters before sending text to mecab, because mecab uses the usual space to separate words. 
- -# Seed -SEED = 1271297 - -# Training parameters -TRAIN_SIZE = 0.7 -VAL_SIZE = 0.15 -TEST_SIZE = 0.15 -assert TRAIN_SIZE + VAL_SIZE + TEST_SIZE == 1 - -# Heteronym list -with open(Path(CONFIG_DIR, "heteronyms.json")) as fp: - HETERONYMS = json.load(fp) - -# Logger -logging_config = { - "version": 1, - "disable_existing_loggers": False, - "formatters": { - "minimal": {"format": "%(message)s"}, - "detailed": { - "format": "%(levelname)s %(asctime)s [%(name)s:%(filename)s:%(funcName)s:%(lineno)d]\n%(message)s\n" - }, - }, - "handlers": { - "console": { - "class": "logging.StreamHandler", - "stream": sys.stdout, - "formatter": "minimal", - "level": logging.DEBUG, - }, - "info": { - "class": "logging.handlers.RotatingFileHandler", - "filename": Path(LOGS_DIR, "info.log"), - "maxBytes": 10485760, # 1 MB - "backupCount": 10, - "formatter": "detailed", - "level": logging.INFO, - }, - "error": { - "class": "logging.handlers.RotatingFileHandler", - "filename": Path(LOGS_DIR, "error.log"), - "maxBytes": 10485760, # 1 MB - "backupCount": 10, - "formatter": "detailed", - "level": logging.ERROR, - }, - }, - "root": { - "handlers": ["console", "info", "error"], - "level": logging.INFO, - "propagate": True, - }, -} -logging.config.dictConfig(logging_config) -logger = logging.getLogger() -logger.handlers[0] = RichHandler(markup=True) diff --git a/spaces/passaglia/yomikata-demo/yomikata/dictionary.py b/spaces/passaglia/yomikata-demo/yomikata/dictionary.py deleted file mode 100644 index b39bdda85da304624ba9e1f09814099f74299c98..0000000000000000000000000000000000000000 --- a/spaces/passaglia/yomikata-demo/yomikata/dictionary.py +++ /dev/null @@ -1,211 +0,0 @@ -""" -dictionary.py -Provides the Dictionary class which implements Reader using dictionary lookup. -""" - -from difflib import ndiff - -import jaconv -from chirptext import deko -from speach import ttlig -from speach.ttlig import RubyFrag, RubyToken - -from yomikata import utils -from config.config import ASCII_SPACE_TOKEN -from yomikata.reader import Reader - - -class Dictionary(Reader): - def __init__(self, tagger: str = "unidic") -> None: - """Create a Dictionary object to apply furigana using Dictionary lookup - Object holds configuration and tokenizer state. - - Typical usage: - - ```python - reader = Dictionary() - furi = Dictionary.furigana("お前はもう死んでいる") - # "お{前/まえ}はもう{死/し}んでいる" - ``` - - Args: - tagger (str, optional): Tokenizing dictionary to be used。 Defaults to `unidic`. `juman`, `ipadic`, 'sudachi' also possible. 
- """ - - if tagger == "unidic": - import fugashi - - self.tagger = fugashi.Tagger() - self.token_to_surface = lambda word: word.surface - self.token_to_pos = lambda word: word.feature.pos1 - self.token_to_kana = ( - lambda word: jaconv.kata2hira(str(word)) - if (word.feature.kana == "*" or word.feature.kana is None) - else jaconv.kata2hira(str(word.feature.kana)) - ) - elif tagger == "ipadic": - import fugashi - import ipadic - - self.tagger = fugashi.GenericTagger(ipadic.MECAB_ARGS) - self.token_to_surface = lambda word: word.surface - self.token_to_pos = lambda word: word.feature[0] - self.token_to_kana = ( - lambda word: jaconv.kata2hira(str(word.feature[7])) - if len(word.feature) >= 8 - else jaconv.kata2hira(str(word.surface)) - ) - elif tagger == "juman": - import fugashi - import jumandic - - self.tagger = fugashi.GenericTagger(jumandic.MECAB_ARGS) - self.token_to_surface = lambda word: word.surface - self.token_to_pos = lambda word: word.feature[0] - self.token_to_kana = ( - lambda word: word.feature[5] - if word.feature[5] != "*" - else jaconv.kata2hira(str(word)) - ) - elif tagger == "sudachi": - from sudachipy import dictionary as sudachidict - from sudachipy import tokenizer as sudachitokenizer - - tokenizer_obj = sudachidict.Dictionary(dict="full").create() - mode = sudachitokenizer.Tokenizer.SplitMode.C - self.tagger = lambda s: tokenizer_obj.tokenize(s, mode) - self.token_to_surface = lambda word: word.surface() - self.token_to_pos = lambda word: word.part_of_speech()[0] - self.token_to_kana = lambda word: jaconv.kata2hira( - utils.standardize_text(str(word.reading_form())) - ) - - def furigana(self, text: str) -> str: - text = utils.standardize_text(text) - text = text.replace(" ", ASCII_SPACE_TOKEN) - rubytoken = utils.parse_furigana(text) - output = "" - - for group in rubytoken.groups: - if isinstance(group, ttlig.RubyFrag): - output += f"{{{group.text}/{group.furi}}}" - else: - group = group.replace("{", "").replace("}", "") - for word in self.tagger(group): - kana = self.token_to_kana(word) - surface = self.token_to_surface(word) - pos = self.token_to_pos(word) - if (surface == kana) or pos in ["記号", "補助記号", "特殊"]: - output += surface - else: - output += Dictionary.furi_to_ruby(surface, kana).to_code() - output = output.replace(ASCII_SPACE_TOKEN, " ") - return output - - @staticmethod - def furi_to_ruby(surface, kana): - """Combine a surface string and a kana string to a RubyToken object with furigana. 
- - Args: - surface (str): Surface string - kana (str): Kana string - - Returns: - RubyToken: RubyToken object with furigana - - This code is modified from the version in the part of speach library: - https://github.com/neocl/speach/ - https://github.com/neocl/speach/blob/main/speach/ttlig.py - :copyright: (c) 2018 Le Tuan Anh - :license: MIT - """ - - def common_substring_from_right(string1, string2): - i = -1 # start from the end of strings - while -i <= min(len(string1), len(string2)): - if string1[i] != string2[i]: # if characters don't match, break - break - i -= 1 # decrement i to move towards start - return string1[i + 1 :] if i != -1 else "" # return common substring - - def assert_rubytoken_kana_match(ruby: RubyToken, kana: str) -> None: - assert ( - "".join( - [token.furi if isinstance(token, RubyFrag) else token for token in ruby.groups] - ) - == kana - ) - - original_kana = kana - - final_text = common_substring_from_right(surface, kana) - - if final_text: - surface = surface[: -len(final_text)] - kana = kana[: -len(final_text)] - - ruby = RubyToken(surface=surface) - if deko.is_kana(surface): - ruby.append(surface) - if final_text: - ruby.append(final_text) - assert_rubytoken_kana_match(ruby, original_kana) - return ruby - - edit_seq = ndiff(surface, kana) - kanji = "" - text = "" - furi = "" - before = "" - expected = "" - for item in edit_seq: - if item.startswith("- "): - # flush text if needed - if expected and kanji and furi: - ruby.append(RubyFrag(text=kanji, furi=furi)) - kanji = "" - furi = "" - print(ruby) - if text: - ruby.append(text) - text = "" - kanji += item[2:] - elif item.startswith("+ "): - if expected and item[2:] == expected: - if expected and kanji and furi: - ruby.append(RubyFrag(text=kanji, furi=furi)) - kanji = "" - furi = "" - ruby.append(item[2:]) - expected = "" - else: - furi += item[2:] - elif item.startswith(" "): - if before == "-" and not furi: - # shifting happened - expected = item[2:] - furi += item[2:] - else: - text += item[2:] - # flush if possible - if kanji and furi: - ruby.append(RubyFrag(text=kanji, furi=furi)) - kanji = "" - furi = "" - else: - # possible error? - pass - before = item[0] # end for - if kanji: - if furi: - ruby.append(RubyFrag(text=kanji, furi=furi)) - else: - ruby.append(kanji) - elif text: - ruby.append(text) - - if final_text: - ruby.append(final_text) - - assert_rubytoken_kana_match(ruby, original_kana) - return ruby diff --git a/spaces/pcuenq/dreambooth-training/convertosd.py b/spaces/pcuenq/dreambooth-training/convertosd.py deleted file mode 100644 index b242edb1de11ad551b3c7ad98f5689fef2c3321a..0000000000000000000000000000000000000000 --- a/spaces/pcuenq/dreambooth-training/convertosd.py +++ /dev/null @@ -1,223 +0,0 @@ -# Script for converting a HF Diffusers saved pipeline to a Stable Diffusion checkpoint. -# *Only* converts the UNet, VAE, and Text Encoder. -# Does not convert optimizer state or any other thing. 
-# Written by jachiam - -import argparse -import os.path as osp - -import torch - - -# =================# -# UNet Conversion # -# =================# - -unet_conversion_map = [ - # (stable-diffusion, HF Diffusers) - ("time_embed.0.weight", "time_embedding.linear_1.weight"), - ("time_embed.0.bias", "time_embedding.linear_1.bias"), - ("time_embed.2.weight", "time_embedding.linear_2.weight"), - ("time_embed.2.bias", "time_embedding.linear_2.bias"), - ("input_blocks.0.0.weight", "conv_in.weight"), - ("input_blocks.0.0.bias", "conv_in.bias"), - ("out.0.weight", "conv_norm_out.weight"), - ("out.0.bias", "conv_norm_out.bias"), - ("out.2.weight", "conv_out.weight"), - ("out.2.bias", "conv_out.bias"), -] - -unet_conversion_map_resnet = [ - # (stable-diffusion, HF Diffusers) - ("in_layers.0", "norm1"), - ("in_layers.2", "conv1"), - ("out_layers.0", "norm2"), - ("out_layers.3", "conv2"), - ("emb_layers.1", "time_emb_proj"), - ("skip_connection", "conv_shortcut"), -] - -unet_conversion_map_layer = [] -# hardcoded number of downblocks and resnets/attentions... -# would need smarter logic for other networks. -for i in range(4): - # loop over downblocks/upblocks - - for j in range(2): - # loop over resnets/attentions for downblocks - hf_down_res_prefix = f"down_blocks.{i}.resnets.{j}." - sd_down_res_prefix = f"input_blocks.{3*i + j + 1}.0." - unet_conversion_map_layer.append((sd_down_res_prefix, hf_down_res_prefix)) - - if i < 3: - # no attention layers in down_blocks.3 - hf_down_atn_prefix = f"down_blocks.{i}.attentions.{j}." - sd_down_atn_prefix = f"input_blocks.{3*i + j + 1}.1." - unet_conversion_map_layer.append((sd_down_atn_prefix, hf_down_atn_prefix)) - - for j in range(3): - # loop over resnets/attentions for upblocks - hf_up_res_prefix = f"up_blocks.{i}.resnets.{j}." - sd_up_res_prefix = f"output_blocks.{3*i + j}.0." - unet_conversion_map_layer.append((sd_up_res_prefix, hf_up_res_prefix)) - - if i > 0: - # no attention layers in up_blocks.0 - hf_up_atn_prefix = f"up_blocks.{i}.attentions.{j}." - sd_up_atn_prefix = f"output_blocks.{3*i + j}.1." - unet_conversion_map_layer.append((sd_up_atn_prefix, hf_up_atn_prefix)) - - if i < 3: - # no downsample in down_blocks.3 - hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0.conv." - sd_downsample_prefix = f"input_blocks.{3*(i+1)}.0.op." - unet_conversion_map_layer.append((sd_downsample_prefix, hf_downsample_prefix)) - - # no upsample in up_blocks.3 - hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0." - sd_upsample_prefix = f"output_blocks.{3*i + 2}.{1 if i == 0 else 2}." - unet_conversion_map_layer.append((sd_upsample_prefix, hf_upsample_prefix)) - -hf_mid_atn_prefix = "mid_block.attentions.0." -sd_mid_atn_prefix = "middle_block.1." -unet_conversion_map_layer.append((sd_mid_atn_prefix, hf_mid_atn_prefix)) - -for j in range(2): - hf_mid_res_prefix = f"mid_block.resnets.{j}." - sd_mid_res_prefix = f"middle_block.{2*j}." - unet_conversion_map_layer.append((sd_mid_res_prefix, hf_mid_res_prefix)) - - -def convert_unet_state_dict(unet_state_dict): - # buyer beware: this is a *brittle* function, - # and correct output requires that all of these pieces interact in - # the exact order in which I have arranged them. 
- mapping = {k: k for k in unet_state_dict.keys()} - for sd_name, hf_name in unet_conversion_map: - mapping[hf_name] = sd_name - for k, v in mapping.items(): - if "resnets" in k: - for sd_part, hf_part in unet_conversion_map_resnet: - v = v.replace(hf_part, sd_part) - mapping[k] = v - for k, v in mapping.items(): - for sd_part, hf_part in unet_conversion_map_layer: - v = v.replace(hf_part, sd_part) - mapping[k] = v - new_state_dict = {v: unet_state_dict[k] for k, v in mapping.items()} - return new_state_dict - - -# ================# -# VAE Conversion # -# ================# - -vae_conversion_map = [ - # (stable-diffusion, HF Diffusers) - ("nin_shortcut", "conv_shortcut"), - ("norm_out", "conv_norm_out"), - ("mid.attn_1.", "mid_block.attentions.0."), -] - -for i in range(4): - # down_blocks have two resnets - for j in range(2): - hf_down_prefix = f"encoder.down_blocks.{i}.resnets.{j}." - sd_down_prefix = f"encoder.down.{i}.block.{j}." - vae_conversion_map.append((sd_down_prefix, hf_down_prefix)) - - if i < 3: - hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0." - sd_downsample_prefix = f"down.{i}.downsample." - vae_conversion_map.append((sd_downsample_prefix, hf_downsample_prefix)) - - hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0." - sd_upsample_prefix = f"up.{3-i}.upsample." - vae_conversion_map.append((sd_upsample_prefix, hf_upsample_prefix)) - - # up_blocks have three resnets - # also, up blocks in hf are numbered in reverse from sd - for j in range(3): - hf_up_prefix = f"decoder.up_blocks.{i}.resnets.{j}." - sd_up_prefix = f"decoder.up.{3-i}.block.{j}." - vae_conversion_map.append((sd_up_prefix, hf_up_prefix)) - -# this part accounts for mid blocks in both the encoder and the decoder -for i in range(2): - hf_mid_res_prefix = f"mid_block.resnets.{i}." - sd_mid_res_prefix = f"mid.block_{i+1}." 
- vae_conversion_map.append((sd_mid_res_prefix, hf_mid_res_prefix)) - - -vae_conversion_map_attn = [ - # (stable-diffusion, HF Diffusers) - ("norm.", "group_norm."), - ("q.", "query."), - ("k.", "key."), - ("v.", "value."), - ("proj_out.", "proj_attn."), -] - - -def reshape_weight_for_sd(w): - # convert HF linear weights to SD conv2d weights - return w.reshape(*w.shape, 1, 1) - - -def convert_vae_state_dict(vae_state_dict): - mapping = {k: k for k in vae_state_dict.keys()} - for k, v in mapping.items(): - for sd_part, hf_part in vae_conversion_map: - v = v.replace(hf_part, sd_part) - mapping[k] = v - for k, v in mapping.items(): - if "attentions" in k: - for sd_part, hf_part in vae_conversion_map_attn: - v = v.replace(hf_part, sd_part) - mapping[k] = v - new_state_dict = {v: vae_state_dict[k] for k, v in mapping.items()} - weights_to_convert = ["q", "k", "v", "proj_out"] - print("Converting to CKPT ...") - for k, v in new_state_dict.items(): - for weight_name in weights_to_convert: - if f"mid.attn_1.{weight_name}.weight" in k: - new_state_dict[k] = reshape_weight_for_sd(v) - return new_state_dict - - -# =========================# -# Text Encoder Conversion # -# =========================# -# pretty much a no-op - - -def convert_text_enc_state_dict(text_enc_dict): - return text_enc_dict - - -def convert(model_path, checkpoint_path): - unet_path = osp.join(model_path, "unet", "diffusion_pytorch_model.bin") - vae_path = osp.join(model_path, "vae", "diffusion_pytorch_model.bin") - text_enc_path = osp.join(model_path, "text_encoder", "pytorch_model.bin") - - # Convert the UNet model - unet_state_dict = torch.load(unet_path, map_location='cpu') - unet_state_dict = convert_unet_state_dict(unet_state_dict) - unet_state_dict = {"model.diffusion_model." + k: v for k, v in unet_state_dict.items()} - - # Convert the VAE model - vae_state_dict = torch.load(vae_path, map_location='cpu') - vae_state_dict = convert_vae_state_dict(vae_state_dict) - vae_state_dict = {"first_stage_model." + k: v for k, v in vae_state_dict.items()} - - # Convert the text encoder model - text_enc_dict = torch.load(text_enc_path, map_location='cpu') - text_enc_dict = convert_text_enc_state_dict(text_enc_dict) - text_enc_dict = {"cond_stage_model.transformer." 
+ k: v for k, v in text_enc_dict.items()} - - # Put together new checkpoint - state_dict = {**unet_state_dict, **vae_state_dict, **text_enc_dict} - - state_dict = {k:v.half() for k,v in state_dict.items()} - state_dict = {"state_dict": state_dict} - torch.save(state_dict, checkpoint_path) diff --git a/spaces/pikto/Elite-freegpt-webui/client/js/highlight.min.js b/spaces/pikto/Elite-freegpt-webui/client/js/highlight.min.js deleted file mode 100644 index d410b45b38119606525a0a7c0c60c428c5ee6eb7..0000000000000000000000000000000000000000 --- a/spaces/pikto/Elite-freegpt-webui/client/js/highlight.min.js +++ /dev/null @@ -1 +0,0 @@ -var hljs=function(){"use strict";var e={exports:{}};function n(e){return e instanceof Map?e.clear=e.delete=e.set=()=>{throw Error("map is read-only")}:e instanceof Set&&(e.add=e.clear=e.delete=()=>{throw Error("set is read-only")}),Object.freeze(e),Object.getOwnPropertyNames(e).forEach(t=>{var a=e[t];"object"!=typeof a||Object.isFrozen(a)||n(a)}),e}e.exports=n,e.exports.default=n;class t{constructor(e){void 0===e.data&&(e.data={}),this.data=e.data,this.isMatchIgnored=!1}ignoreMatch(){this.isMatchIgnored=!0}}function a(e){return e.replace(/&/g,"&").replace(//g,">").replace(/"/g,""").replace(/'/g,"'")}function i(e,...n){let t=Object.create(null);for(let a in e)t[a]=e[a];return n.forEach(e=>{for(let n in e)t[n]=e[n]}),t}let r=e=>!!e.scope||e.sublanguage&&e.language;class s{constructor(e,n){this.buffer="",this.classPrefix=n.classPrefix,e.walk(this)}addText(e){this.buffer+=a(e)}openNode(e){if(!r(e))return;let n="";n=e.sublanguage?"language-"+e.language:((e,{prefix:n})=>{if(e.includes(".")){let t=e.split(".");return[`${n}${t.shift()}`,...t.map((e,n)=>`${e}${"_".repeat(n+1)}`),].join(" ")}return`${n}${e}`})(e.scope,{prefix:this.classPrefix}),this.span(n)}closeNode(e){r(e)&&(this.buffer+="")}value(){return this.buffer}span(e){this.buffer+=``}}let l=(e={})=>{let n={children:[]};return Object.assign(n,e),n};class o{constructor(){this.rootNode=l(),this.stack=[this.rootNode]}get top(){return this.stack[this.stack.length-1]}get root(){return this.rootNode}add(e){this.top.children.push(e)}openNode(e){let n=l({scope:e});this.add(n),this.stack.push(n)}closeNode(){if(this.stack.length>1)return this.stack.pop()}closeAllNodes(){for(;this.closeNode(););}toJSON(){return JSON.stringify(this.rootNode,null,4)}walk(e){return this.constructor._walk(e,this.rootNode)}static _walk(e,n){return"string"==typeof n?e.addText(n):n.children&&(e.openNode(n),n.children.forEach(n=>this._walk(e,n)),e.closeNode(n)),e}static _collapse(e){"string"!=typeof e&&e.children&&(e.children.every(e=>"string"==typeof e)?e.children=[e.children.join("")]:e.children.forEach(e=>{o._collapse(e)}))}}class c extends o{constructor(e){super(),this.options=e}addKeyword(e,n){""!==e&&(this.openNode(n),this.addText(e),this.closeNode())}addText(e){""!==e&&this.add(e)}addSublanguage(e,n){let t=e.root;t.sublanguage=!0,t.language=n,this.add(t)}toHTML(){return new s(this,this.options).value()}finalize(){return!0}}function d(e){return e?"string"==typeof e?e:e.source:null}function g(e){return m("(?=",e,")")}function u(e){return m("(?:",e,")*")}function b(e){return m("(?:",e,")?")}function m(...e){return e.map(e=>d(e)).join("")}function p(...e){let n=(e=>{let n=e[e.length-1];return"object"==typeof n&&n.constructor===Object?(e.splice(e.length-1,1),n):{}})(e);return"("+(n.capture?"":"?:")+e.map(e=>d(e)).join("|")+")"}function h(e){return RegExp(e.toString()+"|").exec("").length-1}let f=/\[(?:[^\\\]]|\\.)*\]|\(\??|\\([1-9][0-9]*)|\\./;function 
E(e,{joinWith:n}){let t=0;return e.map(e=>{t+=1;let n=t,a=d(e),i="";for(;a.length>0;){let r=f.exec(a);if(!r){i+=a;break}i+=a.substring(0,r.index),a=a.substring(r.index+r[0].length),"\\"===r[0][0]&&r[1]?i+="\\"+(Number(r[1])+n):(i+=r[0],"("===r[0]&&t++)}return i}).map(e=>`(${e})`).join(n)}let $="[a-zA-Z]\\w*",y="[a-zA-Z_]\\w*",N="\\b\\d+(\\.\\d+)?",w="(-?)(\\b0[xX][a-fA-F0-9]+|(\\b\\d+(\\.\\d*)?|\\.\\d+)([eE][-+]?\\d+)?)",v="\\b(0b[01]+)",x={begin:"\\\\[\\s\\S]",relevance:0},k=(e,n,t={})=>{let a=i({scope:"comment",begin:e,end:n,contains:[]},t);a.contains.push({scope:"doctag",begin:"[ ]*(?=(TODO|FIXME|NOTE|BUG|OPTIMIZE|HACK|XXX):)",end:/(TODO|FIXME|NOTE|BUG|OPTIMIZE|HACK|XXX):/,excludeBegin:!0,relevance:0});let r=p("I","a","is","so","us","to","at","if","in","it","on",/[A-Za-z]+['](d|ve|re|ll|t|s|n)/,/[A-Za-z]+[-][a-z]+/,/[A-Za-z][a-z]{2,}/);return a.contains.push({begin:m(/[ ]+/,"(",r,/[.]?[:]?([.][ ]|[ ])/,"){3}")}),a},M=k("//","$"),O=k("/\\*","\\*/"),S=k("#","$");var A=Object.freeze({__proto__:null,MATCH_NOTHING_RE:/\b\B/,IDENT_RE:$,UNDERSCORE_IDENT_RE:y,NUMBER_RE:N,C_NUMBER_RE:w,BINARY_NUMBER_RE:v,RE_STARTERS_RE:"!|!=|!==|%|%=|&|&&|&=|\\*|\\*=|\\+|\\+=|,|-|-=|/=|/|:|;|<<|<<=|<=|<|===|==|=|>>>=|>>=|>=|>>>|>>|>|\\?|\\[|\\{|\\(|\\^|\\^=|\\||\\|=|\\|\\||~",SHEBANG(e={}){let n=/^#![ ]*\//;return e.binary&&(e.begin=m(n,/.*\b/,e.binary,/\b.*/)),i({scope:"meta",begin:n,end:/$/,relevance:0,"on:begin"(e,n){0!==e.index&&n.ignoreMatch()}},e)},BACKSLASH_ESCAPE:x,APOS_STRING_MODE:{scope:"string",begin:"'",end:"'",illegal:"\\n",contains:[x]},QUOTE_STRING_MODE:{scope:"string",begin:'"',end:'"',illegal:"\\n",contains:[x]},PHRASAL_WORDS_MODE:{begin:/\b(a|an|the|are|I'm|isn't|don't|doesn't|won't|but|just|should|pretty|simply|enough|gonna|going|wtf|so|such|will|you|your|they|like|more)\b/},COMMENT:k,C_LINE_COMMENT_MODE:M,C_BLOCK_COMMENT_MODE:O,HASH_COMMENT_MODE:S,NUMBER_MODE:{scope:"number",begin:N,relevance:0},C_NUMBER_MODE:{scope:"number",begin:w,relevance:0},BINARY_NUMBER_MODE:{scope:"number",begin:v,relevance:0},REGEXP_MODE:{begin:/(?=\/[^/\n]*\/)/,contains:[{scope:"regexp",begin:/\//,end:/\/[gimuy]*/,illegal:/\n/,contains:[x,{begin:/\[/,end:/\]/,relevance:0,contains:[x]},]},]},TITLE_MODE:{scope:"title",begin:$,relevance:0},UNDERSCORE_TITLE_MODE:{scope:"title",begin:y,relevance:0},METHOD_GUARD:{begin:"\\.\\s*[a-zA-Z_]\\w*",relevance:0},END_SAME_AS_BEGIN:e=>Object.assign(e,{"on:begin"(e,n){n.data._beginMatch=e[1]},"on:end"(e,n){n.data._beginMatch!==e[1]&&n.ignoreMatch()}})});function C(e,n){"."===e.input[e.index-1]&&n.ignoreMatch()}function T(e,n){void 0!==e.className&&(e.scope=e.className,delete e.className)}function R(e,n){n&&e.beginKeywords&&(e.begin="\\b("+e.beginKeywords.split(" ").join("|")+")(?!\\.)(?=\\b|\\s)",e.__beforeBegin=C,e.keywords=e.keywords||e.beginKeywords,delete e.beginKeywords,void 0===e.relevance&&(e.relevance=0))}function D(e,n){Array.isArray(e.illegal)&&(e.illegal=p(...e.illegal))}function I(e,n){if(e.match){if(e.begin||e.end)throw Error("begin & end are not supported with match");e.begin=e.match,delete e.match}}function L(e,n){void 0===e.relevance&&(e.relevance=1)}let B=(e,n)=>{if(!e.beforeMatch)return;if(e.starts)throw Error("beforeMatch cannot be used with starts");let t=Object.assign({},e);Object.keys(e).forEach(n=>{delete e[n]}),e.keywords=t.keywords,e.begin=m(t.beforeMatch,g(t.begin)),e.starts={relevance:0,contains:[Object.assign(t,{endsParent:!0})]},e.relevance=0,delete 
t.beforeMatch},_=["of","and","for","in","not","or","if","then","parent","list","value",],z={},F=e=>{console.error(e)},U=(e,...n)=>{},P=(e,n)=>{z[`${e}/${n}`]||(console.log(`Deprecated as of ${e}. ${n}`),z[`${e}/${n}`]=!0)},j=Error();function K(e,n,{key:t}){let a=0,i=e[t],r={},s={};for(let l=1;l<=n.length;l++)s[l+a]=i[l],r[l+a]=!0,a+=h(n[l-1]);e[t]=s,e[t]._emit=r,e[t]._multi=!0}function q(e){var n;(n=e).scope&&"object"==typeof n.scope&&null!==n.scope&&(n.beginScope=n.scope,delete n.scope),"string"==typeof e.beginScope&&(e.beginScope={_wrap:e.beginScope}),"string"==typeof e.endScope&&(e.endScope={_wrap:e.endScope}),(e=>{if(Array.isArray(e.begin)){if(e.skip||e.excludeBegin||e.returnBegin)throw F("skip, excludeBegin, returnBegin not compatible with beginScope: {}"),j;if("object"!=typeof e.beginScope||null===e.beginScope)throw F("beginScope must be object"),j;K(e,e.begin,{key:"beginScope"}),e.begin=E(e.begin,{joinWith:""})}})(e),(e=>{if(Array.isArray(e.end)){if(e.skip||e.excludeEnd||e.returnEnd)throw F("skip, excludeEnd, returnEnd not compatible with endScope: {}"),j;if("object"!=typeof e.endScope||null===e.endScope)throw F("endScope must be object"),j;K(e,e.end,{key:"endScope"}),e.end=E(e.end,{joinWith:""})}})(e)}class H extends Error{constructor(e,n){super(e),this.name="HTMLInjectionError",this.html=n}}let Z=a,G=i,W=Symbol("nomatch");var Q=(n=>{let a=Object.create(null),r=Object.create(null),s=[],l=!0,o="Could not find the language '{}', did you forget to load/include a language module?",f={disableAutodetect:!0,name:"Plain text",contains:[]},$={ignoreUnescapedHTML:!1,throwUnescapedHTML:!1,noHighlightRe:/^(no-?highlight)$/i,languageDetectRe:/\blang(?:uage)?-([\w-]+)\b/i,classPrefix:"hljs-",cssSelector:"pre code",languages:null,__emitter:c};function y(e){return $.noHighlightRe.test(e)}function N(e,n,t){let a="",i="";"object"==typeof n?(a=e,t=n.ignoreIllegals,i=n.language):(P("10.7.0","highlight(lang, code, ...args) has been deprecated."),P("10.7.0","Please use highlight(code, options) instead.\nhttps://github.com/highlightjs/highlight.js/issues/2277"),i=e,a=n),void 0===t&&(t=!0);let r={code:a,language:i};z("before:highlight",r);let s=r.result?r.result:w(r.language,r.code,t);return s.code=r.code,z("after:highlight",s),s}function w(e,n,r,s){let c=Object.create(null);function g(){var e;if(!M.keywords)return void A.addText(C);let n=0;M.keywordPatternRe.lastIndex=0;let t=M.keywordPatternRe.exec(C),a="";for(;t;){a+=C.substring(n,t.index);let i=N.case_insensitive?t[0].toLowerCase():t[0],r=(e=i,M.keywords[e]);if(r){let[s,l]=r;if(A.addText(a),a="",c[i]=(c[i]||0)+1,c[i]<=7&&(z+=l),s.startsWith("_"))a+=t[0];else{let o=N.classNameAliases[s]||s;A.addKeyword(t[0],o)}}else a+=t[0];n=M.keywordPatternRe.lastIndex,t=M.keywordPatternRe.exec(C)}a+=C.substring(n),A.addText(a)}function u(){null!=M.subLanguage?(()=>{if(""===C)return;let e=null;if("string"==typeof M.subLanguage){if(!a[M.subLanguage])return void A.addText(C);e=w(M.subLanguage,C,!0,S[M.subLanguage]),S[M.subLanguage]=e._top}else e=v(C,M.subLanguage.length?M.subLanguage:null);M.relevance>0&&(z+=e.relevance),A.addSublanguage(e._emitter,e.language)})():g(),C=""}function b(e,n){let t=1,a=n.length-1;for(;t<=a;){if(!e._emit[t]){t++;continue}let i=N.classNameAliases[e[t]]||e[t],r=n[t];i?A.addKeyword(r,i):(C=r,g(),C=""),t++}}function m(e,n){return e.scope&&"string"==typeof 
e.scope&&A.openNode(N.classNameAliases[e.scope]||e.scope),e.beginScope&&(e.beginScope._wrap?(A.addKeyword(C,N.classNameAliases[e.beginScope._wrap]||e.beginScope._wrap),C=""):e.beginScope._multi&&(b(e.beginScope,n),C="")),M=Object.create(e,{parent:{value:M}})}function p(e){return 0===M.matcher.regexIndex?(C+=e[0],1):(j=!0,0)}let f={};function y(a,i){let s=i&&i[0];if(C+=a,null==s)return u(),0;if("begin"===f.type&&"end"===i.type&&f.index===i.index&&""===s){if(C+=n.slice(i.index,i.index+1),!l){let o=Error(`0 width match regex (${e})`);throw o.languageName=e,o.badRule=f.rule,o}return 1}if(f=i,"begin"===i.type)return(e=>{let n=e[0],a=e.rule,i=new t(a),r=[a.__beforeBegin,a["on:begin"]];for(let s of r)if(s&&(s(e,i),i.isMatchIgnored))return p(n);return a.skip?C+=n:(a.excludeBegin&&(C+=n),u(),a.returnBegin||a.excludeBegin||(C=n)),m(a,e),a.returnBegin?0:n.length})(i);if("illegal"===i.type&&!r){let c=Error('Illegal lexeme "'+s+'" for mode "'+(M.scope||"")+'"');throw c.mode=M,c}if("end"===i.type){let d=function e(a){let i=a[0],r=n.substring(a.index),s=function e(n,a,i){let r=((e,n)=>{let t=e&&e.exec(n);return t&&0===t.index})(n.endRe,i);if(r){if(n["on:end"]){let s=new t(n);n["on:end"](a,s),s.isMatchIgnored&&(r=!1)}if(r){for(;n.endsParent&&n.parent;)n=n.parent;return n}}if(n.endsWithParent)return e(n.parent,a,i)}(M,a,r);if(!s)return W;let l=M;M.endScope&&M.endScope._wrap?(u(),A.addKeyword(i,M.endScope._wrap)):M.endScope&&M.endScope._multi?(u(),b(M.endScope,a)):l.skip?C+=i:(l.returnEnd||l.excludeEnd||(C+=i),u(),l.excludeEnd&&(C=i));do M.scope&&A.closeNode(),M.skip||M.subLanguage||(z+=M.relevance),M=M.parent;while(M!==s.parent);return s.starts&&m(s.starts,a),l.returnEnd?0:i.length}(i);if(d!==W)return d}if("illegal"===i.type&&""===s)return 1;if(P>1e5&&P>3*i.index)throw Error("potential infinite loop, way more iterations than matches");return C+=s,s.length}let N=O(e);if(!N)throw F(o.replace("{}",e)),Error('Unknown language: "'+e+'"');let x=function e(n){function t(e,t){return RegExp(d(e),"m"+(n.case_insensitive?"i":"")+(n.unicodeRegex?"u":"")+(t?"g":""))}class a{constructor(){this.matchIndexes={},this.regexes=[],this.matchAt=1,this.position=0}addRule(e,n){n.position=this.position++,this.matchIndexes[this.matchAt]=n,this.regexes.push([n,e]),this.matchAt+=h(e)+1}compile(){0===this.regexes.length&&(this.exec=()=>null);let e=this.regexes.map(e=>e[1]);this.matcherRe=t(E(e,{joinWith:"|"}),!0),this.lastIndex=0}exec(e){this.matcherRe.lastIndex=this.lastIndex;let n=this.matcherRe.exec(e);if(!n)return null;let t=n.findIndex((e,n)=>n>0&&void 0!==e),a=this.matchIndexes[t];return n.splice(0,t),Object.assign(n,a)}}class r{constructor(){this.rules=[],this.multiRegexes=[],this.count=0,this.lastIndex=0,this.regexIndex=0}getMatcher(e){if(this.multiRegexes[e])return this.multiRegexes[e];let n=new a;return this.rules.slice(e).forEach(([e,t])=>n.addRule(e,t)),n.compile(),this.multiRegexes[e]=n,n}resumingScanAtSamePosition(){return 0!==this.regexIndex}considerAll(){this.regexIndex=0}addRule(e,n){this.rules.push([e,n]),"begin"===n.type&&this.count++}exec(e){let n=this.getMatcher(this.regexIndex);n.lastIndex=this.lastIndex;let t=n.exec(e);if(this.resumingScanAtSamePosition()){if(t&&t.index===this.lastIndex);else{let a=this.getMatcher(0);a.lastIndex=this.lastIndex+1,t=a.exec(e)}}return t&&(this.regexIndex+=t.position+1,this.regexIndex===this.count&&this.considerAll()),t}}if(n.compilerExtensions||(n.compilerExtensions=[]),n.contains&&n.contains.includes("self"))throw Error("ERR: contains `self` is not supported at the top-level of a 
language. See documentation.");return n.classNameAliases=i(n.classNameAliases||{}),function e(a,s){let l=a;if(a.isCompiled)return l;[T,I,q,B].forEach(e=>e(a,s)),n.compilerExtensions.forEach(e=>e(a,s)),a.__beforeBegin=null,[R,D,L].forEach(e=>e(a,s)),a.isCompiled=!0;let o=null;return"object"==typeof a.keywords&&a.keywords.$pattern&&(a.keywords=Object.assign({},a.keywords),o=a.keywords.$pattern,delete a.keywords.$pattern),o=o||/\w+/,a.keywords&&(a.keywords=function e(n,t,a="keyword"){let i=Object.create(null);return"string"==typeof n?r(a,n.split(" ")):Array.isArray(n)?r(a,n):Object.keys(n).forEach(a=>{Object.assign(i,e(n[a],t,a))}),i;function r(e,n){t&&(n=n.map(e=>e.toLowerCase())),n.forEach(n=>{var t,a,r;let s=n.split("|");i[s[0]]=[e,(t=s[0],a=s[1],a?Number(a):(r=t,_.includes(r.toLowerCase()))?0:1)]})}}(a.keywords,n.case_insensitive)),l.keywordPatternRe=t(o,!0),s&&(a.begin||(a.begin=/\B|\b/),l.beginRe=t(l.begin),a.end||a.endsWithParent||(a.end=/\B|\b/),a.end&&(l.endRe=t(l.end)),l.terminatorEnd=d(l.end)||"",a.endsWithParent&&s.terminatorEnd&&(l.terminatorEnd+=(a.end?"|":"")+s.terminatorEnd)),a.illegal&&(l.illegalRe=t(a.illegal)),a.contains||(a.contains=[]),a.contains=[].concat(...a.contains.map(e=>{var n;return(n="self"===e?a:e).variants&&!n.cachedVariants&&(n.cachedVariants=n.variants.map(e=>i(n,{variants:null},e))),n.cachedVariants?n.cachedVariants:!function e(n){return!!n&&(n.endsWithParent||e(n.starts))}(n)?Object.isFrozen(n)?i(n):n:i(n,{starts:n.starts?i(n.starts):null})})),a.contains.forEach(n=>{e(n,l)}),a.starts&&e(a.starts,s),l.matcher=(e=>{let n=new r;return e.contains.forEach(e=>n.addRule(e.begin,{rule:e,type:"begin"})),e.terminatorEnd&&n.addRule(e.terminatorEnd,{type:"end"}),e.illegal&&n.addRule(e.illegal,{type:"illegal"}),n})(l),l}(n)}(N),k="",M=s||x,S={},A=new $.__emitter($);(()=>{let e=[];for(let n=M;n!==N;n=n.parent)n.scope&&e.unshift(n.scope);e.forEach(e=>A.openNode(e))})();let C="",z=0,U=0,P=0,j=!1;try{for(M.matcher.considerAll();;){P++,j?j=!1:M.matcher.considerAll(),M.matcher.lastIndex=U;let K=M.matcher.exec(n);if(!K)break;let H=y(n.substring(U,K.index),K);U=K.index+H}return y(n.substring(U)),A.closeAllNodes(),A.finalize(),k=A.toHTML(),{language:e,value:k,relevance:z,illegal:!1,_emitter:A,_top:M}}catch(G){if(G.message&&G.message.includes("Illegal"))return{language:e,value:Z(n),illegal:!0,relevance:0,_illegalBy:{message:G.message,index:U,context:n.slice(U-100,U+100),mode:G.mode,resultSoFar:k},_emitter:A};if(l)return{language:e,value:Z(n),illegal:!1,relevance:0,errorRaised:G,_emitter:A,_top:M};throw G}}function v(e,n){n=n||$.languages||Object.keys(a);let t=(e=>{let n={value:Z(e),illegal:!1,relevance:0,_top:f,_emitter:new $.__emitter($)};return n._emitter.addText(e),n})(e),i=n.filter(O).filter(C).map(n=>w(n,e,!1));i.unshift(t);let r=i.sort((e,n)=>{if(e.relevance!==n.relevance)return n.relevance-e.relevance;if(e.language&&n.language){if(O(e.language).supersetOf===n.language)return 1;if(O(n.language).supersetOf===e.language)return -1}return 0}),[s,l]=r,o=s;return o.secondBest=l,o}function x(e){let n=null,t=(e=>{let n=e.className+" ";n+=e.parentNode?e.parentNode.className:"";let t=$.languageDetectRe.exec(n);if(t){let a=O(t[1]);return a||(U(o.replace("{}",t[1])),U("Falling back to no-highlight mode for this block.",e)),a?t[1]:"no-highlight"}return n.split(/\s+/).find(e=>y(e)||O(e))})(e);if(y(t))return;if(z("before:highlightElement",{el:e,language:t}),e.children.length>0&&($.ignoreUnescapedHTML||$.throwUnescapedHTML))throw new H("One of your code blocks includes unescaped 
HTML.",e.innerHTML);n=e;let a=n.textContent,i=t?N(a,{language:t,ignoreIllegals:!0}):v(a);e.innerHTML=i.value,((e,n,t)=>{let a=n&&r[n]||t;e.classList.add("hljs"),e.classList.add("language-"+a)})(e,t,i.language),e.result={language:i.language,re:i.relevance,relevance:i.relevance},i.secondBest&&(e.secondBest={language:i.secondBest.language,relevance:i.secondBest.relevance}),z("after:highlightElement",{el:e,result:i,text:a})}let k=!1;function M(){"loading"!==document.readyState?document.querySelectorAll($.cssSelector).forEach(x):k=!0}function O(e){return a[e=(e||"").toLowerCase()]||a[r[e]]}function S(e,{languageName:n}){"string"==typeof e&&(e=[e]),e.forEach(e=>{r[e.toLowerCase()]=n})}function C(e){let n=O(e);return n&&!n.disableAutodetect}function z(e,n){let t=e;s.forEach(e=>{e[t]&&e[t](n)})}for(let j in"undefined"!=typeof window&&window.addEventListener&&window.addEventListener("DOMContentLoaded",()=>{k&&M()},!1),Object.assign(n,{highlight:N,highlightAuto:v,highlightAll:M,highlightElement:x,highlightBlock:e=>(P("10.7.0","highlightBlock will be removed entirely in v12.0"),P("10.7.0","Please use highlightElement now."),x(e)),configure(e){$=G($,e)},initHighlighting(){M(),P("10.6.0","initHighlighting() deprecated. Use highlightAll() now.")},initHighlightingOnLoad(){M(),P("10.6.0","initHighlightingOnLoad() deprecated. Use highlightAll() now.")},registerLanguage(e,t){let i=null;try{i=t(n)}catch(r){if(F("Language definition for '{}' could not be registered.".replace("{}",e)),!l)throw r;F(r),i=f}i.name||(i.name=e),a[e]=i,i.rawDefinition=t.bind(null,n),i.aliases&&S(i.aliases,{languageName:e})},unregisterLanguage(e){for(let n of(delete a[e],Object.keys(r)))r[n]===e&&delete r[n]},listLanguages:()=>Object.keys(a),getLanguage:O,registerAliases:S,autoDetection:C,inherit:G,addPlugin(e){var n;(n=e)["before:highlightBlock"]&&!n["before:highlightElement"]&&(n["before:highlightElement"]=e=>{n["before:highlightBlock"](Object.assign({block:e.el},e))}),n["after:highlightBlock"]&&!n["after:highlightElement"]&&(n["after:highlightElement"]=e=>{n["after:highlightBlock"](Object.assign({block:e.el},e))}),s.push(e)}}),n.debugMode=()=>{l=!1},n.safeMode=()=>{l=!0},n.versionString="11.7.0",n.regex={concat:m,lookahead:g,either:p,optional:b,anyNumberOfTimes:u},A)"object"==typeof A[j]&&e.exports(A[j]);return Object.assign(n,A),n})({});let 
X=e=>({IMPORTANT:{scope:"meta",begin:"!important"},BLOCK_COMMENT:e.C_BLOCK_COMMENT_MODE,HEXCOLOR:{scope:"number",begin:/#(([0-9a-fA-F]{3,4})|(([0-9a-fA-F]{2}){3,4}))\b/},FUNCTION_DISPATCH:{className:"built_in",begin:/[\w-]+(?=\()/},ATTRIBUTE_SELECTOR_MODE:{scope:"selector-attr",begin:/\[/,end:/\]/,illegal:"$",contains:[e.APOS_STRING_MODE,e.QUOTE_STRING_MODE]},CSS_NUMBER_MODE:{scope:"number",begin:e.NUMBER_RE+"(%|em|ex|ch|rem|vw|vh|vmin|vmax|cm|mm|in|pt|pc|px|deg|grad|rad|turn|s|ms|Hz|kHz|dpi|dpcm|dppx)?",relevance:0},CSS_VARIABLE:{className:"attr",begin:/--[A-Za-z][A-Za-z0-9_-]*/}}),V=["a","abbr","address","article","aside","audio","b","blockquote","body","button","canvas","caption","cite","code","dd","del","details","dfn","div","dl","dt","em","fieldset","figcaption","figure","footer","form","h1","h2","h3","h4","h5","h6","header","hgroup","html","i","iframe","img","input","ins","kbd","label","legend","li","main","mark","menu","nav","object","ol","p","q","quote","samp","section","span","strong","summary","sup","table","tbody","td","textarea","tfoot","th","thead","time","tr","ul","var","video",],J=["any-hover","any-pointer","aspect-ratio","color","color-gamut","color-index","device-aspect-ratio","device-height","device-width","display-mode","forced-colors","grid","height","hover","inverted-colors","monochrome","orientation","overflow-block","overflow-inline","pointer","prefers-color-scheme","prefers-contrast","prefers-reduced-motion","prefers-reduced-transparency","resolution","scan","scripting","update","width","min-width","max-width","min-height","max-height",],Y=["active","any-link","blank","checked","current","default","defined","dir","disabled","drop","empty","enabled","first","first-child","first-of-type","fullscreen","future","focus","focus-visible","focus-within","has","host","host-context","hover","indeterminate","in-range","invalid","is","lang","last-child","last-of-type","left","link","local-link","not","nth-child","nth-col","nth-last-child","nth-last-col","nth-last-of-type","nth-of-type","only-child","only-of-type","optional","out-of-range","past","placeholder-shown","read-only","read-write","required","right","root","scope","target","target-within","user-invalid","valid","visited","where",],ee=["after","backdrop","before","cue","cue-region","first-letter","first-line","grammar-error","marker","part","placeholder","selection","slotted","spelling-error",],en=["align-content","align-items","align-self","all","animation","animation-delay","animation-direction","animation-duration","animation-fill-mode","animation-iteration-count","animation-name","animation-play-state","animation-timing-function","backface-visibility","background","background-attachment","background-blend-mode","background-clip","background-color","background-image","background-origin","background-position","background-repeat","background-size","block-size","border","border-block","border-block-color","border-block-end","border-block-end-color","border-block-end-style","border-block-end-width","border-block-start","border-block-start-color","border-block-start-style","border-block-start-width","border-block-style","border-block-width","border-bottom","border-bottom-color","border-bottom-left-radius","border-bottom-right-radius","border-bottom-style","border-bottom-width","border-collapse","border-color","border-image","border-image-outset","border-image-repeat","border-image-slice","border-image-source","border-image-width","border-inline","border-inline-color","border-inline-end","border-inline-end-color","border-in
line-end-style","border-inline-end-width","border-inline-start","border-inline-start-color","border-inline-start-style","border-inline-start-width","border-inline-style","border-inline-width","border-left","border-left-color","border-left-style","border-left-width","border-radius","border-right","border-right-color","border-right-style","border-right-width","border-spacing","border-style","border-top","border-top-color","border-top-left-radius","border-top-right-radius","border-top-style","border-top-width","border-width","bottom","box-decoration-break","box-shadow","box-sizing","break-after","break-before","break-inside","caption-side","caret-color","clear","clip","clip-path","clip-rule","color","column-count","column-fill","column-gap","column-rule","column-rule-color","column-rule-style","column-rule-width","column-span","column-width","columns","contain","content","content-visibility","counter-increment","counter-reset","cue","cue-after","cue-before","cursor","direction","display","empty-cells","filter","flex","flex-basis","flex-direction","flex-flow","flex-grow","flex-shrink","flex-wrap","float","flow","font","font-display","font-family","font-feature-settings","font-kerning","font-language-override","font-size","font-size-adjust","font-smoothing","font-stretch","font-style","font-synthesis","font-variant","font-variant-caps","font-variant-east-asian","font-variant-ligatures","font-variant-numeric","font-variant-position","font-variation-settings","font-weight","gap","glyph-orientation-vertical","grid","grid-area","grid-auto-columns","grid-auto-flow","grid-auto-rows","grid-column","grid-column-end","grid-column-start","grid-gap","grid-row","grid-row-end","grid-row-start","grid-template","grid-template-areas","grid-template-columns","grid-template-rows","hanging-punctuation","height","hyphens","icon","image-orientation","image-rendering","image-resolution","ime-mode","inline-size","isolation","justify-content","left","letter-spacing","line-break","line-height","list-style","list-style-image","list-style-position","list-style-type","margin","margin-block","margin-block-end","margin-block-start","margin-bottom","margin-inline","margin-inline-end","margin-inline-start","margin-left","margin-right","margin-top","marks","mask","mask-border","mask-border-mode","mask-border-outset","mask-border-repeat","mask-border-slice","mask-border-source","mask-border-width","mask-clip","mask-composite","mask-image","mask-mode","mask-origin","mask-position","mask-repeat","mask-size","mask-type","max-block-size","max-height","max-inline-size","max-width","min-block-size","min-height","min-inline-size","min-width","mix-blend-mode","nav-down","nav-index","nav-left","nav-right","nav-up","none","normal","object-fit","object-position","opacity","order","orphans","outline","outline-color","outline-offset","outline-style","outline-width","overflow","overflow-wrap","overflow-x","overflow-y","padding","padding-block","padding-block-end","padding-block-start","padding-bottom","padding-inline","padding-inline-end","padding-inline-start","padding-left","padding-right","padding-top","page-break-after","page-break-before","page-break-inside","pause","pause-after","pause-before","perspective","perspective-origin","pointer-events","position","quotes","resize","rest","rest-after","rest-before","right","row-gap","scroll-margin","scroll-margin-block","scroll-margin-block-end","scroll-margin-block-start","scroll-margin-bottom","scroll-margin-inline","scroll-margin-inline-end","scroll-margin-inline-start","scroll-margin-left","
scroll-margin-right","scroll-margin-top","scroll-padding","scroll-padding-block","scroll-padding-block-end","scroll-padding-block-start","scroll-padding-bottom","scroll-padding-inline","scroll-padding-inline-end","scroll-padding-inline-start","scroll-padding-left","scroll-padding-right","scroll-padding-top","scroll-snap-align","scroll-snap-stop","scroll-snap-type","scrollbar-color","scrollbar-gutter","scrollbar-width","shape-image-threshold","shape-margin","shape-outside","speak","speak-as","src","tab-size","table-layout","text-align","text-align-all","text-align-last","text-combine-upright","text-decoration","text-decoration-color","text-decoration-line","text-decoration-style","text-emphasis","text-emphasis-color","text-emphasis-position","text-emphasis-style","text-indent","text-justify","text-orientation","text-overflow","text-rendering","text-shadow","text-transform","text-underline-position","top","transform","transform-box","transform-origin","transform-style","transition","transition-delay","transition-duration","transition-property","transition-timing-function","unicode-bidi","vertical-align","visibility","voice-balance","voice-duration","voice-family","voice-pitch","voice-range","voice-rate","voice-stress","voice-volume","white-space","widows","width","will-change","word-break","word-spacing","word-wrap","writing-mode","z-index",].reverse(),et=Y.concat(ee);var ea="\\.([0-9](_*[0-9])*)",ei="[0-9a-fA-F](_*[0-9a-fA-F])*",er={className:"number",variants:[{begin:`(\\b([0-9](_*[0-9])*)((${ea})|\\.)?|(${ea}))[eE][+-]?([0-9](_*[0-9])*)[fFdD]?\\b`},{begin:`\\b([0-9](_*[0-9])*)((${ea})[fFdD]?\\b|\\.([fFdD]\\b)?)`},{begin:`(${ea})[fFdD]?\\b`},{begin:"\\b([0-9](_*[0-9])*)[fFdD]\\b"},{begin:`\\b0[xX]((${ei})\\.?|(${ei})?\\.(${ei}))[pP][+-]?([0-9](_*[0-9])*)[fFdD]?\\b`},{begin:"\\b(0|[1-9](_*[0-9])*)[lL]?\\b"},{begin:`\\b0[xX](${ei})[lL]?\\b`},{begin:"\\b0(_*[0-7])*[lL]?\\b"},{begin:"\\b0[bB][01](_*[01])*[lL]?\\b"},],relevance:0};let es="[A-Za-z$_][0-9A-Za-z$_]*",el=["as","in","of","if","for","while","finally","var","new","function","do","return","void","else","break","catch","instanceof","with","throw","case","default","try","switch","continue","typeof","delete","let","yield","const","class","debugger","async","await","static","import","from","export","extends",],eo=["true","false","null","undefined","NaN","Infinity"],ec=["Object","Function","Boolean","Symbol","Math","Date","Number","BigInt","String","RegExp","Array","Float32Array","Float64Array","Int8Array","Uint8Array","Uint8ClampedArray","Int16Array","Int32Array","Uint16Array","Uint32Array","BigInt64Array","BigUint64Array","Set","Map","WeakSet","WeakMap","ArrayBuffer","SharedArrayBuffer","Atomics","DataView","JSON","Promise","Generator","GeneratorFunction","AsyncFunction","Reflect","Proxy","Intl","WebAssembly",],ed=["Error","EvalError","InternalError","RangeError","ReferenceError","SyntaxError","TypeError","URIError",],eg=["setInterval","setTimeout","clearInterval","clearTimeout","require","exports","eval","isFinite","isNaN","parseFloat","parseInt","decodeURI","decodeURIComponent","encodeURI","encodeURIComponent","escape","unescape",],eu=["arguments","this","super","console","window","document","localStorage","module","global",],eb=[].concat(eg,ec,ed);function em(e){var n;let t=e.regex,a=es,i={begin:/<[A-Za-z0-9\\._:-]+/,end:/\/[A-Za-z0-9\\._:-]+>|\/>/,isTrulyOpeningTag(e,n){let t=e[0].length+e.index,a=e.input[t];if("<"===a||","===a)return void n.ignoreMatch();let i;">"===a&&(((e,{after:n})=>{let 
t="",v={match:[/const|var|let/,/\s+/,a,/\s*/,/=\s*/,/(async\s*)?/,t.lookahead(w),],keywords:"async",className:{1:"keyword",3:"title.function"},contains:[f]};return{name:"Javascript",aliases:["js","jsx","mjs","cjs"],keywords:r,exports:{PARAMS_CONTAINS:h,CLASS_REFERENCE:$},illegal:/#(?![$_A-z])/,contains:[e.SHEBANG({label:"shebang",binary:"node",relevance:5}),{label:"use_strict",className:"meta",relevance:10,begin:/^\s*['"]use (strict|asm)['"]/},e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,d,g,u,b,{match:/\$\d+/},o,$,{className:"attr",begin:a+t.lookahead(":"),relevance:0},v,{begin:"("+e.RE_STARTERS_RE+"|\\b(case|return|throw)\\b)\\s*",keywords:"return throw case",relevance:0,contains:[b,e.REGEXP_MODE,{className:"function",begin:w,returnBegin:!0,end:"\\s*=>",contains:[{className:"params",variants:[{begin:e.UNDERSCORE_IDENT_RE,relevance:0},{className:null,begin:/\(\s*\)/,skip:!0},{begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,keywords:r,contains:h},]},]},{begin:/,/,relevance:0},{match:/\s+/,relevance:0},{variants:[{begin:"<>",end:""},{match:/<[A-Za-z0-9\\._:-]+\s*\/>/},{begin:i.begin,"on:begin":i.isTrulyOpeningTag,end:i.end},],subLanguage:"xml",contains:[{begin:i.begin,end:i.end,skip:!0,contains:["self"]},]},]},{variants:[{match:[/function/,/\s+/,a,/(?=\s*\()/]},{match:[/function/,/\s*(?=\()/]},],className:{1:"keyword",3:"title.function"},label:"func.def",contains:[f],illegal:/%/},{beginKeywords:"while if switch catch for"},{begin:"\\b(?!function)"+e.UNDERSCORE_IDENT_RE+"\\([^()]*(\\([^()]*(\\([^()]*\\)[^()]*)*\\)[^()]*)*\\)\\s*\\{",returnBegin:!0,label:"func.def",contains:[f,e.inherit(e.TITLE_MODE,{begin:a,className:"title.function"}),]},{match:/\.\.\./,relevance:0},N,{match:"\\$"+a,relevance:0},{match:[/\bconstructor(?=\s*\()/],className:{1:"title.function"},contains:[f]},y,{relevance:0,match:/\b[A-Z][A-Z_0-9]+\b/,className:"variable.constant"},E,{match:[/get|set/,/\s+/,a,/(?=\()/],className:{1:"keyword",3:"title.function"},contains:[{begin:/\(\)/},f]},{match:/\$[(.]/},]}}let 
ep=e=>m(/\b/,e,/\w$/.test(e)?/\b/:/\B/),e8=["Protocol","Type"].map(ep),eh=["init","self"].map(ep),ef=["Any","Self"],eE=["actor","any","associatedtype","async","await",/as\?/,/as!/,"as","break","case","catch","class","continue","convenience","default","defer","deinit","didSet","distributed","do","dynamic","else","enum","extension","fallthrough",/fileprivate\(set\)/,"fileprivate","final","for","func","get","guard","if","import","indirect","infix",/init\?/,/init!/,"inout",/internal\(set\)/,"internal","in","is","isolated","nonisolated","lazy","let","mutating","nonmutating",/open\(set\)/,"open","operator","optional","override","postfix","precedencegroup","prefix",/private\(set\)/,"private","protocol",/public\(set\)/,"public","repeat","required","rethrows","return","set","some","static","struct","subscript","super","switch","throws","throw",/try\?/,/try!/,"try","typealias",/unowned\(safe\)/,/unowned\(unsafe\)/,"unowned","var","weak","where","while","willSet",],e$=["false","nil","true"],ey=["assignment","associativity","higherThan","left","lowerThan","none","right",],eN=["#colorLiteral","#column","#dsohandle","#else","#elseif","#endif","#error","#file","#fileID","#fileLiteral","#filePath","#function","#if","#imageLiteral","#keyPath","#line","#selector","#sourceLocation","#warn_unqualified_access","#warning",],ew=["abs","all","any","assert","assertionFailure","debugPrint","dump","fatalError","getVaList","isKnownUniquelyReferenced","max","min","numericCast","pointwiseMax","pointwiseMin","precondition","preconditionFailure","print","readLine","repeatElement","sequence","stride","swap","swift_unboxFromSwiftValueWithType","transcode","type","unsafeBitCast","unsafeDowncast","withExtendedLifetime","withUnsafeMutablePointer","withUnsafePointer","withVaList","withoutActuallyEscaping","zip",],ev=p(/[/=\-+!*%<>&|^~?]/,/[\u00A1-\u00A7]/,/[\u00A9\u00AB]/,/[\u00AC\u00AE]/,/[\u00B0\u00B1]/,/[\u00B6\u00BB\u00BF\u00D7\u00F7]/,/[\u2016-\u2017]/,/[\u2020-\u2027]/,/[\u2030-\u203E]/,/[\u2041-\u2053]/,/[\u2055-\u205E]/,/[\u2190-\u23FF]/,/[\u2500-\u2775]/,/[\u2794-\u2BFF]/,/[\u2E00-\u2E7F]/,/[\u3001-\u3003]/,/[\u3008-\u3020]/,/[\u3030]/),ex=p(ev,/[\u0300-\u036F]/,/[\u1DC0-\u1DFF]/,/[\u20D0-\u20FF]/,/[\uFE00-\uFE0F]/,/[\uFE20-\uFE2F]/),ek=m(ev,ex,"*"),eM=p(/[a-zA-Z_]/,/[\u00A8\u00AA\u00AD\u00AF\u00B2-\u00B5\u00B7-\u00BA]/,/[\u00BC-\u00BE\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u00FF]/,/[\u0100-\u02FF\u0370-\u167F\u1681-\u180D\u180F-\u1DBF]/,/[\u1E00-\u1FFF]/,/[\u200B-\u200D\u202A-\u202E\u203F-\u2040\u2054\u2060-\u206F]/,/[\u2070-\u20CF\u2100-\u218F\u2460-\u24FF\u2776-\u2793]/,/[\u2C00-\u2DFF\u2E80-\u2FFF]/,/[\u3004-\u3007\u3021-\u302F\u3031-\u303F\u3040-\uD7FF]/,/[\uF900-\uFD3D\uFD40-\uFDCF\uFDF0-\uFE1F\uFE30-\uFE44]/,/[\uFE47-\uFEFE\uFF00-\uFFFD]/),eO=p(eM,/\d/,/[\u0300-\u036F\u1DC0-\u1DFF\u20D0-\u20FF\uFE20-\uFE2F]/),eS=m(eM,eO,"*"),eA=m(/[A-Z]/,eO,"*"),eC=["autoclosure",m(/convention\(/,p("swift","block","c"),/\)/),"discardableResult","dynamicCallable","dynamicMemberLookup","escaping","frozen","GKInspectable","IBAction","IBDesignable","IBInspectable","IBOutlet","IBSegueAction","inlinable","main","nonobjc","NSApplicationMain","NSCopying","NSManaged",m(/objc\(/,eS,/\)/),"objc","objcMembers","propertyWrapper","requires_stored_property_inits","resultBuilder","testable","UIApplicationMain","unknown","usableFromInline",],eT=["iOS","iOSApplicationExtension","macOS","macOSApplicationExtension","macCatalyst","macCatalystApplicationExtension","watchOS","watchOSApplicationExtension","tvOS","tvOSApplicationExtension","swift",];var 
eR=Object.freeze({__proto__:null,grmr_bash(e){let n=e.regex,t={};Object.assign(t,{className:"variable",variants:[{begin:n.concat(/\$[\w\d#@][\w\d_]*/,"(?![\\w\\d])(?![$])")},{begin:/\$\{/,end:/\}/,contains:["self",{begin:/:-/,contains:[t]}]},]});let a={className:"subst",begin:/\$\(/,end:/\)/,contains:[e.BACKSLASH_ESCAPE]},i={begin:/<<-?\s*(?=\w+)/,starts:{contains:[e.END_SAME_AS_BEGIN({begin:/(\w+)/,end:/(\w+)/,className:"string"}),]}},r={className:"string",begin:/"/,end:/"/,contains:[e.BACKSLASH_ESCAPE,t,a]};a.contains.push(r);let s={begin:/\$?\(\(/,end:/\)\)/,contains:[{begin:/\d+#[0-9a-f]+/,className:"number"},e.NUMBER_MODE,t,]},l=e.SHEBANG({binary:"(fish|bash|zsh|sh|csh|ksh|tcsh|dash|scsh)",relevance:10}),o={className:"function",begin:/\w[\w\d_]*\s*\(\s*\)\s*\{/,returnBegin:!0,contains:[e.inherit(e.TITLE_MODE,{begin:/\w[\w\d_]*/})],relevance:0};return{name:"Bash",aliases:["sh"],keywords:{$pattern:/\b[a-z][a-z0-9._-]+\b/,keyword:["if","then","else","elif","fi","for","while","in","do","done","case","esac","function",],literal:["true","false"],built_in:["break","cd","continue","eval","exec","exit","export","getopts","hash","pwd","readonly","return","shift","test","times","trap","umask","unset","alias","bind","builtin","caller","command","declare","echo","enable","help","let","local","logout","mapfile","printf","read","readarray","source","type","typeset","ulimit","unalias","set","shopt","autoload","bg","bindkey","bye","cap","chdir","clone","comparguments","compcall","compctl","compdescribe","compfiles","compgroups","compquote","comptags","comptry","compvalues","dirs","disable","disown","echotc","echoti","emulate","fc","fg","float","functions","getcap","getln","history","integer","jobs","kill","limit","log","noglob","popd","print","pushd","pushln","rehash","sched","setcap","setopt","stat","suspend","ttyctl","unfunction","unhash","unlimit","unsetopt","vared","wait","whence","where","which","zcompile","zformat","zftp","zle","zmodload","zparseopts","zprof","zpty","zregexparse","zsocket","zstyle","ztcp","chcon","chgrp","chown","chmod","cp","dd","df","dir","dircolors","ln","ls","mkdir","mkfifo","mknod","mktemp","mv","realpath","rm","rmdir","shred","sync","touch","truncate","vdir","b2sum","base32","base64","cat","cksum","comm","csplit","cut","expand","fmt","fold","head","join","md5sum","nl","numfmt","od","paste","ptx","pr","sha1sum","sha224sum","sha256sum","sha384sum","sha512sum","shuf","sort","split","sum","tac","tail","tr","tsort","unexpand","uniq","wc","arch","basename","chroot","date","dirname","du","echo","env","expr","factor","groups","hostid","id","link","logname","nice","nohup","nproc","pathchk","pinky","printenv","printf","pwd","readlink","runcon","seq","sleep","stat","stdbuf","stty","tee","test","timeout","tty","uname","unlink","uptime","users","who","whoami","yes",]},contains:[l,e.SHEBANG(),o,s,e.HASH_COMMENT_MODE,i,{match:/(\/[a-z._-]+)+/},r,{className:"",begin:/\\"/},{className:"string",begin:/'/,end:/'/},t,]}},grmr_c(e){let n=e.regex,t=e.COMMENT("//","$",{contains:[{begin:/\\\n/}]}),a="[a-zA-Z_]\\w*::",i="(decltype\\(auto\\)|"+n.optional(a)+"[a-zA-Z_]\\w*"+n.optional("<[^<>]+>")+")",r={className:"type",variants:[{begin:"\\b[a-z\\d_]*_t\\b"},{match:/\batomic_[a-z]{3,6}\b/},]},s={className:"string",variants:[{begin:'(u8?|U|L)?"',end:'"',illegal:"\\n",contains:[e.BACKSLASH_ESCAPE]},{begin:"(u8?|U|L)?'(\\\\(x[0-9A-Fa-f]{2}|u[0-9A-Fa-f]{4,8}|[0-7]{3}|\\S)|.)",end:"'",illegal:"."},e.END_SAME_AS_BEGIN({begin:/(?:u8?|U|L)?R"([^()\\ ]{0,16})\(/,end:/\)([^()\\ 
]{0,16})"/}),]},l={className:"number",variants:[{begin:"\\b(0b[01']+)"},{begin:"(-?)\\b([\\d']+(\\.[\\d']*)?|\\.[\\d']+)((ll|LL|l|L)(u|U)?|(u|U)(ll|LL|l|L)?|f|F|b|B)"},{begin:"(-?)(\\b0[xX][a-fA-F0-9']+|(\\b[\\d']+(\\.[\\d']*)?|\\.[\\d']+)([eE][-+]?[\\d']+)?)"},],relevance:0},o={className:"meta",begin:/#\s*[a-z]+\b/,end:/$/,keywords:{keyword:"if else elif endif define undef warning error line pragma _Pragma ifdef ifndef include"},contains:[{begin:/\\\n/,relevance:0},e.inherit(s,{className:"string"}),{className:"string",begin:/<.*?>/},t,e.C_BLOCK_COMMENT_MODE,]},c={className:"title",begin:n.optional(a)+e.IDENT_RE,relevance:0},d=n.optional(a)+e.IDENT_RE+"\\s*\\(",g={keyword:["asm","auto","break","case","continue","default","do","else","enum","extern","for","fortran","goto","if","inline","register","restrict","return","sizeof","struct","switch","typedef","union","volatile","while","_Alignas","_Alignof","_Atomic","_Generic","_Noreturn","_Static_assert","_Thread_local","alignas","alignof","noreturn","static_assert","thread_local","_Pragma",],type:["float","double","signed","unsigned","int","short","long","char","void","_Bool","_Complex","_Imaginary","_Decimal32","_Decimal64","_Decimal128","const","static","complex","bool","imaginary",],literal:"true false NULL",built_in:"std string wstring cin cout cerr clog stdin stdout stderr stringstream istringstream ostringstream auto_ptr deque list queue stack vector map set pair bitset multiset multimap unordered_set unordered_map unordered_multiset unordered_multimap priority_queue make_pair array shared_ptr abort terminate abs acos asin atan2 atan calloc ceil cosh cos exit exp fabs floor fmod fprintf fputs free frexp fscanf future isalnum isalpha iscntrl isdigit isgraph islower isprint ispunct isspace isupper isxdigit tolower toupper labs ldexp log10 log malloc realloc memchr memcmp memcpy memset modf pow printf putchar puts scanf sinh sin snprintf sprintf sqrt sscanf strcat strchr strcmp strcpy strcspn strlen strncat strncmp strncpy strpbrk strrchr strspn strstr tanh tan vfprintf vprintf vsprintf endl initializer_list unique_ptr"},u=[o,r,t,e.C_BLOCK_COMMENT_MODE,l,s],b={variants:[{begin:/=/,end:/;/},{begin:/\(/,end:/\)/},{beginKeywords:"new throw return else",end:/;/},],keywords:g,contains:u.concat([{begin:/\(/,end:/\)/,keywords:g,contains:u.concat(["self"]),relevance:0},]),relevance:0},m={begin:"("+i+"[\\*&\\s]+)+"+d,returnBegin:!0,end:/[{;=]/,excludeEnd:!0,keywords:g,illegal:/[^\w\s\*&:<>.]/,contains:[{begin:"decltype\\(auto\\)",keywords:g,relevance:0},{begin:d,returnBegin:!0,contains:[e.inherit(c,{className:"title.function"}),],relevance:0},{relevance:0,match:/,/},{className:"params",begin:/\(/,end:/\)/,keywords:g,relevance:0,contains:[t,e.C_BLOCK_COMMENT_MODE,s,l,r,{begin:/\(/,end:/\)/,keywords:g,relevance:0,contains:["self",t,e.C_BLOCK_COMMENT_MODE,s,l,r]},]},r,t,e.C_BLOCK_COMMENT_MODE,o,]};return{name:"C",aliases:["h"],keywords:g,disableAutodetect:!0,illegal:"=]/,contains:[{beginKeywords:"final class struct"},e.TITLE_MODE,]},]),exports:{preprocessor:o,strings:s,keywords:g}}},grmr_cpp(e){let 
n=e.regex,t=e.COMMENT("//","$",{contains:[{begin:/\\\n/}]}),a="[a-zA-Z_]\\w*::",i="(?!struct)(decltype\\(auto\\)|"+n.optional(a)+"[a-zA-Z_]\\w*"+n.optional("<[^<>]+>")+")",r={className:"type",begin:"\\b[a-z\\d_]*_t\\b"},s={className:"string",variants:[{begin:'(u8?|U|L)?"',end:'"',illegal:"\\n",contains:[e.BACKSLASH_ESCAPE]},{begin:"(u8?|U|L)?'(\\\\(x[0-9A-Fa-f]{2}|u[0-9A-Fa-f]{4,8}|[0-7]{3}|\\S)|.)",end:"'",illegal:"."},e.END_SAME_AS_BEGIN({begin:/(?:u8?|U|L)?R"([^()\\ ]{0,16})\(/,end:/\)([^()\\ ]{0,16})"/}),]},l={className:"number",variants:[{begin:"\\b(0b[01']+)"},{begin:"(-?)\\b([\\d']+(\\.[\\d']*)?|\\.[\\d']+)((ll|LL|l|L)(u|U)?|(u|U)(ll|LL|l|L)?|f|F|b|B)"},{begin:"(-?)(\\b0[xX][a-fA-F0-9']+|(\\b[\\d']+(\\.[\\d']*)?|\\.[\\d']+)([eE][-+]?[\\d']+)?)"},],relevance:0},o={className:"meta",begin:/#\s*[a-z]+\b/,end:/$/,keywords:{keyword:"if else elif endif define undef warning error line pragma _Pragma ifdef ifndef include"},contains:[{begin:/\\\n/,relevance:0},e.inherit(s,{className:"string"}),{className:"string",begin:/<.*?>/},t,e.C_BLOCK_COMMENT_MODE,]},c={className:"title",begin:n.optional(a)+e.IDENT_RE,relevance:0},d=n.optional(a)+e.IDENT_RE+"\\s*\\(",g={type:["bool","char","char16_t","char32_t","char8_t","double","float","int","long","short","void","wchar_t","unsigned","signed","const","static",],keyword:["alignas","alignof","and","and_eq","asm","atomic_cancel","atomic_commit","atomic_noexcept","auto","bitand","bitor","break","case","catch","class","co_await","co_return","co_yield","compl","concept","const_cast|10","consteval","constexpr","constinit","continue","decltype","default","delete","do","dynamic_cast|10","else","enum","explicit","export","extern","false","final","for","friend","goto","if","import","inline","module","mutable","namespace","new","noexcept","not","not_eq","nullptr","operator","or","or_eq","override","private","protected","public","reflexpr","register","reinterpret_cast|10","requires","return","sizeof","static_assert","static_cast|10","struct","switch","synchronized","template","this","thread_local","throw","transaction_safe","transaction_safe_dynamic","true","try","typedef","typeid","typename","union","using","virtual","volatile","while","xor","xor_eq",],literal:["NULL","false","nullopt","nullptr","true"],built_in:["_Pragma"],_type_hints:["any","auto_ptr","barrier","binary_semaphore","bitset","complex","condition_variable","condition_variable_any","counting_semaphore","deque","false_type","future","imaginary","initializer_list","istringstream","jthread","latch","lock_guard","multimap","multiset","mutex","optional","ostringstream","packaged_task","pair","promise","priority_queue","queue","recursive_mutex","recursive_timed_mutex","scoped_lock","set","shared_future","shared_lock","shared_mutex","shared_timed_mutex","shared_ptr","stack","string_view","stringstream","timed_mutex","thread","true_type","tuple","unique_lock","unique_ptr","unordered_map","unordered_multimap","unordered_multiset","unordered_set","variant","vector","weak_ptr","wstring","wstring_view",]},u={className:"function.dispatch",relevance:0,keywords:{_hint:["abort","abs","acos","apply","as_const","asin","atan","atan2","calloc","ceil","cerr","cin","clog","cos","cosh","cout","declval","endl","exchange","exit","exp","fabs","floor","fmod","forward","fprintf","fputs","free","frexp","fscanf","future","invoke","isalnum","isalpha","iscntrl","isdigit","isgraph","islower","isprint","ispunct","isspace","isupper","isxdigit","labs","launder","ldexp","log","log10","make_pair","make_shared","make_shared_for_overwrite",
"make_tuple","make_unique","malloc","memchr","memcmp","memcpy","memset","modf","move","pow","printf","putchar","puts","realloc","scanf","sin","sinh","snprintf","sprintf","sqrt","sscanf","std","stderr","stdin","stdout","strcat","strchr","strcmp","strcpy","strcspn","strlen","strncat","strncmp","strncpy","strpbrk","strrchr","strspn","strstr","swap","tan","tanh","terminate","to_underlying","tolower","toupper","vfprintf","visit","vprintf","vsprintf",]},begin:n.concat(/\b/,/(?!decltype)/,/(?!if)/,/(?!for)/,/(?!switch)/,/(?!while)/,e.IDENT_RE,n.lookahead(/(<[^<>]+>|)\s*\(/))},b=[u,o,r,t,e.C_BLOCK_COMMENT_MODE,l,s],m={variants:[{begin:/=/,end:/;/},{begin:/\(/,end:/\)/},{beginKeywords:"new throw return else",end:/;/},],keywords:g,contains:b.concat([{begin:/\(/,end:/\)/,keywords:g,contains:b.concat(["self"]),relevance:0},]),relevance:0},p={className:"function",begin:"("+i+"[\\*&\\s]+)+"+d,returnBegin:!0,end:/[{;=]/,excludeEnd:!0,keywords:g,illegal:/[^\w\s\*&:<>.]/,contains:[{begin:"decltype\\(auto\\)",keywords:g,relevance:0},{begin:d,returnBegin:!0,contains:[c],relevance:0},{begin:/::/,relevance:0},{begin:/:/,endsWithParent:!0,contains:[s,l]},{relevance:0,match:/,/},{className:"params",begin:/\(/,end:/\)/,keywords:g,relevance:0,contains:[t,e.C_BLOCK_COMMENT_MODE,s,l,r,{begin:/\(/,end:/\)/,keywords:g,relevance:0,contains:["self",t,e.C_BLOCK_COMMENT_MODE,s,l,r]},]},r,t,e.C_BLOCK_COMMENT_MODE,o,]};return{name:"C++",aliases:["cc","c++","h++","hpp","hh","hxx","cxx"],keywords:g,illegal:"",keywords:g,contains:["self",r]},{begin:e.IDENT_RE+"::",keywords:g},{match:[/\b(?:enum(?:\s+(?:class|struct))?|class|struct|union)/,/\s+/,/\w+/,],className:{1:"keyword",3:"title.class"}},])}},grmr_csharp(e){let n={keyword:["abstract","as","base","break","case","catch","class","const","continue","do","else","event","explicit","extern","finally","fixed","for","foreach","goto","if","implicit","in","interface","internal","is","lock","namespace","new","operator","out","override","params","private","protected","public","readonly","record","ref","return","scoped","sealed","sizeof","stackalloc","static","struct","switch","this","throw","try","typeof","unchecked","unsafe","using","virtual","void","volatile","while",].concat(["add","alias","and","ascending","async","await","by","descending","equals","from","get","global","group","init","into","join","let","nameof","not","notnull","on","or","orderby","partial","remove","select","set","unmanaged","value|0","var","when","where","with","yield",]),built_in:["bool","byte","char","decimal","delegate","double","dynamic","enum","float","int","long","nint","nuint","object","sbyte","short","string","ulong","uint","ushort",],literal:["default","false","null","true"]},t=e.inherit(e.TITLE_MODE,{begin:"[a-zA-Z](\\.?\\w)*"}),a={className:"number",variants:[{begin:"\\b(0b[01']+)"},{begin:"(-?)\\b([\\d']+(\\.[\\d']*)?|\\.[\\d']+)(u|U|l|L|ul|UL|f|F|b|B)"},{begin:"(-?)(\\b0[xX][a-fA-F0-9']+|(\\b[\\d']+(\\.[\\d']*)?|\\.[\\d']+)([eE][-+]?[\\d']+)?)"},],relevance:0},i={className:"string",begin:'@"',end:'"',contains:[{begin:'""'}]},r=e.inherit(i,{illegal:/\n/}),s={className:"subst",begin:/\{/,end:/\}/,keywords:n},l=e.inherit(s,{illegal:/\n/}),o={className:"string",begin:/\$"/,end:'"',illegal:/\n/,contains:[{begin:/\{\{/},{begin:/\}\}/},e.BACKSLASH_ESCAPE,l,]},c={className:"string",begin:/\$@"/,end:'"',contains:[{begin:/\{\{/},{begin:/\}\}/},{begin:'""'},s,]},d=e.inherit(c,{illegal:/\n/,contains:[{begin:/\{\{/},{begin:/\}\}/},{begin:'""'},l]});s.contains=[c,o,i,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,a,e.C_B
LOCK_COMMENT_MODE,],l.contains=[d,o,r,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,a,e.inherit(e.C_BLOCK_COMMENT_MODE,{illegal:/\n/}),];let g={variants:[c,o,i,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE]},u={begin:"<",end:">",contains:[{beginKeywords:"in out"},t]},b=e.IDENT_RE+"(<"+e.IDENT_RE+"(\\s*,\\s*"+e.IDENT_RE+")*>)?(\\[\\])?",m={begin:"@"+e.IDENT_RE,relevance:0};return{name:"C#",aliases:["cs","c#"],keywords:n,illegal:/::/,contains:[e.COMMENT("///","$",{returnBegin:!0,contains:[{className:"doctag",variants:[{begin:"///",relevance:0},{begin:""},{begin:""},]},]}),e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,{className:"meta",begin:"#",end:"$",keywords:{keyword:"if else elif endif define undef warning error line region endregion pragma checksum"}},g,a,{beginKeywords:"class interface",relevance:0,end:/[{;=]/,illegal:/[^\s:,]/,contains:[{beginKeywords:"where class"},t,u,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,]},{beginKeywords:"namespace",relevance:0,end:/[{;=]/,illegal:/[^\s:]/,contains:[t,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},{beginKeywords:"record",relevance:0,end:/[{;=]/,illegal:/[^\s:]/,contains:[t,u,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},{className:"meta",begin:"^\\s*\\[(?=[\\w])",excludeBegin:!0,end:"\\]",excludeEnd:!0,contains:[{className:"string",begin:/"/,end:/"/},]},{beginKeywords:"new return throw await else",relevance:0},{className:"function",begin:"("+b+"\\s+)+"+e.IDENT_RE+"\\s*(<[^=]+>\\s*)?\\(",returnBegin:!0,end:/\s*[{;=]/,excludeEnd:!0,keywords:n,contains:[{beginKeywords:"public private protected static internal protected abstract async extern override unsafe virtual new sealed partial",relevance:0},{begin:e.IDENT_RE+"\\s*(<[^=]+>\\s*)?\\(",returnBegin:!0,contains:[e.TITLE_MODE,u],relevance:0},{match:/\(\)/},{className:"params",begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,keywords:n,relevance:0,contains:[g,a,e.C_BLOCK_COMMENT_MODE]},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,]},m,]}},grmr_css(e){let n=e.regex,t=X(e),a=[e.APOS_STRING_MODE,e.QUOTE_STRING_MODE];return{name:"CSS",case_insensitive:!0,illegal:/[=|'\$]/,keywords:{keyframePosition:"from to"},classNameAliases:{keyframePosition:"selector-tag"},contains:[t.BLOCK_COMMENT,{begin:/-(webkit|moz|ms|o)-(?=[a-z])/},t.CSS_NUMBER_MODE,{className:"selector-id",begin:/#[A-Za-z0-9_-]+/,relevance:0},{className:"selector-class",begin:"\\.[a-zA-Z-][a-zA-Z0-9_-]*",relevance:0},t.ATTRIBUTE_SELECTOR_MODE,{className:"selector-pseudo",variants:[{begin:":("+Y.join("|")+")"},{begin:":(:)?("+ee.join("|")+")"},]},t.CSS_VARIABLE,{className:"attribute",begin:"\\b("+en.join("|")+")\\b"},{begin:/:/,end:/[;}{]/,contains:[t.BLOCK_COMMENT,t.HEXCOLOR,t.IMPORTANT,t.CSS_NUMBER_MODE,...a,{begin:/(url|data-uri)\(/,end:/\)/,relevance:0,keywords:{built_in:"url data-uri"},contains:[...a,{className:"string",begin:/[^)]/,endsWithParent:!0,excludeEnd:!0},]},t.FUNCTION_DISPATCH,]},{begin:n.lookahead(/@/),end:"[{;]",relevance:0,illegal:/:/,contains:[{className:"keyword",begin:/@-?\w[\w]*(-\w+)*/},{begin:/\s/,endsWithParent:!0,excludeEnd:!0,relevance:0,keywords:{$pattern:/[a-z-]+/,keyword:"and or not only",attribute:J.join(" ")},contains:[{begin:/[a-z-]+(?=:)/,className:"attribute"},...a,t.CSS_NUMBER_MODE,]},]},{className:"selector-tag",begin:"\\b("+V.join("|")+")\\b"},]}},grmr_diff(e){let n=e.regex;return{name:"Diff",aliases:["patch"],contains:[{className:"meta",relevance:10,match:n.either(/^@@ +-\d+,\d+ +\+\d+,\d+ +@@/,/^\*\*\* +\d+,\d+ +\*\*\*\*$/,/^--- +\d+,\d+ +----$/)},{className:"comment",variants:[{begin:n.either(/Index: 
/,/^index/,/={3,}/,/^-{3}/,/^\*{3} /,/^\+{3}/,/^diff --git/),end:/$/},{match:/^\*{15}$/},]},{className:"addition",begin:/^\+/,end:/$/},{className:"deletion",begin:/^-/,end:/$/},{className:"addition",begin:/^!/,end:/$/},]}},grmr_go(e){let n={keyword:["break","case","chan","const","continue","default","defer","else","fallthrough","for","func","go","goto","if","import","interface","map","package","range","return","select","struct","switch","type","var",],type:["bool","byte","complex64","complex128","error","float32","float64","int8","int16","int32","int64","string","uint8","uint16","uint32","uint64","int","uint","uintptr","rune",],literal:["true","false","iota","nil"],built_in:["append","cap","close","complex","copy","imag","len","make","new","panic","print","println","real","recover","delete",]};return{name:"Go",aliases:["golang"],keywords:n,illegal:"e(n,t,a-1))}("(?:<"+t+"~~~(?:\\s*,\\s*"+t+"~~~)*>)?",/~~~/g,2),i={keyword:["synchronized","abstract","private","var","static","if","const ","for","while","strictfp","finally","protected","import","native","final","void","enum","else","break","transient","catch","instanceof","volatile","case","assert","package","default","public","try","switch","continue","throws","protected","public","private","module","requires","exports","do","sealed","yield","permits",],literal:["false","true","null"],type:["char","boolean","long","float","int","byte","short","double",],built_in:["super","this"]},r={className:"meta",begin:"@"+t,contains:[{begin:/\(/,end:/\)/,contains:["self"]},]},s={className:"params",begin:/\(/,end:/\)/,keywords:i,relevance:0,contains:[e.C_BLOCK_COMMENT_MODE],endsParent:!0};return{name:"Java",aliases:["jsp"],keywords:i,illegal:/<\/|#/,contains:[e.COMMENT("/\\*\\*","\\*/",{relevance:0,contains:[{begin:/\w+@/,relevance:0},{className:"doctag",begin:"@[A-Za-z]+"},]}),{begin:/import java\.[a-z]+\./,keywords:"import",relevance:2},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,{begin:/"""/,end:/"""/,className:"string",contains:[e.BACKSLASH_ESCAPE]},e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,{match:[/\b(?:class|interface|enum|extends|implements|new)/,/\s+/,t,],className:{1:"keyword",3:"title.class"}},{match:/non-sealed/,scope:"keyword"},{begin:[n.concat(/(?!else)/,t),/\s+/,t,/\s+/,/=(?!=)/],className:{1:"type",3:"variable",5:"operator"}},{begin:[/record/,/\s+/,t],className:{1:"keyword",3:"title.class"},contains:[s,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE]},{beginKeywords:"new throw return else",relevance:0},{begin:["(?:"+a+"\\s+)",e.UNDERSCORE_IDENT_RE,/\s*(?=\()/],className:{2:"title.function"},keywords:i,contains:[{className:"params",begin:/\(/,end:/\)/,keywords:i,relevance:0,contains:[r,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,er,e.C_BLOCK_COMMENT_MODE,]},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,]},er,r,]}},grmr_javascript:em,grmr_json(e){let n=["true","false","null"],t={scope:"literal",beginKeywords:n.join(" ")};return{name:"JSON",keywords:{literal:n},contains:[{className:"attr",begin:/"(\\.|[^\\"\r\n])*"(?=\s*:)/,relevance:1.01},{match:/[{}[\],:]/,className:"punctuation",relevance:0},e.QUOTE_STRING_MODE,t,e.C_NUMBER_MODE,e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,],illegal:"\\S"}},grmr_kotlin(e){let n={keyword:"abstract as val var vararg get set class object open private protected public noinline crossinline dynamic final enum if else do while for when throw try catch finally import package is in fun override companion reified inline lateinit init interface annotation data sealed internal infix operator out by constructor super tailrec where 
const inner suspend typealias external expect actual",built_in:"Byte Short Char Int Long Boolean Float Double Void Unit Nothing",literal:"true false null"},t={className:"symbol",begin:e.UNDERSCORE_IDENT_RE+"@"},a={className:"subst",begin:/\$\{/,end:/\}/,contains:[e.C_NUMBER_MODE]},i={className:"variable",begin:"\\$"+e.UNDERSCORE_IDENT_RE},r={className:"string",variants:[{begin:'"""',end:'"""(?=[^"])',contains:[i,a]},{begin:"'",end:"'",illegal:/\n/,contains:[e.BACKSLASH_ESCAPE]},{begin:'"',end:'"',illegal:/\n/,contains:[e.BACKSLASH_ESCAPE,i,a]},]};a.contains.push(r);let s={className:"meta",begin:"@(?:file|property|field|get|set|receiver|param|setparam|delegate)\\s*:(?:\\s*"+e.UNDERSCORE_IDENT_RE+")?"},l={className:"meta",begin:"@"+e.UNDERSCORE_IDENT_RE,contains:[{begin:/\(/,end:/\)/,contains:[e.inherit(r,{className:"string"}),"self"]},]},o=e.COMMENT("/\\*","\\*/",{contains:[e.C_BLOCK_COMMENT_MODE]}),c={variants:[{className:"type",begin:e.UNDERSCORE_IDENT_RE},{begin:/\(/,end:/\)/,contains:[]},]},d=c;return d.variants[1].contains=[c],c.variants[1].contains=[d],{name:"Kotlin",aliases:["kt","kts"],keywords:n,contains:[e.COMMENT("/\\*\\*","\\*/",{relevance:0,contains:[{className:"doctag",begin:"@[A-Za-z]+"}]}),e.C_LINE_COMMENT_MODE,o,{className:"keyword",begin:/\b(break|continue|return|this)\b/,starts:{contains:[{className:"symbol",begin:/@\w+/}]}},t,s,l,{className:"function",beginKeywords:"fun",end:"[(]|$",returnBegin:!0,excludeEnd:!0,keywords:n,relevance:5,contains:[{begin:e.UNDERSCORE_IDENT_RE+"\\s*\\(",returnBegin:!0,relevance:0,contains:[e.UNDERSCORE_TITLE_MODE]},{className:"type",begin://,keywords:"reified",relevance:0},{className:"params",begin:/\(/,end:/\)/,endsParent:!0,keywords:n,relevance:0,contains:[{begin:/:/,end:/[=,\/]/,endsWithParent:!0,contains:[c,e.C_LINE_COMMENT_MODE,o],relevance:0},e.C_LINE_COMMENT_MODE,o,s,l,r,e.C_NUMBER_MODE,]},o,]},{begin:[/class|interface|trait/,/\s+/,e.UNDERSCORE_IDENT_RE],beginScope:{3:"title.class"},keywords:"class interface trait",end:/[:\{(]|$/,excludeEnd:!0,illegal:"extends implements",contains:[{beginKeywords:"public protected internal private constructor"},e.UNDERSCORE_TITLE_MODE,{className:"type",begin://,excludeBegin:!0,excludeEnd:!0,relevance:0},{className:"type",begin:/[,:]\s*/,end:/[<\(,){\s]|$/,excludeBegin:!0,returnEnd:!0},s,l,]},r,{className:"meta",begin:"^#!/usr/bin/env",end:"$",illegal:"\n"},er,]}},grmr_less(e){let n=X(e),t="([\\w-]+|@\\{[\\w-]+\\})",a=[],i=[],r=e=>({className:"string",begin:"~?"+e+".*?"+e}),s=(e,n,t)=>({className:e,begin:n,relevance:t}),l={$pattern:/[a-z-]+/,keyword:"and or not only",attribute:J.join(" ")};i.push(e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,r("'"),r('"'),n.CSS_NUMBER_MODE,{begin:"(url|data-uri)\\(",starts:{className:"string",end:"[\\)\\n]",excludeEnd:!0}},n.HEXCOLOR,{begin:"\\(",end:"\\)",contains:i,keywords:l,relevance:0},s("variable","@@?[\\w-]+",10),s("variable","@\\{[\\w-]+\\}"),s("built_in","~?`[^`]*?`"),{className:"attribute",begin:"[\\w-]+\\s*:",end:":",returnBegin:!0,excludeEnd:!0},n.IMPORTANT,{beginKeywords:"and not"},n.FUNCTION_DISPATCH);let o=i.concat({begin:/\{/,end:/\}/,contains:a}),c={beginKeywords:"when",endsWithParent:!0,contains:[{beginKeywords:"and 
not"}].concat(i)},d={begin:t+"\\s*:",returnBegin:!0,end:/[;}]/,relevance:0,contains:[{begin:/-(webkit|moz|ms|o)-/},n.CSS_VARIABLE,{className:"attribute",begin:"\\b("+en.join("|")+")\\b",end:/(?=:)/,starts:{endsWithParent:!0,illegal:"[<=$]",relevance:0,contains:i}},]},g={variants:[{begin:"[\\.#:&\\[>]",end:"[;{}]"},{begin:t,end:/\{/},],returnBegin:!0,returnEnd:!0,illegal:"[<='$\"]",relevance:0,contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,c,s("keyword","all\\b"),s("variable","@\\{[\\w-]+\\}"),{begin:"\\b("+V.join("|")+")\\b",className:"selector-tag"},n.CSS_NUMBER_MODE,s("selector-tag",t,0),s("selector-id","#"+t),s("selector-class","\\."+t,0),s("selector-tag","&",0),n.ATTRIBUTE_SELECTOR_MODE,{className:"selector-pseudo",begin:":("+Y.join("|")+")"},{className:"selector-pseudo",begin:":(:)?("+ee.join("|")+")"},{begin:/\(/,end:/\)/,relevance:0,contains:o},{begin:"!important"},n.FUNCTION_DISPATCH,]},u={begin:`[\\w-]+:(:)?(${et.join("|")})`,returnBegin:!0,contains:[g]};return a.push(e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,{className:"keyword",begin:"@(import|media|charset|font-face|(-[a-z]+-)?keyframes|supports|document|namespace|page|viewport|host)\\b",starts:{end:"[;{}]",keywords:l,returnEnd:!0,contains:i,relevance:0}},{className:"variable",variants:[{begin:"@[\\w-]+\\s*:",relevance:15},{begin:"@[\\w-]+"},],starts:{end:"[;}]",returnEnd:!0,contains:o}},u,d,g,c,n.FUNCTION_DISPATCH),{name:"Less",case_insensitive:!0,illegal:"[=>'/<($\"]",contains:a}},grmr_lua(e){let n="\\[=*\\[",t="\\]=*\\]",a={begin:n,end:t,contains:["self"]},i=[e.COMMENT("--(?!\\[=*\\[)","$"),e.COMMENT("--\\[=*\\[",t,{contains:[a],relevance:10}),];return{name:"Lua",keywords:{$pattern:e.UNDERSCORE_IDENT_RE,literal:"true false nil",keyword:"and break do else elseif end for goto if in local not or repeat return then until while",built_in:"_G _ENV _VERSION __index __newindex __mode __call __metatable __tostring __len __gc __add __sub __mul __div __mod __pow __concat __unm __eq __lt __le assert collectgarbage dofile error getfenv getmetatable ipairs load loadfile loadstring module next pairs pcall print rawequal rawget rawset require select setfenv setmetatable tonumber tostring type unpack xpcall arg self coroutine resume yield status wrap create running debug getupvalue debug sethook getmetatable gethook setmetatable setlocal traceback setfenv getinfo setupvalue getlocal getregistry getfenv io lines write close flush open output type read stderr stdin input stdout popen tmpfile math log max acos huge ldexp pi cos tanh pow deg tan cosh sinh random randomseed frexp ceil floor rad abs sqrt modf asin min mod fmod log10 atan2 exp sin atan os exit setlocale date getenv difftime remove time clock tmpname rename execute package preload loadlib loaded loaders cpath config path seeall string sub upper len gfind rep find match char dump gmatch reverse byte format gsub lower table setn insert getn foreachi maxn foreach concat sort remove"},contains:i.concat([{className:"function",beginKeywords:"function",end:"\\)",contains:[e.inherit(e.TITLE_MODE,{begin:"([_a-zA-Z]\\w*\\.)*([_a-zA-Z]\\w*:)?[_a-zA-Z]\\w*"}),{className:"params",begin:"\\(",endsWithParent:!0,contains:i},].concat(i)},e.C_NUMBER_MODE,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,{className:"string",begin:n,end:t,contains:[a],relevance:5},])}},grmr_makefile(e){let n={className:"variable",variants:[{begin:"\\$\\("+e.UNDERSCORE_IDENT_RE+"\\)",contains:[e.BACKSLASH_ESCAPE]},{begin:/\$[@%`]+/},]},]},]};return{name:"HTML, 
XML",aliases:["html","xhtml","rss","atom","xjb","xsd","xsl","plist","wsf","svg",],case_insensitive:!0,unicodeRegex:!0,contains:[{className:"meta",begin://,relevance:10,contains:[i,l,s,r,{begin:/\[/,end:/\]/,contains:[{className:"meta",begin://,contains:[i,r,l,s]},]},]},e.COMMENT(//,{relevance:10}),{begin://,relevance:10},a,{className:"meta",end:/\?>/,variants:[{begin:/<\?xml/,relevance:10,contains:[l]},{begin:/<\?[a-z][a-z0-9]+/},]},{className:"tag",begin:/)/,end:/>/,keywords:{name:"style"},contains:[o],starts:{end:/<\/style>/,returnEnd:!0,subLanguage:["css","xml"]}},{className:"tag",begin:/)/,end:/>/,keywords:{name:"script"},contains:[o],starts:{end:/<\/script>/,returnEnd:!0,subLanguage:["javascript","handlebars","xml"]}},{className:"tag",begin:/<>|<\/>/},{className:"tag",begin:n.concat(//,/>/,/\s/)))),end:/\/?>/,contains:[{className:"name",begin:t,relevance:0,starts:o},]},{className:"tag",begin:n.concat(/<\//,n.lookahead(n.concat(t,/>/))),contains:[{className:"name",begin:t,relevance:0},{begin:/>/,relevance:0,endsParent:!0},]},]}},grmr_markdown(e){let n={begin:/<\/?[A-Za-z_]/,end:">",subLanguage:"xml",relevance:0},t={variants:[{begin:/\[.+?\]\[.*?\]/,relevance:0},{begin:/\[.+?\]\(((data|javascript|mailto):|(?:http|ftp)s?:\/\/).*?\)/,relevance:2},{begin:e.regex.concat(/\[.+?\]\(/,/[A-Za-z][A-Za-z0-9+.-]*/,/:\/\/.*?\)/),relevance:2},{begin:/\[.+?\]\([./?&#].*?\)/,relevance:1},{begin:/\[.*?\]\(.*?\)/,relevance:0},],returnBegin:!0,contains:[{match:/\[(?=\])/},{className:"string",relevance:0,begin:"\\[",end:"\\]",excludeBegin:!0,returnEnd:!0},{className:"link",relevance:0,begin:"\\]\\(",end:"\\)",excludeBegin:!0,excludeEnd:!0},{className:"symbol",relevance:0,begin:"\\]\\[",end:"\\]",excludeBegin:!0,excludeEnd:!0},]},a={className:"strong",contains:[],variants:[{begin:/_{2}(?!\s)/,end:/_{2}/},{begin:/\*{2}(?!\s)/,end:/\*{2}/},]},i={className:"emphasis",contains:[],variants:[{begin:/\*(?![*\s])/,end:/\*/},{begin:/_(?![_\s])/,end:/_/,relevance:0},]},r=e.inherit(a,{contains:[]}),s=e.inherit(i,{contains:[]});a.contains.push(s),i.contains.push(r);let l=[n,t];return[a,i,r,s].forEach(e=>{e.contains=e.contains.concat(l)}),{name:"Markdown",aliases:["md","mkdown","mkd"],contains:[{className:"section",variants:[{begin:"^#{1,6}",end:"$",contains:l=l.concat(a,i)},{begin:"(?=^.+?\\n[=-]{2,}$)",contains:[{begin:"^[=-]*$"},{begin:"^",end:"\\n",contains:l},]},]},n,{className:"bullet",begin:"^[ ]*([*+-]|(\\d+\\.))(?=\\s+)",end:"\\s+",excludeEnd:!0},a,i,{className:"quote",begin:"^>\\s+",contains:l,end:"$"},{className:"code",variants:[{begin:"(`{3,})[^`](.|\\n)*?\\1`*[ ]*"},{begin:"(~{3,})[^~](.|\\n)*?\\1~*[ ]*"},{begin:"```",end:"```+[ ]*$"},{begin:"~~~",end:"~~~+[ ]*$"},{begin:"`.+?`"},{begin:"(?=^( {4}|\\t))",contains:[{begin:"^( {4}|\\t)",end:"(\\n)$"}],relevance:0},]},{begin:"^[-\\*]{3,}",end:"$"},t,{begin:/^\[[^\n]+\]:/,returnBegin:!0,contains:[{className:"symbol",begin:/\[/,end:/\]/,excludeBegin:!0,excludeEnd:!0},{className:"link",begin:/:\s*/,end:/$/,excludeBegin:!0},]},]}},grmr_objectivec(e){let 
n=/[a-zA-Z@][a-zA-Z0-9_]*/,t={$pattern:n,keyword:["@interface","@class","@protocol","@implementation"]};return{name:"Objective-C",aliases:["mm","objc","obj-c","obj-c++","objective-c++"],keywords:{"variable.language":["this","super"],$pattern:n,keyword:["while","export","sizeof","typedef","const","struct","for","union","volatile","static","mutable","if","do","return","goto","enum","else","break","extern","asm","case","default","register","explicit","typename","switch","continue","inline","readonly","assign","readwrite","self","@synchronized","id","typeof","nonatomic","IBOutlet","IBAction","strong","weak","copy","in","out","inout","bycopy","byref","oneway","__strong","__weak","__block","__autoreleasing","@private","@protected","@public","@try","@property","@end","@throw","@catch","@finally","@autoreleasepool","@synthesize","@dynamic","@selector","@optional","@required","@encode","@package","@import","@defs","@compatibility_alias","__bridge","__bridge_transfer","__bridge_retained","__bridge_retain","__covariant","__contravariant","__kindof","_Nonnull","_Nullable","_Null_unspecified","__FUNCTION__","__PRETTY_FUNCTION__","__attribute__","getter","setter","retain","unsafe_unretained","nonnull","nullable","null_unspecified","null_resettable","class","instancetype","NS_DESIGNATED_INITIALIZER","NS_UNAVAILABLE","NS_REQUIRES_SUPER","NS_RETURNS_INNER_POINTER","NS_INLINE","NS_AVAILABLE","NS_DEPRECATED","NS_ENUM","NS_OPTIONS","NS_SWIFT_UNAVAILABLE","NS_ASSUME_NONNULL_BEGIN","NS_ASSUME_NONNULL_END","NS_REFINED_FOR_SWIFT","NS_SWIFT_NAME","NS_SWIFT_NOTHROW","NS_DURING","NS_HANDLER","NS_ENDHANDLER","NS_VALUERETURN","NS_VOIDRETURN",],literal:["false","true","FALSE","TRUE","nil","YES","NO","NULL",],built_in:["dispatch_once_t","dispatch_queue_t","dispatch_sync","dispatch_async","dispatch_once",],type:["int","float","char","unsigned","signed","short","long","double","wchar_t","unichar","void","bool","BOOL","id|0","_Bool",]},illegal:"/,end:/$/,illegal:"\\n"},e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,]},{className:"class",begin:"("+t.keyword.join("|")+")\\b",end:/(\{|$)/,excludeEnd:!0,keywords:t,contains:[e.UNDERSCORE_TITLE_MODE]},{begin:"\\."+e.UNDERSCORE_IDENT_RE,relevance:0},]}},grmr_perl(e){let n=e.regex,t=/[dualxmsipngr]{0,12}/,a={$pattern:/[\w.]+/,keyword:"abs accept alarm and atan2 bind binmode bless break caller chdir chmod chomp chop chown chr chroot close closedir connect continue cos crypt dbmclose dbmopen defined delete die do dump each else elsif endgrent endhostent endnetent endprotoent endpwent endservent eof eval exec exists exit exp fcntl fileno flock for foreach fork format formline getc getgrent getgrgid getgrnam gethostbyaddr gethostbyname gethostent getlogin getnetbyaddr getnetbyname getnetent getpeername getpgrp getpriority getprotobyname getprotobynumber getprotoent getpwent getpwnam getpwuid getservbyname getservbyport getservent getsockname getsockopt given glob gmtime goto grep gt hex if index int ioctl join keys kill last lc lcfirst length link listen local localtime log lstat lt ma map mkdir msgctl msgget msgrcv msgsnd my ne next no not oct open opendir or ord our pack package pipe pop pos print printf prototype push q|0 qq quotemeta qw qx rand read readdir readline readlink readpipe recv redo ref rename require reset return reverse rewinddir rindex rmdir say scalar seek seekdir select semctl semget semop send setgrent sethostent setnetent setpgrp setpriority setprotoent setpwent setservent setsockopt shift shmctl shmget shmread shmwrite shutdown sin sleep socket socketpair sort 
splice split sprintf sqrt srand stat state study sub substr symlink syscall sysopen sysread sysseek system syswrite tell telldir tie tied time times tr truncate uc ucfirst umask undef unless unlink unpack unshift untie until use utime values vec wait waitpid wantarray warn when while write x|0 xor y|0"},i={className:"subst",begin:"[$@]\\{",end:"\\}",keywords:a},r={begin:/->\{/,end:/\}/},s={variants:[{begin:/\$\d/},{begin:n.concat(/[$%@](\^\w\b|#\w+(::\w+)*|\{\w+\}|\w+(::\w*)*)/,"(?![A-Za-z])(?![@$%])")},{begin:/[$%@][^\s\w{]/,relevance:0},]},l=[e.BACKSLASH_ESCAPE,i,s],o=[/!/,/\//,/\|/,/\?/,/'/,/"/,/#/],c=(e,a,i="\\1")=>{let r="\\1"===i?i:n.concat(i,a);return n.concat(n.concat("(?:",e,")"),a,/(?:\\.|[^\\\/])*?/,r,/(?:\\.|[^\\\/])*?/,i,t)},d=(e,a,i)=>n.concat(n.concat("(?:",e,")"),a,/(?:\\.|[^\\\/])*?/,i,t),g=[s,e.HASH_COMMENT_MODE,e.COMMENT(/^=\w/,/=cut/,{endsWithParent:!0}),r,{className:"string",contains:l,variants:[{begin:"q[qwxr]?\\s*\\(",end:"\\)",relevance:5},{begin:"q[qwxr]?\\s*\\[",end:"\\]",relevance:5},{begin:"q[qwxr]?\\s*\\{",end:"\\}",relevance:5},{begin:"q[qwxr]?\\s*\\|",end:"\\|",relevance:5},{begin:"q[qwxr]?\\s*<",end:">",relevance:5},{begin:"qw\\s+q",end:"q",relevance:5},{begin:"'",end:"'",contains:[e.BACKSLASH_ESCAPE]},{begin:'"',end:'"'},{begin:"`",end:"`",contains:[e.BACKSLASH_ESCAPE]},{begin:/\{\w+\}/,relevance:0},{begin:"-?\\w+\\s*=>",relevance:0},]},{className:"number",begin:"(\\b0[0-7_]+)|(\\b0x[0-9a-fA-F_]+)|(\\b[1-9][0-9_]*(\\.[0-9_]+)?)|[0_]\\b",relevance:0},{begin:"(\\/\\/|"+e.RE_STARTERS_RE+"|\\b(split|return|print|reverse|grep)\\b)\\s*",keywords:"split return print reverse grep",relevance:0,contains:[e.HASH_COMMENT_MODE,{className:"regexp",variants:[{begin:c("s|tr|y",n.either(...o,{capture:!0}))},{begin:c("s|tr|y","\\(","\\)")},{begin:c("s|tr|y","\\[","\\]")},{begin:c("s|tr|y","\\{","\\}")},],relevance:2},{className:"regexp",variants:[{begin:/(m|qr)\/\//,relevance:0},{begin:d("(?:m|qr)?",/\//,/\//)},{begin:d("m|qr",n.either(...o,{capture:!0}),/\1/)},{begin:d("m|qr",/\(/,/\)/)},{begin:d("m|qr",/\[/,/\]/)},{begin:d("m|qr",/\{/,/\}/)},]},]},{className:"function",beginKeywords:"sub",end:"(\\s*\\(.*?\\))?[;{]",excludeEnd:!0,relevance:5,contains:[e.TITLE_MODE]},{begin:"-\\w\\b",relevance:0},{begin:"^__DATA__$",end:"^__END__$",subLanguage:"mojolicious",contains:[{begin:"^@@.*",end:"$",className:"comment"}]},];return i.contains=g,r.contains=g,{name:"Perl",aliases:["pl","pm"],keywords:a,contains:g}},grmr_php(e){let n=e.regex,t=/(?![A-Za-z0-9])(?![$])/,a=n.concat(/[a-zA-Z_\x7f-\xff][a-zA-Z0-9_\x7f-\xff]*/,t),i=n.concat(/(\\?[A-Z][a-z0-9_\x7f-\xff]+|\\?[A-Z]+(?=[A-Z][a-z0-9_\x7f-\xff])){1,}/,t),r={scope:"variable",match:"\\$+"+a},s={scope:"subst",variants:[{begin:/\$\w+/},{begin:/\{\$/,end:/\}/},]},l=e.inherit(e.APOS_STRING_MODE,{illegal:null}),o="[ \n]",c={scope:"string",variants:[e.inherit(e.QUOTE_STRING_MODE,{illegal:null,contains:e.QUOTE_STRING_MODE.contains.concat(s)}),l,e.END_SAME_AS_BEGIN({begin:/<<<[ \t]*(\w+)\n/,end:/[ 
\t]*(\w+)\b/,contains:e.QUOTE_STRING_MODE.contains.concat(s)}),]},d={scope:"number",variants:[{begin:"\\b0[bB][01]+(?:_[01]+)*\\b"},{begin:"\\b0[oO][0-7]+(?:_[0-7]+)*\\b"},{begin:"\\b0[xX][\\da-fA-F]+(?:_[\\da-fA-F]+)*\\b"},{begin:"(?:\\b\\d+(?:_\\d+)*(\\.(?:\\d+(?:_\\d+)*))?|\\B\\.\\d+)(?:[eE][+-]?\\d+)?"},],relevance:0},g=["false","null","true"],u=["__CLASS__","__DIR__","__FILE__","__FUNCTION__","__COMPILER_HALT_OFFSET__","__LINE__","__METHOD__","__NAMESPACE__","__TRAIT__","die","echo","exit","include","include_once","print","require","require_once","array","abstract","and","as","binary","bool","boolean","break","callable","case","catch","class","clone","const","continue","declare","default","do","double","else","elseif","empty","enddeclare","endfor","endforeach","endif","endswitch","endwhile","enum","eval","extends","final","finally","float","for","foreach","from","global","goto","if","implements","instanceof","insteadof","int","integer","interface","isset","iterable","list","match|0","mixed","new","never","object","or","private","protected","public","readonly","real","return","string","switch","throw","trait","try","unset","use","var","void","while","xor","yield",],b=["Error|0","AppendIterator","ArgumentCountError","ArithmeticError","ArrayIterator","ArrayObject","AssertionError","BadFunctionCallException","BadMethodCallException","CachingIterator","CallbackFilterIterator","CompileError","Countable","DirectoryIterator","DivisionByZeroError","DomainException","EmptyIterator","ErrorException","Exception","FilesystemIterator","FilterIterator","GlobIterator","InfiniteIterator","InvalidArgumentException","IteratorIterator","LengthException","LimitIterator","LogicException","MultipleIterator","NoRewindIterator","OutOfBoundsException","OutOfRangeException","OuterIterator","OverflowException","ParentIterator","ParseError","RangeException","RecursiveArrayIterator","RecursiveCachingIterator","RecursiveCallbackFilterIterator","RecursiveDirectoryIterator","RecursiveFilterIterator","RecursiveIterator","RecursiveIteratorIterator","RecursiveRegexIterator","RecursiveTreeIterator","RegexIterator","RuntimeException","SeekableIterator","SplDoublyLinkedList","SplFileInfo","SplFileObject","SplFixedArray","SplHeap","SplMaxHeap","SplMinHeap","SplObjectStorage","SplObserver","SplPriorityQueue","SplQueue","SplStack","SplSubject","SplTempFileObject","TypeError","UnderflowException","UnexpectedValueException","UnhandledMatchError","ArrayAccess","BackedEnum","Closure","Fiber","Generator","Iterator","IteratorAggregate","Serializable","Stringable","Throwable","Traversable","UnitEnum","WeakReference","WeakMap","Directory","__PHP_Incomplete_Class","parent","php_user_filter","self","static","stdClass",],m={keyword:u,literal:(e=>{let n=[];return 
e.forEach(e=>{n.push(e),e.toLowerCase()===e?n.push(e.toUpperCase()):n.push(e.toLowerCase())}),n})(g),built_in:b},p=e=>e.map(e=>e.replace(/\|\d+$/,"")),h={variants:[{match:[/new/,n.concat(o,"+"),n.concat("(?!",p(b).join("\\b|"),"\\b)"),i,],scope:{1:"keyword",4:"title.class"}},]},f=n.concat(a,"\\b(?!\\()"),E={variants:[{match:[n.concat(/::/,n.lookahead(/(?!class\b)/)),f],scope:{2:"variable.constant"}},{match:[/::/,/class/],scope:{2:"variable.language"}},{match:[i,n.concat(/::/,n.lookahead(/(?!class\b)/)),f],scope:{1:"title.class",3:"variable.constant"}},{match:[i,n.concat("::",n.lookahead(/(?!class\b)/))],scope:{1:"title.class"}},{match:[i,/::/,/class/],scope:{1:"title.class",3:"variable.language"}},]},$={scope:"attr",match:n.concat(a,n.lookahead(":"),n.lookahead(/(?!::)/))},y={relevance:0,begin:/\(/,end:/\)/,keywords:m,contains:[$,r,E,e.C_BLOCK_COMMENT_MODE,c,d,h]},N={relevance:0,match:[/\b/,n.concat("(?!fn\\b|function\\b|",p(u).join("\\b|"),"|",p(b).join("\\b|"),"\\b)"),a,n.concat(o,"*"),n.lookahead(/(?=\()/),],scope:{3:"title.function.invoke"},contains:[y]};y.contains.push(N);let w=[$,E,e.C_BLOCK_COMMENT_MODE,c,d,h];return{case_insensitive:!1,keywords:m,contains:[{begin:n.concat(/#\[\s*/,i),beginScope:"meta",end:/]/,endScope:"meta",keywords:{literal:g,keyword:["new","array"]},contains:[{begin:/\[/,end:/]/,keywords:{literal:g,keyword:["new","array"]},contains:["self",...w]},...w,{scope:"meta",match:i},]},e.HASH_COMMENT_MODE,e.COMMENT("//","$"),e.COMMENT("/\\*","\\*/",{contains:[{scope:"doctag",match:"@[A-Za-z]+"},]}),{match:/__halt_compiler\(\);/,keywords:"__halt_compiler",starts:{scope:"comment",end:e.MATCH_NOTHING_RE,contains:[{match:/\?>/,scope:"meta",endsParent:!0}]}},{scope:"meta",variants:[{begin:/<\?php/,relevance:10},{begin:/<\?=/},{begin:/<\?/,relevance:.1},{begin:/\?>/},]},{scope:"variable.language",match:/\$this\b/},r,N,E,{match:[/const/,/\s/,a],scope:{1:"keyword",3:"variable.constant"}},h,{scope:"function",relevance:0,beginKeywords:"fn function",end:/[;{]/,excludeEnd:!0,illegal:"[$%\\[]",contains:[{beginKeywords:"use"},e.UNDERSCORE_TITLE_MODE,{begin:"=>",endsParent:!0},{scope:"params",begin:"\\(",end:"\\)",excludeBegin:!0,excludeEnd:!0,keywords:m,contains:["self",r,E,e.C_BLOCK_COMMENT_MODE,c,d]},]},{scope:"class",variants:[{beginKeywords:"enum",illegal:/[($"]/},{beginKeywords:"class interface trait",illegal:/[:($"]/},],relevance:0,end:/\{/,excludeEnd:!0,contains:[{beginKeywords:"extends implements"},e.UNDERSCORE_TITLE_MODE,]},{beginKeywords:"namespace",relevance:0,end:";",illegal:/[.']/,contains:[e.inherit(e.UNDERSCORE_TITLE_MODE,{scope:"title.class"}),]},{beginKeywords:"use",relevance:0,end:";",contains:[{match:/\b(as|const|function)\b/,scope:"keyword"},e.UNDERSCORE_TITLE_MODE,]},c,d,]}},grmr_php_template:e=>({name:"PHP template",subLanguage:"xml",contains:[{begin:/<\?(php|=)?/,end:/\?>/,subLanguage:"php",contains:[{begin:"/\\*",end:"\\*/",skip:!0},{begin:'b"',end:'"',skip:!0},{begin:"b'",end:"'",skip:!0},e.inherit(e.APOS_STRING_MODE,{illegal:null,className:null,contains:null,skip:!0}),e.inherit(e.QUOTE_STRING_MODE,{illegal:null,className:null,contains:null,skip:!0}),]},]}),grmr_plaintext:e=>({name:"Plain text",aliases:["text","txt"],disableAutodetect:!0}),grmr_python(e){let 
n=e.regex,t=/[\p{XID_Start}_]\p{XID_Continue}*/u,a=["and","as","assert","async","await","break","case","class","continue","def","del","elif","else","except","finally","for","from","global","if","import","in","is","lambda","match","nonlocal|10","not","or","pass","raise","return","try","while","with","yield",],i={$pattern:/[A-Za-z]\w+|__\w+__/,keyword:a,built_in:["__import__","abs","all","any","ascii","bin","bool","breakpoint","bytearray","bytes","callable","chr","classmethod","compile","complex","delattr","dict","dir","divmod","enumerate","eval","exec","filter","float","format","frozenset","getattr","globals","hasattr","hash","help","hex","id","input","int","isinstance","issubclass","iter","len","list","locals","map","max","memoryview","min","next","object","oct","open","ord","pow","print","property","range","repr","reversed","round","set","setattr","slice","sorted","staticmethod","str","sum","super","tuple","type","vars","zip",],literal:["__debug__","Ellipsis","False","None","NotImplemented","True",],type:["Any","Callable","Coroutine","Dict","List","Literal","Generic","Optional","Sequence","Set","Tuple","Type","Union",]},r={className:"meta",begin:/^(>>>|\.\.\.) /},s={className:"subst",begin:/\{/,end:/\}/,keywords:i,illegal:/#/},l={begin:/\{\{/,relevance:0},o={className:"string",contains:[e.BACKSLASH_ESCAPE],variants:[{begin:/([uU]|[bB]|[rR]|[bB][rR]|[rR][bB])?'''/,end:/'''/,contains:[e.BACKSLASH_ESCAPE,r],relevance:10},{begin:/([uU]|[bB]|[rR]|[bB][rR]|[rR][bB])?"""/,end:/"""/,contains:[e.BACKSLASH_ESCAPE,r],relevance:10},{begin:/([fF][rR]|[rR][fF]|[fF])'''/,end:/'''/,contains:[e.BACKSLASH_ESCAPE,r,l,s]},{begin:/([fF][rR]|[rR][fF]|[fF])"""/,end:/"""/,contains:[e.BACKSLASH_ESCAPE,r,l,s]},{begin:/([uU]|[rR])'/,end:/'/,relevance:10},{begin:/([uU]|[rR])"/,end:/"/,relevance:10},{begin:/([bB]|[bB][rR]|[rR][bB])'/,end:/'/},{begin:/([bB]|[bB][rR]|[rR][bB])"/,end:/"/},{begin:/([fF][rR]|[rR][fF]|[fF])'/,end:/'/,contains:[e.BACKSLASH_ESCAPE,l,s]},{begin:/([fF][rR]|[rR][fF]|[fF])"/,end:/"/,contains:[e.BACKSLASH_ESCAPE,l,s]},e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,]},c="[0-9](_?[0-9])*",d=`(\\b(${c}))?\\.(${c})|\\b(${c})\\.`,g="\\b|"+a.join("|"),u={className:"number",relevance:0,variants:[{begin:`(\\b(${c})|(${d}))[eE][+-]?(${c})[jJ]?(?=${g})`},{begin:`(${d})[jJ]?`},{begin:`\\b([1-9](_?[0-9])*|0+(_?0)*)[lLjJ]?(?=${g})`},{begin:`\\b0[bB](_?[01])+[lL]?(?=${g})`},{begin:`\\b0[oO](_?[0-7])+[lL]?(?=${g})`},{begin:`\\b0[xX](_?[0-9a-fA-F])+[lL]?(?=${g})`},{begin:`\\b(${c})[jJ](?=${g})`},]},b={className:"comment",begin:n.lookahead(/# type:/),end:/$/,keywords:i,contains:[{begin:/# type:/},{begin:/#/,end:/\b\B/,endsWithParent:!0},]},m={className:"params",variants:[{className:"",begin:/\(\s*\)/,skip:!0},{begin:/\(/,end:/\)/,excludeBegin:!0,excludeEnd:!0,keywords:i,contains:["self",r,u,o,e.HASH_COMMENT_MODE]},]};return s.contains=[o,u,r],{name:"Python",aliases:["py","gyp","ipython"],unicodeRegex:!0,keywords:i,illegal:/(<\/|->|\?)|=>/,contains:[r,u,{begin:/\bself\b/},{beginKeywords:"if",relevance:0},o,b,e.HASH_COMMENT_MODE,{match:[/\bdef/,/\s+/,t],scope:{1:"keyword",3:"title.function"},contains:[m]},{variants:[{match:[/\bclass/,/\s+/,t,/\s*/,/\(\s*/,t,/\s*\)/]},{match:[/\bclass/,/\s+/,t]},],scope:{1:"keyword",3:"title.class",6:"title.class.inherited"}},{className:"meta",begin:/^[\t ]*@/,end:/(?=#)|$/,contains:[u,m,o]},]}},grmr_python_repl:e=>({aliases:["pycon"],contains:[{className:"meta.prompt",starts:{end:/ |$/,starts:{end:"$",subLanguage:"python"}},variants:[{begin:/^>>>(?=[ ]|$)/},{begin:/^\.\.\.(?=[ 
]|$)/},]},]}),grmr_r(e){let n=e.regex,t=/(?:(?:[a-zA-Z]|\.[._a-zA-Z])[._a-zA-Z0-9]*)|\.(?!\d)/,a=n.either(/0[xX][0-9a-fA-F]+\.[0-9a-fA-F]*[pP][+-]?\d+i?/,/0[xX][0-9a-fA-F]+(?:[pP][+-]?\d+)?[Li]?/,/(?:\d+(?:\.\d*)?|\.\d+)(?:[eE][+-]?\d+)?[Li]?/),i=/[=!<>:]=|\|\||&&|:::?|<-|<<-|->>|->|\|>|[-+*\/?!$&|:<=>@^~]|\*\*/,r=n.either(/[()]/,/[{}]/,/\[\[/,/[[\]]/,/\\/,/,/);return{name:"R",keywords:{$pattern:t,keyword:"function if in break next repeat else for while",literal:"NULL NA TRUE FALSE Inf NaN NA_integer_|10 NA_real_|10 NA_character_|10 NA_complex_|10",built_in:"LETTERS letters month.abb month.name pi T F abs acos acosh all any anyNA Arg as.call as.character as.complex as.double as.environment as.integer as.logical as.null.default as.numeric as.raw asin asinh atan atanh attr attributes baseenv browser c call ceiling class Conj cos cosh cospi cummax cummin cumprod cumsum digamma dim dimnames emptyenv exp expression floor forceAndCall gamma gc.time globalenv Im interactive invisible is.array is.atomic is.call is.character is.complex is.double is.environment is.expression is.finite is.function is.infinite is.integer is.language is.list is.logical is.matrix is.na is.name is.nan is.null is.numeric is.object is.pairlist is.raw is.recursive is.single is.symbol lazyLoadDBfetch length lgamma list log max min missing Mod names nargs nzchar oldClass on.exit pos.to.env proc.time prod quote range Re rep retracemem return round seq_along seq_len seq.int sign signif sin sinh sinpi sqrt standardGeneric substitute sum switch tan tanh tanpi tracemem trigamma trunc unclass untracemem UseMethod xtfrm"},contains:[e.COMMENT(/#'/,/$/,{contains:[{scope:"doctag",match:/@examples/,starts:{end:n.lookahead(n.either(/\n^#'\s*(?=@[a-zA-Z]+)/,/\n^(?!#')/)),endsParent:!0}},{scope:"doctag",begin:"@param",end:/$/,contains:[{scope:"variable",variants:[{match:t},{match:/`(?:\\.|[^`\\])+`/}],endsParent:!0},]},{scope:"doctag",match:/@[a-zA-Z]+/},{scope:"keyword",match:/\\[a-zA-Z]+/},]}),e.HASH_COMMENT_MODE,{scope:"string",contains:[e.BACKSLASH_ESCAPE],variants:[e.END_SAME_AS_BEGIN({begin:/[rR]"(-*)\(/,end:/\)(-*)"/}),e.END_SAME_AS_BEGIN({begin:/[rR]"(-*)\{/,end:/\}(-*)"/}),e.END_SAME_AS_BEGIN({begin:/[rR]"(-*)\[/,end:/\](-*)"/}),e.END_SAME_AS_BEGIN({begin:/[rR]'(-*)\(/,end:/\)(-*)'/}),e.END_SAME_AS_BEGIN({begin:/[rR]'(-*)\{/,end:/\}(-*)'/}),e.END_SAME_AS_BEGIN({begin:/[rR]'(-*)\[/,end:/\](-*)'/}),{begin:'"',end:'"',relevance:0},{begin:"'",end:"'",relevance:0},]},{relevance:0,variants:[{scope:{1:"operator",2:"number"},match:[i,a]},{scope:{1:"operator",2:"number"},match:[/%[^%]*%/,a]},{scope:{1:"punctuation",2:"number"},match:[r,a]},{scope:{2:"number"},match:[/[^a-zA-Z0-9._]|^/,a]},]},{scope:{3:"operator"},match:[t,/\s+/,/<-/,/\s+/]},{scope:"operator",relevance:0,variants:[{match:i},{match:/%[^%]*%/},]},{scope:"punctuation",relevance:0,match:r},{begin:"`",end:"`",contains:[{begin:/\\./}]},]}},grmr_ruby(e){let 
n=e.regex,t="([a-zA-Z_]\\w*[!?=]?|[-+~]@|<<|>>|=~|===?|<=>|[<>]=?|\\*\\*|[-/+%^&*~`|]|\\[\\]=?)",a=n.either(/\b([A-Z]+[a-z0-9]+)+/,/\b([A-Z]+[a-z0-9]+)+[A-Z]+/),i=n.concat(a,/(::\w+)*/),r={"variable.constant":["__FILE__","__LINE__","__ENCODING__"],"variable.language":["self","super"],keyword:["alias","and","begin","BEGIN","break","case","class","defined","do","else","elsif","end","END","ensure","for","if","in","module","next","not","or","redo","require","rescue","retry","return","then","undef","unless","until","when","while","yield","include","extend","prepend","public","private","protected","raise","throw",],built_in:["proc","lambda","attr_accessor","attr_reader","attr_writer","define_method","private_constant","module_function",],literal:["true","false","nil"]},s={className:"doctag",begin:"@[A-Za-z]+"},l={begin:"#<",end:">"},o=[e.COMMENT("#","$",{contains:[s]}),e.COMMENT("^=begin","^=end",{contains:[s],relevance:10}),e.COMMENT("^__END__",e.MATCH_NOTHING_RE),],c={className:"subst",begin:/#\{/,end:/\}/,keywords:r},d={className:"string",contains:[e.BACKSLASH_ESCAPE,c],variants:[{begin:/'/,end:/'/},{begin:/"/,end:/"/},{begin:/`/,end:/`/},{begin:/%[qQwWx]?\(/,end:/\)/},{begin:/%[qQwWx]?\[/,end:/\]/},{begin:/%[qQwWx]?\{/,end:/\}/},{begin:/%[qQwWx]?/},{begin:/%[qQwWx]?\//,end:/\//},{begin:/%[qQwWx]?%/,end:/%/},{begin:/%[qQwWx]?-/,end:/-/},{begin:/%[qQwWx]?\|/,end:/\|/},{begin:/\B\?(\\\d{1,3})/},{begin:/\B\?(\\x[A-Fa-f0-9]{1,2})/},{begin:/\B\?(\\u\{?[A-Fa-f0-9]{1,6}\}?)/},{begin:/\B\?(\\M-\\C-|\\M-\\c|\\c\\M-|\\M-|\\C-\\M-)[\x20-\x7e]/},{begin:/\B\?\\(c|C-)[\x20-\x7e]/},{begin:/\B\?\\?\S/},{begin:n.concat(/<<[-~]?'?/,n.lookahead(/(\w+)(?=\W)[^\n]*\n(?:[^\n]*\n)*?\s*\1\b/)),contains:[e.END_SAME_AS_BEGIN({begin:/(\w+)/,end:/(\w+)/,contains:[e.BACKSLASH_ESCAPE,c]}),]},]},g="[0-9](_?[0-9])*",u={className:"number",relevance:0,variants:[{begin:`\\b([1-9](_?[0-9])*|0)(\\.(${g}))?([eE][+-]?(${g})|r)?i?\\b`},{begin:"\\b0[dD][0-9](_?[0-9])*r?i?\\b"},{begin:"\\b0[bB][0-1](_?[0-1])*r?i?\\b"},{begin:"\\b0[oO][0-7](_?[0-7])*r?i?\\b"},{begin:"\\b0[xX][0-9a-fA-F](_?[0-9a-fA-F])*r?i?\\b"},{begin:"\\b0(_?[0-7])+r?i?\\b"},]},b={variants:[{match:/\(\)/},{className:"params",begin:/\(/,end:/(?=\))/,excludeBegin:!0,endsParent:!0,keywords:r},]},m=[d,{variants:[{match:[/class\s+/,i,/\s+<\s+/,i]},{match:[/\b(class|module)\s+/,i]},],scope:{2:"title.class",4:"title.class.inherited"},keywords:r},{match:[/(include|extend)\s+/,i],scope:{2:"title.class"},keywords:r},{relevance:0,match:[i,/\.new[. 
(]/],scope:{1:"title.class"}},{relevance:0,match:/\b[A-Z][A-Z_0-9]+\b/,className:"variable.constant"},{relevance:0,match:a,scope:"title.class"},{match:[/def/,/\s+/,t],scope:{1:"keyword",3:"title.function"},contains:[b]},{begin:e.IDENT_RE+"::"},{className:"symbol",begin:e.UNDERSCORE_IDENT_RE+"(!|\\?)?:",relevance:0},{className:"symbol",begin:":(?!\\s)",contains:[d,{begin:t}],relevance:0},u,{className:"variable",begin:"(\\$\\W)|((\\$|@@?)(\\w+))(?=[^@$?])(?![A-Za-z])(?![@$?'])"},{className:"params",begin:/\|/,end:/\|/,excludeBegin:!0,excludeEnd:!0,relevance:0,keywords:r},{begin:"("+e.RE_STARTERS_RE+"|unless)\\s*",keywords:"unless",contains:[{className:"regexp",contains:[e.BACKSLASH_ESCAPE,c],illegal:/\n/,variants:[{begin:"/",end:"/[a-z]*"},{begin:/%r\{/,end:/\}[a-z]*/},{begin:"%r\\(",end:"\\)[a-z]*"},{begin:"%r!",end:"![a-z]*"},{begin:"%r\\[",end:"\\][a-z]*"},]},].concat(l,o),relevance:0},].concat(l,o);return c.contains=m,b.contains=m,o.unshift(l),{name:"Ruby",aliases:["rb","gemspec","podspec","thor","irb"],keywords:r,illegal:/\/\*/,contains:[e.SHEBANG({binary:"ruby"})].concat([{begin:/^\s*=>/,starts:{end:"$",contains:m}},{className:"meta.prompt",begin:"^([>?]>|[\\w#]+\\(\\w+\\):\\d+:\\d+[>*]|(\\w+-)?\\d+\\.\\d+\\.\\d+(p\\d+)?[^\\d][^>]+>)(?=[ ])",starts:{end:"$",keywords:r,contains:m}},]).concat(o).concat(m)}},grmr_rust(e){let n=e.regex,t={className:"title.function.invoke",relevance:0,begin:n.concat(/\b/,/(?!let\b)/,e.IDENT_RE,n.lookahead(/\s*\(/))},a="([ui](8|16|32|64|128|size)|f(32|64))?",i=["drop ","Copy","Send","Sized","Sync","Drop","Fn","FnMut","FnOnce","ToOwned","Clone","Debug","PartialEq","PartialOrd","Eq","Ord","AsRef","AsMut","Into","From","Default","Iterator","Extend","IntoIterator","DoubleEndedIterator","ExactSizeIterator","SliceConcatExt","ToString","assert!","assert_eq!","bitflags!","bytes!","cfg!","col!","concat!","concat_idents!","debug_assert!","debug_assert_eq!","env!","panic!","file!","format!","format_args!","include_bytes!","include_str!","line!","local_data_key!","module_path!","option_env!","print!","println!","select!","stringify!","try!","unimplemented!","unreachable!","vec!","write!","writeln!","macro_rules!","assert_ne!","debug_assert_ne!",],r=["i8","i16","i32","i64","i128","isize","u8","u16","u32","u64","u128","usize","f32","f64","str","char","bool","Box","Option","Result","String","Vec",];return{name:"Rust",aliases:["rs"],keywords:{$pattern:e.IDENT_RE+"!?",type:r,keyword:["abstract","as","async","await","become","box","break","const","continue","crate","do","dyn","else","enum","extern","false","final","fn","for","if","impl","in","let","loop","macro","match","mod","move","mut","override","priv","pub","ref","return","self","Self","static","struct","super","trait","true","try","type","typeof","unsafe","unsized","use","virtual","where","while","yield",],literal:["true","false","Some","None","Ok","Err"],built_in:i},illegal:""},t,]}},grmr_scss(e){let 
n=X(e),t="@[a-z-]+",a={className:"variable",begin:"(\\$[a-zA-Z-][a-zA-Z0-9_-]*)\\b",relevance:0};return{name:"SCSS",case_insensitive:!0,illegal:"[=/|']",contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,n.CSS_NUMBER_MODE,{className:"selector-id",begin:"#[A-Za-z0-9_-]+",relevance:0},{className:"selector-class",begin:"\\.[A-Za-z0-9_-]+",relevance:0},n.ATTRIBUTE_SELECTOR_MODE,{className:"selector-tag",begin:"\\b("+V.join("|")+")\\b",relevance:0},{className:"selector-pseudo",begin:":("+Y.join("|")+")"},{className:"selector-pseudo",begin:":(:)?("+ee.join("|")+")"},a,{begin:/\(/,end:/\)/,contains:[n.CSS_NUMBER_MODE]},n.CSS_VARIABLE,{className:"attribute",begin:"\\b("+en.join("|")+")\\b"},{begin:"\\b(whitespace|wait|w-resize|visible|vertical-text|vertical-ideographic|uppercase|upper-roman|upper-alpha|underline|transparent|top|thin|thick|text|text-top|text-bottom|tb-rl|table-header-group|table-footer-group|sw-resize|super|strict|static|square|solid|small-caps|separate|se-resize|scroll|s-resize|rtl|row-resize|ridge|right|repeat|repeat-y|repeat-x|relative|progress|pointer|overline|outside|outset|oblique|nowrap|not-allowed|normal|none|nw-resize|no-repeat|no-drop|newspaper|ne-resize|n-resize|move|middle|medium|ltr|lr-tb|lowercase|lower-roman|lower-alpha|loose|list-item|line|line-through|line-edge|lighter|left|keep-all|justify|italic|inter-word|inter-ideograph|inside|inset|inline|inline-block|inherit|inactive|ideograph-space|ideograph-parenthesis|ideograph-numeric|ideograph-alpha|horizontal|hidden|help|hand|groove|fixed|ellipsis|e-resize|double|dotted|distribute|distribute-space|distribute-letter|distribute-all-lines|disc|disabled|default|decimal|dashed|crosshair|collapse|col-resize|circle|char|center|capitalize|break-word|break-all|bottom|both|bolder|bold|block|bidi-override|below|baseline|auto|always|all-scroll|absolute|table|table-cell)\\b"},{begin:/:/,end:/[;}{]/,relevance:0,contains:[n.BLOCK_COMMENT,a,n.HEXCOLOR,n.CSS_NUMBER_MODE,e.QUOTE_STRING_MODE,e.APOS_STRING_MODE,n.IMPORTANT,n.FUNCTION_DISPATCH,]},{begin:"@(page|font-face)",keywords:{$pattern:t,keyword:"@page @font-face"}},{begin:"@",end:"[{;]",returnBegin:!0,keywords:{$pattern:/[a-z-]+/,keyword:"and or not only",attribute:J.join(" ")},contains:[{begin:t,className:"keyword"},{begin:/[a-z-]+(?=:)/,className:"attribute"},a,e.QUOTE_STRING_MODE,e.APOS_STRING_MODE,n.HEXCOLOR,n.CSS_NUMBER_MODE,]},n.FUNCTION_DISPATCH,]}},grmr_shell:e=>({name:"Shell Session",aliases:["console","shellsession"],contains:[{className:"meta.prompt",begin:/^\s{0,3}[/~\w\d[\]()@-]*[>%$#][ ]?/,starts:{end:/[^\\](?=\s*$)/,subLanguage:"bash"}},]}),grmr_sql(e){let 
n=e.regex,t=e.COMMENT("--","$"),a=["true","false","unknown"],i=["bigint","binary","blob","boolean","char","character","clob","date","dec","decfloat","decimal","float","int","integer","interval","nchar","nclob","national","numeric","real","row","smallint","time","timestamp","varchar","varying","varbinary",],r=["abs","acos","array_agg","asin","atan","avg","cast","ceil","ceiling","coalesce","corr","cos","cosh","count","covar_pop","covar_samp","cume_dist","dense_rank","deref","element","exp","extract","first_value","floor","json_array","json_arrayagg","json_exists","json_object","json_objectagg","json_query","json_table","json_table_primitive","json_value","lag","last_value","lead","listagg","ln","log","log10","lower","max","min","mod","nth_value","ntile","nullif","percent_rank","percentile_cont","percentile_disc","position","position_regex","power","rank","regr_avgx","regr_avgy","regr_count","regr_intercept","regr_r2","regr_slope","regr_sxx","regr_sxy","regr_syy","row_number","sin","sinh","sqrt","stddev_pop","stddev_samp","substring","substring_regex","sum","tan","tanh","translate","translate_regex","treat","trim","trim_array","unnest","upper","value_of","var_pop","var_samp","width_bucket",],s=["create table","insert into","primary key","foreign key","not null","alter table","add constraint","grouping sets","on overflow","character set","respect nulls","ignore nulls","nulls first","nulls last","depth first","breadth first",],l=r,o=["abs","acos","all","allocate","alter","and","any","are","array","array_agg","array_max_cardinality","as","asensitive","asin","asymmetric","at","atan","atomic","authorization","avg","begin","begin_frame","begin_partition","between","bigint","binary","blob","boolean","both","by","call","called","cardinality","cascaded","case","cast","ceil","ceiling","char","char_length","character","character_length","check","classifier","clob","close","coalesce","collate","collect","column","commit","condition","connect","constraint","contains","convert","copy","corr","corresponding","cos","cosh","count","covar_pop","covar_samp","create","cross","cube","cume_dist","current","current_catalog","current_date","current_default_transform_group","current_path","current_role","current_row","current_schema","current_time","current_timestamp","current_path","current_role","current_transform_group_for_type","current_user","cursor","cycle","date","day","deallocate","dec","decimal","decfloat","declare","default","define","delete","dense_rank","deref","describe","deterministic","disconnect","distinct","double","drop","dynamic","each","element","else","empty","end","end_frame","end_partition","end-exec","equals","escape","every","except","exec","execute","exists","exp","external","extract","false","fetch","filter","first_value","float","floor","for","foreign","frame_row","free","from","full","function","fusion","get","global","grant","group","grouping","groups","having","hold","hour","identity","in","indicator","initial","inner","inout","insensitive","insert","int","integer","intersect","intersection","interval","into","is","join","json_array","json_arrayagg","json_exists","json_object","json_objectagg","json_query","json_table","json_table_primitive","json_value","lag","language","large","last_value","lateral","lead","leading","left","like","like_regex","listagg","ln","local","localtime","localtimestamp","log","log10","lower","match","match_number","match_recognize","matches","max","member","merge","method","min","minute","mod","modifies","module","month","multiset","national","natural","nchar","
nclob","new","no","none","normalize","not","nth_value","ntile","null","nullif","numeric","octet_length","occurrences_regex","of","offset","old","omit","on","one","only","open","or","order","out","outer","over","overlaps","overlay","parameter","partition","pattern","per","percent","percent_rank","percentile_cont","percentile_disc","period","portion","position","position_regex","power","precedes","precision","prepare","primary","procedure","ptf","range","rank","reads","real","recursive","ref","references","referencing","regr_avgx","regr_avgy","regr_count","regr_intercept","regr_r2","regr_slope","regr_sxx","regr_sxy","regr_syy","release","result","return","returns","revoke","right","rollback","rollup","row","row_number","rows","running","savepoint","scope","scroll","search","second","seek","select","sensitive","session_user","set","show","similar","sin","sinh","skip","smallint","some","specific","specifictype","sql","sqlexception","sqlstate","sqlwarning","sqrt","start","static","stddev_pop","stddev_samp","submultiset","subset","substring","substring_regex","succeeds","sum","symmetric","system","system_time","system_user","table","tablesample","tan","tanh","then","time","timestamp","timezone_hour","timezone_minute","to","trailing","translate","translate_regex","translation","treat","trigger","trim","trim_array","true","truncate","uescape","union","unique","unknown","unnest","update","upper","user","using","value","values","value_of","var_pop","var_samp","varbinary","varchar","varying","versioning","when","whenever","where","width_bucket","window","with","within","without","year","add","asc","collation","desc","final","first","last","view",].filter(e=>!r.includes(e)),c={begin:n.concat(/\b/,n.either(...l),/\s*\(/),relevance:0,keywords:{built_in:l}};return{name:"SQL",case_insensitive:!0,illegal:/[{}]|<\//,keywords:{$pattern:/\b[\w\.]+/,keyword:((e,{exceptions:n,when:t}={})=>{let a=t;return n=n||[],e.map(e=>e.match(/\|\d+$/)||n.includes(e)?e:a(e)?e+"|0":e)})(o,{when:e=>e.length<3}),literal:a,type:i,built_in:["current_catalog","current_date","current_default_transform_group","current_path","current_role","current_schema","current_transform_group_for_type","current_user","session_user","system_time","system_user","current_time","localtime","current_timestamp","localtimestamp",]},contains:[{begin:n.either(...s),relevance:0,keywords:{$pattern:/[\w\.]+/,keyword:o.concat(s),literal:a,type:i}},{className:"type",begin:n.either("double precision","large object","with timezone","without timezone")},c,{className:"variable",begin:/@[a-z0-9]+/},{className:"string",variants:[{begin:/'/,end:/'/,contains:[{begin:/''/}]},]},{begin:/"/,end:/"/,contains:[{begin:/""/},]},e.C_NUMBER_MODE,e.C_BLOCK_COMMENT_MODE,t,{className:"operator",begin:/[-+*/=%^~]|&&?|\|\|?|!=?|<(?:=>?|<|>)?|>[>=]?/,relevance:0},]}},grmr_swift(e){let n={match:/\s+/,relevance:0},t=e.COMMENT("/\\*","\\*/",{contains:["self"]}),a=[e.C_LINE_COMMENT_MODE,t],i={match:[/\./,p(...e8,...eh)],className:{2:"keyword"}},r={match:m(/\./,p(...eE)),relevance:0},s=eE.filter(e=>"string"==typeof e).concat(["_|0"]),l={variants:[{className:"keyword",match:p(...eE.filter(e=>"string"!=typeof 
e).concat(ef).map(ep),...eh)},]},o={$pattern:p(/\b\w+/,/#\w+/),keyword:s.concat(eN),literal:e$},c=[i,r,l],d=[{match:m(/\./,p(...ew)),relevance:0},{className:"built_in",match:m(/\b/,p(...ew),/(?=\()/)},],u={match:/->/,relevance:0},b=[u,{className:"operator",relevance:0,variants:[{match:ek},{match:`\\.(\\.|${ex})+`}]},],h="([0-9a-fA-F]_*)+",f={className:"number",relevance:0,variants:[{match:"\\b(([0-9]_*)+)(\\.(([0-9]_*)+))?([eE][+-]?(([0-9]_*)+))?\\b"},{match:`\\b0x(${h})(\\.(${h}))?([pP][+-]?(([0-9]_*)+))?\\b`},{match:/\b0o([0-7]_*)+\b/},{match:/\b0b([01]_*)+\b/},]},E=(e="")=>({className:"subst",variants:[{match:m(/\\/,e,/[0\\tnr"']/)},{match:m(/\\/,e,/u\{[0-9a-fA-F]{1,8}\}/)},]}),$=(e="")=>({className:"subst",match:m(/\\/,e,/[\t ]*(?:[\r\n]|\r\n)/)}),y=(e="")=>({className:"subst",label:"interpol",begin:m(/\\/,e,/\(/),end:/\)/}),N=(e="")=>({begin:m(e,/"""/),end:m(/"""/,e),contains:[E(e),$(e),y(e)]}),w=(e="")=>({begin:m(e,/"/),end:m(/"/,e),contains:[E(e),y(e)]}),v={className:"string",variants:[N(),N("#"),N("##"),N("###"),w(),w("#"),w("##"),w("###"),]},x={match:m(/`/,eS,/`/)},k=[x,{className:"variable",match:/\$\d+/},{className:"variable",match:`\\$${eO}+`},],M=[{match:/(@|#(un)?)available/,className:"keyword",starts:{contains:[{begin:/\(/,end:/\)/,keywords:eT,contains:[...b,f,v]},]}},{className:"keyword",match:m(/@/,p(...eC))},{className:"meta",match:m(/@/,eS)},],O={match:g(/\b[A-Z]/),relevance:0,contains:[{className:"type",match:m(/(AV|CA|CF|CG|CI|CL|CM|CN|CT|MK|MP|MTK|MTL|NS|SCN|SK|UI|WK|XC)/,eO,"+")},{className:"type",match:eA,relevance:0},{match:/[?!]+/,relevance:0},{match:/\.\.\./,relevance:0},{match:m(/\s+&\s+/,g(eA)),relevance:0},]};O.contains.push({begin://,keywords:o,contains:[...a,...c,...M,u,O]});let S={begin:/\(/,end:/\)/,relevance:0,keywords:o,contains:["self",{match:m(eS,/\s*:/),keywords:"_|0",relevance:0},...a,...c,...d,...b,f,v,...k,...M,O,]},A={begin://,contains:[...a,O]},C={begin:/\(/,end:/\)/,keywords:o,contains:[{begin:p(g(m(eS,/\s*:/)),g(m(eS,/\s+/,eS,/\s*:/))),end:/:/,relevance:0,contains:[{className:"keyword",match:/\b_\b/},{className:"params",match:eS},]},...a,...c,...b,f,v,...M,O,S,],endsParent:!0,illegal:/["']/},T={match:[/func/,/\s+/,p(x.match,eS,ek)],className:{1:"keyword",3:"title.function"},contains:[A,C,n],illegal:[/\[/,/%/]};for(let R of v.variants){let D=R.contains.find(e=>"interpol"===e.label);D.keywords=o;let I=[...c,...d,...b,f,v,...k];D.contains=[...I,{begin:/\(/,end:/\)/,contains:["self",...I]},]}return{name:"Swift",keywords:o,contains:[...a,T,{match:[/\b(?:subscript|init[?!]?)/,/\s*(?=[<(])/],className:{1:"keyword"},contains:[A,C,n],illegal:/\[|%/},{beginKeywords:"struct protocol class extension enum actor",end:"\\{",excludeEnd:!0,keywords:o,contains:[e.inherit(e.TITLE_MODE,{className:"title.class",begin:/[A-Za-z$_][\u00C0-\u02B80-9A-Za-z$_]*/}),...c,]},{match:[/operator/,/\s+/,ek],className:{1:"keyword",3:"title"}},{begin:[/precedencegroup/,/\s+/,eA],className:{1:"keyword",3:"title"},contains:[O],keywords:[...ey,...e$],end:/}/},{beginKeywords:"import",end:/$/,contains:[...a],relevance:0},...c,...d,...b,f,v,...k,...M,O,S,]}},grmr_typescript(e){let n=em(e),t=["any","void","number","boolean","string","object","never","symbol","bigint","unknown",],a={beginKeywords:"namespace",end:/\{/,excludeEnd:!0,contains:[n.exports.CLASS_REFERENCE]},i={beginKeywords:"interface",end:/\{/,excludeEnd:!0,keywords:{keyword:"interface 
extends",built_in:t},contains:[n.exports.CLASS_REFERENCE]},r={$pattern:es,keyword:el.concat(["type","namespace","interface","public","private","protected","implements","declare","abstract","readonly","enum","override",]),literal:eo,built_in:eb.concat(t),"variable.language":eu},s={className:"meta",begin:"@[A-Za-z$_][0-9A-Za-z$_]*"},l=(e,n,t)=>{let a=e.contains.findIndex(e=>e.label===n);if(-1===a)throw Error("can not find mode to replace");e.contains.splice(a,1,t)};return Object.assign(n.keywords,r),n.exports.PARAMS_CONTAINS.push(s),n.contains=n.contains.concat([s,a,i]),l(n,"shebang",e.SHEBANG()),l(n,"use_strict",{className:"meta",relevance:10,begin:/^\s*['"]use strict['"]/}),n.contains.find(e=>"func.def"===e.label).relevance=0,Object.assign(n,{name:"TypeScript",aliases:["ts","tsx"]}),n},grmr_vbnet(e){let n=e.regex,t=/\d{1,2}\/\d{1,2}\/\d{4}/,a=/\d{4}-\d{1,2}-\d{1,2}/,i=/(\d|1[012])(:\d+){0,2} *(AM|PM)/,r=/\d{1,2}(:\d{1,2}){1,2}/,s={className:"literal",variants:[{begin:n.concat(/# */,n.either(a,t),/ *#/)},{begin:n.concat(/# */,r,/ *#/)},{begin:n.concat(/# */,i,/ *#/)},{begin:n.concat(/# */,n.either(a,t),/ +/,n.either(i,r),/ *#/)},]},l=e.COMMENT(/'''/,/$/,{contains:[{className:"doctag",begin:/<\/?/,end:/>/}]}),o=e.COMMENT(null,/$/,{variants:[{begin:/'/},{begin:/([\t ]|^)REM(?=\s)/}]});return{name:"Visual Basic .NET",aliases:["vb"],case_insensitive:!0,classNameAliases:{label:"symbol"},keywords:{keyword:"addhandler alias aggregate ansi as async assembly auto binary by byref byval call case catch class compare const continue custom declare default delegate dim distinct do each equals else elseif end enum erase error event exit explicit finally for friend from function get global goto group handles if implements imports in inherits interface into iterator join key let lib loop me mid module mustinherit mustoverride mybase myclass namespace narrowing new next notinheritable notoverridable of off on operator option optional order overloads overridable overrides paramarray partial preserve private property protected public raiseevent readonly redim removehandler resume return select set shadows shared skip static step stop structure strict sub synclock take text then throw to try unicode until using when where while widening with withevents writeonly yield",built_in:"addressof and andalso await directcast gettype getxmlnamespace is isfalse isnot istrue like mod nameof new not or orelse trycast typeof xor cbool cbyte cchar cdate cdbl cdec cint clng cobj csbyte cshort csng cstr cuint culng cushort",type:"boolean byte char date decimal double integer long object sbyte short single string uinteger ulong ushort",literal:"true false nothing"},illegal:"//|\\{|\\}|endif|gosub|variant|wend|^\\$ ",contains:[{className:"string",begin:/"(""|[^/n])"C\b/},{className:"string",begin:/"/,end:/"/,illegal:/\n/,contains:[{begin:/""/}]},s,{className:"number",relevance:0,variants:[{begin:/\b\d[\d_]*((\.[\d_]+(E[+-]?[\d_]+)?)|(E[+-]?[\d_]+))[RFD@!#]?/},{begin:/\b\d[\d_]*((U?[SIL])|[%&])?/},{begin:/&H[\dA-F_]+((U?[SIL])|[%&])?/},{begin:/&O[0-7_]+((U?[SIL])|[%&])?/},{begin:/&B[01_]+((U?[SIL])|[%&])?/},]},{className:"label",begin:/^\w+:/},l,o,{className:"meta",begin:/[\t ]*#(const|disable|else|elseif|enable|end|externalsource|if|region)\b/,end:/$/,keywords:{keyword:"const disable else elseif enable end externalsource if region then"},contains:[o]},]}},grmr_wasm(e){e.regex;let n=e.COMMENT(/\(;/,/;\)/);return 
n.contains.push("self"),{name:"WebAssembly",keywords:{$pattern:/[\w.]+/,keyword:["anyfunc","block","br","br_if","br_table","call","call_indirect","data","drop","elem","else","end","export","func","global.get","global.set","local.get","local.set","local.tee","get_global","get_local","global","if","import","local","loop","memory","memory.grow","memory.size","module","mut","nop","offset","param","result","return","select","set_global","set_local","start","table","tee_local","then","type","unreachable",]},contains:[e.COMMENT(/;;/,/$/),n,{match:[/(?:offset|align)/,/\s*/,/=/],className:{1:"keyword",3:"operator"}},{className:"variable",begin:/\$[\w_]+/},{match:/(\((?!;)|\))+/,className:"punctuation",relevance:0},{begin:[/(?:func|call|call_indirect)/,/\s+/,/\$[^\s)]+/],className:{1:"keyword",3:"title.function"}},e.QUOTE_STRING_MODE,{match:/(i32|i64|f32|f64)(?!\.)/,className:"type"},{className:"keyword",match:/\b(f32|f64|i32|i64)(?:\.(?:abs|add|and|ceil|clz|const|convert_[su]\/i(?:32|64)|copysign|ctz|demote\/f64|div(?:_[su])?|eqz?|extend_[su]\/i32|floor|ge(?:_[su])?|gt(?:_[su])?|le(?:_[su])?|load(?:(?:8|16|32)_[su])?|lt(?:_[su])?|max|min|mul|nearest|neg?|or|popcnt|promote\/f32|reinterpret\/[fi](?:32|64)|rem_[su]|rot[lr]|shl|shr_[su]|store(?:8|16|32)?|sqrt|sub|trunc(?:_[su]\/f(?:32|64))?|wrap\/i64|xor))\b/},{className:"number",relevance:0,match:/[+-]?\b(?:\d(?:_?\d)*(?:\.\d(?:_?\d)*)?(?:[eE][+-]?\d(?:_?\d)*)?|0x[\da-fA-F](?:_?[\da-fA-F])*(?:\.[\da-fA-F](?:_?[\da-fA-D])*)?(?:[pP][+-]?\d(?:_?\d)*)?)\b|\binf\b|\bnan(?::0x[\da-fA-F](?:_?[\da-fA-D])*)?\b/},]}},grmr_yaml(e){let n="true false yes no null",t="[\\w#;/?:@&=+$,.~*'()[\\]]+",a={className:"string",relevance:0,variants:[{begin:/'/,end:/'/},{begin:/"/,end:/"/},{begin:/\S+/},],contains:[e.BACKSLASH_ESCAPE,{className:"template-variable",variants:[{begin:/\{\{/,end:/\}\}/},{begin:/%\{/,end:/\}/},]},]},i=e.inherit(a,{variants:[{begin:/'/,end:/'/},{begin:/"/,end:/"/},{begin:/[^\s,{}[\]]+/},]}),r={end:",",endsWithParent:!0,excludeEnd:!0,keywords:n,relevance:0},s=[{className:"attr",variants:[{begin:"\\w[\\w :\\/.-]*:(?=[ ]|$)"},{begin:'"\\w[\\w :\\/.-]*":(?=[ ]|$)'},{begin:"'\\w[\\w :\\/.-]*':(?=[ ]|$)"},]},{className:"meta",begin:"^---\\s*$",relevance:10},{className:"string",begin:"[\\|>]([1-9]?[+-])?[ ]*\\n( +)[^ ][^\\n]*\\n(\\2[^\\n]+\\n?)*"},{begin:"<%[%=-]?",end:"[%-]?%>",subLanguage:"ruby",excludeBegin:!0,excludeEnd:!0,relevance:0},{className:"type",begin:"!\\w+!"+t},{className:"type",begin:"!<"+t+">"},{className:"type",begin:"!"+t},{className:"type",begin:"!!"+t},{className:"meta",begin:"&"+e.UNDERSCORE_IDENT_RE+"$"},{className:"meta",begin:"\\*"+e.UNDERSCORE_IDENT_RE+"$"},{className:"bullet",begin:"-(?=[ ]|$)",relevance:0},e.HASH_COMMENT_MODE,{beginKeywords:n,keywords:{literal:n}},{className:"number",begin:"\\b[0-9]{4}(-[0-9][0-9]){0,2}([Tt \\t][0-9][0-9]?(:[0-9][0-9]){2})?(\\.[0-9]*)?([ \\t])*(Z|[-+][0-9][0-9]?(:[0-9][0-9])?)?\\b"},{className:"number",begin:e.C_NUMBER_RE+"\\b",relevance:0},{begin:/\{/,end:/\}/,contains:[r],illegal:"\\n",relevance:0},{begin:"\\[",end:"\\]",contains:[r],illegal:"\\n",relevance:0},a,],l=[...s];return l.pop(),l.push(i),r.contains=l,{name:"YAML",case_insensitive:!0,aliases:["yml"],contains:s}}});let eD=Q;for(let eI of Object.keys(eR)){let eL=eI.replace("grmr_","").replace("_","-");eD.registerLanguage(eL,eR[eI])}return eD}();"object"==typeof exports&&"undefined"!=typeof module&&(module.exports=hljs); \ No newline at end of file diff --git 
a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/distributions/installed.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/distributions/installed.py deleted file mode 100644 index edb38aa1a6c54dcb73e2f74b6bdfff337841d99f..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/distributions/installed.py +++ /dev/null @@ -1,23 +0,0 @@ -from pip._internal.distributions.base import AbstractDistribution -from pip._internal.index.package_finder import PackageFinder -from pip._internal.metadata import BaseDistribution - - -class InstalledDistribution(AbstractDistribution): - """Represents an installed package. - - This does not need any preparation as the required information has already - been computed. - """ - - def get_metadata_distribution(self) -> BaseDistribution: - assert self.req.satisfied_by is not None, "not actually installed" - return self.req.satisfied_by - - def prepare_distribution_metadata( - self, - finder: PackageFinder, - build_isolation: bool, - check_build_deps: bool, - ) -> None: - pass diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pyparsing/results.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pyparsing/results.py deleted file mode 100644 index 0313049763bb09475051eff9841059fbbfa7d13f..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pyparsing/results.py +++ /dev/null @@ -1,796 +0,0 @@ -# results.py -from collections.abc import ( - MutableMapping, - Mapping, - MutableSequence, - Iterator, - Sequence, - Container, -) -import pprint -from typing import Tuple, Any, Dict, Set, List - -str_type: Tuple[type, ...] = (str, bytes) -_generator_type = type((_ for _ in ())) - - -class _ParseResultsWithOffset: - tup: Tuple["ParseResults", int] - __slots__ = ["tup"] - - def __init__(self, p1: "ParseResults", p2: int): - self.tup: Tuple[ParseResults, int] = (p1, p2) - - def __getitem__(self, i): - return self.tup[i] - - def __getstate__(self): - return self.tup - - def __setstate__(self, *args): - self.tup = args[0] - - -class ParseResults: - """Structured parse results, to provide multiple means of access to - the parsed data: - - - as a list (``len(results)``) - - by list index (``results[0], results[1]``, etc.) - - by attribute (``results.`` - see :class:`ParserElement.set_results_name`) - - Example:: - - integer = Word(nums) - date_str = (integer.set_results_name("year") + '/' - + integer.set_results_name("month") + '/' - + integer.set_results_name("day")) - # equivalent form: - # date_str = (integer("year") + '/' - # + integer("month") + '/' - # + integer("day")) - - # parse_string returns a ParseResults object - result = date_str.parse_string("1999/12/31") - - def test(s, fn=repr): - print(f"{s} -> {fn(eval(s))}") - test("list(result)") - test("result[0]") - test("result['month']") - test("result.day") - test("'month' in result") - test("'minutes' in result") - test("result.dump()", str) - - prints:: - - list(result) -> ['1999', '/', '12', '/', '31'] - result[0] -> '1999' - result['month'] -> '12' - result.day -> '31' - 'month' in result -> True - 'minutes' in result -> False - result.dump() -> ['1999', '/', '12', '/', '31'] - - day: '31' - - month: '12' - - year: '1999' - """ - - _null_values: Tuple[Any, ...] 
= (None, [], ()) - - _name: str - _parent: "ParseResults" - _all_names: Set[str] - _modal: bool - _toklist: List[Any] - _tokdict: Dict[str, Any] - - __slots__ = ( - "_name", - "_parent", - "_all_names", - "_modal", - "_toklist", - "_tokdict", - ) - - class List(list): - """ - Simple wrapper class to distinguish parsed list results that should be preserved - as actual Python lists, instead of being converted to :class:`ParseResults`:: - - LBRACK, RBRACK = map(pp.Suppress, "[]") - element = pp.Forward() - item = ppc.integer - element_list = LBRACK + pp.DelimitedList(element) + RBRACK - - # add parse actions to convert from ParseResults to actual Python collection types - def as_python_list(t): - return pp.ParseResults.List(t.as_list()) - element_list.add_parse_action(as_python_list) - - element <<= item | element_list - - element.run_tests(''' - 100 - [2,3,4] - [[2, 1],3,4] - [(2, 1),3,4] - (2,3,4) - ''', post_parse=lambda s, r: (r[0], type(r[0]))) - - prints:: - - 100 - (100, ) - - [2,3,4] - ([2, 3, 4], ) - - [[2, 1],3,4] - ([[2, 1], 3, 4], ) - - (Used internally by :class:`Group` when `aslist=True`.) - """ - - def __new__(cls, contained=None): - if contained is None: - contained = [] - - if not isinstance(contained, list): - raise TypeError( - f"{cls.__name__} may only be constructed with a list, not {type(contained).__name__}" - ) - - return list.__new__(cls) - - def __new__(cls, toklist=None, name=None, **kwargs): - if isinstance(toklist, ParseResults): - return toklist - self = object.__new__(cls) - self._name = None - self._parent = None - self._all_names = set() - - if toklist is None: - self._toklist = [] - elif isinstance(toklist, (list, _generator_type)): - self._toklist = ( - [toklist[:]] - if isinstance(toklist, ParseResults.List) - else list(toklist) - ) - else: - self._toklist = [toklist] - self._tokdict = dict() - return self - - # Performance tuning: we construct a *lot* of these, so keep this - # constructor as small and fast as possible - def __init__( - self, toklist=None, name=None, asList=True, modal=True, isinstance=isinstance - ): - self._tokdict: Dict[str, _ParseResultsWithOffset] - self._modal = modal - if name is not None and name != "": - if isinstance(name, int): - name = str(name) - if not modal: - self._all_names = {name} - self._name = name - if toklist not in self._null_values: - if isinstance(toklist, (str_type, type)): - toklist = [toklist] - if asList: - if isinstance(toklist, ParseResults): - self[name] = _ParseResultsWithOffset( - ParseResults(toklist._toklist), 0 - ) - else: - self[name] = _ParseResultsWithOffset( - ParseResults(toklist[0]), 0 - ) - self[name]._name = name - else: - try: - self[name] = toklist[0] - except (KeyError, TypeError, IndexError): - if toklist is not self: - self[name] = toklist - else: - self._name = name - - def __getitem__(self, i): - if isinstance(i, (int, slice)): - return self._toklist[i] - else: - if i not in self._all_names: - return self._tokdict[i][-1][0] - else: - return ParseResults([v[0] for v in self._tokdict[i]]) - - def __setitem__(self, k, v, isinstance=isinstance): - if isinstance(v, _ParseResultsWithOffset): - self._tokdict[k] = self._tokdict.get(k, list()) + [v] - sub = v[0] - elif isinstance(k, (int, slice)): - self._toklist[k] = v - sub = v - else: - self._tokdict[k] = self._tokdict.get(k, list()) + [ - _ParseResultsWithOffset(v, 0) - ] - sub = v - if isinstance(sub, ParseResults): - sub._parent = self - - def __delitem__(self, i): - if isinstance(i, (int, slice)): - mylen = len(self._toklist) - del 
self._toklist[i] - - # convert int to slice - if isinstance(i, int): - if i < 0: - i += mylen - i = slice(i, i + 1) - # get removed indices - removed = list(range(*i.indices(mylen))) - removed.reverse() - # fixup indices in token dictionary - for name, occurrences in self._tokdict.items(): - for j in removed: - for k, (value, position) in enumerate(occurrences): - occurrences[k] = _ParseResultsWithOffset( - value, position - (position > j) - ) - else: - del self._tokdict[i] - - def __contains__(self, k) -> bool: - return k in self._tokdict - - def __len__(self) -> int: - return len(self._toklist) - - def __bool__(self) -> bool: - return not not (self._toklist or self._tokdict) - - def __iter__(self) -> Iterator: - return iter(self._toklist) - - def __reversed__(self) -> Iterator: - return iter(self._toklist[::-1]) - - def keys(self): - return iter(self._tokdict) - - def values(self): - return (self[k] for k in self.keys()) - - def items(self): - return ((k, self[k]) for k in self.keys()) - - def haskeys(self) -> bool: - """ - Since ``keys()`` returns an iterator, this method is helpful in bypassing - code that looks for the existence of any defined results names.""" - return not not self._tokdict - - def pop(self, *args, **kwargs): - """ - Removes and returns item at specified index (default= ``last``). - Supports both ``list`` and ``dict`` semantics for ``pop()``. If - passed no argument or an integer argument, it will use ``list`` - semantics and pop tokens from the list of parsed tokens. If passed - a non-integer argument (most likely a string), it will use ``dict`` - semantics and pop the corresponding value from any defined results - names. A second default return value argument is supported, just as in - ``dict.pop()``. - - Example:: - - numlist = Word(nums)[...] - print(numlist.parse_string("0 123 321")) # -> ['0', '123', '321'] - - def remove_first(tokens): - tokens.pop(0) - numlist.add_parse_action(remove_first) - print(numlist.parse_string("0 123 321")) # -> ['123', '321'] - - label = Word(alphas) - patt = label("LABEL") + Word(nums)[1, ...] - print(patt.parse_string("AAB 123 321").dump()) - - # Use pop() in a parse action to remove named result (note that corresponding value is not - # removed from list form of results) - def remove_LABEL(tokens): - tokens.pop("LABEL") - return tokens - patt.add_parse_action(remove_LABEL) - print(patt.parse_string("AAB 123 321").dump()) - - prints:: - - ['AAB', '123', '321'] - - LABEL: 'AAB' - - ['AAB', '123', '321'] - """ - if not args: - args = [-1] - for k, v in kwargs.items(): - if k == "default": - args = (args[0], v) - else: - raise TypeError(f"pop() got an unexpected keyword argument {k!r}") - if isinstance(args[0], int) or len(args) == 1 or args[0] in self: - index = args[0] - ret = self[index] - del self[index] - return ret - else: - defaultvalue = args[1] - return defaultvalue - - def get(self, key, default_value=None): - """ - Returns named result matching the given key, or if there is no - such name, then returns the given ``default_value`` or ``None`` if no - ``default_value`` is specified. - - Similar to ``dict.get()``. 
- - Example:: - - integer = Word(nums) - date_str = integer("year") + '/' + integer("month") + '/' + integer("day") - - result = date_str.parse_string("1999/12/31") - print(result.get("year")) # -> '1999' - print(result.get("hour", "not specified")) # -> 'not specified' - print(result.get("hour")) # -> None - """ - if key in self: - return self[key] - else: - return default_value - - def insert(self, index, ins_string): - """ - Inserts new element at location index in the list of parsed tokens. - - Similar to ``list.insert()``. - - Example:: - - numlist = Word(nums)[...] - print(numlist.parse_string("0 123 321")) # -> ['0', '123', '321'] - - # use a parse action to insert the parse location in the front of the parsed results - def insert_locn(locn, tokens): - tokens.insert(0, locn) - numlist.add_parse_action(insert_locn) - print(numlist.parse_string("0 123 321")) # -> [0, '0', '123', '321'] - """ - self._toklist.insert(index, ins_string) - # fixup indices in token dictionary - for name, occurrences in self._tokdict.items(): - for k, (value, position) in enumerate(occurrences): - occurrences[k] = _ParseResultsWithOffset( - value, position + (position > index) - ) - - def append(self, item): - """ - Add single element to end of ``ParseResults`` list of elements. - - Example:: - - numlist = Word(nums)[...] - print(numlist.parse_string("0 123 321")) # -> ['0', '123', '321'] - - # use a parse action to compute the sum of the parsed integers, and add it to the end - def append_sum(tokens): - tokens.append(sum(map(int, tokens))) - numlist.add_parse_action(append_sum) - print(numlist.parse_string("0 123 321")) # -> ['0', '123', '321', 444] - """ - self._toklist.append(item) - - def extend(self, itemseq): - """ - Add sequence of elements to end of ``ParseResults`` list of elements. - - Example:: - - patt = Word(alphas)[1, ...] - - # use a parse action to append the reverse of the matched strings, to make a palindrome - def make_palindrome(tokens): - tokens.extend(reversed([t[::-1] for t in tokens])) - return ''.join(tokens) - patt.add_parse_action(make_palindrome) - print(patt.parse_string("lskdj sdlkjf lksd")) # -> 'lskdjsdlkjflksddsklfjkldsjdksl' - """ - if isinstance(itemseq, ParseResults): - self.__iadd__(itemseq) - else: - self._toklist.extend(itemseq) - - def clear(self): - """ - Clear all elements and results names. 
- """ - del self._toklist[:] - self._tokdict.clear() - - def __getattr__(self, name): - try: - return self[name] - except KeyError: - if name.startswith("__"): - raise AttributeError(name) - return "" - - def __add__(self, other: "ParseResults") -> "ParseResults": - ret = self.copy() - ret += other - return ret - - def __iadd__(self, other: "ParseResults") -> "ParseResults": - if not other: - return self - - if other._tokdict: - offset = len(self._toklist) - addoffset = lambda a: offset if a < 0 else a + offset - otheritems = other._tokdict.items() - otherdictitems = [ - (k, _ParseResultsWithOffset(v[0], addoffset(v[1]))) - for k, vlist in otheritems - for v in vlist - ] - for k, v in otherdictitems: - self[k] = v - if isinstance(v[0], ParseResults): - v[0]._parent = self - - self._toklist += other._toklist - self._all_names |= other._all_names - return self - - def __radd__(self, other) -> "ParseResults": - if isinstance(other, int) and other == 0: - # useful for merging many ParseResults using sum() builtin - return self.copy() - else: - # this may raise a TypeError - so be it - return other + self - - def __repr__(self) -> str: - return f"{type(self).__name__}({self._toklist!r}, {self.as_dict()})" - - def __str__(self) -> str: - return ( - "[" - + ", ".join( - [ - str(i) if isinstance(i, ParseResults) else repr(i) - for i in self._toklist - ] - ) - + "]" - ) - - def _asStringList(self, sep=""): - out = [] - for item in self._toklist: - if out and sep: - out.append(sep) - if isinstance(item, ParseResults): - out += item._asStringList() - else: - out.append(str(item)) - return out - - def as_list(self) -> list: - """ - Returns the parse results as a nested list of matching tokens, all converted to strings. - - Example:: - - patt = Word(alphas)[1, ...] - result = patt.parse_string("sldkj lsdkj sldkj") - # even though the result prints in string-like form, it is actually a pyparsing ParseResults - print(type(result), result) # -> ['sldkj', 'lsdkj', 'sldkj'] - - # Use as_list() to create an actual list - result_list = result.as_list() - print(type(result_list), result_list) # -> ['sldkj', 'lsdkj', 'sldkj'] - """ - return [ - res.as_list() if isinstance(res, ParseResults) else res - for res in self._toklist - ] - - def as_dict(self) -> dict: - """ - Returns the named parse results as a nested dictionary. - - Example:: - - integer = Word(nums) - date_str = integer("year") + '/' + integer("month") + '/' + integer("day") - - result = date_str.parse_string('12/31/1999') - print(type(result), repr(result)) # -> (['12', '/', '31', '/', '1999'], {'day': [('1999', 4)], 'year': [('12', 0)], 'month': [('31', 2)]}) - - result_dict = result.as_dict() - print(type(result_dict), repr(result_dict)) # -> {'day': '1999', 'year': '12', 'month': '31'} - - # even though a ParseResults supports dict-like access, sometime you just need to have a dict - import json - print(json.dumps(result)) # -> Exception: TypeError: ... is not JSON serializable - print(json.dumps(result.as_dict())) # -> {"month": "31", "day": "1999", "year": "12"} - """ - - def to_item(obj): - if isinstance(obj, ParseResults): - return obj.as_dict() if obj.haskeys() else [to_item(v) for v in obj] - else: - return obj - - return dict((k, to_item(v)) for k, v in self.items()) - - def copy(self) -> "ParseResults": - """ - Returns a new shallow copy of a :class:`ParseResults` object. `ParseResults` - items contained within the source are shared with the copy. 
Use - :class:`ParseResults.deepcopy()` to create a copy with its own separate - content values. - """ - ret = ParseResults(self._toklist) - ret._tokdict = self._tokdict.copy() - ret._parent = self._parent - ret._all_names |= self._all_names - ret._name = self._name - return ret - - def deepcopy(self) -> "ParseResults": - """ - Returns a new deep copy of a :class:`ParseResults` object. - """ - ret = self.copy() - # replace values with copies if they are of known mutable types - for i, obj in enumerate(self._toklist): - if isinstance(obj, ParseResults): - self._toklist[i] = obj.deepcopy() - elif isinstance(obj, (str, bytes)): - pass - elif isinstance(obj, MutableMapping): - self._toklist[i] = dest = type(obj)() - for k, v in obj.items(): - dest[k] = v.deepcopy() if isinstance(v, ParseResults) else v - elif isinstance(obj, Container): - self._toklist[i] = type(obj)( - v.deepcopy() if isinstance(v, ParseResults) else v for v in obj - ) - return ret - - def get_name(self): - r""" - Returns the results name for this token expression. Useful when several - different expressions might match at a particular location. - - Example:: - - integer = Word(nums) - ssn_expr = Regex(r"\d\d\d-\d\d-\d\d\d\d") - house_number_expr = Suppress('#') + Word(nums, alphanums) - user_data = (Group(house_number_expr)("house_number") - | Group(ssn_expr)("ssn") - | Group(integer)("age")) - user_info = user_data[1, ...] - - result = user_info.parse_string("22 111-22-3333 #221B") - for item in result: - print(item.get_name(), ':', item[0]) - - prints:: - - age : 22 - ssn : 111-22-3333 - house_number : 221B - """ - if self._name: - return self._name - elif self._parent: - par: "ParseResults" = self._parent - parent_tokdict_items = par._tokdict.items() - return next( - ( - k - for k, vlist in parent_tokdict_items - for v, loc in vlist - if v is self - ), - None, - ) - elif ( - len(self) == 1 - and len(self._tokdict) == 1 - and next(iter(self._tokdict.values()))[0][1] in (0, -1) - ): - return next(iter(self._tokdict.keys())) - else: - return None - - def dump(self, indent="", full=True, include_list=True, _depth=0) -> str: - """ - Diagnostic method for listing out the contents of - a :class:`ParseResults`. Accepts an optional ``indent`` argument so - that this string can be embedded in a nested display of other data. 
- - Example:: - - integer = Word(nums) - date_str = integer("year") + '/' + integer("month") + '/' + integer("day") - - result = date_str.parse_string('1999/12/31') - print(result.dump()) - - prints:: - - ['1999', '/', '12', '/', '31'] - - day: '31' - - month: '12' - - year: '1999' - """ - out = [] - NL = "\n" - out.append(indent + str(self.as_list()) if include_list else "") - - if full: - if self.haskeys(): - items = sorted((str(k), v) for k, v in self.items()) - for k, v in items: - if out: - out.append(NL) - out.append(f"{indent}{(' ' * _depth)}- {k}: ") - if isinstance(v, ParseResults): - if v: - out.append( - v.dump( - indent=indent, - full=full, - include_list=include_list, - _depth=_depth + 1, - ) - ) - else: - out.append(str(v)) - else: - out.append(repr(v)) - if any(isinstance(vv, ParseResults) for vv in self): - v = self - for i, vv in enumerate(v): - if isinstance(vv, ParseResults): - out.append( - "\n{}{}[{}]:\n{}{}{}".format( - indent, - (" " * (_depth)), - i, - indent, - (" " * (_depth + 1)), - vv.dump( - indent=indent, - full=full, - include_list=include_list, - _depth=_depth + 1, - ), - ) - ) - else: - out.append( - "\n%s%s[%d]:\n%s%s%s" - % ( - indent, - (" " * (_depth)), - i, - indent, - (" " * (_depth + 1)), - str(vv), - ) - ) - - return "".join(out) - - def pprint(self, *args, **kwargs): - """ - Pretty-printer for parsed results as a list, using the - `pprint `_ module. - Accepts additional positional or keyword args as defined for - `pprint.pprint `_ . - - Example:: - - ident = Word(alphas, alphanums) - num = Word(nums) - func = Forward() - term = ident | num | Group('(' + func + ')') - func <<= ident + Group(Optional(DelimitedList(term))) - result = func.parse_string("fna a,b,(fnb c,d,200),100") - result.pprint(width=40) - - prints:: - - ['fna', - ['a', - 'b', - ['(', 'fnb', ['c', 'd', '200'], ')'], - '100']] - """ - pprint.pprint(self.as_list(), *args, **kwargs) - - # add support for pickle protocol - def __getstate__(self): - return ( - self._toklist, - ( - self._tokdict.copy(), - None, - self._all_names, - self._name, - ), - ) - - def __setstate__(self, state): - self._toklist, (self._tokdict, par, inAccumNames, self._name) = state - self._all_names = set(inAccumNames) - self._parent = None - - def __getnewargs__(self): - return self._toklist, self._name - - def __dir__(self): - return dir(type(self)) + list(self.keys()) - - @classmethod - def from_dict(cls, other, name=None) -> "ParseResults": - """ - Helper classmethod to construct a ``ParseResults`` from a ``dict``, preserving the - name-value relations as results names. If an optional ``name`` argument is - given, a nested ``ParseResults`` will be returned. 
- """ - - def is_iterable(obj): - try: - iter(obj) - except Exception: - return False - # str's are iterable, but in pyparsing, we don't want to iterate over them - else: - return not isinstance(obj, str_type) - - ret = cls([]) - for k, v in other.items(): - if isinstance(v, Mapping): - ret += cls.from_dict(v, name=k) - else: - ret += cls([v], name=k, asList=is_iterable(v)) - if name is not None: - ret = cls([ret], name=name) - return ret - - asList = as_list - """Deprecated - use :class:`as_list`""" - asDict = as_dict - """Deprecated - use :class:`as_dict`""" - getName = get_name - """Deprecated - use :class:`get_name`""" - - -MutableMapping.register(ParseResults) -MutableSequence.register(ParseResults) diff --git a/spaces/pknez/face-swap-docker/roop/processors/__init__.py b/spaces/pknez/face-swap-docker/roop/processors/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/plzdontcry/dakubettergpt/src/assets/icons/FolderIcon.tsx b/spaces/plzdontcry/dakubettergpt/src/assets/icons/FolderIcon.tsx deleted file mode 100644 index 6621594692a11e3f2134bae276f67e754e877187..0000000000000000000000000000000000000000 --- a/spaces/plzdontcry/dakubettergpt/src/assets/icons/FolderIcon.tsx +++ /dev/null @@ -1,17 +0,0 @@ -import React from 'react'; - -const FolderIcon = (props: React.SVGProps) => { - return ( - - - - ); -}; - -export default FolderIcon; diff --git a/spaces/pratikskarnik/Indian-Food-Recognition/README.md b/spaces/pratikskarnik/Indian-Food-Recognition/README.md deleted file mode 100644 index 49d7ef60009e8f2cf53884eec944a8e06c53406b..0000000000000000000000000000000000000000 --- a/spaces/pratikskarnik/Indian-Food-Recognition/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Indian Food Recognition -emoji: 💻 -colorFrom: blue -colorTo: gray -sdk: gradio -sdk_version: 3.4 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/prerna9811/Chord/portaudio/include/pa_win_wdmks.h b/spaces/prerna9811/Chord/portaudio/include/pa_win_wdmks.h deleted file mode 100644 index bc2f6897c5712be184813ffcfa06244810e66090..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/include/pa_win_wdmks.h +++ /dev/null @@ -1,137 +0,0 @@ -#ifndef PA_WIN_WDMKS_H -#define PA_WIN_WDMKS_H -/* - * $Id$ - * PortAudio Portable Real-Time Audio Library - * WDM/KS specific extensions - * - * Copyright (c) 1999-2007 Ross Bencina and Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -/** @file - @ingroup public_header - @brief WDM Kernel Streaming-specific PortAudio API extension header file. -*/ - - -#include "portaudio.h" - -#include - -#ifdef __cplusplus -extern "C" -{ -#endif /* __cplusplus */ - - /** Flags to indicate valid fields in PaWinWDMKSInfo. - @see PaWinWDMKSInfo - @version Available as of 19.5.0. - */ - typedef enum PaWinWDMKSFlags - { - /** Makes WDMKS use the supplied latency figures instead of relying on the frame size reported - by the WaveCyclic device. Use at own risk! - */ - paWinWDMKSOverrideFramesize = (1 << 0), - - /** Makes WDMKS (output stream) use the given channelMask instead of the default. - @version Available as of 19.5.0. - */ - paWinWDMKSUseGivenChannelMask = (1 << 1), - - } PaWinWDMKSFlags; - - typedef struct PaWinWDMKSInfo{ - unsigned long size; /**< sizeof(PaWinWDMKSInfo) */ - PaHostApiTypeId hostApiType; /**< paWDMKS */ - unsigned long version; /**< 1 */ - - /** Flags indicate which fields are valid. - @see PaWinWDMKSFlags - @version Available as of 19.5.0. - */ - unsigned long flags; - - /** The number of packets to use for WaveCyclic devices, range is [2, 8]. Set to zero for default value of 2. */ - unsigned noOfPackets; - - /** If paWinWDMKSUseGivenChannelMask bit is set in flags, use this as channelMask instead of default. - @see PaWinWDMKSFlags - @version Available as of 19.5.0. - */ - unsigned channelMask; - } PaWinWDMKSInfo; - - typedef enum PaWDMKSType - { - Type_kNotUsed, - Type_kWaveCyclic, - Type_kWaveRT, - Type_kCnt, - } PaWDMKSType; - - typedef enum PaWDMKSSubType - { - SubType_kUnknown, - SubType_kNotification, - SubType_kPolled, - SubType_kCnt, - } PaWDMKSSubType; - - typedef struct PaWinWDMKSDeviceInfo { - wchar_t filterPath[MAX_PATH]; /**< KS filter path in Unicode! */ - wchar_t topologyPath[MAX_PATH]; /**< Topology filter path in Unicode! 
*/ - PaWDMKSType streamingType; - GUID deviceProductGuid; /**< The product GUID of the device (if supported) */ - } PaWinWDMKSDeviceInfo; - - typedef struct PaWDMKSDirectionSpecificStreamInfo - { - PaDeviceIndex device; - unsigned channels; /**< No of channels the device is opened with */ - unsigned framesPerHostBuffer; /**< No of frames of the device buffer */ - int endpointPinId; /**< Endpoint pin ID (on topology filter if topologyName is not empty) */ - int muxNodeId; /**< Only valid for input */ - PaWDMKSSubType streamingSubType; /**< Not known until device is opened for streaming */ - } PaWDMKSDirectionSpecificStreamInfo; - - typedef struct PaWDMKSSpecificStreamInfo { - PaWDMKSDirectionSpecificStreamInfo input; - PaWDMKSDirectionSpecificStreamInfo output; - } PaWDMKSSpecificStreamInfo; - -#ifdef __cplusplus -} -#endif /* __cplusplus */ - -#endif /* PA_WIN_DS_H */ diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/DcxImagePlugin.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/DcxImagePlugin.py deleted file mode 100644 index cde9d42f09f304679180b673bf4d8fdb68d6b4b3..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/DcxImagePlugin.py +++ /dev/null @@ -1,79 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# DCX file handling -# -# DCX is a container file format defined by Intel, commonly used -# for fax applications. Each DCX file consists of a directory -# (a list of file offsets) followed by a set of (usually 1-bit) -# PCX files. -# -# History: -# 1995-09-09 fl Created -# 1996-03-20 fl Properly derived from PcxImageFile. -# 1998-07-15 fl Renamed offset attribute to avoid name clash -# 2002-07-30 fl Fixed file handling -# -# Copyright (c) 1997-98 by Secret Labs AB. -# Copyright (c) 1995-96 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - -from . import Image -from ._binary import i32le as i32 -from .PcxImagePlugin import PcxImageFile - -MAGIC = 0x3ADE68B1 # QUIZ: what's this value, then? - - -def _accept(prefix): - return len(prefix) >= 4 and i32(prefix) == MAGIC - - -## -# Image plugin for the Intel DCX format. 
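# A minimal usage sketch (illustrative only; "fax.dcx" is a made-up filename):
#
#     from PIL import Image
#
#     with Image.open("fax.dcx") as im:        # dispatched to DcxImageFile via the
#         for page in range(im.n_frames):      # registrations at the bottom of this file;
#             im.seek(page)                    # each frame is one PCX image from the DCX directory
#             im.save(f"fax_page_{page}.png")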
- - -class DcxImageFile(PcxImageFile): - format = "DCX" - format_description = "Intel DCX" - _close_exclusive_fp_after_loading = False - - def _open(self): - # Header - s = self.fp.read(4) - if not _accept(s): - msg = "not a DCX file" - raise SyntaxError(msg) - - # Component directory - self._offset = [] - for i in range(1024): - offset = i32(self.fp.read(4)) - if not offset: - break - self._offset.append(offset) - - self._fp = self.fp - self.frame = None - self.n_frames = len(self._offset) - self.is_animated = self.n_frames > 1 - self.seek(0) - - def seek(self, frame): - if not self._seek_check(frame): - return - self.frame = frame - self.fp = self._fp - self.fp.seek(self._offset[frame]) - PcxImageFile._open(self) - - def tell(self): - return self.frame - - -Image.register_open(DcxImageFile.format, DcxImageFile, _accept) - -Image.register_extension(DcxImageFile.format, ".dcx") diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/C_F_F_.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/C_F_F_.py deleted file mode 100644 index c231599e37b3a5864a774387d717baf297957876..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/C_F_F_.py +++ /dev/null @@ -1,46 +0,0 @@ -from io import BytesIO -from fontTools import cffLib -from . import DefaultTable - - -class table_C_F_F_(DefaultTable.DefaultTable): - def __init__(self, tag=None): - DefaultTable.DefaultTable.__init__(self, tag) - self.cff = cffLib.CFFFontSet() - self._gaveGlyphOrder = False - - def decompile(self, data, otFont): - self.cff.decompile(BytesIO(data), otFont, isCFF2=False) - assert len(self.cff) == 1, "can't deal with multi-font CFF tables." 
- - def compile(self, otFont): - f = BytesIO() - self.cff.compile(f, otFont, isCFF2=False) - return f.getvalue() - - def haveGlyphNames(self): - if hasattr(self.cff[self.cff.fontNames[0]], "ROS"): - return False # CID-keyed font - else: - return True - - def getGlyphOrder(self): - if self._gaveGlyphOrder: - from fontTools import ttLib - - raise ttLib.TTLibError("illegal use of getGlyphOrder()") - self._gaveGlyphOrder = True - return self.cff[self.cff.fontNames[0]].getGlyphOrder() - - def setGlyphOrder(self, glyphOrder): - pass - # XXX - # self.cff[self.cff.fontNames[0]].setGlyphOrder(glyphOrder) - - def toXML(self, writer, otFont): - self.cff.toXML(writer) - - def fromXML(self, name, attrs, content, otFont): - if not hasattr(self, "cff"): - self.cff = cffLib.CFFFontSet() - self.cff.fromXML(name, attrs, content, otFont) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-2f5b4dfc.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-2f5b4dfc.js deleted file mode 100644 index 5358ddb29cb0715846ea56dce350f8b159064d48..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-2f5b4dfc.js +++ /dev/null @@ -1,2 +0,0 @@ -import{B as O}from"./Button-89057c03.js";import{B as P}from"./BlockTitle-49fa584d.js";import{S as Q}from"./Index-37584f50.js";import{default as Je}from"./Example-c3701be8.js";import"./index-0526d562.js";import"./svelte/svelte.js";import"./Info-586340e7.js";const{SvelteComponent:V,append:S,attr:g,detach:W,element:z,init:X,init_binding_group:Y,insert:Z,listen:M,noop:T,run_all:y,safe_not_equal:p,set_data:x,set_input_value:U,space:$,text:ee,toggle_class:j}=window.__gradio__svelte__internal,{createEventDispatcher:le}=window.__gradio__svelte__internal;function te(i){let e,l,n,_=!1,u,f,h,m,d,c,r;return d=Y(i[8][0]),{c(){e=z("label"),l=z("input"),u=$(),f=z("span"),h=ee(i[1]),l.disabled=i[3],g(l,"type","radio"),g(l,"name",n=`radio-${i[2]}`),l.__value=i[2],U(l,l.__value),g(l,"class","svelte-1mhtq7j"),g(f,"class","ml-2 svelte-1mhtq7j"),g(e,"data-testid",m=`${i[2]}-radio-label`),g(e,"class","svelte-1mhtq7j"),j(e,"disabled",i[3]),j(e,"selected",i[4]),d.p(l)},m(o,t){Z(o,e,t),S(e,l),l.checked=l.__value===i[0],S(e,u),S(e,f),S(f,h),c||(r=[M(l,"input",i[6]),M(l,"change",i[7])],c=!0)},p(o,[t]){t&8&&(l.disabled=o[3]),t&4&&n!==(n=`radio-${o[2]}`)&&g(l,"name",n),t&4&&(l.__value=o[2],U(l,l.__value),_=!0),(_||t&1)&&(l.checked=l.__value===o[0]),t&2&&x(h,o[1]),t&4&&m!==(m=`${o[2]}-radio-label`)&&g(e,"data-testid",m),t&8&&j(e,"disabled",o[3]),t&16&&j(e,"selected",o[4])},i:T,o:T,d(o){o&&W(e),d.r(),c=!1,y(r)}}}function ne(i,e,l){let n,{display_value:_}=e,{internal_value:u}=e,{disabled:f=!1}=e,{selected:h=null}=e;const m=le(),d=[[]],c=()=>m("input",u);function r(){h=this.__value,l(0,h)}return i.$$set=o=>{"display_value"in o&&l(1,_=o.display_value),"internal_value"in o&&l(2,u=o.internal_value),"disabled"in o&&l(3,f=o.disabled),"selected"in o&&l(0,h=o.selected)},i.$$.update=()=>{i.$$.dirty&5&&l(4,n=h===u)},[h,_,u,f,n,m,c,r,d]}class ie extends V{constructor(e){super(),X(this,e,ne,te,p,{display_value:1,internal_value:2,disabled:3,selected:0})}}const 
ae=ie;const{SvelteComponent:se,add_flush_callback:_e,assign:ue,attr:oe,bind:fe,binding_callbacks:de,check_outros:ce,create_component:C,destroy_component:R,detach:w,element:re,empty:me,ensure_array_like:A,get_spread_object:he,get_spread_update:be,group_outros:ge,init:ve,insert:k,mount_component:E,outro_and_destroy_block:we,safe_not_equal:ke,set_data:qe,space:F,text:Be,transition_in:q,transition_out:B,update_keyed_each:Se}=window.__gradio__svelte__internal,{afterUpdate:je}=window.__gradio__svelte__internal;function G(i,e,l){const n=i.slice();return n[19]=e[l],n[21]=l,n}function Ce(i){let e;return{c(){e=Be(i[2])},m(l,n){k(l,e,n)},p(l,n){n&4&&qe(e,l[2])},d(l){l&&w(e)}}}function H(i,e){let l,n,_,u;function f(d){e[16](d)}function h(){return e[17](e[19],e[21])}let m={display_value:e[19][0],internal_value:e[19][1],disabled:e[13]};return e[0]!==void 0&&(m.selected=e[0]),n=new ae({props:m}),de.push(()=>fe(n,"selected",f)),n.$on("input",h),{key:i,first:null,c(){l=me(),C(n.$$.fragment),this.first=l},m(d,c){k(d,l,c),E(n,d,c),u=!0},p(d,c){e=d;const r={};c&128&&(r.display_value=e[19][0]),c&128&&(r.internal_value=e[19][1]),c&8192&&(r.disabled=e[13]),!_&&c&1&&(_=!0,r.selected=e[0],_e(()=>_=!1)),n.$set(r)},i(d){u||(q(n.$$.fragment,d),u=!0)},o(d){B(n.$$.fragment,d),u=!1},d(d){d&&w(l),R(n,d)}}}function Re(i){let e,l,n,_,u,f=[],h=new Map,m;const d=[{autoscroll:i[1].autoscroll},{i18n:i[1].i18n},i[12]];let c={};for(let t=0;tt[21];for(let t=0;t{l(14,r=!1)});function K(s){c=s,l(0,c)}const L=(s,N)=>_.dispatch("select",{value:s[1],index:N});return i.$$set=s=>{"gradio"in s&&l(1,_=s.gradio),"label"in s&&l(2,u=s.label),"info"in s&&l(3,f=s.info),"elem_id"in s&&l(4,h=s.elem_id),"elem_classes"in s&&l(5,m=s.elem_classes),"visible"in s&&l(6,d=s.visible),"value"in s&&l(0,c=s.value),"value_is_output"in s&&l(14,r=s.value_is_output),"choices"in s&&l(7,o=s.choices),"show_label"in s&&l(8,t=s.show_label),"container"in s&&l(9,a=s.container),"scale"in s&&l(10,b=s.scale),"min_width"in s&&l(11,v=s.min_width),"loading_status"in s&&l(12,D=s.loading_status),"interactive"in s&&l(15,I=s.interactive)},i.$$.update=()=>{i.$$.dirty&1&&J(),i.$$.dirty&32768&&l(13,n=!I)},[c,_,u,f,h,m,d,o,t,a,b,v,D,n,r,I,K,L]}class Fe extends se{constructor(e){super(),ve(this,e,Ie,Ee,ke,{gradio:1,label:2,info:3,elem_id:4,elem_classes:5,visible:6,value:0,value_is_output:14,choices:7,show_label:8,container:9,scale:10,min_width:11,loading_status:12,interactive:15})}}export{Je as BaseExample,ae as BaseRadio,Fe as default}; -//# sourceMappingURL=Index-2f5b4dfc.js.map diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/_cm.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/_cm.py deleted file mode 100644 index b7a7c878957f3864a2f83f3cbb58c8f01e2fcbc8..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/_cm.py +++ /dev/null @@ -1,1440 +0,0 @@ -""" -Nothing here but dictionaries for generating LinearSegmentedColormaps, -and a dictionary of these dictionaries. - -Documentation for each is in pyplot.colormaps(). Please update this -with the purpose and type of your colormap if you add data for one here. 
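A minimal sketch of how one of these dictionaries becomes a usable colormap
(illustrative only; ``_autumn_data`` is one of the dictionaries defined below)::

    from matplotlib.colors import LinearSegmentedColormap

    autumn_like = LinearSegmentedColormap("autumn_like", _autumn_data, N=256)
    rgba = autumn_like(0.5)   # sample the colormap at its midpoint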
-""" - -from functools import partial - -import numpy as np - -_binary_data = { - 'red': ((0., 1., 1.), (1., 0., 0.)), - 'green': ((0., 1., 1.), (1., 0., 0.)), - 'blue': ((0., 1., 1.), (1., 0., 0.)) - } - -_autumn_data = {'red': ((0., 1.0, 1.0), (1.0, 1.0, 1.0)), - 'green': ((0., 0., 0.), (1.0, 1.0, 1.0)), - 'blue': ((0., 0., 0.), (1.0, 0., 0.))} - -_bone_data = {'red': ((0., 0., 0.), - (0.746032, 0.652778, 0.652778), - (1.0, 1.0, 1.0)), - 'green': ((0., 0., 0.), - (0.365079, 0.319444, 0.319444), - (0.746032, 0.777778, 0.777778), - (1.0, 1.0, 1.0)), - 'blue': ((0., 0., 0.), - (0.365079, 0.444444, 0.444444), - (1.0, 1.0, 1.0))} - -_cool_data = {'red': ((0., 0., 0.), (1.0, 1.0, 1.0)), - 'green': ((0., 1., 1.), (1.0, 0., 0.)), - 'blue': ((0., 1., 1.), (1.0, 1., 1.))} - -_copper_data = {'red': ((0., 0., 0.), - (0.809524, 1.000000, 1.000000), - (1.0, 1.0, 1.0)), - 'green': ((0., 0., 0.), - (1.0, 0.7812, 0.7812)), - 'blue': ((0., 0., 0.), - (1.0, 0.4975, 0.4975))} - -def _flag_red(x): return 0.75 * np.sin((x * 31.5 + 0.25) * np.pi) + 0.5 -def _flag_green(x): return np.sin(x * 31.5 * np.pi) -def _flag_blue(x): return 0.75 * np.sin((x * 31.5 - 0.25) * np.pi) + 0.5 -_flag_data = {'red': _flag_red, 'green': _flag_green, 'blue': _flag_blue} - -def _prism_red(x): return 0.75 * np.sin((x * 20.9 + 0.25) * np.pi) + 0.67 -def _prism_green(x): return 0.75 * np.sin((x * 20.9 - 0.25) * np.pi) + 0.33 -def _prism_blue(x): return -1.1 * np.sin((x * 20.9) * np.pi) -_prism_data = {'red': _prism_red, 'green': _prism_green, 'blue': _prism_blue} - -def _ch_helper(gamma, s, r, h, p0, p1, x): - """Helper function for generating picklable cubehelix colormaps.""" - # Apply gamma factor to emphasise low or high intensity values - xg = x ** gamma - # Calculate amplitude and angle of deviation from the black to white - # diagonal in the plane of constant perceived intensity. - a = h * xg * (1 - xg) / 2 - phi = 2 * np.pi * (s / 3 + r * x) - return xg + a * (p0 * np.cos(phi) + p1 * np.sin(phi)) - -def cubehelix(gamma=1.0, s=0.5, r=-1.5, h=1.0): - """ - Return custom data dictionary of (r, g, b) conversion functions, which can - be used with :func:`register_cmap`, for the cubehelix color scheme. - - Unlike most other color schemes cubehelix was designed by D.A. Green to - be monotonically increasing in terms of perceived brightness. - Also, when printed on a black and white postscript printer, the scheme - results in a greyscale with monotonically increasing brightness. - This color scheme is named cubehelix because the (r, g, b) values produced - can be visualised as a squashed helix around the diagonal in the - (r, g, b) color cube. - - For a unit color cube (i.e. 3D coordinates for (r, g, b) each in the - range 0 to 1) the color scheme starts at (r, g, b) = (0, 0, 0), i.e. black, - and finishes at (r, g, b) = (1, 1, 1), i.e. white. For some fraction *x*, - between 0 and 1, the color is the corresponding grey value at that - fraction along the black to white diagonal (x, x, x) plus a color - element. This color element is calculated in a plane of constant - perceived intensity and controlled by the following parameters. - - Parameters - ---------- - gamma : float, default: 1 - Gamma factor emphasizing either low intensity values (gamma < 1), or - high intensity values (gamma > 1). - s : float, default: 0.5 (purple) - The starting color. - r : float, default: -1.5 - The number of r, g, b rotations in color that are made from the start - to the end of the color scheme. The default of -1.5 corresponds to -> - B -> G -> R -> B. 
- h : float, default: 1 - The hue, i.e. how saturated the colors are. If this parameter is zero - then the color scheme is purely a greyscale. - """ - return {'red': partial(_ch_helper, gamma, s, r, h, -0.14861, 1.78277), - 'green': partial(_ch_helper, gamma, s, r, h, -0.29227, -0.90649), - 'blue': partial(_ch_helper, gamma, s, r, h, 1.97294, 0.0)} - -_cubehelix_data = cubehelix() - -_bwr_data = ((0.0, 0.0, 1.0), (1.0, 1.0, 1.0), (1.0, 0.0, 0.0)) -_brg_data = ((0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)) - -# Gnuplot palette functions -def _g0(x): return 0 -def _g1(x): return 0.5 -def _g2(x): return 1 -def _g3(x): return x -def _g4(x): return x ** 2 -def _g5(x): return x ** 3 -def _g6(x): return x ** 4 -def _g7(x): return np.sqrt(x) -def _g8(x): return np.sqrt(np.sqrt(x)) -def _g9(x): return np.sin(x * np.pi / 2) -def _g10(x): return np.cos(x * np.pi / 2) -def _g11(x): return np.abs(x - 0.5) -def _g12(x): return (2 * x - 1) ** 2 -def _g13(x): return np.sin(x * np.pi) -def _g14(x): return np.abs(np.cos(x * np.pi)) -def _g15(x): return np.sin(x * 2 * np.pi) -def _g16(x): return np.cos(x * 2 * np.pi) -def _g17(x): return np.abs(np.sin(x * 2 * np.pi)) -def _g18(x): return np.abs(np.cos(x * 2 * np.pi)) -def _g19(x): return np.abs(np.sin(x * 4 * np.pi)) -def _g20(x): return np.abs(np.cos(x * 4 * np.pi)) -def _g21(x): return 3 * x -def _g22(x): return 3 * x - 1 -def _g23(x): return 3 * x - 2 -def _g24(x): return np.abs(3 * x - 1) -def _g25(x): return np.abs(3 * x - 2) -def _g26(x): return (3 * x - 1) / 2 -def _g27(x): return (3 * x - 2) / 2 -def _g28(x): return np.abs((3 * x - 1) / 2) -def _g29(x): return np.abs((3 * x - 2) / 2) -def _g30(x): return x / 0.32 - 0.78125 -def _g31(x): return 2 * x - 0.84 -def _g32(x): - ret = np.zeros(len(x)) - m = (x < 0.25) - ret[m] = 4 * x[m] - m = (x >= 0.25) & (x < 0.92) - ret[m] = -2 * x[m] + 1.84 - m = (x >= 0.92) - ret[m] = x[m] / 0.08 - 11.5 - return ret -def _g33(x): return np.abs(2 * x - 0.5) -def _g34(x): return 2 * x -def _g35(x): return 2 * x - 0.5 -def _g36(x): return 2 * x - 1 - -gfunc = {i: globals()[f"_g{i}"] for i in range(37)} - -_gnuplot_data = { - 'red': gfunc[7], - 'green': gfunc[5], - 'blue': gfunc[15], -} - -_gnuplot2_data = { - 'red': gfunc[30], - 'green': gfunc[31], - 'blue': gfunc[32], -} - -_ocean_data = { - 'red': gfunc[23], - 'green': gfunc[28], - 'blue': gfunc[3], -} - -_afmhot_data = { - 'red': gfunc[34], - 'green': gfunc[35], - 'blue': gfunc[36], -} - -_rainbow_data = { - 'red': gfunc[33], - 'green': gfunc[13], - 'blue': gfunc[10], -} - -_seismic_data = ( - (0.0, 0.0, 0.3), (0.0, 0.0, 1.0), - (1.0, 1.0, 1.0), (1.0, 0.0, 0.0), - (0.5, 0.0, 0.0)) - -_terrain_data = ( - (0.00, (0.2, 0.2, 0.6)), - (0.15, (0.0, 0.6, 1.0)), - (0.25, (0.0, 0.8, 0.4)), - (0.50, (1.0, 1.0, 0.6)), - (0.75, (0.5, 0.36, 0.33)), - (1.00, (1.0, 1.0, 1.0))) - -_gray_data = {'red': ((0., 0, 0), (1., 1, 1)), - 'green': ((0., 0, 0), (1., 1, 1)), - 'blue': ((0., 0, 0), (1., 1, 1))} - -_hot_data = {'red': ((0., 0.0416, 0.0416), - (0.365079, 1.000000, 1.000000), - (1.0, 1.0, 1.0)), - 'green': ((0., 0., 0.), - (0.365079, 0.000000, 0.000000), - (0.746032, 1.000000, 1.000000), - (1.0, 1.0, 1.0)), - 'blue': ((0., 0., 0.), - (0.746032, 0.000000, 0.000000), - (1.0, 1.0, 1.0))} - -_hsv_data = {'red': ((0., 1., 1.), - (0.158730, 1.000000, 1.000000), - (0.174603, 0.968750, 0.968750), - (0.333333, 0.031250, 0.031250), - (0.349206, 0.000000, 0.000000), - (0.666667, 0.000000, 0.000000), - (0.682540, 0.031250, 0.031250), - (0.841270, 0.968750, 0.968750), - (0.857143, 1.000000, 
1.000000), - (1.0, 1.0, 1.0)), - 'green': ((0., 0., 0.), - (0.158730, 0.937500, 0.937500), - (0.174603, 1.000000, 1.000000), - (0.507937, 1.000000, 1.000000), - (0.666667, 0.062500, 0.062500), - (0.682540, 0.000000, 0.000000), - (1.0, 0., 0.)), - 'blue': ((0., 0., 0.), - (0.333333, 0.000000, 0.000000), - (0.349206, 0.062500, 0.062500), - (0.507937, 1.000000, 1.000000), - (0.841270, 1.000000, 1.000000), - (0.857143, 0.937500, 0.937500), - (1.0, 0.09375, 0.09375))} - -_jet_data = {'red': ((0.00, 0, 0), - (0.35, 0, 0), - (0.66, 1, 1), - (0.89, 1, 1), - (1.00, 0.5, 0.5)), - 'green': ((0.000, 0, 0), - (0.125, 0, 0), - (0.375, 1, 1), - (0.640, 1, 1), - (0.910, 0, 0), - (1.000, 0, 0)), - 'blue': ((0.00, 0.5, 0.5), - (0.11, 1, 1), - (0.34, 1, 1), - (0.65, 0, 0), - (1.00, 0, 0))} - -_pink_data = {'red': ((0., 0.1178, 0.1178), (0.015873, 0.195857, 0.195857), - (0.031746, 0.250661, 0.250661), - (0.047619, 0.295468, 0.295468), - (0.063492, 0.334324, 0.334324), - (0.079365, 0.369112, 0.369112), - (0.095238, 0.400892, 0.400892), - (0.111111, 0.430331, 0.430331), - (0.126984, 0.457882, 0.457882), - (0.142857, 0.483867, 0.483867), - (0.158730, 0.508525, 0.508525), - (0.174603, 0.532042, 0.532042), - (0.190476, 0.554563, 0.554563), - (0.206349, 0.576204, 0.576204), - (0.222222, 0.597061, 0.597061), - (0.238095, 0.617213, 0.617213), - (0.253968, 0.636729, 0.636729), - (0.269841, 0.655663, 0.655663), - (0.285714, 0.674066, 0.674066), - (0.301587, 0.691980, 0.691980), - (0.317460, 0.709441, 0.709441), - (0.333333, 0.726483, 0.726483), - (0.349206, 0.743134, 0.743134), - (0.365079, 0.759421, 0.759421), - (0.380952, 0.766356, 0.766356), - (0.396825, 0.773229, 0.773229), - (0.412698, 0.780042, 0.780042), - (0.428571, 0.786796, 0.786796), - (0.444444, 0.793492, 0.793492), - (0.460317, 0.800132, 0.800132), - (0.476190, 0.806718, 0.806718), - (0.492063, 0.813250, 0.813250), - (0.507937, 0.819730, 0.819730), - (0.523810, 0.826160, 0.826160), - (0.539683, 0.832539, 0.832539), - (0.555556, 0.838870, 0.838870), - (0.571429, 0.845154, 0.845154), - (0.587302, 0.851392, 0.851392), - (0.603175, 0.857584, 0.857584), - (0.619048, 0.863731, 0.863731), - (0.634921, 0.869835, 0.869835), - (0.650794, 0.875897, 0.875897), - (0.666667, 0.881917, 0.881917), - (0.682540, 0.887896, 0.887896), - (0.698413, 0.893835, 0.893835), - (0.714286, 0.899735, 0.899735), - (0.730159, 0.905597, 0.905597), - (0.746032, 0.911421, 0.911421), - (0.761905, 0.917208, 0.917208), - (0.777778, 0.922958, 0.922958), - (0.793651, 0.928673, 0.928673), - (0.809524, 0.934353, 0.934353), - (0.825397, 0.939999, 0.939999), - (0.841270, 0.945611, 0.945611), - (0.857143, 0.951190, 0.951190), - (0.873016, 0.956736, 0.956736), - (0.888889, 0.962250, 0.962250), - (0.904762, 0.967733, 0.967733), - (0.920635, 0.973185, 0.973185), - (0.936508, 0.978607, 0.978607), - (0.952381, 0.983999, 0.983999), - (0.968254, 0.989361, 0.989361), - (0.984127, 0.994695, 0.994695), (1.0, 1.0, 1.0)), - 'green': ((0., 0., 0.), (0.015873, 0.102869, 0.102869), - (0.031746, 0.145479, 0.145479), - (0.047619, 0.178174, 0.178174), - (0.063492, 0.205738, 0.205738), - (0.079365, 0.230022, 0.230022), - (0.095238, 0.251976, 0.251976), - (0.111111, 0.272166, 0.272166), - (0.126984, 0.290957, 0.290957), - (0.142857, 0.308607, 0.308607), - (0.158730, 0.325300, 0.325300), - (0.174603, 0.341178, 0.341178), - (0.190476, 0.356348, 0.356348), - (0.206349, 0.370899, 0.370899), - (0.222222, 0.384900, 0.384900), - (0.238095, 0.398410, 0.398410), - (0.253968, 0.411476, 0.411476), - (0.269841, 0.424139, 0.424139), 
- (0.285714, 0.436436, 0.436436), - (0.301587, 0.448395, 0.448395), - (0.317460, 0.460044, 0.460044), - (0.333333, 0.471405, 0.471405), - (0.349206, 0.482498, 0.482498), - (0.365079, 0.493342, 0.493342), - (0.380952, 0.517549, 0.517549), - (0.396825, 0.540674, 0.540674), - (0.412698, 0.562849, 0.562849), - (0.428571, 0.584183, 0.584183), - (0.444444, 0.604765, 0.604765), - (0.460317, 0.624669, 0.624669), - (0.476190, 0.643958, 0.643958), - (0.492063, 0.662687, 0.662687), - (0.507937, 0.680900, 0.680900), - (0.523810, 0.698638, 0.698638), - (0.539683, 0.715937, 0.715937), - (0.555556, 0.732828, 0.732828), - (0.571429, 0.749338, 0.749338), - (0.587302, 0.765493, 0.765493), - (0.603175, 0.781313, 0.781313), - (0.619048, 0.796819, 0.796819), - (0.634921, 0.812029, 0.812029), - (0.650794, 0.826960, 0.826960), - (0.666667, 0.841625, 0.841625), - (0.682540, 0.856040, 0.856040), - (0.698413, 0.870216, 0.870216), - (0.714286, 0.884164, 0.884164), - (0.730159, 0.897896, 0.897896), - (0.746032, 0.911421, 0.911421), - (0.761905, 0.917208, 0.917208), - (0.777778, 0.922958, 0.922958), - (0.793651, 0.928673, 0.928673), - (0.809524, 0.934353, 0.934353), - (0.825397, 0.939999, 0.939999), - (0.841270, 0.945611, 0.945611), - (0.857143, 0.951190, 0.951190), - (0.873016, 0.956736, 0.956736), - (0.888889, 0.962250, 0.962250), - (0.904762, 0.967733, 0.967733), - (0.920635, 0.973185, 0.973185), - (0.936508, 0.978607, 0.978607), - (0.952381, 0.983999, 0.983999), - (0.968254, 0.989361, 0.989361), - (0.984127, 0.994695, 0.994695), (1.0, 1.0, 1.0)), - 'blue': ((0., 0., 0.), (0.015873, 0.102869, 0.102869), - (0.031746, 0.145479, 0.145479), - (0.047619, 0.178174, 0.178174), - (0.063492, 0.205738, 0.205738), - (0.079365, 0.230022, 0.230022), - (0.095238, 0.251976, 0.251976), - (0.111111, 0.272166, 0.272166), - (0.126984, 0.290957, 0.290957), - (0.142857, 0.308607, 0.308607), - (0.158730, 0.325300, 0.325300), - (0.174603, 0.341178, 0.341178), - (0.190476, 0.356348, 0.356348), - (0.206349, 0.370899, 0.370899), - (0.222222, 0.384900, 0.384900), - (0.238095, 0.398410, 0.398410), - (0.253968, 0.411476, 0.411476), - (0.269841, 0.424139, 0.424139), - (0.285714, 0.436436, 0.436436), - (0.301587, 0.448395, 0.448395), - (0.317460, 0.460044, 0.460044), - (0.333333, 0.471405, 0.471405), - (0.349206, 0.482498, 0.482498), - (0.365079, 0.493342, 0.493342), - (0.380952, 0.503953, 0.503953), - (0.396825, 0.514344, 0.514344), - (0.412698, 0.524531, 0.524531), - (0.428571, 0.534522, 0.534522), - (0.444444, 0.544331, 0.544331), - (0.460317, 0.553966, 0.553966), - (0.476190, 0.563436, 0.563436), - (0.492063, 0.572750, 0.572750), - (0.507937, 0.581914, 0.581914), - (0.523810, 0.590937, 0.590937), - (0.539683, 0.599824, 0.599824), - (0.555556, 0.608581, 0.608581), - (0.571429, 0.617213, 0.617213), - (0.587302, 0.625727, 0.625727), - (0.603175, 0.634126, 0.634126), - (0.619048, 0.642416, 0.642416), - (0.634921, 0.650600, 0.650600), - (0.650794, 0.658682, 0.658682), - (0.666667, 0.666667, 0.666667), - (0.682540, 0.674556, 0.674556), - (0.698413, 0.682355, 0.682355), - (0.714286, 0.690066, 0.690066), - (0.730159, 0.697691, 0.697691), - (0.746032, 0.705234, 0.705234), - (0.761905, 0.727166, 0.727166), - (0.777778, 0.748455, 0.748455), - (0.793651, 0.769156, 0.769156), - (0.809524, 0.789314, 0.789314), - (0.825397, 0.808969, 0.808969), - (0.841270, 0.828159, 0.828159), - (0.857143, 0.846913, 0.846913), - (0.873016, 0.865261, 0.865261), - (0.888889, 0.883229, 0.883229), - (0.904762, 0.900837, 0.900837), - (0.920635, 0.918109, 0.918109), - 
(0.936508, 0.935061, 0.935061), - (0.952381, 0.951711, 0.951711), - (0.968254, 0.968075, 0.968075), - (0.984127, 0.984167, 0.984167), (1.0, 1.0, 1.0))} - -_spring_data = {'red': ((0., 1., 1.), (1.0, 1.0, 1.0)), - 'green': ((0., 0., 0.), (1.0, 1.0, 1.0)), - 'blue': ((0., 1., 1.), (1.0, 0.0, 0.0))} - - -_summer_data = {'red': ((0., 0., 0.), (1.0, 1.0, 1.0)), - 'green': ((0., 0.5, 0.5), (1.0, 1.0, 1.0)), - 'blue': ((0., 0.4, 0.4), (1.0, 0.4, 0.4))} - - -_winter_data = {'red': ((0., 0., 0.), (1.0, 0.0, 0.0)), - 'green': ((0., 0., 0.), (1.0, 1.0, 1.0)), - 'blue': ((0., 1., 1.), (1.0, 0.5, 0.5))} - -_nipy_spectral_data = { - 'red': [ - (0.0, 0.0, 0.0), (0.05, 0.4667, 0.4667), - (0.10, 0.5333, 0.5333), (0.15, 0.0, 0.0), - (0.20, 0.0, 0.0), (0.25, 0.0, 0.0), - (0.30, 0.0, 0.0), (0.35, 0.0, 0.0), - (0.40, 0.0, 0.0), (0.45, 0.0, 0.0), - (0.50, 0.0, 0.0), (0.55, 0.0, 0.0), - (0.60, 0.0, 0.0), (0.65, 0.7333, 0.7333), - (0.70, 0.9333, 0.9333), (0.75, 1.0, 1.0), - (0.80, 1.0, 1.0), (0.85, 1.0, 1.0), - (0.90, 0.8667, 0.8667), (0.95, 0.80, 0.80), - (1.0, 0.80, 0.80), - ], - 'green': [ - (0.0, 0.0, 0.0), (0.05, 0.0, 0.0), - (0.10, 0.0, 0.0), (0.15, 0.0, 0.0), - (0.20, 0.0, 0.0), (0.25, 0.4667, 0.4667), - (0.30, 0.6000, 0.6000), (0.35, 0.6667, 0.6667), - (0.40, 0.6667, 0.6667), (0.45, 0.6000, 0.6000), - (0.50, 0.7333, 0.7333), (0.55, 0.8667, 0.8667), - (0.60, 1.0, 1.0), (0.65, 1.0, 1.0), - (0.70, 0.9333, 0.9333), (0.75, 0.8000, 0.8000), - (0.80, 0.6000, 0.6000), (0.85, 0.0, 0.0), - (0.90, 0.0, 0.0), (0.95, 0.0, 0.0), - (1.0, 0.80, 0.80), - ], - 'blue': [ - (0.0, 0.0, 0.0), (0.05, 0.5333, 0.5333), - (0.10, 0.6000, 0.6000), (0.15, 0.6667, 0.6667), - (0.20, 0.8667, 0.8667), (0.25, 0.8667, 0.8667), - (0.30, 0.8667, 0.8667), (0.35, 0.6667, 0.6667), - (0.40, 0.5333, 0.5333), (0.45, 0.0, 0.0), - (0.5, 0.0, 0.0), (0.55, 0.0, 0.0), - (0.60, 0.0, 0.0), (0.65, 0.0, 0.0), - (0.70, 0.0, 0.0), (0.75, 0.0, 0.0), - (0.80, 0.0, 0.0), (0.85, 0.0, 0.0), - (0.90, 0.0, 0.0), (0.95, 0.0, 0.0), - (1.0, 0.80, 0.80), - ], -} - - -# 34 colormaps based on color specifications and designs -# developed by Cynthia Brewer (https://colorbrewer2.org/). -# The ColorBrewer palettes have been included under the terms -# of an Apache-stype license (for details, see the file -# LICENSE_COLORBREWER in the license directory of the matplotlib -# source distribution). 
- -# RGB values taken from Brewer's Excel sheet, divided by 255 - -_Blues_data = ( - (0.96862745098039216, 0.98431372549019602, 1.0 ), - (0.87058823529411766, 0.92156862745098034, 0.96862745098039216), - (0.77647058823529413, 0.85882352941176465, 0.93725490196078431), - (0.61960784313725492, 0.792156862745098 , 0.88235294117647056), - (0.41960784313725491, 0.68235294117647061, 0.83921568627450982), - (0.25882352941176473, 0.5725490196078431 , 0.77647058823529413), - (0.12941176470588237, 0.44313725490196076, 0.70980392156862748), - (0.03137254901960784, 0.31764705882352939, 0.61176470588235299), - (0.03137254901960784, 0.18823529411764706, 0.41960784313725491) - ) - -_BrBG_data = ( - (0.32941176470588235, 0.18823529411764706, 0.0196078431372549 ), - (0.5490196078431373 , 0.31764705882352939, 0.0392156862745098 ), - (0.74901960784313726, 0.50588235294117645, 0.17647058823529413), - (0.87450980392156863, 0.76078431372549016, 0.49019607843137253), - (0.96470588235294119, 0.90980392156862744, 0.76470588235294112), - (0.96078431372549022, 0.96078431372549022, 0.96078431372549022), - (0.7803921568627451 , 0.91764705882352937, 0.89803921568627454), - (0.50196078431372548, 0.80392156862745101, 0.75686274509803919), - (0.20784313725490197, 0.59215686274509804, 0.5607843137254902 ), - (0.00392156862745098, 0.4 , 0.36862745098039218), - (0.0 , 0.23529411764705882, 0.18823529411764706) - ) - -_BuGn_data = ( - (0.96862745098039216, 0.9882352941176471 , 0.99215686274509807), - (0.89803921568627454, 0.96078431372549022, 0.97647058823529409), - (0.8 , 0.92549019607843142, 0.90196078431372551), - (0.6 , 0.84705882352941175, 0.78823529411764703), - (0.4 , 0.76078431372549016, 0.64313725490196083), - (0.25490196078431371, 0.68235294117647061, 0.46274509803921571), - (0.13725490196078433, 0.54509803921568623, 0.27058823529411763), - (0.0 , 0.42745098039215684, 0.17254901960784313), - (0.0 , 0.26666666666666666, 0.10588235294117647) - ) - -_BuPu_data = ( - (0.96862745098039216, 0.9882352941176471 , 0.99215686274509807), - (0.8784313725490196 , 0.92549019607843142, 0.95686274509803926), - (0.74901960784313726, 0.82745098039215681, 0.90196078431372551), - (0.61960784313725492, 0.73725490196078436, 0.85490196078431369), - (0.5490196078431373 , 0.58823529411764708, 0.77647058823529413), - (0.5490196078431373 , 0.41960784313725491, 0.69411764705882351), - (0.53333333333333333, 0.25490196078431371, 0.61568627450980395), - (0.50588235294117645, 0.05882352941176471, 0.48627450980392156), - (0.30196078431372547, 0.0 , 0.29411764705882354) - ) - -_GnBu_data = ( - (0.96862745098039216, 0.9882352941176471 , 0.94117647058823528), - (0.8784313725490196 , 0.95294117647058818, 0.85882352941176465), - (0.8 , 0.92156862745098034, 0.77254901960784317), - (0.6588235294117647 , 0.8666666666666667 , 0.70980392156862748), - (0.4823529411764706 , 0.8 , 0.7686274509803922 ), - (0.30588235294117649, 0.70196078431372544, 0.82745098039215681), - (0.16862745098039217, 0.5490196078431373 , 0.74509803921568629), - (0.03137254901960784, 0.40784313725490196, 0.67450980392156867), - (0.03137254901960784, 0.25098039215686274, 0.50588235294117645) - ) - -_Greens_data = ( - (0.96862745098039216, 0.9882352941176471 , 0.96078431372549022), - (0.89803921568627454, 0.96078431372549022, 0.8784313725490196 ), - (0.7803921568627451 , 0.9137254901960784 , 0.75294117647058822), - (0.63137254901960782, 0.85098039215686272, 0.60784313725490191), - (0.45490196078431372, 0.7686274509803922 , 0.46274509803921571), - (0.25490196078431371, 0.6705882352941176 , 
0.36470588235294116), - (0.13725490196078433, 0.54509803921568623, 0.27058823529411763), - (0.0 , 0.42745098039215684, 0.17254901960784313), - (0.0 , 0.26666666666666666, 0.10588235294117647) - ) - -_Greys_data = ( - (1.0 , 1.0 , 1.0 ), - (0.94117647058823528, 0.94117647058823528, 0.94117647058823528), - (0.85098039215686272, 0.85098039215686272, 0.85098039215686272), - (0.74117647058823533, 0.74117647058823533, 0.74117647058823533), - (0.58823529411764708, 0.58823529411764708, 0.58823529411764708), - (0.45098039215686275, 0.45098039215686275, 0.45098039215686275), - (0.32156862745098042, 0.32156862745098042, 0.32156862745098042), - (0.14509803921568629, 0.14509803921568629, 0.14509803921568629), - (0.0 , 0.0 , 0.0 ) - ) - -_Oranges_data = ( - (1.0 , 0.96078431372549022, 0.92156862745098034), - (0.99607843137254903, 0.90196078431372551, 0.80784313725490198), - (0.99215686274509807, 0.81568627450980391, 0.63529411764705879), - (0.99215686274509807, 0.68235294117647061, 0.41960784313725491), - (0.99215686274509807, 0.55294117647058827, 0.23529411764705882), - (0.94509803921568625, 0.41176470588235292, 0.07450980392156863), - (0.85098039215686272, 0.28235294117647058, 0.00392156862745098), - (0.65098039215686276, 0.21176470588235294, 0.01176470588235294), - (0.49803921568627452, 0.15294117647058825, 0.01568627450980392) - ) - -_OrRd_data = ( - (1.0 , 0.96862745098039216, 0.92549019607843142), - (0.99607843137254903, 0.90980392156862744, 0.78431372549019607), - (0.99215686274509807, 0.83137254901960789, 0.61960784313725492), - (0.99215686274509807, 0.73333333333333328, 0.51764705882352946), - (0.9882352941176471 , 0.55294117647058827, 0.34901960784313724), - (0.93725490196078431, 0.396078431372549 , 0.28235294117647058), - (0.84313725490196079, 0.18823529411764706, 0.12156862745098039), - (0.70196078431372544, 0.0 , 0.0 ), - (0.49803921568627452, 0.0 , 0.0 ) - ) - -_PiYG_data = ( - (0.55686274509803924, 0.00392156862745098, 0.32156862745098042), - (0.77254901960784317, 0.10588235294117647, 0.49019607843137253), - (0.87058823529411766, 0.46666666666666667, 0.68235294117647061), - (0.94509803921568625, 0.71372549019607845, 0.85490196078431369), - (0.99215686274509807, 0.8784313725490196 , 0.93725490196078431), - (0.96862745098039216, 0.96862745098039216, 0.96862745098039216), - (0.90196078431372551, 0.96078431372549022, 0.81568627450980391), - (0.72156862745098038, 0.88235294117647056, 0.52549019607843139), - (0.49803921568627452, 0.73725490196078436, 0.25490196078431371), - (0.30196078431372547, 0.5725490196078431 , 0.12941176470588237), - (0.15294117647058825, 0.39215686274509803, 0.09803921568627451) - ) - -_PRGn_data = ( - (0.25098039215686274, 0.0 , 0.29411764705882354), - (0.46274509803921571, 0.16470588235294117, 0.51372549019607838), - (0.6 , 0.4392156862745098 , 0.6705882352941176 ), - (0.76078431372549016, 0.6470588235294118 , 0.81176470588235294), - (0.90588235294117647, 0.83137254901960789, 0.90980392156862744), - (0.96862745098039216, 0.96862745098039216, 0.96862745098039216), - (0.85098039215686272, 0.94117647058823528, 0.82745098039215681), - (0.65098039215686276, 0.85882352941176465, 0.62745098039215685), - (0.35294117647058826, 0.68235294117647061, 0.38039215686274508), - (0.10588235294117647, 0.47058823529411764, 0.21568627450980393), - (0.0 , 0.26666666666666666, 0.10588235294117647) - ) - -_PuBu_data = ( - (1.0 , 0.96862745098039216, 0.98431372549019602), - (0.92549019607843142, 0.90588235294117647, 0.94901960784313721), - (0.81568627450980391, 0.81960784313725488, 
0.90196078431372551), - (0.65098039215686276, 0.74117647058823533, 0.85882352941176465), - (0.45490196078431372, 0.66274509803921566, 0.81176470588235294), - (0.21176470588235294, 0.56470588235294117, 0.75294117647058822), - (0.0196078431372549 , 0.4392156862745098 , 0.69019607843137254), - (0.01568627450980392, 0.35294117647058826, 0.55294117647058827), - (0.00784313725490196, 0.2196078431372549 , 0.34509803921568627) - ) - -_PuBuGn_data = ( - (1.0 , 0.96862745098039216, 0.98431372549019602), - (0.92549019607843142, 0.88627450980392153, 0.94117647058823528), - (0.81568627450980391, 0.81960784313725488, 0.90196078431372551), - (0.65098039215686276, 0.74117647058823533, 0.85882352941176465), - (0.40392156862745099, 0.66274509803921566, 0.81176470588235294), - (0.21176470588235294, 0.56470588235294117, 0.75294117647058822), - (0.00784313725490196, 0.50588235294117645, 0.54117647058823526), - (0.00392156862745098, 0.42352941176470588, 0.34901960784313724), - (0.00392156862745098, 0.27450980392156865, 0.21176470588235294) - ) - -_PuOr_data = ( - (0.49803921568627452, 0.23137254901960785, 0.03137254901960784), - (0.70196078431372544, 0.34509803921568627, 0.02352941176470588), - (0.8784313725490196 , 0.50980392156862742, 0.07843137254901961), - (0.99215686274509807, 0.72156862745098038, 0.38823529411764707), - (0.99607843137254903, 0.8784313725490196 , 0.71372549019607845), - (0.96862745098039216, 0.96862745098039216, 0.96862745098039216), - (0.84705882352941175, 0.85490196078431369, 0.92156862745098034), - (0.69803921568627447, 0.6705882352941176 , 0.82352941176470584), - (0.50196078431372548, 0.45098039215686275, 0.67450980392156867), - (0.32941176470588235, 0.15294117647058825, 0.53333333333333333), - (0.17647058823529413, 0.0 , 0.29411764705882354) - ) - -_PuRd_data = ( - (0.96862745098039216, 0.95686274509803926, 0.97647058823529409), - (0.90588235294117647, 0.88235294117647056, 0.93725490196078431), - (0.83137254901960789, 0.72549019607843135, 0.85490196078431369), - (0.78823529411764703, 0.58039215686274515, 0.7803921568627451 ), - (0.87450980392156863, 0.396078431372549 , 0.69019607843137254), - (0.90588235294117647, 0.16078431372549021, 0.54117647058823526), - (0.80784313725490198, 0.07058823529411765, 0.33725490196078434), - (0.59607843137254901, 0.0 , 0.2627450980392157 ), - (0.40392156862745099, 0.0 , 0.12156862745098039) - ) - -_Purples_data = ( - (0.9882352941176471 , 0.98431372549019602, 0.99215686274509807), - (0.93725490196078431, 0.92941176470588238, 0.96078431372549022), - (0.85490196078431369, 0.85490196078431369, 0.92156862745098034), - (0.73725490196078436, 0.74117647058823533, 0.86274509803921573), - (0.61960784313725492, 0.60392156862745094, 0.78431372549019607), - (0.50196078431372548, 0.49019607843137253, 0.72941176470588232), - (0.41568627450980394, 0.31764705882352939, 0.63921568627450975), - (0.32941176470588235, 0.15294117647058825, 0.5607843137254902 ), - (0.24705882352941178, 0.0 , 0.49019607843137253) - ) - -_RdBu_data = ( - (0.40392156862745099, 0.0 , 0.12156862745098039), - (0.69803921568627447, 0.09411764705882353, 0.16862745098039217), - (0.83921568627450982, 0.37647058823529411, 0.30196078431372547), - (0.95686274509803926, 0.6470588235294118 , 0.50980392156862742), - (0.99215686274509807, 0.85882352941176465, 0.7803921568627451 ), - (0.96862745098039216, 0.96862745098039216, 0.96862745098039216), - (0.81960784313725488, 0.89803921568627454, 0.94117647058823528), - (0.5725490196078431 , 0.77254901960784317, 0.87058823529411766), - (0.2627450980392157 , 
0.57647058823529407, 0.76470588235294112), - (0.12941176470588237, 0.4 , 0.67450980392156867), - (0.0196078431372549 , 0.18823529411764706, 0.38039215686274508) - ) - -_RdGy_data = ( - (0.40392156862745099, 0.0 , 0.12156862745098039), - (0.69803921568627447, 0.09411764705882353, 0.16862745098039217), - (0.83921568627450982, 0.37647058823529411, 0.30196078431372547), - (0.95686274509803926, 0.6470588235294118 , 0.50980392156862742), - (0.99215686274509807, 0.85882352941176465, 0.7803921568627451 ), - (1.0 , 1.0 , 1.0 ), - (0.8784313725490196 , 0.8784313725490196 , 0.8784313725490196 ), - (0.72941176470588232, 0.72941176470588232, 0.72941176470588232), - (0.52941176470588236, 0.52941176470588236, 0.52941176470588236), - (0.30196078431372547, 0.30196078431372547, 0.30196078431372547), - (0.10196078431372549, 0.10196078431372549, 0.10196078431372549) - ) - -_RdPu_data = ( - (1.0 , 0.96862745098039216, 0.95294117647058818), - (0.99215686274509807, 0.8784313725490196 , 0.86666666666666667), - (0.9882352941176471 , 0.77254901960784317, 0.75294117647058822), - (0.98039215686274506, 0.62352941176470589, 0.70980392156862748), - (0.96862745098039216, 0.40784313725490196, 0.63137254901960782), - (0.86666666666666667, 0.20392156862745098, 0.59215686274509804), - (0.68235294117647061, 0.00392156862745098, 0.49411764705882355), - (0.47843137254901963, 0.00392156862745098, 0.46666666666666667), - (0.28627450980392155, 0.0 , 0.41568627450980394) - ) - -_RdYlBu_data = ( - (0.6470588235294118 , 0.0 , 0.14901960784313725), - (0.84313725490196079, 0.18823529411764706 , 0.15294117647058825), - (0.95686274509803926, 0.42745098039215684 , 0.2627450980392157 ), - (0.99215686274509807, 0.68235294117647061 , 0.38039215686274508), - (0.99607843137254903, 0.8784313725490196 , 0.56470588235294117), - (1.0 , 1.0 , 0.74901960784313726), - (0.8784313725490196 , 0.95294117647058818 , 0.97254901960784312), - (0.6705882352941176 , 0.85098039215686272 , 0.9137254901960784 ), - (0.45490196078431372, 0.67843137254901964 , 0.81960784313725488), - (0.27058823529411763, 0.45882352941176469 , 0.70588235294117652), - (0.19215686274509805, 0.21176470588235294 , 0.58431372549019611) - ) - -_RdYlGn_data = ( - (0.6470588235294118 , 0.0 , 0.14901960784313725), - (0.84313725490196079, 0.18823529411764706 , 0.15294117647058825), - (0.95686274509803926, 0.42745098039215684 , 0.2627450980392157 ), - (0.99215686274509807, 0.68235294117647061 , 0.38039215686274508), - (0.99607843137254903, 0.8784313725490196 , 0.54509803921568623), - (1.0 , 1.0 , 0.74901960784313726), - (0.85098039215686272, 0.93725490196078431 , 0.54509803921568623), - (0.65098039215686276, 0.85098039215686272 , 0.41568627450980394), - (0.4 , 0.74117647058823533 , 0.38823529411764707), - (0.10196078431372549, 0.59607843137254901 , 0.31372549019607843), - (0.0 , 0.40784313725490196 , 0.21568627450980393) - ) - -_Reds_data = ( - (1.0 , 0.96078431372549022 , 0.94117647058823528), - (0.99607843137254903, 0.8784313725490196 , 0.82352941176470584), - (0.9882352941176471 , 0.73333333333333328 , 0.63137254901960782), - (0.9882352941176471 , 0.5725490196078431 , 0.44705882352941179), - (0.98431372549019602, 0.41568627450980394 , 0.29019607843137257), - (0.93725490196078431, 0.23137254901960785 , 0.17254901960784313), - (0.79607843137254897, 0.094117647058823528, 0.11372549019607843), - (0.6470588235294118 , 0.058823529411764705, 0.08235294117647058), - (0.40392156862745099, 0.0 , 0.05098039215686274) - ) - -_Spectral_data = ( - (0.61960784313725492, 0.003921568627450980, 
0.25882352941176473), - (0.83529411764705885, 0.24313725490196078 , 0.30980392156862746), - (0.95686274509803926, 0.42745098039215684 , 0.2627450980392157 ), - (0.99215686274509807, 0.68235294117647061 , 0.38039215686274508), - (0.99607843137254903, 0.8784313725490196 , 0.54509803921568623), - (1.0 , 1.0 , 0.74901960784313726), - (0.90196078431372551, 0.96078431372549022 , 0.59607843137254901), - (0.6705882352941176 , 0.8666666666666667 , 0.64313725490196083), - (0.4 , 0.76078431372549016 , 0.6470588235294118 ), - (0.19607843137254902, 0.53333333333333333 , 0.74117647058823533), - (0.36862745098039218, 0.30980392156862746 , 0.63529411764705879) - ) - -_YlGn_data = ( - (1.0 , 1.0 , 0.89803921568627454), - (0.96862745098039216, 0.9882352941176471 , 0.72549019607843135), - (0.85098039215686272, 0.94117647058823528 , 0.63921568627450975), - (0.67843137254901964, 0.8666666666666667 , 0.55686274509803924), - (0.47058823529411764, 0.77647058823529413 , 0.47450980392156861), - (0.25490196078431371, 0.6705882352941176 , 0.36470588235294116), - (0.13725490196078433, 0.51764705882352946 , 0.2627450980392157 ), - (0.0 , 0.40784313725490196 , 0.21568627450980393), - (0.0 , 0.27058823529411763 , 0.16078431372549021) - ) - -_YlGnBu_data = ( - (1.0 , 1.0 , 0.85098039215686272), - (0.92941176470588238, 0.97254901960784312 , 0.69411764705882351), - (0.7803921568627451 , 0.9137254901960784 , 0.70588235294117652), - (0.49803921568627452, 0.80392156862745101 , 0.73333333333333328), - (0.25490196078431371, 0.71372549019607845 , 0.7686274509803922 ), - (0.11372549019607843, 0.56862745098039214 , 0.75294117647058822), - (0.13333333333333333, 0.36862745098039218 , 0.6588235294117647 ), - (0.14509803921568629, 0.20392156862745098 , 0.58039215686274515), - (0.03137254901960784, 0.11372549019607843 , 0.34509803921568627) - ) - -_YlOrBr_data = ( - (1.0 , 1.0 , 0.89803921568627454), - (1.0 , 0.96862745098039216 , 0.73725490196078436), - (0.99607843137254903, 0.8901960784313725 , 0.56862745098039214), - (0.99607843137254903, 0.7686274509803922 , 0.30980392156862746), - (0.99607843137254903, 0.6 , 0.16078431372549021), - (0.92549019607843142, 0.4392156862745098 , 0.07843137254901961), - (0.8 , 0.29803921568627451 , 0.00784313725490196), - (0.6 , 0.20392156862745098 , 0.01568627450980392), - (0.4 , 0.14509803921568629 , 0.02352941176470588) - ) - -_YlOrRd_data = ( - (1.0 , 1.0 , 0.8 ), - (1.0 , 0.92941176470588238 , 0.62745098039215685), - (0.99607843137254903, 0.85098039215686272 , 0.46274509803921571), - (0.99607843137254903, 0.69803921568627447 , 0.29803921568627451), - (0.99215686274509807, 0.55294117647058827 , 0.23529411764705882), - (0.9882352941176471 , 0.30588235294117649 , 0.16470588235294117), - (0.8901960784313725 , 0.10196078431372549 , 0.10980392156862745), - (0.74117647058823533, 0.0 , 0.14901960784313725), - (0.50196078431372548, 0.0 , 0.14901960784313725) - ) - - -# ColorBrewer's qualitative maps, implemented using ListedColormap -# for use with mpl.colors.NoNorm - -_Accent_data = ( - (0.49803921568627452, 0.78823529411764703, 0.49803921568627452), - (0.74509803921568629, 0.68235294117647061, 0.83137254901960789), - (0.99215686274509807, 0.75294117647058822, 0.52549019607843139), - (1.0, 1.0, 0.6 ), - (0.2196078431372549, 0.42352941176470588, 0.69019607843137254), - (0.94117647058823528, 0.00784313725490196, 0.49803921568627452), - (0.74901960784313726, 0.35686274509803922, 0.09019607843137254), - (0.4, 0.4, 0.4 ), - ) - -_Dark2_data = ( - (0.10588235294117647, 0.61960784313725492, 0.46666666666666667), - 
(0.85098039215686272, 0.37254901960784315, 0.00784313725490196), - (0.45882352941176469, 0.4392156862745098, 0.70196078431372544), - (0.90588235294117647, 0.16078431372549021, 0.54117647058823526), - (0.4, 0.65098039215686276, 0.11764705882352941), - (0.90196078431372551, 0.6705882352941176, 0.00784313725490196), - (0.65098039215686276, 0.46274509803921571, 0.11372549019607843), - (0.4, 0.4, 0.4 ), - ) - -_Paired_data = ( - (0.65098039215686276, 0.80784313725490198, 0.8901960784313725 ), - (0.12156862745098039, 0.47058823529411764, 0.70588235294117652), - (0.69803921568627447, 0.87450980392156863, 0.54117647058823526), - (0.2, 0.62745098039215685, 0.17254901960784313), - (0.98431372549019602, 0.60392156862745094, 0.6 ), - (0.8901960784313725, 0.10196078431372549, 0.10980392156862745), - (0.99215686274509807, 0.74901960784313726, 0.43529411764705883), - (1.0, 0.49803921568627452, 0.0 ), - (0.792156862745098, 0.69803921568627447, 0.83921568627450982), - (0.41568627450980394, 0.23921568627450981, 0.60392156862745094), - (1.0, 1.0, 0.6 ), - (0.69411764705882351, 0.34901960784313724, 0.15686274509803921), - ) - -_Pastel1_data = ( - (0.98431372549019602, 0.70588235294117652, 0.68235294117647061), - (0.70196078431372544, 0.80392156862745101, 0.8901960784313725 ), - (0.8, 0.92156862745098034, 0.77254901960784317), - (0.87058823529411766, 0.79607843137254897, 0.89411764705882357), - (0.99607843137254903, 0.85098039215686272, 0.65098039215686276), - (1.0, 1.0, 0.8 ), - (0.89803921568627454, 0.84705882352941175, 0.74117647058823533), - (0.99215686274509807, 0.85490196078431369, 0.92549019607843142), - (0.94901960784313721, 0.94901960784313721, 0.94901960784313721), - ) - -_Pastel2_data = ( - (0.70196078431372544, 0.88627450980392153, 0.80392156862745101), - (0.99215686274509807, 0.80392156862745101, 0.67450980392156867), - (0.79607843137254897, 0.83529411764705885, 0.90980392156862744), - (0.95686274509803926, 0.792156862745098, 0.89411764705882357), - (0.90196078431372551, 0.96078431372549022, 0.78823529411764703), - (1.0, 0.94901960784313721, 0.68235294117647061), - (0.94509803921568625, 0.88627450980392153, 0.8 ), - (0.8, 0.8, 0.8 ), - ) - -_Set1_data = ( - (0.89411764705882357, 0.10196078431372549, 0.10980392156862745), - (0.21568627450980393, 0.49411764705882355, 0.72156862745098038), - (0.30196078431372547, 0.68627450980392157, 0.29019607843137257), - (0.59607843137254901, 0.30588235294117649, 0.63921568627450975), - (1.0, 0.49803921568627452, 0.0 ), - (1.0, 1.0, 0.2 ), - (0.65098039215686276, 0.33725490196078434, 0.15686274509803921), - (0.96862745098039216, 0.50588235294117645, 0.74901960784313726), - (0.6, 0.6, 0.6), - ) - -_Set2_data = ( - (0.4, 0.76078431372549016, 0.6470588235294118 ), - (0.9882352941176471, 0.55294117647058827, 0.3843137254901961 ), - (0.55294117647058827, 0.62745098039215685, 0.79607843137254897), - (0.90588235294117647, 0.54117647058823526, 0.76470588235294112), - (0.65098039215686276, 0.84705882352941175, 0.32941176470588235), - (1.0, 0.85098039215686272, 0.18431372549019609), - (0.89803921568627454, 0.7686274509803922, 0.58039215686274515), - (0.70196078431372544, 0.70196078431372544, 0.70196078431372544), - ) - -_Set3_data = ( - (0.55294117647058827, 0.82745098039215681, 0.7803921568627451 ), - (1.0, 1.0, 0.70196078431372544), - (0.74509803921568629, 0.72941176470588232, 0.85490196078431369), - (0.98431372549019602, 0.50196078431372548, 0.44705882352941179), - (0.50196078431372548, 0.69411764705882351, 0.82745098039215681), - (0.99215686274509807, 
0.70588235294117652, 0.3843137254901961 ), - (0.70196078431372544, 0.87058823529411766, 0.41176470588235292), - (0.9882352941176471, 0.80392156862745101, 0.89803921568627454), - (0.85098039215686272, 0.85098039215686272, 0.85098039215686272), - (0.73725490196078436, 0.50196078431372548, 0.74117647058823533), - (0.8, 0.92156862745098034, 0.77254901960784317), - (1.0, 0.92941176470588238, 0.43529411764705883), - ) - - -# The next 7 palettes are from the Yorick scientific visualization package, -# an evolution of the GIST package, both by David H. Munro. -# They are released under a BSD-like license (see LICENSE_YORICK in -# the license directory of the matplotlib source distribution). -# -# Most palette functions have been reduced to simple function descriptions -# by Reinier Heeres, since the rgb components were mostly straight lines. -# gist_earth_data and gist_ncar_data were simplified by a script and some -# manual effort. - -_gist_earth_data = \ -{'red': ( -(0.0, 0.0, 0.0000), -(0.2824, 0.1882, 0.1882), -(0.4588, 0.2714, 0.2714), -(0.5490, 0.4719, 0.4719), -(0.6980, 0.7176, 0.7176), -(0.7882, 0.7553, 0.7553), -(1.0000, 0.9922, 0.9922), -), 'green': ( -(0.0, 0.0, 0.0000), -(0.0275, 0.0000, 0.0000), -(0.1098, 0.1893, 0.1893), -(0.1647, 0.3035, 0.3035), -(0.2078, 0.3841, 0.3841), -(0.2824, 0.5020, 0.5020), -(0.5216, 0.6397, 0.6397), -(0.6980, 0.7171, 0.7171), -(0.7882, 0.6392, 0.6392), -(0.7922, 0.6413, 0.6413), -(0.8000, 0.6447, 0.6447), -(0.8078, 0.6481, 0.6481), -(0.8157, 0.6549, 0.6549), -(0.8667, 0.6991, 0.6991), -(0.8745, 0.7103, 0.7103), -(0.8824, 0.7216, 0.7216), -(0.8902, 0.7323, 0.7323), -(0.8980, 0.7430, 0.7430), -(0.9412, 0.8275, 0.8275), -(0.9569, 0.8635, 0.8635), -(0.9647, 0.8816, 0.8816), -(0.9961, 0.9733, 0.9733), -(1.0000, 0.9843, 0.9843), -), 'blue': ( -(0.0, 0.0, 0.0000), -(0.0039, 0.1684, 0.1684), -(0.0078, 0.2212, 0.2212), -(0.0275, 0.4329, 0.4329), -(0.0314, 0.4549, 0.4549), -(0.2824, 0.5004, 0.5004), -(0.4667, 0.2748, 0.2748), -(0.5451, 0.3205, 0.3205), -(0.7843, 0.3961, 0.3961), -(0.8941, 0.6651, 0.6651), -(1.0000, 0.9843, 0.9843), -)} - -_gist_gray_data = { - 'red': gfunc[3], - 'green': gfunc[3], - 'blue': gfunc[3], -} - -def _gist_heat_red(x): return 1.5 * x -def _gist_heat_green(x): return 2 * x - 1 -def _gist_heat_blue(x): return 4 * x - 3 -_gist_heat_data = { - 'red': _gist_heat_red, 'green': _gist_heat_green, 'blue': _gist_heat_blue} - -_gist_ncar_data = \ -{'red': ( -(0.0, 0.0, 0.0000), -(0.3098, 0.0000, 0.0000), -(0.3725, 0.3993, 0.3993), -(0.4235, 0.5003, 0.5003), -(0.5333, 1.0000, 1.0000), -(0.7922, 1.0000, 1.0000), -(0.8471, 0.6218, 0.6218), -(0.8980, 0.9235, 0.9235), -(1.0000, 0.9961, 0.9961), -), 'green': ( -(0.0, 0.0, 0.0000), -(0.0510, 0.3722, 0.3722), -(0.1059, 0.0000, 0.0000), -(0.1569, 0.7202, 0.7202), -(0.1608, 0.7537, 0.7537), -(0.1647, 0.7752, 0.7752), -(0.2157, 1.0000, 1.0000), -(0.2588, 0.9804, 0.9804), -(0.2706, 0.9804, 0.9804), -(0.3176, 1.0000, 1.0000), -(0.3686, 0.8081, 0.8081), -(0.4275, 1.0000, 1.0000), -(0.5216, 1.0000, 1.0000), -(0.6314, 0.7292, 0.7292), -(0.6863, 0.2796, 0.2796), -(0.7451, 0.0000, 0.0000), -(0.7922, 0.0000, 0.0000), -(0.8431, 0.1753, 0.1753), -(0.8980, 0.5000, 0.5000), -(1.0000, 0.9725, 0.9725), -), 'blue': ( -(0.0, 0.5020, 0.5020), -(0.0510, 0.0222, 0.0222), -(0.1098, 1.0000, 1.0000), -(0.2039, 1.0000, 1.0000), -(0.2627, 0.6145, 0.6145), -(0.3216, 0.0000, 0.0000), -(0.4157, 0.0000, 0.0000), -(0.4745, 0.2342, 0.2342), -(0.5333, 0.0000, 0.0000), -(0.5804, 0.0000, 0.0000), -(0.6314, 0.0549, 0.0549), -(0.6902, 0.0000, 
0.0000), -(0.7373, 0.0000, 0.0000), -(0.7922, 0.9738, 0.9738), -(0.8000, 1.0000, 1.0000), -(0.8431, 1.0000, 1.0000), -(0.8980, 0.9341, 0.9341), -(1.0000, 0.9961, 0.9961), -)} - -_gist_rainbow_data = ( - (0.000, (1.00, 0.00, 0.16)), - (0.030, (1.00, 0.00, 0.00)), - (0.215, (1.00, 1.00, 0.00)), - (0.400, (0.00, 1.00, 0.00)), - (0.586, (0.00, 1.00, 1.00)), - (0.770, (0.00, 0.00, 1.00)), - (0.954, (1.00, 0.00, 1.00)), - (1.000, (1.00, 0.00, 0.75)) -) - -_gist_stern_data = { - 'red': ( - (0.000, 0.000, 0.000), (0.0547, 1.000, 1.000), - (0.250, 0.027, 0.250), # (0.2500, 0.250, 0.250), - (1.000, 1.000, 1.000)), - 'green': ((0, 0, 0), (1, 1, 1)), - 'blue': ( - (0.000, 0.000, 0.000), (0.500, 1.000, 1.000), - (0.735, 0.000, 0.000), (1.000, 1.000, 1.000)) -} - -def _gist_yarg(x): return 1 - x -_gist_yarg_data = {'red': _gist_yarg, 'green': _gist_yarg, 'blue': _gist_yarg} - -# This bipolar colormap was generated from CoolWarmFloat33.csv of -# "Diverging Color Maps for Scientific Visualization" by Kenneth Moreland. -# -_coolwarm_data = { - 'red': [ - (0.0, 0.2298057, 0.2298057), - (0.03125, 0.26623388, 0.26623388), - (0.0625, 0.30386891, 0.30386891), - (0.09375, 0.342804478, 0.342804478), - (0.125, 0.38301334, 0.38301334), - (0.15625, 0.424369608, 0.424369608), - (0.1875, 0.46666708, 0.46666708), - (0.21875, 0.509635204, 0.509635204), - (0.25, 0.552953156, 0.552953156), - (0.28125, 0.596262162, 0.596262162), - (0.3125, 0.639176211, 0.639176211), - (0.34375, 0.681291281, 0.681291281), - (0.375, 0.722193294, 0.722193294), - (0.40625, 0.761464949, 0.761464949), - (0.4375, 0.798691636, 0.798691636), - (0.46875, 0.833466556, 0.833466556), - (0.5, 0.865395197, 0.865395197), - (0.53125, 0.897787179, 0.897787179), - (0.5625, 0.924127593, 0.924127593), - (0.59375, 0.944468518, 0.944468518), - (0.625, 0.958852946, 0.958852946), - (0.65625, 0.96732803, 0.96732803), - (0.6875, 0.969954137, 0.969954137), - (0.71875, 0.966811177, 0.966811177), - (0.75, 0.958003065, 0.958003065), - (0.78125, 0.943660866, 0.943660866), - (0.8125, 0.923944917, 0.923944917), - (0.84375, 0.89904617, 0.89904617), - (0.875, 0.869186849, 0.869186849), - (0.90625, 0.834620542, 0.834620542), - (0.9375, 0.795631745, 0.795631745), - (0.96875, 0.752534934, 0.752534934), - (1.0, 0.705673158, 0.705673158)], - 'green': [ - (0.0, 0.298717966, 0.298717966), - (0.03125, 0.353094838, 0.353094838), - (0.0625, 0.406535296, 0.406535296), - (0.09375, 0.458757618, 0.458757618), - (0.125, 0.50941904, 0.50941904), - (0.15625, 0.558148092, 0.558148092), - (0.1875, 0.604562568, 0.604562568), - (0.21875, 0.648280772, 0.648280772), - (0.25, 0.688929332, 0.688929332), - (0.28125, 0.726149107, 0.726149107), - (0.3125, 0.759599947, 0.759599947), - (0.34375, 0.788964712, 0.788964712), - (0.375, 0.813952739, 0.813952739), - (0.40625, 0.834302879, 0.834302879), - (0.4375, 0.849786142, 0.849786142), - (0.46875, 0.860207984, 0.860207984), - (0.5, 0.86541021, 0.86541021), - (0.53125, 0.848937047, 0.848937047), - (0.5625, 0.827384882, 0.827384882), - (0.59375, 0.800927443, 0.800927443), - (0.625, 0.769767752, 0.769767752), - (0.65625, 0.734132809, 0.734132809), - (0.6875, 0.694266682, 0.694266682), - (0.71875, 0.650421156, 0.650421156), - (0.75, 0.602842431, 0.602842431), - (0.78125, 0.551750968, 0.551750968), - (0.8125, 0.49730856, 0.49730856), - (0.84375, 0.439559467, 0.439559467), - (0.875, 0.378313092, 0.378313092), - (0.90625, 0.312874446, 0.312874446), - (0.9375, 0.24128379, 0.24128379), - (0.96875, 0.157246067, 0.157246067), - (1.0, 0.01555616, 0.01555616)], - 
'blue': [ - (0.0, 0.753683153, 0.753683153), - (0.03125, 0.801466763, 0.801466763), - (0.0625, 0.84495867, 0.84495867), - (0.09375, 0.883725899, 0.883725899), - (0.125, 0.917387822, 0.917387822), - (0.15625, 0.945619588, 0.945619588), - (0.1875, 0.968154911, 0.968154911), - (0.21875, 0.98478814, 0.98478814), - (0.25, 0.995375608, 0.995375608), - (0.28125, 0.999836203, 0.999836203), - (0.3125, 0.998151185, 0.998151185), - (0.34375, 0.990363227, 0.990363227), - (0.375, 0.976574709, 0.976574709), - (0.40625, 0.956945269, 0.956945269), - (0.4375, 0.931688648, 0.931688648), - (0.46875, 0.901068838, 0.901068838), - (0.5, 0.865395561, 0.865395561), - (0.53125, 0.820880546, 0.820880546), - (0.5625, 0.774508472, 0.774508472), - (0.59375, 0.726736146, 0.726736146), - (0.625, 0.678007945, 0.678007945), - (0.65625, 0.628751763, 0.628751763), - (0.6875, 0.579375448, 0.579375448), - (0.71875, 0.530263762, 0.530263762), - (0.75, 0.481775914, 0.481775914), - (0.78125, 0.434243684, 0.434243684), - (0.8125, 0.387970225, 0.387970225), - (0.84375, 0.343229596, 0.343229596), - (0.875, 0.300267182, 0.300267182), - (0.90625, 0.259301199, 0.259301199), - (0.9375, 0.220525627, 0.220525627), - (0.96875, 0.184115123, 0.184115123), - (1.0, 0.150232812, 0.150232812)] - } - -# Implementation of Carey Rappaport's CMRmap. -# See `A Color Map for Effective Black-and-White Rendering of Color-Scale -# Images' by Carey Rappaport -# https://www.mathworks.com/matlabcentral/fileexchange/2662-cmrmap-m -_CMRmap_data = {'red': ((0.000, 0.00, 0.00), - (0.125, 0.15, 0.15), - (0.250, 0.30, 0.30), - (0.375, 0.60, 0.60), - (0.500, 1.00, 1.00), - (0.625, 0.90, 0.90), - (0.750, 0.90, 0.90), - (0.875, 0.90, 0.90), - (1.000, 1.00, 1.00)), - 'green': ((0.000, 0.00, 0.00), - (0.125, 0.15, 0.15), - (0.250, 0.15, 0.15), - (0.375, 0.20, 0.20), - (0.500, 0.25, 0.25), - (0.625, 0.50, 0.50), - (0.750, 0.75, 0.75), - (0.875, 0.90, 0.90), - (1.000, 1.00, 1.00)), - 'blue': ((0.000, 0.00, 0.00), - (0.125, 0.50, 0.50), - (0.250, 0.75, 0.75), - (0.375, 0.50, 0.50), - (0.500, 0.15, 0.15), - (0.625, 0.00, 0.00), - (0.750, 0.10, 0.10), - (0.875, 0.50, 0.50), - (1.000, 1.00, 1.00))} - - -# An MIT licensed, colorblind-friendly heatmap from Wistia: -# https://github.com/wistia/heatmap-palette -# https://wistia.com/learn/culture/heatmaps-for-colorblindness -# -# >>> import matplotlib.colors as c -# >>> colors = ["#e4ff7a", "#ffe81a", "#ffbd00", "#ffa000", "#fc7f00"] -# >>> cm = c.LinearSegmentedColormap.from_list('wistia', colors) -# >>> _wistia_data = cm._segmentdata -# >>> del _wistia_data['alpha'] -# -_wistia_data = { - 'red': [(0.0, 0.8941176470588236, 0.8941176470588236), - (0.25, 1.0, 1.0), - (0.5, 1.0, 1.0), - (0.75, 1.0, 1.0), - (1.0, 0.9882352941176471, 0.9882352941176471)], - 'green': [(0.0, 1.0, 1.0), - (0.25, 0.9098039215686274, 0.9098039215686274), - (0.5, 0.7411764705882353, 0.7411764705882353), - (0.75, 0.6274509803921569, 0.6274509803921569), - (1.0, 0.4980392156862745, 0.4980392156862745)], - 'blue': [(0.0, 0.47843137254901963, 0.47843137254901963), - (0.25, 0.10196078431372549, 0.10196078431372549), - (0.5, 0.0, 0.0), - (0.75, 0.0, 0.0), - (1.0, 0.0, 0.0)], -} - - -# Categorical palettes from Vega: -# https://github.com/vega/vega/wiki/Scales -# (divided by 255) -# - -_tab10_data = ( - (0.12156862745098039, 0.4666666666666667, 0.7058823529411765 ), # 1f77b4 - (1.0, 0.4980392156862745, 0.054901960784313725), # ff7f0e - (0.17254901960784313, 0.6274509803921569, 0.17254901960784313 ), # 2ca02c - (0.8392156862745098, 0.15294117647058825, 
0.1568627450980392 ), # d62728 - (0.5803921568627451, 0.403921568627451, 0.7411764705882353 ), # 9467bd - (0.5490196078431373, 0.33725490196078434, 0.29411764705882354 ), # 8c564b - (0.8901960784313725, 0.4666666666666667, 0.7607843137254902 ), # e377c2 - (0.4980392156862745, 0.4980392156862745, 0.4980392156862745 ), # 7f7f7f - (0.7372549019607844, 0.7411764705882353, 0.13333333333333333 ), # bcbd22 - (0.09019607843137255, 0.7450980392156863, 0.8117647058823529), # 17becf -) - -_tab20_data = ( - (0.12156862745098039, 0.4666666666666667, 0.7058823529411765 ), # 1f77b4 - (0.6823529411764706, 0.7803921568627451, 0.9098039215686274 ), # aec7e8 - (1.0, 0.4980392156862745, 0.054901960784313725), # ff7f0e - (1.0, 0.7333333333333333, 0.47058823529411764 ), # ffbb78 - (0.17254901960784313, 0.6274509803921569, 0.17254901960784313 ), # 2ca02c - (0.596078431372549, 0.8745098039215686, 0.5411764705882353 ), # 98df8a - (0.8392156862745098, 0.15294117647058825, 0.1568627450980392 ), # d62728 - (1.0, 0.596078431372549, 0.5882352941176471 ), # ff9896 - (0.5803921568627451, 0.403921568627451, 0.7411764705882353 ), # 9467bd - (0.7725490196078432, 0.6901960784313725, 0.8352941176470589 ), # c5b0d5 - (0.5490196078431373, 0.33725490196078434, 0.29411764705882354 ), # 8c564b - (0.7686274509803922, 0.611764705882353, 0.5803921568627451 ), # c49c94 - (0.8901960784313725, 0.4666666666666667, 0.7607843137254902 ), # e377c2 - (0.9686274509803922, 0.7137254901960784, 0.8235294117647058 ), # f7b6d2 - (0.4980392156862745, 0.4980392156862745, 0.4980392156862745 ), # 7f7f7f - (0.7803921568627451, 0.7803921568627451, 0.7803921568627451 ), # c7c7c7 - (0.7372549019607844, 0.7411764705882353, 0.13333333333333333 ), # bcbd22 - (0.8588235294117647, 0.8588235294117647, 0.5529411764705883 ), # dbdb8d - (0.09019607843137255, 0.7450980392156863, 0.8117647058823529 ), # 17becf - (0.6196078431372549, 0.8549019607843137, 0.8980392156862745), # 9edae5 -) - -_tab20b_data = ( - (0.2235294117647059, 0.23137254901960785, 0.4745098039215686 ), # 393b79 - (0.3215686274509804, 0.32941176470588235, 0.6392156862745098 ), # 5254a3 - (0.4196078431372549, 0.43137254901960786, 0.8117647058823529 ), # 6b6ecf - (0.611764705882353, 0.6196078431372549, 0.8705882352941177 ), # 9c9ede - (0.38823529411764707, 0.4745098039215686, 0.2235294117647059 ), # 637939 - (0.5490196078431373, 0.6352941176470588, 0.3215686274509804 ), # 8ca252 - (0.7098039215686275, 0.8117647058823529, 0.4196078431372549 ), # b5cf6b - (0.807843137254902, 0.8588235294117647, 0.611764705882353 ), # cedb9c - (0.5490196078431373, 0.42745098039215684, 0.19215686274509805), # 8c6d31 - (0.7411764705882353, 0.6196078431372549, 0.2235294117647059 ), # bd9e39 - (0.9058823529411765, 0.7294117647058823, 0.3215686274509804 ), # e7ba52 - (0.9058823529411765, 0.796078431372549, 0.5803921568627451 ), # e7cb94 - (0.5176470588235295, 0.23529411764705882, 0.2235294117647059 ), # 843c39 - (0.6784313725490196, 0.28627450980392155, 0.2901960784313726 ), # ad494a - (0.8392156862745098, 0.3803921568627451, 0.4196078431372549 ), # d6616b - (0.9058823529411765, 0.5882352941176471, 0.611764705882353 ), # e7969c - (0.4823529411764706, 0.2549019607843137, 0.45098039215686275), # 7b4173 - (0.6470588235294118, 0.3176470588235294, 0.5803921568627451 ), # a55194 - (0.807843137254902, 0.42745098039215684, 0.7411764705882353 ), # ce6dbd - (0.8705882352941177, 0.6196078431372549, 0.8392156862745098 ), # de9ed6 -) - -_tab20c_data = ( - (0.19215686274509805, 0.5098039215686274, 0.7411764705882353 ), # 3182bd - 
(0.4196078431372549, 0.6823529411764706, 0.8392156862745098 ), # 6baed6 - (0.6196078431372549, 0.792156862745098, 0.8823529411764706 ), # 9ecae1 - (0.7764705882352941, 0.8588235294117647, 0.9372549019607843 ), # c6dbef - (0.9019607843137255, 0.3333333333333333, 0.050980392156862744), # e6550d - (0.9921568627450981, 0.5529411764705883, 0.23529411764705882 ), # fd8d3c - (0.9921568627450981, 0.6823529411764706, 0.4196078431372549 ), # fdae6b - (0.9921568627450981, 0.8156862745098039, 0.6352941176470588 ), # fdd0a2 - (0.19215686274509805, 0.6392156862745098, 0.32941176470588235 ), # 31a354 - (0.4549019607843137, 0.7686274509803922, 0.4627450980392157 ), # 74c476 - (0.6313725490196078, 0.8509803921568627, 0.6078431372549019 ), # a1d99b - (0.7803921568627451, 0.9137254901960784, 0.7529411764705882 ), # c7e9c0 - (0.4588235294117647, 0.4196078431372549, 0.6941176470588235 ), # 756bb1 - (0.6196078431372549, 0.6039215686274509, 0.7843137254901961 ), # 9e9ac8 - (0.7372549019607844, 0.7411764705882353, 0.8627450980392157 ), # bcbddc - (0.8549019607843137, 0.8549019607843137, 0.9215686274509803 ), # dadaeb - (0.38823529411764707, 0.38823529411764707, 0.38823529411764707 ), # 636363 - (0.5882352941176471, 0.5882352941176471, 0.5882352941176471 ), # 969696 - (0.7411764705882353, 0.7411764705882353, 0.7411764705882353 ), # bdbdbd - (0.8509803921568627, 0.8509803921568627, 0.8509803921568627 ), # d9d9d9 -) - - -datad = { - 'Blues': _Blues_data, - 'BrBG': _BrBG_data, - 'BuGn': _BuGn_data, - 'BuPu': _BuPu_data, - 'CMRmap': _CMRmap_data, - 'GnBu': _GnBu_data, - 'Greens': _Greens_data, - 'Greys': _Greys_data, - 'OrRd': _OrRd_data, - 'Oranges': _Oranges_data, - 'PRGn': _PRGn_data, - 'PiYG': _PiYG_data, - 'PuBu': _PuBu_data, - 'PuBuGn': _PuBuGn_data, - 'PuOr': _PuOr_data, - 'PuRd': _PuRd_data, - 'Purples': _Purples_data, - 'RdBu': _RdBu_data, - 'RdGy': _RdGy_data, - 'RdPu': _RdPu_data, - 'RdYlBu': _RdYlBu_data, - 'RdYlGn': _RdYlGn_data, - 'Reds': _Reds_data, - 'Spectral': _Spectral_data, - 'Wistia': _wistia_data, - 'YlGn': _YlGn_data, - 'YlGnBu': _YlGnBu_data, - 'YlOrBr': _YlOrBr_data, - 'YlOrRd': _YlOrRd_data, - 'afmhot': _afmhot_data, - 'autumn': _autumn_data, - 'binary': _binary_data, - 'bone': _bone_data, - 'brg': _brg_data, - 'bwr': _bwr_data, - 'cool': _cool_data, - 'coolwarm': _coolwarm_data, - 'copper': _copper_data, - 'cubehelix': _cubehelix_data, - 'flag': _flag_data, - 'gist_earth': _gist_earth_data, - 'gist_gray': _gist_gray_data, - 'gist_heat': _gist_heat_data, - 'gist_ncar': _gist_ncar_data, - 'gist_rainbow': _gist_rainbow_data, - 'gist_stern': _gist_stern_data, - 'gist_yarg': _gist_yarg_data, - 'gnuplot': _gnuplot_data, - 'gnuplot2': _gnuplot2_data, - 'gray': _gray_data, - 'hot': _hot_data, - 'hsv': _hsv_data, - 'jet': _jet_data, - 'nipy_spectral': _nipy_spectral_data, - 'ocean': _ocean_data, - 'pink': _pink_data, - 'prism': _prism_data, - 'rainbow': _rainbow_data, - 'seismic': _seismic_data, - 'spring': _spring_data, - 'summer': _summer_data, - 'terrain': _terrain_data, - 'winter': _winter_data, - # Qualitative - 'Accent': {'listed': _Accent_data}, - 'Dark2': {'listed': _Dark2_data}, - 'Paired': {'listed': _Paired_data}, - 'Pastel1': {'listed': _Pastel1_data}, - 'Pastel2': {'listed': _Pastel2_data}, - 'Set1': {'listed': _Set1_data}, - 'Set2': {'listed': _Set2_data}, - 'Set3': {'listed': _Set3_data}, - 'tab10': {'listed': _tab10_data}, - 'tab20': {'listed': _tab20_data}, - 'tab20b': {'listed': _tab20b_data}, - 'tab20c': {'listed': _tab20c_data}, -} diff --git 
a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/_cm_listed.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/_cm_listed.py deleted file mode 100644 index a331ad74a5f03688005dc14d5867653b3d77e20c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/_cm_listed.py +++ /dev/null @@ -1,2071 +0,0 @@ -from .colors import ListedColormap - -_magma_data = [[0.001462, 0.000466, 0.013866], - [0.002258, 0.001295, 0.018331], - [0.003279, 0.002305, 0.023708], - [0.004512, 0.003490, 0.029965], - [0.005950, 0.004843, 0.037130], - [0.007588, 0.006356, 0.044973], - [0.009426, 0.008022, 0.052844], - [0.011465, 0.009828, 0.060750], - [0.013708, 0.011771, 0.068667], - [0.016156, 0.013840, 0.076603], - [0.018815, 0.016026, 0.084584], - [0.021692, 0.018320, 0.092610], - [0.024792, 0.020715, 0.100676], - [0.028123, 0.023201, 0.108787], - [0.031696, 0.025765, 0.116965], - [0.035520, 0.028397, 0.125209], - [0.039608, 0.031090, 0.133515], - [0.043830, 0.033830, 0.141886], - [0.048062, 0.036607, 0.150327], - [0.052320, 0.039407, 0.158841], - [0.056615, 0.042160, 0.167446], - [0.060949, 0.044794, 0.176129], - [0.065330, 0.047318, 0.184892], - [0.069764, 0.049726, 0.193735], - [0.074257, 0.052017, 0.202660], - [0.078815, 0.054184, 0.211667], - [0.083446, 0.056225, 0.220755], - [0.088155, 0.058133, 0.229922], - [0.092949, 0.059904, 0.239164], - [0.097833, 0.061531, 0.248477], - [0.102815, 0.063010, 0.257854], - [0.107899, 0.064335, 0.267289], - [0.113094, 0.065492, 0.276784], - [0.118405, 0.066479, 0.286321], - [0.123833, 0.067295, 0.295879], - [0.129380, 0.067935, 0.305443], - [0.135053, 0.068391, 0.315000], - [0.140858, 0.068654, 0.324538], - [0.146785, 0.068738, 0.334011], - [0.152839, 0.068637, 0.343404], - [0.159018, 0.068354, 0.352688], - [0.165308, 0.067911, 0.361816], - [0.171713, 0.067305, 0.370771], - [0.178212, 0.066576, 0.379497], - [0.184801, 0.065732, 0.387973], - [0.191460, 0.064818, 0.396152], - [0.198177, 0.063862, 0.404009], - [0.204935, 0.062907, 0.411514], - [0.211718, 0.061992, 0.418647], - [0.218512, 0.061158, 0.425392], - [0.225302, 0.060445, 0.431742], - [0.232077, 0.059889, 0.437695], - [0.238826, 0.059517, 0.443256], - [0.245543, 0.059352, 0.448436], - [0.252220, 0.059415, 0.453248], - [0.258857, 0.059706, 0.457710], - [0.265447, 0.060237, 0.461840], - [0.271994, 0.060994, 0.465660], - [0.278493, 0.061978, 0.469190], - [0.284951, 0.063168, 0.472451], - [0.291366, 0.064553, 0.475462], - [0.297740, 0.066117, 0.478243], - [0.304081, 0.067835, 0.480812], - [0.310382, 0.069702, 0.483186], - [0.316654, 0.071690, 0.485380], - [0.322899, 0.073782, 0.487408], - [0.329114, 0.075972, 0.489287], - [0.335308, 0.078236, 0.491024], - [0.341482, 0.080564, 0.492631], - [0.347636, 0.082946, 0.494121], - [0.353773, 0.085373, 0.495501], - [0.359898, 0.087831, 0.496778], - [0.366012, 0.090314, 0.497960], - [0.372116, 0.092816, 0.499053], - [0.378211, 0.095332, 0.500067], - [0.384299, 0.097855, 0.501002], - [0.390384, 0.100379, 0.501864], - [0.396467, 0.102902, 0.502658], - [0.402548, 0.105420, 0.503386], - [0.408629, 0.107930, 0.504052], - [0.414709, 0.110431, 0.504662], - [0.420791, 0.112920, 0.505215], - [0.426877, 0.115395, 0.505714], - [0.432967, 0.117855, 0.506160], - [0.439062, 0.120298, 0.506555], - [0.445163, 0.122724, 0.506901], - [0.451271, 0.125132, 0.507198], - [0.457386, 0.127522, 0.507448], - [0.463508, 0.129893, 0.507652], - [0.469640, 0.132245, 0.507809], - [0.475780, 
0.134577, 0.507921], - [0.481929, 0.136891, 0.507989], - [0.488088, 0.139186, 0.508011], - [0.494258, 0.141462, 0.507988], - [0.500438, 0.143719, 0.507920], - [0.506629, 0.145958, 0.507806], - [0.512831, 0.148179, 0.507648], - [0.519045, 0.150383, 0.507443], - [0.525270, 0.152569, 0.507192], - [0.531507, 0.154739, 0.506895], - [0.537755, 0.156894, 0.506551], - [0.544015, 0.159033, 0.506159], - [0.550287, 0.161158, 0.505719], - [0.556571, 0.163269, 0.505230], - [0.562866, 0.165368, 0.504692], - [0.569172, 0.167454, 0.504105], - [0.575490, 0.169530, 0.503466], - [0.581819, 0.171596, 0.502777], - [0.588158, 0.173652, 0.502035], - [0.594508, 0.175701, 0.501241], - [0.600868, 0.177743, 0.500394], - [0.607238, 0.179779, 0.499492], - [0.613617, 0.181811, 0.498536], - [0.620005, 0.183840, 0.497524], - [0.626401, 0.185867, 0.496456], - [0.632805, 0.187893, 0.495332], - [0.639216, 0.189921, 0.494150], - [0.645633, 0.191952, 0.492910], - [0.652056, 0.193986, 0.491611], - [0.658483, 0.196027, 0.490253], - [0.664915, 0.198075, 0.488836], - [0.671349, 0.200133, 0.487358], - [0.677786, 0.202203, 0.485819], - [0.684224, 0.204286, 0.484219], - [0.690661, 0.206384, 0.482558], - [0.697098, 0.208501, 0.480835], - [0.703532, 0.210638, 0.479049], - [0.709962, 0.212797, 0.477201], - [0.716387, 0.214982, 0.475290], - [0.722805, 0.217194, 0.473316], - [0.729216, 0.219437, 0.471279], - [0.735616, 0.221713, 0.469180], - [0.742004, 0.224025, 0.467018], - [0.748378, 0.226377, 0.464794], - [0.754737, 0.228772, 0.462509], - [0.761077, 0.231214, 0.460162], - [0.767398, 0.233705, 0.457755], - [0.773695, 0.236249, 0.455289], - [0.779968, 0.238851, 0.452765], - [0.786212, 0.241514, 0.450184], - [0.792427, 0.244242, 0.447543], - [0.798608, 0.247040, 0.444848], - [0.804752, 0.249911, 0.442102], - [0.810855, 0.252861, 0.439305], - [0.816914, 0.255895, 0.436461], - [0.822926, 0.259016, 0.433573], - [0.828886, 0.262229, 0.430644], - [0.834791, 0.265540, 0.427671], - [0.840636, 0.268953, 0.424666], - [0.846416, 0.272473, 0.421631], - [0.852126, 0.276106, 0.418573], - [0.857763, 0.279857, 0.415496], - [0.863320, 0.283729, 0.412403], - [0.868793, 0.287728, 0.409303], - [0.874176, 0.291859, 0.406205], - [0.879464, 0.296125, 0.403118], - [0.884651, 0.300530, 0.400047], - [0.889731, 0.305079, 0.397002], - [0.894700, 0.309773, 0.393995], - [0.899552, 0.314616, 0.391037], - [0.904281, 0.319610, 0.388137], - [0.908884, 0.324755, 0.385308], - [0.913354, 0.330052, 0.382563], - [0.917689, 0.335500, 0.379915], - [0.921884, 0.341098, 0.377376], - [0.925937, 0.346844, 0.374959], - [0.929845, 0.352734, 0.372677], - [0.933606, 0.358764, 0.370541], - [0.937221, 0.364929, 0.368567], - [0.940687, 0.371224, 0.366762], - [0.944006, 0.377643, 0.365136], - [0.947180, 0.384178, 0.363701], - [0.950210, 0.390820, 0.362468], - [0.953099, 0.397563, 0.361438], - [0.955849, 0.404400, 0.360619], - [0.958464, 0.411324, 0.360014], - [0.960949, 0.418323, 0.359630], - [0.963310, 0.425390, 0.359469], - [0.965549, 0.432519, 0.359529], - [0.967671, 0.439703, 0.359810], - [0.969680, 0.446936, 0.360311], - [0.971582, 0.454210, 0.361030], - [0.973381, 0.461520, 0.361965], - [0.975082, 0.468861, 0.363111], - [0.976690, 0.476226, 0.364466], - [0.978210, 0.483612, 0.366025], - [0.979645, 0.491014, 0.367783], - [0.981000, 0.498428, 0.369734], - [0.982279, 0.505851, 0.371874], - [0.983485, 0.513280, 0.374198], - [0.984622, 0.520713, 0.376698], - [0.985693, 0.528148, 0.379371], - [0.986700, 0.535582, 0.382210], - [0.987646, 0.543015, 0.385210], - [0.988533, 0.550446, 
0.388365], - [0.989363, 0.557873, 0.391671], - [0.990138, 0.565296, 0.395122], - [0.990871, 0.572706, 0.398714], - [0.991558, 0.580107, 0.402441], - [0.992196, 0.587502, 0.406299], - [0.992785, 0.594891, 0.410283], - [0.993326, 0.602275, 0.414390], - [0.993834, 0.609644, 0.418613], - [0.994309, 0.616999, 0.422950], - [0.994738, 0.624350, 0.427397], - [0.995122, 0.631696, 0.431951], - [0.995480, 0.639027, 0.436607], - [0.995810, 0.646344, 0.441361], - [0.996096, 0.653659, 0.446213], - [0.996341, 0.660969, 0.451160], - [0.996580, 0.668256, 0.456192], - [0.996775, 0.675541, 0.461314], - [0.996925, 0.682828, 0.466526], - [0.997077, 0.690088, 0.471811], - [0.997186, 0.697349, 0.477182], - [0.997254, 0.704611, 0.482635], - [0.997325, 0.711848, 0.488154], - [0.997351, 0.719089, 0.493755], - [0.997351, 0.726324, 0.499428], - [0.997341, 0.733545, 0.505167], - [0.997285, 0.740772, 0.510983], - [0.997228, 0.747981, 0.516859], - [0.997138, 0.755190, 0.522806], - [0.997019, 0.762398, 0.528821], - [0.996898, 0.769591, 0.534892], - [0.996727, 0.776795, 0.541039], - [0.996571, 0.783977, 0.547233], - [0.996369, 0.791167, 0.553499], - [0.996162, 0.798348, 0.559820], - [0.995932, 0.805527, 0.566202], - [0.995680, 0.812706, 0.572645], - [0.995424, 0.819875, 0.579140], - [0.995131, 0.827052, 0.585701], - [0.994851, 0.834213, 0.592307], - [0.994524, 0.841387, 0.598983], - [0.994222, 0.848540, 0.605696], - [0.993866, 0.855711, 0.612482], - [0.993545, 0.862859, 0.619299], - [0.993170, 0.870024, 0.626189], - [0.992831, 0.877168, 0.633109], - [0.992440, 0.884330, 0.640099], - [0.992089, 0.891470, 0.647116], - [0.991688, 0.898627, 0.654202], - [0.991332, 0.905763, 0.661309], - [0.990930, 0.912915, 0.668481], - [0.990570, 0.920049, 0.675675], - [0.990175, 0.927196, 0.682926], - [0.989815, 0.934329, 0.690198], - [0.989434, 0.941470, 0.697519], - [0.989077, 0.948604, 0.704863], - [0.988717, 0.955742, 0.712242], - [0.988367, 0.962878, 0.719649], - [0.988033, 0.970012, 0.727077], - [0.987691, 0.977154, 0.734536], - [0.987387, 0.984288, 0.742002], - [0.987053, 0.991438, 0.749504]] - -_inferno_data = [[0.001462, 0.000466, 0.013866], - [0.002267, 0.001270, 0.018570], - [0.003299, 0.002249, 0.024239], - [0.004547, 0.003392, 0.030909], - [0.006006, 0.004692, 0.038558], - [0.007676, 0.006136, 0.046836], - [0.009561, 0.007713, 0.055143], - [0.011663, 0.009417, 0.063460], - [0.013995, 0.011225, 0.071862], - [0.016561, 0.013136, 0.080282], - [0.019373, 0.015133, 0.088767], - [0.022447, 0.017199, 0.097327], - [0.025793, 0.019331, 0.105930], - [0.029432, 0.021503, 0.114621], - [0.033385, 0.023702, 0.123397], - [0.037668, 0.025921, 0.132232], - [0.042253, 0.028139, 0.141141], - [0.046915, 0.030324, 0.150164], - [0.051644, 0.032474, 0.159254], - [0.056449, 0.034569, 0.168414], - [0.061340, 0.036590, 0.177642], - [0.066331, 0.038504, 0.186962], - [0.071429, 0.040294, 0.196354], - [0.076637, 0.041905, 0.205799], - [0.081962, 0.043328, 0.215289], - [0.087411, 0.044556, 0.224813], - [0.092990, 0.045583, 0.234358], - [0.098702, 0.046402, 0.243904], - [0.104551, 0.047008, 0.253430], - [0.110536, 0.047399, 0.262912], - [0.116656, 0.047574, 0.272321], - [0.122908, 0.047536, 0.281624], - [0.129285, 0.047293, 0.290788], - [0.135778, 0.046856, 0.299776], - [0.142378, 0.046242, 0.308553], - [0.149073, 0.045468, 0.317085], - [0.155850, 0.044559, 0.325338], - [0.162689, 0.043554, 0.333277], - [0.169575, 0.042489, 0.340874], - [0.176493, 0.041402, 0.348111], - [0.183429, 0.040329, 0.354971], - [0.190367, 0.039309, 0.361447], - [0.197297, 0.038400, 
0.367535], - [0.204209, 0.037632, 0.373238], - [0.211095, 0.037030, 0.378563], - [0.217949, 0.036615, 0.383522], - [0.224763, 0.036405, 0.388129], - [0.231538, 0.036405, 0.392400], - [0.238273, 0.036621, 0.396353], - [0.244967, 0.037055, 0.400007], - [0.251620, 0.037705, 0.403378], - [0.258234, 0.038571, 0.406485], - [0.264810, 0.039647, 0.409345], - [0.271347, 0.040922, 0.411976], - [0.277850, 0.042353, 0.414392], - [0.284321, 0.043933, 0.416608], - [0.290763, 0.045644, 0.418637], - [0.297178, 0.047470, 0.420491], - [0.303568, 0.049396, 0.422182], - [0.309935, 0.051407, 0.423721], - [0.316282, 0.053490, 0.425116], - [0.322610, 0.055634, 0.426377], - [0.328921, 0.057827, 0.427511], - [0.335217, 0.060060, 0.428524], - [0.341500, 0.062325, 0.429425], - [0.347771, 0.064616, 0.430217], - [0.354032, 0.066925, 0.430906], - [0.360284, 0.069247, 0.431497], - [0.366529, 0.071579, 0.431994], - [0.372768, 0.073915, 0.432400], - [0.379001, 0.076253, 0.432719], - [0.385228, 0.078591, 0.432955], - [0.391453, 0.080927, 0.433109], - [0.397674, 0.083257, 0.433183], - [0.403894, 0.085580, 0.433179], - [0.410113, 0.087896, 0.433098], - [0.416331, 0.090203, 0.432943], - [0.422549, 0.092501, 0.432714], - [0.428768, 0.094790, 0.432412], - [0.434987, 0.097069, 0.432039], - [0.441207, 0.099338, 0.431594], - [0.447428, 0.101597, 0.431080], - [0.453651, 0.103848, 0.430498], - [0.459875, 0.106089, 0.429846], - [0.466100, 0.108322, 0.429125], - [0.472328, 0.110547, 0.428334], - [0.478558, 0.112764, 0.427475], - [0.484789, 0.114974, 0.426548], - [0.491022, 0.117179, 0.425552], - [0.497257, 0.119379, 0.424488], - [0.503493, 0.121575, 0.423356], - [0.509730, 0.123769, 0.422156], - [0.515967, 0.125960, 0.420887], - [0.522206, 0.128150, 0.419549], - [0.528444, 0.130341, 0.418142], - [0.534683, 0.132534, 0.416667], - [0.540920, 0.134729, 0.415123], - [0.547157, 0.136929, 0.413511], - [0.553392, 0.139134, 0.411829], - [0.559624, 0.141346, 0.410078], - [0.565854, 0.143567, 0.408258], - [0.572081, 0.145797, 0.406369], - [0.578304, 0.148039, 0.404411], - [0.584521, 0.150294, 0.402385], - [0.590734, 0.152563, 0.400290], - [0.596940, 0.154848, 0.398125], - [0.603139, 0.157151, 0.395891], - [0.609330, 0.159474, 0.393589], - [0.615513, 0.161817, 0.391219], - [0.621685, 0.164184, 0.388781], - [0.627847, 0.166575, 0.386276], - [0.633998, 0.168992, 0.383704], - [0.640135, 0.171438, 0.381065], - [0.646260, 0.173914, 0.378359], - [0.652369, 0.176421, 0.375586], - [0.658463, 0.178962, 0.372748], - [0.664540, 0.181539, 0.369846], - [0.670599, 0.184153, 0.366879], - [0.676638, 0.186807, 0.363849], - [0.682656, 0.189501, 0.360757], - [0.688653, 0.192239, 0.357603], - [0.694627, 0.195021, 0.354388], - [0.700576, 0.197851, 0.351113], - [0.706500, 0.200728, 0.347777], - [0.712396, 0.203656, 0.344383], - [0.718264, 0.206636, 0.340931], - [0.724103, 0.209670, 0.337424], - [0.729909, 0.212759, 0.333861], - [0.735683, 0.215906, 0.330245], - [0.741423, 0.219112, 0.326576], - [0.747127, 0.222378, 0.322856], - [0.752794, 0.225706, 0.319085], - [0.758422, 0.229097, 0.315266], - [0.764010, 0.232554, 0.311399], - [0.769556, 0.236077, 0.307485], - [0.775059, 0.239667, 0.303526], - [0.780517, 0.243327, 0.299523], - [0.785929, 0.247056, 0.295477], - [0.791293, 0.250856, 0.291390], - [0.796607, 0.254728, 0.287264], - [0.801871, 0.258674, 0.283099], - [0.807082, 0.262692, 0.278898], - [0.812239, 0.266786, 0.274661], - [0.817341, 0.270954, 0.270390], - [0.822386, 0.275197, 0.266085], - [0.827372, 0.279517, 0.261750], - [0.832299, 0.283913, 0.257383], - 
[0.837165, 0.288385, 0.252988], - [0.841969, 0.292933, 0.248564], - [0.846709, 0.297559, 0.244113], - [0.851384, 0.302260, 0.239636], - [0.855992, 0.307038, 0.235133], - [0.860533, 0.311892, 0.230606], - [0.865006, 0.316822, 0.226055], - [0.869409, 0.321827, 0.221482], - [0.873741, 0.326906, 0.216886], - [0.878001, 0.332060, 0.212268], - [0.882188, 0.337287, 0.207628], - [0.886302, 0.342586, 0.202968], - [0.890341, 0.347957, 0.198286], - [0.894305, 0.353399, 0.193584], - [0.898192, 0.358911, 0.188860], - [0.902003, 0.364492, 0.184116], - [0.905735, 0.370140, 0.179350], - [0.909390, 0.375856, 0.174563], - [0.912966, 0.381636, 0.169755], - [0.916462, 0.387481, 0.164924], - [0.919879, 0.393389, 0.160070], - [0.923215, 0.399359, 0.155193], - [0.926470, 0.405389, 0.150292], - [0.929644, 0.411479, 0.145367], - [0.932737, 0.417627, 0.140417], - [0.935747, 0.423831, 0.135440], - [0.938675, 0.430091, 0.130438], - [0.941521, 0.436405, 0.125409], - [0.944285, 0.442772, 0.120354], - [0.946965, 0.449191, 0.115272], - [0.949562, 0.455660, 0.110164], - [0.952075, 0.462178, 0.105031], - [0.954506, 0.468744, 0.099874], - [0.956852, 0.475356, 0.094695], - [0.959114, 0.482014, 0.089499], - [0.961293, 0.488716, 0.084289], - [0.963387, 0.495462, 0.079073], - [0.965397, 0.502249, 0.073859], - [0.967322, 0.509078, 0.068659], - [0.969163, 0.515946, 0.063488], - [0.970919, 0.522853, 0.058367], - [0.972590, 0.529798, 0.053324], - [0.974176, 0.536780, 0.048392], - [0.975677, 0.543798, 0.043618], - [0.977092, 0.550850, 0.039050], - [0.978422, 0.557937, 0.034931], - [0.979666, 0.565057, 0.031409], - [0.980824, 0.572209, 0.028508], - [0.981895, 0.579392, 0.026250], - [0.982881, 0.586606, 0.024661], - [0.983779, 0.593849, 0.023770], - [0.984591, 0.601122, 0.023606], - [0.985315, 0.608422, 0.024202], - [0.985952, 0.615750, 0.025592], - [0.986502, 0.623105, 0.027814], - [0.986964, 0.630485, 0.030908], - [0.987337, 0.637890, 0.034916], - [0.987622, 0.645320, 0.039886], - [0.987819, 0.652773, 0.045581], - [0.987926, 0.660250, 0.051750], - [0.987945, 0.667748, 0.058329], - [0.987874, 0.675267, 0.065257], - [0.987714, 0.682807, 0.072489], - [0.987464, 0.690366, 0.079990], - [0.987124, 0.697944, 0.087731], - [0.986694, 0.705540, 0.095694], - [0.986175, 0.713153, 0.103863], - [0.985566, 0.720782, 0.112229], - [0.984865, 0.728427, 0.120785], - [0.984075, 0.736087, 0.129527], - [0.983196, 0.743758, 0.138453], - [0.982228, 0.751442, 0.147565], - [0.981173, 0.759135, 0.156863], - [0.980032, 0.766837, 0.166353], - [0.978806, 0.774545, 0.176037], - [0.977497, 0.782258, 0.185923], - [0.976108, 0.789974, 0.196018], - [0.974638, 0.797692, 0.206332], - [0.973088, 0.805409, 0.216877], - [0.971468, 0.813122, 0.227658], - [0.969783, 0.820825, 0.238686], - [0.968041, 0.828515, 0.249972], - [0.966243, 0.836191, 0.261534], - [0.964394, 0.843848, 0.273391], - [0.962517, 0.851476, 0.285546], - [0.960626, 0.859069, 0.298010], - [0.958720, 0.866624, 0.310820], - [0.956834, 0.874129, 0.323974], - [0.954997, 0.881569, 0.337475], - [0.953215, 0.888942, 0.351369], - [0.951546, 0.896226, 0.365627], - [0.950018, 0.903409, 0.380271], - [0.948683, 0.910473, 0.395289], - [0.947594, 0.917399, 0.410665], - [0.946809, 0.924168, 0.426373], - [0.946392, 0.930761, 0.442367], - [0.946403, 0.937159, 0.458592], - [0.946903, 0.943348, 0.474970], - [0.947937, 0.949318, 0.491426], - [0.949545, 0.955063, 0.507860], - [0.951740, 0.960587, 0.524203], - [0.954529, 0.965896, 0.540361], - [0.957896, 0.971003, 0.556275], - [0.961812, 0.975924, 0.571925], - [0.966249, 
0.980678, 0.587206], - [0.971162, 0.985282, 0.602154], - [0.976511, 0.989753, 0.616760], - [0.982257, 0.994109, 0.631017], - [0.988362, 0.998364, 0.644924]] - -_plasma_data = [[0.050383, 0.029803, 0.527975], - [0.063536, 0.028426, 0.533124], - [0.075353, 0.027206, 0.538007], - [0.086222, 0.026125, 0.542658], - [0.096379, 0.025165, 0.547103], - [0.105980, 0.024309, 0.551368], - [0.115124, 0.023556, 0.555468], - [0.123903, 0.022878, 0.559423], - [0.132381, 0.022258, 0.563250], - [0.140603, 0.021687, 0.566959], - [0.148607, 0.021154, 0.570562], - [0.156421, 0.020651, 0.574065], - [0.164070, 0.020171, 0.577478], - [0.171574, 0.019706, 0.580806], - [0.178950, 0.019252, 0.584054], - [0.186213, 0.018803, 0.587228], - [0.193374, 0.018354, 0.590330], - [0.200445, 0.017902, 0.593364], - [0.207435, 0.017442, 0.596333], - [0.214350, 0.016973, 0.599239], - [0.221197, 0.016497, 0.602083], - [0.227983, 0.016007, 0.604867], - [0.234715, 0.015502, 0.607592], - [0.241396, 0.014979, 0.610259], - [0.248032, 0.014439, 0.612868], - [0.254627, 0.013882, 0.615419], - [0.261183, 0.013308, 0.617911], - [0.267703, 0.012716, 0.620346], - [0.274191, 0.012109, 0.622722], - [0.280648, 0.011488, 0.625038], - [0.287076, 0.010855, 0.627295], - [0.293478, 0.010213, 0.629490], - [0.299855, 0.009561, 0.631624], - [0.306210, 0.008902, 0.633694], - [0.312543, 0.008239, 0.635700], - [0.318856, 0.007576, 0.637640], - [0.325150, 0.006915, 0.639512], - [0.331426, 0.006261, 0.641316], - [0.337683, 0.005618, 0.643049], - [0.343925, 0.004991, 0.644710], - [0.350150, 0.004382, 0.646298], - [0.356359, 0.003798, 0.647810], - [0.362553, 0.003243, 0.649245], - [0.368733, 0.002724, 0.650601], - [0.374897, 0.002245, 0.651876], - [0.381047, 0.001814, 0.653068], - [0.387183, 0.001434, 0.654177], - [0.393304, 0.001114, 0.655199], - [0.399411, 0.000859, 0.656133], - [0.405503, 0.000678, 0.656977], - [0.411580, 0.000577, 0.657730], - [0.417642, 0.000564, 0.658390], - [0.423689, 0.000646, 0.658956], - [0.429719, 0.000831, 0.659425], - [0.435734, 0.001127, 0.659797], - [0.441732, 0.001540, 0.660069], - [0.447714, 0.002080, 0.660240], - [0.453677, 0.002755, 0.660310], - [0.459623, 0.003574, 0.660277], - [0.465550, 0.004545, 0.660139], - [0.471457, 0.005678, 0.659897], - [0.477344, 0.006980, 0.659549], - [0.483210, 0.008460, 0.659095], - [0.489055, 0.010127, 0.658534], - [0.494877, 0.011990, 0.657865], - [0.500678, 0.014055, 0.657088], - [0.506454, 0.016333, 0.656202], - [0.512206, 0.018833, 0.655209], - [0.517933, 0.021563, 0.654109], - [0.523633, 0.024532, 0.652901], - [0.529306, 0.027747, 0.651586], - [0.534952, 0.031217, 0.650165], - [0.540570, 0.034950, 0.648640], - [0.546157, 0.038954, 0.647010], - [0.551715, 0.043136, 0.645277], - [0.557243, 0.047331, 0.643443], - [0.562738, 0.051545, 0.641509], - [0.568201, 0.055778, 0.639477], - [0.573632, 0.060028, 0.637349], - [0.579029, 0.064296, 0.635126], - [0.584391, 0.068579, 0.632812], - [0.589719, 0.072878, 0.630408], - [0.595011, 0.077190, 0.627917], - [0.600266, 0.081516, 0.625342], - [0.605485, 0.085854, 0.622686], - [0.610667, 0.090204, 0.619951], - [0.615812, 0.094564, 0.617140], - [0.620919, 0.098934, 0.614257], - [0.625987, 0.103312, 0.611305], - [0.631017, 0.107699, 0.608287], - [0.636008, 0.112092, 0.605205], - [0.640959, 0.116492, 0.602065], - [0.645872, 0.120898, 0.598867], - [0.650746, 0.125309, 0.595617], - [0.655580, 0.129725, 0.592317], - [0.660374, 0.134144, 0.588971], - [0.665129, 0.138566, 0.585582], - [0.669845, 0.142992, 0.582154], - [0.674522, 0.147419, 0.578688], - [0.679160, 
0.151848, 0.575189], - [0.683758, 0.156278, 0.571660], - [0.688318, 0.160709, 0.568103], - [0.692840, 0.165141, 0.564522], - [0.697324, 0.169573, 0.560919], - [0.701769, 0.174005, 0.557296], - [0.706178, 0.178437, 0.553657], - [0.710549, 0.182868, 0.550004], - [0.714883, 0.187299, 0.546338], - [0.719181, 0.191729, 0.542663], - [0.723444, 0.196158, 0.538981], - [0.727670, 0.200586, 0.535293], - [0.731862, 0.205013, 0.531601], - [0.736019, 0.209439, 0.527908], - [0.740143, 0.213864, 0.524216], - [0.744232, 0.218288, 0.520524], - [0.748289, 0.222711, 0.516834], - [0.752312, 0.227133, 0.513149], - [0.756304, 0.231555, 0.509468], - [0.760264, 0.235976, 0.505794], - [0.764193, 0.240396, 0.502126], - [0.768090, 0.244817, 0.498465], - [0.771958, 0.249237, 0.494813], - [0.775796, 0.253658, 0.491171], - [0.779604, 0.258078, 0.487539], - [0.783383, 0.262500, 0.483918], - [0.787133, 0.266922, 0.480307], - [0.790855, 0.271345, 0.476706], - [0.794549, 0.275770, 0.473117], - [0.798216, 0.280197, 0.469538], - [0.801855, 0.284626, 0.465971], - [0.805467, 0.289057, 0.462415], - [0.809052, 0.293491, 0.458870], - [0.812612, 0.297928, 0.455338], - [0.816144, 0.302368, 0.451816], - [0.819651, 0.306812, 0.448306], - [0.823132, 0.311261, 0.444806], - [0.826588, 0.315714, 0.441316], - [0.830018, 0.320172, 0.437836], - [0.833422, 0.324635, 0.434366], - [0.836801, 0.329105, 0.430905], - [0.840155, 0.333580, 0.427455], - [0.843484, 0.338062, 0.424013], - [0.846788, 0.342551, 0.420579], - [0.850066, 0.347048, 0.417153], - [0.853319, 0.351553, 0.413734], - [0.856547, 0.356066, 0.410322], - [0.859750, 0.360588, 0.406917], - [0.862927, 0.365119, 0.403519], - [0.866078, 0.369660, 0.400126], - [0.869203, 0.374212, 0.396738], - [0.872303, 0.378774, 0.393355], - [0.875376, 0.383347, 0.389976], - [0.878423, 0.387932, 0.386600], - [0.881443, 0.392529, 0.383229], - [0.884436, 0.397139, 0.379860], - [0.887402, 0.401762, 0.376494], - [0.890340, 0.406398, 0.373130], - [0.893250, 0.411048, 0.369768], - [0.896131, 0.415712, 0.366407], - [0.898984, 0.420392, 0.363047], - [0.901807, 0.425087, 0.359688], - [0.904601, 0.429797, 0.356329], - [0.907365, 0.434524, 0.352970], - [0.910098, 0.439268, 0.349610], - [0.912800, 0.444029, 0.346251], - [0.915471, 0.448807, 0.342890], - [0.918109, 0.453603, 0.339529], - [0.920714, 0.458417, 0.336166], - [0.923287, 0.463251, 0.332801], - [0.925825, 0.468103, 0.329435], - [0.928329, 0.472975, 0.326067], - [0.930798, 0.477867, 0.322697], - [0.933232, 0.482780, 0.319325], - [0.935630, 0.487712, 0.315952], - [0.937990, 0.492667, 0.312575], - [0.940313, 0.497642, 0.309197], - [0.942598, 0.502639, 0.305816], - [0.944844, 0.507658, 0.302433], - [0.947051, 0.512699, 0.299049], - [0.949217, 0.517763, 0.295662], - [0.951344, 0.522850, 0.292275], - [0.953428, 0.527960, 0.288883], - [0.955470, 0.533093, 0.285490], - [0.957469, 0.538250, 0.282096], - [0.959424, 0.543431, 0.278701], - [0.961336, 0.548636, 0.275305], - [0.963203, 0.553865, 0.271909], - [0.965024, 0.559118, 0.268513], - [0.966798, 0.564396, 0.265118], - [0.968526, 0.569700, 0.261721], - [0.970205, 0.575028, 0.258325], - [0.971835, 0.580382, 0.254931], - [0.973416, 0.585761, 0.251540], - [0.974947, 0.591165, 0.248151], - [0.976428, 0.596595, 0.244767], - [0.977856, 0.602051, 0.241387], - [0.979233, 0.607532, 0.238013], - [0.980556, 0.613039, 0.234646], - [0.981826, 0.618572, 0.231287], - [0.983041, 0.624131, 0.227937], - [0.984199, 0.629718, 0.224595], - [0.985301, 0.635330, 0.221265], - [0.986345, 0.640969, 0.217948], - [0.987332, 0.646633, 
0.214648], - [0.988260, 0.652325, 0.211364], - [0.989128, 0.658043, 0.208100], - [0.989935, 0.663787, 0.204859], - [0.990681, 0.669558, 0.201642], - [0.991365, 0.675355, 0.198453], - [0.991985, 0.681179, 0.195295], - [0.992541, 0.687030, 0.192170], - [0.993032, 0.692907, 0.189084], - [0.993456, 0.698810, 0.186041], - [0.993814, 0.704741, 0.183043], - [0.994103, 0.710698, 0.180097], - [0.994324, 0.716681, 0.177208], - [0.994474, 0.722691, 0.174381], - [0.994553, 0.728728, 0.171622], - [0.994561, 0.734791, 0.168938], - [0.994495, 0.740880, 0.166335], - [0.994355, 0.746995, 0.163821], - [0.994141, 0.753137, 0.161404], - [0.993851, 0.759304, 0.159092], - [0.993482, 0.765499, 0.156891], - [0.993033, 0.771720, 0.154808], - [0.992505, 0.777967, 0.152855], - [0.991897, 0.784239, 0.151042], - [0.991209, 0.790537, 0.149377], - [0.990439, 0.796859, 0.147870], - [0.989587, 0.803205, 0.146529], - [0.988648, 0.809579, 0.145357], - [0.987621, 0.815978, 0.144363], - [0.986509, 0.822401, 0.143557], - [0.985314, 0.828846, 0.142945], - [0.984031, 0.835315, 0.142528], - [0.982653, 0.841812, 0.142303], - [0.981190, 0.848329, 0.142279], - [0.979644, 0.854866, 0.142453], - [0.977995, 0.861432, 0.142808], - [0.976265, 0.868016, 0.143351], - [0.974443, 0.874622, 0.144061], - [0.972530, 0.881250, 0.144923], - [0.970533, 0.887896, 0.145919], - [0.968443, 0.894564, 0.147014], - [0.966271, 0.901249, 0.148180], - [0.964021, 0.907950, 0.149370], - [0.961681, 0.914672, 0.150520], - [0.959276, 0.921407, 0.151566], - [0.956808, 0.928152, 0.152409], - [0.954287, 0.934908, 0.152921], - [0.951726, 0.941671, 0.152925], - [0.949151, 0.948435, 0.152178], - [0.946602, 0.955190, 0.150328], - [0.944152, 0.961916, 0.146861], - [0.941896, 0.968590, 0.140956], - [0.940015, 0.975158, 0.131326]] - -_viridis_data = [[0.267004, 0.004874, 0.329415], - [0.268510, 0.009605, 0.335427], - [0.269944, 0.014625, 0.341379], - [0.271305, 0.019942, 0.347269], - [0.272594, 0.025563, 0.353093], - [0.273809, 0.031497, 0.358853], - [0.274952, 0.037752, 0.364543], - [0.276022, 0.044167, 0.370164], - [0.277018, 0.050344, 0.375715], - [0.277941, 0.056324, 0.381191], - [0.278791, 0.062145, 0.386592], - [0.279566, 0.067836, 0.391917], - [0.280267, 0.073417, 0.397163], - [0.280894, 0.078907, 0.402329], - [0.281446, 0.084320, 0.407414], - [0.281924, 0.089666, 0.412415], - [0.282327, 0.094955, 0.417331], - [0.282656, 0.100196, 0.422160], - [0.282910, 0.105393, 0.426902], - [0.283091, 0.110553, 0.431554], - [0.283197, 0.115680, 0.436115], - [0.283229, 0.120777, 0.440584], - [0.283187, 0.125848, 0.444960], - [0.283072, 0.130895, 0.449241], - [0.282884, 0.135920, 0.453427], - [0.282623, 0.140926, 0.457517], - [0.282290, 0.145912, 0.461510], - [0.281887, 0.150881, 0.465405], - [0.281412, 0.155834, 0.469201], - [0.280868, 0.160771, 0.472899], - [0.280255, 0.165693, 0.476498], - [0.279574, 0.170599, 0.479997], - [0.278826, 0.175490, 0.483397], - [0.278012, 0.180367, 0.486697], - [0.277134, 0.185228, 0.489898], - [0.276194, 0.190074, 0.493001], - [0.275191, 0.194905, 0.496005], - [0.274128, 0.199721, 0.498911], - [0.273006, 0.204520, 0.501721], - [0.271828, 0.209303, 0.504434], - [0.270595, 0.214069, 0.507052], - [0.269308, 0.218818, 0.509577], - [0.267968, 0.223549, 0.512008], - [0.266580, 0.228262, 0.514349], - [0.265145, 0.232956, 0.516599], - [0.263663, 0.237631, 0.518762], - [0.262138, 0.242286, 0.520837], - [0.260571, 0.246922, 0.522828], - [0.258965, 0.251537, 0.524736], - [0.257322, 0.256130, 0.526563], - [0.255645, 0.260703, 0.528312], - [0.253935, 0.265254, 
0.529983], - [0.252194, 0.269783, 0.531579], - [0.250425, 0.274290, 0.533103], - [0.248629, 0.278775, 0.534556], - [0.246811, 0.283237, 0.535941], - [0.244972, 0.287675, 0.537260], - [0.243113, 0.292092, 0.538516], - [0.241237, 0.296485, 0.539709], - [0.239346, 0.300855, 0.540844], - [0.237441, 0.305202, 0.541921], - [0.235526, 0.309527, 0.542944], - [0.233603, 0.313828, 0.543914], - [0.231674, 0.318106, 0.544834], - [0.229739, 0.322361, 0.545706], - [0.227802, 0.326594, 0.546532], - [0.225863, 0.330805, 0.547314], - [0.223925, 0.334994, 0.548053], - [0.221989, 0.339161, 0.548752], - [0.220057, 0.343307, 0.549413], - [0.218130, 0.347432, 0.550038], - [0.216210, 0.351535, 0.550627], - [0.214298, 0.355619, 0.551184], - [0.212395, 0.359683, 0.551710], - [0.210503, 0.363727, 0.552206], - [0.208623, 0.367752, 0.552675], - [0.206756, 0.371758, 0.553117], - [0.204903, 0.375746, 0.553533], - [0.203063, 0.379716, 0.553925], - [0.201239, 0.383670, 0.554294], - [0.199430, 0.387607, 0.554642], - [0.197636, 0.391528, 0.554969], - [0.195860, 0.395433, 0.555276], - [0.194100, 0.399323, 0.555565], - [0.192357, 0.403199, 0.555836], - [0.190631, 0.407061, 0.556089], - [0.188923, 0.410910, 0.556326], - [0.187231, 0.414746, 0.556547], - [0.185556, 0.418570, 0.556753], - [0.183898, 0.422383, 0.556944], - [0.182256, 0.426184, 0.557120], - [0.180629, 0.429975, 0.557282], - [0.179019, 0.433756, 0.557430], - [0.177423, 0.437527, 0.557565], - [0.175841, 0.441290, 0.557685], - [0.174274, 0.445044, 0.557792], - [0.172719, 0.448791, 0.557885], - [0.171176, 0.452530, 0.557965], - [0.169646, 0.456262, 0.558030], - [0.168126, 0.459988, 0.558082], - [0.166617, 0.463708, 0.558119], - [0.165117, 0.467423, 0.558141], - [0.163625, 0.471133, 0.558148], - [0.162142, 0.474838, 0.558140], - [0.160665, 0.478540, 0.558115], - [0.159194, 0.482237, 0.558073], - [0.157729, 0.485932, 0.558013], - [0.156270, 0.489624, 0.557936], - [0.154815, 0.493313, 0.557840], - [0.153364, 0.497000, 0.557724], - [0.151918, 0.500685, 0.557587], - [0.150476, 0.504369, 0.557430], - [0.149039, 0.508051, 0.557250], - [0.147607, 0.511733, 0.557049], - [0.146180, 0.515413, 0.556823], - [0.144759, 0.519093, 0.556572], - [0.143343, 0.522773, 0.556295], - [0.141935, 0.526453, 0.555991], - [0.140536, 0.530132, 0.555659], - [0.139147, 0.533812, 0.555298], - [0.137770, 0.537492, 0.554906], - [0.136408, 0.541173, 0.554483], - [0.135066, 0.544853, 0.554029], - [0.133743, 0.548535, 0.553541], - [0.132444, 0.552216, 0.553018], - [0.131172, 0.555899, 0.552459], - [0.129933, 0.559582, 0.551864], - [0.128729, 0.563265, 0.551229], - [0.127568, 0.566949, 0.550556], - [0.126453, 0.570633, 0.549841], - [0.125394, 0.574318, 0.549086], - [0.124395, 0.578002, 0.548287], - [0.123463, 0.581687, 0.547445], - [0.122606, 0.585371, 0.546557], - [0.121831, 0.589055, 0.545623], - [0.121148, 0.592739, 0.544641], - [0.120565, 0.596422, 0.543611], - [0.120092, 0.600104, 0.542530], - [0.119738, 0.603785, 0.541400], - [0.119512, 0.607464, 0.540218], - [0.119423, 0.611141, 0.538982], - [0.119483, 0.614817, 0.537692], - [0.119699, 0.618490, 0.536347], - [0.120081, 0.622161, 0.534946], - [0.120638, 0.625828, 0.533488], - [0.121380, 0.629492, 0.531973], - [0.122312, 0.633153, 0.530398], - [0.123444, 0.636809, 0.528763], - [0.124780, 0.640461, 0.527068], - [0.126326, 0.644107, 0.525311], - [0.128087, 0.647749, 0.523491], - [0.130067, 0.651384, 0.521608], - [0.132268, 0.655014, 0.519661], - [0.134692, 0.658636, 0.517649], - [0.137339, 0.662252, 0.515571], - [0.140210, 0.665859, 0.513427], - 
[0.143303, 0.669459, 0.511215], - [0.146616, 0.673050, 0.508936], - [0.150148, 0.676631, 0.506589], - [0.153894, 0.680203, 0.504172], - [0.157851, 0.683765, 0.501686], - [0.162016, 0.687316, 0.499129], - [0.166383, 0.690856, 0.496502], - [0.170948, 0.694384, 0.493803], - [0.175707, 0.697900, 0.491033], - [0.180653, 0.701402, 0.488189], - [0.185783, 0.704891, 0.485273], - [0.191090, 0.708366, 0.482284], - [0.196571, 0.711827, 0.479221], - [0.202219, 0.715272, 0.476084], - [0.208030, 0.718701, 0.472873], - [0.214000, 0.722114, 0.469588], - [0.220124, 0.725509, 0.466226], - [0.226397, 0.728888, 0.462789], - [0.232815, 0.732247, 0.459277], - [0.239374, 0.735588, 0.455688], - [0.246070, 0.738910, 0.452024], - [0.252899, 0.742211, 0.448284], - [0.259857, 0.745492, 0.444467], - [0.266941, 0.748751, 0.440573], - [0.274149, 0.751988, 0.436601], - [0.281477, 0.755203, 0.432552], - [0.288921, 0.758394, 0.428426], - [0.296479, 0.761561, 0.424223], - [0.304148, 0.764704, 0.419943], - [0.311925, 0.767822, 0.415586], - [0.319809, 0.770914, 0.411152], - [0.327796, 0.773980, 0.406640], - [0.335885, 0.777018, 0.402049], - [0.344074, 0.780029, 0.397381], - [0.352360, 0.783011, 0.392636], - [0.360741, 0.785964, 0.387814], - [0.369214, 0.788888, 0.382914], - [0.377779, 0.791781, 0.377939], - [0.386433, 0.794644, 0.372886], - [0.395174, 0.797475, 0.367757], - [0.404001, 0.800275, 0.362552], - [0.412913, 0.803041, 0.357269], - [0.421908, 0.805774, 0.351910], - [0.430983, 0.808473, 0.346476], - [0.440137, 0.811138, 0.340967], - [0.449368, 0.813768, 0.335384], - [0.458674, 0.816363, 0.329727], - [0.468053, 0.818921, 0.323998], - [0.477504, 0.821444, 0.318195], - [0.487026, 0.823929, 0.312321], - [0.496615, 0.826376, 0.306377], - [0.506271, 0.828786, 0.300362], - [0.515992, 0.831158, 0.294279], - [0.525776, 0.833491, 0.288127], - [0.535621, 0.835785, 0.281908], - [0.545524, 0.838039, 0.275626], - [0.555484, 0.840254, 0.269281], - [0.565498, 0.842430, 0.262877], - [0.575563, 0.844566, 0.256415], - [0.585678, 0.846661, 0.249897], - [0.595839, 0.848717, 0.243329], - [0.606045, 0.850733, 0.236712], - [0.616293, 0.852709, 0.230052], - [0.626579, 0.854645, 0.223353], - [0.636902, 0.856542, 0.216620], - [0.647257, 0.858400, 0.209861], - [0.657642, 0.860219, 0.203082], - [0.668054, 0.861999, 0.196293], - [0.678489, 0.863742, 0.189503], - [0.688944, 0.865448, 0.182725], - [0.699415, 0.867117, 0.175971], - [0.709898, 0.868751, 0.169257], - [0.720391, 0.870350, 0.162603], - [0.730889, 0.871916, 0.156029], - [0.741388, 0.873449, 0.149561], - [0.751884, 0.874951, 0.143228], - [0.762373, 0.876424, 0.137064], - [0.772852, 0.877868, 0.131109], - [0.783315, 0.879285, 0.125405], - [0.793760, 0.880678, 0.120005], - [0.804182, 0.882046, 0.114965], - [0.814576, 0.883393, 0.110347], - [0.824940, 0.884720, 0.106217], - [0.835270, 0.886029, 0.102646], - [0.845561, 0.887322, 0.099702], - [0.855810, 0.888601, 0.097452], - [0.866013, 0.889868, 0.095953], - [0.876168, 0.891125, 0.095250], - [0.886271, 0.892374, 0.095374], - [0.896320, 0.893616, 0.096335], - [0.906311, 0.894855, 0.098125], - [0.916242, 0.896091, 0.100717], - [0.926106, 0.897330, 0.104071], - [0.935904, 0.898570, 0.108131], - [0.945636, 0.899815, 0.112838], - [0.955300, 0.901065, 0.118128], - [0.964894, 0.902323, 0.123941], - [0.974417, 0.903590, 0.130215], - [0.983868, 0.904867, 0.136897], - [0.993248, 0.906157, 0.143936]] - -_cividis_data = [[0.000000, 0.135112, 0.304751], - [0.000000, 0.138068, 0.311105], - [0.000000, 0.141013, 0.317579], - [0.000000, 0.143951, 0.323982], - 
[0.000000, 0.146877, 0.330479], - [0.000000, 0.149791, 0.337065], - [0.000000, 0.152673, 0.343704], - [0.000000, 0.155377, 0.350500], - [0.000000, 0.157932, 0.357521], - [0.000000, 0.160495, 0.364534], - [0.000000, 0.163058, 0.371608], - [0.000000, 0.165621, 0.378769], - [0.000000, 0.168204, 0.385902], - [0.000000, 0.170800, 0.393100], - [0.000000, 0.173420, 0.400353], - [0.000000, 0.176082, 0.407577], - [0.000000, 0.178802, 0.414764], - [0.000000, 0.181610, 0.421859], - [0.000000, 0.184550, 0.428802], - [0.000000, 0.186915, 0.435532], - [0.000000, 0.188769, 0.439563], - [0.000000, 0.190950, 0.441085], - [0.000000, 0.193366, 0.441561], - [0.003602, 0.195911, 0.441564], - [0.017852, 0.198528, 0.441248], - [0.032110, 0.201199, 0.440785], - [0.046205, 0.203903, 0.440196], - [0.058378, 0.206629, 0.439531], - [0.068968, 0.209372, 0.438863], - [0.078624, 0.212122, 0.438105], - [0.087465, 0.214879, 0.437342], - [0.095645, 0.217643, 0.436593], - [0.103401, 0.220406, 0.435790], - [0.110658, 0.223170, 0.435067], - [0.117612, 0.225935, 0.434308], - [0.124291, 0.228697, 0.433547], - [0.130669, 0.231458, 0.432840], - [0.136830, 0.234216, 0.432148], - [0.142852, 0.236972, 0.431404], - [0.148638, 0.239724, 0.430752], - [0.154261, 0.242475, 0.430120], - [0.159733, 0.245221, 0.429528], - [0.165113, 0.247965, 0.428908], - [0.170362, 0.250707, 0.428325], - [0.175490, 0.253444, 0.427790], - [0.180503, 0.256180, 0.427299], - [0.185453, 0.258914, 0.426788], - [0.190303, 0.261644, 0.426329], - [0.195057, 0.264372, 0.425924], - [0.199764, 0.267099, 0.425497], - [0.204385, 0.269823, 0.425126], - [0.208926, 0.272546, 0.424809], - [0.213431, 0.275266, 0.424480], - [0.217863, 0.277985, 0.424206], - [0.222264, 0.280702, 0.423914], - [0.226598, 0.283419, 0.423678], - [0.230871, 0.286134, 0.423498], - [0.235120, 0.288848, 0.423304], - [0.239312, 0.291562, 0.423167], - [0.243485, 0.294274, 0.423014], - [0.247605, 0.296986, 0.422917], - [0.251675, 0.299698, 0.422873], - [0.255731, 0.302409, 0.422814], - [0.259740, 0.305120, 0.422810], - [0.263738, 0.307831, 0.422789], - [0.267693, 0.310542, 0.422821], - [0.271639, 0.313253, 0.422837], - [0.275513, 0.315965, 0.422979], - [0.279411, 0.318677, 0.423031], - [0.283240, 0.321390, 0.423211], - [0.287065, 0.324103, 0.423373], - [0.290884, 0.326816, 0.423517], - [0.294669, 0.329531, 0.423716], - [0.298421, 0.332247, 0.423973], - [0.302169, 0.334963, 0.424213], - [0.305886, 0.337681, 0.424512], - [0.309601, 0.340399, 0.424790], - [0.313287, 0.343120, 0.425120], - [0.316941, 0.345842, 0.425512], - [0.320595, 0.348565, 0.425889], - [0.324250, 0.351289, 0.426250], - [0.327875, 0.354016, 0.426670], - [0.331474, 0.356744, 0.427144], - [0.335073, 0.359474, 0.427605], - [0.338673, 0.362206, 0.428053], - [0.342246, 0.364939, 0.428559], - [0.345793, 0.367676, 0.429127], - [0.349341, 0.370414, 0.429685], - [0.352892, 0.373153, 0.430226], - [0.356418, 0.375896, 0.430823], - [0.359916, 0.378641, 0.431501], - [0.363446, 0.381388, 0.432075], - [0.366923, 0.384139, 0.432796], - [0.370430, 0.386890, 0.433428], - [0.373884, 0.389646, 0.434209], - [0.377371, 0.392404, 0.434890], - [0.380830, 0.395164, 0.435653], - [0.384268, 0.397928, 0.436475], - [0.387705, 0.400694, 0.437305], - [0.391151, 0.403464, 0.438096], - [0.394568, 0.406236, 0.438986], - [0.397991, 0.409011, 0.439848], - [0.401418, 0.411790, 0.440708], - [0.404820, 0.414572, 0.441642], - [0.408226, 0.417357, 0.442570], - [0.411607, 0.420145, 0.443577], - [0.414992, 0.422937, 0.444578], - [0.418383, 0.425733, 0.445560], - [0.421748, 
0.428531, 0.446640], - [0.425120, 0.431334, 0.447692], - [0.428462, 0.434140, 0.448864], - [0.431817, 0.436950, 0.449982], - [0.435168, 0.439763, 0.451134], - [0.438504, 0.442580, 0.452341], - [0.441810, 0.445402, 0.453659], - [0.445148, 0.448226, 0.454885], - [0.448447, 0.451053, 0.456264], - [0.451759, 0.453887, 0.457582], - [0.455072, 0.456718, 0.458976], - [0.458366, 0.459552, 0.460457], - [0.461616, 0.462405, 0.461969], - [0.464947, 0.465241, 0.463395], - [0.468254, 0.468083, 0.464908], - [0.471501, 0.470960, 0.466357], - [0.474812, 0.473832, 0.467681], - [0.478186, 0.476699, 0.468845], - [0.481622, 0.479573, 0.469767], - [0.485141, 0.482451, 0.470384], - [0.488697, 0.485318, 0.471008], - [0.492278, 0.488198, 0.471453], - [0.495913, 0.491076, 0.471751], - [0.499552, 0.493960, 0.472032], - [0.503185, 0.496851, 0.472305], - [0.506866, 0.499743, 0.472432], - [0.510540, 0.502643, 0.472550], - [0.514226, 0.505546, 0.472640], - [0.517920, 0.508454, 0.472707], - [0.521643, 0.511367, 0.472639], - [0.525348, 0.514285, 0.472660], - [0.529086, 0.517207, 0.472543], - [0.532829, 0.520135, 0.472401], - [0.536553, 0.523067, 0.472352], - [0.540307, 0.526005, 0.472163], - [0.544069, 0.528948, 0.471947], - [0.547840, 0.531895, 0.471704], - [0.551612, 0.534849, 0.471439], - [0.555393, 0.537807, 0.471147], - [0.559181, 0.540771, 0.470829], - [0.562972, 0.543741, 0.470488], - [0.566802, 0.546715, 0.469988], - [0.570607, 0.549695, 0.469593], - [0.574417, 0.552682, 0.469172], - [0.578236, 0.555673, 0.468724], - [0.582087, 0.558670, 0.468118], - [0.585916, 0.561674, 0.467618], - [0.589753, 0.564682, 0.467090], - [0.593622, 0.567697, 0.466401], - [0.597469, 0.570718, 0.465821], - [0.601354, 0.573743, 0.465074], - [0.605211, 0.576777, 0.464441], - [0.609105, 0.579816, 0.463638], - [0.612977, 0.582861, 0.462950], - [0.616852, 0.585913, 0.462237], - [0.620765, 0.588970, 0.461351], - [0.624654, 0.592034, 0.460583], - [0.628576, 0.595104, 0.459641], - [0.632506, 0.598180, 0.458668], - [0.636412, 0.601264, 0.457818], - [0.640352, 0.604354, 0.456791], - [0.644270, 0.607450, 0.455886], - [0.648222, 0.610553, 0.454801], - [0.652178, 0.613664, 0.453689], - [0.656114, 0.616780, 0.452702], - [0.660082, 0.619904, 0.451534], - [0.664055, 0.623034, 0.450338], - [0.668008, 0.626171, 0.449270], - [0.671991, 0.629316, 0.448018], - [0.675981, 0.632468, 0.446736], - [0.679979, 0.635626, 0.445424], - [0.683950, 0.638793, 0.444251], - [0.687957, 0.641966, 0.442886], - [0.691971, 0.645145, 0.441491], - [0.695985, 0.648334, 0.440072], - [0.700008, 0.651529, 0.438624], - [0.704037, 0.654731, 0.437147], - [0.708067, 0.657942, 0.435647], - [0.712105, 0.661160, 0.434117], - [0.716177, 0.664384, 0.432386], - [0.720222, 0.667618, 0.430805], - [0.724274, 0.670859, 0.429194], - [0.728334, 0.674107, 0.427554], - [0.732422, 0.677364, 0.425717], - [0.736488, 0.680629, 0.424028], - [0.740589, 0.683900, 0.422131], - [0.744664, 0.687181, 0.420393], - [0.748772, 0.690470, 0.418448], - [0.752886, 0.693766, 0.416472], - [0.756975, 0.697071, 0.414659], - [0.761096, 0.700384, 0.412638], - [0.765223, 0.703705, 0.410587], - [0.769353, 0.707035, 0.408516], - [0.773486, 0.710373, 0.406422], - [0.777651, 0.713719, 0.404112], - [0.781795, 0.717074, 0.401966], - [0.785965, 0.720438, 0.399613], - [0.790116, 0.723810, 0.397423], - [0.794298, 0.727190, 0.395016], - [0.798480, 0.730580, 0.392597], - [0.802667, 0.733978, 0.390153], - [0.806859, 0.737385, 0.387684], - [0.811054, 0.740801, 0.385198], - [0.815274, 0.744226, 0.382504], - [0.819499, 0.747659, 
0.379785], - [0.823729, 0.751101, 0.377043], - [0.827959, 0.754553, 0.374292], - [0.832192, 0.758014, 0.371529], - [0.836429, 0.761483, 0.368747], - [0.840693, 0.764962, 0.365746], - [0.844957, 0.768450, 0.362741], - [0.849223, 0.771947, 0.359729], - [0.853515, 0.775454, 0.356500], - [0.857809, 0.778969, 0.353259], - [0.862105, 0.782494, 0.350011], - [0.866421, 0.786028, 0.346571], - [0.870717, 0.789572, 0.343333], - [0.875057, 0.793125, 0.339685], - [0.879378, 0.796687, 0.336241], - [0.883720, 0.800258, 0.332599], - [0.888081, 0.803839, 0.328770], - [0.892440, 0.807430, 0.324968], - [0.896818, 0.811030, 0.320982], - [0.901195, 0.814639, 0.317021], - [0.905589, 0.818257, 0.312889], - [0.910000, 0.821885, 0.308594], - [0.914407, 0.825522, 0.304348], - [0.918828, 0.829168, 0.299960], - [0.923279, 0.832822, 0.295244], - [0.927724, 0.836486, 0.290611], - [0.932180, 0.840159, 0.285880], - [0.936660, 0.843841, 0.280876], - [0.941147, 0.847530, 0.275815], - [0.945654, 0.851228, 0.270532], - [0.950178, 0.854933, 0.265085], - [0.954725, 0.858646, 0.259365], - [0.959284, 0.862365, 0.253563], - [0.963872, 0.866089, 0.247445], - [0.968469, 0.869819, 0.241310], - [0.973114, 0.873550, 0.234677], - [0.977780, 0.877281, 0.227954], - [0.982497, 0.881008, 0.220878], - [0.987293, 0.884718, 0.213336], - [0.992218, 0.888385, 0.205468], - [0.994847, 0.892954, 0.203445], - [0.995249, 0.898384, 0.207561], - [0.995503, 0.903866, 0.212370], - [0.995737, 0.909344, 0.217772]] - -_twilight_data = [ - [0.88575015840754434, 0.85000924943067835, 0.8879736506427196], - [0.88378520195539056, 0.85072940540310626, 0.88723222096949894], - [0.88172231059285788, 0.85127594077653468, 0.88638056925514819], - [0.8795410528270573, 0.85165675407495722, 0.8854143767924102], - [0.87724880858965482, 0.85187028338870274, 0.88434120381311432], - [0.87485347508575972, 0.85191526123023187, 0.88316926967613829], - [0.87233134085124076, 0.85180165478080894, 0.88189704355001619], - [0.86970474853509816, 0.85152403004797894, 0.88053883390003362], - [0.86696015505333579, 0.8510896085314068, 0.87909766977173343], - [0.86408985081463996, 0.85050391167507788, 0.87757925784892632], - [0.86110245436899846, 0.84976754857001258, 0.87599242923439569], - [0.85798259245670372, 0.84888934810281835, 0.87434038553446281], - [0.85472593189256985, 0.84787488124672816, 0.8726282980930582], - [0.85133714570857189, 0.84672735796116472, 0.87086081657350445], - [0.84780710702577922, 0.8454546229209523, 0.86904036783694438], - [0.8441261828674842, 0.84406482711037389, 0.86716973322690072], - [0.84030420805957784, 0.8425605950855084, 0.865250882410458], - [0.83634031809191178, 0.84094796518951942, 0.86328528001070159], - [0.83222705712934408, 0.83923490627754482, 0.86127563500427884], - [0.82796894316013536, 0.83742600751395202, 0.85922399451306786], - [0.82357429680252847, 0.83552487764795436, 0.85713191328514948], - [0.81904654677937527, 0.8335364929949034, 0.85500206287010105], - [0.81438982121143089, 0.83146558694197847, 0.85283759062147024], - [0.8095999819094809, 0.82931896673505456, 0.85064441601050367], - [0.80469164429814577, 0.82709838780560663, 0.84842449296974021], - [0.79967075421267997, 0.82480781812080928, 0.84618210029578533], - [0.79454305089231114, 0.82245116226304615, 0.84392184786827984], - [0.78931445564608915, 0.82003213188702007, 0.8416486380471222], - [0.78399101042764918, 0.81755426400533426, 0.83936747464036732], - [0.77857892008227592, 0.81502089378742548, 0.8370834463093898], - [0.77308416590170936, 0.81243524735466011, 
0.83480172950579679], - [0.76751108504417864, 0.8098007598713145, 0.83252816638059668], - [0.76186907937980286, 0.80711949387647486, 0.830266486168872], - [0.75616443584381976, 0.80439408733477935, 0.82802138994719998], - [0.75040346765406696, 0.80162699008965321, 0.82579737851082424], - [0.74459247771890169, 0.79882047719583249, 0.82359867586156521], - [0.73873771700494939, 0.79597665735031009, 0.82142922780433014], - [0.73284543645523459, 0.79309746468844067, 0.81929263384230377], - [0.72692177512829703, 0.7901846863592763, 0.81719217466726379], - [0.72097280665536778, 0.78723995923452639, 0.81513073920879264], - [0.71500403076252128, 0.78426487091581187, 0.81311116559949914], - [0.70902078134539304, 0.78126088716070907, 0.81113591855117928], - [0.7030297722540817, 0.77822904973358131, 0.80920618848056969], - [0.6970365443886174, 0.77517050008066057, 0.80732335380063447], - [0.69104641009309098, 0.77208629460678091, 0.80548841690679074], - [0.68506446154395928, 0.7689774029354699, 0.80370206267176914], - [0.67909554499882152, 0.76584472131395898, 0.8019646617300199], - [0.67314422559426212, 0.76268908733890484, 0.80027628545809526], - [0.66721479803752815, 0.7595112803730375, 0.79863674654537764], - [0.6613112930078745, 0.75631202708719025, 0.7970456043491897], - [0.65543692326454717, 0.75309208756768431, 0.79550271129031047], - [0.64959573004253479, 0.74985201221941766, 0.79400674021499107], - [0.6437910831099849, 0.7465923800833657, 0.79255653201306053], - [0.63802586828545982, 0.74331376714033193, 0.79115100459573173], - [0.6323027138710603, 0.74001672160131404, 0.78978892762640429], - [0.62662402022604591, 0.73670175403699445, 0.78846901316334561], - [0.62099193064817548, 0.73336934798923203, 0.78718994624696581], - [0.61540846411770478, 0.73001995232739691, 0.78595022706750484], - [0.60987543176093062, 0.72665398759758293, 0.78474835732694714], - [0.60439434200274855, 0.7232718614323369, 0.78358295593535587], - [0.5989665814482068, 0.71987394892246725, 0.78245259899346642], - [0.59359335696837223, 0.7164606049658685, 0.78135588237640097], - [0.58827579780555495, 0.71303214646458135, 0.78029141405636515], - [0.58301487036932409, 0.70958887676997473, 0.77925781820476592], - [0.5778116438998202, 0.70613106157153982, 0.77825345121025524], - [0.5726668948158774, 0.7026589535425779, 0.77727702680911992], - [0.56758117853861967, 0.69917279302646274, 0.77632748534275298], - [0.56255515357219343, 0.69567278381629649, 0.77540359142309845], - [0.55758940419605174, 0.69215911458254054, 0.7745041337932782], - [0.55268450589347129, 0.68863194515166382, 0.7736279426902245], - [0.54784098153018634, 0.68509142218509878, 0.77277386473440868], - [0.54305932424018233, 0.68153767253065878, 0.77194079697835083], - [0.53834015575176275, 0.67797081129095405, 0.77112734439057717], - [0.53368389147728401, 0.67439093705212727, 0.7703325054879735], - [0.529090861832473, 0.67079812302806219, 0.76955552292313134], - [0.52456151470593582, 0.66719242996142225, 0.76879541714230948], - [0.52009627392235558, 0.66357391434030388, 0.76805119403344102], - [0.5156955988596057, 0.65994260812897998, 0.76732191489596169], - [0.51135992541601927, 0.65629853981831865, 0.76660663780645333], - [0.50708969576451657, 0.65264172403146448, 0.76590445660835849], - [0.5028853540415561, 0.64897216734095264, 0.76521446718174913], - [0.49874733661356069, 0.6452898684900934, 0.76453578734180083], - [0.4946761847863938, 0.64159484119504429, 0.76386719002130909], - [0.49067224938561221, 0.63788704858847078, 0.76320812763163837], - 
[0.4867359599430568, 0.63416646251100506, 0.76255780085924041], - [0.4828677867260272, 0.6304330455306234, 0.76191537149895305], - [0.47906816236197386, 0.62668676251860134, 0.76128000375662419], - [0.47533752394906287, 0.62292757283835809, 0.76065085571817748], - [0.47167629518877091, 0.61915543242884641, 0.76002709227883047], - [0.46808490970531597, 0.61537028695790286, 0.75940789891092741], - [0.46456376716303932, 0.61157208822864151, 0.75879242623025811], - [0.46111326647023881, 0.607760777169989, 0.75817986436807139], - [0.45773377230160567, 0.60393630046586455, 0.75756936901859162], - [0.45442563977552913, 0.60009859503858665, 0.75696013660606487], - [0.45118918687617743, 0.59624762051353541, 0.75635120643246645], - [0.44802470933589172, 0.59238331452146575, 0.75574176474107924], - [0.44493246854215379, 0.5885055998308617, 0.7551311041857901], - [0.44191271766696399, 0.58461441100175571, 0.75451838884410671], - [0.43896563958048396, 0.58070969241098491, 0.75390276208285945], - [0.43609138958356369, 0.57679137998186081, 0.7532834105961016], - [0.43329008867358393, 0.57285941625606673, 0.75265946532566674], - [0.43056179073057571, 0.56891374572457176, 0.75203008099312696], - [0.42790652284925834, 0.5649543060909209, 0.75139443521914839], - [0.42532423665011354, 0.56098104959950301, 0.75075164989005116], - [0.42281485675772662, 0.55699392126996583, 0.75010086988227642], - [0.42037822361396326, 0.55299287158108168, 0.7494412559451894], - [0.41801414079233629, 0.54897785421888889, 0.74877193167001121], - [0.4157223260454232, 0.54494882715350401, 0.74809204459000522], - [0.41350245743314729, 0.54090574771098476, 0.74740073297543086], - [0.41135414697304568, 0.53684857765005933, 0.74669712855065784], - [0.4092768899914751, 0.53277730177130322, 0.74598030635707824], - [0.40727018694219069, 0.52869188011057411, 0.74524942637581271], - [0.40533343789303178, 0.52459228174983119, 0.74450365836708132], - [0.40346600333905397, 0.52047847653840029, 0.74374215223567086], - [0.40166714010896104, 0.51635044969688759, 0.7429640345324835], - [0.39993606933454834, 0.51220818143218516, 0.74216844571317986], - [0.3982719152586337, 0.50805166539276136, 0.74135450918099721], - [0.39667374905665609, 0.50388089053847973, 0.74052138580516735], - [0.39514058808207631, 0.49969585326377758, 0.73966820211715711], - [0.39367135736822567, 0.49549655777451179, 0.738794102296364], - [0.39226494876209317, 0.49128300332899261, 0.73789824784475078], - [0.39092017571994903, 0.48705520251223039, 0.73697977133881254], - [0.38963580160340855, 0.48281316715123496, 0.73603782546932739], - [0.38841053300842432, 0.47855691131792805, 0.73507157641157261], - [0.38724301459330251, 0.47428645933635388, 0.73408016787854391], - [0.38613184178892102, 0.4700018340988123, 0.7330627749243106], - [0.38507556793651387, 0.46570306719930193, 0.73201854033690505], - [0.38407269378943537, 0.46139018782416635, 0.73094665432902683], - [0.38312168084402748, 0.45706323581407199, 0.72984626791353258], - [0.38222094988570376, 0.45272225034283325, 0.72871656144003782], - [0.38136887930454161, 0.44836727669277859, 0.72755671317141346], - [0.38056380696565623, 0.44399837208633719, 0.72636587045135315], - [0.37980403744848751, 0.43961558821222629, 0.72514323778761092], - [0.37908789283110761, 0.43521897612544935, 0.72388798691323131], - [0.378413635091359, 0.43080859411413064, 0.72259931993061044], - [0.37777949753513729, 0.4263845142616835, 0.72127639993530235], - [0.37718371844251231, 0.42194680223454828, 0.71991841524475775], - [0.37662448930806297, 
0.41749553747893614, 0.71852454736176108], - [0.37610001286385814, 0.41303079952477062, 0.71709396919920232], - [0.37560846919442398, 0.40855267638072096, 0.71562585091587549], - [0.37514802505380473, 0.4040612609993941, 0.7141193695725726], - [0.37471686019302231, 0.3995566498711684, 0.71257368516500463], - [0.37431313199312338, 0.39503894828283309, 0.71098796522377461], - [0.37393499330475782, 0.39050827529375831, 0.70936134293478448], - [0.3735806215098284, 0.38596474386057539, 0.70769297607310577], - [0.37324816143326384, 0.38140848555753937, 0.70598200974806036], - [0.37293578646665032, 0.37683963835219841, 0.70422755780589941], - [0.37264166757849604, 0.37225835004836849, 0.7024287314570723], - [0.37236397858465387, 0.36766477862108266, 0.70058463496520773], - [0.37210089702443822, 0.36305909736982378, 0.69869434615073722], - [0.3718506155898596, 0.35844148285875221, 0.69675695810256544], - [0.37161133234400479, 0.3538121372967869, 0.69477149919380887], - [0.37138124223736607, 0.34917126878479027, 0.69273703471928827], - [0.37115856636209105, 0.34451911410230168, 0.69065253586464992], - [0.37094151551337329, 0.33985591488818123, 0.68851703379505125], - [0.37072833279422668, 0.33518193808489577, 0.68632948169606767], - [0.37051738634484427, 0.33049741244307851, 0.68408888788857214], - [0.37030682071842685, 0.32580269697872455, 0.68179411684486679], - [0.37009487130772695, 0.3210981375964933, 0.67944405399056851], - [0.36987980329025361, 0.31638410101153364, 0.67703755438090574], - [0.36965987626565955, 0.31166098762951971, 0.67457344743419545], - [0.36943334591276228, 0.30692923551862339, 0.67205052849120617], - [0.36919847837592484, 0.30218932176507068, 0.66946754331614522], - [0.36895355306596778, 0.29744175492366276, 0.66682322089824264], - [0.36869682231895268, 0.29268709856150099, 0.66411625298236909], - [0.36842655638020444, 0.28792596437778462, 0.66134526910944602], - [0.36814101479899719, 0.28315901221182987, 0.65850888806972308], - [0.36783843696531082, 0.27838697181297761, 0.65560566838453704], - [0.36751707094367697, 0.27361063317090978, 0.65263411711618635], - [0.36717513650699446, 0.26883085667326956, 0.64959272297892245], - [0.36681085540107988, 0.26404857724525643, 0.64647991652908243], - [0.36642243251550632, 0.25926481158628106, 0.64329409140765537], - [0.36600853966739794, 0.25448043878086224, 0.64003361803368586], - [0.36556698373538982, 0.24969683475296395, 0.63669675187488584], - [0.36509579845886808, 0.24491536803550484, 0.63328173520055586], - [0.36459308890125008, 0.24013747024823828, 0.62978680155026101], - [0.36405693022088509, 0.23536470386204195, 0.62621013451953023], - [0.36348537610385145, 0.23059876218396419, 0.62254988622392882], - [0.36287643560041027, 0.22584149293287031, 0.61880417410823019], - [0.36222809558295926, 0.22109488427338303, 0.61497112346096128], - [0.36153829010998356, 0.21636111429594002, 0.61104880679640927], - [0.36080493826624654, 0.21164251793458128, 0.60703532172064711], - [0.36002681809096376, 0.20694122817889948, 0.60292845431916875], - [0.35920088560930186, 0.20226037920758122, 0.5987265295935138], - [0.35832489966617809, 0.197602942459778, 0.59442768517501066], - [0.35739663292915563, 0.19297208197842461, 0.59003011251063131], - [0.35641381143126327, 0.18837119869242164, 0.5855320765920552], - [0.35537415306906722, 0.18380392577704466, 0.58093191431832802], - [0.35427534960663759, 0.17927413271618647, 0.57622809660668717], - [0.35311574421123737, 0.17478570377561287, 0.57141871523555288], - [0.35189248608873791, 
0.17034320478524959, 0.56650284911216653], - [0.35060304441931012, 0.16595129984720861, 0.56147964703993225], - [0.34924513554955644, 0.16161477763045118, 0.55634837474163779], - [0.34781653238777782, 0.15733863511152979, 0.55110853452703257], - [0.34631507175793091, 0.15312802296627787, 0.5457599924248665], - [0.34473901574536375, 0.14898820589826409, 0.54030245920406539], - [0.34308600291572294, 0.14492465359918028, 0.53473704282067103], - [0.34135411074506483, 0.1409427920655632, 0.52906500940336754], - [0.33954168752669694, 0.13704801896718169, 0.52328797535085236], - [0.33764732090671112, 0.13324562282438077, 0.51740807573979475], - [0.33566978565015315, 0.12954074251271822, 0.51142807215168951], - [0.33360804901486002, 0.12593818301005921, 0.50535164796654897], - [0.33146154891145124, 0.12244245263391232, 0.49918274588431072], - [0.32923005203231409, 0.11905764321981127, 0.49292595612342666], - [0.3269137124539796, 0.1157873496841953, 0.48658646495697461], - [0.32451307931207785, 0.11263459791730848, 0.48017007211645196], - [0.32202882276069322, 0.10960114111258401, 0.47368494725726878], - [0.31946262395497965, 0.10668879882392659, 0.46713728801395243], - [0.31681648089023501, 0.10389861387653518, 0.46053414662739794], - [0.31409278414755532, 0.10123077676403242, 0.45388335612058467], - [0.31129434479712365, 0.098684771934052201, 0.44719313715161618], - [0.30842444457210105, 0.096259385340577736, 0.44047194882050544], - [0.30548675819945936, 0.093952764840823738, 0.43372849999361113], - [0.30248536364574252, 0.091761187397303601, 0.42697404043749887], - [0.29942483960214772, 0.089682253716750038, 0.42021619665853854], - [0.29631000388905288, 0.087713250960463951, 0.41346259134143476], - [0.29314593096985248, 0.085850656889620708, 0.40672178082365834], - [0.28993792445176608, 0.08409078829085731, 0.40000214725256295], - [0.28669151388283165, 0.082429873848480689, 0.39331182532243375], - [0.28341239797185225, 0.080864153365499375, 0.38665868550105914], - [0.28010638576975472, 0.079389994802261526, 0.38005028528138707], - [0.27677939615815589, 0.078003941033788216, 0.37349382846504675], - [0.27343739342450812, 0.076702800237496066, 0.36699616136347685], - [0.27008637749114051, 0.075483675584275545, 0.36056376228111864], - [0.26673233211995284, 0.074344018028546205, 0.35420276066240958], - [0.26338121807151404, 0.073281657939897077, 0.34791888996380105], - [0.26003895187439957, 0.072294781043362205, 0.3417175669546984], - [0.25671191651083902, 0.071380106242082242, 0.33560648984600089], - [0.25340685873736807, 0.070533582926851829, 0.3295945757321303], - [0.25012845306199383, 0.069758206429106989, 0.32368100685760637], - [0.24688226237958999, 0.069053639449204451, 0.31786993834254956], - [0.24367372557466271, 0.068419855150922693, 0.31216524050888372], - [0.24050813332295939, 0.067857103814855602, 0.30657054493678321], - [0.23739062429054825, 0.067365888050555517, 0.30108922184065873], - [0.23433055727563878, 0.066935599661639394, 0.29574009929867601], - [0.23132955273021344, 0.066576186939090592, 0.29051361067988485], - [0.2283917709422868, 0.06628997924139618, 0.28541074411068496], - [0.22552164337737857, 0.066078173119395595, 0.28043398847505197], - [0.22272706739121817, 0.065933790675651943, 0.27559714652053702], - [0.22001251100779617, 0.065857918918907604, 0.27090279994325861], - [0.21737845072382705, 0.065859661233562045, 0.26634209349669508], - [0.21482843531473683, 0.065940385613778491, 0.26191675992376573], - [0.21237411048541005, 0.066085024661758446, 0.25765165093569542], - 
[0.21001214221188125, 0.066308573918947178, 0.2535289048041211], - [0.2077442377448806, 0.06661453200418091, 0.24954644291943817], - [0.20558051999470117, 0.066990462397868739, 0.24572497420147632], - [0.20352007949514977, 0.067444179612424215, 0.24205576625191821], - [0.20156133764129841, 0.067983271026200248, 0.23852974228695395], - [0.19971571438603364, 0.068592710553704722, 0.23517094067076993], - [0.19794834061899208, 0.069314066071660657, 0.23194647381302336], - [0.1960826032659409, 0.070321227242423623, 0.22874673279569585], - [0.19410351363791453, 0.071608304856891569, 0.22558727307410353], - [0.19199449184606268, 0.073182830649273306, 0.22243385243433622], - [0.18975853639094634, 0.075019861862143766, 0.2193005075652994], - [0.18739228342697645, 0.077102096899588329, 0.21618875376309582], - [0.18488035509396164, 0.079425730279723883, 0.21307651648984993], - [0.18774482037046955, 0.077251588468039312, 0.21387448578597812], - [0.19049578401722037, 0.075311278416787641, 0.2146562337112265], - [0.1931548636579131, 0.073606819040117955, 0.21542362939081539], - [0.19571853588267552, 0.072157781039602742, 0.21617499187076789], - [0.19819343656336558, 0.070974625252738788, 0.21690975060032436], - [0.20058760685133747, 0.070064576149984209, 0.21762721310371608], - [0.20290365333558247, 0.069435248580458964, 0.21833167885096033], - [0.20531725273301316, 0.068919592266397572, 0.21911516689288835], - [0.20785704662965598, 0.068484398797025281, 0.22000133917653536], - [0.21052882914958676, 0.06812195249816172, 0.22098759107715404], - [0.2133313859647627, 0.067830148426026665, 0.22207043213024291], - [0.21625279838647882, 0.067616330270516389, 0.22324568672294431], - [0.21930503925136402, 0.067465786362940039, 0.22451023616807558], - [0.22247308588973624, 0.067388214053092838, 0.22585960379408354], - [0.2257539681670791, 0.067382132300147474, 0.22728984778098055], - [0.22915620278592841, 0.067434730871152565, 0.22879681433956656], - [0.23266299920501882, 0.067557104388479783, 0.23037617493752832], - [0.23627495835774248, 0.06774359820987802, 0.23202360805926608], - [0.23999586188690308, 0.067985029964779953, 0.23373434258507808], - [0.24381149720247919, 0.068289851529011875, 0.23550427698321885], - [0.24772092990501099, 0.068653337909486523, 0.2373288009471749], - [0.25172899728289466, 0.069064630826035506, 0.23920260612763083], - [0.25582135547481771, 0.06953231029187984, 0.24112190491594204], - [0.25999463887892144, 0.070053855603861875, 0.24308218808684579], - [0.26425512207060942, 0.070616595622995437, 0.24507758869355967], - [0.26859095948172862, 0.071226716277922458, 0.24710443563450618], - [0.27299701518897301, 0.071883555446163511, 0.24915847093232929], - [0.27747150809142801, 0.072582969899254779, 0.25123493995942769], - [0.28201746297366942, 0.073315693214040967, 0.25332800295084507], - [0.28662309235899847, 0.074088460826808866, 0.25543478673717029], - [0.29128515387578635, 0.074899049847466703, 0.25755101595750435], - [0.2960004726065818, 0.075745336000958424, 0.25967245030364566], - [0.30077276812918691, 0.076617824336164764, 0.26179294097819672], - [0.30559226007249934, 0.077521963107537312, 0.26391006692119662], - [0.31045520848595526, 0.078456871676182177, 0.2660200572779356], - [0.31535870009205808, 0.079420997315243186, 0.26811904076941961], - [0.32029986557994061, 0.080412994737554838, 0.27020322893039511], - [0.32527888860401261, 0.081428390076546092, 0.27226772884656186], - [0.33029174471181438, 0.08246763389003825, 0.27430929404579435], - [0.33533353224455448, 
0.083532434119003962, 0.27632534356790039], - [0.34040164359597463, 0.084622236191702671, 0.27831254595259397], - [0.34549355713871799, 0.085736654965126335, 0.28026769921081435], - [0.35060678246032478, 0.08687555176033529, 0.28218770540182386], - [0.35573889947341125, 0.088038974350243354, 0.2840695897279818], - [0.36088752387578377, 0.089227194362745205, 0.28591050458531014], - [0.36605031412464006, 0.090440685427697898, 0.2877077458811747], - [0.37122508431309342, 0.091679997480262732, 0.28945865397633169], - [0.3764103053221462, 0.092945198093777909, 0.29116024157313919], - [0.38160247377467543, 0.094238731263712183, 0.29281107506269488], - [0.38679939079544168, 0.09556181960083443, 0.29440901248173756], - [0.39199887556812907, 0.09691583650296684, 0.29595212005509081], - [0.39719876876325577, 0.098302320968278623, 0.29743856476285779], - [0.40239692379737496, 0.099722930314950553, 0.29886674369733968], - [0.40759120392688708, 0.10117945586419633, 0.30023519507728602], - [0.41277985630360303, 0.1026734006932461, 0.30154226437468967], - [0.41796105205173684, 0.10420644885760968, 0.30278652039631843], - [0.42313214269556043, 0.10578120994917611, 0.3039675809469457], - [0.42829101315789753, 0.1073997763055258, 0.30508479060294547], - [0.4334355841041439, 0.1090642347484701, 0.30613767928289148], - [0.43856378187931538, 0.11077667828375456, 0.30712600062348083], - [0.44367358645071275, 0.11253912421257944, 0.30804973095465449], - [0.44876299173174822, 0.11435355574622549, 0.30890905921943196], - [0.45383005086999889, 0.11622183788331528, 0.30970441249844921], - [0.45887288947308297, 0.11814571137706886, 0.31043636979038808], - [0.46389102840284874, 0.12012561256850712, 0.31110343446582983], - [0.46888111384598413, 0.12216445576414045, 0.31170911458932665], - [0.473841437035254, 0.12426354237989065, 0.31225470169927194], - [0.47877034239726296, 0.12642401401409453, 0.31274172735821959], - [0.48366628618847957, 0.12864679022013889, 0.31317188565991266], - [0.48852847371852987, 0.13093210934893723, 0.31354553695453014], - [0.49335504375145617, 0.13328091630401023, 0.31386561956734976], - [0.49814435462074153, 0.13569380302451714, 0.314135190862664], - [0.50289524974970612, 0.13817086581280427, 0.31435662153833671], - [0.50760681181053691, 0.14071192654913128, 0.31453200120082569], - [0.51227835105321762, 0.14331656120063752, 0.3146630922831542], - [0.51690848800544464, 0.14598463068714407, 0.31475407592280041], - [0.52149652863229956, 0.14871544765633712, 0.31480767954534428], - [0.52604189625477482, 0.15150818660835483, 0.31482653406646727], - [0.53054420489856446, 0.15436183633886777, 0.31481299789187128], - [0.5350027976174474, 0.15727540775107324, 0.31477085207396532], - [0.53941736649199057, 0.16024769309971934, 0.31470295028655965], - [0.54378771313608565, 0.16327738551419116, 0.31461204226295625], - [0.54811370033467621, 0.1663630904279047, 0.31450102990914708], - [0.55239521572711914, 0.16950338809328983, 0.31437291554615371], - [0.55663229034969341, 0.17269677158182117, 0.31423043195101424], - [0.56082499039117173, 0.17594170887918095, 0.31407639883970623], - [0.56497343529017696, 0.17923664950367169, 0.3139136046337036], - [0.56907784784011428, 0.18258004462335425, 0.31374440956796529], - [0.57313845754107873, 0.18597036007065024, 0.31357126868520002], - [0.57715550812992045, 0.18940601489760422, 0.31339704333572083], - [0.58112932761586555, 0.19288548904692518, 0.31322399394183942], - [0.58506024396466882, 0.19640737049066315, 0.31305401163732732], - [0.58894861935544707, 
0.19997020971775276, 0.31288922211590126], - [0.59279480536520257, 0.20357251410079796, 0.31273234839304942], - [0.59659918109122367, 0.207212956082026, 0.31258523031121233], - [0.60036213010411577, 0.21089030138947745, 0.31244934410414688], - [0.60408401696732739, 0.21460331490206347, 0.31232652641170694], - [0.60776523994818654, 0.21835070166659282, 0.31221903291870201], - [0.6114062072731884, 0.22213124697023234, 0.31212881396435238], - [0.61500723236391375, 0.22594402043981826, 0.31205680685765741], - [0.61856865258877192, 0.22978799249179921, 0.31200463838728931], - [0.62209079821082613, 0.2336621873300741, 0.31197383273627388], - [0.62557416500434959, 0.23756535071152696, 0.31196698314912269], - [0.62901892016985872, 0.24149689191922535, 0.31198447195645718], - [0.63242534854210275, 0.24545598775548677, 0.31202765974624452], - [0.6357937104834237, 0.24944185818822678, 0.31209793953300591], - [0.6391243387840212, 0.25345365461983138, 0.31219689612063978], - [0.642417577481186, 0.257490519876798, 0.31232631707560987], - [0.64567349382645434, 0.26155203161615281, 0.31248673753935263], - [0.64889230169458245, 0.26563755336209077, 0.31267941819570189], - [0.65207417290277303, 0.26974650525236699, 0.31290560605819168], - [0.65521932609327127, 0.27387826652410152, 0.3131666792687211], - [0.6583280801134499, 0.27803210957665631, 0.3134643447952643], - [0.66140037532601781, 0.28220778870555907, 0.31379912926498488], - [0.66443632469878844, 0.28640483614256179, 0.31417223403606975], - [0.66743603766369131, 0.29062280081258873, 0.31458483752056837], - [0.67039959547676198, 0.29486126309253047, 0.31503813956872212], - [0.67332725564817331, 0.29911962764489264, 0.31553372323982209], - [0.67621897924409746, 0.30339762792450425, 0.3160724937230589], - [0.67907474028157344, 0.30769497879760166, 0.31665545668946665], - [0.68189457150944521, 0.31201133280550686, 0.31728380489244951], - [0.68467850942494535, 0.31634634821222207, 0.31795870784057567], - [0.68742656435169625, 0.32069970535138104, 0.31868137622277692], - [0.6901389321505248, 0.32507091815606004, 0.31945332332898302], - [0.69281544846764931, 0.32945984647042675, 0.3202754315314667], - [0.69545608346891119, 0.33386622163232865, 0.32114884306985791], - [0.6980608153581771, 0.33828976326048621, 0.32207478855218091], - [0.70062962477242097, 0.34273019305341756, 0.32305449047765694], - [0.70316249458814151, 0.34718723719597999, 0.32408913679491225], - [0.70565951122610093, 0.35166052978120937, 0.32518014084085567], - [0.70812059568420482, 0.35614985523380299, 0.32632861885644465], - [0.7105456546582587, 0.36065500290840113, 0.32753574162788762], - [0.71293466839773467, 0.36517570519856757, 0.3288027427038317], - [0.71528760614847287, 0.36971170225223449, 0.3301308728723546], - [0.71760444908133847, 0.37426272710686193, 0.33152138620958932], - [0.71988521490549851, 0.37882848839337313, 0.33297555200245399], - [0.7221299918421461, 0.38340864508963057, 0.33449469983585844], - [0.72433865647781592, 0.38800301593162145, 0.33607995965691828], - [0.72651122900227549, 0.3926113126792577, 0.3377325942005665], - [0.72864773856716547, 0.39723324476747235, 0.33945384341064017], - [0.73074820754845171, 0.401868526884681, 0.3412449533046818], - [0.73281270506268747, 0.4065168468778026, 0.34310715173410822], - [0.73484133598564938, 0.41117787004519513, 0.34504169470809071], - [0.73683422173585866, 0.41585125850290111, 0.34704978520758401], - [0.73879140024599266, 0.42053672992315327, 0.34913260148542435], - [0.74071301619506091, 0.4252339389526239, 
0.35129130890802607], - [0.7425992159973317, 0.42994254036133867, 0.35352709245374592], - [0.74445018676570673, 0.43466217184617112, 0.35584108091122535], - [0.74626615789163442, 0.43939245044973502, 0.35823439142300639], - [0.74804739275559562, 0.44413297780351974, 0.36070813602540136], - [0.74979420547170472, 0.44888333481548809, 0.36326337558360278], - [0.75150685045891663, 0.45364314496866825, 0.36590112443835765], - [0.75318566369046569, 0.45841199172949604, 0.36862236642234769], - [0.75483105066959544, 0.46318942799460555, 0.3714280448394211], - [0.75644341577140706, 0.46797501437948458, 0.37431909037543515], - [0.75802325538455839, 0.4727682731566229, 0.37729635531096678], - [0.75957111105340058, 0.47756871222057079, 0.380360657784311], - [0.7610876378057071, 0.48237579130289127, 0.38351275723852291], - [0.76257333554052609, 0.48718906673415824, 0.38675335037837993], - [0.76402885609288662, 0.49200802533379656, 0.39008308392311997], - [0.76545492593330511, 0.49683212909727231, 0.39350254000115381], - [0.76685228950643891, 0.5016608471009063, 0.39701221751773474], - [0.76822176599735303, 0.50649362371287909, 0.40061257089416885], - [0.7695642334401418, 0.5113298901696085, 0.40430398069682483], - [0.77088091962302474, 0.51616892643469103, 0.40808667584648967], - [0.77217257229605551, 0.5210102658711383, 0.41196089987122869], - [0.77344021829889886, 0.52585332093451564, 0.41592679539764366], - [0.77468494746063199, 0.53069749384776732, 0.41998440356963762], - [0.77590790730685699, 0.53554217882461186, 0.42413367909988375], - [0.7771103295521099, 0.54038674910561235, 0.42837450371258479], - [0.77829345807633121, 0.54523059488426595, 0.432706647838971], - [0.77945862731506643, 0.55007308413977274, 0.43712979856444761], - [0.78060774749483774, 0.55491335744890613, 0.44164332426364639], - [0.78174180478981836, 0.55975098052594863, 0.44624687186865436], - [0.78286225264440912, 0.56458533111166875, 0.45093985823706345], - [0.78397060836414478, 0.56941578326710418, 0.45572154742892063], - [0.78506845019606841, 0.5742417003617839, 0.46059116206904965], - [0.78615737132332963, 0.5790624629815756, 0.46554778281918402], - [0.78723904108188347, 0.58387743744557208, 0.47059039582133383], - [0.78831514045623963, 0.58868600173562435, 0.47571791879076081], - [0.78938737766251943, 0.5934875421745599, 0.48092913815357724], - [0.79045776847727878, 0.59828134277062461, 0.48622257801969754], - [0.79152832843475607, 0.60306670593147205, 0.49159667021646397], - [0.79260034304237448, 0.60784322087037024, 0.49705020621532009], - [0.79367559698664958, 0.61261029334072192, 0.50258161291269432], - [0.79475585972654039, 0.61736734400220705, 0.50818921213102985], - [0.79584292379583765, 0.62211378808451145, 0.51387124091909786], - [0.79693854719951607, 0.62684905679296699, 0.5196258425240281], - [0.79804447815136637, 0.63157258225089552, 0.52545108144834785], - [0.7991624518501963, 0.63628379372029187, 0.53134495942561433], - [0.80029415389753977, 0.64098213306749863, 0.53730535185141037], - [0.80144124292560048, 0.64566703459218766, 0.5433300863249918], - [0.80260531146112946, 0.65033793748103852, 0.54941691584603647], - [0.80378792531077625, 0.65499426549472628, 0.55556350867083815], - [0.80499054790810298, 0.65963545027564163, 0.56176745110546977], - [0.80621460526927058, 0.66426089585282289, 0.56802629178649788], - [0.8074614045096935, 0.6688700095398864, 0.57433746373459582], - [0.80873219170089694, 0.67346216702194517, 0.58069834805576737], - [0.81002809466520687, 0.67803672673971815, 0.58710626908082753], - 
[0.81135014011763329, 0.68259301546243389, 0.59355848909050757], - [0.81269922039881493, 0.68713033714618876, 0.60005214820435104], - [0.81407611046993344, 0.69164794791482131, 0.6065843782630862], - [0.81548146627279483, 0.69614505508308089, 0.61315221209322646], - [0.81691575775055891, 0.70062083014783982, 0.61975260637257923], - [0.81837931164498223, 0.70507438189635097, 0.62638245478933297], - [0.81987230650455289, 0.70950474978787481, 0.63303857040067113], - [0.8213947205565636, 0.7139109141951604, 0.63971766697672761], - [0.82294635110428427, 0.71829177331290062, 0.6464164243818421], - [0.8245268129450285, 0.72264614312088882, 0.65313137915422603], - [0.82613549710580259, 0.72697275518238258, 0.65985900156216504], - [0.8277716072353446, 0.73127023324078089, 0.66659570204682972], - [0.82943407816481474, 0.7355371221572935, 0.67333772009301907], - [0.83112163529096306, 0.73977184647638616, 0.68008125203631464], - [0.83283277185777982, 0.74397271817459876, 0.68682235874648545], - [0.8345656905566583, 0.7481379479992134, 0.69355697649863846], - [0.83631898844737929, 0.75226548952875261, 0.70027999028864962], - [0.83809123476131964, 0.75635314860808633, 0.70698561390212977], - [0.83987839884120874, 0.76039907199779677, 0.71367147811129228], - [0.84167750766845151, 0.76440101200982946, 0.72033299387284622], - [0.84348529222933699, 0.76835660399870176, 0.72696536998972039], - [0.84529810731955113, 0.77226338601044719, 0.73356368240541492], - [0.84711195507965098, 0.77611880236047159, 0.74012275762807056], - [0.84892245563117641, 0.77992021407650147, 0.74663719293664366], - [0.85072697023178789, 0.78366457342383888, 0.7530974636118285], - [0.85251907207708444, 0.78734936133548439, 0.7594994148789691], - [0.85429219611470464, 0.79097196777091994, 0.76583801477914104], - [0.85604022314725403, 0.79452963601550608, 0.77210610037674143], - [0.85775662943504905, 0.79801963142713928, 0.77829571667247499], - [0.8594346370300241, 0.8014392309950078, 0.78439788751383921], - [0.86107117027565516, 0.80478517909812231, 0.79039529663736285], - [0.86265601051127572, 0.80805523804261525, 0.796282666437655], - [0.86418343723941027, 0.81124644224653542, 0.80204612696863953], - [0.86564934325605325, 0.81435544067514909, 0.80766972324164554], - [0.86705314907048503, 0.81737804041911244, 0.81313419626911398], - [0.86839954695818633, 0.82030875512181523, 0.81841638963128993], - [0.86969131502613806, 0.82314158859569164, 0.82350476683173168], - [0.87093846717297507, 0.82586857889438514, 0.82838497261149613], - [0.87215331978454325, 0.82848052823709672, 0.8330486712880828], - [0.87335171360916275, 0.83096715251272624, 0.83748851001197089], - [0.87453793320260187, 0.83331972948645461, 0.84171925358069011], - [0.87571458709961403, 0.8355302318472394, 0.84575537519027078], - [0.87687848451614692, 0.83759238071186537, 0.84961373549150254], - [0.87802298436649007, 0.83950165618540074, 0.85330645352458923], - [0.87913244240792765, 0.84125554884475906, 0.85685572291039636], - [0.88019293315695812, 0.84285224824778615, 0.86027399927156634], - [0.88119169871341951, 0.84429066717717349, 0.86356595168669881], - [0.88211542489401606, 0.84557007254559347, 0.86673765046233331], - [0.88295168595448525, 0.84668970275699273, 0.86979617048190971], - [0.88369127145898041, 0.84764891761519268, 0.87274147101441557], - [0.88432713054113543, 0.84844741572055415, 0.87556785228242973], - [0.88485138159908572, 0.84908426422893801, 0.87828235285372469], - [0.88525897972630474, 0.84955892810989209, 0.88088414794024839], - 
[0.88554714811952384, 0.84987174283631584, 0.88336206121170946], - [0.88571155122845646, 0.85002186115856315, 0.88572538990087124]] - -_twilight_shifted_data = (_twilight_data[len(_twilight_data)//2:] + - _twilight_data[:len(_twilight_data)//2]) -_twilight_shifted_data.reverse() -_turbo_data = [[0.18995, 0.07176, 0.23217], - [0.19483, 0.08339, 0.26149], - [0.19956, 0.09498, 0.29024], - [0.20415, 0.10652, 0.31844], - [0.20860, 0.11802, 0.34607], - [0.21291, 0.12947, 0.37314], - [0.21708, 0.14087, 0.39964], - [0.22111, 0.15223, 0.42558], - [0.22500, 0.16354, 0.45096], - [0.22875, 0.17481, 0.47578], - [0.23236, 0.18603, 0.50004], - [0.23582, 0.19720, 0.52373], - [0.23915, 0.20833, 0.54686], - [0.24234, 0.21941, 0.56942], - [0.24539, 0.23044, 0.59142], - [0.24830, 0.24143, 0.61286], - [0.25107, 0.25237, 0.63374], - [0.25369, 0.26327, 0.65406], - [0.25618, 0.27412, 0.67381], - [0.25853, 0.28492, 0.69300], - [0.26074, 0.29568, 0.71162], - [0.26280, 0.30639, 0.72968], - [0.26473, 0.31706, 0.74718], - [0.26652, 0.32768, 0.76412], - [0.26816, 0.33825, 0.78050], - [0.26967, 0.34878, 0.79631], - [0.27103, 0.35926, 0.81156], - [0.27226, 0.36970, 0.82624], - [0.27334, 0.38008, 0.84037], - [0.27429, 0.39043, 0.85393], - [0.27509, 0.40072, 0.86692], - [0.27576, 0.41097, 0.87936], - [0.27628, 0.42118, 0.89123], - [0.27667, 0.43134, 0.90254], - [0.27691, 0.44145, 0.91328], - [0.27701, 0.45152, 0.92347], - [0.27698, 0.46153, 0.93309], - [0.27680, 0.47151, 0.94214], - [0.27648, 0.48144, 0.95064], - [0.27603, 0.49132, 0.95857], - [0.27543, 0.50115, 0.96594], - [0.27469, 0.51094, 0.97275], - [0.27381, 0.52069, 0.97899], - [0.27273, 0.53040, 0.98461], - [0.27106, 0.54015, 0.98930], - [0.26878, 0.54995, 0.99303], - [0.26592, 0.55979, 0.99583], - [0.26252, 0.56967, 0.99773], - [0.25862, 0.57958, 0.99876], - [0.25425, 0.58950, 0.99896], - [0.24946, 0.59943, 0.99835], - [0.24427, 0.60937, 0.99697], - [0.23874, 0.61931, 0.99485], - [0.23288, 0.62923, 0.99202], - [0.22676, 0.63913, 0.98851], - [0.22039, 0.64901, 0.98436], - [0.21382, 0.65886, 0.97959], - [0.20708, 0.66866, 0.97423], - [0.20021, 0.67842, 0.96833], - [0.19326, 0.68812, 0.96190], - [0.18625, 0.69775, 0.95498], - [0.17923, 0.70732, 0.94761], - [0.17223, 0.71680, 0.93981], - [0.16529, 0.72620, 0.93161], - [0.15844, 0.73551, 0.92305], - [0.15173, 0.74472, 0.91416], - [0.14519, 0.75381, 0.90496], - [0.13886, 0.76279, 0.89550], - [0.13278, 0.77165, 0.88580], - [0.12698, 0.78037, 0.87590], - [0.12151, 0.78896, 0.86581], - [0.11639, 0.79740, 0.85559], - [0.11167, 0.80569, 0.84525], - [0.10738, 0.81381, 0.83484], - [0.10357, 0.82177, 0.82437], - [0.10026, 0.82955, 0.81389], - [0.09750, 0.83714, 0.80342], - [0.09532, 0.84455, 0.79299], - [0.09377, 0.85175, 0.78264], - [0.09287, 0.85875, 0.77240], - [0.09267, 0.86554, 0.76230], - [0.09320, 0.87211, 0.75237], - [0.09451, 0.87844, 0.74265], - [0.09662, 0.88454, 0.73316], - [0.09958, 0.89040, 0.72393], - [0.10342, 0.89600, 0.71500], - [0.10815, 0.90142, 0.70599], - [0.11374, 0.90673, 0.69651], - [0.12014, 0.91193, 0.68660], - [0.12733, 0.91701, 0.67627], - [0.13526, 0.92197, 0.66556], - [0.14391, 0.92680, 0.65448], - [0.15323, 0.93151, 0.64308], - [0.16319, 0.93609, 0.63137], - [0.17377, 0.94053, 0.61938], - [0.18491, 0.94484, 0.60713], - [0.19659, 0.94901, 0.59466], - [0.20877, 0.95304, 0.58199], - [0.22142, 0.95692, 0.56914], - [0.23449, 0.96065, 0.55614], - [0.24797, 0.96423, 0.54303], - [0.26180, 0.96765, 0.52981], - [0.27597, 0.97092, 0.51653], - [0.29042, 0.97403, 0.50321], - [0.30513, 0.97697, 0.48987], - 
[0.32006, 0.97974, 0.47654], - [0.33517, 0.98234, 0.46325], - [0.35043, 0.98477, 0.45002], - [0.36581, 0.98702, 0.43688], - [0.38127, 0.98909, 0.42386], - [0.39678, 0.99098, 0.41098], - [0.41229, 0.99268, 0.39826], - [0.42778, 0.99419, 0.38575], - [0.44321, 0.99551, 0.37345], - [0.45854, 0.99663, 0.36140], - [0.47375, 0.99755, 0.34963], - [0.48879, 0.99828, 0.33816], - [0.50362, 0.99879, 0.32701], - [0.51822, 0.99910, 0.31622], - [0.53255, 0.99919, 0.30581], - [0.54658, 0.99907, 0.29581], - [0.56026, 0.99873, 0.28623], - [0.57357, 0.99817, 0.27712], - [0.58646, 0.99739, 0.26849], - [0.59891, 0.99638, 0.26038], - [0.61088, 0.99514, 0.25280], - [0.62233, 0.99366, 0.24579], - [0.63323, 0.99195, 0.23937], - [0.64362, 0.98999, 0.23356], - [0.65394, 0.98775, 0.22835], - [0.66428, 0.98524, 0.22370], - [0.67462, 0.98246, 0.21960], - [0.68494, 0.97941, 0.21602], - [0.69525, 0.97610, 0.21294], - [0.70553, 0.97255, 0.21032], - [0.71577, 0.96875, 0.20815], - [0.72596, 0.96470, 0.20640], - [0.73610, 0.96043, 0.20504], - [0.74617, 0.95593, 0.20406], - [0.75617, 0.95121, 0.20343], - [0.76608, 0.94627, 0.20311], - [0.77591, 0.94113, 0.20310], - [0.78563, 0.93579, 0.20336], - [0.79524, 0.93025, 0.20386], - [0.80473, 0.92452, 0.20459], - [0.81410, 0.91861, 0.20552], - [0.82333, 0.91253, 0.20663], - [0.83241, 0.90627, 0.20788], - [0.84133, 0.89986, 0.20926], - [0.85010, 0.89328, 0.21074], - [0.85868, 0.88655, 0.21230], - [0.86709, 0.87968, 0.21391], - [0.87530, 0.87267, 0.21555], - [0.88331, 0.86553, 0.21719], - [0.89112, 0.85826, 0.21880], - [0.89870, 0.85087, 0.22038], - [0.90605, 0.84337, 0.22188], - [0.91317, 0.83576, 0.22328], - [0.92004, 0.82806, 0.22456], - [0.92666, 0.82025, 0.22570], - [0.93301, 0.81236, 0.22667], - [0.93909, 0.80439, 0.22744], - [0.94489, 0.79634, 0.22800], - [0.95039, 0.78823, 0.22831], - [0.95560, 0.78005, 0.22836], - [0.96049, 0.77181, 0.22811], - [0.96507, 0.76352, 0.22754], - [0.96931, 0.75519, 0.22663], - [0.97323, 0.74682, 0.22536], - [0.97679, 0.73842, 0.22369], - [0.98000, 0.73000, 0.22161], - [0.98289, 0.72140, 0.21918], - [0.98549, 0.71250, 0.21650], - [0.98781, 0.70330, 0.21358], - [0.98986, 0.69382, 0.21043], - [0.99163, 0.68408, 0.20706], - [0.99314, 0.67408, 0.20348], - [0.99438, 0.66386, 0.19971], - [0.99535, 0.65341, 0.19577], - [0.99607, 0.64277, 0.19165], - [0.99654, 0.63193, 0.18738], - [0.99675, 0.62093, 0.18297], - [0.99672, 0.60977, 0.17842], - [0.99644, 0.59846, 0.17376], - [0.99593, 0.58703, 0.16899], - [0.99517, 0.57549, 0.16412], - [0.99419, 0.56386, 0.15918], - [0.99297, 0.55214, 0.15417], - [0.99153, 0.54036, 0.14910], - [0.98987, 0.52854, 0.14398], - [0.98799, 0.51667, 0.13883], - [0.98590, 0.50479, 0.13367], - [0.98360, 0.49291, 0.12849], - [0.98108, 0.48104, 0.12332], - [0.97837, 0.46920, 0.11817], - [0.97545, 0.45740, 0.11305], - [0.97234, 0.44565, 0.10797], - [0.96904, 0.43399, 0.10294], - [0.96555, 0.42241, 0.09798], - [0.96187, 0.41093, 0.09310], - [0.95801, 0.39958, 0.08831], - [0.95398, 0.38836, 0.08362], - [0.94977, 0.37729, 0.07905], - [0.94538, 0.36638, 0.07461], - [0.94084, 0.35566, 0.07031], - [0.93612, 0.34513, 0.06616], - [0.93125, 0.33482, 0.06218], - [0.92623, 0.32473, 0.05837], - [0.92105, 0.31489, 0.05475], - [0.91572, 0.30530, 0.05134], - [0.91024, 0.29599, 0.04814], - [0.90463, 0.28696, 0.04516], - [0.89888, 0.27824, 0.04243], - [0.89298, 0.26981, 0.03993], - [0.88691, 0.26152, 0.03753], - [0.88066, 0.25334, 0.03521], - [0.87422, 0.24526, 0.03297], - [0.86760, 0.23730, 0.03082], - [0.86079, 0.22945, 0.02875], - [0.85380, 0.22170, 
0.02677], - [0.84662, 0.21407, 0.02487], - [0.83926, 0.20654, 0.02305], - [0.83172, 0.19912, 0.02131], - [0.82399, 0.19182, 0.01966], - [0.81608, 0.18462, 0.01809], - [0.80799, 0.17753, 0.01660], - [0.79971, 0.17055, 0.01520], - [0.79125, 0.16368, 0.01387], - [0.78260, 0.15693, 0.01264], - [0.77377, 0.15028, 0.01148], - [0.76476, 0.14374, 0.01041], - [0.75556, 0.13731, 0.00942], - [0.74617, 0.13098, 0.00851], - [0.73661, 0.12477, 0.00769], - [0.72686, 0.11867, 0.00695], - [0.71692, 0.11268, 0.00629], - [0.70680, 0.10680, 0.00571], - [0.69650, 0.10102, 0.00522], - [0.68602, 0.09536, 0.00481], - [0.67535, 0.08980, 0.00449], - [0.66449, 0.08436, 0.00424], - [0.65345, 0.07902, 0.00408], - [0.64223, 0.07380, 0.00401], - [0.63082, 0.06868, 0.00401], - [0.61923, 0.06367, 0.00410], - [0.60746, 0.05878, 0.00427], - [0.59550, 0.05399, 0.00453], - [0.58336, 0.04931, 0.00486], - [0.57103, 0.04474, 0.00529], - [0.55852, 0.04028, 0.00579], - [0.54583, 0.03593, 0.00638], - [0.53295, 0.03169, 0.00705], - [0.51989, 0.02756, 0.00780], - [0.50664, 0.02354, 0.00863], - [0.49321, 0.01963, 0.00955], - [0.47960, 0.01583, 0.01055]] - - -cmaps = { - name: ListedColormap(data, name=name) for name, data in [ - ('magma', _magma_data), - ('inferno', _inferno_data), - ('plasma', _plasma_data), - ('viridis', _viridis_data), - ('cividis', _cividis_data), - ('twilight', _twilight_data), - ('twilight_shifted', _twilight_shifted_data), - ('turbo', _turbo_data), - ]} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_category.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_category.py deleted file mode 100644 index fd4aec88b57435d5c5fc833df10b3c386badc183..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_category.py +++ /dev/null @@ -1,323 +0,0 @@ -"""Catch all for categorical functions""" -import warnings - -import pytest -import numpy as np - -import matplotlib as mpl -from matplotlib.axes import Axes -import matplotlib.pyplot as plt -import matplotlib.category as cat -from matplotlib.testing.decorators import check_figures_equal - - -class TestUnitData: - test_cases = [('single', (["hello world"], [0])), - ('unicode', (["Здравствуйте мир"], [0])), - ('mixed', (['A', "np.nan", 'B', "3.14", "мир"], - [0, 1, 2, 3, 4]))] - ids, data = zip(*test_cases) - - @pytest.mark.parametrize("data, locs", data, ids=ids) - def test_unit(self, data, locs): - unit = cat.UnitData(data) - assert list(unit._mapping.keys()) == data - assert list(unit._mapping.values()) == locs - - def test_update(self): - data = ['a', 'd'] - locs = [0, 1] - - data_update = ['b', 'd', 'e'] - unique_data = ['a', 'd', 'b', 'e'] - updated_locs = [0, 1, 2, 3] - - unit = cat.UnitData(data) - assert list(unit._mapping.keys()) == data - assert list(unit._mapping.values()) == locs - - unit.update(data_update) - assert list(unit._mapping.keys()) == unique_data - assert list(unit._mapping.values()) == updated_locs - - failing_test_cases = [("number", 3.14), ("nan", np.nan), - ("list", [3.14, 12]), ("mixed type", ["A", 2])] - - fids, fdata = zip(*test_cases) - - @pytest.mark.parametrize("fdata", fdata, ids=fids) - def test_non_string_fails(self, fdata): - with pytest.raises(TypeError): - cat.UnitData(fdata) - - @pytest.mark.parametrize("fdata", fdata, ids=fids) - def test_non_string_update_fails(self, fdata): - unitdata = cat.UnitData() - with pytest.raises(TypeError): - unitdata.update(fdata) - - -class 
FakeAxis: - def __init__(self, units): - self.units = units - - -class TestStrCategoryConverter: - """ - Based on the pandas conversion and factorization tests: - - ref: /pandas/tseries/tests/test_converter.py - /pandas/tests/test_algos.py:TestFactorize - """ - test_cases = [("unicode", ["Здравствуйте мир"]), - ("ascii", ["hello world"]), - ("single", ['a', 'b', 'c']), - ("integer string", ["1", "2"]), - ("single + values>10", ["A", "B", "C", "D", "E", "F", "G", - "H", "I", "J", "K", "L", "M", "N", - "O", "P", "Q", "R", "S", "T", "U", - "V", "W", "X", "Y", "Z"])] - - ids, values = zip(*test_cases) - - failing_test_cases = [("mixed", [3.14, 'A', np.inf]), - ("string integer", ['42', 42])] - - fids, fvalues = zip(*failing_test_cases) - - @pytest.fixture(autouse=True) - def mock_axis(self, request): - self.cc = cat.StrCategoryConverter() - # self.unit should be probably be replaced with real mock unit - self.unit = cat.UnitData() - self.ax = FakeAxis(self.unit) - - @pytest.mark.parametrize("vals", values, ids=ids) - def test_convert(self, vals): - np.testing.assert_allclose(self.cc.convert(vals, self.ax.units, - self.ax), - range(len(vals))) - - @pytest.mark.parametrize("value", ["hi", "мир"], ids=["ascii", "unicode"]) - def test_convert_one_string(self, value): - assert self.cc.convert(value, self.unit, self.ax) == 0 - - @pytest.mark.parametrize("fvals", fvalues, ids=fids) - def test_convert_fail(self, fvals): - with pytest.raises(TypeError): - self.cc.convert(fvals, self.unit, self.ax) - - def test_axisinfo(self): - axis = self.cc.axisinfo(self.unit, self.ax) - assert isinstance(axis.majloc, cat.StrCategoryLocator) - assert isinstance(axis.majfmt, cat.StrCategoryFormatter) - - def test_default_units(self): - assert isinstance(self.cc.default_units(["a"], self.ax), cat.UnitData) - - -PLOT_LIST = [Axes.scatter, Axes.plot, Axes.bar] -PLOT_IDS = ["scatter", "plot", "bar"] - - -class TestStrCategoryLocator: - def test_StrCategoryLocator(self): - locs = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] - unit = cat.UnitData([str(j) for j in locs]) - ticks = cat.StrCategoryLocator(unit._mapping) - np.testing.assert_array_equal(ticks.tick_values(None, None), locs) - - @pytest.mark.parametrize("plotter", PLOT_LIST, ids=PLOT_IDS) - def test_StrCategoryLocatorPlot(self, plotter): - ax = plt.figure().subplots() - plotter(ax, [1, 2, 3], ["a", "b", "c"]) - np.testing.assert_array_equal(ax.yaxis.major.locator(), range(3)) - - -class TestStrCategoryFormatter: - test_cases = [("ascii", ["hello", "world", "hi"]), - ("unicode", ["Здравствуйте", "привет"])] - - ids, cases = zip(*test_cases) - - @pytest.mark.parametrize("ydata", cases, ids=ids) - def test_StrCategoryFormatter(self, ydata): - unit = cat.UnitData(ydata) - labels = cat.StrCategoryFormatter(unit._mapping) - for i, d in enumerate(ydata): - assert labels(i, i) == d - assert labels(i, None) == d - - @pytest.mark.parametrize("ydata", cases, ids=ids) - @pytest.mark.parametrize("plotter", PLOT_LIST, ids=PLOT_IDS) - def test_StrCategoryFormatterPlot(self, ydata, plotter): - ax = plt.figure().subplots() - plotter(ax, range(len(ydata)), ydata) - for i, d in enumerate(ydata): - assert ax.yaxis.major.formatter(i) == d - assert ax.yaxis.major.formatter(i+1) == "" - - -def axis_test(axis, labels): - ticks = list(range(len(labels))) - np.testing.assert_array_equal(axis.get_majorticklocs(), ticks) - graph_labels = [axis.major.formatter(i, i) for i in ticks] - # _text also decodes bytes as utf-8. 
- assert graph_labels == [cat.StrCategoryFormatter._text(l) for l in labels] - assert list(axis.units._mapping.keys()) == [l for l in labels] - assert list(axis.units._mapping.values()) == ticks - - -class TestPlotBytes: - bytes_cases = [('string list', ['a', 'b', 'c']), - ('bytes list', [b'a', b'b', b'c']), - ('bytes ndarray', np.array([b'a', b'b', b'c']))] - - bytes_ids, bytes_data = zip(*bytes_cases) - - @pytest.mark.parametrize("plotter", PLOT_LIST, ids=PLOT_IDS) - @pytest.mark.parametrize("bdata", bytes_data, ids=bytes_ids) - def test_plot_bytes(self, plotter, bdata): - ax = plt.figure().subplots() - counts = np.array([4, 6, 5]) - plotter(ax, bdata, counts) - axis_test(ax.xaxis, bdata) - - -class TestPlotNumlike: - numlike_cases = [('string list', ['1', '11', '3']), - ('string ndarray', np.array(['1', '11', '3'])), - ('bytes list', [b'1', b'11', b'3']), - ('bytes ndarray', np.array([b'1', b'11', b'3']))] - numlike_ids, numlike_data = zip(*numlike_cases) - - @pytest.mark.parametrize("plotter", PLOT_LIST, ids=PLOT_IDS) - @pytest.mark.parametrize("ndata", numlike_data, ids=numlike_ids) - def test_plot_numlike(self, plotter, ndata): - ax = plt.figure().subplots() - counts = np.array([4, 6, 5]) - plotter(ax, ndata, counts) - axis_test(ax.xaxis, ndata) - - -class TestPlotTypes: - @pytest.mark.parametrize("plotter", PLOT_LIST, ids=PLOT_IDS) - def test_plot_unicode(self, plotter): - ax = plt.figure().subplots() - words = ['Здравствуйте', 'привет'] - plotter(ax, words, [0, 1]) - axis_test(ax.xaxis, words) - - @pytest.fixture - def test_data(self): - self.x = ["hello", "happy", "world"] - self.xy = [2, 6, 3] - self.y = ["Python", "is", "fun"] - self.yx = [3, 4, 5] - - @pytest.mark.usefixtures("test_data") - @pytest.mark.parametrize("plotter", PLOT_LIST, ids=PLOT_IDS) - def test_plot_xaxis(self, test_data, plotter): - ax = plt.figure().subplots() - plotter(ax, self.x, self.xy) - axis_test(ax.xaxis, self.x) - - @pytest.mark.usefixtures("test_data") - @pytest.mark.parametrize("plotter", PLOT_LIST, ids=PLOT_IDS) - def test_plot_yaxis(self, test_data, plotter): - ax = plt.figure().subplots() - plotter(ax, self.yx, self.y) - axis_test(ax.yaxis, self.y) - - @pytest.mark.usefixtures("test_data") - @pytest.mark.parametrize("plotter", PLOT_LIST, ids=PLOT_IDS) - def test_plot_xyaxis(self, test_data, plotter): - ax = plt.figure().subplots() - plotter(ax, self.x, self.y) - axis_test(ax.xaxis, self.x) - axis_test(ax.yaxis, self.y) - - @pytest.mark.parametrize("plotter", PLOT_LIST, ids=PLOT_IDS) - def test_update_plot(self, plotter): - ax = plt.figure().subplots() - plotter(ax, ['a', 'b'], ['e', 'g']) - plotter(ax, ['a', 'b', 'd'], ['f', 'a', 'b']) - plotter(ax, ['b', 'c', 'd'], ['g', 'e', 'd']) - axis_test(ax.xaxis, ['a', 'b', 'd', 'c']) - axis_test(ax.yaxis, ['e', 'g', 'f', 'a', 'b', 'd']) - - failing_test_cases = [("mixed", ['A', 3.14]), - ("number integer", ['1', 1]), - ("string integer", ['42', 42]), - ("missing", ['12', np.nan])] - - fids, fvalues = zip(*failing_test_cases) - - plotters = [Axes.scatter, Axes.bar, - pytest.param(Axes.plot, marks=pytest.mark.xfail)] - - @pytest.mark.parametrize("plotter", plotters) - @pytest.mark.parametrize("xdata", fvalues, ids=fids) - def test_mixed_type_exception(self, plotter, xdata): - ax = plt.figure().subplots() - with pytest.raises(TypeError): - plotter(ax, xdata, [1, 2]) - - @pytest.mark.parametrize("plotter", plotters) - @pytest.mark.parametrize("xdata", fvalues, ids=fids) - def test_mixed_type_update_exception(self, plotter, xdata): - ax = 
plt.figure().subplots() - with pytest.raises(TypeError): - plotter(ax, [0, 3], [1, 3]) - plotter(ax, xdata, [1, 2]) - - -@mpl.style.context('default') -@check_figures_equal(extensions=["png"]) -def test_overriding_units_in_plot(fig_test, fig_ref): - from datetime import datetime - - t0 = datetime(2018, 3, 1) - t1 = datetime(2018, 3, 2) - t2 = datetime(2018, 3, 3) - t3 = datetime(2018, 3, 4) - - ax_test = fig_test.subplots() - ax_ref = fig_ref.subplots() - for ax, kwargs in zip([ax_test, ax_ref], - ({}, dict(xunits=None, yunits=None))): - # First call works - ax.plot([t0, t1], ["V1", "V2"], **kwargs) - x_units = ax.xaxis.units - y_units = ax.yaxis.units - # this should not raise - ax.plot([t2, t3], ["V1", "V2"], **kwargs) - # assert that we have not re-set the units attribute at all - assert x_units is ax.xaxis.units - assert y_units is ax.yaxis.units - - -def test_no_deprecation_on_empty_data(): - """ - Smoke test to check that no deprecation warning is emitted. See #22640. - """ - f, ax = plt.subplots() - ax.xaxis.update_units(["a", "b"]) - ax.plot([], []) - - -def test_hist(): - fig, ax = plt.subplots() - n, bins, patches = ax.hist(['a', 'b', 'a', 'c', 'ff']) - assert n.shape == (10,) - np.testing.assert_allclose(n, [2., 0., 0., 1., 0., 0., 1., 0., 0., 1.]) - - -def test_set_lim(): - # Numpy 1.25 deprecated casting [2.] to float, catch_warnings added to error - # with numpy 1.25 and prior to the change from gh-26597 - # can be removed once the minimum numpy version has expired the warning - f, ax = plt.subplots() - ax.plot(["a", "b", "c", "d"], [1, 2, 3, 4]) - with warnings.catch_warnings(): - ax.set_xlim("b", "c") diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/system_info.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/system_info.py deleted file mode 100644 index feb28f61cf070c9dfc0b2fc6f205f477f6a66c8b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/system_info.py +++ /dev/null @@ -1,3271 +0,0 @@ -#!/usr/bin/env python3 -""" -This file defines a set of system_info classes for getting -information about various resources (libraries, library directories, -include directories, etc.) in the system. Usage: - info_dict = get_info(<name>) - where <name> is a string 'atlas','x11','fftw','lapack','blas', - 'lapack_src', 'blas_src', etc. For a complete list of allowed names, - see the definition of get_info() function below. - - Returned info_dict is a dictionary which is compatible with - distutils.setup keyword arguments. If info_dict == {}, then the - asked resource is not available (system_info could not find it). - - Several *_info classes specify an environment variable to specify - the locations of software. When setting the corresponding environment - variable to 'None' then the software will be ignored, even when it - is available in system. - -Global parameters: - system_info.search_static_first - search static libraries (.a) - in precedence to shared ones (.so, .sl) if enabled. - system_info.verbosity - output the results to stdout if enabled. - -The file 'site.cfg' is looked for in - -1) Directory of main setup.py file being run. -2) Home directory of user running the setup.py file as ~/.numpy-site.cfg -3) System wide directory (location of this file...) - -The first one found is used to get system configuration options The -format is that used by ConfigParser (i.e., Windows .INI style). The -section ALL is not intended for general use.
- -Appropriate defaults are used if nothing is specified. - -The order of finding the locations of resources is the following: - 1. environment variable - 2. section in site.cfg - 3. DEFAULT section in site.cfg - 4. System default search paths (see ``default_*`` variables below). -Only the first complete match is returned. - -Currently, the following classes are available, along with their section names: - - Numeric_info:Numeric - _numpy_info:Numeric - _pkg_config_info:None - accelerate_info:accelerate - accelerate_lapack_info:accelerate - agg2_info:agg2 - amd_info:amd - atlas_3_10_blas_info:atlas - atlas_3_10_blas_threads_info:atlas - atlas_3_10_info:atlas - atlas_3_10_threads_info:atlas - atlas_blas_info:atlas - atlas_blas_threads_info:atlas - atlas_info:atlas - atlas_threads_info:atlas - blas64__opt_info:ALL # usage recommended (general ILP64 BLAS, 64_ symbol suffix) - blas_ilp64_opt_info:ALL # usage recommended (general ILP64 BLAS) - blas_ilp64_plain_opt_info:ALL # usage recommended (general ILP64 BLAS, no symbol suffix) - blas_info:blas - blas_mkl_info:mkl - blas_ssl2_info:ssl2 - blas_opt_info:ALL # usage recommended - blas_src_info:blas_src - blis_info:blis - boost_python_info:boost_python - dfftw_info:fftw - dfftw_threads_info:fftw - djbfft_info:djbfft - f2py_info:ALL - fft_opt_info:ALL - fftw2_info:fftw - fftw3_info:fftw3 - fftw_info:fftw - fftw_threads_info:fftw - flame_info:flame - freetype2_info:freetype2 - gdk_2_info:gdk_2 - gdk_info:gdk - gdk_pixbuf_2_info:gdk_pixbuf_2 - gdk_pixbuf_xlib_2_info:gdk_pixbuf_xlib_2 - gdk_x11_2_info:gdk_x11_2 - gtkp_2_info:gtkp_2 - gtkp_x11_2_info:gtkp_x11_2 - lapack64__opt_info:ALL # usage recommended (general ILP64 LAPACK, 64_ symbol suffix) - lapack_atlas_3_10_info:atlas - lapack_atlas_3_10_threads_info:atlas - lapack_atlas_info:atlas - lapack_atlas_threads_info:atlas - lapack_ilp64_opt_info:ALL # usage recommended (general ILP64 LAPACK) - lapack_ilp64_plain_opt_info:ALL # usage recommended (general ILP64 LAPACK, no symbol suffix) - lapack_info:lapack - lapack_mkl_info:mkl - lapack_ssl2_info:ssl2 - lapack_opt_info:ALL # usage recommended - lapack_src_info:lapack_src - mkl_info:mkl - ssl2_info:ssl2 - numarray_info:numarray - numerix_info:numerix - numpy_info:numpy - openblas64__info:openblas64_ - openblas64__lapack_info:openblas64_ - openblas_clapack_info:openblas - openblas_ilp64_info:openblas_ilp64 - openblas_ilp64_lapack_info:openblas_ilp64 - openblas_info:openblas - openblas_lapack_info:openblas - sfftw_info:fftw - sfftw_threads_info:fftw - system_info:ALL - umfpack_info:umfpack - wx_info:wx - x11_info:x11 - xft_info:xft - -Note that blas_opt_info and lapack_opt_info honor the NPY_BLAS_ORDER -and NPY_LAPACK_ORDER environment variables to determine the order in which -specific BLAS and LAPACK libraries are searched for. - -This search (or autodetection) can be bypassed by defining the environment -variables NPY_BLAS_LIBS and NPY_LAPACK_LIBS, which should then contain the -exact linker flags to use (language will be set to F77). Building against -Netlib BLAS/LAPACK or stub files, in order to be able to switch BLAS and LAPACK -implementations at runtime. If using this to build NumPy itself, it is -recommended to also define NPY_CBLAS_LIBS (assuming your BLAS library has a -CBLAS interface) to enable CBLAS usage for matrix multiplication (unoptimized -otherwise). 
- -Example: ----------- -[DEFAULT] -# default section -library_dirs = /usr/lib:/usr/local/lib:/opt/lib -include_dirs = /usr/include:/usr/local/include:/opt/include -src_dirs = /usr/local/src:/opt/src -# search static libraries (.a) in preference to shared ones (.so) -search_static_first = 0 - -[fftw] -libraries = rfftw, fftw - -[atlas] -library_dirs = /usr/lib/3dnow:/usr/lib/3dnow/atlas -# for overriding the names of the atlas libraries -libraries = lapack, f77blas, cblas, atlas - -[x11] -library_dirs = /usr/X11R6/lib -include_dirs = /usr/X11R6/include ----------- - -Note that the ``libraries`` key is the default setting for libraries. - -Authors: - Pearu Peterson , February 2002 - David M. Cooke , April 2002 - -Copyright 2002 Pearu Peterson all rights reserved, -Pearu Peterson -Permission to use, modify, and distribute this software is given under the -terms of the NumPy (BSD style) license. See LICENSE.txt that came with -this distribution for specifics. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. - -""" -import sys -import os -import re -import copy -import warnings -import subprocess -import textwrap - -from glob import glob -from functools import reduce -from configparser import NoOptionError -from configparser import RawConfigParser as ConfigParser -# It seems that some people are importing ConfigParser from here so is -# good to keep its class name. Use of RawConfigParser is needed in -# order to be able to load path names with percent in them, like -# `feature%2Fcool` which is common on git flow branch names. - -from distutils.errors import DistutilsError -from distutils.dist import Distribution -import sysconfig -from numpy.distutils import log -from distutils.util import get_platform - -from numpy.distutils.exec_command import ( - find_executable, filepath_from_subprocess_output, - ) -from numpy.distutils.misc_util import (is_sequence, is_string, - get_shared_lib_extension) -from numpy.distutils.command.config import config as cmd_config -from numpy.distutils import customized_ccompiler as _customized_ccompiler -from numpy.distutils import _shell_utils -import distutils.ccompiler -import tempfile -import shutil - -__all__ = ['system_info'] - -# Determine number of bits -import platform -_bits = {'32bit': 32, '64bit': 64} -platform_bits = _bits[platform.architecture()[0]] - - -global_compiler = None - -def customized_ccompiler(): - global global_compiler - if not global_compiler: - global_compiler = _customized_ccompiler() - return global_compiler - - -def _c_string_literal(s): - """ - Convert a python string into a literal suitable for inclusion into C code - """ - # only these three characters are forbidden in C strings - s = s.replace('\\', r'\\') - s = s.replace('"', r'\"') - s = s.replace('\n', r'\n') - return '"{}"'.format(s) - - -def libpaths(paths, bits): - """Return a list of library paths valid on 32 or 64 bit systems. - - Inputs: - paths : sequence - A sequence of strings (typically paths) - bits : int - An integer, the only valid values are 32 or 64. A ValueError exception - is raised otherwise. 
- - Examples: - - Consider a list of directories - >>> paths = ['/usr/X11R6/lib','/usr/X11/lib','/usr/lib'] - - For a 32-bit platform, this is already valid: - >>> np.distutils.system_info.libpaths(paths,32) - ['/usr/X11R6/lib', '/usr/X11/lib', '/usr/lib'] - - On 64 bits, we prepend the '64' postfix - >>> np.distutils.system_info.libpaths(paths,64) - ['/usr/X11R6/lib64', '/usr/X11R6/lib', '/usr/X11/lib64', '/usr/X11/lib', - '/usr/lib64', '/usr/lib'] - """ - if bits not in (32, 64): - raise ValueError("Invalid bit size in libpaths: 32 or 64 only") - - # Handle 32bit case - if bits == 32: - return paths - - # Handle 64bit case - out = [] - for p in paths: - out.extend([p + '64', p]) - - return out - - -if sys.platform == 'win32': - default_lib_dirs = ['C:\\', - os.path.join(sysconfig.get_config_var('exec_prefix'), - 'libs')] - default_runtime_dirs = [] - default_include_dirs = [] - default_src_dirs = ['.'] - default_x11_lib_dirs = [] - default_x11_include_dirs = [] - _include_dirs = [ - 'include', - 'include/suitesparse', - ] - _lib_dirs = [ - 'lib', - ] - - _include_dirs = [d.replace('/', os.sep) for d in _include_dirs] - _lib_dirs = [d.replace('/', os.sep) for d in _lib_dirs] - def add_system_root(library_root): - """Add a package manager root to the include directories""" - global default_lib_dirs - global default_include_dirs - - library_root = os.path.normpath(library_root) - - default_lib_dirs.extend( - os.path.join(library_root, d) for d in _lib_dirs) - default_include_dirs.extend( - os.path.join(library_root, d) for d in _include_dirs) - - # VCpkg is the de-facto package manager on windows for C/C++ - # libraries. If it is on the PATH, then we append its paths here. - vcpkg = shutil.which('vcpkg') - if vcpkg: - vcpkg_dir = os.path.dirname(vcpkg) - if platform.architecture()[0] == '32bit': - specifier = 'x86' - else: - specifier = 'x64' - - vcpkg_installed = os.path.join(vcpkg_dir, 'installed') - for vcpkg_root in [ - os.path.join(vcpkg_installed, specifier + '-windows'), - os.path.join(vcpkg_installed, specifier + '-windows-static'), - ]: - add_system_root(vcpkg_root) - - # Conda is another popular package manager that provides libraries - conda = shutil.which('conda') - if conda: - conda_dir = os.path.dirname(conda) - add_system_root(os.path.join(conda_dir, '..', 'Library')) - add_system_root(os.path.join(conda_dir, 'Library')) - -else: - default_lib_dirs = libpaths(['/usr/local/lib', '/opt/lib', '/usr/lib', - '/opt/local/lib', '/sw/lib'], platform_bits) - default_runtime_dirs = [] - default_include_dirs = ['/usr/local/include', - '/opt/include', - # path of umfpack under macports - '/opt/local/include/ufsparse', - '/opt/local/include', '/sw/include', - '/usr/include/suitesparse'] - default_src_dirs = ['.', '/usr/local/src', '/opt/src', '/sw/src'] - - default_x11_lib_dirs = libpaths(['/usr/X11R6/lib', '/usr/X11/lib', - '/usr/lib'], platform_bits) - default_x11_include_dirs = ['/usr/X11R6/include', '/usr/X11/include'] - - if os.path.exists('/usr/lib/X11'): - globbed_x11_dir = glob('/usr/lib/*/libX11.so') - if globbed_x11_dir: - x11_so_dir = os.path.split(globbed_x11_dir[0])[0] - default_x11_lib_dirs.extend([x11_so_dir, '/usr/lib/X11']) - default_x11_include_dirs.extend(['/usr/lib/X11/include', - '/usr/include/X11']) - - with open(os.devnull, 'w') as tmp: - try: - p = subprocess.Popen(["gcc", "-print-multiarch"], stdout=subprocess.PIPE, - stderr=tmp) - except (OSError, DistutilsError): - # OSError if gcc is not installed, or SandboxViolation (DistutilsError - # subclass) if an old 
setuptools bug is triggered (see gh-3160). - pass - else: - triplet = str(p.communicate()[0].decode().strip()) - if p.returncode == 0: - # gcc supports the "-print-multiarch" option - default_x11_lib_dirs += [os.path.join("/usr/lib/", triplet)] - default_lib_dirs += [os.path.join("/usr/lib/", triplet)] - - -if os.path.join(sys.prefix, 'lib') not in default_lib_dirs: - default_lib_dirs.insert(0, os.path.join(sys.prefix, 'lib')) - default_include_dirs.append(os.path.join(sys.prefix, 'include')) - default_src_dirs.append(os.path.join(sys.prefix, 'src')) - -default_lib_dirs = [_m for _m in default_lib_dirs if os.path.isdir(_m)] -default_runtime_dirs = [_m for _m in default_runtime_dirs if os.path.isdir(_m)] -default_include_dirs = [_m for _m in default_include_dirs if os.path.isdir(_m)] -default_src_dirs = [_m for _m in default_src_dirs if os.path.isdir(_m)] - -so_ext = get_shared_lib_extension() - - -def get_standard_file(fname): - """Returns a list of files named 'fname' from - 1) System-wide directory (directory-location of this module) - 2) Users HOME directory (os.environ['HOME']) - 3) Local directory - """ - # System-wide file - filenames = [] - try: - f = __file__ - except NameError: - f = sys.argv[0] - sysfile = os.path.join(os.path.split(os.path.abspath(f))[0], - fname) - if os.path.isfile(sysfile): - filenames.append(sysfile) - - # Home directory - # And look for the user config file - try: - f = os.path.expanduser('~') - except KeyError: - pass - else: - user_file = os.path.join(f, fname) - if os.path.isfile(user_file): - filenames.append(user_file) - - # Local file - if os.path.isfile(fname): - filenames.append(os.path.abspath(fname)) - - return filenames - - -def _parse_env_order(base_order, env): - """ Parse an environment variable `env` by splitting with "," and only returning elements from `base_order` - - This method will sequence the environment variable and check for their - individual elements in `base_order`. - - The items in the environment variable may be negated via '^item' or '!itema,itemb'. - It must start with ^/! to negate all options. 
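    For illustration (a hypothetical example, not part of the original
    docstring): with the environment variable set to ``"^atlas,blis"``,

        >>> _parse_env_order(['mkl', 'openblas', 'atlas', 'blis', 'blas'],
        ...                  'NPY_BLAS_ORDER')
        (['mkl', 'openblas', 'blas'], [])

    i.e. the negated entries are dropped from the base order and nothing is
    reported as unknown.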
- - Raises - ------ - ValueError: for mixed negated and non-negated orders or multiple negated orders - - Parameters - ---------- - base_order : list of str - the base list of orders - env : str - the environment variable to be parsed, if none is found, `base_order` is returned - - Returns - ------- - allow_order : list of str - allowed orders in lower-case - unknown_order : list of str - for values not overlapping with `base_order` - """ - order_str = os.environ.get(env, None) - - # ensure all base-orders are lower-case (for easier comparison) - base_order = [order.lower() for order in base_order] - if order_str is None: - return base_order, [] - - neg = order_str.startswith('^') or order_str.startswith('!') - # Check format - order_str_l = list(order_str) - sum_neg = order_str_l.count('^') + order_str_l.count('!') - if neg: - if sum_neg > 1: - raise ValueError(f"Environment variable '{env}' may only contain a single (prefixed) negation: {order_str}") - # remove prefix - order_str = order_str[1:] - elif sum_neg > 0: - raise ValueError(f"Environment variable '{env}' may not mix negated an non-negated items: {order_str}") - - # Split and lower case - orders = order_str.lower().split(',') - - # to inform callee about non-overlapping elements - unknown_order = [] - - # if negated, we have to remove from the order - if neg: - allow_order = base_order.copy() - - for order in orders: - if not order: - continue - - if order not in base_order: - unknown_order.append(order) - continue - - if order in allow_order: - allow_order.remove(order) - - else: - allow_order = [] - - for order in orders: - if not order: - continue - - if order not in base_order: - unknown_order.append(order) - continue - - if order not in allow_order: - allow_order.append(order) - - return allow_order, unknown_order - - -def get_info(name, notfound_action=0): - """ - notfound_action: - 0 - do nothing - 1 - display warning message - 2 - raise error - """ - cl = {'armpl': armpl_info, - 'blas_armpl': blas_armpl_info, - 'lapack_armpl': lapack_armpl_info, - 'fftw3_armpl': fftw3_armpl_info, - 'atlas': atlas_info, # use lapack_opt or blas_opt instead - 'atlas_threads': atlas_threads_info, # ditto - 'atlas_blas': atlas_blas_info, - 'atlas_blas_threads': atlas_blas_threads_info, - 'lapack_atlas': lapack_atlas_info, # use lapack_opt instead - 'lapack_atlas_threads': lapack_atlas_threads_info, # ditto - 'atlas_3_10': atlas_3_10_info, # use lapack_opt or blas_opt instead - 'atlas_3_10_threads': atlas_3_10_threads_info, # ditto - 'atlas_3_10_blas': atlas_3_10_blas_info, - 'atlas_3_10_blas_threads': atlas_3_10_blas_threads_info, - 'lapack_atlas_3_10': lapack_atlas_3_10_info, # use lapack_opt instead - 'lapack_atlas_3_10_threads': lapack_atlas_3_10_threads_info, # ditto - 'flame': flame_info, # use lapack_opt instead - 'mkl': mkl_info, - 'ssl2': ssl2_info, - # openblas which may or may not have embedded lapack - 'openblas': openblas_info, # use blas_opt instead - # openblas with embedded lapack - 'openblas_lapack': openblas_lapack_info, # use blas_opt instead - 'openblas_clapack': openblas_clapack_info, # use blas_opt instead - 'blis': blis_info, # use blas_opt instead - 'lapack_mkl': lapack_mkl_info, # use lapack_opt instead - 'blas_mkl': blas_mkl_info, # use blas_opt instead - 'lapack_ssl2': lapack_ssl2_info, - 'blas_ssl2': blas_ssl2_info, - 'accelerate': accelerate_info, # use blas_opt instead - 'accelerate_lapack': accelerate_lapack_info, - 'openblas64_': openblas64__info, - 'openblas64__lapack': openblas64__lapack_info, - 
'openblas_ilp64': openblas_ilp64_info, - 'openblas_ilp64_lapack': openblas_ilp64_lapack_info, - 'x11': x11_info, - 'fft_opt': fft_opt_info, - 'fftw': fftw_info, - 'fftw2': fftw2_info, - 'fftw3': fftw3_info, - 'dfftw': dfftw_info, - 'sfftw': sfftw_info, - 'fftw_threads': fftw_threads_info, - 'dfftw_threads': dfftw_threads_info, - 'sfftw_threads': sfftw_threads_info, - 'djbfft': djbfft_info, - 'blas': blas_info, # use blas_opt instead - 'lapack': lapack_info, # use lapack_opt instead - 'lapack_src': lapack_src_info, - 'blas_src': blas_src_info, - 'numpy': numpy_info, - 'f2py': f2py_info, - 'Numeric': Numeric_info, - 'numeric': Numeric_info, - 'numarray': numarray_info, - 'numerix': numerix_info, - 'lapack_opt': lapack_opt_info, - 'lapack_ilp64_opt': lapack_ilp64_opt_info, - 'lapack_ilp64_plain_opt': lapack_ilp64_plain_opt_info, - 'lapack64__opt': lapack64__opt_info, - 'blas_opt': blas_opt_info, - 'blas_ilp64_opt': blas_ilp64_opt_info, - 'blas_ilp64_plain_opt': blas_ilp64_plain_opt_info, - 'blas64__opt': blas64__opt_info, - 'boost_python': boost_python_info, - 'agg2': agg2_info, - 'wx': wx_info, - 'gdk_pixbuf_xlib_2': gdk_pixbuf_xlib_2_info, - 'gdk-pixbuf-xlib-2.0': gdk_pixbuf_xlib_2_info, - 'gdk_pixbuf_2': gdk_pixbuf_2_info, - 'gdk-pixbuf-2.0': gdk_pixbuf_2_info, - 'gdk': gdk_info, - 'gdk_2': gdk_2_info, - 'gdk-2.0': gdk_2_info, - 'gdk_x11_2': gdk_x11_2_info, - 'gdk-x11-2.0': gdk_x11_2_info, - 'gtkp_x11_2': gtkp_x11_2_info, - 'gtk+-x11-2.0': gtkp_x11_2_info, - 'gtkp_2': gtkp_2_info, - 'gtk+-2.0': gtkp_2_info, - 'xft': xft_info, - 'freetype2': freetype2_info, - 'umfpack': umfpack_info, - 'amd': amd_info, - }.get(name.lower(), system_info) - return cl().get_info(notfound_action) - - -class NotFoundError(DistutilsError): - """Some third-party program or library is not found.""" - - -class AliasedOptionError(DistutilsError): - """ - Aliases entries in config files should not be existing. - In section '{section}' we found multiple appearances of options {options}.""" - - -class AtlasNotFoundError(NotFoundError): - """ - Atlas (http://github.com/math-atlas/math-atlas) libraries not found. - Directories to search for the libraries can be specified in the - numpy/distutils/site.cfg file (section [atlas]) or by setting - the ATLAS environment variable.""" - - -class FlameNotFoundError(NotFoundError): - """ - FLAME (http://www.cs.utexas.edu/~flame/web/) libraries not found. - Directories to search for the libraries can be specified in the - numpy/distutils/site.cfg file (section [flame]).""" - - -class LapackNotFoundError(NotFoundError): - """ - Lapack (http://www.netlib.org/lapack/) libraries not found. - Directories to search for the libraries can be specified in the - numpy/distutils/site.cfg file (section [lapack]) or by setting - the LAPACK environment variable.""" - - -class LapackSrcNotFoundError(LapackNotFoundError): - """ - Lapack (http://www.netlib.org/lapack/) sources not found. - Directories to search for the sources can be specified in the - numpy/distutils/site.cfg file (section [lapack_src]) or by setting - the LAPACK_SRC environment variable.""" - - -class LapackILP64NotFoundError(NotFoundError): - """ - 64-bit Lapack libraries not found. - Known libraries in numpy/distutils/site.cfg file are: - openblas64_, openblas_ilp64 - """ - -class BlasOptNotFoundError(NotFoundError): - """ - Optimized (vendor) Blas libraries are not found. - Falls back to netlib Blas library which has worse performance. 
- A better performance should be easily gained by switching - Blas library.""" - -class BlasNotFoundError(NotFoundError): - """ - Blas (http://www.netlib.org/blas/) libraries not found. - Directories to search for the libraries can be specified in the - numpy/distutils/site.cfg file (section [blas]) or by setting - the BLAS environment variable.""" - -class BlasILP64NotFoundError(NotFoundError): - """ - 64-bit Blas libraries not found. - Known libraries in numpy/distutils/site.cfg file are: - openblas64_, openblas_ilp64 - """ - -class BlasSrcNotFoundError(BlasNotFoundError): - """ - Blas (http://www.netlib.org/blas/) sources not found. - Directories to search for the sources can be specified in the - numpy/distutils/site.cfg file (section [blas_src]) or by setting - the BLAS_SRC environment variable.""" - - -class FFTWNotFoundError(NotFoundError): - """ - FFTW (http://www.fftw.org/) libraries not found. - Directories to search for the libraries can be specified in the - numpy/distutils/site.cfg file (section [fftw]) or by setting - the FFTW environment variable.""" - - -class DJBFFTNotFoundError(NotFoundError): - """ - DJBFFT (https://cr.yp.to/djbfft.html) libraries not found. - Directories to search for the libraries can be specified in the - numpy/distutils/site.cfg file (section [djbfft]) or by setting - the DJBFFT environment variable.""" - - -class NumericNotFoundError(NotFoundError): - """ - Numeric (https://www.numpy.org/) module not found. - Get it from above location, install it, and retry setup.py.""" - - -class X11NotFoundError(NotFoundError): - """X11 libraries not found.""" - - -class UmfpackNotFoundError(NotFoundError): - """ - UMFPACK sparse solver (https://www.cise.ufl.edu/research/sparse/umfpack/) - not found. Directories to search for the libraries can be specified in the - numpy/distutils/site.cfg file (section [umfpack]) or by setting - the UMFPACK environment variable.""" - - -class system_info: - - """ get_info() is the only public method. Don't use others. - """ - dir_env_var = None - # XXX: search_static_first is disabled by default, may disappear in - # future unless it is proved to be useful. - search_static_first = 0 - # The base-class section name is a random word "ALL" and is not really - # intended for general use. It cannot be None nor can it be DEFAULT as - # these break the ConfigParser. 
See gh-15338 - section = 'ALL' - saved_results = {} - - notfounderror = NotFoundError - - def __init__(self, - default_lib_dirs=default_lib_dirs, - default_include_dirs=default_include_dirs, - ): - self.__class__.info = {} - self.local_prefixes = [] - defaults = {'library_dirs': os.pathsep.join(default_lib_dirs), - 'include_dirs': os.pathsep.join(default_include_dirs), - 'runtime_library_dirs': os.pathsep.join(default_runtime_dirs), - 'rpath': '', - 'src_dirs': os.pathsep.join(default_src_dirs), - 'search_static_first': str(self.search_static_first), - 'extra_compile_args': '', 'extra_link_args': ''} - self.cp = ConfigParser(defaults) - self.files = [] - self.files.extend(get_standard_file('.numpy-site.cfg')) - self.files.extend(get_standard_file('site.cfg')) - self.parse_config_files() - - if self.section is not None: - self.search_static_first = self.cp.getboolean( - self.section, 'search_static_first') - assert isinstance(self.search_static_first, int) - - def parse_config_files(self): - self.cp.read(self.files) - if not self.cp.has_section(self.section): - if self.section is not None: - self.cp.add_section(self.section) - - def calc_libraries_info(self): - libs = self.get_libraries() - dirs = self.get_lib_dirs() - # The extensions use runtime_library_dirs - r_dirs = self.get_runtime_lib_dirs() - # Intrinsic distutils use rpath, we simply append both entries - # as though they were one entry - r_dirs.extend(self.get_runtime_lib_dirs(key='rpath')) - info = {} - for lib in libs: - i = self.check_libs(dirs, [lib]) - if i is not None: - dict_append(info, **i) - else: - log.info('Library %s was not found. Ignoring' % (lib)) - - if r_dirs: - i = self.check_libs(r_dirs, [lib]) - if i is not None: - # Swap library keywords found to runtime_library_dirs - # the libraries are insisting on the user having defined - # them using the library_dirs, and not necessarily by - # runtime_library_dirs - del i['libraries'] - i['runtime_library_dirs'] = i.pop('library_dirs') - dict_append(info, **i) - else: - log.info('Runtime library %s was not found. 
Ignoring' % (lib)) - - return info - - def set_info(self, **info): - if info: - lib_info = self.calc_libraries_info() - dict_append(info, **lib_info) - # Update extra information - extra_info = self.calc_extra_info() - dict_append(info, **extra_info) - self.saved_results[self.__class__.__name__] = info - - def get_option_single(self, *options): - """ Ensure that only one of `options` are found in the section - - Parameters - ---------- - *options : list of str - a list of options to be found in the section (``self.section``) - - Returns - ------- - str : - the option that is uniquely found in the section - - Raises - ------ - AliasedOptionError : - in case more than one of the options are found - """ - found = [self.cp.has_option(self.section, opt) for opt in options] - if sum(found) == 1: - return options[found.index(True)] - elif sum(found) == 0: - # nothing is found anyways - return options[0] - - # Else we have more than 1 key found - if AliasedOptionError.__doc__ is None: - raise AliasedOptionError() - raise AliasedOptionError(AliasedOptionError.__doc__.format( - section=self.section, options='[{}]'.format(', '.join(options)))) - - - def has_info(self): - return self.__class__.__name__ in self.saved_results - - def calc_extra_info(self): - """ Updates the information in the current information with - respect to these flags: - extra_compile_args - extra_link_args - """ - info = {} - for key in ['extra_compile_args', 'extra_link_args']: - # Get values - opt = self.cp.get(self.section, key) - opt = _shell_utils.NativeParser.split(opt) - if opt: - tmp = {key: opt} - dict_append(info, **tmp) - return info - - def get_info(self, notfound_action=0): - """ Return a dictionary with items that are compatible - with numpy.distutils.setup keyword arguments. - """ - flag = 0 - if not self.has_info(): - flag = 1 - log.info(self.__class__.__name__ + ':') - if hasattr(self, 'calc_info'): - self.calc_info() - if notfound_action: - if not self.has_info(): - if notfound_action == 1: - warnings.warn(self.notfounderror.__doc__, stacklevel=2) - elif notfound_action == 2: - raise self.notfounderror(self.notfounderror.__doc__) - else: - raise ValueError(repr(notfound_action)) - - if not self.has_info(): - log.info(' NOT AVAILABLE') - self.set_info() - else: - log.info(' FOUND:') - - res = self.saved_results.get(self.__class__.__name__) - if log.get_threshold() <= log.INFO and flag: - for k, v in res.items(): - v = str(v) - if k in ['sources', 'libraries'] and len(v) > 270: - v = v[:120] + '...\n...\n...' 
+ v[-120:] - log.info(' %s = %s', k, v) - log.info('') - - return copy.deepcopy(res) - - def get_paths(self, section, key): - dirs = self.cp.get(section, key).split(os.pathsep) - env_var = self.dir_env_var - if env_var: - if is_sequence(env_var): - e0 = env_var[-1] - for e in env_var: - if e in os.environ: - e0 = e - break - if not env_var[0] == e0: - log.info('Setting %s=%s' % (env_var[0], e0)) - env_var = e0 - if env_var and env_var in os.environ: - d = os.environ[env_var] - if d == 'None': - log.info('Disabled %s: %s', - self.__class__.__name__, '(%s is None)' - % (env_var,)) - return [] - if os.path.isfile(d): - dirs = [os.path.dirname(d)] + dirs - l = getattr(self, '_lib_names', []) - if len(l) == 1: - b = os.path.basename(d) - b = os.path.splitext(b)[0] - if b[:3] == 'lib': - log.info('Replacing _lib_names[0]==%r with %r' \ - % (self._lib_names[0], b[3:])) - self._lib_names[0] = b[3:] - else: - ds = d.split(os.pathsep) - ds2 = [] - for d in ds: - if os.path.isdir(d): - ds2.append(d) - for dd in ['include', 'lib']: - d1 = os.path.join(d, dd) - if os.path.isdir(d1): - ds2.append(d1) - dirs = ds2 + dirs - default_dirs = self.cp.get(self.section, key).split(os.pathsep) - dirs.extend(default_dirs) - ret = [] - for d in dirs: - if len(d) > 0 and not os.path.isdir(d): - warnings.warn('Specified path %s is invalid.' % d, stacklevel=2) - continue - - if d not in ret: - ret.append(d) - - log.debug('( %s = %s )', key, ':'.join(ret)) - return ret - - def get_lib_dirs(self, key='library_dirs'): - return self.get_paths(self.section, key) - - def get_runtime_lib_dirs(self, key='runtime_library_dirs'): - path = self.get_paths(self.section, key) - if path == ['']: - path = [] - return path - - def get_include_dirs(self, key='include_dirs'): - return self.get_paths(self.section, key) - - def get_src_dirs(self, key='src_dirs'): - return self.get_paths(self.section, key) - - def get_libs(self, key, default): - try: - libs = self.cp.get(self.section, key) - except NoOptionError: - if not default: - return [] - if is_string(default): - return [default] - return default - return [b for b in [a.strip() for a in libs.split(',')] if b] - - def get_libraries(self, key='libraries'): - if hasattr(self, '_lib_names'): - return self.get_libs(key, default=self._lib_names) - else: - return self.get_libs(key, '') - - def library_extensions(self): - c = customized_ccompiler() - static_exts = [] - if c.compiler_type != 'msvc': - # MSVC doesn't understand binutils - static_exts.append('.a') - if sys.platform == 'win32': - static_exts.append('.lib') # .lib is used by MSVC and others - if self.search_static_first: - exts = static_exts + [so_ext] - else: - exts = [so_ext] + static_exts - if sys.platform == 'cygwin': - exts.append('.dll.a') - if sys.platform == 'darwin': - exts.append('.dylib') - return exts - - def check_libs(self, lib_dirs, libs, opt_libs=[]): - """If static or shared libraries are available then return - their info dictionary. - - Checks for all libraries as shared libraries first, then - static (or vice versa if self.search_static_first is True). - """ - exts = self.library_extensions() - info = None - for ext in exts: - info = self._check_libs(lib_dirs, libs, opt_libs, [ext]) - if info is not None: - break - if not info: - log.info(' libraries %s not found in %s', ','.join(libs), - lib_dirs) - return info - - def check_libs2(self, lib_dirs, libs, opt_libs=[]): - """If static or shared libraries are available then return - their info dictionary. - - Checks each library for shared or static. 
- """ - exts = self.library_extensions() - info = self._check_libs(lib_dirs, libs, opt_libs, exts) - if not info: - log.info(' libraries %s not found in %s', ','.join(libs), - lib_dirs) - - return info - - def _find_lib(self, lib_dir, lib, exts): - assert is_string(lib_dir) - # under windows first try without 'lib' prefix - if sys.platform == 'win32': - lib_prefixes = ['', 'lib'] - else: - lib_prefixes = ['lib'] - # for each library name, see if we can find a file for it. - for ext in exts: - for prefix in lib_prefixes: - p = self.combine_paths(lib_dir, prefix + lib + ext) - if p: - break - if p: - assert len(p) == 1 - # ??? splitext on p[0] would do this for cygwin - # doesn't seem correct - if ext == '.dll.a': - lib += '.dll' - if ext == '.lib': - lib = prefix + lib - return lib - - return False - - def _find_libs(self, lib_dirs, libs, exts): - # make sure we preserve the order of libs, as it can be important - found_dirs, found_libs = [], [] - for lib in libs: - for lib_dir in lib_dirs: - found_lib = self._find_lib(lib_dir, lib, exts) - if found_lib: - found_libs.append(found_lib) - if lib_dir not in found_dirs: - found_dirs.append(lib_dir) - break - return found_dirs, found_libs - - def _check_libs(self, lib_dirs, libs, opt_libs, exts): - """Find mandatory and optional libs in expected paths. - - Missing optional libraries are silently forgotten. - """ - if not is_sequence(lib_dirs): - lib_dirs = [lib_dirs] - # First, try to find the mandatory libraries - found_dirs, found_libs = self._find_libs(lib_dirs, libs, exts) - if len(found_libs) > 0 and len(found_libs) == len(libs): - # Now, check for optional libraries - opt_found_dirs, opt_found_libs = self._find_libs(lib_dirs, opt_libs, exts) - found_libs.extend(opt_found_libs) - for lib_dir in opt_found_dirs: - if lib_dir not in found_dirs: - found_dirs.append(lib_dir) - info = {'libraries': found_libs, 'library_dirs': found_dirs} - return info - else: - return None - - def combine_paths(self, *args): - """Return a list of existing paths composed by all combinations - of items from the arguments. 
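        For illustration (a hypothetical example, not part of the original
        docstring): ``self.combine_paths('/usr/lib', ['atlas*', 'sse2'])``
        returns whichever of the glob combinations ('/usr/lib/atlas*',
        '/usr/lib/sse2') actually exist on the filesystem.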
- """ - return combine_paths(*args) - - -class fft_opt_info(system_info): - - def calc_info(self): - info = {} - fftw_info = get_info('fftw3') or get_info('fftw2') or get_info('dfftw') - djbfft_info = get_info('djbfft') - if fftw_info: - dict_append(info, **fftw_info) - if djbfft_info: - dict_append(info, **djbfft_info) - self.set_info(**info) - return - - -class fftw_info(system_info): - #variables to override - section = 'fftw' - dir_env_var = 'FFTW' - notfounderror = FFTWNotFoundError - ver_info = [{'name':'fftw3', - 'libs':['fftw3'], - 'includes':['fftw3.h'], - 'macros':[('SCIPY_FFTW3_H', None)]}, - {'name':'fftw2', - 'libs':['rfftw', 'fftw'], - 'includes':['fftw.h', 'rfftw.h'], - 'macros':[('SCIPY_FFTW_H', None)]}] - - def calc_ver_info(self, ver_param): - """Returns True on successful version detection, else False""" - lib_dirs = self.get_lib_dirs() - incl_dirs = self.get_include_dirs() - - opt = self.get_option_single(self.section + '_libs', 'libraries') - libs = self.get_libs(opt, ver_param['libs']) - info = self.check_libs(lib_dirs, libs) - if info is not None: - flag = 0 - for d in incl_dirs: - if len(self.combine_paths(d, ver_param['includes'])) \ - == len(ver_param['includes']): - dict_append(info, include_dirs=[d]) - flag = 1 - break - if flag: - dict_append(info, define_macros=ver_param['macros']) - else: - info = None - if info is not None: - self.set_info(**info) - return True - else: - log.info(' %s not found' % (ver_param['name'])) - return False - - def calc_info(self): - for i in self.ver_info: - if self.calc_ver_info(i): - break - - -class fftw2_info(fftw_info): - #variables to override - section = 'fftw' - dir_env_var = 'FFTW' - notfounderror = FFTWNotFoundError - ver_info = [{'name':'fftw2', - 'libs':['rfftw', 'fftw'], - 'includes':['fftw.h', 'rfftw.h'], - 'macros':[('SCIPY_FFTW_H', None)]} - ] - - -class fftw3_info(fftw_info): - #variables to override - section = 'fftw3' - dir_env_var = 'FFTW3' - notfounderror = FFTWNotFoundError - ver_info = [{'name':'fftw3', - 'libs':['fftw3'], - 'includes':['fftw3.h'], - 'macros':[('SCIPY_FFTW3_H', None)]}, - ] - - -class fftw3_armpl_info(fftw_info): - section = 'fftw3' - dir_env_var = 'ARMPL_DIR' - notfounderror = FFTWNotFoundError - ver_info = [{'name': 'fftw3', - 'libs': ['armpl_lp64_mp'], - 'includes': ['fftw3.h'], - 'macros': [('SCIPY_FFTW3_H', None)]}] - - -class dfftw_info(fftw_info): - section = 'fftw' - dir_env_var = 'FFTW' - ver_info = [{'name':'dfftw', - 'libs':['drfftw', 'dfftw'], - 'includes':['dfftw.h', 'drfftw.h'], - 'macros':[('SCIPY_DFFTW_H', None)]}] - - -class sfftw_info(fftw_info): - section = 'fftw' - dir_env_var = 'FFTW' - ver_info = [{'name':'sfftw', - 'libs':['srfftw', 'sfftw'], - 'includes':['sfftw.h', 'srfftw.h'], - 'macros':[('SCIPY_SFFTW_H', None)]}] - - -class fftw_threads_info(fftw_info): - section = 'fftw' - dir_env_var = 'FFTW' - ver_info = [{'name':'fftw threads', - 'libs':['rfftw_threads', 'fftw_threads'], - 'includes':['fftw_threads.h', 'rfftw_threads.h'], - 'macros':[('SCIPY_FFTW_THREADS_H', None)]}] - - -class dfftw_threads_info(fftw_info): - section = 'fftw' - dir_env_var = 'FFTW' - ver_info = [{'name':'dfftw threads', - 'libs':['drfftw_threads', 'dfftw_threads'], - 'includes':['dfftw_threads.h', 'drfftw_threads.h'], - 'macros':[('SCIPY_DFFTW_THREADS_H', None)]}] - - -class sfftw_threads_info(fftw_info): - section = 'fftw' - dir_env_var = 'FFTW' - ver_info = [{'name':'sfftw threads', - 'libs':['srfftw_threads', 'sfftw_threads'], - 'includes':['sfftw_threads.h', 'srfftw_threads.h'], - 
'macros':[('SCIPY_SFFTW_THREADS_H', None)]}] - - -class djbfft_info(system_info): - section = 'djbfft' - dir_env_var = 'DJBFFT' - notfounderror = DJBFFTNotFoundError - - def get_paths(self, section, key): - pre_dirs = system_info.get_paths(self, section, key) - dirs = [] - for d in pre_dirs: - dirs.extend(self.combine_paths(d, ['djbfft']) + [d]) - return [d for d in dirs if os.path.isdir(d)] - - def calc_info(self): - lib_dirs = self.get_lib_dirs() - incl_dirs = self.get_include_dirs() - info = None - for d in lib_dirs: - p = self.combine_paths(d, ['djbfft.a']) - if p: - info = {'extra_objects': p} - break - p = self.combine_paths(d, ['libdjbfft.a', 'libdjbfft' + so_ext]) - if p: - info = {'libraries': ['djbfft'], 'library_dirs': [d]} - break - if info is None: - return - for d in incl_dirs: - if len(self.combine_paths(d, ['fftc8.h', 'fftfreq.h'])) == 2: - dict_append(info, include_dirs=[d], - define_macros=[('SCIPY_DJBFFT_H', None)]) - self.set_info(**info) - return - return - - -class mkl_info(system_info): - section = 'mkl' - dir_env_var = 'MKLROOT' - _lib_mkl = ['mkl_rt'] - - def get_mkl_rootdir(self): - mklroot = os.environ.get('MKLROOT', None) - if mklroot is not None: - return mklroot - paths = os.environ.get('LD_LIBRARY_PATH', '').split(os.pathsep) - ld_so_conf = '/etc/ld.so.conf' - if os.path.isfile(ld_so_conf): - with open(ld_so_conf) as f: - for d in f: - d = d.strip() - if d: - paths.append(d) - intel_mkl_dirs = [] - for path in paths: - path_atoms = path.split(os.sep) - for m in path_atoms: - if m.startswith('mkl'): - d = os.sep.join(path_atoms[:path_atoms.index(m) + 2]) - intel_mkl_dirs.append(d) - break - for d in paths: - dirs = glob(os.path.join(d, 'mkl', '*')) - dirs += glob(os.path.join(d, 'mkl*')) - for sub_dir in dirs: - if os.path.isdir(os.path.join(sub_dir, 'lib')): - return sub_dir - return None - - def __init__(self): - mklroot = self.get_mkl_rootdir() - if mklroot is None: - system_info.__init__(self) - else: - from .cpuinfo import cpu - if cpu.is_Itanium(): - plt = '64' - elif cpu.is_Intel() and cpu.is_64bit(): - plt = 'intel64' - else: - plt = '32' - system_info.__init__( - self, - default_lib_dirs=[os.path.join(mklroot, 'lib', plt)], - default_include_dirs=[os.path.join(mklroot, 'include')]) - - def calc_info(self): - lib_dirs = self.get_lib_dirs() - incl_dirs = self.get_include_dirs() - opt = self.get_option_single('mkl_libs', 'libraries') - mkl_libs = self.get_libs(opt, self._lib_mkl) - info = self.check_libs2(lib_dirs, mkl_libs) - if info is None: - return - dict_append(info, - define_macros=[('SCIPY_MKL_H', None), - ('HAVE_CBLAS', None)], - include_dirs=incl_dirs) - if sys.platform == 'win32': - pass # win32 has no pthread library - else: - dict_append(info, libraries=['pthread']) - self.set_info(**info) - - -class lapack_mkl_info(mkl_info): - pass - - -class blas_mkl_info(mkl_info): - pass - - -class ssl2_info(system_info): - section = 'ssl2' - dir_env_var = 'SSL2_DIR' - # Multi-threaded version. Python itself must be built by Fujitsu compiler. 
- _lib_ssl2 = ['fjlapackexsve'] - # Single-threaded version - #_lib_ssl2 = ['fjlapacksve'] - - def get_tcsds_rootdir(self): - tcsdsroot = os.environ.get('TCSDS_PATH', None) - if tcsdsroot is not None: - return tcsdsroot - return None - - def __init__(self): - tcsdsroot = self.get_tcsds_rootdir() - if tcsdsroot is None: - system_info.__init__(self) - else: - system_info.__init__( - self, - default_lib_dirs=[os.path.join(tcsdsroot, 'lib64')], - default_include_dirs=[os.path.join(tcsdsroot, - 'clang-comp/include')]) - - def calc_info(self): - tcsdsroot = self.get_tcsds_rootdir() - - lib_dirs = self.get_lib_dirs() - if lib_dirs is None: - lib_dirs = os.path.join(tcsdsroot, 'lib64') - - incl_dirs = self.get_include_dirs() - if incl_dirs is None: - incl_dirs = os.path.join(tcsdsroot, 'clang-comp/include') - - ssl2_libs = self.get_libs('ssl2_libs', self._lib_ssl2) - - info = self.check_libs2(lib_dirs, ssl2_libs) - if info is None: - return - dict_append(info, - define_macros=[('HAVE_CBLAS', None), - ('HAVE_SSL2', 1)], - include_dirs=incl_dirs,) - self.set_info(**info) - - -class lapack_ssl2_info(ssl2_info): - pass - - -class blas_ssl2_info(ssl2_info): - pass - - - -class armpl_info(system_info): - section = 'armpl' - dir_env_var = 'ARMPL_DIR' - _lib_armpl = ['armpl_lp64_mp'] - - def calc_info(self): - lib_dirs = self.get_lib_dirs() - incl_dirs = self.get_include_dirs() - armpl_libs = self.get_libs('armpl_libs', self._lib_armpl) - info = self.check_libs2(lib_dirs, armpl_libs) - if info is None: - return - dict_append(info, - define_macros=[('SCIPY_MKL_H', None), - ('HAVE_CBLAS', None)], - include_dirs=incl_dirs) - self.set_info(**info) - -class lapack_armpl_info(armpl_info): - pass - -class blas_armpl_info(armpl_info): - pass - - -class atlas_info(system_info): - section = 'atlas' - dir_env_var = 'ATLAS' - _lib_names = ['f77blas', 'cblas'] - if sys.platform[:7] == 'freebsd': - _lib_atlas = ['atlas_r'] - _lib_lapack = ['alapack_r'] - else: - _lib_atlas = ['atlas'] - _lib_lapack = ['lapack'] - - notfounderror = AtlasNotFoundError - - def get_paths(self, section, key): - pre_dirs = system_info.get_paths(self, section, key) - dirs = [] - for d in pre_dirs: - dirs.extend(self.combine_paths(d, ['atlas*', 'ATLAS*', - 'sse', '3dnow', 'sse2']) + [d]) - return [d for d in dirs if os.path.isdir(d)] - - def calc_info(self): - lib_dirs = self.get_lib_dirs() - info = {} - opt = self.get_option_single('atlas_libs', 'libraries') - atlas_libs = self.get_libs(opt, self._lib_names + self._lib_atlas) - lapack_libs = self.get_libs('lapack_libs', self._lib_lapack) - atlas = None - lapack = None - atlas_1 = None - for d in lib_dirs: - atlas = self.check_libs2(d, atlas_libs, []) - if atlas is not None: - lib_dirs2 = [d] + self.combine_paths(d, ['atlas*', 'ATLAS*']) - lapack = self.check_libs2(lib_dirs2, lapack_libs, []) - if lapack is not None: - break - if atlas: - atlas_1 = atlas - log.info(self.__class__) - if atlas is None: - atlas = atlas_1 - if atlas is None: - return - include_dirs = self.get_include_dirs() - h = (self.combine_paths(lib_dirs + include_dirs, 'cblas.h') or [None]) - h = h[0] - if h: - h = os.path.dirname(h) - dict_append(info, include_dirs=[h]) - info['language'] = 'c' - if lapack is not None: - dict_append(info, **lapack) - dict_append(info, **atlas) - elif 'lapack_atlas' in atlas['libraries']: - dict_append(info, **atlas) - dict_append(info, - define_macros=[('ATLAS_WITH_LAPACK_ATLAS', None)]) - self.set_info(**info) - return - else: - dict_append(info, **atlas) - dict_append(info, 
define_macros=[('ATLAS_WITHOUT_LAPACK', None)]) - message = textwrap.dedent(""" - ********************************************************************* - Could not find lapack library within the ATLAS installation. - ********************************************************************* - """) - warnings.warn(message, stacklevel=2) - self.set_info(**info) - return - - # Check if lapack library is complete, only warn if it is not. - lapack_dir = lapack['library_dirs'][0] - lapack_name = lapack['libraries'][0] - lapack_lib = None - lib_prefixes = ['lib'] - if sys.platform == 'win32': - lib_prefixes.append('') - for e in self.library_extensions(): - for prefix in lib_prefixes: - fn = os.path.join(lapack_dir, prefix + lapack_name + e) - if os.path.exists(fn): - lapack_lib = fn - break - if lapack_lib: - break - if lapack_lib is not None: - sz = os.stat(lapack_lib)[6] - if sz <= 4000 * 1024: - message = textwrap.dedent(""" - ********************************************************************* - Lapack library (from ATLAS) is probably incomplete: - size of %s is %sk (expected >4000k) - - Follow the instructions in the KNOWN PROBLEMS section of the file - numpy/INSTALL.txt. - ********************************************************************* - """) % (lapack_lib, sz / 1024) - warnings.warn(message, stacklevel=2) - else: - info['language'] = 'f77' - - atlas_version, atlas_extra_info = get_atlas_version(**atlas) - dict_append(info, **atlas_extra_info) - - self.set_info(**info) - - -class atlas_blas_info(atlas_info): - _lib_names = ['f77blas', 'cblas'] - - def calc_info(self): - lib_dirs = self.get_lib_dirs() - info = {} - opt = self.get_option_single('atlas_libs', 'libraries') - atlas_libs = self.get_libs(opt, self._lib_names + self._lib_atlas) - atlas = self.check_libs2(lib_dirs, atlas_libs, []) - if atlas is None: - return - include_dirs = self.get_include_dirs() - h = (self.combine_paths(lib_dirs + include_dirs, 'cblas.h') or [None]) - h = h[0] - if h: - h = os.path.dirname(h) - dict_append(info, include_dirs=[h]) - info['language'] = 'c' - info['define_macros'] = [('HAVE_CBLAS', None)] - - atlas_version, atlas_extra_info = get_atlas_version(**atlas) - dict_append(atlas, **atlas_extra_info) - - dict_append(info, **atlas) - - self.set_info(**info) - return - - -class atlas_threads_info(atlas_info): - dir_env_var = ['PTATLAS', 'ATLAS'] - _lib_names = ['ptf77blas', 'ptcblas'] - - -class atlas_blas_threads_info(atlas_blas_info): - dir_env_var = ['PTATLAS', 'ATLAS'] - _lib_names = ['ptf77blas', 'ptcblas'] - - -class lapack_atlas_info(atlas_info): - _lib_names = ['lapack_atlas'] + atlas_info._lib_names - - -class lapack_atlas_threads_info(atlas_threads_info): - _lib_names = ['lapack_atlas'] + atlas_threads_info._lib_names - - -class atlas_3_10_info(atlas_info): - _lib_names = ['satlas'] - _lib_atlas = _lib_names - _lib_lapack = _lib_names - - -class atlas_3_10_blas_info(atlas_3_10_info): - _lib_names = ['satlas'] - - def calc_info(self): - lib_dirs = self.get_lib_dirs() - info = {} - opt = self.get_option_single('atlas_lib', 'libraries') - atlas_libs = self.get_libs(opt, self._lib_names) - atlas = self.check_libs2(lib_dirs, atlas_libs, []) - if atlas is None: - return - include_dirs = self.get_include_dirs() - h = (self.combine_paths(lib_dirs + include_dirs, 'cblas.h') or [None]) - h = h[0] - if h: - h = os.path.dirname(h) - dict_append(info, include_dirs=[h]) - info['language'] = 'c' - info['define_macros'] = [('HAVE_CBLAS', None)] - - atlas_version, atlas_extra_info = get_atlas_version(**atlas) - 
dict_append(atlas, **atlas_extra_info) - - dict_append(info, **atlas) - - self.set_info(**info) - return - - -class atlas_3_10_threads_info(atlas_3_10_info): - dir_env_var = ['PTATLAS', 'ATLAS'] - _lib_names = ['tatlas'] - _lib_atlas = _lib_names - _lib_lapack = _lib_names - - -class atlas_3_10_blas_threads_info(atlas_3_10_blas_info): - dir_env_var = ['PTATLAS', 'ATLAS'] - _lib_names = ['tatlas'] - - -class lapack_atlas_3_10_info(atlas_3_10_info): - pass - - -class lapack_atlas_3_10_threads_info(atlas_3_10_threads_info): - pass - - -class lapack_info(system_info): - section = 'lapack' - dir_env_var = 'LAPACK' - _lib_names = ['lapack'] - notfounderror = LapackNotFoundError - - def calc_info(self): - lib_dirs = self.get_lib_dirs() - - opt = self.get_option_single('lapack_libs', 'libraries') - lapack_libs = self.get_libs(opt, self._lib_names) - info = self.check_libs(lib_dirs, lapack_libs, []) - if info is None: - return - info['language'] = 'f77' - self.set_info(**info) - - -class lapack_src_info(system_info): - # LAPACK_SRC is deprecated, please do not use this! - # Build or install a BLAS library via your package manager or from - # source separately. - section = 'lapack_src' - dir_env_var = 'LAPACK_SRC' - notfounderror = LapackSrcNotFoundError - - def get_paths(self, section, key): - pre_dirs = system_info.get_paths(self, section, key) - dirs = [] - for d in pre_dirs: - dirs.extend([d] + self.combine_paths(d, ['LAPACK*/SRC', 'SRC'])) - return [d for d in dirs if os.path.isdir(d)] - - def calc_info(self): - src_dirs = self.get_src_dirs() - src_dir = '' - for d in src_dirs: - if os.path.isfile(os.path.join(d, 'dgesv.f')): - src_dir = d - break - if not src_dir: - #XXX: Get sources from netlib. May be ask first. - return - # The following is extracted from LAPACK-3.0/SRC/Makefile. - # Added missing names from lapack-lite-3.1.1/SRC/Makefile - # while keeping removed names for Lapack-3.0 compatibility. 
- allaux = ''' - ilaenv ieeeck lsame lsamen xerbla - iparmq - ''' # *.f - laux = ''' - bdsdc bdsqr disna labad lacpy ladiv lae2 laebz laed0 laed1 - laed2 laed3 laed4 laed5 laed6 laed7 laed8 laed9 laeda laev2 - lagtf lagts lamch lamrg lanst lapy2 lapy3 larnv larrb larre - larrf lartg laruv las2 lascl lasd0 lasd1 lasd2 lasd3 lasd4 - lasd5 lasd6 lasd7 lasd8 lasd9 lasda lasdq lasdt laset lasq1 - lasq2 lasq3 lasq4 lasq5 lasq6 lasr lasrt lassq lasv2 pttrf - stebz stedc steqr sterf - - larra larrc larrd larr larrk larrj larrr laneg laisnan isnan - lazq3 lazq4 - ''' # [s|d]*.f - lasrc = ''' - gbbrd gbcon gbequ gbrfs gbsv gbsvx gbtf2 gbtrf gbtrs gebak - gebal gebd2 gebrd gecon geequ gees geesx geev geevx gegs gegv - gehd2 gehrd gelq2 gelqf gels gelsd gelss gelsx gelsy geql2 - geqlf geqp3 geqpf geqr2 geqrf gerfs gerq2 gerqf gesc2 gesdd - gesv gesvd gesvx getc2 getf2 getrf getri getrs ggbak ggbal - gges ggesx ggev ggevx ggglm gghrd gglse ggqrf ggrqf ggsvd - ggsvp gtcon gtrfs gtsv gtsvx gttrf gttrs gtts2 hgeqz hsein - hseqr labrd lacon laein lags2 lagtm lahqr lahrd laic1 lals0 - lalsa lalsd langb lange langt lanhs lansb lansp lansy lantb - lantp lantr lapll lapmt laqgb laqge laqp2 laqps laqsb laqsp - laqsy lar1v lar2v larf larfb larfg larft larfx largv larrv - lartv larz larzb larzt laswp lasyf latbs latdf latps latrd - latrs latrz latzm lauu2 lauum pbcon pbequ pbrfs pbstf pbsv - pbsvx pbtf2 pbtrf pbtrs pocon poequ porfs posv posvx potf2 - potrf potri potrs ppcon ppequ pprfs ppsv ppsvx pptrf pptri - pptrs ptcon pteqr ptrfs ptsv ptsvx pttrs ptts2 spcon sprfs - spsv spsvx sptrf sptri sptrs stegr stein sycon syrfs sysv - sysvx sytf2 sytrf sytri sytrs tbcon tbrfs tbtrs tgevc tgex2 - tgexc tgsen tgsja tgsna tgsy2 tgsyl tpcon tprfs tptri tptrs - trcon trevc trexc trrfs trsen trsna trsyl trti2 trtri trtrs - tzrqf tzrzf - - lacn2 lahr2 stemr laqr0 laqr1 laqr2 laqr3 laqr4 laqr5 - ''' # [s|c|d|z]*.f - sd_lasrc = ''' - laexc lag2 lagv2 laln2 lanv2 laqtr lasy2 opgtr opmtr org2l - org2r orgbr orghr orgl2 orglq orgql orgqr orgr2 orgrq orgtr - orm2l orm2r ormbr ormhr orml2 ormlq ormql ormqr ormr2 ormr3 - ormrq ormrz ormtr rscl sbev sbevd sbevx sbgst sbgv sbgvd sbgvx - sbtrd spev spevd spevx spgst spgv spgvd spgvx sptrd stev stevd - stevr stevx syev syevd syevr syevx sygs2 sygst sygv sygvd - sygvx sytd2 sytrd - ''' # [s|d]*.f - cz_lasrc = ''' - bdsqr hbev hbevd hbevx hbgst hbgv hbgvd hbgvx hbtrd hecon heev - heevd heevr heevx hegs2 hegst hegv hegvd hegvx herfs hesv - hesvx hetd2 hetf2 hetrd hetrf hetri hetrs hpcon hpev hpevd - hpevx hpgst hpgv hpgvd hpgvx hprfs hpsv hpsvx hptrd hptrf - hptri hptrs lacgv lacp2 lacpy lacrm lacrt ladiv laed0 laed7 - laed8 laesy laev2 lahef lanhb lanhe lanhp lanht laqhb laqhe - laqhp larcm larnv lartg lascl laset lasr lassq pttrf rot spmv - spr stedc steqr symv syr ung2l ung2r ungbr unghr ungl2 unglq - ungql ungqr ungr2 ungrq ungtr unm2l unm2r unmbr unmhr unml2 - unmlq unmql unmqr unmr2 unmr3 unmrq unmrz unmtr upgtr upmtr - ''' # [c|z]*.f - ####### - sclaux = laux + ' econd ' # s*.f - dzlaux = laux + ' secnd ' # d*.f - slasrc = lasrc + sd_lasrc # s*.f - dlasrc = lasrc + sd_lasrc # d*.f - clasrc = lasrc + cz_lasrc + ' srot srscl ' # c*.f - zlasrc = lasrc + cz_lasrc + ' drot drscl ' # z*.f - oclasrc = ' icmax1 scsum1 ' # *.f - ozlasrc = ' izmax1 dzsum1 ' # *.f - sources = ['s%s.f' % f for f in (sclaux + slasrc).split()] \ - + ['d%s.f' % f for f in (dzlaux + dlasrc).split()] \ - + ['c%s.f' % f for f in (clasrc).split()] \ - + ['z%s.f' % f for f in (zlasrc).split()] \ - + ['%s.f' % f for f 
in (allaux + oclasrc + ozlasrc).split()] - sources = [os.path.join(src_dir, f) for f in sources] - # Lapack 3.1: - src_dir2 = os.path.join(src_dir, '..', 'INSTALL') - sources += [os.path.join(src_dir2, p + 'lamch.f') for p in 'sdcz'] - # Lapack 3.2.1: - sources += [os.path.join(src_dir, p + 'larfp.f') for p in 'sdcz'] - sources += [os.path.join(src_dir, 'ila' + p + 'lr.f') for p in 'sdcz'] - sources += [os.path.join(src_dir, 'ila' + p + 'lc.f') for p in 'sdcz'] - # Should we check here actual existence of source files? - # Yes, the file listing is different between 3.0 and 3.1 - # versions. - sources = [f for f in sources if os.path.isfile(f)] - info = {'sources': sources, 'language': 'f77'} - self.set_info(**info) - -atlas_version_c_text = r''' -/* This file is generated from numpy/distutils/system_info.py */ -void ATL_buildinfo(void); -int main(void) { - ATL_buildinfo(); - return 0; -} -''' - -_cached_atlas_version = {} - - -def get_atlas_version(**config): - libraries = config.get('libraries', []) - library_dirs = config.get('library_dirs', []) - key = (tuple(libraries), tuple(library_dirs)) - if key in _cached_atlas_version: - return _cached_atlas_version[key] - c = cmd_config(Distribution()) - atlas_version = None - info = {} - try: - s, o = c.get_output(atlas_version_c_text, - libraries=libraries, library_dirs=library_dirs, - ) - if s and re.search(r'undefined reference to `_gfortran', o, re.M): - s, o = c.get_output(atlas_version_c_text, - libraries=libraries + ['gfortran'], - library_dirs=library_dirs, - ) - if not s: - warnings.warn(textwrap.dedent(""" - ***************************************************** - Linkage with ATLAS requires gfortran. Use - - python setup.py config_fc --fcompiler=gnu95 ... - - when building extension libraries that use ATLAS. - Make sure that -lgfortran is used for C++ extensions. 
- ***************************************************** - """), stacklevel=2) - dict_append(info, language='f90', - define_macros=[('ATLAS_REQUIRES_GFORTRAN', None)]) - except Exception: # failed to get version from file -- maybe on Windows - # look at directory name - for o in library_dirs: - m = re.search(r'ATLAS_(?P\d+[.]\d+[.]\d+)_', o) - if m: - atlas_version = m.group('version') - if atlas_version is not None: - break - - # final choice --- look at ATLAS_VERSION environment - # variable - if atlas_version is None: - atlas_version = os.environ.get('ATLAS_VERSION', None) - if atlas_version: - dict_append(info, define_macros=[( - 'ATLAS_INFO', _c_string_literal(atlas_version)) - ]) - else: - dict_append(info, define_macros=[('NO_ATLAS_INFO', -1)]) - return atlas_version or '?.?.?', info - - if not s: - m = re.search(r'ATLAS version (?P\d+[.]\d+[.]\d+)', o) - if m: - atlas_version = m.group('version') - if atlas_version is None: - if re.search(r'undefined symbol: ATL_buildinfo', o, re.M): - atlas_version = '3.2.1_pre3.3.6' - else: - log.info('Status: %d', s) - log.info('Output: %s', o) - - elif atlas_version == '3.2.1_pre3.3.6': - dict_append(info, define_macros=[('NO_ATLAS_INFO', -2)]) - else: - dict_append(info, define_macros=[( - 'ATLAS_INFO', _c_string_literal(atlas_version)) - ]) - result = _cached_atlas_version[key] = atlas_version, info - return result - - -class lapack_opt_info(system_info): - notfounderror = LapackNotFoundError - - # List of all known LAPACK libraries, in the default order - lapack_order = ['armpl', 'mkl', 'ssl2', 'openblas', 'flame', - 'accelerate', 'atlas', 'lapack'] - order_env_var_name = 'NPY_LAPACK_ORDER' - - def _calc_info_armpl(self): - info = get_info('lapack_armpl') - if info: - self.set_info(**info) - return True - return False - - def _calc_info_mkl(self): - info = get_info('lapack_mkl') - if info: - self.set_info(**info) - return True - return False - - def _calc_info_ssl2(self): - info = get_info('lapack_ssl2') - if info: - self.set_info(**info) - return True - return False - - def _calc_info_openblas(self): - info = get_info('openblas_lapack') - if info: - self.set_info(**info) - return True - info = get_info('openblas_clapack') - if info: - self.set_info(**info) - return True - return False - - def _calc_info_flame(self): - info = get_info('flame') - if info: - self.set_info(**info) - return True - return False - - def _calc_info_atlas(self): - info = get_info('atlas_3_10_threads') - if not info: - info = get_info('atlas_3_10') - if not info: - info = get_info('atlas_threads') - if not info: - info = get_info('atlas') - if info: - # Figure out if ATLAS has lapack... - # If not we need the lapack library, but not BLAS! - l = info.get('define_macros', []) - if ('ATLAS_WITH_LAPACK_ATLAS', None) in l \ - or ('ATLAS_WITHOUT_LAPACK', None) in l: - # Get LAPACK (with possible warnings) - # If not found we don't accept anything - # since we can't use ATLAS with LAPACK! 
- lapack_info = self._get_info_lapack() - if not lapack_info: - return False - dict_append(info, **lapack_info) - self.set_info(**info) - return True - return False - - def _calc_info_accelerate(self): - info = get_info('accelerate') - if info: - self.set_info(**info) - return True - return False - - def _get_info_blas(self): - # Default to get the optimized BLAS implementation - info = get_info('blas_opt') - if not info: - warnings.warn(BlasNotFoundError.__doc__ or '', stacklevel=3) - info_src = get_info('blas_src') - if not info_src: - warnings.warn(BlasSrcNotFoundError.__doc__ or '', stacklevel=3) - return {} - dict_append(info, libraries=[('fblas_src', info_src)]) - return info - - def _get_info_lapack(self): - info = get_info('lapack') - if not info: - warnings.warn(LapackNotFoundError.__doc__ or '', stacklevel=3) - info_src = get_info('lapack_src') - if not info_src: - warnings.warn(LapackSrcNotFoundError.__doc__ or '', stacklevel=3) - return {} - dict_append(info, libraries=[('flapack_src', info_src)]) - return info - - def _calc_info_lapack(self): - info = self._get_info_lapack() - if info: - info_blas = self._get_info_blas() - dict_append(info, **info_blas) - dict_append(info, define_macros=[('NO_ATLAS_INFO', 1)]) - self.set_info(**info) - return True - return False - - def _calc_info_from_envvar(self): - info = {} - info['language'] = 'f77' - info['libraries'] = [] - info['include_dirs'] = [] - info['define_macros'] = [] - info['extra_link_args'] = os.environ['NPY_LAPACK_LIBS'].split() - self.set_info(**info) - return True - - def _calc_info(self, name): - return getattr(self, '_calc_info_{}'.format(name))() - - def calc_info(self): - lapack_order, unknown_order = _parse_env_order(self.lapack_order, self.order_env_var_name) - if len(unknown_order) > 0: - raise ValueError("lapack_opt_info user defined " - "LAPACK order has unacceptable " - "values: {}".format(unknown_order)) - - if 'NPY_LAPACK_LIBS' in os.environ: - # Bypass autodetection, set language to F77 and use env var linker - # flags directly - self._calc_info_from_envvar() - return - - for lapack in lapack_order: - if self._calc_info(lapack): - return - - if 'lapack' not in lapack_order: - # Since the user may request *not* to use any library, we still need - # to raise warnings to signal missing packages! 
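            # (Editor's note, not in the original file: for illustration, an
            # assumed setting such as NPY_LAPACK_ORDER='^lapack' removes the
            # plain netlib fallback from the search order, so these warnings
            # still alert the user if no other candidate matched.)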
- warnings.warn(LapackNotFoundError.__doc__ or '', stacklevel=2) - warnings.warn(LapackSrcNotFoundError.__doc__ or '', stacklevel=2) - - -class _ilp64_opt_info_mixin: - symbol_suffix = None - symbol_prefix = None - - def _check_info(self, info): - macros = dict(info.get('define_macros', [])) - prefix = macros.get('BLAS_SYMBOL_PREFIX', '') - suffix = macros.get('BLAS_SYMBOL_SUFFIX', '') - - if self.symbol_prefix not in (None, prefix): - return False - - if self.symbol_suffix not in (None, suffix): - return False - - return bool(info) - - -class lapack_ilp64_opt_info(lapack_opt_info, _ilp64_opt_info_mixin): - notfounderror = LapackILP64NotFoundError - lapack_order = ['openblas64_', 'openblas_ilp64', 'accelerate'] - order_env_var_name = 'NPY_LAPACK_ILP64_ORDER' - - def _calc_info(self, name): - print('lapack_ilp64_opt_info._calc_info(name=%s)' % (name)) - info = get_info(name + '_lapack') - if self._check_info(info): - self.set_info(**info) - return True - else: - print('%s_lapack does not exist' % (name)) - return False - - -class lapack_ilp64_plain_opt_info(lapack_ilp64_opt_info): - # Same as lapack_ilp64_opt_info, but fix symbol names - symbol_prefix = '' - symbol_suffix = '' - - -class lapack64__opt_info(lapack_ilp64_opt_info): - symbol_prefix = '' - symbol_suffix = '64_' - - -class blas_opt_info(system_info): - notfounderror = BlasNotFoundError - # List of all known BLAS libraries, in the default order - - blas_order = ['armpl', 'mkl', 'ssl2', 'blis', 'openblas', - 'accelerate', 'atlas', 'blas'] - order_env_var_name = 'NPY_BLAS_ORDER' - - def _calc_info_armpl(self): - info = get_info('blas_armpl') - if info: - self.set_info(**info) - return True - return False - - def _calc_info_mkl(self): - info = get_info('blas_mkl') - if info: - self.set_info(**info) - return True - return False - - def _calc_info_ssl2(self): - info = get_info('blas_ssl2') - if info: - self.set_info(**info) - return True - return False - - def _calc_info_blis(self): - info = get_info('blis') - if info: - self.set_info(**info) - return True - return False - - def _calc_info_openblas(self): - info = get_info('openblas') - if info: - self.set_info(**info) - return True - return False - - def _calc_info_atlas(self): - info = get_info('atlas_3_10_blas_threads') - if not info: - info = get_info('atlas_3_10_blas') - if not info: - info = get_info('atlas_blas_threads') - if not info: - info = get_info('atlas_blas') - if info: - self.set_info(**info) - return True - return False - - def _calc_info_accelerate(self): - info = get_info('accelerate') - if info: - self.set_info(**info) - return True - return False - - def _calc_info_blas(self): - # Warn about a non-optimized BLAS library - warnings.warn(BlasOptNotFoundError.__doc__ or '', stacklevel=3) - info = {} - dict_append(info, define_macros=[('NO_ATLAS_INFO', 1)]) - - blas = get_info('blas') - if blas: - dict_append(info, **blas) - else: - # Not even BLAS was found! 
- warnings.warn(BlasNotFoundError.__doc__ or '', stacklevel=3) - - blas_src = get_info('blas_src') - if not blas_src: - warnings.warn(BlasSrcNotFoundError.__doc__ or '', stacklevel=3) - return False - dict_append(info, libraries=[('fblas_src', blas_src)]) - - self.set_info(**info) - return True - - def _calc_info_from_envvar(self): - info = {} - info['language'] = 'f77' - info['libraries'] = [] - info['include_dirs'] = [] - info['define_macros'] = [] - info['extra_link_args'] = os.environ['NPY_BLAS_LIBS'].split() - if 'NPY_CBLAS_LIBS' in os.environ: - info['define_macros'].append(('HAVE_CBLAS', None)) - info['extra_link_args'].extend( - os.environ['NPY_CBLAS_LIBS'].split()) - self.set_info(**info) - return True - - def _calc_info(self, name): - return getattr(self, '_calc_info_{}'.format(name))() - - def calc_info(self): - blas_order, unknown_order = _parse_env_order(self.blas_order, self.order_env_var_name) - if len(unknown_order) > 0: - raise ValueError("blas_opt_info user defined BLAS order has unacceptable values: {}".format(unknown_order)) - - if 'NPY_BLAS_LIBS' in os.environ: - # Bypass autodetection, set language to F77 and use env var linker - # flags directly - self._calc_info_from_envvar() - return - - for blas in blas_order: - if self._calc_info(blas): - return - - if 'blas' not in blas_order: - # Since the user may request *not* to use any library, we still need - # to raise warnings to signal missing packages! - warnings.warn(BlasNotFoundError.__doc__ or '', stacklevel=2) - warnings.warn(BlasSrcNotFoundError.__doc__ or '', stacklevel=2) - - -class blas_ilp64_opt_info(blas_opt_info, _ilp64_opt_info_mixin): - notfounderror = BlasILP64NotFoundError - blas_order = ['openblas64_', 'openblas_ilp64', 'accelerate'] - order_env_var_name = 'NPY_BLAS_ILP64_ORDER' - - def _calc_info(self, name): - info = get_info(name) - if self._check_info(info): - self.set_info(**info) - return True - return False - - -class blas_ilp64_plain_opt_info(blas_ilp64_opt_info): - symbol_prefix = '' - symbol_suffix = '' - - -class blas64__opt_info(blas_ilp64_opt_info): - symbol_prefix = '' - symbol_suffix = '64_' - - -class cblas_info(system_info): - section = 'cblas' - dir_env_var = 'CBLAS' - # No default as it's used only in blas_info - _lib_names = [] - notfounderror = BlasNotFoundError - - -class blas_info(system_info): - section = 'blas' - dir_env_var = 'BLAS' - _lib_names = ['blas'] - notfounderror = BlasNotFoundError - - def calc_info(self): - lib_dirs = self.get_lib_dirs() - opt = self.get_option_single('blas_libs', 'libraries') - blas_libs = self.get_libs(opt, self._lib_names) - info = self.check_libs(lib_dirs, blas_libs, []) - if info is None: - return - else: - info['include_dirs'] = self.get_include_dirs() - if platform.system() == 'Windows': - # The check for windows is needed because get_cblas_libs uses the - # same compiler that was used to compile Python and msvc is - # often not installed when mingw is being used. This rough - # treatment is not desirable, but windows is tricky. - info['language'] = 'f77' # XXX: is it generally true? 
- # If cblas is given as an option, use those - cblas_info_obj = cblas_info() - cblas_opt = cblas_info_obj.get_option_single('cblas_libs', 'libraries') - cblas_libs = cblas_info_obj.get_libs(cblas_opt, None) - if cblas_libs: - info['libraries'] = cblas_libs + blas_libs - info['define_macros'] = [('HAVE_CBLAS', None)] - else: - lib = self.get_cblas_libs(info) - if lib is not None: - info['language'] = 'c' - info['libraries'] = lib - info['define_macros'] = [('HAVE_CBLAS', None)] - self.set_info(**info) - - def get_cblas_libs(self, info): - """ Check whether we can link with CBLAS interface - - This method will search through several combinations of libraries - to check whether CBLAS is present: - - 1. Libraries in ``info['libraries']``, as is - 2. As 1. but also explicitly adding ``'cblas'`` as a library - 3. As 1. but also explicitly adding ``'blas'`` as a library - 4. Check only library ``'cblas'`` - 5. Check only library ``'blas'`` - - Parameters - ---------- - info : dict - system information dictionary for compilation and linking - - Returns - ------- - libraries : list of str or None - a list of libraries that enables the use of CBLAS interface. - Returns None if not found or a compilation error occurs. - - Since 1.17 returns a list. - """ - # primitive cblas check by looking for the header and trying to link - # cblas or blas - c = customized_ccompiler() - tmpdir = tempfile.mkdtemp() - s = textwrap.dedent("""\ - #include - int main(int argc, const char *argv[]) - { - double a[4] = {1,2,3,4}; - double b[4] = {5,6,7,8}; - return cblas_ddot(4, a, 1, b, 1) > 10; - }""") - src = os.path.join(tmpdir, 'source.c') - try: - with open(src, 'w') as f: - f.write(s) - - try: - # check we can compile (find headers) - obj = c.compile([src], output_dir=tmpdir, - include_dirs=self.get_include_dirs()) - except (distutils.ccompiler.CompileError, distutils.ccompiler.LinkError): - return None - - # check we can link (find library) - # some systems have separate cblas and blas libs. 
- for libs in [info['libraries'], ['cblas'] + info['libraries'], - ['blas'] + info['libraries'], ['cblas'], ['blas']]: - try: - c.link_executable(obj, os.path.join(tmpdir, "a.out"), - libraries=libs, - library_dirs=info['library_dirs'], - extra_postargs=info.get('extra_link_args', [])) - return libs - except distutils.ccompiler.LinkError: - pass - finally: - shutil.rmtree(tmpdir) - return None - - -class openblas_info(blas_info): - section = 'openblas' - dir_env_var = 'OPENBLAS' - _lib_names = ['openblas'] - _require_symbols = [] - notfounderror = BlasNotFoundError - - @property - def symbol_prefix(self): - try: - return self.cp.get(self.section, 'symbol_prefix') - except NoOptionError: - return '' - - @property - def symbol_suffix(self): - try: - return self.cp.get(self.section, 'symbol_suffix') - except NoOptionError: - return '' - - def _calc_info(self): - c = customized_ccompiler() - - lib_dirs = self.get_lib_dirs() - - # Prefer to use libraries over openblas_libs - opt = self.get_option_single('openblas_libs', 'libraries') - openblas_libs = self.get_libs(opt, self._lib_names) - - info = self.check_libs(lib_dirs, openblas_libs, []) - - if c.compiler_type == "msvc" and info is None: - from numpy.distutils.fcompiler import new_fcompiler - f = new_fcompiler(c_compiler=c) - if f and f.compiler_type == 'gnu95': - # Try gfortran-compatible library files - info = self.check_msvc_gfortran_libs(lib_dirs, openblas_libs) - # Skip lapack check, we'd need build_ext to do it - skip_symbol_check = True - elif info: - skip_symbol_check = False - info['language'] = 'c' - - if info is None: - return None - - # Add extra info for OpenBLAS - extra_info = self.calc_extra_info() - dict_append(info, **extra_info) - - if not (skip_symbol_check or self.check_symbols(info)): - return None - - info['define_macros'] = [('HAVE_CBLAS', None)] - if self.symbol_prefix: - info['define_macros'] += [('BLAS_SYMBOL_PREFIX', self.symbol_prefix)] - if self.symbol_suffix: - info['define_macros'] += [ - ('BLAS_SYMBOL_SUFFIX', self.symbol_suffix), - ('OPENBLAS_ILP64_NAMING_SCHEME', None), - ] - - return info - - def calc_info(self): - info = self._calc_info() - if info is not None: - self.set_info(**info) - - def check_msvc_gfortran_libs(self, library_dirs, libraries): - # First, find the full path to each library directory - library_paths = [] - for library in libraries: - for library_dir in library_dirs: - # MinGW static ext will be .a - fullpath = os.path.join(library_dir, library + '.a') - if os.path.isfile(fullpath): - library_paths.append(fullpath) - break - else: - return None - - # Generate numpy.distutils virtual static library file - basename = self.__class__.__name__ - tmpdir = os.path.join(os.getcwd(), 'build', basename) - if not os.path.isdir(tmpdir): - os.makedirs(tmpdir) - - info = {'library_dirs': [tmpdir], - 'libraries': [basename], - 'language': 'f77'} - - fake_lib_file = os.path.join(tmpdir, basename + '.fobjects') - fake_clib_file = os.path.join(tmpdir, basename + '.cobjects') - with open(fake_lib_file, 'w') as f: - f.write("\n".join(library_paths)) - with open(fake_clib_file, 'w') as f: - pass - - return info - - def check_symbols(self, info): - res = False - c = customized_ccompiler() - - tmpdir = tempfile.mkdtemp() - - prototypes = "\n".join("void %s%s%s();" % (self.symbol_prefix, - symbol_name, - self.symbol_suffix) - for symbol_name in self._require_symbols) - calls = "\n".join("%s%s%s();" % (self.symbol_prefix, - symbol_name, - self.symbol_suffix) - for symbol_name in self._require_symbols) - s = 
textwrap.dedent("""\ - %(prototypes)s - int main(int argc, const char *argv[]) - { - %(calls)s - return 0; - }""") % dict(prototypes=prototypes, calls=calls) - src = os.path.join(tmpdir, 'source.c') - out = os.path.join(tmpdir, 'a.out') - # Add the additional "extra" arguments - try: - extra_args = info['extra_link_args'] - except Exception: - extra_args = [] - try: - with open(src, 'w') as f: - f.write(s) - obj = c.compile([src], output_dir=tmpdir) - try: - c.link_executable(obj, out, libraries=info['libraries'], - library_dirs=info['library_dirs'], - extra_postargs=extra_args) - res = True - except distutils.ccompiler.LinkError: - res = False - finally: - shutil.rmtree(tmpdir) - return res - -class openblas_lapack_info(openblas_info): - section = 'openblas' - dir_env_var = 'OPENBLAS' - _lib_names = ['openblas'] - _require_symbols = ['zungqr_'] - notfounderror = BlasNotFoundError - -class openblas_clapack_info(openblas_lapack_info): - _lib_names = ['openblas', 'lapack'] - -class openblas_ilp64_info(openblas_info): - section = 'openblas_ilp64' - dir_env_var = 'OPENBLAS_ILP64' - _lib_names = ['openblas64'] - _require_symbols = ['dgemm_', 'cblas_dgemm'] - notfounderror = BlasILP64NotFoundError - - def _calc_info(self): - info = super()._calc_info() - if info is not None: - info['define_macros'] += [('HAVE_BLAS_ILP64', None)] - return info - -class openblas_ilp64_lapack_info(openblas_ilp64_info): - _require_symbols = ['dgemm_', 'cblas_dgemm', 'zungqr_', 'LAPACKE_zungqr'] - - def _calc_info(self): - info = super()._calc_info() - if info: - info['define_macros'] += [('HAVE_LAPACKE', None)] - return info - -class openblas64__info(openblas_ilp64_info): - # ILP64 Openblas, with default symbol suffix - section = 'openblas64_' - dir_env_var = 'OPENBLAS64_' - _lib_names = ['openblas64_'] - symbol_suffix = '64_' - symbol_prefix = '' - -class openblas64__lapack_info(openblas_ilp64_lapack_info, openblas64__info): - pass - -class blis_info(blas_info): - section = 'blis' - dir_env_var = 'BLIS' - _lib_names = ['blis'] - notfounderror = BlasNotFoundError - - def calc_info(self): - lib_dirs = self.get_lib_dirs() - opt = self.get_option_single('blis_libs', 'libraries') - blis_libs = self.get_libs(opt, self._lib_names) - info = self.check_libs2(lib_dirs, blis_libs, []) - if info is None: - return - - # Add include dirs - incl_dirs = self.get_include_dirs() - dict_append(info, - language='c', - define_macros=[('HAVE_CBLAS', None)], - include_dirs=incl_dirs) - self.set_info(**info) - - -class flame_info(system_info): - """ Usage of libflame for LAPACK operations - - This requires libflame to be compiled with lapack wrappers: - - ./configure --enable-lapack2flame ... - - Be aware that libflame 5.1.0 has some missing names in the shared library, so - if you have problems, try the static flame library. 
- """ - section = 'flame' - _lib_names = ['flame'] - notfounderror = FlameNotFoundError - - def check_embedded_lapack(self, info): - """ libflame does not necessarily have a wrapper for fortran LAPACK, we need to check """ - c = customized_ccompiler() - - tmpdir = tempfile.mkdtemp() - s = textwrap.dedent("""\ - void zungqr_(); - int main(int argc, const char *argv[]) - { - zungqr_(); - return 0; - }""") - src = os.path.join(tmpdir, 'source.c') - out = os.path.join(tmpdir, 'a.out') - # Add the additional "extra" arguments - extra_args = info.get('extra_link_args', []) - try: - with open(src, 'w') as f: - f.write(s) - obj = c.compile([src], output_dir=tmpdir) - try: - c.link_executable(obj, out, libraries=info['libraries'], - library_dirs=info['library_dirs'], - extra_postargs=extra_args) - return True - except distutils.ccompiler.LinkError: - return False - finally: - shutil.rmtree(tmpdir) - - def calc_info(self): - lib_dirs = self.get_lib_dirs() - flame_libs = self.get_libs('libraries', self._lib_names) - - info = self.check_libs2(lib_dirs, flame_libs, []) - if info is None: - return - - # Add the extra flag args to info - extra_info = self.calc_extra_info() - dict_append(info, **extra_info) - - if self.check_embedded_lapack(info): - # check if the user has supplied all information required - self.set_info(**info) - else: - # Try and get the BLAS lib to see if we can get it to work - blas_info = get_info('blas_opt') - if not blas_info: - # since we already failed once, this ain't going to work either - return - - # Now we need to merge the two dictionaries - for key in blas_info: - if isinstance(blas_info[key], list): - info[key] = info.get(key, []) + blas_info[key] - elif isinstance(blas_info[key], tuple): - info[key] = info.get(key, ()) + blas_info[key] - else: - info[key] = info.get(key, '') + blas_info[key] - - # Now check again - if self.check_embedded_lapack(info): - self.set_info(**info) - - -class accelerate_info(system_info): - section = 'accelerate' - _lib_names = ['accelerate', 'veclib'] - notfounderror = BlasNotFoundError - - def calc_info(self): - # Make possible to enable/disable from config file/env var - libraries = os.environ.get('ACCELERATE') - if libraries: - libraries = [libraries] - else: - libraries = self.get_libs('libraries', self._lib_names) - libraries = [lib.strip().lower() for lib in libraries] - - if (sys.platform == 'darwin' and - not os.getenv('_PYTHON_HOST_PLATFORM', None)): - # Use the system BLAS from Accelerate or vecLib under OSX - args = [] - link_args = [] - if get_platform()[-4:] == 'i386' or 'intel' in get_platform() or \ - 'x86_64' in get_platform() or \ - 'i386' in platform.platform(): - intel = 1 - else: - intel = 0 - if (os.path.exists('/System/Library/Frameworks' - '/Accelerate.framework/') and - 'accelerate' in libraries): - if intel: - args.extend(['-msse3']) - args.extend([ - '-I/System/Library/Frameworks/vecLib.framework/Headers']) - link_args.extend(['-Wl,-framework', '-Wl,Accelerate']) - elif (os.path.exists('/System/Library/Frameworks' - '/vecLib.framework/') and - 'veclib' in libraries): - if intel: - args.extend(['-msse3']) - args.extend([ - '-I/System/Library/Frameworks/vecLib.framework/Headers']) - link_args.extend(['-Wl,-framework', '-Wl,vecLib']) - - if args: - macros = [ - ('NO_ATLAS_INFO', 3), - ('HAVE_CBLAS', None), - ('ACCELERATE_NEW_LAPACK', None), - ] - if(os.getenv('NPY_USE_BLAS_ILP64', None)): - print('Setting HAVE_BLAS_ILP64') - macros += [ - ('HAVE_BLAS_ILP64', None), - ('ACCELERATE_LAPACK_ILP64', None), - ] - 
self.set_info(extra_compile_args=args, - extra_link_args=link_args, - define_macros=macros) - - return - -class accelerate_lapack_info(accelerate_info): - def _calc_info(self): - return super()._calc_info() - -class blas_src_info(system_info): - # BLAS_SRC is deprecated, please do not use this! - # Build or install a BLAS library via your package manager or from - # source separately. - section = 'blas_src' - dir_env_var = 'BLAS_SRC' - notfounderror = BlasSrcNotFoundError - - def get_paths(self, section, key): - pre_dirs = system_info.get_paths(self, section, key) - dirs = [] - for d in pre_dirs: - dirs.extend([d] + self.combine_paths(d, ['blas'])) - return [d for d in dirs if os.path.isdir(d)] - - def calc_info(self): - src_dirs = self.get_src_dirs() - src_dir = '' - for d in src_dirs: - if os.path.isfile(os.path.join(d, 'daxpy.f')): - src_dir = d - break - if not src_dir: - #XXX: Get sources from netlib. May be ask first. - return - blas1 = ''' - caxpy csscal dnrm2 dzasum saxpy srotg zdotc ccopy cswap drot - dznrm2 scasum srotm zdotu cdotc dasum drotg icamax scnrm2 - srotmg zdrot cdotu daxpy drotm idamax scopy sscal zdscal crotg - dcabs1 drotmg isamax sdot sswap zrotg cscal dcopy dscal izamax - snrm2 zaxpy zscal csrot ddot dswap sasum srot zcopy zswap - scabs1 - ''' - blas2 = ''' - cgbmv chpmv ctrsv dsymv dtrsv sspr2 strmv zhemv ztpmv cgemv - chpr dgbmv dsyr lsame ssymv strsv zher ztpsv cgerc chpr2 dgemv - dsyr2 sgbmv ssyr xerbla zher2 ztrmv cgeru ctbmv dger dtbmv - sgemv ssyr2 zgbmv zhpmv ztrsv chbmv ctbsv dsbmv dtbsv sger - stbmv zgemv zhpr chemv ctpmv dspmv dtpmv ssbmv stbsv zgerc - zhpr2 cher ctpsv dspr dtpsv sspmv stpmv zgeru ztbmv cher2 - ctrmv dspr2 dtrmv sspr stpsv zhbmv ztbsv - ''' - blas3 = ''' - cgemm csymm ctrsm dsyrk sgemm strmm zhemm zsyr2k chemm csyr2k - dgemm dtrmm ssymm strsm zher2k zsyrk cher2k csyrk dsymm dtrsm - ssyr2k zherk ztrmm cherk ctrmm dsyr2k ssyrk zgemm zsymm ztrsm - ''' - sources = [os.path.join(src_dir, f + '.f') \ - for f in (blas1 + blas2 + blas3).split()] - #XXX: should we check here actual existence of source files? 
- sources = [f for f in sources if os.path.isfile(f)] - info = {'sources': sources, 'language': 'f77'} - self.set_info(**info) - - -class x11_info(system_info): - section = 'x11' - notfounderror = X11NotFoundError - _lib_names = ['X11'] - - def __init__(self): - system_info.__init__(self, - default_lib_dirs=default_x11_lib_dirs, - default_include_dirs=default_x11_include_dirs) - - def calc_info(self): - if sys.platform in ['win32']: - return - lib_dirs = self.get_lib_dirs() - include_dirs = self.get_include_dirs() - opt = self.get_option_single('x11_libs', 'libraries') - x11_libs = self.get_libs(opt, self._lib_names) - info = self.check_libs(lib_dirs, x11_libs, []) - if info is None: - return - inc_dir = None - for d in include_dirs: - if self.combine_paths(d, 'X11/X.h'): - inc_dir = d - break - if inc_dir is not None: - dict_append(info, include_dirs=[inc_dir]) - self.set_info(**info) - - -class _numpy_info(system_info): - section = 'Numeric' - modulename = 'Numeric' - notfounderror = NumericNotFoundError - - def __init__(self): - include_dirs = [] - try: - module = __import__(self.modulename) - prefix = [] - for name in module.__file__.split(os.sep): - if name == 'lib': - break - prefix.append(name) - - # Ask numpy for its own include path before attempting - # anything else - try: - include_dirs.append(getattr(module, 'get_include')()) - except AttributeError: - pass - - include_dirs.append(sysconfig.get_path('include')) - except ImportError: - pass - py_incl_dir = sysconfig.get_path('include') - include_dirs.append(py_incl_dir) - py_pincl_dir = sysconfig.get_path('platinclude') - if py_pincl_dir not in include_dirs: - include_dirs.append(py_pincl_dir) - for d in default_include_dirs: - d = os.path.join(d, os.path.basename(py_incl_dir)) - if d not in include_dirs: - include_dirs.append(d) - system_info.__init__(self, - default_lib_dirs=[], - default_include_dirs=include_dirs) - - def calc_info(self): - try: - module = __import__(self.modulename) - except ImportError: - return - info = {} - macros = [] - for v in ['__version__', 'version']: - vrs = getattr(module, v, None) - if vrs is None: - continue - macros = [(self.modulename.upper() + '_VERSION', - _c_string_literal(vrs)), - (self.modulename.upper(), None)] - break - dict_append(info, define_macros=macros) - include_dirs = self.get_include_dirs() - inc_dir = None - for d in include_dirs: - if self.combine_paths(d, - os.path.join(self.modulename, - 'arrayobject.h')): - inc_dir = d - break - if inc_dir is not None: - dict_append(info, include_dirs=[inc_dir]) - if info: - self.set_info(**info) - return - - -class numarray_info(_numpy_info): - section = 'numarray' - modulename = 'numarray' - - -class Numeric_info(_numpy_info): - section = 'Numeric' - modulename = 'Numeric' - - -class numpy_info(_numpy_info): - section = 'numpy' - modulename = 'numpy' - - -class numerix_info(system_info): - section = 'numerix' - - def calc_info(self): - which = None, None - if os.getenv("NUMERIX"): - which = os.getenv("NUMERIX"), "environment var" - # If all the above fail, default to numpy. 
- if which[0] is None: - which = "numpy", "defaulted" - try: - import numpy # noqa: F401 - which = "numpy", "defaulted" - except ImportError as e: - msg1 = str(e) - try: - import Numeric # noqa: F401 - which = "numeric", "defaulted" - except ImportError as e: - msg2 = str(e) - try: - import numarray # noqa: F401 - which = "numarray", "defaulted" - except ImportError as e: - msg3 = str(e) - log.info(msg1) - log.info(msg2) - log.info(msg3) - which = which[0].strip().lower(), which[1] - if which[0] not in ["numeric", "numarray", "numpy"]: - raise ValueError("numerix selector must be either 'Numeric' " - "or 'numarray' or 'numpy' but the value obtained" - " from the %s was '%s'." % (which[1], which[0])) - os.environ['NUMERIX'] = which[0] - self.set_info(**get_info(which[0])) - - -class f2py_info(system_info): - def calc_info(self): - try: - import numpy.f2py as f2py - except ImportError: - return - f2py_dir = os.path.join(os.path.dirname(f2py.__file__), 'src') - self.set_info(sources=[os.path.join(f2py_dir, 'fortranobject.c')], - include_dirs=[f2py_dir]) - return - - -class boost_python_info(system_info): - section = 'boost_python' - dir_env_var = 'BOOST' - - def get_paths(self, section, key): - pre_dirs = system_info.get_paths(self, section, key) - dirs = [] - for d in pre_dirs: - dirs.extend([d] + self.combine_paths(d, ['boost*'])) - return [d for d in dirs if os.path.isdir(d)] - - def calc_info(self): - src_dirs = self.get_src_dirs() - src_dir = '' - for d in src_dirs: - if os.path.isfile(os.path.join(d, 'libs', 'python', 'src', - 'module.cpp')): - src_dir = d - break - if not src_dir: - return - py_incl_dirs = [sysconfig.get_path('include')] - py_pincl_dir = sysconfig.get_path('platinclude') - if py_pincl_dir not in py_incl_dirs: - py_incl_dirs.append(py_pincl_dir) - srcs_dir = os.path.join(src_dir, 'libs', 'python', 'src') - bpl_srcs = glob(os.path.join(srcs_dir, '*.cpp')) - bpl_srcs += glob(os.path.join(srcs_dir, '*', '*.cpp')) - info = {'libraries': [('boost_python_src', - {'include_dirs': [src_dir] + py_incl_dirs, - 'sources':bpl_srcs} - )], - 'include_dirs': [src_dir], - } - if info: - self.set_info(**info) - return - - -class agg2_info(system_info): - section = 'agg2' - dir_env_var = 'AGG2' - - def get_paths(self, section, key): - pre_dirs = system_info.get_paths(self, section, key) - dirs = [] - for d in pre_dirs: - dirs.extend([d] + self.combine_paths(d, ['agg2*'])) - return [d for d in dirs if os.path.isdir(d)] - - def calc_info(self): - src_dirs = self.get_src_dirs() - src_dir = '' - for d in src_dirs: - if os.path.isfile(os.path.join(d, 'src', 'agg_affine_matrix.cpp')): - src_dir = d - break - if not src_dir: - return - if sys.platform == 'win32': - agg2_srcs = glob(os.path.join(src_dir, 'src', 'platform', - 'win32', 'agg_win32_bmp.cpp')) - else: - agg2_srcs = glob(os.path.join(src_dir, 'src', '*.cpp')) - agg2_srcs += [os.path.join(src_dir, 'src', 'platform', - 'X11', - 'agg_platform_support.cpp')] - - info = {'libraries': - [('agg2_src', - {'sources': agg2_srcs, - 'include_dirs': [os.path.join(src_dir, 'include')], - } - )], - 'include_dirs': [os.path.join(src_dir, 'include')], - } - if info: - self.set_info(**info) - return - - -class _pkg_config_info(system_info): - section = None - config_env_var = 'PKG_CONFIG' - default_config_exe = 'pkg-config' - append_config_exe = '' - version_macro_name = None - release_macro_name = None - version_flag = '--modversion' - cflags_flag = '--cflags' - - def get_config_exe(self): - if self.config_env_var in os.environ: - return 
os.environ[self.config_env_var] - return self.default_config_exe - - def get_config_output(self, config_exe, option): - cmd = config_exe + ' ' + self.append_config_exe + ' ' + option - try: - o = subprocess.check_output(cmd) - except (OSError, subprocess.CalledProcessError): - pass - else: - o = filepath_from_subprocess_output(o) - return o - - def calc_info(self): - config_exe = find_executable(self.get_config_exe()) - if not config_exe: - log.warn('File not found: %s. Cannot determine %s info.' \ - % (config_exe, self.section)) - return - info = {} - macros = [] - libraries = [] - library_dirs = [] - include_dirs = [] - extra_link_args = [] - extra_compile_args = [] - version = self.get_config_output(config_exe, self.version_flag) - if version: - macros.append((self.__class__.__name__.split('.')[-1].upper(), - _c_string_literal(version))) - if self.version_macro_name: - macros.append((self.version_macro_name + '_%s' - % (version.replace('.', '_')), None)) - if self.release_macro_name: - release = self.get_config_output(config_exe, '--release') - if release: - macros.append((self.release_macro_name + '_%s' - % (release.replace('.', '_')), None)) - opts = self.get_config_output(config_exe, '--libs') - if opts: - for opt in opts.split(): - if opt[:2] == '-l': - libraries.append(opt[2:]) - elif opt[:2] == '-L': - library_dirs.append(opt[2:]) - else: - extra_link_args.append(opt) - opts = self.get_config_output(config_exe, self.cflags_flag) - if opts: - for opt in opts.split(): - if opt[:2] == '-I': - include_dirs.append(opt[2:]) - elif opt[:2] == '-D': - if '=' in opt: - n, v = opt[2:].split('=') - macros.append((n, v)) - else: - macros.append((opt[2:], None)) - else: - extra_compile_args.append(opt) - if macros: - dict_append(info, define_macros=macros) - if libraries: - dict_append(info, libraries=libraries) - if library_dirs: - dict_append(info, library_dirs=library_dirs) - if include_dirs: - dict_append(info, include_dirs=include_dirs) - if extra_link_args: - dict_append(info, extra_link_args=extra_link_args) - if extra_compile_args: - dict_append(info, extra_compile_args=extra_compile_args) - if info: - self.set_info(**info) - return - - -class wx_info(_pkg_config_info): - section = 'wx' - config_env_var = 'WX_CONFIG' - default_config_exe = 'wx-config' - append_config_exe = '' - version_macro_name = 'WX_VERSION' - release_macro_name = 'WX_RELEASE' - version_flag = '--version' - cflags_flag = '--cxxflags' - - -class gdk_pixbuf_xlib_2_info(_pkg_config_info): - section = 'gdk_pixbuf_xlib_2' - append_config_exe = 'gdk-pixbuf-xlib-2.0' - version_macro_name = 'GDK_PIXBUF_XLIB_VERSION' - - -class gdk_pixbuf_2_info(_pkg_config_info): - section = 'gdk_pixbuf_2' - append_config_exe = 'gdk-pixbuf-2.0' - version_macro_name = 'GDK_PIXBUF_VERSION' - - -class gdk_x11_2_info(_pkg_config_info): - section = 'gdk_x11_2' - append_config_exe = 'gdk-x11-2.0' - version_macro_name = 'GDK_X11_VERSION' - - -class gdk_2_info(_pkg_config_info): - section = 'gdk_2' - append_config_exe = 'gdk-2.0' - version_macro_name = 'GDK_VERSION' - - -class gdk_info(_pkg_config_info): - section = 'gdk' - append_config_exe = 'gdk' - version_macro_name = 'GDK_VERSION' - - -class gtkp_x11_2_info(_pkg_config_info): - section = 'gtkp_x11_2' - append_config_exe = 'gtk+-x11-2.0' - version_macro_name = 'GTK_X11_VERSION' - - -class gtkp_2_info(_pkg_config_info): - section = 'gtkp_2' - append_config_exe = 'gtk+-2.0' - version_macro_name = 'GTK_VERSION' - - -class xft_info(_pkg_config_info): - section = 'xft' - append_config_exe = 'xft' - 
version_macro_name = 'XFT_VERSION' - - -class freetype2_info(_pkg_config_info): - section = 'freetype2' - append_config_exe = 'freetype2' - version_macro_name = 'FREETYPE2_VERSION' - - -class amd_info(system_info): - section = 'amd' - dir_env_var = 'AMD' - _lib_names = ['amd'] - - def calc_info(self): - lib_dirs = self.get_lib_dirs() - - opt = self.get_option_single('amd_libs', 'libraries') - amd_libs = self.get_libs(opt, self._lib_names) - info = self.check_libs(lib_dirs, amd_libs, []) - if info is None: - return - - include_dirs = self.get_include_dirs() - - inc_dir = None - for d in include_dirs: - p = self.combine_paths(d, 'amd.h') - if p: - inc_dir = os.path.dirname(p[0]) - break - if inc_dir is not None: - dict_append(info, include_dirs=[inc_dir], - define_macros=[('SCIPY_AMD_H', None)], - swig_opts=['-I' + inc_dir]) - - self.set_info(**info) - return - - -class umfpack_info(system_info): - section = 'umfpack' - dir_env_var = 'UMFPACK' - notfounderror = UmfpackNotFoundError - _lib_names = ['umfpack'] - - def calc_info(self): - lib_dirs = self.get_lib_dirs() - - opt = self.get_option_single('umfpack_libs', 'libraries') - umfpack_libs = self.get_libs(opt, self._lib_names) - info = self.check_libs(lib_dirs, umfpack_libs, []) - if info is None: - return - - include_dirs = self.get_include_dirs() - - inc_dir = None - for d in include_dirs: - p = self.combine_paths(d, ['', 'umfpack'], 'umfpack.h') - if p: - inc_dir = os.path.dirname(p[0]) - break - if inc_dir is not None: - dict_append(info, include_dirs=[inc_dir], - define_macros=[('SCIPY_UMFPACK_H', None)], - swig_opts=['-I' + inc_dir]) - - dict_append(info, **get_info('amd')) - - self.set_info(**info) - return - - -def combine_paths(*args, **kws): - """ Return a list of existing paths composed by all combinations of - items from arguments. 
- """ - r = [] - for a in args: - if not a: - continue - if is_string(a): - a = [a] - r.append(a) - args = r - if not args: - return [] - if len(args) == 1: - result = reduce(lambda a, b: a + b, map(glob, args[0]), []) - elif len(args) == 2: - result = [] - for a0 in args[0]: - for a1 in args[1]: - result.extend(glob(os.path.join(a0, a1))) - else: - result = combine_paths(*(combine_paths(args[0], args[1]) + args[2:])) - log.debug('(paths: %s)', ','.join(result)) - return result - -language_map = {'c': 0, 'c++': 1, 'f77': 2, 'f90': 3} -inv_language_map = {0: 'c', 1: 'c++', 2: 'f77', 3: 'f90'} - - -def dict_append(d, **kws): - languages = [] - for k, v in kws.items(): - if k == 'language': - languages.append(v) - continue - if k in d: - if k in ['library_dirs', 'include_dirs', - 'extra_compile_args', 'extra_link_args', - 'runtime_library_dirs', 'define_macros']: - [d[k].append(vv) for vv in v if vv not in d[k]] - else: - d[k].extend(v) - else: - d[k] = v - if languages: - l = inv_language_map[max([language_map.get(l, 0) for l in languages])] - d['language'] = l - return - - -def parseCmdLine(argv=(None,)): - import optparse - parser = optparse.OptionParser("usage: %prog [-v] [info objs]") - parser.add_option('-v', '--verbose', action='store_true', dest='verbose', - default=False, - help='be verbose and print more messages') - - opts, args = parser.parse_args(args=argv[1:]) - return opts, args - - -def show_all(argv=None): - import inspect - if argv is None: - argv = sys.argv - opts, args = parseCmdLine(argv) - if opts.verbose: - log.set_threshold(log.DEBUG) - else: - log.set_threshold(log.INFO) - show_only = [] - for n in args: - if n[-5:] != '_info': - n = n + '_info' - show_only.append(n) - show_all = not show_only - _gdict_ = globals().copy() - for name, c in _gdict_.items(): - if not inspect.isclass(c): - continue - if not issubclass(c, system_info) or c is system_info: - continue - if not show_all: - if name not in show_only: - continue - del show_only[show_only.index(name)] - conf = c() - conf.verbosity = 2 - # we don't need the result, but we want - # the side effect of printing diagnostics - conf.get_info() - if show_only: - log.info('Info classes not defined: %s', ','.join(show_only)) - -if __name__ == "__main__": - show_all() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/ops/common.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/ops/common.py deleted file mode 100644 index 559977bacf881552d546e7704d4cf4b12b4a32fe..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/ops/common.py +++ /dev/null @@ -1,146 +0,0 @@ -""" -Boilerplate functions used in defining binary operations. -""" -from __future__ import annotations - -from functools import wraps -from typing import ( - TYPE_CHECKING, - Callable, -) - -from pandas._libs.lib import item_from_zerodim -from pandas._libs.missing import is_matching_na - -from pandas.core.dtypes.generic import ( - ABCIndex, - ABCSeries, -) - -if TYPE_CHECKING: - from pandas._typing import F - - -def unpack_zerodim_and_defer(name: str) -> Callable[[F], F]: - """ - Boilerplate for pandas conventions in arithmetic and comparison methods. 
- - Parameters - ---------- - name : str - - Returns - ------- - decorator - """ - - def wrapper(method: F) -> F: - return _unpack_zerodim_and_defer(method, name) - - return wrapper - - -def _unpack_zerodim_and_defer(method, name: str): - """ - Boilerplate for pandas conventions in arithmetic and comparison methods. - - Ensure method returns NotImplemented when operating against "senior" - classes. Ensure zero-dimensional ndarrays are always unpacked. - - Parameters - ---------- - method : binary method - name : str - - Returns - ------- - method - """ - stripped_name = name.removeprefix("__").removesuffix("__") - is_cmp = stripped_name in {"eq", "ne", "lt", "le", "gt", "ge"} - - @wraps(method) - def new_method(self, other): - if is_cmp and isinstance(self, ABCIndex) and isinstance(other, ABCSeries): - # For comparison ops, Index does *not* defer to Series - pass - else: - prio = getattr(other, "__pandas_priority__", None) - if prio is not None: - if prio > self.__pandas_priority__: - # e.g. other is DataFrame while self is Index/Series/EA - return NotImplemented - - other = item_from_zerodim(other) - - return method(self, other) - - return new_method - - -def get_op_result_name(left, right): - """ - Find the appropriate name to pin to an operation result. This result - should always be either an Index or a Series. - - Parameters - ---------- - left : {Series, Index} - right : object - - Returns - ------- - name : object - Usually a string - """ - if isinstance(right, (ABCSeries, ABCIndex)): - name = _maybe_match_name(left, right) - else: - name = left.name - return name - - -def _maybe_match_name(a, b): - """ - Try to find a name to attach to the result of an operation between - a and b. If only one of these has a `name` attribute, return that - name. Otherwise return a consensus name if they match or None if - they have different names. - - Parameters - ---------- - a : object - b : object - - Returns - ------- - name : str or None - - See Also - -------- - pandas.core.common.consensus_name_attr - """ - a_has = hasattr(a, "name") - b_has = hasattr(b, "name") - if a_has and b_has: - try: - if a.name == b.name: - return a.name - elif is_matching_na(a.name, b.name): - # e.g. both are np.nan - return a.name - else: - return None - except TypeError: - # pd.NA - if is_matching_na(a.name, b.name): - return a.name - return None - except ValueError: - # e.g. 
np.int64(1) vs (np.int64(1), np.int64(2)) - return None - elif a_has: - return a.name - elif b_has: - return b.name - return None diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/test_subclass.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/test_subclass.py deleted file mode 100644 index c3287e1ddcddcedc14857f2299798d3957830921..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/test_subclass.py +++ /dev/null @@ -1,40 +0,0 @@ -""" -Tests involving custom Index subclasses -""" -import numpy as np - -from pandas import ( - DataFrame, - Index, -) -import pandas._testing as tm - - -class CustomIndex(Index): - def __new__(cls, data, name=None): - # assert that this index class cannot hold strings - if any(isinstance(val, str) for val in data): - raise TypeError("CustomIndex cannot hold strings") - - if name is None and hasattr(data, "name"): - name = data.name - data = np.array(data, dtype="O") - - return cls._simple_new(data, name) - - -def test_insert_fallback_to_base_index(): - # https://github.com/pandas-dev/pandas/issues/47071 - - idx = CustomIndex([1, 2, 3]) - result = idx.insert(0, "string") - expected = Index(["string", 1, 2, 3], dtype=object) - tm.assert_index_equal(result, expected) - - df = DataFrame( - np.random.default_rng(2).standard_normal((2, 3)), - columns=idx, - index=Index([1, 2], name="string"), - ) - result = df.reset_index() - tm.assert_index_equal(result.columns, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/util/test_assert_frame_equal.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/util/test_assert_frame_equal.py deleted file mode 100644 index 2d3b47cd2e994785df804ab43cebdf4134c3848a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/util/test_assert_frame_equal.py +++ /dev/null @@ -1,381 +0,0 @@ -import pytest - -import pandas as pd -from pandas import DataFrame -import pandas._testing as tm - - -@pytest.fixture(params=[True, False]) -def by_blocks_fixture(request): - return request.param - - -@pytest.fixture(params=["DataFrame", "Series"]) -def obj_fixture(request): - return request.param - - -def _assert_frame_equal_both(a, b, **kwargs): - """ - Check that two DataFrame equal. - - This check is performed commutatively. - - Parameters - ---------- - a : DataFrame - The first DataFrame to compare. - b : DataFrame - The second DataFrame to compare. - kwargs : dict - The arguments passed to `tm.assert_frame_equal`. - """ - tm.assert_frame_equal(a, b, **kwargs) - tm.assert_frame_equal(b, a, **kwargs) - - -@pytest.mark.parametrize("check_like", [True, False]) -def test_frame_equal_row_order_mismatch(check_like, obj_fixture): - df1 = DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}, index=["a", "b", "c"]) - df2 = DataFrame({"A": [3, 2, 1], "B": [6, 5, 4]}, index=["c", "b", "a"]) - - if not check_like: # Do not ignore row-column orderings. 
- msg = f"{obj_fixture}.index are different" - with pytest.raises(AssertionError, match=msg): - tm.assert_frame_equal(df1, df2, check_like=check_like, obj=obj_fixture) - else: - _assert_frame_equal_both(df1, df2, check_like=check_like, obj=obj_fixture) - - -@pytest.mark.parametrize( - "df1,df2", - [ - (DataFrame({"A": [1, 2, 3]}), DataFrame({"A": [1, 2, 3, 4]})), - (DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}), DataFrame({"A": [1, 2, 3]})), - ], -) -def test_frame_equal_shape_mismatch(df1, df2, obj_fixture): - msg = f"{obj_fixture} are different" - - with pytest.raises(AssertionError, match=msg): - tm.assert_frame_equal(df1, df2, obj=obj_fixture) - - -@pytest.mark.parametrize( - "df1,df2,msg", - [ - # Index - ( - DataFrame.from_records({"a": [1, 2], "c": ["l1", "l2"]}, index=["a"]), - DataFrame.from_records({"a": [1.0, 2.0], "c": ["l1", "l2"]}, index=["a"]), - "DataFrame\\.index are different", - ), - # MultiIndex - ( - DataFrame.from_records( - {"a": [1, 2], "b": [2.1, 1.5], "c": ["l1", "l2"]}, index=["a", "b"] - ), - DataFrame.from_records( - {"a": [1.0, 2.0], "b": [2.1, 1.5], "c": ["l1", "l2"]}, index=["a", "b"] - ), - "MultiIndex level \\[0\\] are different", - ), - ], -) -def test_frame_equal_index_dtype_mismatch(df1, df2, msg, check_index_type): - kwargs = {"check_index_type": check_index_type} - - if check_index_type: - with pytest.raises(AssertionError, match=msg): - tm.assert_frame_equal(df1, df2, **kwargs) - else: - tm.assert_frame_equal(df1, df2, **kwargs) - - -def test_empty_dtypes(check_dtype): - columns = ["col1", "col2"] - df1 = DataFrame(columns=columns) - df2 = DataFrame(columns=columns) - - kwargs = {"check_dtype": check_dtype} - df1["col1"] = df1["col1"].astype("int64") - - if check_dtype: - msg = r"Attributes of DataFrame\..* are different" - with pytest.raises(AssertionError, match=msg): - tm.assert_frame_equal(df1, df2, **kwargs) - else: - tm.assert_frame_equal(df1, df2, **kwargs) - - -@pytest.mark.parametrize("check_like", [True, False]) -def test_frame_equal_index_mismatch(check_like, obj_fixture): - msg = f"""{obj_fixture}\\.index are different - -{obj_fixture}\\.index values are different \\(33\\.33333 %\\) -\\[left\\]: Index\\(\\['a', 'b', 'c'\\], dtype='object'\\) -\\[right\\]: Index\\(\\['a', 'b', 'd'\\], dtype='object'\\) -At positional index 2, first diff: c != d""" - - df1 = DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}, index=["a", "b", "c"]) - df2 = DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}, index=["a", "b", "d"]) - - with pytest.raises(AssertionError, match=msg): - tm.assert_frame_equal(df1, df2, check_like=check_like, obj=obj_fixture) - - -@pytest.mark.parametrize("check_like", [True, False]) -def test_frame_equal_columns_mismatch(check_like, obj_fixture): - msg = f"""{obj_fixture}\\.columns are different - -{obj_fixture}\\.columns values are different \\(50\\.0 %\\) -\\[left\\]: Index\\(\\['A', 'B'\\], dtype='object'\\) -\\[right\\]: Index\\(\\['A', 'b'\\], dtype='object'\\)""" - - df1 = DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}, index=["a", "b", "c"]) - df2 = DataFrame({"A": [1, 2, 3], "b": [4, 5, 6]}, index=["a", "b", "c"]) - - with pytest.raises(AssertionError, match=msg): - tm.assert_frame_equal(df1, df2, check_like=check_like, obj=obj_fixture) - - -def test_frame_equal_block_mismatch(by_blocks_fixture, obj_fixture): - obj = obj_fixture - msg = f"""{obj}\\.iloc\\[:, 1\\] \\(column name="B"\\) are different - -{obj}\\.iloc\\[:, 1\\] \\(column name="B"\\) values are different \\(33\\.33333 %\\) -\\[index\\]: \\[0, 1, 2\\] -\\[left\\]: \\[4, 5, 6\\] 
-\\[right\\]: \\[4, 5, 7\\]""" - - df1 = DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}) - df2 = DataFrame({"A": [1, 2, 3], "B": [4, 5, 7]}) - - with pytest.raises(AssertionError, match=msg): - tm.assert_frame_equal(df1, df2, by_blocks=by_blocks_fixture, obj=obj_fixture) - - -@pytest.mark.parametrize( - "df1,df2,msg", - [ - ( - DataFrame({"A": ["á", "à", "ä"], "E": ["é", "è", "ë"]}), - DataFrame({"A": ["á", "à", "ä"], "E": ["é", "è", "e̊"]}), - """{obj}\\.iloc\\[:, 1\\] \\(column name="E"\\) are different - -{obj}\\.iloc\\[:, 1\\] \\(column name="E"\\) values are different \\(33\\.33333 %\\) -\\[index\\]: \\[0, 1, 2\\] -\\[left\\]: \\[é, è, ë\\] -\\[right\\]: \\[é, è, e̊\\]""", - ), - ( - DataFrame({"A": ["á", "à", "ä"], "E": ["é", "è", "ë"]}), - DataFrame({"A": ["a", "a", "a"], "E": ["e", "e", "e"]}), - """{obj}\\.iloc\\[:, 0\\] \\(column name="A"\\) are different - -{obj}\\.iloc\\[:, 0\\] \\(column name="A"\\) values are different \\(100\\.0 %\\) -\\[index\\]: \\[0, 1, 2\\] -\\[left\\]: \\[á, à, ä\\] -\\[right\\]: \\[a, a, a\\]""", - ), - ], -) -def test_frame_equal_unicode(df1, df2, msg, by_blocks_fixture, obj_fixture): - # see gh-20503 - # - # Test ensures that `tm.assert_frame_equals` raises the right exception - # when comparing DataFrames containing differing unicode objects. - msg = msg.format(obj=obj_fixture) - with pytest.raises(AssertionError, match=msg): - tm.assert_frame_equal(df1, df2, by_blocks=by_blocks_fixture, obj=obj_fixture) - - -def test_assert_frame_equal_extension_dtype_mismatch(): - # https://github.com/pandas-dev/pandas/issues/32747 - left = DataFrame({"a": [1, 2, 3]}, dtype="Int64") - right = left.astype(int) - - msg = ( - "Attributes of DataFrame\\.iloc\\[:, 0\\] " - '\\(column name="a"\\) are different\n\n' - 'Attribute "dtype" are different\n' - "\\[left\\]: Int64\n" - "\\[right\\]: int[32|64]" - ) - - tm.assert_frame_equal(left, right, check_dtype=False) - - with pytest.raises(AssertionError, match=msg): - tm.assert_frame_equal(left, right, check_dtype=True) - - -def test_assert_frame_equal_interval_dtype_mismatch(): - # https://github.com/pandas-dev/pandas/issues/32747 - left = DataFrame({"a": [pd.Interval(0, 1)]}, dtype="interval") - right = left.astype(object) - - msg = ( - "Attributes of DataFrame\\.iloc\\[:, 0\\] " - '\\(column name="a"\\) are different\n\n' - 'Attribute "dtype" are different\n' - "\\[left\\]: interval\\[int64, right\\]\n" - "\\[right\\]: object" - ) - - tm.assert_frame_equal(left, right, check_dtype=False) - - with pytest.raises(AssertionError, match=msg): - tm.assert_frame_equal(left, right, check_dtype=True) - - -@pytest.mark.parametrize("right_dtype", ["Int32", "int64"]) -def test_assert_frame_equal_ignore_extension_dtype_mismatch(right_dtype): - # https://github.com/pandas-dev/pandas/issues/35715 - left = DataFrame({"a": [1, 2, 3]}, dtype="Int64") - right = DataFrame({"a": [1, 2, 3]}, dtype=right_dtype) - tm.assert_frame_equal(left, right, check_dtype=False) - - -@pytest.mark.parametrize( - "dtype", - [ - ("timedelta64[ns]"), - ("datetime64[ns, UTC]"), - ("Period[D]"), - ], -) -def test_assert_frame_equal_datetime_like_dtype_mismatch(dtype): - df1 = DataFrame({"a": []}, dtype=dtype) - df2 = DataFrame({"a": []}) - tm.assert_frame_equal(df1, df2, check_dtype=False) - - -def test_allows_duplicate_labels(): - left = DataFrame() - right = DataFrame().set_flags(allows_duplicate_labels=False) - tm.assert_frame_equal(left, left) - tm.assert_frame_equal(right, right) - tm.assert_frame_equal(left, right, check_flags=False) - 
tm.assert_frame_equal(right, left, check_flags=False) - - with pytest.raises(AssertionError, match=" None: - super().__init__(use_datetime) - index_parts = urllib.parse.urlparse(index_url) - self._scheme = index_parts.scheme - self._session = session - - def request( - self, - host: "_HostType", - handler: str, - request_body: bytes, - verbose: bool = False, - ) -> Tuple["_Marshallable", ...]: - assert isinstance(host, str) - parts = (self._scheme, host, handler, None, None, None) - url = urllib.parse.urlunparse(parts) - try: - headers = {"Content-Type": "text/xml"} - response = self._session.post( - url, - data=request_body, - headers=headers, - stream=True, - ) - raise_for_status(response) - self.verbose = verbose - return self.parse_response(response.raw) - except NetworkConnectionError as exc: - assert exc.response - logger.critical( - "HTTP error %s while getting %s", - exc.response.status_code, - url, - ) - raise diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/msgpack/ext.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/msgpack/ext.py deleted file mode 100644 index 4eb9dd65adc9aff07547f5ef7541bdf2be91124a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/msgpack/ext.py +++ /dev/null @@ -1,193 +0,0 @@ -# coding: utf-8 -from collections import namedtuple -import datetime -import sys -import struct - - -PY2 = sys.version_info[0] == 2 - -if PY2: - int_types = (int, long) - _utc = None -else: - int_types = int - try: - _utc = datetime.timezone.utc - except AttributeError: - _utc = datetime.timezone(datetime.timedelta(0)) - - -class ExtType(namedtuple("ExtType", "code data")): - """ExtType represents ext type in msgpack.""" - - def __new__(cls, code, data): - if not isinstance(code, int): - raise TypeError("code must be int") - if not isinstance(data, bytes): - raise TypeError("data must be bytes") - if not 0 <= code <= 127: - raise ValueError("code must be 0~127") - return super(ExtType, cls).__new__(cls, code, data) - - -class Timestamp(object): - """Timestamp represents the Timestamp extension type in msgpack. - - When built with Cython, msgpack uses C methods to pack and unpack `Timestamp`. When using pure-Python - msgpack, :func:`to_bytes` and :func:`from_bytes` are used to pack and unpack `Timestamp`. - - This class is immutable: Do not override seconds and nanoseconds. - """ - - __slots__ = ["seconds", "nanoseconds"] - - def __init__(self, seconds, nanoseconds=0): - """Initialize a Timestamp object. - - :param int seconds: - Number of seconds since the UNIX epoch (00:00:00 UTC Jan 1 1970, minus leap seconds). - May be negative. - - :param int nanoseconds: - Number of nanoseconds to add to `seconds` to get fractional time. - Maximum is 999_999_999. Default is 0. - - Note: Negative times (before the UNIX epoch) are represented as negative seconds + positive ns. - """ - if not isinstance(seconds, int_types): - raise TypeError("seconds must be an interger") - if not isinstance(nanoseconds, int_types): - raise TypeError("nanoseconds must be an integer") - if not (0 <= nanoseconds < 10 ** 9): - raise ValueError( - "nanoseconds must be a non-negative integer less than 999999999." 
- ) - self.seconds = seconds - self.nanoseconds = nanoseconds - - def __repr__(self): - """String representation of Timestamp.""" - return "Timestamp(seconds={0}, nanoseconds={1})".format( - self.seconds, self.nanoseconds - ) - - def __eq__(self, other): - """Check for equality with another Timestamp object""" - if type(other) is self.__class__: - return ( - self.seconds == other.seconds and self.nanoseconds == other.nanoseconds - ) - return False - - def __ne__(self, other): - """not-equals method (see :func:`__eq__()`)""" - return not self.__eq__(other) - - def __hash__(self): - return hash((self.seconds, self.nanoseconds)) - - @staticmethod - def from_bytes(b): - """Unpack bytes into a `Timestamp` object. - - Used for pure-Python msgpack unpacking. - - :param b: Payload from msgpack ext message with code -1 - :type b: bytes - - :returns: Timestamp object unpacked from msgpack ext payload - :rtype: Timestamp - """ - if len(b) == 4: - seconds = struct.unpack("!L", b)[0] - nanoseconds = 0 - elif len(b) == 8: - data64 = struct.unpack("!Q", b)[0] - seconds = data64 & 0x00000003FFFFFFFF - nanoseconds = data64 >> 34 - elif len(b) == 12: - nanoseconds, seconds = struct.unpack("!Iq", b) - else: - raise ValueError( - "Timestamp type can only be created from 32, 64, or 96-bit byte objects" - ) - return Timestamp(seconds, nanoseconds) - - def to_bytes(self): - """Pack this Timestamp object into bytes. - - Used for pure-Python msgpack packing. - - :returns data: Payload for EXT message with code -1 (timestamp type) - :rtype: bytes - """ - if (self.seconds >> 34) == 0: # seconds is non-negative and fits in 34 bits - data64 = self.nanoseconds << 34 | self.seconds - if data64 & 0xFFFFFFFF00000000 == 0: - # nanoseconds is zero and seconds < 2**32, so timestamp 32 - data = struct.pack("!L", data64) - else: - # timestamp 64 - data = struct.pack("!Q", data64) - else: - # timestamp 96 - data = struct.pack("!Iq", self.nanoseconds, self.seconds) - return data - - @staticmethod - def from_unix(unix_sec): - """Create a Timestamp from posix timestamp in seconds. - - :param unix_float: Posix timestamp in seconds. - :type unix_float: int or float. - """ - seconds = int(unix_sec // 1) - nanoseconds = int((unix_sec % 1) * 10 ** 9) - return Timestamp(seconds, nanoseconds) - - def to_unix(self): - """Get the timestamp as a floating-point value. - - :returns: posix timestamp - :rtype: float - """ - return self.seconds + self.nanoseconds / 1e9 - - @staticmethod - def from_unix_nano(unix_ns): - """Create a Timestamp from posix timestamp in nanoseconds. - - :param int unix_ns: Posix timestamp in nanoseconds. - :rtype: Timestamp - """ - return Timestamp(*divmod(unix_ns, 10 ** 9)) - - def to_unix_nano(self): - """Get the timestamp as a unixtime in nanoseconds. - - :returns: posix timestamp in nanoseconds - :rtype: int - """ - return self.seconds * 10 ** 9 + self.nanoseconds - - def to_datetime(self): - """Get the timestamp as a UTC datetime. - - Python 2 is not supported. - - :rtype: datetime. - """ - return datetime.datetime.fromtimestamp(0, _utc) + datetime.timedelta( - seconds=self.to_unix() - ) - - @staticmethod - def from_datetime(dt): - """Create a Timestamp from datetime with tzinfo. - - Python 2 is not supported. 
- - :rtype: Timestamp - """ - return Timestamp.from_unix(dt.timestamp()) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/rust.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/rust.py deleted file mode 100644 index db68bb3461480fa1e5a1f2cc026b1f0e46c18ceb..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/rust.py +++ /dev/null @@ -1,223 +0,0 @@ -""" - pygments.lexers.rust - ~~~~~~~~~~~~~~~~~~~~ - - Lexers for the Rust language. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pygments.lexer import RegexLexer, include, bygroups, words, default -from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ - Number, Punctuation, Whitespace - -__all__ = ['RustLexer'] - - -class RustLexer(RegexLexer): - """ - Lexer for the Rust programming language (version 1.47). - - .. versionadded:: 1.6 - """ - name = 'Rust' - url = 'https://www.rust-lang.org/' - filenames = ['*.rs', '*.rs.in'] - aliases = ['rust', 'rs'] - mimetypes = ['text/rust', 'text/x-rust'] - - keyword_types = (words(( - 'u8', 'u16', 'u32', 'u64', 'u128', 'i8', 'i16', 'i32', 'i64', 'i128', - 'usize', 'isize', 'f32', 'f64', 'char', 'str', 'bool', - ), suffix=r'\b'), Keyword.Type) - - builtin_funcs_types = (words(( - 'Copy', 'Send', 'Sized', 'Sync', 'Unpin', - 'Drop', 'Fn', 'FnMut', 'FnOnce', 'drop', - 'Box', 'ToOwned', 'Clone', - 'PartialEq', 'PartialOrd', 'Eq', 'Ord', - 'AsRef', 'AsMut', 'Into', 'From', 'Default', - 'Iterator', 'Extend', 'IntoIterator', 'DoubleEndedIterator', - 'ExactSizeIterator', - 'Option', 'Some', 'None', - 'Result', 'Ok', 'Err', - 'String', 'ToString', 'Vec', - ), suffix=r'\b'), Name.Builtin) - - builtin_macros = (words(( - 'asm', 'assert', 'assert_eq', 'assert_ne', 'cfg', 'column', - 'compile_error', 'concat', 'concat_idents', 'dbg', 'debug_assert', - 'debug_assert_eq', 'debug_assert_ne', 'env', 'eprint', 'eprintln', - 'file', 'format', 'format_args', 'format_args_nl', 'global_asm', - 'include', 'include_bytes', 'include_str', - 'is_aarch64_feature_detected', - 'is_arm_feature_detected', - 'is_mips64_feature_detected', - 'is_mips_feature_detected', - 'is_powerpc64_feature_detected', - 'is_powerpc_feature_detected', - 'is_x86_feature_detected', - 'line', 'llvm_asm', 'log_syntax', 'macro_rules', 'matches', - 'module_path', 'option_env', 'panic', 'print', 'println', 'stringify', - 'thread_local', 'todo', 'trace_macros', 'unimplemented', 'unreachable', - 'vec', 'write', 'writeln', - ), suffix=r'!'), Name.Function.Magic) - - tokens = { - 'root': [ - # rust allows a file to start with a shebang, but if the first line - # starts with #![ then it's not a shebang but a crate attribute. 
- (r'#![^[\r\n].*$', Comment.Preproc), - default('base'), - ], - 'base': [ - # Whitespace and Comments - (r'\n', Whitespace), - (r'\s+', Whitespace), - (r'//!.*?\n', String.Doc), - (r'///(\n|[^/].*?\n)', String.Doc), - (r'//(.*?)\n', Comment.Single), - (r'/\*\*(\n|[^/*])', String.Doc, 'doccomment'), - (r'/\*!', String.Doc, 'doccomment'), - (r'/\*', Comment.Multiline, 'comment'), - - # Macro parameters - (r"""\$([a-zA-Z_]\w*|\(,?|\),?|,?)""", Comment.Preproc), - # Keywords - (words(('as', 'async', 'await', 'box', 'const', 'crate', 'dyn', - 'else', 'extern', 'for', 'if', 'impl', 'in', 'loop', - 'match', 'move', 'mut', 'pub', 'ref', 'return', 'static', - 'super', 'trait', 'unsafe', 'use', 'where', 'while'), - suffix=r'\b'), Keyword), - (words(('abstract', 'become', 'do', 'final', 'macro', 'override', - 'priv', 'typeof', 'try', 'unsized', 'virtual', 'yield'), - suffix=r'\b'), Keyword.Reserved), - (r'(true|false)\b', Keyword.Constant), - (r'self\b', Name.Builtin.Pseudo), - (r'mod\b', Keyword, 'modname'), - (r'let\b', Keyword.Declaration), - (r'fn\b', Keyword, 'funcname'), - (r'(struct|enum|type|union)\b', Keyword, 'typename'), - (r'(default)(\s+)(type|fn)\b', bygroups(Keyword, Text, Keyword)), - keyword_types, - (r'[sS]elf\b', Name.Builtin.Pseudo), - # Prelude (taken from Rust's src/libstd/prelude.rs) - builtin_funcs_types, - builtin_macros, - # Path separators, so types don't catch them. - (r'::\b', Text), - # Types in positions. - (r'(?::|->)', Text, 'typename'), - # Labels - (r'(break|continue)(\b\s*)(\'[A-Za-z_]\w*)?', - bygroups(Keyword, Text.Whitespace, Name.Label)), - - # Character literals - (r"""'(\\['"\\nrt]|\\x[0-7][0-9a-fA-F]|\\0""" - r"""|\\u\{[0-9a-fA-F]{1,6}\}|.)'""", - String.Char), - (r"""b'(\\['"\\nrt]|\\x[0-9a-fA-F]{2}|\\0""" - r"""|\\u\{[0-9a-fA-F]{1,6}\}|.)'""", - String.Char), - - # Binary literals - (r'0b[01_]+', Number.Bin, 'number_lit'), - # Octal literals - (r'0o[0-7_]+', Number.Oct, 'number_lit'), - # Hexadecimal literals - (r'0[xX][0-9a-fA-F_]+', Number.Hex, 'number_lit'), - # Decimal literals - (r'[0-9][0-9_]*(\.[0-9_]+[eE][+\-]?[0-9_]+|' - r'\.[0-9_]*(?!\.)|[eE][+\-]?[0-9_]+)', Number.Float, - 'number_lit'), - (r'[0-9][0-9_]*', Number.Integer, 'number_lit'), - - # String literals - (r'b"', String, 'bytestring'), - (r'"', String, 'string'), - (r'(?s)b?r(#*)".*?"\1', String), - - # Lifetime names - (r"'", Operator, 'lifetime'), - - # Operators and Punctuation - (r'\.\.=?', Operator), - (r'[{}()\[\],.;]', Punctuation), - (r'[+\-*/%&|<>^!~@=:?]', Operator), - - # Identifiers - (r'[a-zA-Z_]\w*', Name), - # Raw identifiers - (r'r#[a-zA-Z_]\w*', Name), - - # Attributes - (r'#!?\[', Comment.Preproc, 'attribute['), - - # Misc - # Lone hashes: not used in Rust syntax, but allowed in macro - # arguments, most famously for quote::quote!() - (r'#', Text), - ], - 'comment': [ - (r'[^*/]+', Comment.Multiline), - (r'/\*', Comment.Multiline, '#push'), - (r'\*/', Comment.Multiline, '#pop'), - (r'[*/]', Comment.Multiline), - ], - 'doccomment': [ - (r'[^*/]+', String.Doc), - (r'/\*', String.Doc, '#push'), - (r'\*/', String.Doc, '#pop'), - (r'[*/]', String.Doc), - ], - 'modname': [ - (r'\s+', Text), - (r'[a-zA-Z_]\w*', Name.Namespace, '#pop'), - default('#pop'), - ], - 'funcname': [ - (r'\s+', Text), - (r'[a-zA-Z_]\w*', Name.Function, '#pop'), - default('#pop'), - ], - 'typename': [ - (r'\s+', Text), - (r'&', Keyword.Pseudo), - (r"'", Operator, 'lifetime'), - builtin_funcs_types, - keyword_types, - (r'[a-zA-Z_]\w*', Name.Class, '#pop'), - default('#pop'), - ], - 'lifetime': [ - 
(r"(static|_)", Name.Builtin), - (r"[a-zA-Z_]+\w*", Name.Attribute), - default('#pop'), - ], - 'number_lit': [ - (r'[ui](8|16|32|64|size)', Keyword, '#pop'), - (r'f(32|64)', Keyword, '#pop'), - default('#pop'), - ], - 'string': [ - (r'"', String, '#pop'), - (r"""\\['"\\nrt]|\\x[0-7][0-9a-fA-F]|\\0""" - r"""|\\u\{[0-9a-fA-F]{1,6}\}""", String.Escape), - (r'[^\\"]+', String), - (r'\\', String), - ], - 'bytestring': [ - (r"""\\x[89a-fA-F][0-9a-fA-F]""", String.Escape), - include('string'), - ], - 'attribute_common': [ - (r'"', String, 'string'), - (r'\[', Comment.Preproc, 'attribute['), - ], - 'attribute[': [ - include('attribute_common'), - (r'\]', Comment.Preproc, '#pop'), - (r'[^"\]\[]+', Comment.Preproc), - ], - } diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/requests/exceptions.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/requests/exceptions.py deleted file mode 100644 index e1cedf883d3eadcbfda91967d36b7c59b8367e76..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/requests/exceptions.py +++ /dev/null @@ -1,141 +0,0 @@ -""" -requests.exceptions -~~~~~~~~~~~~~~~~~~~ - -This module contains the set of Requests' exceptions. -""" -from urllib3.exceptions import HTTPError as BaseHTTPError - -from .compat import JSONDecodeError as CompatJSONDecodeError - - -class RequestException(IOError): - """There was an ambiguous exception that occurred while handling your - request. - """ - - def __init__(self, *args, **kwargs): - """Initialize RequestException with `request` and `response` objects.""" - response = kwargs.pop("response", None) - self.response = response - self.request = kwargs.pop("request", None) - if response is not None and not self.request and hasattr(response, "request"): - self.request = self.response.request - super().__init__(*args, **kwargs) - - -class InvalidJSONError(RequestException): - """A JSON error occurred.""" - - -class JSONDecodeError(InvalidJSONError, CompatJSONDecodeError): - """Couldn't decode the text into json""" - - def __init__(self, *args, **kwargs): - """ - Construct the JSONDecodeError instance first with all - args. Then use it's args to construct the IOError so that - the json specific args aren't used as IOError specific args - and the error message from JSONDecodeError is preserved. - """ - CompatJSONDecodeError.__init__(self, *args) - InvalidJSONError.__init__(self, *self.args, **kwargs) - - -class HTTPError(RequestException): - """An HTTP error occurred.""" - - -class ConnectionError(RequestException): - """A Connection error occurred.""" - - -class ProxyError(ConnectionError): - """A proxy error occurred.""" - - -class SSLError(ConnectionError): - """An SSL error occurred.""" - - -class Timeout(RequestException): - """The request timed out. - - Catching this error will catch both - :exc:`~requests.exceptions.ConnectTimeout` and - :exc:`~requests.exceptions.ReadTimeout` errors. - """ - - -class ConnectTimeout(ConnectionError, Timeout): - """The request timed out while trying to connect to the remote server. - - Requests that produced this error are safe to retry. - """ - - -class ReadTimeout(Timeout): - """The server did not send any data in the allotted amount of time.""" - - -class URLRequired(RequestException): - """A valid URL is required to make a request.""" - - -class TooManyRedirects(RequestException): - """Too many redirects.""" - - -class MissingSchema(RequestException, ValueError): - """The URL scheme (e.g. 
http or https) is missing.""" - - -class InvalidSchema(RequestException, ValueError): - """The URL scheme provided is either invalid or unsupported.""" - - -class InvalidURL(RequestException, ValueError): - """The URL provided was somehow invalid.""" - - -class InvalidHeader(RequestException, ValueError): - """The header value provided was somehow invalid.""" - - -class InvalidProxyURL(InvalidURL): - """The proxy URL provided is invalid.""" - - -class ChunkedEncodingError(RequestException): - """The server declared chunked encoding but sent an invalid chunk.""" - - -class ContentDecodingError(RequestException, BaseHTTPError): - """Failed to decode response content.""" - - -class StreamConsumedError(RequestException, TypeError): - """The content for this response was already consumed.""" - - -class RetryError(RequestException): - """Custom retries logic failed""" - - -class UnrewindableBodyError(RequestException): - """Requests encountered an error when trying to rewind a body.""" - - -# Warnings - - -class RequestsWarning(Warning): - """Base warning for Requests.""" - - -class FileModeWarning(RequestsWarning, DeprecationWarning): - """A file was opened in text mode, but Requests determined its binary length.""" - - -class RequestsDependencyWarning(RequestsWarning): - """An imported dependency doesn't match the expected version range.""" diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/themes.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/themes.py deleted file mode 100644 index bf6db104a2c4fd4f3dc699e85f2b262c3d31e9a0..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/themes.py +++ /dev/null @@ -1,5 +0,0 @@ -from .default_styles import DEFAULT_STYLES -from .theme import Theme - - -DEFAULT = Theme(DEFAULT_STYLES) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/uvicorn/protocols/websockets/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/uvicorn/protocols/websockets/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/pscpeng/ChuanhuChatGPT/modules/presets.py b/spaces/pscpeng/ChuanhuChatGPT/modules/presets.py deleted file mode 100644 index 918c2380bd8a63b00e565fcd4149bd7419c71539..0000000000000000000000000000000000000000 --- a/spaces/pscpeng/ChuanhuChatGPT/modules/presets.py +++ /dev/null @@ -1,196 +0,0 @@ -# -*- coding:utf-8 -*- -import gradio as gr - -# ChatGPT 设置 -initial_prompt = "You are a helpful assistant." -API_URL = "https://api.openai.com/v1/chat/completions" -BALANCE_API_URL="https://api.openai.com/dashboard/billing/credit_grants" -USAGE_API_URL="https://api.openai.com/dashboard/billing/usage" -HISTORY_DIR = "history" -TEMPLATES_DIR = "templates" - -# 错误信息 -standard_error_msg = "☹️发生了错误:" # 错误信息的标准前缀 -error_retrieve_prompt = "请检查网络连接,或者API-Key是否有效。" # 获取对话时发生错误 -connection_timeout_prompt = "连接超时,无法获取对话。" # 连接超时 -read_timeout_prompt = "读取超时,无法获取对话。" # 读取超时 -proxy_error_prompt = "代理错误,无法获取对话。" # 代理错误 -ssl_error_prompt = "SSL错误,无法获取对话。" # SSL 错误 -no_apikey_msg = "API key长度不是51位,请检查是否输入正确。" # API key 长度不足 51 位 -no_input_msg = "请输入对话内容。" # 未输入对话内容 - -timeout_streaming = 10 # 流式对话时的超时时间 -timeout_all = 200 # 非流式对话时的超时时间 -enable_streaming_option = True # 是否启用选择选择是否实时显示回答的勾选框 -HIDE_MY_KEY = False # 如果你想在UI中隐藏你的 API 密钥,将此值设置为 True -CONCURRENT_COUNT = 100 # 允许同时使用的用户数量 - -SIM_K = 5 -INDEX_QUERY_TEMPRATURE = 1.0 - -title = """

            川虎ChatGPT 🚀

            """ -description = """\ -
            - -由Bilibili [土川虎虎虎](https://space.bilibili.com/29125536) 和 [明昭MZhao](https://space.bilibili.com/24807452)开发 - -访问川虎ChatGPT的 [GitHub项目](https://github.com/GaiZhenbiao/ChuanhuChatGPT) 下载最新版脚本 - -此App使用 `gpt-3.5-turbo` 大语言模型 -
            -""" - -footer = """\ -
            {versions}
            -""" - -summarize_prompt = "你是谁?我们刚才聊了什么?" # 总结对话时的 prompt - -MODELS = [ - "gpt-3.5-turbo", - "gpt-3.5-turbo-0301", - "gpt-4", - "gpt-4-0314", - "gpt-4-32k", - "gpt-4-32k-0314", -] # 可选的模型 - -MODEL_SOFT_TOKEN_LIMIT = { - "gpt-3.5-turbo": { - "streaming": 3500, - "all": 3500 - }, - "gpt-3.5-turbo-0301": { - "streaming": 3500, - "all": 3500 - }, - "gpt-4": { - "streaming": 7500, - "all": 7500 - }, - "gpt-4-0314": { - "streaming": 7500, - "all": 7500 - }, - "gpt-4-32k": { - "streaming": 31000, - "all": 31000 - }, - "gpt-4-32k-0314": { - "streaming": 31000, - "all": 31000 - } -} - -REPLY_LANGUAGES = [ - "简体中文", - "繁體中文", - "English", - "日本語", - "Español", - "Français", - "Deutsch", - "跟随问题语言(不稳定)" -] - - -WEBSEARCH_PTOMPT_TEMPLATE = """\ -Web search results: - -{web_results} -Current date: {current_date} - -Instructions: Using the provided web search results, write a comprehensive reply to the given query. Make sure to cite results using [[number](URL)] notation after the reference. If the provided search results refer to multiple subjects with the same name, write separate answers for each subject. -Query: {query} -Reply in {reply_language} -""" - -PROMPT_TEMPLATE = """\ -Context information is below. ---------------------- -{context_str} ---------------------- -Current date: {current_date}. -Using the provided context information, write a comprehensive reply to the given query. -Make sure to cite results using [number] notation after the reference. -If the provided context information refer to multiple subjects with the same name, write separate answers for each subject. -Use prior knowledge only if the given context didn't provide enough information. -Answer the question: {query_str} -Reply in {reply_language} -""" - -REFINE_TEMPLATE = """\ -The original question is as follows: {query_str} -We have provided an existing answer: {existing_answer} -We have the opportunity to refine the existing answer -(only if needed) with some more context below. ------------- -{context_msg} ------------- -Given the new context, refine the original answer to better -Reply in {reply_language} -If the context isn't useful, return the original answer. 
-""" - -ALREADY_CONVERTED_MARK = "" - -small_and_beautiful_theme = gr.themes.Soft( - primary_hue=gr.themes.Color( - c50="#02C160", - c100="rgba(2, 193, 96, 0.2)", - c200="#02C160", - c300="rgba(2, 193, 96, 0.32)", - c400="rgba(2, 193, 96, 0.32)", - c500="rgba(2, 193, 96, 1.0)", - c600="rgba(2, 193, 96, 1.0)", - c700="rgba(2, 193, 96, 0.32)", - c800="rgba(2, 193, 96, 0.32)", - c900="#02C160", - c950="#02C160", - ), - secondary_hue=gr.themes.Color( - c50="#576b95", - c100="#576b95", - c200="#576b95", - c300="#576b95", - c400="#576b95", - c500="#576b95", - c600="#576b95", - c700="#576b95", - c800="#576b95", - c900="#576b95", - c950="#576b95", - ), - neutral_hue=gr.themes.Color( - name="gray", - c50="#f9fafb", - c100="#f3f4f6", - c200="#e5e7eb", - c300="#d1d5db", - c400="#B2B2B2", - c500="#808080", - c600="#636363", - c700="#515151", - c800="#393939", - c900="#272727", - c950="#171717", - ), - radius_size=gr.themes.sizes.radius_sm, - ).set( - button_primary_background_fill="#06AE56", - button_primary_background_fill_dark="#06AE56", - button_primary_background_fill_hover="#07C863", - button_primary_border_color="#06AE56", - button_primary_border_color_dark="#06AE56", - button_primary_text_color="#FFFFFF", - button_primary_text_color_dark="#FFFFFF", - button_secondary_background_fill="#F2F2F2", - button_secondary_background_fill_dark="#2B2B2B", - button_secondary_text_color="#393939", - button_secondary_text_color_dark="#FFFFFF", - # background_fill_primary="#F7F7F7", - # background_fill_primary_dark="#1F1F1F", - block_title_text_color="*primary_500", - block_title_background_fill="*primary_100", - input_background_fill="#F6F6F6", - ) diff --git a/spaces/pseudolab/SonGPT/core/graph/__init__.py b/spaces/pseudolab/SonGPT/core/graph/__init__.py deleted file mode 100644 index 6dc4b7b5b5456fef247a94fd58f48bb5b70c437d..0000000000000000000000000000000000000000 --- a/spaces/pseudolab/SonGPT/core/graph/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .graph import Graph diff --git a/spaces/pyodide-demo/self-hosted/cytoolz-tests.js b/spaces/pyodide-demo/self-hosted/cytoolz-tests.js deleted file mode 100644 index 1e6c99bc801867c70b7b5fe0e10afd4a40e4b2aa..0000000000000000000000000000000000000000 --- a/spaces/pyodide-demo/self-hosted/cytoolz-tests.js +++ /dev/null @@ -1 +0,0 @@ -var Module=typeof globalThis.__pyodide_module!=="undefined"?globalThis.__pyodide_module:{};if(!Module.expectedDataFileDownloads){Module.expectedDataFileDownloads=0}Module.expectedDataFileDownloads++;(function(){var loadPackage=function(metadata){var PACKAGE_PATH="";if(typeof window==="object"){PACKAGE_PATH=window["encodeURIComponent"](window.location.pathname.toString().substring(0,window.location.pathname.toString().lastIndexOf("/"))+"/")}else if(typeof process==="undefined"&&typeof location!=="undefined"){PACKAGE_PATH=encodeURIComponent(location.pathname.toString().substring(0,location.pathname.toString().lastIndexOf("/"))+"/")}var PACKAGE_NAME="cytoolz-tests.data";var REMOTE_PACKAGE_BASE="cytoolz-tests.data";if(typeof Module["locateFilePackage"]==="function"&&!Module["locateFile"]){Module["locateFile"]=Module["locateFilePackage"];err("warning: you defined Module.locateFilePackage, that has been renamed to Module.locateFile (using your locateFilePackage for now)")}var REMOTE_PACKAGE_NAME=Module["locateFile"]?Module["locateFile"](REMOTE_PACKAGE_BASE,""):REMOTE_PACKAGE_BASE;var REMOTE_PACKAGE_SIZE=metadata["remote_package_size"];var PACKAGE_UUID=metadata["package_uuid"];function 
fetchRemotePackage(packageName,packageSize,callback,errback){if(typeof process==="object"){require("fs").readFile(packageName,(function(err,contents){if(err){errback(err)}else{callback(contents.buffer)}}));return}var xhr=new XMLHttpRequest;xhr.open("GET",packageName,true);xhr.responseType="arraybuffer";xhr.onprogress=function(event){var url=packageName;var size=packageSize;if(event.total)size=event.total;if(event.loaded){if(!xhr.addedTotal){xhr.addedTotal=true;if(!Module.dataFileDownloads)Module.dataFileDownloads={};Module.dataFileDownloads[url]={loaded:event.loaded,total:size}}else{Module.dataFileDownloads[url].loaded=event.loaded}var total=0;var loaded=0;var num=0;for(var download in Module.dataFileDownloads){var data=Module.dataFileDownloads[download];total+=data.total;loaded+=data.loaded;num++}total=Math.ceil(total*Module.expectedDataFileDownloads/num);if(Module["setStatus"])Module["setStatus"]("Downloading data... ("+loaded+"/"+total+")")}else if(!Module.dataFileDownloads){if(Module["setStatus"])Module["setStatus"]("Downloading data...")}};xhr.onerror=function(event){throw new Error("NetworkError for: "+packageName)};xhr.onload=function(event){if(xhr.status==200||xhr.status==304||xhr.status==206||xhr.status==0&&xhr.response){var packageData=xhr.response;callback(packageData)}else{throw new Error(xhr.statusText+" : "+xhr.responseURL)}};xhr.send(null)}function handleError(error){console.error("package error:",error)}var fetchedCallback=null;var fetched=Module["getPreloadedPackage"]?Module["getPreloadedPackage"](REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE):null;if(!fetched)fetchRemotePackage(REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE,(function(data){if(fetchedCallback){fetchedCallback(data);fetchedCallback=null}else{fetched=data}}),handleError);function runWithFS(){function assert(check,msg){if(!check)throw msg+(new Error).stack}Module["FS_createPath"]("/","lib",true,true);Module["FS_createPath"]("/lib","python3.9",true,true);Module["FS_createPath"]("/lib/python3.9","site-packages",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","cytoolz",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/cytoolz","tests",true,true);function processPackageData(arrayBuffer){assert(arrayBuffer,"Loading data file failed.");assert(arrayBuffer instanceof ArrayBuffer,"bad input to processPackageData");var byteArray=new Uint8Array(arrayBuffer);var curr;var compressedData={data:null,cachedOffset:46303,cachedIndexes:[-1,-1],cachedChunks:[null,null],offsets:[0,1248,2366,3380,4460,5161,6116,7121,8407,9556,10725,11879,12908,13848,14972,15795,16930,17773,18862,19908,20982,22045,23004,23500,23996,24804,25541,26216,27335,28369,29414,30267,31129,32213,33188,34310,35074,35908,37044,38246,38978,39781,40441,41026,41857,42860,43622,44347,45031,45910],sizes:[1248,1118,1014,1080,701,955,1005,1286,1149,1169,1154,1029,940,1124,823,1135,843,1089,1046,1074,1063,959,496,496,808,737,675,1119,1034,1045,853,862,1084,975,1122,764,834,1136,1202,732,803,660,585,831,1003,762,725,684,879,393],successes:[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]};compressedData["data"]=byteArray;assert(typeof Module.LZ4==="object","LZ4 not present - was your app build with -s LZ4=1 
?");Module.LZ4.loadPackage({metadata:metadata,compressedData:compressedData},true);Module["removeRunDependency"]("datafile_cytoolz-tests.data")}Module["addRunDependency"]("datafile_cytoolz-tests.data");if(!Module.preloadResults)Module.preloadResults={};Module.preloadResults[PACKAGE_NAME]={fromCache:false};if(fetched){processPackageData(fetched);fetched=null}else{fetchedCallback=processPackageData}}if(Module["calledRun"]){runWithFS()}else{if(!Module["preRun"])Module["preRun"]=[];Module["preRun"].push(runWithFS)}};loadPackage({files:[{filename:"/lib/python3.9/site-packages/cytoolz/tests/dev_skip_test.py",start:0,end:937,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/tests/test_compatibility.py",start:937,end:1202,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/tests/test_curried.py",start:1202,end:4905,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/tests/test_curried_toolzlike.py",start:4905,end:6304,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/tests/test_dev_skip_test.py",start:6304,end:6684,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/tests/test_dicttoolz.py",start:6684,end:15764,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/tests/test_docstrings.py",start:15764,end:18798,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/tests/test_doctests.py",start:18798,end:19263,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/tests/test_embedded_sigs.py",start:19263,end:23058,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/tests/test_functoolz.py",start:23058,end:43275,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/tests/test_inspect_args.py",start:43275,end:59269,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/tests/test_itertoolz.py",start:59269,end:77458,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/tests/test_none_safe.py",start:77458,end:89680,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/tests/test_recipes.py",start:89680,end:90502,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/tests/test_serialization.py",start:90502,end:96327,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/tests/test_signatures.py",start:96327,end:99204,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/tests/test_tlz.py",start:99204,end:100690,audio:0},{filename:"/lib/python3.9/site-packages/cytoolz/tests/test_utils.py",start:100690,end:101075,audio:0}],remote_package_size:50399,package_uuid:"8669d084-7b07-4c6a-8bc5-371cef4ef452"})})(); \ No newline at end of file diff --git a/spaces/qingxu98/academic-chatgpt-beta/crazy_functions/test_project/cpp/cppipc/buffer.cpp b/spaces/qingxu98/academic-chatgpt-beta/crazy_functions/test_project/cpp/cppipc/buffer.cpp deleted file mode 100644 index 0ac0fa7bc3ced0447ba4caa359355dd4252670b3..0000000000000000000000000000000000000000 --- a/spaces/qingxu98/academic-chatgpt-beta/crazy_functions/test_project/cpp/cppipc/buffer.cpp +++ /dev/null @@ -1,87 +0,0 @@ -#include "libipc/buffer.h" -#include "libipc/utility/pimpl.h" - -#include - -namespace ipc { - -bool operator==(buffer const & b1, buffer const & b2) { - return (b1.size() == b2.size()) && (std::memcmp(b1.data(), b2.data(), b1.size()) == 0); -} - -bool operator!=(buffer const & b1, buffer const & b2) { - return !(b1 == b2); -} - -class buffer::buffer_ : public pimpl { -public: - void* p_; - std::size_t s_; - void* a_; - buffer::destructor_t d_; - - buffer_(void* p, std::size_t s, buffer::destructor_t d, void* a) - : p_(p), s_(s), a_(a), d_(d) { - } - - ~buffer_() { - if (d_ == nullptr) 
return; - d_((a_ == nullptr) ? p_ : a_, s_); - } -}; - -buffer::buffer() - : buffer(nullptr, 0, nullptr, nullptr) { -} - -buffer::buffer(void* p, std::size_t s, destructor_t d) - : p_(p_->make(p, s, d, nullptr)) { -} - -buffer::buffer(void* p, std::size_t s, destructor_t d, void* additional) - : p_(p_->make(p, s, d, additional)) { -} - -buffer::buffer(void* p, std::size_t s) - : buffer(p, s, nullptr) { -} - -buffer::buffer(char const & c) - : buffer(const_cast(&c), 1) { -} - -buffer::buffer(buffer&& rhs) - : buffer() { - swap(rhs); -} - -buffer::~buffer() { - p_->clear(); -} - -void buffer::swap(buffer& rhs) { - std::swap(p_, rhs.p_); -} - -buffer& buffer::operator=(buffer rhs) { - swap(rhs); - return *this; -} - -bool buffer::empty() const noexcept { - return (impl(p_)->p_ == nullptr) || (impl(p_)->s_ == 0); -} - -void* buffer::data() noexcept { - return impl(p_)->p_; -} - -void const * buffer::data() const noexcept { - return impl(p_)->p_; -} - -std::size_t buffer::size() const noexcept { - return impl(p_)->s_; -} - -} // namespace ipc diff --git "a/spaces/qingxu98/academic-chatgpt-beta/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py" "b/spaces/qingxu98/academic-chatgpt-beta/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py" deleted file mode 100644 index e57f80f1d45bd3ec23837253848f7b32a5ccd751..0000000000000000000000000000000000000000 --- "a/spaces/qingxu98/academic-chatgpt-beta/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py" +++ /dev/null @@ -1,138 +0,0 @@ -import threading -from request_llm.bridge_all import predict_no_ui_long_connection -from toolbox import update_ui -from toolbox import CatchException, write_results_to_file, report_execption -from .crazy_utils import breakdown_txt_to_satisfy_token_limit - -def extract_code_block_carefully(txt): - splitted = txt.split('```') - n_code_block_seg = len(splitted) - 1 - if n_code_block_seg <= 1: return txt - # 剩下的情况都开头除去 ``` 结尾除去一次 ``` - txt_out = '```'.join(splitted[1:-1]) - return txt_out - - - -def break_txt_into_half_at_some_linebreak(txt): - lines = txt.split('\n') - n_lines = len(lines) - pre = lines[:(n_lines//2)] - post = lines[(n_lines//2):] - return "\n".join(pre), "\n".join(post) - - -@CatchException -def 全项目切换英文(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt, web_port): - # 第1步:清空历史,以免输入溢出 - history = [] - - # 第2步:尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import tiktoken - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 第3步:集合文件 - import time, glob, os, shutil, re - os.makedirs('gpt_log/generated_english_version', exist_ok=True) - os.makedirs('gpt_log/generated_english_version/crazy_functions', exist_ok=True) - file_manifest = [f for f in glob.glob('./*.py') if ('test_project' not in f) and ('gpt_log' not in f)] + \ - [f for f in glob.glob('./crazy_functions/*.py') if ('test_project' not in f) and ('gpt_log' not in f)] - # file_manifest = ['./toolbox.py'] - i_say_show_user_buffer = [] - - # 第4步:随便显示点什么防止卡顿的感觉 - for index, fp in enumerate(file_manifest): - # if 'test_project' in fp: continue - with open(fp, 'r', encoding='utf-8', errors='replace') 
as f: - file_content = f.read() - i_say_show_user =f'[{index}/{len(file_manifest)}] 接下来请将以下代码中包含的所有中文转化为英文,只输出转化后的英文代码,请用代码块输出代码: {os.path.abspath(fp)}' - i_say_show_user_buffer.append(i_say_show_user) - chatbot.append((i_say_show_user, "[Local Message] 等待多线程操作,中间过程不予显示.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - - # 第5步:Token限制下的截断与处理 - MAX_TOKEN = 3000 - from request_llm.bridge_all import model_info - enc = model_info["gpt-3.5-turbo"]['tokenizer'] - def get_token_fn(txt): return len(enc.encode(txt, disallowed_special=())) - - - # 第6步:任务函数 - mutable_return = [None for _ in file_manifest] - observe_window = [[""] for _ in file_manifest] - def thread_worker(fp,index): - if index > 10: - time.sleep(60) - print('Openai 限制免费用户每分钟20次请求,降低请求频率中。') - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - i_say_template = lambda fp, file_content: f'接下来请将以下代码中包含的所有中文转化为英文,只输出代码,文件名是{fp},文件代码是 ```{file_content}```' - try: - gpt_say = "" - # 分解代码文件 - file_content_breakdown = breakdown_txt_to_satisfy_token_limit(file_content, get_token_fn, MAX_TOKEN) - for file_content_partial in file_content_breakdown: - i_say = i_say_template(fp, file_content_partial) - # # ** gpt request ** - gpt_say_partial = predict_no_ui_long_connection(inputs=i_say, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=observe_window[index]) - gpt_say_partial = extract_code_block_carefully(gpt_say_partial) - gpt_say += gpt_say_partial - mutable_return[index] = gpt_say - except ConnectionAbortedError as token_exceed_err: - print('至少一个线程任务Token溢出而失败', e) - except Exception as e: - print('至少一个线程任务意外失败', e) - - # 第7步:所有线程同时开始执行任务函数 - handles = [threading.Thread(target=thread_worker, args=(fp,index)) for index, fp in enumerate(file_manifest)] - for h in handles: - h.daemon = True - h.start() - chatbot.append(('开始了吗?', f'多线程操作已经开始')) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 第8步:循环轮询各个线程是否执行完毕 - cnt = 0 - while True: - cnt += 1 - time.sleep(0.2) - th_alive = [h.is_alive() for h in handles] - if not any(th_alive): break - # 更好的UI视觉效果 - observe_win = [] - for thread_index, alive in enumerate(th_alive): - observe_win.append("[ ..."+observe_window[thread_index][0][-60:].replace('\n','').replace('```','...').replace(' ','.').replace('
            ','.....').replace('$','.')+"... ]") - stat = [f'执行中: {obs}\n\n' if alive else '已完成\n\n' for alive, obs in zip(th_alive, observe_win)] - stat_str = ''.join(stat) - chatbot[-1] = (chatbot[-1][0], f'多线程操作已经开始,完成情况: \n\n{stat_str}' + ''.join(['.']*(cnt%10+1))) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 第9步:把结果写入文件 - for index, h in enumerate(handles): - h.join() # 这里其实不需要join了,肯定已经都结束了 - fp = file_manifest[index] - gpt_say = mutable_return[index] - i_say_show_user = i_say_show_user_buffer[index] - - where_to_relocate = f'gpt_log/generated_english_version/{fp}' - if gpt_say is not None: - with open(where_to_relocate, 'w+', encoding='utf-8') as f: - f.write(gpt_say) - else: # 失败 - shutil.copyfile(file_manifest[index], where_to_relocate) - chatbot.append((i_say_show_user, f'[Local Message] 已完成{os.path.abspath(fp)}的转化,\n\n存入{os.path.abspath(where_to_relocate)}')) - history.append(i_say_show_user); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - time.sleep(1) - - # 第10步:备份一个文件 - res = write_results_to_file(history) - chatbot.append(("生成一份任务执行报告", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 diff --git a/spaces/qingyu-h/bingo/README.md b/spaces/qingyu-h/bingo/README.md deleted file mode 100644 index 5d6936218874c647b5d22e13ad4be7edb8936f92..0000000000000000000000000000000000000000 --- a/spaces/qingyu-h/bingo/README.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -title: bingo -emoji: 😊 -colorFrom: red -colorTo: red -sdk: docker -license: mit -duplicated_from: hf4all/bingo ---- - -
            - -# Bingo - -Bingo,一个让你呼吸顺畅 New Bing。 - -高度还原 New Bing 网页版的主要操作,国内可用,兼容绝大多数微软 Bing AI 的功能,可自行部署使用。 - -![Github stars](https://badgen.net/github/stars/weaigc/bingo?icon=github&label=stars) -![Gthub issues](https://img.shields.io/github/issues/weaigc/bingo) -[![docker build](https://github.com/weaigc/bingo/actions/workflows/docker.yml/badge.svg)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![docker hub](https://badgen.net/docker/size/weaigc/bingo?icon=docker&label=image%20size)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![MIT License](https://img.shields.io/badge/license-MIT-97c50f)](https://github.com/weaigc/bingo/blob/main/license) - -问题反馈请前往 https://github.com/weaigc/bingo/issues -
            - - diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Advanced SystemCare V3.5.1 Portable Serial Key !!INSTALL!! Keygen.md b/spaces/quidiaMuxgu/Expedit-SAM/Advanced SystemCare V3.5.1 Portable Serial Key !!INSTALL!! Keygen.md deleted file mode 100644 index d3fc0d98a3e52078c0f9b6027a8333f0a4830eb9..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Advanced SystemCare V3.5.1 Portable Serial Key !!INSTALL!! Keygen.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Advanced SystemCare V3.5.1 Portable Serial Key Keygen


            Download File ✑ ✑ ✑ https://geags.com/2uCs84



            - -Registrar Registry Manager Pro 7.53 build 753.30711 Portable · IDM ... Pro 2013 7.2.0 Portable · Rhinoceros v5.5 Corporate Edition x86 Incl Keygen-F4CG ... HDD Regenerator 2011 DC 08.05.2013 (+ Bootable Regenerating CD) 3 Sept · Autumn ... Passcape Software Reset Windows Password 5.1.3.559 Advanced Edition ... 1fdad05405
            -
            -
            -

            diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Eurocarsimulator2fullversiondownload [Extra Quality].md b/spaces/quidiaMuxgu/Expedit-SAM/Eurocarsimulator2fullversiondownload [Extra Quality].md deleted file mode 100644 index f27930153f7111d63a468fa42d22c99b5ff8d916..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Eurocarsimulator2fullversiondownload [Extra Quality].md +++ /dev/null @@ -1,6 +0,0 @@ -

            eurocarsimulator2fullversiondownload


            Download » https://geags.com/2uCqkY



            -
            -The Euro Truck Simulator 2 gives you the experience of managing the most powerful cars ever seen on the highways and autobahns of Europe. ... Euro Truck Simulator 2 (2013) download torrent RePack by R.G. Mechanics ... Full screen is unavailable. ... Screenshot for the game Euro Truck Simulator 2 [v 1.38.1.15s + DLC ... 4d29de3e1b
            -
            -
            -

            diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Free Download Little Fighter 3 Turbo Game WORK.md b/spaces/quidiaMuxgu/Expedit-SAM/Free Download Little Fighter 3 Turbo Game WORK.md deleted file mode 100644 index a3abfadcab18eb7776ba2a050ab8e84b45c42b99..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Free Download Little Fighter 3 Turbo Game WORK.md +++ /dev/null @@ -1,9 +0,0 @@ -
            -

The various fighters are controlled using the directional pad and have a variety of special moves and attacks they can perform. Some characters have a wide range of special moves, while others have only a few and focus primarily on blocking attacks. In single-player mode, you fight through eight levels, one for each character, to win the game. There are also multiplayer modes, including versus, battle, and demo.

            -

            Free Download Little Fighter 3 Turbo Game


            Download ☆☆☆☆☆ https://geags.com/2uCqQk



            -

It's been a while since we've had a fighting game that looks this good, and Little Fighter delivers. The graphics are great, the soundtrack is wonderful, and the levels are fun to play through. All in all, this is the best fighting game around.

            -

Little Fighter was developed by Snake for the Game Boy Advance. It is a fighting game in which your goal is to knock your opponents out. The controls are easy to learn but can be hard to master at times. The levels are short and repetitive, but that doesn't stop the game from being fun to play. The game features a story mode, an arcade mode, a training mode, a versus mode and a battle mode. There is also a demo mode, where you can play through the story mode or one of the training modes. The graphics are simple but decent, and the sound is pretty good.

            -

Little Fighter 2 is a kick-ass fighting game, which is why so many players keep coming back to it. It is not only a superb fighter but also has a great use of color and music; you can even change the background colors and the color of the buttons. Many people look for Little Fighter 2 to download because of its gameplay, graphics and sound, and its many new features make the game even more interesting. It is a great game that is easy to play but hard to master, and a lot of fun.

            -

            899543212b
            -
            -
            \ No newline at end of file diff --git a/spaces/ragha108/aiyogi_text_to_audio/app.py b/spaces/ragha108/aiyogi_text_to_audio/app.py deleted file mode 100644 index 2801a5b9da8f4c8c763199717b558689a4153869..0000000000000000000000000000000000000000 --- a/spaces/ragha108/aiyogi_text_to_audio/app.py +++ /dev/null @@ -1,18 +0,0 @@ -from gtts import gTTS -import gradio as gr -import os - -def text_to_audio(text): - # Generate audio from text using gTTS - tts = gTTS(text=text, lang='en', tld="co.in",slow=False) - tts.save("test.wav") - return 'test.wav' - - -iface = gr.Interface(fn = text_to_audio, - inputs = 'text', - outputs = 'audio', - verbose = True, - ) - -iface.launch() \ No newline at end of file diff --git a/spaces/rahul999r/Rahul_Kannada_TTS/install.sh b/spaces/rahul999r/Rahul_Kannada_TTS/install.sh deleted file mode 100644 index 51e038d5a0098f21d4efd8051a15b7f0cdeb4b73..0000000000000000000000000000000000000000 --- a/spaces/rahul999r/Rahul_Kannada_TTS/install.sh +++ /dev/null @@ -1,6 +0,0 @@ -cd src/glow_tts/monotonic_align/ -pip install . -cd ../../../ - -# torch -pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html diff --git a/spaces/rajesh1729/animated-visualization-with-mercury-ipyvizzu/README.md b/spaces/rajesh1729/animated-visualization-with-mercury-ipyvizzu/README.md deleted file mode 100644 index 4d5a21dee9ee573d64586624ffc2a9b45e12b6f8..0000000000000000000000000000000000000000 --- a/spaces/rajesh1729/animated-visualization-with-mercury-ipyvizzu/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Animated Visualization With Mercury Ipyvizzu -emoji: 📊 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/rajesh1729/live-twitter-sentiment-analysis/README.md b/spaces/rajesh1729/live-twitter-sentiment-analysis/README.md deleted file mode 100644 index 41ed6b6a17e76afdc4b55455d8034174412a3bf6..0000000000000000000000000000000000000000 --- a/spaces/rajesh1729/live-twitter-sentiment-analysis/README.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: Live Twitter Sentiment Analysis -emoji: 📈 -colorFrom: indigo -colorTo: yellow -sdk: streamlit -app_file: app.py -pinned: false -license: afl-3.0 ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. 
- -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/abbrev/abbrev.js b/spaces/rayan-saleh/whisper2notion/server/node_modules/abbrev/abbrev.js deleted file mode 100644 index 7b1dc5d67694a26793ad912950e9a1b56f1835c2..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/abbrev/abbrev.js +++ /dev/null @@ -1,61 +0,0 @@ -module.exports = exports = abbrev.abbrev = abbrev - -abbrev.monkeyPatch = monkeyPatch - -function monkeyPatch () { - Object.defineProperty(Array.prototype, 'abbrev', { - value: function () { return abbrev(this) }, - enumerable: false, configurable: true, writable: true - }) - - Object.defineProperty(Object.prototype, 'abbrev', { - value: function () { return abbrev(Object.keys(this)) }, - enumerable: false, configurable: true, writable: true - }) -} - -function abbrev (list) { - if (arguments.length !== 1 || !Array.isArray(list)) { - list = Array.prototype.slice.call(arguments, 0) - } - for (var i = 0, l = list.length, args = [] ; i < l ; i ++) { - args[i] = typeof list[i] === "string" ? list[i] : String(list[i]) - } - - // sort them lexicographically, so that they're next to their nearest kin - args = args.sort(lexSort) - - // walk through each, seeing how much it has in common with the next and previous - var abbrevs = {} - , prev = "" - for (var i = 0, l = args.length ; i < l ; i ++) { - var current = args[i] - , next = args[i + 1] || "" - , nextMatches = true - , prevMatches = true - if (current === next) continue - for (var j = 0, cl = current.length ; j < cl ; j ++) { - var curChar = current.charAt(j) - nextMatches = nextMatches && curChar === next.charAt(j) - prevMatches = prevMatches && curChar === prev.charAt(j) - if (!nextMatches && !prevMatches) { - j ++ - break - } - } - prev = current - if (j === cl) { - abbrevs[current] = current - continue - } - for (var a = current.substr(0, j) ; j <= cl ; j ++) { - abbrevs[a] = current - a += current.charAt(j) - } - } - return abbrevs -} - -function lexSort (a, b) { - return a === b ? 0 : a > b ? 1 : -1 -} diff --git a/spaces/raynardj/modern-chinese-to-ancient-translate-wenyanwen/README.md b/spaces/raynardj/modern-chinese-to-ancient-translate-wenyanwen/README.md deleted file mode 100644 index a534f1ddda3346c6b515de772e9a489d44471023..0000000000000000000000000000000000000000 --- a/spaces/raynardj/modern-chinese-to-ancient-translate-wenyanwen/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Modern Chinese To Ancient Translate Wenyanwen -emoji: 🍶 -colorFrom: gray -colorTo: cyan -sdk: streamlit -app_file: app.py -pinned: true ---- - -# Modern Chinese To Ancient Translate Wenyanwen -* Huggingface Model's Model: [wenyanwen-chinese-translate-to-ancient](https://huggingface.co/raynardj/wenyanwen-chinese-translate-to-ancient) -* [GitHub](https://github.com/raynardj/yuan) \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Barsaat 5 Full Movie Mp4 ((INSTALL)) Free Download.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Barsaat 5 Full Movie Mp4 ((INSTALL)) Free Download.md deleted file mode 100644 index f4d63c15c675c296e1ff5a53eef9292d52fc4912..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Barsaat 5 Full Movie Mp4 ((INSTALL)) Free Download.md +++ /dev/null @@ -1,36 +0,0 @@ -

            Barsaat 5 full movie mp4 free download


            Download File ••• https://urlgoal.com/2uCKUO



            -
            -,Bipasha Basu,Bobby Deol,Priyanka Chopra,Bend it Like Basha,Bobby Deol,Priyanka Chopra,Bend it Like Basha,Bobby Deol,Priyanka Chopra,Bend it Like Basha,Bobby Deol,Priyanka Chopra,Bend it Like BashaDementia and the role of the human community mental health center. - -This article examines the relationship of community mental health centers to dementia care in the U.S. It is based on a search of the current literature on dementia, its relationship to community mental health centers, and current local level strategies in the community for meeting the needs of those with dementia. The author discusses the early experiences of community mental health centers with services for the elderly and examines how a community mental health center's service model can address issues of cost and stigma related to dementia.Abdulkareem A. Moustafa - -Abdulkareem A. Moustafa is a Coptic priest, Egyptian Islamic scholar, professor and author of many books and scholarly papers on Islam. His books are widely used in academia and also translated into many languages including English. - -Life - -Abdulkareem A. Moustafa was born in 1970 in Zagazig, Egypt. - -He received a Doctor of Theology degree from the Salim Al-Hassaniya Theological University in Cairo, and a Master of Theology degree from the National Theological Academy. He was a professor at Alexandria University until he was suspended in 2003 for blasphemy and became a founding member of the Salafi movement in Egypt. - -Moustafa has been ordained as a Coptic priest in 2002. - -Activities - -He is a Professor of Islamic Thought at the Department of Religious Studies at the AUC and a member of the American Academy of Religion and a corresponding member of the Royal Asiatic Society. - -Scholarly contributions - -Moustafa contributed to the translation of works by many well-known Orientalist writers such as: - -Edward Lane, "The Manners and Customs of the Modern Egyptians" (1875) - -Muhammad Husayn Haykal, "The Life of Muhammad" (1912) - -Muhammad Husayn Haykal, "The Life of Mohammad" (1939) - -Muhammad Ali Khalidi, "Arabia: A History" (1966) - -Amin Maalouf, 4fefd39f24
            -
            -
            -

            diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Engineering Science N2 Question Papers And Memos Pdf 21.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Engineering Science N2 Question Papers And Memos Pdf 21.md deleted file mode 100644 index 2ddf879a837c5f6ad2387c1378083777775bd82a..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Engineering Science N2 Question Papers And Memos Pdf 21.md +++ /dev/null @@ -1,6 +0,0 @@ -

            engineering science n2 question papers and memos pdf 21


            DOWNLOAD » https://urlgoal.com/2uCLVJ



            - - 3cee63e6c2
            -
            -
            -

            diff --git a/spaces/rfrossard/Image-and-3D-Model-Creator/PIFu/apps/train_shape.py b/spaces/rfrossard/Image-and-3D-Model-Creator/PIFu/apps/train_shape.py deleted file mode 100644 index 241ce543c956ce51f6f8445739ef41f4ddf7a7d5..0000000000000000000000000000000000000000 --- a/spaces/rfrossard/Image-and-3D-Model-Creator/PIFu/apps/train_shape.py +++ /dev/null @@ -1,183 +0,0 @@ -import sys -import os - -sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))) -ROOT_PATH = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) - -import time -import json -import numpy as np -import cv2 -import random -import torch -from torch.utils.data import DataLoader -from tqdm import tqdm - -from lib.options import BaseOptions -from lib.mesh_util import * -from lib.sample_util import * -from lib.train_util import * -from lib.data import * -from lib.model import * -from lib.geometry import index - -# get options -opt = BaseOptions().parse() - -def train(opt): - # set cuda - cuda = torch.device('cuda:%d' % opt.gpu_id) - - train_dataset = TrainDataset(opt, phase='train') - test_dataset = TrainDataset(opt, phase='test') - - projection_mode = train_dataset.projection_mode - - # create data loader - train_data_loader = DataLoader(train_dataset, - batch_size=opt.batch_size, shuffle=not opt.serial_batches, - num_workers=opt.num_threads, pin_memory=opt.pin_memory) - - print('train data size: ', len(train_data_loader)) - - # NOTE: batch size should be 1 and use all the points for evaluation - test_data_loader = DataLoader(test_dataset, - batch_size=1, shuffle=False, - num_workers=opt.num_threads, pin_memory=opt.pin_memory) - print('test data size: ', len(test_data_loader)) - - # create net - netG = HGPIFuNet(opt, projection_mode).to(device=cuda) - optimizerG = torch.optim.RMSprop(netG.parameters(), lr=opt.learning_rate, momentum=0, weight_decay=0) - lr = opt.learning_rate - print('Using Network: ', netG.name) - - def set_train(): - netG.train() - - def set_eval(): - netG.eval() - - # load checkpoints - if opt.load_netG_checkpoint_path is not None: - print('loading for net G ...', opt.load_netG_checkpoint_path) - netG.load_state_dict(torch.load(opt.load_netG_checkpoint_path, map_location=cuda)) - - if opt.continue_train: - if opt.resume_epoch < 0: - model_path = '%s/%s/netG_latest' % (opt.checkpoints_path, opt.name) - else: - model_path = '%s/%s/netG_epoch_%d' % (opt.checkpoints_path, opt.name, opt.resume_epoch) - print('Resuming from ', model_path) - netG.load_state_dict(torch.load(model_path, map_location=cuda)) - - os.makedirs(opt.checkpoints_path, exist_ok=True) - os.makedirs(opt.results_path, exist_ok=True) - os.makedirs('%s/%s' % (opt.checkpoints_path, opt.name), exist_ok=True) - os.makedirs('%s/%s' % (opt.results_path, opt.name), exist_ok=True) - - opt_log = os.path.join(opt.results_path, opt.name, 'opt.txt') - with open(opt_log, 'w') as outfile: - outfile.write(json.dumps(vars(opt), indent=2)) - - # training - start_epoch = 0 if not opt.continue_train else max(opt.resume_epoch,0) - for epoch in range(start_epoch, opt.num_epoch): - epoch_start_time = time.time() - - set_train() - iter_data_time = time.time() - for train_idx, train_data in enumerate(train_data_loader): - iter_start_time = time.time() - - # retrieve the data - image_tensor = train_data['img'].to(device=cuda) - calib_tensor = train_data['calib'].to(device=cuda) - sample_tensor = train_data['samples'].to(device=cuda) - - image_tensor, calib_tensor = reshape_multiview_tensors(image_tensor, calib_tensor) - - 
if opt.num_views > 1: - sample_tensor = reshape_sample_tensor(sample_tensor, opt.num_views) - - label_tensor = train_data['labels'].to(device=cuda) - - res, error = netG.forward(image_tensor, sample_tensor, calib_tensor, labels=label_tensor) - - optimizerG.zero_grad() - error.backward() - optimizerG.step() - - iter_net_time = time.time() - eta = ((iter_net_time - epoch_start_time) / (train_idx + 1)) * len(train_data_loader) - ( - iter_net_time - epoch_start_time) - - if train_idx % opt.freq_plot == 0: - print( - 'Name: {0} | Epoch: {1} | {2}/{3} | Err: {4:.06f} | LR: {5:.06f} | Sigma: {6:.02f} | dataT: {7:.05f} | netT: {8:.05f} | ETA: {9:02d}:{10:02d}'.format( - opt.name, epoch, train_idx, len(train_data_loader), error.item(), lr, opt.sigma, - iter_start_time - iter_data_time, - iter_net_time - iter_start_time, int(eta // 60), - int(eta - 60 * (eta // 60)))) - - if train_idx % opt.freq_save == 0 and train_idx != 0: - torch.save(netG.state_dict(), '%s/%s/netG_latest' % (opt.checkpoints_path, opt.name)) - torch.save(netG.state_dict(), '%s/%s/netG_epoch_%d' % (opt.checkpoints_path, opt.name, epoch)) - - if train_idx % opt.freq_save_ply == 0: - save_path = '%s/%s/pred.ply' % (opt.results_path, opt.name) - r = res[0].cpu() - points = sample_tensor[0].transpose(0, 1).cpu() - save_samples_truncted_prob(save_path, points.detach().numpy(), r.detach().numpy()) - - iter_data_time = time.time() - - # update learning rate - lr = adjust_learning_rate(optimizerG, epoch, lr, opt.schedule, opt.gamma) - - #### test - with torch.no_grad(): - set_eval() - - if not opt.no_num_eval: - test_losses = {} - print('calc error (test) ...') - test_errors = calc_error(opt, netG, cuda, test_dataset, 100) - print('eval test MSE: {0:06f} IOU: {1:06f} prec: {2:06f} recall: {3:06f}'.format(*test_errors)) - MSE, IOU, prec, recall = test_errors - test_losses['MSE(test)'] = MSE - test_losses['IOU(test)'] = IOU - test_losses['prec(test)'] = prec - test_losses['recall(test)'] = recall - - print('calc error (train) ...') - train_dataset.is_train = False - train_errors = calc_error(opt, netG, cuda, train_dataset, 100) - train_dataset.is_train = True - print('eval train MSE: {0:06f} IOU: {1:06f} prec: {2:06f} recall: {3:06f}'.format(*train_errors)) - MSE, IOU, prec, recall = train_errors - test_losses['MSE(train)'] = MSE - test_losses['IOU(train)'] = IOU - test_losses['prec(train)'] = prec - test_losses['recall(train)'] = recall - - if not opt.no_gen_mesh: - print('generate mesh (test) ...') - for gen_idx in tqdm(range(opt.num_gen_mesh_test)): - test_data = random.choice(test_dataset) - save_path = '%s/%s/test_eval_epoch%d_%s.obj' % ( - opt.results_path, opt.name, epoch, test_data['name']) - gen_mesh(opt, netG, cuda, test_data, save_path) - - print('generate mesh (train) ...') - train_dataset.is_train = False - for gen_idx in tqdm(range(opt.num_gen_mesh_test)): - train_data = random.choice(train_dataset) - save_path = '%s/%s/train_eval_epoch%d_%s.obj' % ( - opt.results_path, opt.name, epoch, train_data['name']) - gen_mesh(opt, netG, cuda, train_data, save_path) - train_dataset.is_train = True - - -if __name__ == '__main__': - train(opt) \ No newline at end of file diff --git a/spaces/richardzhangy26/yandian_flow_classification/configs/_base_/models/liteflownet/liteflownet_pre_M6S6R6.py b/spaces/richardzhangy26/yandian_flow_classification/configs/_base_/models/liteflownet/liteflownet_pre_M6S6R6.py deleted file mode 100644 index ad63ddd853d7c7664e5bd630864aa72cfd90257b..0000000000000000000000000000000000000000 --- 
a/spaces/richardzhangy26/yandian_flow_classification/configs/_base_/models/liteflownet/liteflownet_pre_M6S6R6.py +++ /dev/null @@ -1,51 +0,0 @@ -model = dict( - type='LiteFlowNet', - encoder=dict( - type='NetC', - in_channels=3, - pyramid_levels=[ - 'level1', 'level2', 'level3', 'level4', 'level5', 'level6' - ], - out_channels=(32, 32, 64, 96, 128, 192), - strides=(1, 2, 2, 2, 2, 2), - num_convs=(1, 3, 2, 2, 1, 1), - conv_cfg=None, - norm_cfg=None, - act_cfg=dict(type='LeakyReLU', negative_slope=0.1), - init_cfg=None), - decoder=dict( - type='NetE', - in_channels=dict(level6=192), - corr_channels=dict(level6=49), - sin_channels=dict(level6=386), - rin_channels=dict(level6=195), - feat_channels=64, - mfeat_channels=(128, 64, 32), - sfeat_channels=(128, 64, 32), - rfeat_channels=(128, 128, 64, 64, 32, 32), - patch_size=dict(level6=3), - corr_cfg=dict(level6=dict(type='Correlation', max_displacement=3)), - warp_cfg=dict(type='Warp', align_corners=True, use_mask=True), - flow_div=20., - conv_cfg=None, - norm_cfg=None, - act_cfg=dict(type='LeakyReLU', negative_slope=0.1), - scaled_corr=False, - regularized_flow=True, - extra_training_loss=False, - flow_loss=dict( - type='MultiLevelEPE', - weights=dict(level6=0.32), - p=2, - reduction='sum'), - ), - init_cfg=dict( - type='Kaiming', - nonlinearity='leaky_relu', - layer=['Conv2d', 'ConvTranspose2d'], - mode='fan_in', - bias=0), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(), -) diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/utils/contextmanagers.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/utils/contextmanagers.py deleted file mode 100644 index fa12bfcaff1e781b0a8cc7d7c8b839c2f2955a05..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/utils/contextmanagers.py +++ /dev/null @@ -1,122 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import asyncio -import contextlib -import logging -import os -import time -from typing import List - -import torch - -logger = logging.getLogger(__name__) - -DEBUG_COMPLETED_TIME = bool(os.environ.get('DEBUG_COMPLETED_TIME', False)) - - -@contextlib.asynccontextmanager -async def completed(trace_name='', - name='', - sleep_interval=0.05, - streams: List[torch.cuda.Stream] = None): - """Async context manager that waits for work to complete on given CUDA - streams.""" - if not torch.cuda.is_available(): - yield - return - - stream_before_context_switch = torch.cuda.current_stream() - if not streams: - streams = [stream_before_context_switch] - else: - streams = [s if s else stream_before_context_switch for s in streams] - - end_events = [ - torch.cuda.Event(enable_timing=DEBUG_COMPLETED_TIME) for _ in streams - ] - - if DEBUG_COMPLETED_TIME: - start = torch.cuda.Event(enable_timing=True) - stream_before_context_switch.record_event(start) - - cpu_start = time.monotonic() - logger.debug('%s %s starting, streams: %s', trace_name, name, streams) - grad_enabled_before = torch.is_grad_enabled() - try: - yield - finally: - current_stream = torch.cuda.current_stream() - assert current_stream == stream_before_context_switch - - if DEBUG_COMPLETED_TIME: - cpu_end = time.monotonic() - for i, stream in enumerate(streams): - event = end_events[i] - stream.record_event(event) - - grad_enabled_after = torch.is_grad_enabled() - - # observed change of torch.is_grad_enabled() during concurrent run of - # async_test_bboxes code - assert (grad_enabled_before == grad_enabled_after - ), 'Unexpected is_grad_enabled() value change' - - are_done = [e.query() for e in end_events] - logger.debug('%s %s completed: %s streams: %s', trace_name, name, - are_done, streams) - with torch.cuda.stream(stream_before_context_switch): - while not all(are_done): - await asyncio.sleep(sleep_interval) - are_done = [e.query() for e in end_events] - logger.debug( - '%s %s completed: %s streams: %s', - trace_name, - name, - are_done, - streams, - ) - - current_stream = torch.cuda.current_stream() - assert current_stream == stream_before_context_switch - - if DEBUG_COMPLETED_TIME: - cpu_time = (cpu_end - cpu_start) * 1000 - stream_times_ms = '' - for i, stream in enumerate(streams): - elapsed_time = start.elapsed_time(end_events[i]) - stream_times_ms += f' {stream} {elapsed_time:.2f} ms' - logger.info('%s %s %.2f ms %s', trace_name, name, cpu_time, - stream_times_ms) - - -@contextlib.asynccontextmanager -async def concurrent(streamqueue: asyncio.Queue, - trace_name='concurrent', - name='stream'): - """Run code concurrently in different streams. - - :param streamqueue: asyncio.Queue instance. - - Queue tasks define the pool of streams used for concurrent execution. 
- """ - if not torch.cuda.is_available(): - yield - return - - initial_stream = torch.cuda.current_stream() - - with torch.cuda.stream(initial_stream): - stream = await streamqueue.get() - assert isinstance(stream, torch.cuda.Stream) - - try: - with torch.cuda.stream(stream): - logger.debug('%s %s is starting, stream: %s', trace_name, name, - stream) - yield - current = torch.cuda.current_stream() - assert current == stream - logger.debug('%s %s has finished, stream: %s', trace_name, - name, stream) - finally: - streamqueue.task_done() - streamqueue.put_nowait(stream) diff --git a/spaces/rorallitri/biomedical-language-models/logs/Dazed And Confused 1993 720p HDDVD X264 650MB YIFY.md b/spaces/rorallitri/biomedical-language-models/logs/Dazed And Confused 1993 720p HDDVD X264 650MB YIFY.md deleted file mode 100644 index 56d719e130cb8e94ce46807236b50499db2999a8..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Dazed And Confused 1993 720p HDDVD X264 650MB YIFY.md +++ /dev/null @@ -1,6 +0,0 @@ -
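As a point of reference for the `completed` and `concurrent` context managers in the deleted `contextmanagers.py` above, here is a minimal usage sketch. It is only an illustration under stated assumptions: a CUDA-enabled PyTorch build, a synchronous callable `model`, and an arbitrary pool of two streams; the function names are invented for this example and are not part of mmdet.

```python
import asyncio

import torch

from mmdet.utils.contextmanagers import completed, concurrent


async def infer_one(model, img, streamqueue):
    # Borrow a CUDA stream from the shared pool; the forward pass is
    # launched on that stream so several requests can overlap on the GPU.
    async with concurrent(streamqueue):
        # Wait cooperatively (without blocking the event loop) until the
        # work recorded on the current stream has actually finished.
        async with completed('demo', 'forward'):
            result = model(img)
    return result


async def main(model, imgs):
    # Arbitrary choice for illustration: a pool of two CUDA streams.
    streamqueue = asyncio.Queue()
    for _ in range(2):
        streamqueue.put_nowait(torch.cuda.Stream())
    return await asyncio.gather(
        *(infer_one(model, img, streamqueue) for img in imgs))
```

Handing the streams around through an `asyncio.Queue` is what lets several in-flight requests share the GPU while the event loop stays responsive; if CUDA is unavailable, both context managers simply fall through.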

            Dazed And Confused 1993 720p HDDVD X264 650MB YIFY


            Download ===== https://tinurll.com/2uzo0r



            - -Download Dazed and Confused (1993) 720p HDDVD x264 - 650MB - YIFY torrent or any other torrent from the Video HD - Movies. Direct download via magnet ... 1fdad05405
            -
            -
            -

            diff --git a/spaces/rorallitri/biomedical-language-models/logs/Endrendrum Raja A Full Concert By Isaignani Ilaiyaraaja A Rare and Unforgettable Musical Event.md b/spaces/rorallitri/biomedical-language-models/logs/Endrendrum Raja A Full Concert By Isaignani Ilaiyaraaja A Rare and Unforgettable Musical Event.md deleted file mode 100644 index 3cfee72bbf6163ea452b7a939235b538520a0024..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Endrendrum Raja A Full Concert By Isaignani Ilaiyaraaja A Rare and Unforgettable Musical Event.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Endrendrum Raja A Full Concert By Isaignani Ilaiyaraaja


            Downloadhttps://tinurll.com/2uzok3



            - - aaccfb2cb3
            -
            -
            -

            diff --git a/spaces/rorallitri/biomedical-language-models/logs/Hook Ya Crook hd download How to stream the hilarious prison escape movie online.md b/spaces/rorallitri/biomedical-language-models/logs/Hook Ya Crook hd download How to stream the hilarious prison escape movie online.md deleted file mode 100644 index 981c6e57d6c0f1ab1330b6c4dbc9b2153a70d168..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Hook Ya Crook hd download How to stream the hilarious prison escape movie online.md +++ /dev/null @@ -1,19 +0,0 @@ -
            -


            -

            "By hook or by crook" is an English phrase meaning "by any means necessary", suggesting that any means possible should be taken to accomplish a goal. The phrase was first recorded in the Middle English Controversial Tracts of John Wyclif in 1380.[1][2]

            -

            Hook Ya Crook hd download


            Download Zip 🗸 https://tinurll.com/2uznc9



            -

            The origin of the phrase is obscure, with multiple different explanations and no evidence to support any particular one over the others.[3] For example, a commonly repeated suggestion is that it comes from Hook Head in Wexford, Ireland and the nearby village of Crooke, in Waterford, Ireland. As such, the phrase would derive from a vow by Oliver Cromwell to take Waterford by Hook (on the Wexford side of Waterford Estuary) or by Crook (a village on the Waterford side); although the Wyclif tract was published at least 260 years before Cromwell. Another is that it comes from the customs regulating which firewood local people could take from common land; they were allowed to take any branches that they could reach with a billhook or a shepherd's crook (used to hook sheep).[4]

            -

            The phrase was featured in the opening credits to the 1960s British television series The Prisoner.[5] It appears prominently (as "by hook and by crook") in the short stories "The Snows of Kilimanjaro" by Ernest Hemingway[6] and "The Legend of Sleepy Hollow" by Washington Irving.[7] It was also used as the title of the 2001 film By Hook or by Crook directed by Silas Howard and Harry Dodge. It was also used (as "By hook or by crook, you're coming with me") by the bounty hunter Cad Bane in the Star Wars: The Bad Batch episode, "Bounty Lost". It was also used as a lyric in the chorus of Radiohead's song "Little by Little".[8]

            -

Includes unlimited streaming via the free Bandcamp app, plus high-quality downloads of Infinite, The Bill Murray EP, Alive And Well, PIGEON HOLE - Age Like Astronauts, and By Hook or by Crook. Purchasable with gift card; the digital discography costs $25.28 USD or more (35% off).


Includes unlimited streaming of By Hook or by Crook via the free Bandcamp app, plus high-quality download in MP3, FLAC and more.

Track list: 1. Recession Proof (02:16); 2. By Hook Or By Crook feat. Josh Martinez, Mos Eisley of Sweatshop Union and Mat The Alien (02:54); 3. Howard The Duck (03:11); 4. Look So Good (03:24); 5. Five Finger Discount (01:47); 6. Slumlord Trillionaire$ feat. AWOL ONE (03:47); 7. Shiners (03:30); 8. Wolf Boy (03:22); 9. Hollywood Square feat. Moka Only and Frank Nitt of Frank n Dank (03:49); 10. Eat Shit (01:32); 11. Chewin The Fat (03:23); 12. 12 Step Program (Last Call) (02:04); 13. The Magic Number (02:14).

About the album: The Trillionaire$' By Hook Or By Crook is a unique concept album, far different from the normal rap records being released today. It is a tongue-in-cheek commentary on the greed and ruthlessness of modern society; it celebrates, and at the same time makes a statement about, man's insatiable hunger for "the good things in life". The Trillionaire$ are the brainchild of Metty The Dertmerchant, a combo consisting of himself and fellow Canadian M.C. Evil Ebenezer.

        Twitter: twitter.com/sweatshopunion1
        Website: urbnet.com/TheTrillionaire$
        Sweatshop Union: urbnet.com/ssu
        Photos: flickr.com/photos/urbnet/sets/72157626050389661

        Credits: released June 15, 2010. License: all rights reserved. Tags: hip-hop, hip-hop/rap, Sweatshop Union, Vancouver. About: Sweatshop Union, Vancouver, British Columbia

        -

        Hook Ya Crook is an unreleased Bollywood comedy drama film, which was to be directed by David Dhawan and was to star John Abraham and Genelia D'Souza in the lead roles. Produced by Ronnie Screwvala and written by Rensil D'Silva, the award-winning writer of Rang De Basanti, the film was shelved due to numerous delays. The film was to be an adaptation of the 2005 Hollywood hit The Longest Yard.

        -

        Description: "Hook N Crook" is an MP3 song by Kulbir Jhinjer from the album Hook N Crook, with lyrics by Hardeep Grewal, music by Yeah Proof, and released on the label Hardeep Grewal Music. It is a single track, released on Dec 15, 2021, with a playtime of 3:37 minutes.

        -


        -

        -

        Client Library - We've made it clear over the years that it's super important for clients to back up their digital files securely, as we do not offer any guarantees that your files will be available for re-download. All purchases made within the Crooklyn Clan Vault are, and have always been, on an as-is basis. Your previous downloads will no longer be accessible in ccv4. Your "Library" is a feature we created as a courtesy to you so that you may keep track of previous purchases and, while your library is active, re-download them. Please take the time to download from your library while ccv2 and ccv3 are still active. In some cases your library may not be available to you, depending on whether or not you have an active recurring membership or credits in your account. This will remain the case until the very end of v.2 and v.3.

        -

        Stars - Since every token is worth just a single penny, we needed a way to reward clients for bigger purchases without adjusting the value of a token, so we invented stars. Stars will be issued to clients in certain packages and allow the download of a single track anywhere site-wide for a single star, completely disregarding the token price of the track. The only catch with stars is that when you have a star balance on your account, it will be used to make purchases first, and only then will your tokens be used. In other words, while you have an active star balance you won't see a token cost on tracks; instead you will see a star in its place, letting you know that your track purchase will be made using your star balance. When your stars are depleted, your tokens will be used and token prices will show on tracks.
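        The star-before-token ordering described above is easy to state precisely. The following is only an illustrative sketch of that rule, with hypothetical names (charge_for_track, account, stars, tokens); it is not Crooklyn Clan's actual code.

        ```python
        def charge_for_track(account: dict, token_price: int) -> str:
            """Sketch of the described rule: any active star balance is spent before tokens."""
            if account.get("stars", 0) > 0:
                # While a star balance exists, one star buys one track,
                # regardless of the track's token price.
                account["stars"] -= 1
                return "charged 1 star"
            if account.get("tokens", 0) >= token_price:
                # Only once stars are depleted does the token price apply.
                account["tokens"] -= token_price
                return f"charged {token_price} tokens"
            return "insufficient balance"
        ```

        For example, an account holding 2 stars and 500 tokens would pay for its first two tracks with stars, and only the third purchase would show and deduct a token price.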

        -

        On this example of the desktop version of a release, on the top-left you see the date the release was published, on the top-right you see the number of tracks in the release, on the bottom-left you see the discount offered on the tracks contained within the release if you purchase the entire release, and on the bottom-right you see the controls for the release to add it to your cart, share it on various platforms, or one-click it to download it on the spot by clicking on the token cost of the release.

        When hovering over the release, you can see the title of the release and the contributor that the release belongs to. Clicking on the release will bring you to the page showing the tracks contained within it. Clicking on the contributor will take you to that contributor's profile page.

        -

        Dropbox Support - Dropbox is a well-established cloud storage service that is now supported by Pioneer DJ right inside the CDJ-3000, among many other DJ controllers from various brand names. CCV4 gives you the ability to link your Dropbox account to our application and download your tracks right to your Dropbox.

        -

        Not only could they use stones to solve the task, but they were flexible in their tool choice, using and modifying sticks to achieve the same goal. When the correct tool was out of reach, they used another tool to get it, demonstrating the ability to use tools sequentially. In further tests, the rooks were able to use a hook tool to get food out of a different tube, and even creatively bent a straight piece of wire into a hook to reach the food.

        -

        Lyrics:
        If only I was lonely and double all of your dares
        Maybe double something else down there
        If only I was lonely

        Propose something unholy, perhaps perfectly prepare
        To announce all of your affairs
        Propose something unholy

        You come bounding into this joint
        Flexible and so adroit
        I'm sorry you missed the point
        I'm sorry, so sorry
        If only I was lonely, if only I was lonely

        Now someone calls me "mine"
        She won me with just one look
        It was a chance that I already took
        Now someone calls me "mine"

        Just casting out your line, hope to catch me on your hook
        Steal me away like a common crook
        Just casting out your line

        Where are the drinks? Let them all pour
        Spill on my jeans and on the floor
        Make eyes at me then make for the door
        I'm sorry, so sorry
        If only I was lonely, if only I was lonely

        You come bounding into this joint
        Flexible and so adroit
        I'm sorry you missed the point
        I'm sorry, so sorry
        If only I was lonely, if only I was lonely

        Credits: from Party Music!, released April 20, 2018
        Music & Lyrics: Phil Yates (Sonic Charger Songs, ASCAP)

        Phil Yates: vocals, electric guitar
        Jake Blodgett: drums
        Raph Worrick: bass
        Kevin Stevens: electric guitar

        License: all rights reserved. Tags: alternative, college rock, folk, indie, indie rock, jangle pop, power pop, rock, Chicago. About: Phil Yates & The Affiliates, Chicago, Illinois

        -
        -
        \ No newline at end of file diff --git a/spaces/rushic24/Priyanka-Chopra-TTS/training/tacotron2_model/model.py b/spaces/rushic24/Priyanka-Chopra-TTS/training/tacotron2_model/model.py deleted file mode 100644 index f38f66e4c4f875f1974b048ee0e9b1b48658549b..0000000000000000000000000000000000000000 --- a/spaces/rushic24/Priyanka-Chopra-TTS/training/tacotron2_model/model.py +++ /dev/null @@ -1,609 +0,0 @@ -""" -BSD 3-Clause License - -Copyright (c) 2018, NVIDIA Corporation -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - -* Redistributions of source code must retain the above copyright notice, this - list of conditions and the following disclaimer. - -* Redistributions in binary form must reproduce the above copyright notice, - this list of conditions and the following disclaimer in the documentation - and/or other materials provided with the distribution. - -* Neither the name of the copyright holder nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE -FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL -DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR -SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, -OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-""" -from math import sqrt -import torch -from torch.autograd import Variable -from torch import nn -from torch.nn import functional as F -from training.tacotron2_model.layers import ConvNorm, LinearNorm -from training.tacotron2_model.utils import to_gpu, get_mask_from_lengths, get_x - - -class LocationLayer(nn.Module): - def __init__(self, attention_n_filters, attention_kernel_size, attention_dim): - super(LocationLayer, self).__init__() - padding = int((attention_kernel_size - 1) / 2) - self.location_conv = ConvNorm( - 2, attention_n_filters, kernel_size=attention_kernel_size, padding=padding, bias=False, stride=1, dilation=1 - ) - self.location_dense = LinearNorm(attention_n_filters, attention_dim, bias=False, w_init_gain="tanh") - - def forward(self, attention_weights_cat): - processed_attention = self.location_conv(attention_weights_cat) - processed_attention = processed_attention.transpose(1, 2) - processed_attention = self.location_dense(processed_attention) - return processed_attention - - -class Attention(nn.Module): - def __init__( - self, - attention_rnn_dim, - embedding_dim, - attention_dim, - attention_location_n_filters, - attention_location_kernel_size, - ): - super(Attention, self).__init__() - self.query_layer = LinearNorm(attention_rnn_dim, attention_dim, bias=False, w_init_gain="tanh") - self.memory_layer = LinearNorm(embedding_dim, attention_dim, bias=False, w_init_gain="tanh") - self.v = LinearNorm(attention_dim, 1, bias=False) - self.location_layer = LocationLayer(attention_location_n_filters, attention_location_kernel_size, attention_dim) - self.score_mask_value = -float("inf") - - def get_alignment_energies(self, query, processed_memory, attention_weights_cat): - """ - PARAMS - ------ - query: decoder output (batch, n_mel_channels * n_frames_per_step) - processed_memory: processed encoder outputs (B, T_in, attention_dim) - attention_weights_cat: cumulative and prev. 
att weights (B, 2, max_time) - - RETURNS - ------- - alignment (batch, max_time) - """ - - processed_query = self.query_layer(query.unsqueeze(1)) - processed_attention_weights = self.location_layer(attention_weights_cat) - energies = self.v(torch.tanh(processed_query + processed_attention_weights + processed_memory)) - - energies = energies.squeeze(-1) - return energies - - def forward(self, attention_hidden_state, memory, processed_memory, attention_weights_cat, mask): - """ - PARAMS - ------ - attention_hidden_state: attention rnn last output - memory: encoder outputs - processed_memory: processed encoder outputs - attention_weights_cat: previous and cummulative attention weights - mask: binary mask for padded data - """ - alignment = self.get_alignment_energies(attention_hidden_state, processed_memory, attention_weights_cat) - - if mask is not None: - alignment.data.masked_fill_(mask, self.score_mask_value) - - attention_weights = F.softmax(alignment, dim=1) - attention_context = torch.bmm(attention_weights.unsqueeze(1), memory) - attention_context = attention_context.squeeze(1) - - return attention_context, attention_weights - - -class Prenet(nn.Module): - def __init__(self, in_dim, sizes): - super(Prenet, self).__init__() - in_sizes = [in_dim] + sizes[:-1] - self.layers = nn.ModuleList( - [LinearNorm(in_size, out_size, bias=False) for (in_size, out_size) in zip(in_sizes, sizes)] - ) - - def forward(self, x): - for linear in self.layers: - x = F.dropout(F.relu(linear(x)), p=0.5, training=True) - return x - - -class Postnet(nn.Module): - """Postnet - - Five 1-d convolution with 512 channels and kernel size 5 - """ - - def __init__(self, n_mel_channels, postnet_embedding_dim, postnet_kernel_size, postnet_n_convolutions): - super(Postnet, self).__init__() - self.convolutions = nn.ModuleList() - - self.convolutions.append( - nn.Sequential( - ConvNorm( - n_mel_channels, - postnet_embedding_dim, - kernel_size=postnet_kernel_size, - stride=1, - padding=int((postnet_kernel_size - 1) / 2), - dilation=1, - w_init_gain="tanh", - ), - nn.BatchNorm1d(postnet_embedding_dim), - ) - ) - - for i in range(1, postnet_n_convolutions - 1): - self.convolutions.append( - nn.Sequential( - ConvNorm( - postnet_embedding_dim, - postnet_embedding_dim, - kernel_size=postnet_kernel_size, - stride=1, - padding=int((postnet_kernel_size - 1) / 2), - dilation=1, - w_init_gain="tanh", - ), - nn.BatchNorm1d(postnet_embedding_dim), - ) - ) - - self.convolutions.append( - nn.Sequential( - ConvNorm( - postnet_embedding_dim, - n_mel_channels, - kernel_size=postnet_kernel_size, - stride=1, - padding=int((postnet_kernel_size - 1) / 2), - dilation=1, - w_init_gain="linear", - ), - nn.BatchNorm1d(n_mel_channels), - ) - ) - - def forward(self, x): - for i in range(len(self.convolutions) - 1): - x = F.dropout(torch.tanh(self.convolutions[i](x)), 0.5, self.training) - x = F.dropout(self.convolutions[-1](x), 0.5, self.training) - - return x - - -class Encoder(nn.Module): - """Encoder module: - - Three 1-d convolution banks - - Bidirectional LSTM - """ - - def __init__(self, encoder_kernel_size, encoder_n_convolutions, encoder_embedding_dim): - super(Encoder, self).__init__() - - convolutions = [] - for _ in range(encoder_n_convolutions): - conv_layer = nn.Sequential( - ConvNorm( - encoder_embedding_dim, - encoder_embedding_dim, - kernel_size=encoder_kernel_size, - stride=1, - padding=int((encoder_kernel_size - 1) / 2), - dilation=1, - w_init_gain="relu", - ), - nn.BatchNorm1d(encoder_embedding_dim), - ) - 
convolutions.append(conv_layer) - self.convolutions = nn.ModuleList(convolutions) - - self.lstm = nn.LSTM( - encoder_embedding_dim, int(encoder_embedding_dim / 2), 1, batch_first=True, bidirectional=True - ) - - def forward(self, x, input_lengths): - for conv in self.convolutions: - x = F.dropout(F.relu(conv(x)), 0.5, self.training) - - x = x.transpose(1, 2) - - # pytorch tensor are not reversible, hence the conversion - input_lengths = input_lengths.cpu().numpy() - x = nn.utils.rnn.pack_padded_sequence(x, input_lengths, batch_first=True) - - self.lstm.flatten_parameters() - outputs, _ = self.lstm(x) - - outputs, _ = nn.utils.rnn.pad_packed_sequence(outputs, batch_first=True) - - return outputs - - def inference(self, x): - for conv in self.convolutions: - x = F.dropout(F.relu(conv(x)), 0.5, self.training) - - x = x.transpose(1, 2) - - self.lstm.flatten_parameters() - outputs, _ = self.lstm(x) - - return outputs - - -class Decoder(nn.Module): - def __init__( - self, - n_mel_channels, - n_frames_per_step, - encoder_embedding_dim, - attention_dim, - attention_rnn_dim, - attention_location_n_filters, - attention_location_kernel_size, - decoder_rnn_dim, - prenet_dim, - max_decoder_steps, - gate_threshold, - p_attention_dropout, - p_decoder_dropout, - ): - super(Decoder, self).__init__() - self.n_mel_channels = n_mel_channels - self.n_frames_per_step = n_frames_per_step - self.encoder_embedding_dim = encoder_embedding_dim - self.attention_rnn_dim = attention_rnn_dim - self.decoder_rnn_dim = decoder_rnn_dim - self.prenet_dim = prenet_dim - self.max_decoder_steps = max_decoder_steps - self.gate_threshold = gate_threshold - self.p_attention_dropout = p_attention_dropout - self.p_decoder_dropout = p_decoder_dropout - - self.prenet = Prenet(n_mel_channels * n_frames_per_step, [prenet_dim, prenet_dim]) - - self.attention_rnn = nn.LSTMCell(prenet_dim + encoder_embedding_dim, attention_rnn_dim) - - self.attention_layer = Attention( - attention_rnn_dim, - encoder_embedding_dim, - attention_dim, - attention_location_n_filters, - attention_location_kernel_size, - ) - - self.decoder_rnn = nn.LSTMCell(attention_rnn_dim + encoder_embedding_dim, decoder_rnn_dim, 1) - - self.linear_projection = LinearNorm(decoder_rnn_dim + encoder_embedding_dim, n_mel_channels * n_frames_per_step) - - self.gate_layer = LinearNorm(decoder_rnn_dim + encoder_embedding_dim, 1, bias=True, w_init_gain="sigmoid") - - def get_go_frame(self, memory): - """Gets all zeros frames to use as first decoder input - PARAMS - ------ - memory: decoder outputs - - RETURNS - ------- - decoder_input: all zeros frames - """ - B = memory.size(0) - decoder_input = Variable(memory.data.new(B, self.n_mel_channels * self.n_frames_per_step).zero_()) - return decoder_input - - def initialize_decoder_states(self, memory, mask): - """Initializes attention rnn states, decoder rnn states, attention - weights, attention cumulative weights, attention context, stores memory - and stores processed memory - PARAMS - ------ - memory: Encoder outputs - mask: Mask for padded data if training, expects None for inference - """ - B = memory.size(0) - MAX_TIME = memory.size(1) - - self.attention_hidden = Variable(memory.data.new(B, self.attention_rnn_dim).zero_()) - self.attention_cell = Variable(memory.data.new(B, self.attention_rnn_dim).zero_()) - - self.decoder_hidden = Variable(memory.data.new(B, self.decoder_rnn_dim).zero_()) - self.decoder_cell = Variable(memory.data.new(B, self.decoder_rnn_dim).zero_()) - - self.attention_weights = Variable(memory.data.new(B, 
MAX_TIME).zero_()) - self.attention_weights_cum = Variable(memory.data.new(B, MAX_TIME).zero_()) - self.attention_context = Variable(memory.data.new(B, self.encoder_embedding_dim).zero_()) - - self.memory = memory - self.processed_memory = self.attention_layer.memory_layer(memory) - self.mask = mask - - def parse_decoder_inputs(self, decoder_inputs): - """Prepares decoder inputs, i.e. mel outputs - PARAMS - ------ - decode encoder_kernel_size=5, - encoder_n_convolutions=3, - encoder_embedding_dim=512,r_inputs: inputs used for teacher-forced training, i.e. mel-specs - - RETURNS - ------- - inputs: processed decoder inputs - - """ - # (B, n_mel_channels, T_out) -> (B, T_out, n_mel_channels) - decoder_inputs = decoder_inputs.transpose(1, 2) - decoder_inputs = decoder_inputs.view( - decoder_inputs.size(0), int(decoder_inputs.size(1) / self.n_frames_per_step), -1 - ) - # (B, T_out, n_mel_channels) -> (T_out, B, n_mel_channels) - decoder_inputs = decoder_inputs.transpose(0, 1) - return decoder_inputs - - def parse_decoder_outputs(self, mel_outputs, gate_outputs, alignments): - """Prepares decoder outputs for output - PARAMS - ------ - mel_outputs: - gate_outputs: gate output energies - alignments: - - RETURNS - ------- - mel_outputs: - gate_outpust: gate output energies - alignments: - """ - # (T_out, B) -> (B, T_out) - alignments = torch.stack(alignments).transpose(0, 1) - # (T_out, B) -> (B, T_out) - gate_outputs = torch.stack(gate_outputs).transpose(0, 1) - gate_outputs = gate_outputs.contiguous() - # (T_out, B, n_mel_channels) -> (B, T_out, n_mel_channels) - mel_outputs = torch.stack(mel_outputs).transpose(0, 1).contiguous() - # decouple frames per step - mel_outputs = mel_outputs.view(mel_outputs.size(0), -1, self.n_mel_channels) - # (B, T_out, n_mel_channels) -> (B, n_mel_channels, T_out) - mel_outputs = mel_outputs.transpose(1, 2) - - return mel_outputs, gate_outputs, alignments - - def decode(self, decoder_input): - """Decoder step using stored states, attention and memory - PARAMS - ------ - decoder_input: previous mel output - - RETURNS - ------- - mel_output: - gate_output: gate output energies - attention_weights: - """ - cell_input = torch.cat((decoder_input, self.attention_context), -1) - self.attention_hidden, self.attention_cell = self.attention_rnn( - cell_input, (self.attention_hidden, self.attention_cell) - ) - self.attention_hidden = F.dropout(self.attention_hidden, self.p_attention_dropout, self.training) - - attention_weights_cat = torch.cat( - (self.attention_weights.unsqueeze(1), self.attention_weights_cum.unsqueeze(1)), dim=1 - ) - self.attention_context, self.attention_weights = self.attention_layer( - self.attention_hidden, self.memory, self.processed_memory, attention_weights_cat, self.mask - ) - - self.attention_weights_cum += self.attention_weights - decoder_input = torch.cat((self.attention_hidden, self.attention_context), -1) - self.decoder_hidden, self.decoder_cell = self.decoder_rnn( - decoder_input, (self.decoder_hidden, self.decoder_cell) - ) - self.decoder_hidden = F.dropout(self.decoder_hidden, self.p_decoder_dropout, self.training) - - decoder_hidden_attention_context = torch.cat((self.decoder_hidden, self.attention_context), dim=1) - decoder_output = self.linear_projection(decoder_hidden_attention_context) - - gate_prediction = self.gate_layer(decoder_hidden_attention_context) - return decoder_output, gate_prediction, self.attention_weights - - def forward(self, memory, decoder_inputs, memory_lengths, device): - """Decoder forward pass for training - 
PARAMS - ------ - memory: Encoder outputs - decoder_inputs: Decoder inputs for teacher forcing. i.e. mel-specs - memory_lengths: Encoder output lengths for attention masking. - - RETURNS - ------- - mel_outputs: mel outputs from the decoder - gate_outputs: gate outputs from the decoder - alignments: sequence of attention weights from the decoder - """ - - decoder_input = self.get_go_frame(memory).unsqueeze(0) - decoder_inputs = self.parse_decoder_inputs(decoder_inputs) - decoder_inputs = torch.cat((decoder_input, decoder_inputs), dim=0) - decoder_inputs = self.prenet(decoder_inputs) - - self.initialize_decoder_states(memory, mask=~get_mask_from_lengths(memory_lengths, device)) - - mel_outputs, gate_outputs, alignments = [], [], [] - while len(mel_outputs) < decoder_inputs.size(0) - 1: - decoder_input = decoder_inputs[len(mel_outputs)] - mel_output, gate_output, attention_weights = self.decode(decoder_input) - mel_outputs += [mel_output.squeeze(1)] - gate_outputs += [gate_output.squeeze(1)] - alignments += [attention_weights] - - mel_outputs, gate_outputs, alignments = self.parse_decoder_outputs(mel_outputs, gate_outputs, alignments) - - return mel_outputs, gate_outputs, alignments - - def inference(self, memory, max_decoder_steps=None): - """Decoder inference - PARAMS - ------ - memory: Encoder outputs - - RETURNS - ------- - mel_outputs: mel outputs from the decoder - gate_outputs: gate outputs from the decoder - alignments: sequence of attention weights from the decoder - """ - if not max_decoder_steps: - # Use default max decoder steps if not given - max_decoder_steps = self.max_decoder_steps - - decoder_input = self.get_go_frame(memory) - - self.initialize_decoder_states(memory, mask=None) - - mel_outputs, gate_outputs, alignments = [], [], [] - while True: - decoder_input = self.prenet(decoder_input) - mel_output, gate_output, alignment = self.decode(decoder_input) - - mel_outputs += [mel_output.squeeze(1)] - gate_outputs += [gate_output] - alignments += [alignment] - - if torch.sigmoid(gate_output.data) > self.gate_threshold: - break - elif len(mel_outputs) == max_decoder_steps: - raise Exception( - "Warning! Reached max decoder steps. 
Either the model is low quality or the given sentence is too short/long" - ) - - decoder_input = mel_output - - mel_outputs, gate_outputs, alignments = self.parse_decoder_outputs(mel_outputs, gate_outputs, alignments) - - return mel_outputs, gate_outputs, alignments - - -class Tacotron2(nn.Module): - def __init__( - self, - mask_padding=True, - fp16_run=False, - n_mel_channels=80, - n_symbols=148, - symbols_embedding_dim=512, - encoder_kernel_size=5, - encoder_n_convolutions=3, - encoder_embedding_dim=512, - attention_rnn_dim=1024, - attention_dim=128, - attention_location_n_filters=32, - attention_location_kernel_size=31, - decoder_rnn_dim=1024, - prenet_dim=256, - max_decoder_steps=1000, - gate_threshold=0.5, - p_attention_dropout=0.1, - p_decoder_dropout=0.1, - postnet_embedding_dim=512, - postnet_kernel_size=5, - postnet_n_convolutions=5, - ): - super(Tacotron2, self).__init__() - self.mask_padding = mask_padding - self.fp16_run = fp16_run - self.n_mel_channels = n_mel_channels - self.n_frames_per_step = 1 - self.embedding = nn.Embedding(n_symbols, symbols_embedding_dim) - std = sqrt(2.0 / (n_symbols + symbols_embedding_dim)) - val = sqrt(3.0) * std # uniform bounds for std - self.embedding.weight.data.uniform_(-val, val) - self.encoder = Encoder(encoder_kernel_size, encoder_n_convolutions, encoder_embedding_dim) - self.decoder = Decoder( - n_mel_channels, - self.n_frames_per_step, - encoder_embedding_dim, - attention_dim, - attention_rnn_dim, - attention_location_n_filters, - attention_location_kernel_size, - decoder_rnn_dim, - prenet_dim, - max_decoder_steps, - gate_threshold, - p_attention_dropout, - p_decoder_dropout, - ) - self.postnet = Postnet(n_mel_channels, postnet_embedding_dim, postnet_kernel_size, postnet_n_convolutions) - - def parse_batch(self, batch): - text_padded, input_lengths, mel_padded, gate_padded, output_lengths = batch - text_padded = to_gpu(text_padded).long() - input_lengths = to_gpu(input_lengths).long() - max_len = torch.max(input_lengths.data).item() - mel_padded = to_gpu(mel_padded).float() - gate_padded = to_gpu(gate_padded).float() - output_lengths = to_gpu(output_lengths).long() - - return ((text_padded, input_lengths, mel_padded, max_len, output_lengths), (mel_padded, gate_padded)) - - def parse_output(self, outputs, output_lengths, mask_size, alignment_mask_size, device): - if self.mask_padding: - mask = ~get_mask_from_lengths(output_lengths, device, mask_size) - mask = mask.expand(self.n_mel_channels, mask.size(0), mask.size(1)) - mask = mask.permute(1, 0, 2) - - outputs[0].data.masked_fill_(mask, 0.0) - outputs[1].data.masked_fill_(mask, 0.0) - outputs[2].data.masked_fill_(mask[:, 0, :], 1e3) # gate energies - if outputs[3].size(2) != alignment_mask_size: - outputs[3] = nn.ConstantPad1d((0, alignment_mask_size - outputs[3].size(2)), 0)(outputs[3]) - - return outputs - - def forward(self, inputs, mask_size, alignment_mask_size): - text_inputs, text_lengths, mels, output_lengths = get_x(inputs) - device = text_inputs.device - - text_lengths, output_lengths = text_lengths.data, output_lengths.data - embedded_inputs = self.embedding(text_inputs).transpose(1, 2) - encoder_outputs = self.encoder(embedded_inputs, text_lengths) - mel_outputs, gate_outputs, alignments = self.decoder( - encoder_outputs, mels, memory_lengths=text_lengths, device=device - ) - mel_outputs_postnet = self.postnet(mel_outputs) - mel_outputs_postnet = mel_outputs + mel_outputs_postnet - - return self.parse_output( - [mel_outputs, mel_outputs_postnet, gate_outputs, alignments], - 
output_lengths, - mask_size, - alignment_mask_size, - device, - ) - - def inference(self, inputs, max_decoder_steps=None): - embedded_inputs = self.embedding(inputs).transpose(1, 2) - encoder_outputs = self.encoder.inference(embedded_inputs) - mel_outputs, gate_outputs, alignments = self.decoder.inference(encoder_outputs, max_decoder_steps) - - mel_outputs_postnet = self.postnet(mel_outputs) - mel_outputs_postnet = mel_outputs + mel_outputs_postnet - - return [mel_outputs, mel_outputs_postnet, gate_outputs, alignments] diff --git a/spaces/sarunas856/tinder/README.md b/spaces/sarunas856/tinder/README.md deleted file mode 100644 index b2a4bb9e4bd99dda3e228bb4d39a4c4d82f9ee8d..0000000000000000000000000000000000000000 --- a/spaces/sarunas856/tinder/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Tinder -emoji: 🐠 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.0.15 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/satozen/openai-whisper-large-v2/README.md b/spaces/satozen/openai-whisper-large-v2/README.md deleted file mode 100644 index b9b995da0309566f1f77b978dcce73507109d70d..0000000000000000000000000000000000000000 --- a/spaces/satozen/openai-whisper-large-v2/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Openai Whisper Large V2 -emoji: 🐢 -colorFrom: pink -colorTo: red -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sdeeas/ChuanhuChatGPT/custom.css b/spaces/sdeeas/ChuanhuChatGPT/custom.css deleted file mode 100644 index 5143eb138ea2469d8c457c71cb210fd3fb7cbe15..0000000000000000000000000000000000000000 --- a/spaces/sdeeas/ChuanhuChatGPT/custom.css +++ /dev/null @@ -1,162 +0,0 @@ -:root { - --chatbot-color-light: #F3F3F3; - --chatbot-color-dark: #121111; -} - -/* status_display */ -#status_display { - display: flex; - min-height: 2.5em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - font-family: monospace; - color: var(--body-text-color-subdued); -} - -#chuanhu_chatbot, #status_display { - transition: all 0.6s; -} -/* list */ -ol:not(.options), ul:not(.options) { - padding-inline-start: 2em !important; -} - -/* 亮色 */ -#chuanhu_chatbot { - background-color: var(--chatbot-color-light) !important; -} -[data-testid = "bot"] { - background-color: #FFFFFF !important; -} -[data-testid = "user"] { - background-color: #95EC69 !important; -} -/* 对话气泡 */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - border: none; - padding: var(--spacing-xl) !important; - font-size: var(--text-md) !important; - line-height: var(--line-md) !important; - min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); - min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); -} -[data-testid = "bot"] { - max-width: 85%; - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 85%; - width: auto !important; - border-bottom-right-radius: 0 !important; -} -/* 表格 */ -table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} -td,th { - border: 1.2px solid var(--border-color-primary) !important; - padding: 0.2em; -} -thead { - background-color: rgba(175,184,193,0.2); -} -thead th { - padding: .5em .2em; -} -/* 行内代码 */ -code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 
0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 代码块 */ -pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: hsla(0, 0%, 0%, 80%)!important; - border-radius: 10px; - padding: 1.4em 1.2em 0em 1.4em; - margin: 1.2em 2em 1.2em 0.5em; - color: #FFF; - box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2); -} -/* 代码高亮样式 */ -.highlight .hll { background-color: #49483e } -.highlight .c { color: #75715e } /* Comment */ -.highlight .err { color: #960050; background-color: #1e0010 } /* Error */ -.highlight .k { color: #66d9ef } /* Keyword */ -.highlight .l { color: #ae81ff } /* Literal */ -.highlight .n { color: #f8f8f2 } /* Name */ -.highlight .o { color: #f92672 } /* Operator */ -.highlight .p { color: #f8f8f2 } /* Punctuation */ -.highlight .ch { color: #75715e } /* Comment.Hashbang */ -.highlight .cm { color: #75715e } /* Comment.Multiline */ -.highlight .cp { color: #75715e } /* Comment.Preproc */ -.highlight .cpf { color: #75715e } /* Comment.PreprocFile */ -.highlight .c1 { color: #75715e } /* Comment.Single */ -.highlight .cs { color: #75715e } /* Comment.Special */ -.highlight .gd { color: #f92672 } /* Generic.Deleted */ -.highlight .ge { font-style: italic } /* Generic.Emph */ -.highlight .gi { color: #a6e22e } /* Generic.Inserted */ -.highlight .gs { font-weight: bold } /* Generic.Strong */ -.highlight .gu { color: #75715e } /* Generic.Subheading */ -.highlight .kc { color: #66d9ef } /* Keyword.Constant */ -.highlight .kd { color: #66d9ef } /* Keyword.Declaration */ -.highlight .kn { color: #f92672 } /* Keyword.Namespace */ -.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */ -.highlight .kr { color: #66d9ef } /* Keyword.Reserved */ -.highlight .kt { color: #66d9ef } /* Keyword.Type */ -.highlight .ld { color: #e6db74 } /* Literal.Date */ -.highlight .m { color: #ae81ff } /* Literal.Number */ -.highlight .s { color: #e6db74 } /* Literal.String */ -.highlight .na { color: #a6e22e } /* Name.Attribute */ -.highlight .nb { color: #f8f8f2 } /* Name.Builtin */ -.highlight .nc { color: #a6e22e } /* Name.Class */ -.highlight .no { color: #66d9ef } /* Name.Constant */ -.highlight .nd { color: #a6e22e } /* Name.Decorator */ -.highlight .ni { color: #f8f8f2 } /* Name.Entity */ -.highlight .ne { color: #a6e22e } /* Name.Exception */ -.highlight .nf { color: #a6e22e } /* Name.Function */ -.highlight .nl { color: #f8f8f2 } /* Name.Label */ -.highlight .nn { color: #f8f8f2 } /* Name.Namespace */ -.highlight .nx { color: #a6e22e } /* Name.Other */ -.highlight .py { color: #f8f8f2 } /* Name.Property */ -.highlight .nt { color: #f92672 } /* Name.Tag */ -.highlight .nv { color: #f8f8f2 } /* Name.Variable */ -.highlight .ow { color: #f92672 } /* Operator.Word */ -.highlight .w { color: #f8f8f2 } /* Text.Whitespace */ -.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */ -.highlight .mf { color: #ae81ff } /* Literal.Number.Float */ -.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */ -.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */ -.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */ -.highlight .sa { color: #e6db74 } /* Literal.String.Affix */ -.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */ -.highlight .sc { color: #e6db74 } /* Literal.String.Char */ -.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */ -.highlight .sd { color: #e6db74 } /* Literal.String.Doc */ -.highlight .s2 { color: #e6db74 } /* Literal.String.Double */ -.highlight .se { color: #ae81ff } /* 
Literal.String.Escape */ -.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */ -.highlight .si { color: #e6db74 } /* Literal.String.Interpol */ -.highlight .sx { color: #e6db74 } /* Literal.String.Other */ -.highlight .sr { color: #e6db74 } /* Literal.String.Regex */ -.highlight .s1 { color: #e6db74 } /* Literal.String.Single */ -.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */ -.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */ -.highlight .fm { color: #a6e22e } /* Name.Function.Magic */ -.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */ -.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */ -.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */ -.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */ -.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */ diff --git a/spaces/shi-labs/Matting-Anything/GroundingDINO/groundingdino/util/logger.py b/spaces/shi-labs/Matting-Anything/GroundingDINO/groundingdino/util/logger.py deleted file mode 100644 index 18145f54c927abd59b95f3fa6e6da8002bc2ce97..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/Matting-Anything/GroundingDINO/groundingdino/util/logger.py +++ /dev/null @@ -1,93 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import functools -import logging -import os -import sys - -from termcolor import colored - - -class _ColorfulFormatter(logging.Formatter): - def __init__(self, *args, **kwargs): - self._root_name = kwargs.pop("root_name") + "." - self._abbrev_name = kwargs.pop("abbrev_name", "") - if len(self._abbrev_name): - self._abbrev_name = self._abbrev_name + "." - super(_ColorfulFormatter, self).__init__(*args, **kwargs) - - def formatMessage(self, record): - record.name = record.name.replace(self._root_name, self._abbrev_name) - log = super(_ColorfulFormatter, self).formatMessage(record) - if record.levelno == logging.WARNING: - prefix = colored("WARNING", "red", attrs=["blink"]) - elif record.levelno == logging.ERROR or record.levelno == logging.CRITICAL: - prefix = colored("ERROR", "red", attrs=["blink", "underline"]) - else: - return log - return prefix + " " + log - - -# so that calling setup_logger multiple times won't add many handlers -@functools.lru_cache() -def setup_logger(output=None, distributed_rank=0, *, color=True, name="imagenet", abbrev_name=None): - """ - Initialize the detectron2 logger and set its verbosity level to "INFO". - - Args: - output (str): a file name or a directory to save log. If None, will not save log file. - If ends with ".txt" or ".log", assumed to be a file name. - Otherwise, logs will be saved to `output/log.txt`. 
- name (str): the root module name of this logger - - Returns: - logging.Logger: a logger - """ - logger = logging.getLogger(name) - logger.setLevel(logging.DEBUG) - logger.propagate = False - - if abbrev_name is None: - abbrev_name = name - - plain_formatter = logging.Formatter( - "[%(asctime)s.%(msecs)03d]: %(message)s", datefmt="%m/%d %H:%M:%S" - ) - # stdout logging: master only - if distributed_rank == 0: - ch = logging.StreamHandler(stream=sys.stdout) - ch.setLevel(logging.DEBUG) - if color: - formatter = _ColorfulFormatter( - colored("[%(asctime)s.%(msecs)03d]: ", "green") + "%(message)s", - datefmt="%m/%d %H:%M:%S", - root_name=name, - abbrev_name=str(abbrev_name), - ) - else: - formatter = plain_formatter - ch.setFormatter(formatter) - logger.addHandler(ch) - - # file logging: all workers - if output is not None: - if output.endswith(".txt") or output.endswith(".log"): - filename = output - else: - filename = os.path.join(output, "log.txt") - if distributed_rank > 0: - filename = filename + f".rank{distributed_rank}" - os.makedirs(os.path.dirname(filename), exist_ok=True) - - fh = logging.StreamHandler(_cached_log_stream(filename)) - fh.setLevel(logging.DEBUG) - fh.setFormatter(plain_formatter) - logger.addHandler(fh) - - return logger - - -# cache the opened file object, so that different calls to `setup_logger` -# with the same file name can safely write to the same file. -@functools.lru_cache(maxsize=None) -def _cached_log_stream(filename): - return open(filename, "a") diff --git a/spaces/sidharthism/fashion-eye/models/stylegan2/__init__.py b/spaces/sidharthism/fashion-eye/models/stylegan2/__init__.py deleted file mode 100644 index 87739d5c18fe051149018f275983ebf6380c8b54..0000000000000000000000000000000000000000 --- a/spaces/sidharthism/fashion-eye/models/stylegan2/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -import sys -import os -import shutil -import glob -import platform -from pathlib import Path - -current_path = os.getcwd() - -module_path = Path(__file__).parent / 'stylegan2-pytorch' -sys.path.append(str(module_path.resolve())) -os.chdir(module_path) - -from model import Generator - -os.chdir(current_path) \ No newline at end of file diff --git a/spaces/silk-road/ChatHaruhi/characters/Megumi/gradio_header.md b/spaces/silk-road/ChatHaruhi/characters/Megumi/gradio_header.md deleted file mode 100644 index 954589d5e5f5385906fb47e9cbaedd6f85c428af..0000000000000000000000000000000000000000 --- a/spaces/silk-road/ChatHaruhi/characters/Megumi/gradio_header.md +++ /dev/null @@ -1,8 +0,0 @@ -## Chat加藤惠 - -项目地址 [https://github.com/LC1332/Chat-Haruhi-Suzumiya](https://github.com/LC1332/Chat-Haruhi-Suzumiya) -骆驼项目地址 [https://github.com/LC1332/Luotuo-Chinese-LLM](https://github.com/LC1332/Luotuo-Chinese-LLM) - -争取模仿《路人女主的养成方法》中的加藤惠风格的ChatBot -语料由DataWhale 5月学习2群的“逃出现实”(陈昊宇)提供 -欢迎更多同学来一起提供语料! 
\ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Conduce como un profesional con Extreme Car Driving Simulator MOD APK Consigue todos los autos y modos de juego.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Conduce como un profesional con Extreme Car Driving Simulator MOD APK Consigue todos los autos y modos de juego.md deleted file mode 100644 index 416ad1b4a8c194d82ec3f02389649c59fd989bd8..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Conduce como un profesional con Extreme Car Driving Simulator MOD APK Consigue todos los autos y modos de juego.md +++ /dev/null @@ -1,107 +0,0 @@ - -

        Extreme Car Driving Simulator Mod: A Review

        -

        Do you love driving cars and racing with other players? Do you want to experience the thrill of driving the most advanced and realistic vehicles in a stunning city? If yes, then you should try Extreme Car Driving Simulator, a popular racing game for Android devices. And if you want to make the game even more fun and exciting, you should use Extreme Car Driving Simulator Mod, a modified version of the game that gives you unlimited money, VIP access, and all cars unlocked. In this article, we will review Extreme Car Driving Simulator and its mod, and show you how to download and install it on your device.

        -

        What is Extreme Car Driving Simulator?

        -

        Extreme Car Driving Simulator is a racing game developed by AxesInMotion Racing, a studio based in Spain. The game was released in 2015 and has since gained over 100 million downloads on Google Play Store. The game is rated 4.1 out of 5 stars by more than 3 million users.

        -

        extreme car driving simulator mod


        Download File ✑ ✑ ✑ https://ssurll.com/2uNY2Z



        -

        Features of the game

        -

        The game has many features that make it one of the best car driving simulators on the market. Some of these features are:

        -
          -
        • Realistic physics and graphics: The game uses a powerful physics engine and high-quality graphics to create a realistic driving experience. You can see the damage effects on your car, feel the speed and acceleration, and hear the sound of the engine and tires.
        • -
        • Multiple game modes: The game offers different game modes to suit your preferences. You can choose from Free Mode, Traffic Mode, Checkpoint Mode, or Drift Mode. Each mode has its own challenges and objectives.
        • -
        • Huge open world: The game features a huge open world map that you can explore freely. You can drive around the city, the airport, the desert, or the off-road area. You can also find ramps, loops, bridges, and other obstacles to perform stunts and tricks.
        • -
        • Customizable cars: The game has a wide range of cars that you can choose from. You can drive sports cars, supercars, SUVs, trucks, and more. You can also customize your cars with different colors, wheels, spoilers, and stickers.
        • -
        -

        How to play the game

        -

        The game is easy to play and control. You can use the buttons on the screen to steer, accelerate, brake, or reverse your car. You can also use the tilt option to tilt your device to control your car. You can switch between different camera views to see your car from different angles. You can also use the mini-map to see your location and destination.

        -

        What is Extreme Car Driving Simulator Mod?

        -

        Extreme Car Driving Simulator Mod is a modified version of the original game that gives you some extra benefits and features. The mod is created by third-party developers who modify the game files to unlock some features that are otherwise restricted or paid in the original game.

        -

        Benefits of using the mod

        -

        Some of the benefits of using the mod are:

        -

        -
          -
        • Unlimited money: The mod gives you unlimited money that you can use to buy and upgrade any car you want. You don't have to worry about running out of money or earning it by completing missions or watching ads.
        • -
        • VIP access: The mod gives you VIP access that grants you some exclusive features and privileges. For example, you can access all cars without unlocking them, remove ads from the game, get double rewards for missions, and more.
        • -
        • All cars unlocked: The mod unlocks all cars in the game for you. You can drive any car you want without having to complete certain levels or pay real money.
        • -
        -

        How to download and install the mod

        -

        To download and install the mod on your device, you need to follow these steps:

        -
          -
        1. Uninstall the original game from your device if you have it
        2. Download the mod APK file from a trusted source. You can search for "Extreme Car Driving Simulator Mod APK" on Google or use this link:
        3. -
        4. Enable the installation of apps from unknown sources on your device. You can do this by going to Settings > Security > Unknown Sources and toggling it on.
        5. -
        6. Locate the downloaded mod APK file on your device and tap on it to install it.
        7. -
        8. Wait for the installation to finish and then launch the game from your app drawer or home screen.
        9. -
        10. Enjoy the game with the mod features enabled.
        11. -
        -

        Conclusion

        -

        Extreme Car Driving Simulator is a fun and realistic racing game that lets you drive various cars in a huge open world. You can enjoy different game modes, customize your cars, and perform stunts and tricks. If you want to enhance your gaming experience, you can use Extreme Car Driving Simulator Mod, a modified version of the game that gives you unlimited money, VIP access, and all cars unlocked. You can download and install the mod easily by following the steps we provided in this article. We hope you found this article helpful and informative. Happy driving!

        -

        FAQs

        -

        Here are some frequently asked questions about Extreme Car Driving Simulator and its mod:

        -
          -
        1. Is Extreme Car Driving Simulator Mod safe to use?
        2. -

          Yes, the mod is safe to use as long as you download it from a reliable source. However, you should be aware that using the mod may violate the terms and conditions of the original game and may result in your account being banned or suspended. Use the mod at your own risk.

          -
        3. Can I play Extreme Car Driving Simulator online with other players?
        4. -

          No, Extreme Car Driving Simulator is an offline game that does not support online multiplayer mode. You can only play the game solo or with AI traffic.

          -
        5. How can I update Extreme Car Driving Simulator Mod?
        6. -

          To update Extreme Car Driving Simulator Mod, you need to uninstall the old version of the mod and install the new version of the mod from the same source. You may lose your progress and data if you do this, so make sure you back up your game before updating.

          -
        7. What are some alternatives to Extreme Car Driving Simulator?
        8. -

          If you are looking for some other racing games that are similar to Extreme Car Driving Simulator, you can try these games:

          -
            -
          • Real Racing 3: A realistic racing game that features licensed cars, tracks, and events.
          • -
          • Asphalt 9: Legends: A fast-paced arcade racing game that features stunning graphics, dynamic weather, and online multiplayer mode.
          • -
          • CarX Drift Racing 2: A drifting game that lets you customize your car, tune your engine, and compete with other players.
          • -
          -
        9. How can I contact the developers of Extreme Car Driving Simulator?
        10. -

          If you have any questions, feedback, or issues regarding Extreme Car Driving Simulator, you can contact the developers by emailing them at support@axesinmotion.com or visiting their website at https://www.axesinmotion.com/.

          -
        - : https://apkmody.io/games/extreme-car-driving-simulator-mod-apk

        -
        -
        \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/GTA 5 5.0.9 APK Download The Best Way to Play Grand Theft Auto on Your Smartphone or Computer.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/GTA 5 5.0.9 APK Download The Best Way to Play Grand Theft Auto on Your Smartphone or Computer.md deleted file mode 100644 index 53c7186351c60353df96b23af89a22fdd7002bc5..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/GTA 5 5.0.9 APK Download The Best Way to Play Grand Theft Auto on Your Smartphone or Computer.md +++ /dev/null @@ -1,85 +0,0 @@ - -

        GTA 5 5.0.9 APK Download: How to Play GTA 5 on Android Devices

        -

        Are you a fan of Grand Theft Auto V, one of the most popular and iconic video games of all time? Do you want to experience the thrilling and immersive gameplay of GTA 5 on your Android device? If yes, then you are in luck, because in this article, we will show you how to download and install GTA 5 5.0.9 APK, the latest version of the game for Android devices. We will also show you how to play GTA 5 on your PC or Mac with BlueStacks, an app player that lets you run Android games and apps on your computer.

        -

        Introduction

        -

        Grand Theft Auto V, or GTA 5 for short, is an action-adventure game developed by Rockstar Games and released in 2013 for PlayStation 3 and Xbox 360, with later releases for PlayStation 4, Xbox One, and PC. The game is set in the fictional city of Los Santos, a parody of Los Angeles, and follows the lives of three protagonists: Michael, a retired bank robber; Trevor, a psychopathic criminal; and Franklin, a young street hustler. The game features a vast open world that players can explore and interact with in various ways, such as driving vehicles, shooting weapons, completing missions, engaging in heists, and more.

        -

        gta 5 5.0.9 apk download


        Download File > https://ssurll.com/2uNZ7X



        -

        What is GTA 5 5.0.9 APK?

        -

        GTA 5 5.0.9 APK is a modified version of the original GTA 5 game that allows you to play it on your Android device without any restrictions or limitations. The APK file is a package that contains all the necessary files and data to run the game on your device. By downloading and installing GTA 5 5.0.9 APK, you can enjoy the full features and functions of the game, such as high-quality graphics, realistic physics, smooth controls, online multiplayer mode, and more.

        -

        Why should you download GTA 5 5.0.9 APK?

        -

        There are many reasons why you should download GTA 5 5.0.9 APK for your Android device. Here are some of them:

        -
          -
        • You can play GTA 5 anytime and anywhere on your device without needing a console or a PC.
        • -
        • You can save your progress and resume it later on any device.
        • -
        • You can customize the game settings according to your preferences and device specifications.
        • -
        • You can access new features and updates that are not available in the official version of the game.
        • -
        • You can avoid any ads or in-app purchases that may interrupt your gameplay.
        • -
        -

        How to download GTA 5 5.0.9 APK for Android devices

        -

        If you are interested in downloading GTA 5 5.0.9 APK for your Android device, you need to follow these simple steps:

        -

        Step 1: Enable unknown sources on your device

        -

        Since GTA 5 5.0.9 APK is not available on the Google Play Store, you need to enable unknown sources on your device to allow it to install apps from third-party sources.

        -


        With BlueStacks, you can also play GTA 5 on your computer with ease. In this article, we have shown you how to download and install GTA 5 5.0.9 APK for Android devices, and how to play GTA 5 on PC and Mac with BlueStacks. We hope you have found this article helpful and informative, and that you have enjoyed playing GTA 5 on your device of choice.

        -

        If you have any questions, comments, or feedback, feel free to leave them below. We would love to hear from you and help you out. Thank you for reading and happy gaming!

        -

        Summary of the main points

        -

        Here are the main points of this article:

        -
          -
        • GTA 5 5.0.9 APK is a modified version of the original GTA 5 game that allows you to play it on your Android device without any restrictions or limitations.
        • -
        • To download and install GTA 5 5.0.9 APK for Android devices, you need to enable unknown sources on your device, download the APK file from a trusted source, install the APK file on your device, and launch the game.
        • -
        • To play GTA 5 on PC and Mac with BlueStacks, you need to download and install BlueStacks on your computer, download the GTA 5 5.0.9 APK file from the same link, open it with the BlueStacks APK installer, and launch the game.
        • -
        -

        Call to action

        -

        If you liked this article, please share it with your friends and family who are also fans of GTA 5. You can also subscribe to our newsletter to get more articles like this delivered to your inbox. And don't forget to follow us on social media for more updates and tips on gaming and technology.

        -

        FAQs

        -

        Here are some frequently asked questions about GTA 5 5.0.9 APK:

        -

        Q: Is GTA 5 5.0.9 APK safe to download and install?

        -

        A: Yes, GTA 5 5.0.9 APK is safe to download and install, as long as you get it from a trusted source like [this link]. However, you should always be careful when downloading and installing apps from unknown sources, as they may contain viruses or malware that can harm your device.
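A simple precaution is to compare the downloaded file's SHA-256 checksum with the value published by the site you got it from, and only install if they match. The sketch below uses Python's standard library; both the file name and the expected hash are placeholders, since no official checksum is published in this article.

```python
import hashlib

apk_path = "gta5-5.0.9.apk"                       # placeholder: the file you downloaded
expected_sha256 = "hash-published-by-the-source"  # placeholder: value from the download page

digest = hashlib.sha256()
with open(apk_path, "rb") as f:
    # Read in 1 MB chunks so large files do not have to fit in memory.
    for chunk in iter(lambda: f.read(1024 * 1024), b""):
        digest.update(chunk)

if digest.hexdigest() == expected_sha256.lower():
    print("Checksum matches the published value.")
else:
    print("Checksum mismatch: do not install this file.")
```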

        -

        Q: Do I need to root my device to play GTA 5 5.0.9 APK?

        -

        A: No, you do not need to root your device to play GTA 5 5.0.9 APK. You just need to enable unknown sources in your device settings and follow the steps mentioned above.

        -

        Q: How much storage space do I need to play GTA 5 5.0.9 APK?

        -

        A: You need at least 3 GB of free storage space on your device to play GTA 5 5.0.9 APK, as the APK file size is about 2.6 GB and the game data size is about 400 MB.
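If you want to verify that before installing, the short check below adds the sizes quoted above and compares the total with the free space on a storage path; the 2.6 GB and 400 MB figures are taken from this article, and the path is only an example.

```python
import shutil

APK_SIZE_GB = 2.6    # APK size quoted in this article
DATA_SIZE_GB = 0.4   # about 400 MB of game data
required_gb = APK_SIZE_GB + DATA_SIZE_GB  # roughly 3 GB in total

# Replace "/" with the mount point of the storage you plan to install to.
free_gb = shutil.disk_usage("/").free / (1024 ** 3)

print(f"Required: {required_gb:.1f} GB, free: {free_gb:.1f} GB")
print("Enough space." if free_gb >= required_gb else "Not enough space.")
```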

        -

        Q: Can I play GTA 5 online with GTA 5 5.0.9 APK?

        -

        A: Yes, you can play GTA 5 online with GTA 5 5.0.9 APK, as the game supports online multiplayer mode where you can join other players in various activities and missions.

        -

        Q: Can I play GTA 5 with a controller or a keyboard and mouse with GTA 5 5.0.9 APK?

        -

        A: Yes, you can play GTA 5 with a controller or a keyboard and mouse with GTA 5 5.0.9 APK, as the game supports various control options that you can customize according to your preferences.

        401be4b1e0
        -
        -
        \ No newline at end of file diff --git a/spaces/simsantonioii/MusicGen-Continuation/audiocraft/utils/autocast.py b/spaces/simsantonioii/MusicGen-Continuation/audiocraft/utils/autocast.py deleted file mode 100644 index ed644843bb37cf8a92a20fbd51d6cebaa43b9a08..0000000000000000000000000000000000000000 --- a/spaces/simsantonioii/MusicGen-Continuation/audiocraft/utils/autocast.py +++ /dev/null @@ -1,40 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch - - -class TorchAutocast: - """TorchAutocast utility class. - Allows you to enable and disable autocast. This is specially useful - when dealing with different architectures and clusters with different - levels of support. - - Args: - enabled (bool): Whether to enable torch.autocast or not. - args: Additional args for torch.autocast. - kwargs: Additional kwargs for torch.autocast - """ - def __init__(self, enabled: bool, *args, **kwargs): - self.autocast = torch.autocast(*args, **kwargs) if enabled else None - - def __enter__(self): - if self.autocast is None: - return - try: - self.autocast.__enter__() - except RuntimeError: - device = self.autocast.device - dtype = self.autocast.fast_dtype - raise RuntimeError( - f"There was an error autocasting with dtype={dtype} device={device}\n" - "If you are on the FAIR Cluster, you might need to use autocast_dtype=float16" - ) - - def __exit__(self, *args, **kwargs): - if self.autocast is None: - return - self.autocast.__exit__(*args, **kwargs) diff --git a/spaces/skf15963/summary/fengshen/models/deepVAE/utils.py b/spaces/skf15963/summary/fengshen/models/deepVAE/utils.py deleted file mode 100644 index 7ffc0a407cf472a489d8fd0b893002cd55208db9..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/models/deepVAE/utils.py +++ /dev/null @@ -1,134 +0,0 @@ -# coding=utf-8 -# Copyright 2022 IDEA-CCNL The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" PyTorch Della model. """ - -import torch -import torch.nn.functional as F -from torch.distributions import Bernoulli - - -def enforce_repetition_penalty(lprobs, prev_output_tokens, repetition_penalty=1.5): - """repetition penalty (from CTRL paper https://arxiv.org/abs/1909.05858). 
""" - for i in range(len(prev_output_tokens)): - for previous_token in set(prev_output_tokens[i]): - # if score < 0 then repetition penalty has to multiplied to reduce the previous token probability - if lprobs[i, previous_token] < 0: - lprobs[i, previous_token] *= repetition_penalty - else: - lprobs[i, previous_token] /= repetition_penalty - - -def top_k_top_p_filtering(logits, top_k=0, top_p=0.0, filter_value=-float('Inf')): - """ Filter a distribution of logits using top-k and/or nucleus (top-p) filtering - Args: - logits: logits distribution shape (vocabulary size) - top_k > 0: keep only top k tokens with highest probability (top-k filtering). - top_p > 0.0: keep the top tokens with cumulative probability >= top_p (nucleus filtering). - Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751) - From: https://gist.github.com/thomwolf/1a5a29f6962089e871b94cbd09daf317 - """ - # assert logits.dim() == 1# batch size 1 for now - could be updated for more but the code would be less clear - top_k = min(top_k, logits.size(-1)) # Safety check - if top_k > 0: - # Remove all tokens with a probability less than the last token of the top-k - indices_to_remove = logits < torch.topk(logits, top_k)[0][..., -1, None] - logits[indices_to_remove] = filter_value - - if top_p > 0.0: - sorted_logits, sorted_indices = torch.sort(logits, dim=-1, descending=True) - cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1) - - # Remove tokens with cumulative probability above the threshold - sorted_indices_to_remove = cumulative_probs > top_p - # Shift the indices to the right to keep also the first token above the threshold - sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone() - sorted_indices_to_remove[..., 0] = 0 - - for i in range(sorted_indices.size()[0]): - indices_to_remove = sorted_indices[i][sorted_indices_to_remove[i]] - logits[i][indices_to_remove] = filter_value - # indices_to_remove = sorted_indices[sorted_indices_to_remove] - # logits[indices_to_remove] = filter_value - return logits - - -def word_drop(x, p, unk_token): - x_ = x.detach().clone() - mask = Bernoulli(1. 
- p).sample(x.shape) - x_[mask == 0] = unk_token - return x_ - - -def log_sum_exp(value, dim=None, keepdim=False): - """Numerically stable implementation of the operation - value.exp().sum(dim, keepdim).log() - """ - if dim is not None: - m, _ = torch.max(value, dim=dim, keepdim=True) - value0 = value - m - if keepdim is False: - m = m.squeeze(dim) - return m + torch.log(torch.sum(torch.exp(value0), dim=dim, keepdim=keepdim)) - else: - m = torch.max(value) - sum_exp = torch.sum(torch.exp(value - m)) - return m + torch.log(sum_exp) - - -def connect(mean, logvar, nsamples=1, sample=True, clip=False, min_clip_val=-1., beta_logvar=1.): - """ - Returns: Tensor1, Tensor2 - Tensor1: the tensor latent z with shape [batch, nsamples, nz] - """ - # (batch, nsamples, nz) - if sample: - if clip: - # NOTE: clip the logvar here to see if we can force z to be more distant - logvar = torch.clip(logvar, min=min_clip_val) - z = reparameterize(mean, logvar, nsamples, beta_logvar) - else: - batch_size, nz = mean.size() - z = mean.unsqueeze(1).expand(batch_size, nsamples, nz) - if nsamples == 1: - z = z.squeeze(dim=1) - return z - - -def reparameterize(mu, logvar, nsamples=1, beta_logvar=1.): - """sample from posterior Gaussian family - Args: - mu: Tensor - Mean of gaussian distribution with shape (batch, nz) - logvar: Tensor - logvar of gaussian distibution with shape (batch, nz) - Returns: Tensor - Sampled z with shape (batch, nsamples, nz) - """ - batch_size, nz = mu.size() - std = logvar.mul(0.5).exp().mul(beta_logvar) - - mu_expd = mu.unsqueeze(1).expand(batch_size, nsamples, nz) - std_expd = std.unsqueeze(1).expand(batch_size, nsamples, nz) - - eps = torch.zeros_like(std_expd).normal_() - - return mu_expd + torch.mul(eps, std_expd) - - -def compute_kl_loss(mean1, logvar1, mean2, logvar2): - '''adapted from adaVAE implementation https://github.com/ImKeTT/adavae/blob/main/src/adapters/vae.py#L1627''' - exponential = logvar1 - logvar2 - torch.pow(mean1 - mean2, 2) / logvar2.exp() - torch.exp(logvar1 - logvar2) + 1 - result = -0.5 * torch.sum(exponential, tuple(range(1, len(exponential.shape)))) - return result diff --git a/spaces/skf15963/summary/fengshen/models/model_utils.py b/spaces/skf15963/summary/fengshen/models/model_utils.py deleted file mode 100644 index 65699c45b660e17e05d116a04ae68911acea4b35..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/models/model_utils.py +++ /dev/null @@ -1,251 +0,0 @@ -from pytorch_lightning import LightningModule -from pytorch_lightning.strategies import DeepSpeedStrategy -from deepspeed.ops.adam import DeepSpeedCPUAdam, FusedAdam -from transformers.optimization import AdamW, TYPE_TO_SCHEDULER_FUNCTION -from torch.optim import Optimizer -from torch.optim.lr_scheduler import _LRScheduler -from transformers.trainer_utils import SchedulerType -from typing import Optional, Union -import warnings -import types - - -def add_module_args(parent_args): - parser = parent_args.add_argument_group('Basic Module') - parser.add_argument('--learning_rate', default=5e-5, type=float) - parser.add_argument('--min_learning_rate', default=1e-7, type=float) - parser.add_argument('--lr_decay_steps', default=0, type=int) - # lr decay的时候会依赖total_steps,这里设置的是total_steps的比例,比如我只需要前50%步做decay,ratio设置为0.5 - parser.add_argument('--lr_decay_ratio', default=1.0, type=float) - parser.add_argument('--warmup_steps', default=0, type=int) - parser.add_argument('--warmup_ratio', default=0.1, type=float) - parser.add_argument('--weight_decay', default=1e-1, type=float) - 
parser.add_argument('--adam_beta1', default=0.9, type=float) - parser.add_argument('--adam_beta2', default=0.999, type=float) - parser.add_argument('--adam_epsilon', default=1e-8, type=float) - parser.add_argument('--model_path', default=None, type=str) - parser.add_argument('--scheduler_type', default='polynomial', type=str) - return parent_args - - -def add_inverse_square_args(parent_args): - parser = parent_args.add_argument_group('Basic Module') - parser.add_argument('--warmup_min_lr', default=1e-9, type=float) - parser.add_argument('--warmup_max_lr', default=1e-4, type=float) - - return parent_args - - -def get_default_update_params(pl_model: LightningModule): - no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight', 'layer_norm.', 'layernorm.'] - optimizer_grouped_params = [ - {'params': [p for n, p in pl_model.named_parameters() if not any( - nd in n for nd in no_decay) and p.requires_grad], 'weight_decay': pl_model.hparams.weight_decay}, - {'params': [p for n, p in pl_model.named_parameters() if any( - nd in n for nd in no_decay) and p.requires_grad], 'weight_decay': 0.0} - ] - return optimizer_grouped_params - - -def configure_optimizers(pl_model: LightningModule, model_params=None): - ''' - Args: - pl_model: lightning module - model_params: 需要优化的模型参数 - ''' - # get params that optimizer need - if model_params is None: - optimizer_grouped_params = get_default_update_params(pl_model) - else: - optimizer_grouped_params = model_params - # Configure optimizer. - if isinstance(pl_model.trainer.strategy, DeepSpeedStrategy): - if 'offload_optimizer' in pl_model.trainer.strategy.config['zero_optimization']: - optimizer = DeepSpeedCPUAdam( - optimizer_grouped_params, adamw_mode=True, - lr=pl_model.hparams.learning_rate, - betas=(pl_model.hparams.adam_beta1, pl_model.hparams.adam_beta2), eps=pl_model.hparams.adam_epsilon) - else: - optimizer = FusedAdam( - optimizer_grouped_params, adam_w_mode=True, - lr=pl_model.hparams.learning_rate, - betas=(pl_model.hparams.adam_beta1, pl_model.hparams.adam_beta2), eps=pl_model.hparams.adam_epsilon) - # elif isinstance(pl_model.trainer.strategy, ColossalAIStrategy): - # from colossalai.nn.optimizer import HybridAdam - # optimizer = HybridAdam( - # optimizer_grouped_params, - # lr=pl_model.hparams.learning_rate, - # betas=(pl_model.hparams.adam_beta1, pl_model.hparams.adam_beta2), - # eps=pl_model.hparams.adam_epsilon) - else: - optimizer = AdamW(optimizer_grouped_params, lr=pl_model.hparams.learning_rate, - betas=(pl_model.hparams.adam_beta1, pl_model.hparams.adam_beta2), - eps=pl_model.hparams.adam_epsilon) - # Configure learning rate scheduler. 
- - warmup_steps = pl_model.hparams.warmup_ratio * \ - pl_model.total_steps if pl_model.hparams.warmup_steps == 0 else pl_model.hparams.warmup_steps - - if pl_model.hparams.scheduler_type == "inverse_sqrt": - scheduler = inverse_square_root_schedule(optimizer=optimizer, - num_warmup_steps=warmup_steps, lr_min=pl_model.hparams.warmup_min_lr, lr_max=pl_model.hparams.warmup_max_lr) - else: - total_steps = pl_model.hparams.lr_decay_ratio * \ - pl_model.total_steps if pl_model.hparams.lr_decay_steps == 0 else pl_model.hparams.lr_decay_steps - scheduler = get_scheduler(name=pl_model.hparams.scheduler_type, optimizer=optimizer, - num_warmup_steps=warmup_steps, num_training_steps=total_steps, - lr_end=pl_model.hparams.min_learning_rate) - scheduler = {"scheduler": scheduler, "interval": "step", "frequency": 1} - return [optimizer], [scheduler] - - -def inverse_square_root_schedule( - optimizer: Optimizer, - num_warmup_steps: int = 4000, - lr_min=1e-9, - lr_max=1e-4, - power=0.5, - last_epoch: int = -1): - - lr_init = optimizer.defaults["lr"] - if (lr_min > lr_max): - raise ValueError(f"lr_min ({lr_min}) must be be smaller than lr_max ({lr_max})") - - lr_step = (lr_max - lr_init) / num_warmup_steps - decay_factor = lr_max * num_warmup_steps**power - - def lr_lambda(current_step: int): - # 自定义函数 - if current_step < num_warmup_steps: - return lr_step * current_step - return decay_factor * current_step ** (-power) - - return Direct_LR(optimizer, lr_lambda, last_epoch, True) - - -class Direct_LR(_LRScheduler): - """ - Modified from LambdaLR - """ - - def __init__(self, optimizer, lr_lambda, last_epoch=-1, warmup_steps=4000, verbose=False): - self.optimizer = optimizer - self.warmup_steps = warmup_steps - if not isinstance(lr_lambda, list) and not isinstance(lr_lambda, tuple): - self.lr_lambdas = [lr_lambda] * len(optimizer.param_groups) - else: - if len(lr_lambda) != len(optimizer.param_groups): - raise ValueError("Expected {} lr_lambdas, but got {}".format( - len(optimizer.param_groups), len(lr_lambda))) - self.lr_lambdas = list(lr_lambda) - super(Direct_LR, self).__init__(optimizer, last_epoch, verbose) - - def state_dict(self): - """Returns the state of the scheduler as a :class:`dict`. - - It contains an entry for every variable in self.__dict__ which - is not the optimizer. - The learning rate lambda functions will only be saved if they are callable objects - and not if they are functions or lambdas. - - When saving or loading the scheduler, please make sure to also save or load the state of the optimizer. - """ - - state_dict = {key: value for key, value in self.__dict__.items() if key not in ('optimizer', 'lr_lambdas')} - state_dict['lr_lambdas'] = [None] * len(self.lr_lambdas) - - for idx, fn in enumerate(self.lr_lambdas): - if not isinstance(fn, types.FunctionType): - state_dict['lr_lambdas'][idx] = fn.__dict__.copy() - - return state_dict - - def load_state_dict(self, state_dict): - """Loads the schedulers state. - - When saving or loading the scheduler, please make sure to also save or load the state of the optimizer. - - Args: - state_dict (dict): scheduler state. Should be an object returned - from a call to :meth:`state_dict`. 
- """ - - lr_lambdas = state_dict.pop('lr_lambdas') - self.__dict__.update(state_dict) - # Restore state_dict keys in order to prevent side effects - # https://github.com/pytorch/pytorch/issues/32756 - state_dict['lr_lambdas'] = lr_lambdas - - for idx, fn in enumerate(lr_lambdas): - if fn is not None: - self.lr_lambdas[idx].__dict__.update(fn) - - def get_lr(self): - if not self._get_lr_called_within_step: - warnings.warn("To get the last learning rate computed by the scheduler, " - "please use `get_last_lr()`.") - - if self._step_count < self.warmup_steps: - return [base_lr + lmbda(self.last_epoch) - for lmbda, base_lr in zip(self.lr_lambdas, self.base_lrs)] - - return [lmbda(self.last_epoch) for lmbda in self.lr_lambdas] - - -def get_total_steps(trainer, hparams): - train_loader = trainer._data_connector._train_dataloader_source.dataloader() - # Calculate total steps - if trainer.max_epochs > 0: - world_size = trainer.world_size - tb_size = hparams.train_batchsize * max(1, world_size) - ab_size = trainer.accumulate_grad_batches - total_steps = (len(train_loader.dataset) * - trainer.max_epochs // tb_size) // ab_size - else: - total_steps = trainer.max_steps - return total_steps - - -def get_scheduler( - name: Union[str, SchedulerType], - optimizer: Optimizer, - num_warmup_steps: Optional[int] = None, - num_training_steps: Optional[int] = None, - lr_end: Optional[float] = None -): - """ - Unified API to get any scheduler from its name. - - Args: - name (`str` or `SchedulerType`): - The name of the scheduler to use. - optimizer (`torch.optim.Optimizer`): - The optimizer that will be used during training. - num_warmup_steps (`int`, *optional*): - The number of warmup steps to do. This is not required by all schedulers (hence the argument being - optional), the function will raise an error if it's unset and the scheduler type requires it. - num_training_steps (`int``, *optional*): - The number of training steps to do. This is not required by all schedulers (hence the argument being - optional), the function will raise an error if it's unset and the scheduler type requires it. 
- """ - name = SchedulerType(name) - schedule_func = TYPE_TO_SCHEDULER_FUNCTION[name] - if name == SchedulerType.CONSTANT: - return schedule_func(optimizer) - - # All other schedulers require `num_warmup_steps` - if num_warmup_steps is None: - raise ValueError(f"{name} requires `num_warmup_steps`, please provide that argument.") - - if name == SchedulerType.CONSTANT_WITH_WARMUP: - return schedule_func(optimizer, num_warmup_steps=num_warmup_steps) - - # All other schedulers require `num_training_steps` - if num_training_steps is None: - raise ValueError(f"{name} requires `num_training_steps`, please provide that argument.") - - if name == SchedulerType.POLYNOMIAL: - return schedule_func(optimizer, num_warmup_steps=num_warmup_steps, - num_training_steps=num_training_steps, lr_end=lr_end) - - return schedule_func(optimizer, num_warmup_steps=num_warmup_steps, num_training_steps=num_training_steps) diff --git a/spaces/skf15963/summary/fengshen/models/transfo_xl_denoise/__init__.py b/spaces/skf15963/summary/fengshen/models/transfo_xl_denoise/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/skf15963/summary/fengshen/models/transfo_xl_reasoning/generate.py b/spaces/skf15963/summary/fengshen/models/transfo_xl_reasoning/generate.py deleted file mode 100644 index af25da3dcc78cce9705d578ee45e1f555c9d27b2..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/models/transfo_xl_reasoning/generate.py +++ /dev/null @@ -1,120 +0,0 @@ -# encoding=utf-8 -from typing import List, Union - -import torch -from torch.nn.utils.rnn import pad_sequence -from transformers import T5Tokenizer - -from fengshen.models.transfo_xl_reasoning import TransfoXLModel -from fengshen.utils import sample_sequence_batch - - -def en_to_zh(sentence:str): - en_pun = u",.!?[]()<>\"\"''" - zh_pun = u",。!?【】()《》“”‘’" - table = { - ord(f): ord(t) for f,t in zip(en_pun, zh_pun) - } - return sentence.translate(table) - - -def deduction_generate( - model:TransfoXLModel, - tokenizer:T5Tokenizer, - input_text:Union[str, List[str]], - device:int=0, - batch_size:int=2, - temperature:float=1.0, - repetition_penalty:float=2.0, - max_out_seq:int=512, - top_p:float=0.6) -> List[str]: - """ Generate with fixed prompt of deduction """ - - model = model.eval().cuda(device) - - if isinstance(input_text, str): - input_text = [input_text] - - input_text = [f"{text},因而" for text in input_text] - - input_ids = [torch.tensor(ids[:-1]) for ids in tokenizer(input_text).input_ids] - input_length = [len(ids) for ids in input_ids] - - output = [] - - for index in range(0, len(input_ids), batch_size): - input_ids_batch = pad_sequence( - input_ids[index: index + batch_size], batch_first=True, padding_value=50000, - ) - input_ids_length = torch.tensor(input_length[index: index + batch_size]) - - res_ids_batch, _ = sample_sequence_batch( - model=model, - context_tokens_tensor=input_ids_batch.cuda(device=device), - context_length_tensor=input_ids_length.cuda(device=device), - end_token_id=50000, - top_k=0, top_p=top_p, - max_out_seq=max_out_seq, - repetition_penalty=repetition_penalty, - temperature=temperature - ) - - res_sentence = [ - en_to_zh(tokenizer.decode(ids[length:])).replace(" ", "") - for ids, length in zip(res_ids_batch, input_length[index: index + batch_size]) - ] - - output.extend(res_sentence) - - return output - - -def abduction_generate( - model:TransfoXLModel, - tokenizer:T5Tokenizer, - input_text:Union[str, List[str]], - 
device:int=0, - batch_size:int=2, - temperature:float=1.0, - repetition_penalty:float=2.0, - top_p:float=0.6) -> List[str]: - """ Generate with fixed prompt of abduction """ - - model = model.eval().cuda(device) - - if isinstance(input_text, str): - input_text = [input_text] - - input_text = [f"之所以{text},是因为" for text in input_text] - - input_ids = [torch.tensor(ids[:-1]) for ids in tokenizer(input_text).input_ids] - input_length = [len(ids) for ids in input_ids] - - output = [] - - for index in range(0, len(input_ids), batch_size): - input_ids_batch = pad_sequence( - input_ids[index: index + batch_size], batch_first=True, padding_value=50000, - ) - input_ids_length = torch.tensor(input_length[index: index + batch_size]) - - res_ids_batch, _ = sample_sequence_batch( - model=model, - context_tokens_tensor=input_ids_batch.cuda(device=device), - context_length_tensor=input_ids_length.cuda(device=device), - end_token_id=50000, - top_k=0, top_p=top_p, - max_out_seq=512, - repetition_penalty=repetition_penalty, - temperature=temperature - ) - - res_sentence = [ - en_to_zh(tokenizer.decode(ids[length:])).replace(" ", "") - for ids, length in zip(res_ids_batch, input_length[index: index + batch_size]) - ] - - output.extend(res_sentence) - - return output - diff --git a/spaces/sklearn-docs/gaussian-quantile-adaboost/app.py b/spaces/sklearn-docs/gaussian-quantile-adaboost/app.py deleted file mode 100644 index 300c09e6427b3d8c6b8971fd379f2296d405a963..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/gaussian-quantile-adaboost/app.py +++ /dev/null @@ -1,125 +0,0 @@ -import numpy as np -import matplotlib.pyplot as plt -from matplotlib.colors import ListedColormap -plt.rcParams['figure.dpi'] = 100 - -from sklearn.ensemble import AdaBoostClassifier -from sklearn.tree import DecisionTreeClassifier -from sklearn.datasets import make_gaussian_quantiles -from sklearn.inspection import DecisionBoundaryDisplay - -import gradio as gr - -#======================================================= -C1, C2 = '#ff0000', '#0000ff' -CMAP = ListedColormap([C1, C2]) -GRANULARITY = 0.05 -#======================================================= -def get_decision_surface(X, y, model): - x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 - y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 - xrange = np.arange(x_min, x_max, GRANULARITY) - yrange = np.arange(y_min, y_max, GRANULARITY) - xx, yy = np.meshgrid(xrange, yrange) - - Z = model.predict(np.c_[xx.ravel(), yy.ravel()]) - Z = Z.reshape(xx.shape) - - return xx, yy, Z - -def create_plot(x1, y1, x2, y2, cov1, cov2, n1, n2, max_depth, n_estimators): - #Generate the dataset - X1, y1 = make_gaussian_quantiles( - mean=(x1, y1), cov=cov1, n_samples=n1, n_features=2, n_classes=2 - ) - X2, y2 = make_gaussian_quantiles( - mean=(x2, y2), cov=cov2, n_samples=n2, n_features=2, n_classes=2 - ) - X = np.concatenate((X1, X2)) - y = np.concatenate((y1, -y2 + 1)) - - clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=max_depth), algorithm="SAMME", n_estimators=n_estimators) - - clf.fit(X, y) - - fig = plt.figure(figsize=(4.5, 6.9)) - ax = fig.add_subplot(211) - - xx, yy, Z = get_decision_surface(X, y, clf) - ax.contourf(xx, yy, Z, cmap=CMAP, alpha=0.4) - - X1, y1 = X[y==0], y[y==0] - X2, y2 = X[y==1], y[y==1] - - ax.scatter(X1[:, 0], X1[:, 1], c=C1, edgecolor='k', s=20, label='Class A') - ax.scatter(X2[:, 0], X2[:, 1], c=C2, edgecolor='k', s=20, label='Class B') - - ax.legend() - ax.set_title(f'AdaBoostClassifier Decision Surface') - - scores = 
clf.decision_function(X) - - ax = fig.add_subplot(212) - ax.hist(scores[y==0], bins=100, range=(scores.min(), scores.max()), facecolor=C1, label="Class A", alpha=0.5, edgecolor="k") - ax.hist(scores[y==1], bins=100, range=(scores.min(), scores.max()), facecolor=C2, label="Class B", alpha=0.5, edgecolor="k") - - ax.set_xlabel('Score'); ax.set_ylabel('Frequency') - ax.legend() - ax.set_title('Decision Scores') - fig.set_tight_layout(True) - - return fig - -info = ''' -This example fits an [AdaBoost classifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html#sklearn.ensemble.AdaBoostClassifier) on two non-linearly separable classes. The samples are generated using two [Gaussian quantiles](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_gaussian_quantiles.html#sklearn.datasets.make_gaussian_quantiles) of configurable mean and covariance (see the sliders below). - -For the first generated Gaussian, the inner half quantile is assigned to Class A and the outer half quantile is assigned to class B. For the second generated quantile, the opposite assignment happens (inner = Class B, outer = Class A). - -A histogram of the decision scores of the AdaBoostClassifer is shown below. Values closer to -1 mean a high confidence that the sample belongs to Class A, and values closer to 1 mean a high confidence that the sample belongs to Class B. - -Use the controls below to change the Gaussian distribution parameters, number of generated samples in each Gaussian distribution, and the classifier's max_depth and n_estimators. - -Created by [@huabdul](https://huggingface.co/huabdul) based on [Scikit-learn docs](https://scikit-learn.org/stable/auto_examples/ensemble/plot_adaboost_twoclass.html). -''' -with gr.Blocks(analytics_enabled=False) as demo: - with gr.Row(): - with gr.Column(scale=2): - gr.Markdown(info) - with gr.Row(): - with gr.Column(min_width=100): - s_x1 = gr.Slider(-10, 10, value=0, step=0.1, label='Mean x1') - with gr.Column(min_width=100): - s_y1 = gr.Slider(-10, 10, value=0, step=0.1, label='Mean y1') - with gr.Row(): - with gr.Column(min_width=100): - s_x2 = gr.Slider(-10, 10, value=2, step=0.1, label='Mean x2') - with gr.Column(min_width=100): - s_y2 = gr.Slider(-10, 10, value=2, step=0.1, label='Mean y2') - - with gr.Row(): - with gr.Column(min_width=100): - s_cov1 = gr.Slider(0.01, 5, value=1, step=0.01, label='Covariance 1') - with gr.Column(min_width=100): - s_cov2 = gr.Slider(0.01, 5, value=2, step=0.01, label='Covariance 2') - - with gr.Row(): - with gr.Column(min_width=100): - s_n_samples1 = gr.Slider(1, 1000, value=200, step=1, label='n_samples 1') - with gr.Column(min_width=100): - s_n_samples2 = gr.Slider(1, 1000, value=300, step=1, label='n_samples 2') - - with gr.Row(): - with gr.Column(min_width=100): - s_max_depth = gr.Slider(1, 50, value=1, step=1, label='AdaBoostClassifier max_depth') - with gr.Column(min_width=100): - s_n_estimators = gr.Slider(1, 500, value=300, step=1, label='AdaBoostClassifier n_estimators') - - btn = gr.Button('Submit') - with gr.Column(scale=1.5): - plot = gr.Plot(show_label=False) - - btn.click(create_plot, inputs=[s_x1, s_y1, s_x2, s_y2, s_cov1, s_cov2, s_n_samples1, s_n_samples2, s_max_depth, s_n_estimators], outputs=[plot]) - demo.load(create_plot, inputs=[s_x1, s_y1, s_x2, s_y2, s_cov1, s_cov2, s_n_samples1, s_n_samples2, s_max_depth, s_n_estimators], outputs=[plot]) - -demo.launch() -#======================================================= \ No newline at end of file diff --git 
a/spaces/smjain/smjainvoice/starganv2vc_paddle/optimizers.py b/spaces/smjain/smjainvoice/starganv2vc_paddle/optimizers.py deleted file mode 100644 index 09717365beeb8a87a4e20f25df7b1428cc5ddebe..0000000000000000000000000000000000000000 --- a/spaces/smjain/smjainvoice/starganv2vc_paddle/optimizers.py +++ /dev/null @@ -1,80 +0,0 @@ -#coding:utf-8 -import os, sys -import os.path as osp -import numpy as np -import paddle -from paddle import nn -from paddle.optimizer import Optimizer -from functools import reduce -from paddle.optimizer import AdamW - -class MultiOptimizer: - def __init__(self, optimizers={}, schedulers={}): - self.optimizers = optimizers - self.schedulers = schedulers - self.keys = list(optimizers.keys()) - - def get_lr(self): - return max([self.optimizers[key].get_lr() - for key in self.keys]) - - def state_dict(self): - state_dicts = [(key, self.optimizers[key].state_dict())\ - for key in self.keys] - return state_dicts - - def set_state_dict(self, state_dict): - for key, val in state_dict: - try: - self.optimizers[key].set_state_dict(val) - except: - print("Unloaded %s" % key) - - def step(self, key=None, scaler=None): - keys = [key] if key is not None else self.keys - _ = [self._step(key, scaler) for key in keys] - - def _step(self, key, scaler=None): - if scaler is not None: - scaler.step(self.optimizers[key]) - scaler.update() - else: - self.optimizers[key].step() - - def clear_grad(self, key=None): - if key is not None: - self.optimizers[key].clear_grad() - else: - _ = [self.optimizers[key].clear_grad() for key in self.keys] - - def scheduler(self, *args, key=None): - if key is not None: - self.schedulers[key].step(*args) - else: - _ = [self.schedulers[key].step(*args) for key in self.keys] - -def define_scheduler(params): - print(params) - # scheduler = paddle.optim.lr_scheduler.OneCycleLR( - # max_lr=params.get('max_lr', 2e-4), - # epochs=params.get('epochs', 200), - # steps_per_epoch=params.get('steps_per_epoch', 1000), - # pct_start=params.get('pct_start', 0.0), - # div_factor=1, - # final_div_factor=1) - scheduler = paddle.optimizer.lr.CosineAnnealingDecay( - learning_rate=params.get('max_lr', 2e-4), - T_max=10) - - return scheduler - -def build_optimizer(parameters_dict, scheduler_params_dict): - schedulers = dict([(key, define_scheduler(params)) \ - for key, params in scheduler_params_dict.items()]) - - optim = dict([(key, AdamW(parameters=parameters_dict[key], learning_rate=sch, weight_decay=1e-4, beta1=0.1, beta2=0.99, epsilon=1e-9)) - for key, sch in schedulers.items()]) - - - multi_optim = MultiOptimizer(optim, schedulers) - return multi_optim \ No newline at end of file diff --git a/spaces/spacerini/miracl-chinese/app.py b/spaces/spacerini/miracl-chinese/app.py deleted file mode 100644 index 39f91a04e490ba5dd85fdcf3cb45521f3bbb45eb..0000000000000000000000000000000000000000 --- a/spaces/spacerini/miracl-chinese/app.py +++ /dev/null @@ -1,201 +0,0 @@ -import http.client as http_client -import json -import logging -import os -import pprint -import re -import time -import string - -import streamlit as st - -import streamlit.components.v1 as components -from typing import Callable, Optional, Tuple, Union -from pyserini import util -from pyserini.search import LuceneSearcher, FaissSearcher, AutoQueryEncoder - - -VERSION = '1.0' -st.set_page_config(page_title="Miracl Search - Chinese", layout="wide") - -os.makedirs(os.path.join(os.getcwd(),".streamlit"), exist_ok = True) -with open(os.path.join(os.getcwd(),".streamlit/config.toml"), "w") as file: - file.write( - 
'[theme]\nbase="light"' - ) - -Searcher = Union[FaissSearcher, LuceneSearcher] -LANG_MAPPING = {'Chinese':'zh'} - - -st.sidebar.markdown( -""" - -

        MIRACL Chinese Demo

        -

        🌍🙌🌏

        -

        MIRACL is a multilingual dataset for ad hoc retrieval that consists of 18 different languages, collectively encompassing over three billion native speakers around the world.

        -""", -unsafe_allow_html=True, -) - -st.sidebar.markdown( -""" - -

        -GitHub | Paper -

        -""", -unsafe_allow_html=True, -) - -query = st.sidebar.text_input(label='Search query', value='') -language = 'Chinese' - -max_results = st.sidebar.slider( - "Maximum Number of Results", - min_value=1, - max_value=1000, - step=1, - value=10, - help="Maximum Number of Documents to return", -) - - -def _load_sparse_searcher(language: str, k1: Optional[float]=None, b: Optional[float]=None) -> (Searcher): - searcher = LuceneSearcher(f'lucene-index.miracl-v1.0-{language}.20221004.2b2856') - searcher.set_language(language) - if k1 is not None and b is not None: - searcher.set_bm25(k1, b) - retriever_name = f'BM25 (k1={k1}, b={b})' - else: - retriever_name = 'BM25' - - return searcher - -def search(query, language, num_results=10): - searcher = _load_sparse_searcher(language=LANG_MAPPING[language]) - - t_0 = time.time() - search_results = searcher.search(query, k=num_results) - search_time = time.time() - t_0 - - results_dict ={"docs": [], "doc_ids": [], "score":[], "lang": language} - for i, result in enumerate(search_results): - result = json.loads(result.raw) - results_dict["docs"].append(result["text"]) - results_dict["doc_ids"].append(result["docid"]) - results_dict["score"].append(search_results[i].score) - - return results_dict, search_time - - - -def highlight_string(paragraph: str, highlight_terms: list) -> str: - for term in highlight_terms: - paragraph = re.sub(f"\\b{term}\\b", f"{term}", paragraph, flags=re.I) - return paragraph - -def process_results(hits: dict, highlight_terms: list) -> str: - hit_list = [] - for i in range(len(hits['doc_ids'])): - res_head = f""" -
        -

        {i+1}. Document ID: {hits['doc_ids'][i]}

        -

        Language: {hits['lang']}, Score: {round(hits['score'][i], 2)}

        -

        {highlight_string(hits['docs'][i], highlight_terms)}

        -
        -
        - """ - hit_list.append(res_head) - return " ".join(hit_list) - - - -if st.sidebar.button("Search"): - hits, search_time = search(query, language, max_results) - html_results = process_results(hits, []) - rendered_results = f""" -
        -
        -

        About {max_results} results

        - {html_results} -
        - """ - st.markdown(""" - - """, - unsafe_allow_html=True) - st.markdown( - """ - - """, - unsafe_allow_html=True) - st.markdown( - f""" -
        -

        Search Results

        -
        - """, - unsafe_allow_html=True) - components.html( - """ - - - - """ + rendered_results, height=800, scrolling=True - ) diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/noisychannel/rerank_options.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/noisychannel/rerank_options.py deleted file mode 100644 index de91939e6635bdf33c9dc330116be07d9e8be6a2..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/noisychannel/rerank_options.py +++ /dev/null @@ -1,149 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq import options - - -def get_reranking_parser(default_task="translation"): - parser = options.get_parser("Generation and reranking", default_task) - add_reranking_args(parser) - return parser - - -def get_tuning_parser(default_task="translation"): - parser = options.get_parser("Reranking tuning", default_task) - add_reranking_args(parser) - add_tuning_args(parser) - return parser - - -def add_reranking_args(parser): - group = parser.add_argument_group("Reranking") - # fmt: off - group.add_argument('--score-model1', '-s1', type=str, metavar='FILE', required=True, - help='path to first model or ensemble of models for rescoring') - group.add_argument('--score-model2', '-s2', type=str, metavar='FILE', required=False, - help='path to second model or ensemble of models for rescoring') - group.add_argument('--num-rescore', '-n', type=int, metavar='N', default=10, - help='the number of candidate hypothesis to rescore') - group.add_argument('-bz', '--batch-size', type=int, metavar='N', default=128, - help='batch size for generating the nbest list') - group.add_argument('--gen-subset', default='test', metavar='SET', choices=['test', 'train', 'valid'], - help='data subset to generate (train, valid, test)') - group.add_argument('--gen-model', default=None, metavar='FILE', - help='the model to generate translations') - group.add_argument('-b1', '--backwards1', action='store_true', - help='whether or not the first model group is backwards') - group.add_argument('-b2', '--backwards2', action='store_true', - help='whether or not the second model group is backwards') - group.add_argument('-a', '--weight1', default=1, nargs='+', type=float, - help='the weight(s) of the first model') - group.add_argument('-b', '--weight2', default=1, nargs='+', type=float, - help='the weight(s) of the second model, or the gen model if using nbest from interactive.py') - group.add_argument('-c', '--weight3', default=1, nargs='+', type=float, - help='the weight(s) of the third model') - - # lm arguments - group.add_argument('-lm', '--language-model', default=None, metavar='FILE', - help='language model for target language to rescore translations') - group.add_argument('--lm-dict', default=None, metavar='FILE', - help='the dict of the language model for the target language') - group.add_argument('--lm-name', default=None, - help='the name of the language model for the target language') - group.add_argument('--lm-bpe-code', default=None, metavar='FILE', - help='the bpe code for the language model for the target language') - group.add_argument('--data-dir-name', default=None, - help='name of data directory') - group.add_argument('--lenpen', default=1, nargs='+', type=float, - help='length penalty: <1.0 favors shorter, >1.0 favors longer sentences') - 
group.add_argument('--score-dict-dir', default=None, - help='the directory with dictionaries for the scoring models') - group.add_argument('--right-to-left1', action='store_true', - help='whether the first model group is a right to left model') - group.add_argument('--right-to-left2', action='store_true', - help='whether the second model group is a right to left model') - group.add_argument('--post-process', '--remove-bpe', default='@@ ', - help='the bpe symbol, used for the bitext and LM') - group.add_argument('--prefix-len', default=None, type=int, - help='the length of the target prefix to use in rescoring (in terms of words wo bpe)') - group.add_argument('--sampling', action='store_true', - help='use sampling instead of beam search for generating n best list') - group.add_argument('--diff-bpe', action='store_true', - help='bpe for rescoring and nbest list not the same') - group.add_argument('--rescore-bpe-code', default=None, - help='bpe code for rescoring models') - group.add_argument('--nbest-list', default=None, - help='use predefined nbest list in interactive.py format') - group.add_argument('--write-hypos', default=None, - help='filename prefix to write hypos to') - group.add_argument('--ref-translation', default=None, - help='reference translation to use with nbest list from interactive.py') - group.add_argument('--backwards-score-dict-dir', default=None, - help='the directory with dictionaries for the backwards model,' - 'if None then it is assumed the fw and backwards models share dictionaries') - - # extra scaling args - group.add_argument('--gen-model-name', default=None, - help='the name of the models that generated the nbest list') - group.add_argument('--model1-name', default=None, - help='the name of the set for model1 group ') - group.add_argument('--model2-name', default=None, - help='the name of the set for model2 group') - group.add_argument('--shard-id', default=0, type=int, - help='the id of the shard to generate') - group.add_argument('--num-shards', default=1, type=int, - help='the number of shards to generate across') - group.add_argument('--all-shards', action='store_true', - help='use all shards') - group.add_argument('--target-prefix-frac', default=None, type=float, - help='the fraction of the target prefix to use in rescoring (in terms of words wo bpe)') - group.add_argument('--source-prefix-frac', default=None, type=float, - help='the fraction of the source prefix to use in rescoring (in terms of words wo bpe)') - group.add_argument('--normalize', action='store_true', - help='whether to normalize by src and target len') - # fmt: on - return group - - -def add_tuning_args(parser): - group = parser.add_argument_group("Tuning") - - group.add_argument( - "--lower-bound", - default=[-0.7], - nargs="+", - type=float, - help="lower bound of search space", - ) - group.add_argument( - "--upper-bound", - default=[3], - nargs="+", - type=float, - help="upper bound of search space", - ) - group.add_argument( - "--tune-param", - default=["lenpen"], - nargs="+", - choices=["lenpen", "weight1", "weight2", "weight3"], - help="the parameter(s) to tune", - ) - group.add_argument( - "--tune-subset", - default="valid", - choices=["valid", "test", "train"], - help="the subset to tune on ", - ) - group.add_argument( - "--num-trials", - default=1000, - type=int, - help="number of trials to do for random search", - ) - group.add_argument( - "--share-weights", action="store_true", help="share weight2 and weight 3" - ) - return group diff --git 
a/spaces/stomexserde/gpt4-ui/Examples/Corel Painter 12.2.0.703 European Multilingual Keygen Setup Free HOT.md b/spaces/stomexserde/gpt4-ui/Examples/Corel Painter 12.2.0.703 European Multilingual Keygen Setup Free HOT.md deleted file mode 100644 index ab72e3e95a99553e4d7df79e28665298fc6ac8e8..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Corel Painter 12.2.0.703 European Multilingual Keygen Setup Free HOT.md +++ /dev/null @@ -1,26 +0,0 @@ -
        -```html -

        How to Download and Install Corel Painter 12.2.0.703 European Multilingual for Free

        -

        If you are looking for powerful and versatile digital art software, you might want to check out Corel Painter 12.2.0.703 European Multilingual. This software allows you to create stunning paintings, illustrations, photo art, and more with a variety of brushes, tools, and effects.

        -

        However, Corel Painter 12.2.0.703 European Multilingual is not cheap software. It costs $429 for the full version and $229 for the upgrade. If you don't want to spend that much money, you can try to download and install it for free using a keygen and a crack.

        -

        Corel Painter 12.2.0.703 European Multilingual Keygen Setup Free


        Download File > https://urlgoal.com/2uI5MU



        -

        A keygen is a program that generates a serial number or a license key for a piece of software. A crack is a program that modifies or bypasses the security features of a piece of software. By using a keygen and a crack, you can activate Corel Painter 12.2.0.703 European Multilingual without paying anything.

        -

        However, downloading and installing Corel Painter 12.2.0.703 European Multilingual for free using a keygen and a crack is neither legal nor safe. You might violate copyright law or Corel Corporation's terms of service. You might also expose your computer to viruses, malware, or spyware that can harm your system or steal your personal information.

        -

        Therefore, we do not recommend or endorse downloading and installing Corel Painter 12.2.0.703 European Multilingual for free using a keygen and a crack. We provide this information for educational purposes only. If you decide to do so, you do it at your own risk.

        -

        If you still want to proceed, here are the steps to download and install Corel Painter 12.2.0.703 European Multilingual for free using a keygen and a crack:

        -
          -
        1. Go to this link [^1^] and download the Corel Painter 12 setup file and the keygen file.
        2. -
        3. Extract the files using WinRAR or any other file compression software.
        4. -
        5. Run the setup file and follow the installation instructions.
        6. -
        7. When prompted to enter a serial number, run the keygen file and generate a serial number.
        8. -
        9. Copy and paste the serial number into the setup window and continue the installation.
        10. -
        11. When the installation is complete, do not launch Corel Painter 12 yet.
        12. -
        13. Go to the folder where you installed Corel Painter 12 and find the file named "Painter.exe".
        14. -
        15. Rename this file to "Painter.exe.bak" or any other name.
        16. -
        17. Copy the crack file from the downloaded folder and paste it into the same folder where you renamed "Painter.exe".
        18. -
        19. Run Corel Painter 12 and enjoy your free digital art software.
        20. -
        -

        Note: This method may not work for all versions of Windows or Mac OS. It may also be detected by your antivirus software as malicious or suspicious. You may need to disable your antivirus software temporarily or add an exception for Corel Painter 12 files.

        -

        cec2833e83
        -
        -
        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/EXA The Infinite Instrument Torrent Download [cheat] !NEW!.md b/spaces/stomexserde/gpt4-ui/Examples/EXA The Infinite Instrument Torrent Download [cheat] !NEW!.md deleted file mode 100644 index e338b5ae0f07efe02d592977b008fa26ca2acb8c..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/EXA The Infinite Instrument Torrent Download [cheat] !NEW!.md +++ /dev/null @@ -1,26 +0,0 @@ - -

        How to Download EXA: The Infinite Instrument Torrent and Unlock All Features

        -

        EXA: The Infinite Instrument is a virtual reality music creation platform that lets you play and compose music in any style and genre. You can use a variety of instruments, effects, loops, and samples to create your own musical masterpiece. But what if you want to access all the features and content without paying for the full version? In this article, we will show you how to download EXA: The Infinite Instrument torrent and use a cheat to unlock everything.

        -

        EXA: The Infinite Instrument Torrent Download [cheat]


        Downloadhttps://urlgoal.com/2uI7to



        -

        What is EXA: The Infinite Instrument?

        -

        EXA: The Infinite Instrument is a VR game that simulates a realistic music studio. You can explore different environments, such as a concert hall, a forest, or a spaceship, and use the controllers to play and manipulate virtual instruments. You can also record, edit, and mix your tracks using a built-in sequencer and mixer. You can even collaborate with other players online and share your creations with the world.

        -

        Why Download EXA: The Infinite Instrument Torrent?

        -

        EXA: The Infinite Instrument is a premium game that costs $19.99 on Steam. However, you can download it for free using a torrent file. A torrent file is a small file that contains information about the larger file you want to download. You need a torrent client, such as BitTorrent or uTorrent, to open the torrent file and download the game from other users who have it on their computers.
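For the curious, a .torrent file is a small bencoded dictionary that describes the real download (its name, piece size, tracker URL) rather than containing it. The sketch below is a rough, minimal bencode reader based on the published format; example.torrent is a placeholder name, and in practice you would normally rely on your torrent client or a maintained library rather than code like this.

```python
def bdecode(data: bytes, i: int = 0):
    """Decode one bencoded value starting at index i; return (value, next_index)."""
    c = data[i:i + 1]
    if c == b"i":                                  # integer: i<digits>e
        end = data.index(b"e", i)
        return int(data[i + 1:end]), end + 1
    if c == b"l":                                  # list: l<items>e
        i, items = i + 1, []
        while data[i:i + 1] != b"e":
            value, i = bdecode(data, i)
            items.append(value)
        return items, i + 1
    if c == b"d":                                  # dictionary: d<key><value>...e
        i, result = i + 1, {}
        while data[i:i + 1] != b"e":
            key, i = bdecode(data, i)
            value, i = bdecode(data, i)
            result[key] = value
        return result, i + 1
    colon = data.index(b":", i)                    # byte string: <length>:<bytes>
    length = int(data[i:colon])
    start = colon + 1
    return data[start:start + length], start + length


with open("example.torrent", "rb") as f:           # placeholder file name
    meta, _ = bdecode(f.read())

info = meta[b"info"]
print("Name:        ", info[b"name"].decode(errors="replace"))
print("Piece length:", info[b"piece length"])
print("Tracker:     ", meta.get(b"announce"))
```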

        -

        Downloading EXA: The Infinite Instrument torrent has several advantages. First, you can save money and enjoy the game without paying anything. Second, you can bypass any regional restrictions and access the game from anywhere in the world. Third, you can get the latest updates and patches for the game without waiting for the official release.

        -

        -

        How to Use EXA: The Infinite Instrument Cheat?

        -

        Downloading EXA: The Infinite Instrument torrent is not enough to unlock all the features and content of the game. You also need to use a cheat that modifies the game files and gives you unlimited access to everything. A cheat is software or code that alters the game's behavior and gives you an advantage over normal gameplay.

        -

        To use EXA: The Infinite Instrument cheat, you need to follow these steps:

        -
          -
        1. Download EXA: The Infinite Instrument torrent from a reliable source.
        2. -
        3. Install the game on your computer using the torrent client.
        4. -
        5. Download EXA: The Infinite Instrument cheat from a trusted website.
        6. -
        7. Run the cheat as an administrator and select the game folder.
        8. -
        9. Choose the options you want to activate, such as unlimited instruments, effects, loops, samples, etc.
        10. -
        11. Click on "Apply" and wait for the cheat to finish.
        12. -
        13. Launch the game and enjoy your unlimited musical freedom.
        14. -
        -

        Conclusion

        -

        EXA: The Infinite Instrument is a VR game that lets you create music in any style and genre. You can download it for free using a torrent file and use a cheat to unlock all the features and content. However, you should be careful when downloading torrents and cheats from unknown sources, as they may contain viruses or malware that can harm your computer or compromise your privacy. You should also respect the developers' work and support them by buying the game if you like it.

        cec2833e83
        -
        -
        \ No newline at end of file diff --git a/spaces/szukevin/VISOR-GPT/train/finetune/run_dbqa.py b/spaces/szukevin/VISOR-GPT/train/finetune/run_dbqa.py deleted file mode 100644 index 41aec91ed0e27f87276f0fda0a03fd7556e7d379..0000000000000000000000000000000000000000 --- a/spaces/szukevin/VISOR-GPT/train/finetune/run_dbqa.py +++ /dev/null @@ -1,232 +0,0 @@ -""" -This script provides an exmaple to wrap TencentPretrain for document-based question answering. -""" -import sys -import os -import random -import argparse -import torch - -tencentpretrain_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), "..")) -sys.path.append(tencentpretrain_dir) - -from tencentpretrain.utils.constants import * -from tencentpretrain.utils import * -from tencentpretrain.utils.optimizers import * -from tencentpretrain.utils.config import load_hyperparam -from tencentpretrain.utils.seed import set_seed -from tencentpretrain.utils.logging import init_logger -from tencentpretrain.model_saver import save_model -from tencentpretrain.opts import finetune_opts, tokenizer_opts, adv_opts -from finetune.run_classifier import Classifier, count_labels_num, build_optimizer, batch_loader, train_model, load_or_initialize_parameters - - -def read_dataset(args, path): - dataset, columns = [], {} - with open(path, mode="r", encoding="utf-8") as f: - for line_id, line in enumerate(f): - if line_id == 0: - for i, column_name in enumerate(line.rstrip("\r\n").split("\t")): - columns[column_name] = i - continue - line = line.rstrip("\r\n").split("\t") - qid = int(line[columns["qid"]]) - tgt = int(line[columns["label"]]) - text_a, text_b = line[columns["text_a"]], line[columns["text_b"]] - src_a = args.tokenizer.convert_tokens_to_ids([CLS_TOKEN] + args.tokenizer.tokenize(text_a) + [SEP_TOKEN]) - src_b = args.tokenizer.convert_tokens_to_ids(args.tokenizer.tokenize(text_b) + [SEP_TOKEN]) - src = src_a + src_b - seg = [1] * len(src_a) + [2] * len(src_b) - - if len(src) > args.seq_length: - src = src[: args.seq_length] - seg = seg[: args.seq_length] - PAD_ID = args.tokenizer.convert_tokens_to_ids([PAD_TOKEN])[0] - while len(src) < args.seq_length: - src.append(PAD_ID) - seg.append(0) - dataset.append((src, tgt, seg, qid)) - - return dataset - - -def gen_dataset_groupby_qid(dataset, logits_all): - dataset_groupby_qid, correct_answer_orders, scores = [], [], [] - for i in range(len(dataset)): - label = dataset[i][1] - if i == 0: - qid = dataset[i][3] - # Order of the current sentence in the document. - current_order = 0 - scores.append(float(logits_all[i][1].item())) - if label == 1: - # Occasionally, more than one sentences in a document contain answers. - correct_answer_orders.append(current_order) - current_order += 1 - continue - if qid == dataset[i][3]: - scores.append(float(logits_all[i][1].item())) - if label == 1: - correct_answer_orders.append(current_order) - current_order += 1 - else: - # For each question, we record which sentences contain answers - # and the scores of all sentences in the document. 
- dataset_groupby_qid.append((qid, correct_answer_orders, scores)) - correct_answer_orders, scores, current_order = [], [], 0 - qid = dataset[i][3] - scores.append(float(logits_all[i][1].item())) - if label == 1: - correct_answer_orders.append(current_order) - current_order += 1 - dataset_groupby_qid.append((qid, correct_answer_orders, scores)) - return dataset_groupby_qid - - -def evaluate(args, dataset): - src = torch.LongTensor([sample[0] for sample in dataset]) - tgt = torch.LongTensor([sample[1] for sample in dataset]) - seg = torch.LongTensor([sample[2] for sample in dataset]) - - batch_size = args.batch_size - instances_num = src.size()[0] - - args.model.eval() - - for i, (src_batch, tgt_batch, seg_batch, _) in enumerate(batch_loader(batch_size, src, tgt, seg)): - src_batch = src_batch.to(args.device) - tgt_batch = tgt_batch.to(args.device) - seg_batch = seg_batch.to(args.device) - with torch.no_grad(): - loss, logits = args.model(src_batch, tgt_batch, seg_batch) - if i == 0: - logits_all = logits - if i >= 1: - logits_all = torch.cat((logits_all, logits), 0) - - # To calculate MRR, the results are grouped by qid. - dataset_groupby_qid = gen_dataset_groupby_qid(dataset, logits_all) - - reciprocal_rank = [] - for _, correct_answer_orders, scores in dataset_groupby_qid: - if len(correct_answer_orders) == 1: - sorted_scores = sorted(scores, reverse=True) - for j in range(len(sorted_scores)): - if sorted_scores[j] == scores[correct_answer_orders[0]]: - reciprocal_rank.append(1 / (j + 1)) - else: - current_rank = len(scores) - sorted_scores = sorted(scores, reverse=True) - for i in range(len(correct_answer_orders)): - for j in range(len(scores)): - if sorted_scores[j] == scores[correct_answer_orders[i]] and j < current_rank: - current_rank = j - reciprocal_rank.append(1 / (current_rank + 1)) - - MRR = sum(reciprocal_rank) / len(reciprocal_rank) - args.logger.info("Mean Reciprocal Rank: {:.4f}".format(MRR)) - return MRR - - -def main(): - parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) - - finetune_opts(parser) - - tokenizer_opts(parser) - - parser.add_argument("--soft_targets", action='store_true', - help="Train model with logits.") - parser.add_argument("--soft_alpha", type=float, default=0.5, - help="Weight of the soft targets loss.") - - adv_opts(parser) - - args = parser.parse_args() - - # Load the hyperparameters from the config file. - args = load_hyperparam(args) - - set_seed(args.seed) - - # Count the number of labels. - args.labels_num = count_labels_num(args.train_path) - - # Build tokenizer. - args.tokenizer = str2tokenizer[args.tokenizer](args) - - # Build classification model. - model = Classifier(args) - - # Load or initialize parameters. - load_or_initialize_parameters(args, model) - - # Get logger. - args.logger = init_logger(args) - - args.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - model = model.to(args.device) - - # Training phase. 
- trainset = read_dataset(args, args.train_path) - instances_num = len(trainset) - batch_size = args.batch_size - - args.train_steps = int(instances_num * args.epochs_num / batch_size) + 1 - - args.logger.info("Batch size: {}".format(batch_size)) - args.logger.info("The number of training instances: {}".format(instances_num)) - - optimizer, scheduler = build_optimizer(args, model) - - if args.fp16: - try: - from apex import amp - except ImportError: - raise ImportError("Please install apex from https://www.github.com/nvidia/apex to use fp16 training.") - model, optimizer = amp.initialize(model, optimizer,opt_level = args.fp16_opt_level) - args.amp = amp - - if torch.cuda.device_count() > 1: - args.logger.info("{} GPUs are available. Let's use them.".format(torch.cuda.device_count())) - model = torch.nn.DataParallel(model) - args.model = model - - if args.use_adv: - args.adv_method = str2adv[args.adv_type](model) - - total_loss, result, best_result = 0.0, 0.0, 0.0 - - args.logger.info("Start training.") - - for epoch in range(1, args.epochs_num + 1): - random.shuffle(trainset) - src = torch.LongTensor([example[0] for example in trainset]) - tgt = torch.LongTensor([example[1] for example in trainset]) - seg = torch.LongTensor([example[2] for example in trainset]) - - model.train() - for i, (src_batch, tgt_batch, seg_batch, _) in enumerate(batch_loader(batch_size, src, tgt, seg)): - loss = train_model(args, model, optimizer, scheduler, src_batch, tgt_batch, seg_batch) - total_loss += loss.item() - if (i + 1) % args.report_steps == 0: - args.logger.info("Epoch id: {}, Training steps: {}, Avg loss: {:.3f}".format(epoch, i + 1, total_loss / args.report_steps)) - total_loss = 0.0 - - result = evaluate(args, read_dataset(args, args.dev_path)) - if result > best_result: - best_result = result - save_model(model, args.output_model_path) - - # Evaluation phase. 
- if args.test_path is not None: - args.logger.info("Test set evaluation.") - if torch.cuda.device_count() > 1: - args.model.module.load_state_dict(torch.load(args.output_model_path)) - else: - args.model.load_state_dict(torch.load(args.output_model_path)) - evaluate(args, read_dataset(args, args.test_path)) - - -if __name__ == "__main__": - main() diff --git a/spaces/tang155/bingo/src/components/ui/button.tsx b/spaces/tang155/bingo/src/components/ui/button.tsx deleted file mode 100644 index 281da005124fa94c89a9a9db7605748a92b60865..0000000000000000000000000000000000000000 --- a/spaces/tang155/bingo/src/components/ui/button.tsx +++ /dev/null @@ -1,57 +0,0 @@ -import * as React from 'react' -import { Slot } from '@radix-ui/react-slot' -import { cva, type VariantProps } from 'class-variance-authority' - -import { cn } from '@/lib/utils' - -const buttonVariants = cva( - 'inline-flex items-center justify-center rounded-md text-sm font-medium shadow ring-offset-background transition-colors outline-none disabled:pointer-events-none disabled:opacity-50', - { - variants: { - variant: { - default: - 'bg-primary text-primary-foreground shadow-md hover:bg-primary/90', - destructive: - 'bg-destructive text-destructive-foreground hover:bg-destructive/90', - outline: - 'border border-input hover:bg-accent hover:text-accent-foreground', - secondary: - 'bg-secondary text-secondary-foreground hover:bg-secondary/80', - ghost: 'shadow-none hover:bg-accent hover:text-accent-foreground', - link: 'text-primary underline-offset-4 shadow-none hover:underline' - }, - size: { - default: 'h-8 px-4 py-2', - sm: 'h-8 rounded-md px-3', - lg: 'h-11 rounded-md px-8', - icon: 'h-8 w-8 p-0' - } - }, - defaultVariants: { - variant: 'default', - size: 'default' - } - } -) - -export interface ButtonProps - extends React.ButtonHTMLAttributes, - VariantProps { - asChild?: boolean -} - -const Button = React.forwardRef( - ({ className, variant, size, asChild = false, ...props }, ref) => { - const Comp = asChild ? Slot : 'button' - return ( - - ) - } -) -Button.displayName = 'Button' - -export { Button, buttonVariants } diff --git a/spaces/terfces0erbo/CollegeProjectV2/Boom Chat Add Ons Nulled Io !EXCLUSIVE!.md b/spaces/terfces0erbo/CollegeProjectV2/Boom Chat Add Ons Nulled Io !EXCLUSIVE!.md deleted file mode 100644 index ecae1464173127349fb6bc3a4f494cbc7182e496..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Boom Chat Add Ons Nulled Io !EXCLUSIVE!.md +++ /dev/null @@ -1,6 +0,0 @@ -

        boom chat add ons nulled io


        Download File ->>> https://bytlly.com/2uGlLZ



        -
-Get 4 chat room website templates on ThemeForest. ... Boom Embed for Boomchat PHP Ajax chat add-ons, free download ... Free download of VirtualSpaces Socket.IO virtual chat room with avatar chat and chatroom CSS. 1fdad05405
        -
        -
        -

        diff --git a/spaces/terfces0erbo/CollegeProjectV2/Disk Drill 4.0.487.0 Crack ((TOP)).md b/spaces/terfces0erbo/CollegeProjectV2/Disk Drill 4.0.487.0 Crack ((TOP)).md deleted file mode 100644 index 76cd3245a69cc875383c20ab000d978ef501b8b9..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Disk Drill 4.0.487.0 Crack ((TOP)).md +++ /dev/null @@ -1,8 +0,0 @@ - -

Disk Drill Pro Crack is a powerful recovery tool. It retrieves data from all types of storage devices and can restore files that were deleted or lost for a variety of reasons.

        -

As a data recovery application, Disk Drill Pro Crack can retrieve deleted or lost files from all types of storage devices, no matter how the data was lost.

        -

        Disk Drill 4.0.487.0 Crack


        Download File ••• https://bytlly.com/2uGiyC



        -

This program is developed by one of the best-known software companies in the world and is regarded as one of the best applications in the field of data recovery. Disk Drill Pro Torrent is a very powerful application that enables you to recover data from storage devices that have been lost or damaged.

        -

Disk Drill Pro 4.0.487.0 Crack gives you complete access to your data, so you can recover files, documents, pictures, videos, and more from nearly any device. Disk Drill can be used as a free tool, but it also offers paid options for recovering your data. In fact, many third-party developers have created solutions that complement Disk Drill; these include FileFinder, Data Rescue, Photo Rescue, and gpart, among others.

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Disk Drill Pro 4.0.499 Final Crack Plus Full NEW Activation Code [Win Mac] Torrent.md b/spaces/terfces0erbo/CollegeProjectV2/Disk Drill Pro 4.0.499 Final Crack Plus Full NEW Activation Code [Win Mac] Torrent.md deleted file mode 100644 index 5c8b974beda96f1706894dbfdbd5494c4915569f..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Disk Drill Pro 4.0.499 Final Crack Plus Full NEW Activation Code [Win Mac] Torrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Disk Drill Pro 4.0.499 Final Crack Plus Full Activation Code [Win Mac] Torrent


        Download Zip 🔗 https://bytlly.com/2uGkk3



        -
-February 7, 2022 - you can recover lost files. The Disk Drill Pro activation code provides advanced features to help stop data loss. Disk Drill Pro 4.0.499 Crack & Activation Code WIN/MAC 2020 free download. Disk Drill Pro is an application for recovering deleted files and one of the best tools for getting lost data back on your computer. The program recovers deleted and lost files such as documents, images, video, music, audio, and private documents. 8a78ff9644
        -
        -
        -

        diff --git a/spaces/terfces0erbo/CollegeProjectV2/Download WORK Mcgs Embedded Configuration Software 12 8.md b/spaces/terfces0erbo/CollegeProjectV2/Download WORK Mcgs Embedded Configuration Software 12 8.md deleted file mode 100644 index 19b94f4c7064252f070ef7bf62125fde2fb02aae..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Download WORK Mcgs Embedded Configuration Software 12 8.md +++ /dev/null @@ -1,112 +0,0 @@ - -

        Download MCGS Embedded Configuration Software 12 8: A Tool for Creating HMI Screens

        - -

MCGS Embedded Configuration Software 12 8 is an application that allows you to design and program human-machine interface (HMI) screens for various devices, such as PLCs, inverters, sensors, and controllers. It supports 800 communication drivers for popular PLCs and has multi-layered security features. It also allows you to set a shutdown date, which will automatically lock the HMI when inactive. In this article, we will show you how to download MCGS Embedded Configuration Software 12 8 and how to use it with your devices.

        -

        Download mcgs embedded configuration software 12 8


        Download Zip ✏ ✏ ✏ https://bytlly.com/2uGjyC



        - -

        How to Download MCGS Embedded Configuration Software 12 8

        - -

        To download MCGS Embedded Configuration Software 12 8, you need to follow these steps:

        - -
          -
        1. Go to this link and click on the download button. The file size is about 1.5 GB and the password to extract it is plc247.com.
        2. -
        3. Extract the downloaded file using WinRAR or any other software that can handle RAR files.
        4. -
        5. Open the extracted folder and double-click on setup.exe to launch the installer.
        6. -
        7. Follow the on-screen instructions to complete the installation. You may need to enter some information in Chinese, such as your name and company name.
        8. -
        9. When the installation is finished, you can launch MCGS Embedded Configuration Software 12 8 from the Start menu or the desktop shortcut.
        10. -
        - -

Note: Before you install the software, you need to change the Windows language and location to Chinese, because the software is only available in Chinese. To do this, follow these steps:

        - -
          -
        1. Open Control Panel and click on Region and Language.
        2. -
        3. Under the Formats tab, select Chinese (Simplified, China) from the Format drop-down menu.
        4. -
        5. Under the Location tab, select China from the Current location drop-down menu.
        6. -
        7. Under the Keyboards and Languages tab, click on Change keyboards...
        8. -
        9. Under the General tab, click on Add... and select Chinese (Simplified) - Microsoft Pinyin IME 2010 from the list.
        10. -
        11. Click OK to save the changes and close the windows.
        12. -
        13. Restart your computer for the changes to take effect.
        14. -
        - -

        How to Use MCGS Embedded Configuration Software 12 8

        - -

        Once you have installed MCGS Embedded Configuration Software 12 8, you can use it to create and edit HMI screens for your devices. Here are some basic steps to follow:

        - -
          -
        1. Open MCGS Embedded Configuration Software 12 8 from the Start menu or the desktop shortcut.
        2. -
        3. Select a device model from the list or create a new one by clicking on New Project.
        4. -
        5. Select a screen size and resolution from the list or create a new one by clicking on New Screen.
        6. -
        7. Add elements to your screen by dragging and dropping them from the toolbox on the left side of the window. You can also edit their properties by double-clicking on them or using the property window on the right side of the window.
        8. -
        9. Add communication settings to your screen by clicking on Communication Settings on the toolbar. You can select a communication driver from the list or create a new one by clicking on New Driver. You can also edit their parameters by double-clicking on them or using the parameter window on the right side of the window.
        10. -
11. Add scripts to your screen by clicking on Script Editor on the toolbar. You can write scripts in C language or use built-in functions and variables (see the sketch after this list). You can also debug your scripts by clicking on Debug Script on the toolbar.
        12. -
        13. Save your project by clicking on Save Project on the toolbar or pressing Ctrl+S.
        14. -
        15. Download your project to your device by clicking on Download Project on the toolbar or pressing Ctrl+D. You need to connect your device to your computer using a USB cable or a serial port. You also need to select a download mode from Normal Mode or Fast Mode.
        16. -
        - -
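Since the Script Editor step above only says that screen scripts are written in C, here is a minimal, hypothetical sketch of what such logic might look like. The names TankLevel, PumpRun, and SetAlarm are placeholders invented for illustration, not actual MCGS built-in identifiers; the snippet is written as plain, compilable C so the control logic is unambiguous, whereas in the real Script Editor these would be tags bound to your screen and PLC.

```c
/*
 * Hypothetical HMI screen script, shown as plain C for illustration only.
 * TankLevel, PumpRun and SetAlarm are made-up placeholder names, not real
 * MCGS built-ins; in an actual project they would be tags and functions
 * defined in the MCGS Script Editor and bound to the connected PLC.
 */
#include <stdio.h>

static double TankLevel = 95.0;  /* would normally be read from the PLC     */
static int    PumpRun   = 1;     /* would normally drive a PLC output bit   */

static void SetAlarm(const char *msg)
{
    printf("ALARM: %s\n", msg);  /* stands in for an on-screen alarm banner */
}

int main(void)
{
    if (TankLevel > 90.0) {          /* tank nearly full: stop the pump     */
        PumpRun = 0;
        SetAlarm("Tank level high");
    } else if (TankLevel < 10.0) {   /* tank nearly empty: start the pump   */
        PumpRun = 1;
    }
    printf("PumpRun = %d\n", PumpRun);
    return 0;
}
```

The same pattern (read a tag, branch on its value, then write an output or raise an alarm) covers most screen scripts; check the MCGS manual for the real built-in function names before copying anything into a project.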

        Conclusion

        - -

MCGS Embedded Configuration Software 12 8 is a powerful tool for creating HMI screens for various devices. It has many features and supports many communication drivers. However, it is only available in Chinese, so you need to change your Windows settings before installing it. We hope this article helped you download and use MCGS Embedded Configuration Software 12 8 successfully. If you have any questions or feedback, please leave a comment below.

        -

        What are the Features of MCGS Embedded Configuration Software 12 8

        - -

MCGS Embedded Configuration Software 12 8 offers many features that help you create and program HMI screens for various devices. Here are some of the main ones:

        -

        - -
          -
        • It supports 800 communication drivers for popular PLCs, such as Siemens, Mitsubishi, Omron, Allen-Bradley, and more. You can also create your own communication drivers using the driver development tool.
        • -
        • It has a rich library of graphical elements, such as buttons, switches, gauges, charts, tables, and more. You can also import your own images or animations to customize your screen.
        • -
        • It has a powerful script editor that allows you to write scripts in C language or use built-in functions and variables. You can also debug your scripts using the debug tool.
        • -
        • It has multi-layered security features that allow you to set passwords, user levels, encryption modes, and date of shutdown for your screen. You can also use the USB dongle to protect your project.
        • -
        • It has a simulation mode that allows you to test your screen on your computer without connecting to your device. You can also use the online mode to monitor and control your device in real time.
        • -
        • It has a backup and restore function that allows you to save and load your project easily. You can also export and import your project to other devices or computers.
        • -
        - -

        What are the Benefits of Using MCGS Embedded Configuration Software 12 8

        - -

        Using MCGS Embedded Configuration Software 12 8 has many benefits for you as a user. Here are some of them:

        - -
          -
        • You can create and program HMI screens for various devices easily and quickly. You can use the drag-and-drop function, the property window, and the parameter window to design and configure your screen.
        • -
        • You can improve your productivity and efficiency by using the software. You can use the templates, the copy-and-paste function, the undo-and-redo function, and the batch download function to save time and effort.
        • -
        • You can enhance your creativity and flexibility by using the software. You can use the graphical elements, the images, the animations, the scripts, and the communication drivers to customize your screen according to your needs and preferences.
        • -
        • You can ensure the quality and reliability of your screen by using the software. You can use the simulation mode, the online mode, the debug tool, and the backup and restore function to test and troubleshoot your screen.
        • -
        • You can protect your intellectual property and data by using the software. You can use the security features, such as passwords, user levels, encryption modes, date of shutdown, and USB dongle to prevent unauthorized access or modification of your screen.
        • -
        - -

        Conclusion

        - -

        MCGS Embedded Configuration Software 12 8 is a tool for designing and programming HMI screens for various devices. It supports 800 communication drivers for popular PLCs and has multi-layered security features. It also allows you to set the date of shutdown, which will automatically lock the HMI when inactive. In this article, we have shown you how to download MCGS Embedded Configuration Software 12 8 and how to use it for your devices. We have also explained the features and benefits of using MCGS Embedded Configuration Software 12 8. We hope this article has been helpful and informative for you. If you have any questions or feedback, please leave a comment below.

        -

        How to Troubleshoot MCGS Embedded Configuration Software 12 8

        - -

MCGS Embedded Configuration Software 12 8 usually works smoothly and reliably. However, you may sometimes encounter problems or errors while using it. Here are some common problems and their solutions:

        - -
          -
        • Problem: The software cannot open or run properly.
        • -
        • Solution: Check if your computer meets the minimum system requirements for the software. You need to have Windows XP or above, 1 GB of RAM or above, and 2 GB of free disk space or above. You also need to have a USB port or a serial port to connect your device. If your computer meets the requirements, try reinstalling the software or updating it to the latest version.
        • -
        • Problem: The software cannot communicate with your device.
        • -
        • Solution: Check if your device is compatible with the software. You can find the compatible models in the user manual or on the official website. You also need to check if your device is powered on and connected to your computer properly. You can use a USB cable or a serial cable to connect your device. You also need to check if your communication settings are correct. You can find the communication settings in the Communication Settings window on the toolbar. You need to select the correct communication driver, port, baud rate, and other parameters for your device.
        • -
        • Problem: The software cannot download or upload your project to your device.
        • -
        • Solution: Check if your device has enough memory space and battery power to store and run your project. You also need to check if your download mode is correct. You can select Normal Mode or Fast Mode in the Download Project window on the toolbar. Normal Mode is slower but safer, while Fast Mode is faster but riskier. You also need to check if your security settings are correct. You can set passwords, user levels, encryption modes, and date of shutdown in the Security Settings window on the toolbar. You need to enter the correct password and user level to download or upload your project.
        • -
        - -

        How to Update MCGS Embedded Configuration Software 12 8

        - -

MCGS Embedded Configuration Software 12 8 is constantly updated and improved by the developer. Updating the software can help you fix bugs, improve performance, and add new features and drivers. Here are the steps to update MCGS Embedded Configuration Software 12 8:

        - -
          -
        1. Go to this link and click on the download button. The file size is about 1.5 GB and the password to extract it is plc247.com.
        2. -
        3. Extract the downloaded file using WinRAR or any other software that can handle RAR files.
        4. -
        5. Open the extracted folder and double-click on setup.exe to launch the installer.
        6. -
        7. Follow the on-screen instructions to complete the installation. You may need to enter some information in Chinese, such as your name and company name.
        8. -
        9. When the installation is finished, you can launch MCGS Embedded Configuration Software 12 8 from the Start menu or the desktop shortcut.
        10. -
        - -

        Note: Before you update the software, you need to backup your project and uninstall the previous version of the software. To backup your project, you can use the Backup Project function on the toolbar or copy and paste your project folder to another location. To uninstall the previous version of the software, you can use the Control Panel or delete the installation folder manually.

        - -

        Conclusion

        - -

        MCGS Embedded Configuration Software 12 8 is a tool for designing and programming HMI screens for various devices. It supports 800 communication drivers for popular PLCs and has multi-layered security features. It also allows you to set the date of shutdown, which will automatically lock the HMI when inactive. In this article, we have shown you how to download MCGS Embedded Configuration Software 12 8 and how to use it for your devices. We have also explained the features, benefits, troubleshooting tips, and update methods of using MCGS Embedded Configuration Software 12 8. We hope this article has been helpful and informative for you. If you have any questions or feedback, please leave a comment below.

        -

        Conclusion

        - -

        In this article, we have discussed how to download MCGS Embedded Configuration Software 12 8 and how to use it for your devices. We have also shared some tips to avoid common errors and some methods to update the software. We have also explained the features and benefits of using MCGS Embedded Configuration Software 12 8 for creating HMI screens. We hope this article has been helpful and informative for you. If you have any questions or comments, please feel free to leave them below.

        3cee63e6c2
        -
        -
        \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/ERRORCODE0xC004F017activateoffice201321 !!TOP!!.md b/spaces/terfces0erbo/CollegeProjectV2/ERRORCODE0xC004F017activateoffice201321 !!TOP!!.md deleted file mode 100644 index a3119c7fb3e973ddbce957f244bca234ce9d931b..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/ERRORCODE0xC004F017activateoffice201321 !!TOP!!.md +++ /dev/null @@ -1,33 +0,0 @@ -
        -

        How to Fix ERRORCODE0xC004F017 When Activating Office 2013

        -

        If you are trying to activate your Office 2013 product and you encounter the error code 0xC004F017, it means that your computer cannot connect to the Microsoft activation servers. This can happen due to various reasons, such as network issues, firewall settings, antivirus software, or corrupted system files. In this article, we will show you some possible solutions to fix this error and activate your Office 2013 successfully.

        -

        ERRORCODE0xC004F017activateoffice201321


        Downloadhttps://bytlly.com/2uGloI



        -

        Method 1: Check Your Internet Connection

        -

        The first thing you should do is to make sure that your computer is connected to the internet and that you can access the Microsoft website. You can try to open a web browser and go to www.microsoft.com. If you can see the website, then your internet connection is working fine. If not, you may need to troubleshoot your network settings or contact your internet service provider.

        -

        Method 2: Disable Your Firewall and Antivirus Software

        -

        Another possible cause of the error code 0xC004F017 is that your firewall or antivirus software is blocking the communication between your computer and the Microsoft activation servers. To test this, you can temporarily disable your firewall and antivirus software and try to activate your Office 2013 again. If the activation succeeds, then you need to add an exception for Office 2013 in your firewall and antivirus settings. If the activation fails, then you can re-enable your firewall and antivirus software and try another method.

        -

        Method 3: Run the Office Activation Troubleshooter

        -

        Microsoft provides a tool called the Office Activation Troubleshooter that can help you diagnose and fix common activation issues. To run this tool, follow these steps:

        -

        -
          -
        1. Open any Office 2013 application, such as Word or Excel.
        2. -
        3. Click on the File tab and select Account.
        4. -
        5. Under Product Information, click on Change Product Key.
        6. -
        7. In the Enter your product key window, click on Use the automated phone system instead.
        8. -
        9. In the Activate Office by telephone window, click on I want to activate the software by telephone.
        10. -
        11. In the next window, click on Run the Activation Troubleshooter.
        12. -
        13. Follow the instructions on the screen to complete the troubleshooting process.
        14. -
        -

        If the troubleshooter fixes the error code 0xC004F017, then you can activate your Office 2013 normally. If not, then you can try another method.

        -

        Method 4: Repair Your Office Installation

        -

        Sometimes, the error code 0xC004F017 can occur due to corrupted or missing system files related to Office 2013. To fix this, you can try to repair your Office installation using the Control Panel. To do this, follow these steps:

        -
          -
        1. Open the Control Panel and select Programs and Features.
        2. -
        3. Find Microsoft Office 2013 in the list of installed programs and right-click on it.
        4. -
        5. Select Change from the context menu.
        6. -
        7. In the Change your installation of Microsoft Office window, select Repair and click on Continue.
        8. -
        9. Wait for the repair process to finish and restart your computer.
        10. -
        -

        After repairing your Office installation, try to activate your Office 2013 again. If the error code 0xC004F017 persists, then you may need to contact Microsoft support for further assistance.

        d5da3c52bf
        -
        -
        \ No newline at end of file diff --git a/spaces/thealphhamerc/audio-to-text/README.md b/spaces/thealphhamerc/audio-to-text/README.md deleted file mode 100644 index 46a402f5959bf46c1fe33ea0ff904beef6ced1fe..0000000000000000000000000000000000000000 --- a/spaces/thealphhamerc/audio-to-text/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Audio To Text -emoji: 🐨 -colorFrom: yellow -colorTo: purple -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/thov/medicalSegmentation/src/medicalDataLoader.py b/spaces/thov/medicalSegmentation/src/medicalDataLoader.py deleted file mode 100644 index a501c72474c3713a444f189ccd53a3874285ce27..0000000000000000000000000000000000000000 --- a/spaces/thov/medicalSegmentation/src/medicalDataLoader.py +++ /dev/null @@ -1,115 +0,0 @@ -from __future__ import print_function, division -import os -import torch -import pandas as pd -from skimage import io, transform -import numpy as np -from torch.utils.data import Dataset, DataLoader -from torchvision import transforms, utils -from PIL import Image, ImageOps -from random import random, randint - -import warnings -warnings.filterwarnings("ignore") - -def make_dataset(root, mode): - assert mode in ['train','val', 'test'] - items = [] - - if mode == 'train': - train_img_path = os.path.join(root, 'train', 'Img') - train_mask_path = os.path.join(root, 'train', 'GT') - - images = os.listdir(train_img_path) - labels = os.listdir(train_mask_path) - - images.sort() - labels.sort() - - for it_im, it_gt in zip(images, labels): - item = (os.path.join(train_img_path, it_im), os.path.join(train_mask_path, it_gt)) - items.append(item) - - - elif mode == 'val': - val_img_path = os.path.join(root, 'val', 'Img') - val_mask_path = os.path.join(root, 'val', 'GT') - - images = os.listdir(val_img_path) - labels = os.listdir(val_mask_path) - - images.sort() - labels.sort() - - for it_im, it_gt in zip(images, labels): - item = (os.path.join(val_img_path, it_im), os.path.join(val_mask_path, it_gt)) - items.append(item) - else: - test_img_path = os.path.join(root, 'test', 'Img') - test_mask_path = os.path.join(root, 'test', 'GT') - - images = os.listdir(test_img_path) - labels = os.listdir(test_mask_path) - - images.sort() - labels.sort() - - for it_im, it_gt in zip(images, labels): - item = (os.path.join(test_img_path, it_im), os.path.join(test_mask_path, it_gt)) - items.append(item) - - return items - - -class MedicalImageDataset(Dataset): - """Face Landmarks dataset.""" - - def __init__(self, mode, root_dir, transform=None, mask_transform=None, augment=False, equalize=False): - """ - Args: - root_dir (string): Directory with all the images. - transform (callable, optional): Optional transform to be applied - on a sample. 
- """ - self.root_dir = root_dir - self.transform = transform - self.mask_transform = mask_transform - self.imgs = make_dataset(root_dir, mode) - self.augmentation = augment - self.equalize = equalize - self.mode = mode - - def __len__(self): - return len(self.imgs) - - def augment(self, img, mask): - if random() > 0.5: - img = ImageOps.flip(img) - mask = ImageOps.flip(mask) - if random() > 0.5: - img = ImageOps.mirror(img) - mask = ImageOps.mirror(mask) - if random() > 0.5: - angle = random() * 60 - 30 - img = img.rotate(angle) - mask = mask.rotate(angle) - return img, mask - - def __getitem__(self, index): - img_path, mask_path = self.imgs[index] - img = Image.open(img_path) - mask = Image.open(mask_path).convert('L') - - if self.equalize: - img = ImageOps.equalize(img) - - if self.augmentation: - img, mask = self.augment(img, mask) - - if self.transform: - img = self.transform(img) - mask = self.mask_transform(mask) - - return [img, mask, img_path] - - diff --git a/spaces/thuanz123/peft-sd-realfill/README.md b/spaces/thuanz123/peft-sd-realfill/README.md deleted file mode 100644 index dae69f8d501a92cb1b4756e389671ed778f07dd5..0000000000000000000000000000000000000000 --- a/spaces/thuanz123/peft-sd-realfill/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Peft Lora Sd Dreambooth -emoji: 🎨 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download Microsoft Excel 2019 and Boost Your Productivity with These Tips.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download Microsoft Excel 2019 and Boost Your Productivity with These Tips.md deleted file mode 100644 index ac004d8e9c32fbfc96a5fb8a56d481ba5d78f2a6..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Download Microsoft Excel 2019 and Boost Your Productivity with These Tips.md +++ /dev/null @@ -1,26 +0,0 @@ - -

        How to Download Microsoft Excel 2019 for Free

        -

        Microsoft Excel 2019 is one of the most popular and powerful spreadsheet applications in the world. It allows you to create, edit, and analyze data in various formats, such as tables, charts, graphs, and pivot tables. Excel 2019 also comes with new features and enhancements, such as improved formulas, functions, charts, and data analysis tools.

        -

        If you want to download Microsoft Excel 2019 for free, you have a few options. In this article, we will show you how to get Excel 2019 without paying a dime.

        -

        microsoft excel 2019 crack download


        DOWNLOAD » https://urlcod.com/2uK5a1



        - -

        Option 1: Use Microsoft Office Online

        -

        One of the easiest ways to use Excel 2019 for free is to use Microsoft Office Online. This is a web-based version of Microsoft Office that lets you access and edit your files from any browser. You can use Office Online to create and edit documents, spreadsheets, presentations, and more.

        -

        To use Office Online, you need a Microsoft account. If you don't have one, you can create one for free at https://signup.live.com. Once you have an account, you can go to https://office.com and sign in with your credentials. You will see a dashboard with various apps, including Excel. Click on Excel to launch the online version of Excel 2019.

        -

        Office Online has most of the features and functionality of the desktop version of Excel 2019. However, some advanced features may not be available or may have limited functionality. For example, you may not be able to use macros, add-ins, or external data sources. Also, you need an internet connection to use Office Online.

        - -

        Option 2: Use Microsoft Office Mobile Apps

        -

        Another way to use Excel 2019 for free is to use Microsoft Office Mobile Apps. These are apps that let you access and edit your files from your smartphone or tablet. You can use Office Mobile Apps to create and edit documents, spreadsheets, presentations, and more.

        -

        To use Office Mobile Apps, you need a Microsoft account and a compatible device. You can download the apps from the Google Play Store or the Apple App Store. The apps are free for devices with screen sizes up to 10.1 inches. For larger devices, you need an Office 365 subscription to unlock all the features.

        -

        -

        Office Mobile Apps have most of the features and functionality of the desktop version of Excel 2019. However, some advanced features may not be available or may have limited functionality. For example, you may not be able to use macros, add-ins, or external data sources. Also, you need an internet connection to use Office Mobile Apps.

        - -

        Option 3: Use Microsoft Office Trial Version

        -

        A third way to use Excel 2019 for free is to use Microsoft Office Trial Version. This is a version of Microsoft Office that lets you try out the full features and functionality of the desktop version of Excel 2019 for a limited time. You can use Office Trial Version to create and edit documents, spreadsheets, presentations, and more.

        -

        To use Office Trial Version, you need a Microsoft account and a compatible device. You can download the trial version from https://www.microsoft.com/en-us/evalcenter/evaluate-office-365-proplus. You will get a 30-day trial period to use all the features and functionality of Office 365 ProPlus, which includes Excel 2019.

        -

        Office Trial Version has all the features and functionality of the desktop version of Excel 2019. However, after the trial period expires, you will need to purchase an Office 365 subscription or a standalone license to continue using it.

        - -

        Conclusion

        -

        In this article, we have shown you how to download Microsoft Excel 2019 for free using three different options: Office Online, Office Mobile Apps, and Office Trial Version. Each option has its own advantages and disadvantages depending on your needs and preferences. We hope this article has helped you find the best option for you.

        ddb901b051
        -
        -
        \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Cookie Run Kingdom for Chromebook How to Install and Play the RPG Game.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Cookie Run Kingdom for Chromebook How to Install and Play the RPG Game.md deleted file mode 100644 index 96f7d18d9b09073f44677eb96d34057649200388..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Cookie Run Kingdom for Chromebook How to Install and Play the RPG Game.md +++ /dev/null @@ -1,126 +0,0 @@ - -

        Cookie Run: Kingdom Download Chromebook

        -

        If you are looking for a fun and addictive game to play on your Chromebook, you might want to check out Cookie Run: Kingdom. This is a role-playing game where you can build your own cookie kingdom, recruit and upgrade cookie heroes, and battle against evil forces. You can also join guilds, chat with other players, and enjoy various events and rewards.

        -

        But what is a Chromebook and how can you download Cookie Run: Kingdom on it? A Chromebook is a laptop that runs on ChromeOS, a fast, simple, and secure operating system made by Google. Chromebooks have many benefits, such as booting up in seconds, updating automatically, having built-in virus protection, and being compatible with Android apps.

        -

        cookie run kingdom download chromebook


        Downloadhttps://bltlly.com/2uOhn9



        -

        Yes, you read that right. You can install and use Android apps on your Chromebook, just like you would on your smartphone or tablet. This means you can access thousands of games, including Cookie Run: Kingdom, from the Google Play Store app on your Chromebook. In this article, we will show you how to do that step by step.

        -

        Step-by-step guide to download Cookie Run: Kingdom on a Chromebook

        -

        Before you start downloading Cookie Run: Kingdom on your Chromebook, you need to make sure that your device supports Android apps. Not all Chromebooks have this feature, so you need to check if yours does. You can do this by going to this page and looking for your model name. If your Chromebook is listed there, it means it can run Android apps.

        -

        Once you have confirmed that your Chromebook supports Android apps, you need to enable the Google Play Store app on your device. This app will allow you to search for and install Android apps on your Chromebook. To enable the Google Play Store app, follow these steps:

        -
          -
        1. Click on your account photo in the bottom-right corner of the screen.
        2. -
        3. Select Settings.
        4. -
        5. Under Apps, select Google Play Store.
        6. -
        7. Turn on Install apps and games from Google Play on your Chromebook.
        8. -
        9. You will see a window with the Google Play terms of service. Click Agree.
        10. -
        11. The Google Play Store app will open. You might need to sign in with your Google account if you haven't already.
        12. -
        -

        Now that you have enabled the Google Play Store app on your Chromebook, you can search for Cookie Run: Kingdom and install it. To do this, follow these steps:

        -
          -
        1. Open the Google Play Store app from the Launcher or the Shelf.
        2. -
        3. In the search box, type Cookie Run: Kingdom and press Enter.
        4. -
        5. You will see the game's page with its description, screenshots, ratings, reviews, etc. Click on Install.
        6. -
        7. The game will download and install automatically on your Chromebook. You will see a notification when it is done.
        8. -
        9. To launch the game, click on Open from the notification or find it in the Launcher or the Shelf.
        10. -
        -

Congratulations! You have successfully downloaded Cookie Run: Kingdom on your Chromebook and you are ready to play. But how can you optimize your gaming experience on this device? Here are some tips and tricks to help you out.

        -


        -

        Tips and tricks to optimize your gaming experience on a Chromebook

        -

        Playing Cookie Run: Kingdom on a Chromebook can be fun and convenient, but it can also have some challenges. For example, you might encounter some lagging, crashing, or compatibility issues. To avoid or minimize these problems, you can try the following tips and tricks:

        -
          -
        • Adjust the display settings and resolution. Depending on the size and quality of your Chromebook's screen, you might want to change the display settings and resolution of the game to suit your preferences. You can do this by going to the game's settings menu and choosing the graphics option. You can also adjust the brightness, contrast, and color of your Chromebook's screen by going to Settings > Device > Displays.
        • -
        • Use keyboard and mouse controls for better accuracy and speed. Although Cookie Run: Kingdom is designed for touchscreens, you can also use your Chromebook's keyboard and mouse to control the game. This can give you more precision and responsiveness, especially in battles and quests. You can customize the keyboard and mouse controls by going to the game's settings menu and choosing the controls option. You can also use a gamepad or a controller if your Chromebook supports them.
        • -
        • Connect headphones or speakers for better sound quality. Cookie Run: Kingdom has a lot of sound effects and music that add to the fun and excitement of the game. To enjoy them fully, you might want to connect headphones or speakers to your Chromebook. This can enhance the sound quality and volume of the game, as well as block out any background noise. You can connect headphones or speakers to your Chromebook via Bluetooth or a 3.5mm audio jack.
        • -
        • Join a guild and chat with other players. Cookie Run: Kingdom is not only a solo game, but also a social game. You can join a guild and chat with other players from around the world. This can help you make friends, exchange tips, request help, and participate in guild wars and events. You can join a guild by going to the game's main menu and choosing the guild option. You can also chat with other players by tapping on the chat icon on the bottom-right corner of the screen.
        • -
        -

        These are some of the tips and tricks that can help you optimize your gaming experience on a Chromebook. Of course, you can also experiment with different settings and options to find what works best for you.

        -

        Conclusion

        -

        In this article, we have shown you how to download Cookie Run: Kingdom on a Chromebook step by step. We have also given you some tips and tricks to optimize your gaming experience on this device. We hope you found this article helpful and informative.

        -

        If you are interested in playing Cookie Run: Kingdom on your Chromebook, why not give it a try? You might be surprised by how much fun it is. You can also learn more about the game and the Chromebook by visiting the official website of Cookie Run: Kingdom and the official website of Chromebook.

        -

        Thank you for reading this article and happy gaming!

        -

        FAQs

        -

        Here are some frequently asked questions about Cookie Run: Kingdom on a Chromebook:

        -

        What are the system requirements for Cookie Run: Kingdom?

        -

        The system requirements for Cookie Run: Kingdom are as follows:

        -
OS | RAM | Storage | Internet
Android 4.4 or higher | 1 GB or higher | 1 GB or higher | Required
        -

        If your Chromebook meets these requirements, you should be able to play Cookie Run: Kingdom smoothly.

        -

        How can I update Cookie Run: Kingdom on my Chromebook?

        -

        To update Cookie Run: Kingdom on your Chromebook, you need to follow these steps:

        -
          -
        1. Open the Google Play Store app from the Launcher or the Shelf.
        2. -
        3. In the search box, type Cookie Run: Kingdom and press Enter.
        4. -
        5. You will see the game's page with an Update button if there is a new version available.
        6. -
        7. Click on Update and wait for the download and installation to finish.
        8. -
        9. You can also enable auto-update for Cookie Run: Kingdom by tapping on the three dots icon on the top-right corner of the game's page and selecting Auto-update.
        10. -
        Here are some more FAQs about Cookie Run: Kingdom on a Chromebook:

        -

        How can I backup and sync my game data on my Chromebook?

        -

        To backup and sync your game data on your Chromebook, you need to follow these steps:

        -
          -
        1. Open the game and go to the settings menu.
        2. -
        3. Select the account option and choose Google Play Games.
        4. -
        5. Sign in with your Google account and allow the game to access your data.
        6. -
        7. You will see a message that says your game data is synced with Google Play Games.
        8. -
        9. You can also enable cloud save by tapping on the cloud icon on the top-left corner of the screen.
        10. -
        -

        This way, you can restore your game data if you switch devices or reinstall the game.

        -

        How can I contact the game developer or customer support?

        -

        If you have any questions, feedback, or issues about the game, you can contact the game developer or customer support by following these steps:

        -
          -
        1. Open the game and go to the settings menu.
        2. -
        3. Select the help option and choose Contact Us.
        4. -
        5. You will see a form where you can fill in your name, email, subject, and message.
        6. -
        7. Attach any screenshots or files if needed and click Send.
        8. -
        9. You will receive a confirmation email and a reply from the game developer or customer support within 24 hours.
        10. -
        -

        How can I uninstall Cookie Run: Kingdom from my Chromebook?

        -

        If you want to uninstall Cookie Run: Kingdom from your Chromebook, you need to follow these steps:

        -
          -
        1. Open the Launcher or the Shelf and find Cookie Run: Kingdom.
        2. -
        3. Right-click on the game icon and select Uninstall.
        4. -
        5. You will see a pop-up window asking you to confirm your action. Click Uninstall again.
        6. -
        7. The game will be removed from your Chromebook. You will see a notification when it is done.
        8. -

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Brawl Stars on Windows 10 and Experience the Most Popular Mobile Game on PC.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Brawl Stars on Windows 10 and Experience the Most Popular Mobile Game on PC.md deleted file mode 100644 index d7475794eeea8f23419064d15997b174ba3a11f1..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Brawl Stars on Windows 10 and Experience the Most Popular Mobile Game on PC.md +++ /dev/null @@ -1,183 +0,0 @@ - -

        How to Download Brawl Stars for Windows 10

        -

        If you are a fan of fast-paced multiplayer games, you might have heard of Brawl Stars, the latest hit game from Supercell, the makers of Clash of Clans and Clash Royale. Brawl Stars is a mobile game that lets you team up with your friends and compete in various game modes, from 3v3 battles to battle royale. You can also unlock and upgrade dozens of unique characters, each with their own abilities and skins.

        -

        But what if you want to play Brawl Stars on your Windows 10 PC? Is it possible? And if so, how do you do it? In this article, we will answer these questions and show you how to download Brawl Stars for Windows 10 in a few simple steps. We will also give you some tips and tricks to make your gaming experience even better.

        -

        brawl stars download for windows 10


        DOWNLOAD 🆓 https://bltlly.com/2uOrIx



        -

        What is Brawl Stars?

        -

        Before we get into the details of how to download Brawl Stars for Windows 10, let's take a quick look at what Brawl Stars is and why it is so popular.

        -

        A fast-paced multiplayer game from Supercell

        -

        Brawl Stars is a free-to-play game that was released globally in December 2018 by Supercell, a Finnish game developer that is known for creating some of the most successful mobile games in history, such as Clash of Clans, Clash Royale, and Boom Beach. Supercell has a reputation for creating games that are easy to pick up and play, but hard to master, and Brawl Stars is no exception.

        -

        Brawl Stars is a game that combines elements of twin-stick shooters, MOBAs, and battle royale genres. You can choose from a variety of game modes, each with a different objective and rules. You can also choose from a roster of colorful characters, called Brawlers, each with their own personality, stats, and skills. You can team up with your friends or play solo, and fight against other players from around the world in real-time matches that last under three minutes.

        -

        Features of Brawl Stars

        -

        Brawl Stars has many features that make it an exciting and addictive game. Here are some of them:

        -

        Different game modes

        -

        Brawl Stars has several game modes that cater to different play-styles and preferences. Here are some examples:

        -
          -
        • Gem Grab: A 3v3 mode where you have to collect and hold 10 gems to win, but if you die, you drop all your gems.
        • -
        • Showdown: A solo or duo mode where you have to survive as long as possible in a shrinking map, while collecting power-ups and eliminating other players.
        • -
        • Brawl Ball: A 3v3 mode where you have to score two goals before the other team does, but you can also use your attacks to knock the ball out of their hands.
        • -
        • Bounty: A 3v3 mode where you have to kill as many enemies as possible, while avoiding getting killed yourself. Each kill increases your bounty, but also makes you a bigger target.
        • -
        • Heist: A 3v3 mode where you have to either attack or defend a safe full of gems. The attackers have to break the safe, while the defenders have to protect it.
        • -
        • Hot Zone: A 3v3 mode where you have to control a zone on the map for as long as possible, while preventing the enemy team from doing the same.
        • -
        • Knockout: A 3v3 mode where you have to eliminate all the enemies in a best-of-three rounds format, with no respawns.
        • -
        • Special Events: These are limited-time modes that offer unique challenges and rewards, such as Boss Fight, Robo Rumble, and Super City Rampage.
        • -
        -

        Unique characters

        -

        Brawl Stars has over 40 Brawlers that you can unlock and play with, each with their own strengths and weaknesses. You can also customize your Brawlers with different skins and pins. Here are some examples of Brawlers:

        -


        -
          -
        • Shelly: A shotgun-wielding Brawler who can deal massive damage at close range, and charge up her Super to blast enemies away.
        • Colt: A revolver-wielding Brawler who can fire a rapid burst of bullets, and use his Super to unleash a barrage of bullets that can destroy obstacles.
        • Brock: A rocket-launcher-wielding Brawler who can deal splash damage at long range, and use his Super to rain down rockets on a large area.
        • Nita: A bear-summoning Brawler who can attack enemies with shockwaves, and use her Super to summon a big bear that can chase and maul enemies.
        • Dynamike: A dynamite-throwing Brawler who can lob explosives over walls, and use his Super to throw a big bomb that can blow up enemies and terrain.
        • Tara: A card-throwing Brawler who can pierce through multiple enemies with her attacks, and use her Super to pull enemies into a black hole that deals damage.
        • Poco: A guitar-playing Brawler who can heal his allies with his attacks, and use his Super to heal all nearby allies with a burst of music.
        • Mortis: A shovel-wielding Brawler who can dash forward with his attacks, and use his Super to unleash a swarm of bats that can heal him and damage enemies.
        • Spike: A cactus-like Brawler who can throw spikes that explode and spread in different directions, and use his Super to create a field of cacti that slows down and damages enemies.
        • Crow: A crow-like Brawler who can throw poisoned daggers that deal damage over time, and use his Super to jump and land on enemies while throwing daggers around him.

        Constantly evolving content


        Brawl Stars is a game that is constantly updated with new content and features. You can expect to see new Brawlers, skins, maps, game modes, events, seasons, quests, rewards, and more. You can also participate in the Brawl Pass, which is a progression system that lets you unlock exclusive items by completing challenges and earning tokens. You can also join the Brawl Stars Championship, which is a global esports tournament that anyone can enter and compete for glory and prizes.

        Why play Brawl Stars on Windows 10?

        Now that you know what Brawl Stars is and what it offers, you might be wondering why you would want to play it on your Windows 10 PC instead of your mobile device. Well, there are some benefits and drawbacks of playing Brawl Stars on PC that you should consider before making your decision.

        Benefits of playing on PC

        Playing Brawl Stars on PC has some advantages over playing on mobile. Here are some of them:

        Bigger screen

        One of the most obvious benefits of playing Brawl Stars on PC is that you can enjoy the game on a bigger screen. This can make the game more immersive and enjoyable, as well as give you a better view of the action and the map. You can also adjust the resolution and graphics settings to suit your preferences.

        Better controls

        Another benefit of playing Brawl Stars on PC is that you can use a controller or a keyboard and mouse to control your Brawler. This can give you more accuracy and responsiveness, as well as more comfort and convenience. You can also customize your keybindings and sensitivity to fit your play-style.

        Higher performance

        A third benefit of playing Brawl Stars on PC is that you can enjoy a smoother and faster gameplay experience. You can avoid issues such as lag, crashes, overheating, battery drain, and storage space that might affect your mobile device. You can also take advantage of the higher processing power and memory of your PC to run the game at its optimal level.

        Drawbacks of playing on PC

        Playing Brawl Stars on PC also has some disadvantages over playing on mobile. Here are some of them:

        No official support

        One of the main drawbacks of playing Brawl Stars on PC is that there is no official support from Supercell for this platform. This means that you might encounter some compatibility issues, bugs, or errors that are not addressed by the developers. You might also miss out on some features or updates that are exclusive to the mobile version.

        Need an emulator

        Another drawback of playing Brawl Stars on PC is that you need an emulator to run the game. An emulator is software that simulates the environment of a mobile device on your PC, allowing you to run mobile apps and games. However, not all emulators are reliable, safe, or easy to use. You might have to do some research and testing to find the best emulator for Brawl Stars, and you might also have to deal with ads, pop-ups, or malware bundled with some of them.

        How to download Brawl Stars for Windows 10?

        If you have decided to play Brawl Stars on your Windows 10 PC, you might be wondering how to do it. Well, it's not very complicated, but it does require some steps. Here is a simple guide on how to download Brawl Stars for Windows 10:

        Step 1: Choose an emulator

        The first step is to choose an emulator that can run Brawl Stars on your PC. There are many emulators available online, but not all of them are compatible with Brawl Stars or Windows 10. You have to look for an emulator that is fast, stable, secure, and easy to use.


        One of the most popular and recommended emulators for Brawl Stars is BlueStacks. BlueStacks is a free emulator that has been designed specifically for gaming. It has a high compatibility rate with most Android games and apps, including Brawl Stars. It also has features such as keyboard and mouse support, gamepad support, multi-instance mode, macro recorder, and more.


        To download BlueStacks, you can visit their official website at https://www.bluestacks.com/. You can also check out other emulators such as NoxPlayer, MEmu, or LDPlayer, but make sure they meet the minimum system requirements for Brawl Stars.
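
        If you want to sanity-check your machine against an emulator's minimum system requirements before installing anything, the small optional Python sketch below prints the basics that requirement pages usually list (OS version, CPU architecture, free disk space). It is only an illustration: it assumes a Windows system with a C: drive, so adjust the path if yours differs.

        ```python
        import platform
        import shutil

        # Print the details most emulator requirement pages ask about.
        print("OS:", platform.system(), platform.release())   # e.g. "Windows 10"
        print("CPU architecture:", platform.machine())        # e.g. "AMD64"

        # Assumes a C: drive; change the path for other setups.
        free_gb = shutil.disk_usage("C:\\").free / 1_000_000_000
        print(f"Free disk space: {free_gb:.1f} GB")
        ```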

        Step 2: Install the emulator

        The second step is to install the emulator on your PC. This is usually a straightforward process that involves downloading and running the installer file from the emulator's website. You might have to agree to some terms and conditions, choose a destination folder, and follow some instructions on the screen.


        Once the installation is complete, you can launch the emulator and sign in with your Google account. This will allow you to access the Google Play Store and download apps and games from there.

        Step 3: Download Brawl Stars from the emulator

        The third step is to download Brawl Stars from the emulator's app store. This is similar to how you would download any app or game from your mobile device. You just have to search for Brawl Stars in the app store, click on the install button, and wait for the download and installation to finish.


        Alternatively, you can also download Brawl Stars from an external source, such as an APK file. An APK file is a package file that contains all the data and resources needed to run an Android app or game. You can find APK files for Brawl Stars from various websites online, but make sure they are safe and trustworthy.


        To install an APK file on your emulator, you just have to drag and drop it onto the emulator's window, or browse to the folder where you saved it, and double-click on it. The emulator will then install the APK file and create a shortcut for Brawl Stars on your home screen.
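
        If you are comfortable with a command line, the same sideloading step can also be scripted. The short Python sketch below is only an illustration, not an official Brawl Stars or BlueStacks tool: it assumes the Android platform tools (adb) are installed and on your PATH, that your emulator exposes adb at 127.0.0.1:5555 (the port differs between emulators and versions, so check your emulator's settings), and that the APK path is a placeholder you would replace with your own.

        ```python
        import subprocess

        APK_PATH = r"C:\Downloads\brawl_stars.apk"   # placeholder path to the APK you downloaded
        EMULATOR_ADDR = "127.0.0.1:5555"             # assumed adb address; confirm in your emulator's settings

        # Attach adb to the running emulator, then sideload the APK.
        # The -r flag reinstalls over an existing copy, so the same command also works for updates.
        subprocess.run(["adb", "connect", EMULATOR_ADDR], check=True)
        subprocess.run(["adb", "-s", EMULATOR_ADDR, "install", "-r", APK_PATH], check=True)
        ```

        If anything in that sketch does not match your setup, the drag-and-drop method described above remains the simpler option.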

        Step 4: Enjoy the game

        The fourth and final step is to enjoy the game. You can launch Brawl Stars from the emulator's home screen or app drawer, and start playing. You can also adjust the settings, such as the graphics, sound, language, and controls, to suit your preferences.


        You can also link your Brawl Stars account to your Supercell ID, which is a service that lets you save and sync your progress across different devices. This way, you can switch between playing on your PC and your mobile device without losing your data. You can also access your friends list, chat, and club from your Supercell ID.

        Tips and tricks for playing Brawl Stars on Windows 10

        Now that you know how to download Brawl Stars for Windows 10, you might want to learn some tips and tricks to improve your gameplay and have more fun. Here are some of them:

        Customize your settings

        One of the first things you should do when playing Brawl Stars on PC is to customize your settings. You can access the settings menu by clicking on the gear icon on the top right corner of the screen. Here, you can change various options, such as:

        • Graphics: You can choose the graphics quality from low, medium, or high, depending on your PC's specifications and your preference. You can also enable or disable shadows, bloom, and FPS cap.
        • Sound: You can adjust the volume of the music, sound effects, and voice chat. You can also mute or unmute the sound altogether.
        • Language: You can choose the language of the game from a list of available options. You can also change the region of the game server from global to local.
        • Controls: You can choose the control scheme from joystick or tap to move. You can also enable or disable auto-aim, super button, gadget button, and quickfire.
        • Keybindings: You can customize the keybindings for each action, such as moving, aiming, shooting, using super, using gadget, and emote.

        Use a controller or keyboard and mouse


        Another tip for playing Brawl Stars on PC is to use a controller or a keyboard and mouse instead of the emulator's touch controls. This can give you more precision and comfort when playing the game.


        If you have a controller that is compatible with your PC, such as an Xbox or PlayStation controller, you can connect it to your PC via USB or Bluetooth. Then, you can map the controller buttons to the emulator's keys using the emulator's settings or a third-party software.


        If you prefer using a keyboard and mouse, you can also customize the keybindings using the emulator's settings or a third-party software. You can also adjust the mouse sensitivity and DPI to suit your play-style.

        Join a club and chat with other players

        A third tip for playing Brawl Stars on PC is to join a club and chat with other players. A club is a group of players that share a common interest or goal in Brawl Stars. You can join an existing club or create your own club with your friends. By joining a club, you can:

        • Play with club members: You can invite club members to join your team or join their team in any game mode. This way, you can coordinate better and have more fun.
        • Chat with club members: You can chat with club members in real-time using text or voice chat. You can also send messages, emojis, pins, and screenshots to club members.
        • Earn club trophies: You can earn trophies for yourself and your club by winning matches in any game mode. The more trophies you earn, the higher your club's rank will be in the global and local leaderboards.
        • Participate in club events: You can participate in club events that are organized by Supercell or by your club leader. These events can include friendly matches, tournaments, challenges, giveaways, and more.

        Learn from the pros and watch streams


        A fourth tip for playing Brawl Stars on PC is to learn from the pros and watch streams. Brawl Stars has a large and active community of players who share their tips, strategies, guides, and gameplay videos on various platforms, such as YouTube, Twitch, Reddit, Discord, and more. You can learn a lot from watching and listening to these players, as they can teach you how to play better, how to use different Brawlers and game modes, how to counter other Brawlers and strategies, and more. You can also interact with them and ask them questions or feedback.


        Some of the most popular and influential Brawl Stars players and streamers are:

        • KairosTime: A YouTube creator who makes informative and entertaining videos about Brawl Stars, such as tier lists, guides, reviews, news, and more.
        • Rey: A YouTube creator and Twitch streamer who makes high-quality gameplay videos and streams about Brawl Stars, featuring different Brawlers, game modes, tips, and tricks.
        • Lex: A YouTube creator and Twitch streamer who makes fun and funny videos and streams about Brawl Stars, featuring challenges, collaborations, memes, and more.
        • Tom: A professional player and Twitch streamer who plays for Tribe Gaming, one of the top Brawl Stars esports teams in the world. He showcases his skills and strategies in competitive matches and tournaments.
        • OJ: A YouTube creator who makes educational and analytical videos about Brawl Stars, such as mechanics, interactions, stats, and more.

        Conclusion


        Brawl Stars is a game that can be enjoyed by anyone who loves fast-paced multiplayer games. It has a lot of features that make it fun and exciting, such as different game modes, unique characters, constantly evolving content, and more. You can also play it on your Windows 10 PC using an emulator, which can give you some benefits over playing on mobile, such as a bigger screen, better controls, and higher performance. However, you also have to consider some drawbacks of playing on PC, such as no official support and the need for an emulator.


        If you want to play Brawl Stars on your Windows 10 PC, you just have to follow these steps:

        1. Choose an emulator that can run Brawl Stars on your PC.
        2. Install the emulator on your PC.
        3. Download Brawl Stars from the emulator's app store or an external source.
        4. Enjoy the game.

        You can also improve your gameplay and have more fun by following these tips:

        • Customize your settings.
        • Use a controller or keyboard and mouse.
        • Join a club and chat with other players.
        • Learn from the pros and watch streams.

        We hope this article has helped you learn how to download Brawl Stars for Windows 10. If you have any questions or feedback, feel free to leave a comment below. Happy brawling!

        FAQs

        Here are some frequently asked questions about Brawl Stars and playing it on Windows 10:

        Is Brawl Stars free to play?

        Yes, Brawl Stars is free to play. You can download it from the Google Play Store or the App Store for your mobile device, or from an emulator's app store or an external source for your PC. However, the game does have some optional in-app purchases that can enhance your gameplay, such as gems, coins, skins, and brawl passes. You can buy these with real money or earn them by playing the game.

        Is Brawl Stars cross-platform?

        Yes, Brawl Stars is cross-platform. This means that you can play with other players who are using different devices or platforms, such as Android, iOS, or PC. You can also sync your progress across different devices using your Supercell ID.

        Is Brawl Stars safe for kids?

        Brawl Stars is rated 9+ on the App Store and 7+ on the Google Play Store, which means that it is suitable for most kids. However, the game does have some cartoon violence, online interactions, and in-app purchases that might not be appropriate for younger or more sensitive kids. Therefore, we recommend that parents supervise their kids when playing Brawl Stars and use the parental control features to limit their screen time, spending, and chat options.

        How do I update Brawl Stars on PC?

        To update Brawl Stars on PC, you have to follow the same steps as you would on your mobile device. You have to check the emulator's app store or the external source for any new updates and download them. You might also have to update the emulator itself if there are any new versions available.

        How do I uninstall Brawl Stars from PC?

        To uninstall Brawl Stars from PC, you have to follow the same steps as you would on your mobile device. You have to go to the emulator's app drawer or settings and find Brawl Stars. Then, you have to click on the uninstall button and confirm your action. You might also have to delete any leftover files or folders from your PC.

        401be4b1e0
        \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Ares 3.1.9.4045 Keygen.md b/spaces/tioseFevbu/cartoon-converter/scripts/Ares 3.1.9.4045 Keygen.md deleted file mode 100644 index df0338eeb59d5c9e2729ce59bfd0da6fa9173cdf..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Ares 3.1.9.4045 Keygen.md +++ /dev/null @@ -1,16 +0,0 @@ - -

        Ares 3.1.9.4045: A Free and Open Source File Sharing Program


        Ares is a popular peer-to-peer (P2P) file sharing program that allows users to download and share files with one another. Ares supports various types of media, such as music, videos, images, documents, and software. Ares also has a built-in chat feature that lets users communicate with other members of the network.


        ares 3.1.9.4045 keygen


        Download ►►►►► https://urlcod.com/2uHw0i




        Ares 3.1.9.4045 is the latest version of the software, which was released in 2020. It has several improvements and bug fixes over the previous versions, such as faster downloads, better stability, and enhanced security. Ares 3.1.9.4045 is compatible with Windows 7 and higher operating systems.


        One of the best features of Ares is that it is free and open source software. This means that users do not have to pay any fees or subscriptions to use it. It also means that anyone can access the source code and modify it according to their needs and preferences. Ares is licensed under the GNU General Public License (GPL), which ensures that it remains free and open for everyone.


        To download and install Ares 3.1.9.4045, users can visit the official website of Aresgalaxy[^2^] or other trusted sources such as Softonic[^2^]. The installation process is simple and straightforward, and users can customize their settings and preferences during the setup. Once installed, users can start searching and downloading files from the Ares network.


        Ares is a great choice for anyone who wants to enjoy free and unlimited file sharing with other users around the world. Ares 3.1.9.4045 is a reliable and secure software that offers fast and easy downloads of various types of media.


        One of the challenges of using P2P file sharing programs is that they may expose users to viruses or malware that can harm their computers. Ares 3.1.9.4045 has a built-in virus filter that scans every file before it is downloaded and alerts users of any potential threats. Users can also choose to delete or quarantine any suspicious files that they encounter.


        Another challenge of using P2P file sharing programs is that they may violate the intellectual property rights of the creators or owners of the files. Ares 3.1.9.4045 respects the rights of the content providers and does not support or encourage any illegal or unethical activities. Users are responsible for ensuring that they have the proper permissions and licenses to download and share any files that they obtain through Ares.


        Ares 3.1.9.4045 also has a feature that allows users to create and join chat rooms based on their interests and preferences. Users can chat with other members of the network and exchange opinions, ideas, and recommendations. Users can also create their own chat rooms and invite their friends or contacts to join them.


        Ares 3.1.9.4045 is more than just a file sharing program. It is a community of users who share a common passion for media and entertainment. Ares 3.1.9.4045 connects users with each other and allows them to discover new and exciting content.

        7196e7f11a
        \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/It Alexa Chung Epub Download VERIFIED.md b/spaces/tioseFevbu/cartoon-converter/scripts/It Alexa Chung Epub Download VERIFIED.md deleted file mode 100644 index 595ddb6054e97b92b25d83f51b0e7b8fabf88944..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/It Alexa Chung Epub Download VERIFIED.md +++ /dev/null @@ -1,21 +0,0 @@ -

        How to Download It by Alexa Chung in PDF EPUB Format

        If you are a fan of fashion, music, and Alexa Chung, you might be interested in reading her book It, which is a collection of her personal writings, drawings, and photographs. In this book, Alexa shares her inspirations, musings, and her own very eclectic style, with influences that range from Jane Birkin to Mick Jagger. You can also learn more about her thoughts on life, love, and everything Alexa Chung.


        But how can you get a copy of this book in PDF or EPUB format? PDF and EPUB are popular ebook file formats that you can read on various devices such as computers, tablets, smartphones, and e-readers. PDF and EPUB files are also easy to download and store on your device or in a cloud service.


        it alexa chung epub download


        Download Zip https://urlcod.com/2uHylA




        In this article, we will show you how to download It by Alexa Chung in PDF EPUB format for free from some of the best online sources. We will also provide you with some tips on how to enjoy reading this book and other similar books.

        Where to Download It by Alexa Chung in PDF EPUB Format

        There are many websites that offer free downloads of ebooks in PDF EPUB format, but not all of them are reliable or legal. Some of them may contain viruses, malware, or spam that can harm your device or compromise your privacy. Some of them may also violate the copyright laws and infringe on the author's rights.


        To avoid these risks, we recommend you to use only trusted and reputable websites that provide high-quality and legal downloads of ebooks in PDF EPUB format. Here are some of the best ones that we have found for downloading It by Alexa Chung:

        • OceanofPDF: This website offers a large collection of free ebooks in various genres and languages. You can easily find It by Alexa Chung by searching for the title or the author name. You can also browse through the categories or use the filters to narrow down your search. To download the book, you just need to click on the download button and choose the file format you prefer.
        • Internet Archive: This website is a digital library that preserves and provides access to millions of books, movies, music, and other media. You can find It by Alexa Chung by typing the title or the author name in the search box. You can also explore other related items by using the tags or the collections. To download the book, you just need to click on the PDF or EPUB icon on the right side of the page.
        • Rakuten Kobo: This website is an online bookstore that sells ebooks and audiobooks in various formats and languages. You can find It by Alexa Chung by searching for the title or the author name. You can also read a brief summary and some reviews of the book before buying it. To download the book, you need to create an account and purchase it with your credit card or PayPal. You can then access it from your Kobo app or device.

        How to Enjoy Reading It by Alexa Chung and Other Similar Books


        Once you have downloaded It by Alexa Chung in PDF EPUB format, you can start reading it on your device or e-reader. However, reading an ebook is not the same as reading a printed book. There are some differences and challenges that you need to consider to make your reading experience more enjoyable and satisfying.


        Here are some tips on how to enjoy reading It by Alexa Chung and other similar books:

        • Choose a

          cec2833e83
          \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pkg_resources/_vendor/jaraco/text/__init__.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pkg_resources/_vendor/jaraco/text/__init__.py deleted file mode 100644 index c466378ceba69a335d2beb4d3af92703d52b3831..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pkg_resources/_vendor/jaraco/text/__init__.py +++ /dev/null @@ -1,599 +0,0 @@ -import re -import itertools -import textwrap -import functools - -try: - from importlib.resources import files # type: ignore -except ImportError: # pragma: nocover - from pkg_resources.extern.importlib_resources import files # type: ignore - -from pkg_resources.extern.jaraco.functools import compose, method_cache -from pkg_resources.extern.jaraco.context import ExceptionTrap - - -def substitution(old, new): - """ - Return a function that will perform a substitution on a string - """ - return lambda s: s.replace(old, new) - - -def multi_substitution(*substitutions): - """ - Take a sequence of pairs specifying substitutions, and create - a function that performs those substitutions. - - >>> multi_substitution(('foo', 'bar'), ('bar', 'baz'))('foo') - 'baz' - """ - substitutions = itertools.starmap(substitution, substitutions) - # compose function applies last function first, so reverse the - # substitutions to get the expected order. - substitutions = reversed(tuple(substitutions)) - return compose(*substitutions) - - -class FoldedCase(str): - """ - A case insensitive string class; behaves just like str - except compares equal when the only variation is case. - - >>> s = FoldedCase('hello world') - - >>> s == 'Hello World' - True - - >>> 'Hello World' == s - True - - >>> s != 'Hello World' - False - - >>> s.index('O') - 4 - - >>> s.split('O') - ['hell', ' w', 'rld'] - - >>> sorted(map(FoldedCase, ['GAMMA', 'alpha', 'Beta'])) - ['alpha', 'Beta', 'GAMMA'] - - Sequence membership is straightforward. - - >>> "Hello World" in [s] - True - >>> s in ["Hello World"] - True - - You may test for set inclusion, but candidate and elements - must both be folded. - - >>> FoldedCase("Hello World") in {s} - True - >>> s in {FoldedCase("Hello World")} - True - - String inclusion works as long as the FoldedCase object - is on the right. - - >>> "hello" in FoldedCase("Hello World") - True - - But not if the FoldedCase object is on the left: - - >>> FoldedCase('hello') in 'Hello World' - False - - In that case, use ``in_``: - - >>> FoldedCase('hello').in_('Hello World') - True - - >>> FoldedCase('hello') > FoldedCase('Hello') - False - """ - - def __lt__(self, other): - return self.lower() < other.lower() - - def __gt__(self, other): - return self.lower() > other.lower() - - def __eq__(self, other): - return self.lower() == other.lower() - - def __ne__(self, other): - return self.lower() != other.lower() - - def __hash__(self): - return hash(self.lower()) - - def __contains__(self, other): - return super().lower().__contains__(other.lower()) - - def in_(self, other): - "Does self appear in other?" - return self in FoldedCase(other) - - # cache lower since it's likely to be called frequently. 
- @method_cache - def lower(self): - return super().lower() - - def index(self, sub): - return self.lower().index(sub.lower()) - - def split(self, splitter=' ', maxsplit=0): - pattern = re.compile(re.escape(splitter), re.I) - return pattern.split(self, maxsplit) - - -# Python 3.8 compatibility -_unicode_trap = ExceptionTrap(UnicodeDecodeError) - - -@_unicode_trap.passes -def is_decodable(value): - r""" - Return True if the supplied value is decodable (using the default - encoding). - - >>> is_decodable(b'\xff') - False - >>> is_decodable(b'\x32') - True - """ - value.decode() - - -def is_binary(value): - r""" - Return True if the value appears to be binary (that is, it's a byte - string and isn't decodable). - - >>> is_binary(b'\xff') - True - >>> is_binary('\xff') - False - """ - return isinstance(value, bytes) and not is_decodable(value) - - -def trim(s): - r""" - Trim something like a docstring to remove the whitespace that - is common due to indentation and formatting. - - >>> trim("\n\tfoo = bar\n\t\tbar = baz\n") - 'foo = bar\n\tbar = baz' - """ - return textwrap.dedent(s).strip() - - -def wrap(s): - """ - Wrap lines of text, retaining existing newlines as - paragraph markers. - - >>> print(wrap(lorem_ipsum)) - Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do - eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad - minim veniam, quis nostrud exercitation ullamco laboris nisi ut - aliquip ex ea commodo consequat. Duis aute irure dolor in - reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla - pariatur. Excepteur sint occaecat cupidatat non proident, sunt in - culpa qui officia deserunt mollit anim id est laborum. - - Curabitur pretium tincidunt lacus. Nulla gravida orci a odio. Nullam - varius, turpis et commodo pharetra, est eros bibendum elit, nec luctus - magna felis sollicitudin mauris. Integer in mauris eu nibh euismod - gravida. Duis ac tellus et risus vulputate vehicula. Donec lobortis - risus a elit. Etiam tempor. Ut ullamcorper, ligula eu tempor congue, - eros est euismod turpis, id tincidunt sapien risus a quam. Maecenas - fermentum consequat mi. Donec fermentum. Pellentesque malesuada nulla - a mi. Duis sapien sem, aliquet nec, commodo eget, consequat quis, - neque. Aliquam faucibus, elit ut dictum aliquet, felis nisl adipiscing - sapien, sed malesuada diam lacus eget erat. Cras mollis scelerisque - nunc. Nullam arcu. Aliquam consequat. Curabitur augue lorem, dapibus - quis, laoreet et, pretium ac, nisi. Aenean magna nisl, mollis quis, - molestie eu, feugiat in, orci. In hac habitasse platea dictumst. - """ - paragraphs = s.splitlines() - wrapped = ('\n'.join(textwrap.wrap(para)) for para in paragraphs) - return '\n\n'.join(wrapped) - - -def unwrap(s): - r""" - Given a multi-line string, return an unwrapped version. - - >>> wrapped = wrap(lorem_ipsum) - >>> wrapped.count('\n') - 20 - >>> unwrapped = unwrap(wrapped) - >>> unwrapped.count('\n') - 1 - >>> print(unwrapped) - Lorem ipsum dolor sit amet, consectetur adipiscing ... - Curabitur pretium tincidunt lacus. Nulla gravida orci ... 
- - """ - paragraphs = re.split(r'\n\n+', s) - cleaned = (para.replace('\n', ' ') for para in paragraphs) - return '\n'.join(cleaned) - - - - -class Splitter(object): - """object that will split a string with the given arguments for each call - - >>> s = Splitter(',') - >>> s('hello, world, this is your, master calling') - ['hello', ' world', ' this is your', ' master calling'] - """ - - def __init__(self, *args): - self.args = args - - def __call__(self, s): - return s.split(*self.args) - - -def indent(string, prefix=' ' * 4): - """ - >>> indent('foo') - ' foo' - """ - return prefix + string - - -class WordSet(tuple): - """ - Given an identifier, return the words that identifier represents, - whether in camel case, underscore-separated, etc. - - >>> WordSet.parse("camelCase") - ('camel', 'Case') - - >>> WordSet.parse("under_sep") - ('under', 'sep') - - Acronyms should be retained - - >>> WordSet.parse("firstSNL") - ('first', 'SNL') - - >>> WordSet.parse("you_and_I") - ('you', 'and', 'I') - - >>> WordSet.parse("A simple test") - ('A', 'simple', 'test') - - Multiple caps should not interfere with the first cap of another word. - - >>> WordSet.parse("myABCClass") - ('my', 'ABC', 'Class') - - The result is a WordSet, so you can get the form you need. - - >>> WordSet.parse("myABCClass").underscore_separated() - 'my_ABC_Class' - - >>> WordSet.parse('a-command').camel_case() - 'ACommand' - - >>> WordSet.parse('someIdentifier').lowered().space_separated() - 'some identifier' - - Slices of the result should return another WordSet. - - >>> WordSet.parse('taken-out-of-context')[1:].underscore_separated() - 'out_of_context' - - >>> WordSet.from_class_name(WordSet()).lowered().space_separated() - 'word set' - - >>> example = WordSet.parse('figured it out') - >>> example.headless_camel_case() - 'figuredItOut' - >>> example.dash_separated() - 'figured-it-out' - - """ - - _pattern = re.compile('([A-Z]?[a-z]+)|([A-Z]+(?![a-z]))') - - def capitalized(self): - return WordSet(word.capitalize() for word in self) - - def lowered(self): - return WordSet(word.lower() for word in self) - - def camel_case(self): - return ''.join(self.capitalized()) - - def headless_camel_case(self): - words = iter(self) - first = next(words).lower() - new_words = itertools.chain((first,), WordSet(words).camel_case()) - return ''.join(new_words) - - def underscore_separated(self): - return '_'.join(self) - - def dash_separated(self): - return '-'.join(self) - - def space_separated(self): - return ' '.join(self) - - def trim_right(self, item): - """ - Remove the item from the end of the set. - - >>> WordSet.parse('foo bar').trim_right('foo') - ('foo', 'bar') - >>> WordSet.parse('foo bar').trim_right('bar') - ('foo',) - >>> WordSet.parse('').trim_right('bar') - () - """ - return self[:-1] if self and self[-1] == item else self - - def trim_left(self, item): - """ - Remove the item from the beginning of the set. 
- - >>> WordSet.parse('foo bar').trim_left('foo') - ('bar',) - >>> WordSet.parse('foo bar').trim_left('bar') - ('foo', 'bar') - >>> WordSet.parse('').trim_left('bar') - () - """ - return self[1:] if self and self[0] == item else self - - def trim(self, item): - """ - >>> WordSet.parse('foo bar').trim('foo') - ('bar',) - """ - return self.trim_left(item).trim_right(item) - - def __getitem__(self, item): - result = super(WordSet, self).__getitem__(item) - if isinstance(item, slice): - result = WordSet(result) - return result - - @classmethod - def parse(cls, identifier): - matches = cls._pattern.finditer(identifier) - return WordSet(match.group(0) for match in matches) - - @classmethod - def from_class_name(cls, subject): - return cls.parse(subject.__class__.__name__) - - -# for backward compatibility -words = WordSet.parse - - -def simple_html_strip(s): - r""" - Remove HTML from the string `s`. - - >>> str(simple_html_strip('')) - '' - - >>> print(simple_html_strip('A stormy day in paradise')) - A stormy day in paradise - - >>> print(simple_html_strip('Somebody tell the truth.')) - Somebody tell the truth. - - >>> print(simple_html_strip('What about
          \nmultiple lines?')) - What about - multiple lines? - """ - html_stripper = re.compile('()|(<[^>]*>)|([^<]+)', re.DOTALL) - texts = (match.group(3) or '' for match in html_stripper.finditer(s)) - return ''.join(texts) - - -class SeparatedValues(str): - """ - A string separated by a separator. Overrides __iter__ for getting - the values. - - >>> list(SeparatedValues('a,b,c')) - ['a', 'b', 'c'] - - Whitespace is stripped and empty values are discarded. - - >>> list(SeparatedValues(' a, b , c, ')) - ['a', 'b', 'c'] - """ - - separator = ',' - - def __iter__(self): - parts = self.split(self.separator) - return filter(None, (part.strip() for part in parts)) - - -class Stripper: - r""" - Given a series of lines, find the common prefix and strip it from them. - - >>> lines = [ - ... 'abcdefg\n', - ... 'abc\n', - ... 'abcde\n', - ... ] - >>> res = Stripper.strip_prefix(lines) - >>> res.prefix - 'abc' - >>> list(res.lines) - ['defg\n', '\n', 'de\n'] - - If no prefix is common, nothing should be stripped. - - >>> lines = [ - ... 'abcd\n', - ... '1234\n', - ... ] - >>> res = Stripper.strip_prefix(lines) - >>> res.prefix = '' - >>> list(res.lines) - ['abcd\n', '1234\n'] - """ - - def __init__(self, prefix, lines): - self.prefix = prefix - self.lines = map(self, lines) - - @classmethod - def strip_prefix(cls, lines): - prefix_lines, lines = itertools.tee(lines) - prefix = functools.reduce(cls.common_prefix, prefix_lines) - return cls(prefix, lines) - - def __call__(self, line): - if not self.prefix: - return line - null, prefix, rest = line.partition(self.prefix) - return rest - - @staticmethod - def common_prefix(s1, s2): - """ - Return the common prefix of two lines. - """ - index = min(len(s1), len(s2)) - while s1[:index] != s2[:index]: - index -= 1 - return s1[:index] - - -def remove_prefix(text, prefix): - """ - Remove the prefix from the text if it exists. - - >>> remove_prefix('underwhelming performance', 'underwhelming ') - 'performance' - - >>> remove_prefix('something special', 'sample') - 'something special' - """ - null, prefix, rest = text.rpartition(prefix) - return rest - - -def remove_suffix(text, suffix): - """ - Remove the suffix from the text if it exists. - - >>> remove_suffix('name.git', '.git') - 'name' - - >>> remove_suffix('something special', 'sample') - 'something special' - """ - rest, suffix, null = text.partition(suffix) - return rest - - -def normalize_newlines(text): - r""" - Replace alternate newlines with the canonical newline. - - >>> normalize_newlines('Lorem Ipsum\u2029') - 'Lorem Ipsum\n' - >>> normalize_newlines('Lorem Ipsum\r\n') - 'Lorem Ipsum\n' - >>> normalize_newlines('Lorem Ipsum\x85') - 'Lorem Ipsum\n' - """ - newlines = ['\r\n', '\r', '\n', '\u0085', '\u2028', '\u2029'] - pattern = '|'.join(newlines) - return re.sub(pattern, '\n', text) - - -def _nonblank(str): - return str and not str.startswith('#') - - -@functools.singledispatch -def yield_lines(iterable): - r""" - Yield valid lines of a string or iterable. - - >>> list(yield_lines('')) - [] - >>> list(yield_lines(['foo', 'bar'])) - ['foo', 'bar'] - >>> list(yield_lines('foo\nbar')) - ['foo', 'bar'] - >>> list(yield_lines('\nfoo\n#bar\nbaz #comment')) - ['foo', 'baz #comment'] - >>> list(yield_lines(['foo\nbar', 'baz', 'bing\n\n\n'])) - ['foo', 'bar', 'baz', 'bing'] - """ - return itertools.chain.from_iterable(map(yield_lines, iterable)) - - -@yield_lines.register(str) -def _(text): - return filter(_nonblank, map(str.strip, text.splitlines())) - - -def drop_comment(line): - """ - Drop comments. 
- - >>> drop_comment('foo # bar') - 'foo' - - A hash without a space may be in a URL. - - >>> drop_comment('http://example.com/foo#bar') - 'http://example.com/foo#bar' - """ - return line.partition(' #')[0] - - -def join_continuation(lines): - r""" - Join lines continued by a trailing backslash. - - >>> list(join_continuation(['foo \\', 'bar', 'baz'])) - ['foobar', 'baz'] - >>> list(join_continuation(['foo \\', 'bar', 'baz'])) - ['foobar', 'baz'] - >>> list(join_continuation(['foo \\', 'bar \\', 'baz'])) - ['foobarbaz'] - - Not sure why, but... - The character preceeding the backslash is also elided. - - >>> list(join_continuation(['goo\\', 'dly'])) - ['godly'] - - A terrible idea, but... - If no line is available to continue, suppress the lines. - - >>> list(join_continuation(['foo', 'bar\\', 'baz\\'])) - ['foo'] - """ - lines = iter(lines) - for item in lines: - while item.endswith('\\'): - try: - item = item[:-2].strip() + next(lines) - except StopIteration: - return - yield item diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pkg_resources/_vendor/packaging/tags.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pkg_resources/_vendor/packaging/tags.py deleted file mode 100644 index 9a3d25a71c75c975291cf987001ecd6882d6417d..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pkg_resources/_vendor/packaging/tags.py +++ /dev/null @@ -1,487 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. - -import logging -import platform -import sys -import sysconfig -from importlib.machinery import EXTENSION_SUFFIXES -from typing import ( - Dict, - FrozenSet, - Iterable, - Iterator, - List, - Optional, - Sequence, - Tuple, - Union, - cast, -) - -from . import _manylinux, _musllinux - -logger = logging.getLogger(__name__) - -PythonVersion = Sequence[int] -MacVersion = Tuple[int, int] - -INTERPRETER_SHORT_NAMES: Dict[str, str] = { - "python": "py", # Generic. - "cpython": "cp", - "pypy": "pp", - "ironpython": "ip", - "jython": "jy", -} - - -_32_BIT_INTERPRETER = sys.maxsize <= 2 ** 32 - - -class Tag: - """ - A representation of the tag triple for a wheel. - - Instances are considered immutable and thus are hashable. Equality checking - is also supported. - """ - - __slots__ = ["_interpreter", "_abi", "_platform", "_hash"] - - def __init__(self, interpreter: str, abi: str, platform: str) -> None: - self._interpreter = interpreter.lower() - self._abi = abi.lower() - self._platform = platform.lower() - # The __hash__ of every single element in a Set[Tag] will be evaluated each time - # that a set calls its `.disjoint()` method, which may be called hundreds of - # times when scanning a page of links for packages with tags matching that - # Set[Tag]. Pre-computing the value here produces significant speedups for - # downstream consumers. - self._hash = hash((self._interpreter, self._abi, self._platform)) - - @property - def interpreter(self) -> str: - return self._interpreter - - @property - def abi(self) -> str: - return self._abi - - @property - def platform(self) -> str: - return self._platform - - def __eq__(self, other: object) -> bool: - if not isinstance(other, Tag): - return NotImplemented - - return ( - (self._hash == other._hash) # Short-circuit ASAP for perf reasons. 
- and (self._platform == other._platform) - and (self._abi == other._abi) - and (self._interpreter == other._interpreter) - ) - - def __hash__(self) -> int: - return self._hash - - def __str__(self) -> str: - return f"{self._interpreter}-{self._abi}-{self._platform}" - - def __repr__(self) -> str: - return f"<{self} @ {id(self)}>" - - -def parse_tag(tag: str) -> FrozenSet[Tag]: - """ - Parses the provided tag (e.g. `py3-none-any`) into a frozenset of Tag instances. - - Returning a set is required due to the possibility that the tag is a - compressed tag set. - """ - tags = set() - interpreters, abis, platforms = tag.split("-") - for interpreter in interpreters.split("."): - for abi in abis.split("."): - for platform_ in platforms.split("."): - tags.add(Tag(interpreter, abi, platform_)) - return frozenset(tags) - - -def _get_config_var(name: str, warn: bool = False) -> Union[int, str, None]: - value = sysconfig.get_config_var(name) - if value is None and warn: - logger.debug( - "Config variable '%s' is unset, Python ABI tag may be incorrect", name - ) - return value - - -def _normalize_string(string: str) -> str: - return string.replace(".", "_").replace("-", "_") - - -def _abi3_applies(python_version: PythonVersion) -> bool: - """ - Determine if the Python version supports abi3. - - PEP 384 was first implemented in Python 3.2. - """ - return len(python_version) > 1 and tuple(python_version) >= (3, 2) - - -def _cpython_abis(py_version: PythonVersion, warn: bool = False) -> List[str]: - py_version = tuple(py_version) # To allow for version comparison. - abis = [] - version = _version_nodot(py_version[:2]) - debug = pymalloc = ucs4 = "" - with_debug = _get_config_var("Py_DEBUG", warn) - has_refcount = hasattr(sys, "gettotalrefcount") - # Windows doesn't set Py_DEBUG, so checking for support of debug-compiled - # extension modules is the best option. - # https://github.com/pypa/pip/issues/3383#issuecomment-173267692 - has_ext = "_d.pyd" in EXTENSION_SUFFIXES - if with_debug or (with_debug is None and (has_refcount or has_ext)): - debug = "d" - if py_version < (3, 8): - with_pymalloc = _get_config_var("WITH_PYMALLOC", warn) - if with_pymalloc or with_pymalloc is None: - pymalloc = "m" - if py_version < (3, 3): - unicode_size = _get_config_var("Py_UNICODE_SIZE", warn) - if unicode_size == 4 or ( - unicode_size is None and sys.maxunicode == 0x10FFFF - ): - ucs4 = "u" - elif debug: - # Debug builds can also load "normal" extension modules. - # We can also assume no UCS-4 or pymalloc requirement. - abis.append(f"cp{version}") - abis.insert( - 0, - "cp{version}{debug}{pymalloc}{ucs4}".format( - version=version, debug=debug, pymalloc=pymalloc, ucs4=ucs4 - ), - ) - return abis - - -def cpython_tags( - python_version: Optional[PythonVersion] = None, - abis: Optional[Iterable[str]] = None, - platforms: Optional[Iterable[str]] = None, - *, - warn: bool = False, -) -> Iterator[Tag]: - """ - Yields the tags for a CPython interpreter. - - The tags consist of: - - cp-- - - cp-abi3- - - cp-none- - - cp-abi3- # Older Python versions down to 3.2. - - If python_version only specifies a major version then user-provided ABIs and - the 'none' ABItag will be used. - - If 'abi3' or 'none' are specified in 'abis' then they will be yielded at - their normal position and not at the beginning. 
- """ - if not python_version: - python_version = sys.version_info[:2] - - interpreter = f"cp{_version_nodot(python_version[:2])}" - - if abis is None: - if len(python_version) > 1: - abis = _cpython_abis(python_version, warn) - else: - abis = [] - abis = list(abis) - # 'abi3' and 'none' are explicitly handled later. - for explicit_abi in ("abi3", "none"): - try: - abis.remove(explicit_abi) - except ValueError: - pass - - platforms = list(platforms or platform_tags()) - for abi in abis: - for platform_ in platforms: - yield Tag(interpreter, abi, platform_) - if _abi3_applies(python_version): - yield from (Tag(interpreter, "abi3", platform_) for platform_ in platforms) - yield from (Tag(interpreter, "none", platform_) for platform_ in platforms) - - if _abi3_applies(python_version): - for minor_version in range(python_version[1] - 1, 1, -1): - for platform_ in platforms: - interpreter = "cp{version}".format( - version=_version_nodot((python_version[0], minor_version)) - ) - yield Tag(interpreter, "abi3", platform_) - - -def _generic_abi() -> Iterator[str]: - abi = sysconfig.get_config_var("SOABI") - if abi: - yield _normalize_string(abi) - - -def generic_tags( - interpreter: Optional[str] = None, - abis: Optional[Iterable[str]] = None, - platforms: Optional[Iterable[str]] = None, - *, - warn: bool = False, -) -> Iterator[Tag]: - """ - Yields the tags for a generic interpreter. - - The tags consist of: - - -- - - The "none" ABI will be added if it was not explicitly provided. - """ - if not interpreter: - interp_name = interpreter_name() - interp_version = interpreter_version(warn=warn) - interpreter = "".join([interp_name, interp_version]) - if abis is None: - abis = _generic_abi() - platforms = list(platforms or platform_tags()) - abis = list(abis) - if "none" not in abis: - abis.append("none") - for abi in abis: - for platform_ in platforms: - yield Tag(interpreter, abi, platform_) - - -def _py_interpreter_range(py_version: PythonVersion) -> Iterator[str]: - """ - Yields Python versions in descending order. - - After the latest version, the major-only version will be yielded, and then - all previous versions of that major version. - """ - if len(py_version) > 1: - yield f"py{_version_nodot(py_version[:2])}" - yield f"py{py_version[0]}" - if len(py_version) > 1: - for minor in range(py_version[1] - 1, -1, -1): - yield f"py{_version_nodot((py_version[0], minor))}" - - -def compatible_tags( - python_version: Optional[PythonVersion] = None, - interpreter: Optional[str] = None, - platforms: Optional[Iterable[str]] = None, -) -> Iterator[Tag]: - """ - Yields the sequence of tags that are compatible with a specific version of Python. - - The tags consist of: - - py*-none- - - -none-any # ... if `interpreter` is provided. 
- - py*-none-any - """ - if not python_version: - python_version = sys.version_info[:2] - platforms = list(platforms or platform_tags()) - for version in _py_interpreter_range(python_version): - for platform_ in platforms: - yield Tag(version, "none", platform_) - if interpreter: - yield Tag(interpreter, "none", "any") - for version in _py_interpreter_range(python_version): - yield Tag(version, "none", "any") - - -def _mac_arch(arch: str, is_32bit: bool = _32_BIT_INTERPRETER) -> str: - if not is_32bit: - return arch - - if arch.startswith("ppc"): - return "ppc" - - return "i386" - - -def _mac_binary_formats(version: MacVersion, cpu_arch: str) -> List[str]: - formats = [cpu_arch] - if cpu_arch == "x86_64": - if version < (10, 4): - return [] - formats.extend(["intel", "fat64", "fat32"]) - - elif cpu_arch == "i386": - if version < (10, 4): - return [] - formats.extend(["intel", "fat32", "fat"]) - - elif cpu_arch == "ppc64": - # TODO: Need to care about 32-bit PPC for ppc64 through 10.2? - if version > (10, 5) or version < (10, 4): - return [] - formats.append("fat64") - - elif cpu_arch == "ppc": - if version > (10, 6): - return [] - formats.extend(["fat32", "fat"]) - - if cpu_arch in {"arm64", "x86_64"}: - formats.append("universal2") - - if cpu_arch in {"x86_64", "i386", "ppc64", "ppc", "intel"}: - formats.append("universal") - - return formats - - -def mac_platforms( - version: Optional[MacVersion] = None, arch: Optional[str] = None -) -> Iterator[str]: - """ - Yields the platform tags for a macOS system. - - The `version` parameter is a two-item tuple specifying the macOS version to - generate platform tags for. The `arch` parameter is the CPU architecture to - generate platform tags for. Both parameters default to the appropriate value - for the current system. - """ - version_str, _, cpu_arch = platform.mac_ver() - if version is None: - version = cast("MacVersion", tuple(map(int, version_str.split(".")[:2]))) - else: - version = version - if arch is None: - arch = _mac_arch(cpu_arch) - else: - arch = arch - - if (10, 0) <= version and version < (11, 0): - # Prior to Mac OS 11, each yearly release of Mac OS bumped the - # "minor" version number. The major version was always 10. - for minor_version in range(version[1], -1, -1): - compat_version = 10, minor_version - binary_formats = _mac_binary_formats(compat_version, arch) - for binary_format in binary_formats: - yield "macosx_{major}_{minor}_{binary_format}".format( - major=10, minor=minor_version, binary_format=binary_format - ) - - if version >= (11, 0): - # Starting with Mac OS 11, each yearly release bumps the major version - # number. The minor versions are now the midyear updates. - for major_version in range(version[0], 10, -1): - compat_version = major_version, 0 - binary_formats = _mac_binary_formats(compat_version, arch) - for binary_format in binary_formats: - yield "macosx_{major}_{minor}_{binary_format}".format( - major=major_version, minor=0, binary_format=binary_format - ) - - if version >= (11, 0): - # Mac OS 11 on x86_64 is compatible with binaries from previous releases. - # Arm64 support was introduced in 11.0, so no Arm binaries from previous - # releases exist. - # - # However, the "universal2" binary format can have a - # macOS version earlier than 11.0 when the x86_64 part of the binary supports - # that version of macOS. 
- if arch == "x86_64": - for minor_version in range(16, 3, -1): - compat_version = 10, minor_version - binary_formats = _mac_binary_formats(compat_version, arch) - for binary_format in binary_formats: - yield "macosx_{major}_{minor}_{binary_format}".format( - major=compat_version[0], - minor=compat_version[1], - binary_format=binary_format, - ) - else: - for minor_version in range(16, 3, -1): - compat_version = 10, minor_version - binary_format = "universal2" - yield "macosx_{major}_{minor}_{binary_format}".format( - major=compat_version[0], - minor=compat_version[1], - binary_format=binary_format, - ) - - -def _linux_platforms(is_32bit: bool = _32_BIT_INTERPRETER) -> Iterator[str]: - linux = _normalize_string(sysconfig.get_platform()) - if is_32bit: - if linux == "linux_x86_64": - linux = "linux_i686" - elif linux == "linux_aarch64": - linux = "linux_armv7l" - _, arch = linux.split("_", 1) - yield from _manylinux.platform_tags(linux, arch) - yield from _musllinux.platform_tags(arch) - yield linux - - -def _generic_platforms() -> Iterator[str]: - yield _normalize_string(sysconfig.get_platform()) - - -def platform_tags() -> Iterator[str]: - """ - Provides the platform tags for this installation. - """ - if platform.system() == "Darwin": - return mac_platforms() - elif platform.system() == "Linux": - return _linux_platforms() - else: - return _generic_platforms() - - -def interpreter_name() -> str: - """ - Returns the name of the running interpreter. - """ - name = sys.implementation.name - return INTERPRETER_SHORT_NAMES.get(name) or name - - -def interpreter_version(*, warn: bool = False) -> str: - """ - Returns the version of the running interpreter. - """ - version = _get_config_var("py_version_nodot", warn=warn) - if version: - version = str(version) - else: - version = _version_nodot(sys.version_info[:2]) - return version - - -def _version_nodot(version: PythonVersion) -> str: - return "".join(map(str, version)) - - -def sys_tags(*, warn: bool = False) -> Iterator[Tag]: - """ - Returns the sequence of tag triples for the running interpreter. - - The order of the sequence corresponds to priority order for the - interpreter, from most to least important. 
- """ - - interp_name = interpreter_name() - if interp_name == "cp": - yield from cpython_tags(warn=warn) - else: - yield from generic_tags() - - if interp_name == "pp": - yield from compatible_tags(interpreter="pp3") - else: - yield from compatible_tags() diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/pyparsing/core.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/pyparsing/core.py deleted file mode 100644 index 454bd57d0419439944b455c9c06958a97e7c8925..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/pyparsing/core.py +++ /dev/null @@ -1,5812 +0,0 @@ -# -# core.py -# -import os -from typing import ( - Optional as OptionalType, - Iterable as IterableType, - NamedTuple, - Union, - Callable, - Any, - Generator, - Tuple, - List, - TextIO, - Set, - Dict as DictType, - Sequence, -) -from abc import ABC, abstractmethod -from enum import Enum -import string -import copy -import warnings -import re -import sys -from collections.abc import Iterable -import traceback -import types -from operator import itemgetter -from functools import wraps -from threading import RLock -from pathlib import Path - -from .util import ( - _FifoCache, - _UnboundedCache, - __config_flags, - _collapse_string_to_ranges, - _escape_regex_range_chars, - _bslash, - _flatten, - LRUMemo as _LRUMemo, - UnboundedMemo as _UnboundedMemo, -) -from .exceptions import * -from .actions import * -from .results import ParseResults, _ParseResultsWithOffset -from .unicode import pyparsing_unicode - -_MAX_INT = sys.maxsize -str_type: Tuple[type, ...] = (str, bytes) - -# -# Copyright (c) 2003-2022 Paul T. McGuire -# -# Permission is hereby granted, free of charge, to any person obtaining -# a copy of this software and associated documentation files (the -# "Software"), to deal in the Software without restriction, including -# without limitation the rights to use, copy, modify, merge, publish, -# distribute, sublicense, and/or sell copies of the Software, and to -# permit persons to whom the Software is furnished to do so, subject to -# the following conditions: -# -# The above copyright notice and this permission notice shall be -# included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. -# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY -# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, -# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE -# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. -# - - -if sys.version_info >= (3, 8): - from functools import cached_property -else: - - class cached_property: - def __init__(self, func): - self._func = func - - def __get__(self, instance, owner=None): - ret = instance.__dict__[self._func.__name__] = self._func(instance) - return ret - - -class __compat__(__config_flags): - """ - A cross-version compatibility configuration for pyparsing features that will be - released in a future version. By setting values in this configuration to True, - those features can be enabled in prior versions for compatibility development - and testing. 
- - - ``collect_all_And_tokens`` - flag to enable fix for Issue #63 that fixes erroneous grouping - of results names when an :class:`And` expression is nested within an :class:`Or` or :class:`MatchFirst`; - maintained for compatibility, but setting to ``False`` no longer restores pre-2.3.1 - behavior - """ - - _type_desc = "compatibility" - - collect_all_And_tokens = True - - _all_names = [__ for __ in locals() if not __.startswith("_")] - _fixed_names = """ - collect_all_And_tokens - """.split() - - -class __diag__(__config_flags): - _type_desc = "diagnostic" - - warn_multiple_tokens_in_named_alternation = False - warn_ungrouped_named_tokens_in_collection = False - warn_name_set_on_empty_Forward = False - warn_on_parse_using_empty_Forward = False - warn_on_assignment_to_Forward = False - warn_on_multiple_string_args_to_oneof = False - warn_on_match_first_with_lshift_operator = False - enable_debug_on_named_expressions = False - - _all_names = [__ for __ in locals() if not __.startswith("_")] - _warning_names = [name for name in _all_names if name.startswith("warn")] - _debug_names = [name for name in _all_names if name.startswith("enable_debug")] - - @classmethod - def enable_all_warnings(cls) -> None: - for name in cls._warning_names: - cls.enable(name) - - -class Diagnostics(Enum): - """ - Diagnostic configuration (all default to disabled) - - ``warn_multiple_tokens_in_named_alternation`` - flag to enable warnings when a results - name is defined on a :class:`MatchFirst` or :class:`Or` expression with one or more :class:`And` subexpressions - - ``warn_ungrouped_named_tokens_in_collection`` - flag to enable warnings when a results - name is defined on a containing expression with ungrouped subexpressions that also - have results names - - ``warn_name_set_on_empty_Forward`` - flag to enable warnings when a :class:`Forward` is defined - with a results name, but has no contents defined - - ``warn_on_parse_using_empty_Forward`` - flag to enable warnings when a :class:`Forward` is - defined in a grammar but has never had an expression attached to it - - ``warn_on_assignment_to_Forward`` - flag to enable warnings when a :class:`Forward` is defined - but is overwritten by assigning using ``'='`` instead of ``'<<='`` or ``'<<'`` - - ``warn_on_multiple_string_args_to_oneof`` - flag to enable warnings when :class:`one_of` is - incorrectly called with multiple str arguments - - ``enable_debug_on_named_expressions`` - flag to auto-enable debug on all subsequent - calls to :class:`ParserElement.set_name` - - Diagnostics are enabled/disabled by calling :class:`enable_diag` and :class:`disable_diag`. - All warnings can be enabled by calling :class:`enable_all_warnings`. - """ - - warn_multiple_tokens_in_named_alternation = 0 - warn_ungrouped_named_tokens_in_collection = 1 - warn_name_set_on_empty_Forward = 2 - warn_on_parse_using_empty_Forward = 3 - warn_on_assignment_to_Forward = 4 - warn_on_multiple_string_args_to_oneof = 5 - warn_on_match_first_with_lshift_operator = 6 - enable_debug_on_named_expressions = 7 - - -def enable_diag(diag_enum: Diagnostics) -> None: - """ - Enable a global pyparsing diagnostic flag (see :class:`Diagnostics`). - """ - __diag__.enable(diag_enum.name) - - -def disable_diag(diag_enum: Diagnostics) -> None: - """ - Disable a global pyparsing diagnostic flag (see :class:`Diagnostics`). - """ - __diag__.disable(diag_enum.name) - - -def enable_all_warnings() -> None: - """ - Enable all global pyparsing diagnostic warnings (see :class:`Diagnostics`). 
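The `Diagnostics` flags and the `enable_diag` / `disable_diag` / `enable_all_warnings` helpers described above are re-exported at the top level of the pyparsing 3.x distribution. A small sketch of turning them on and off, assuming `pyparsing` is importable under its usual name:

```python
import pyparsing as pp

# Enable one diagnostic flag, or every warning-type diagnostic at once.
pp.enable_diag(pp.Diagnostics.warn_name_set_on_empty_Forward)
pp.enable_all_warnings()

# Setting a results name on an empty Forward now emits a UserWarning.
body = pp.Forward()("body")

# Individual diagnostics can be switched off again.
pp.disable_diag(pp.Diagnostics.warn_name_set_on_empty_Forward)
```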
- """ - __diag__.enable_all_warnings() - - -# hide abstract class -del __config_flags - - -def _should_enable_warnings( - cmd_line_warn_options: IterableType[str], warn_env_var: OptionalType[str] -) -> bool: - enable = bool(warn_env_var) - for warn_opt in cmd_line_warn_options: - w_action, w_message, w_category, w_module, w_line = (warn_opt + "::::").split( - ":" - )[:5] - if not w_action.lower().startswith("i") and ( - not (w_message or w_category or w_module) or w_module == "pyparsing" - ): - enable = True - elif w_action.lower().startswith("i") and w_module in ("pyparsing", ""): - enable = False - return enable - - -if _should_enable_warnings( - sys.warnoptions, os.environ.get("PYPARSINGENABLEALLWARNINGS") -): - enable_all_warnings() - - -# build list of single arg builtins, that can be used as parse actions -_single_arg_builtins = { - sum, - len, - sorted, - reversed, - list, - tuple, - set, - any, - all, - min, - max, -} - -_generatorType = types.GeneratorType -ParseAction = Union[ - Callable[[], Any], - Callable[[ParseResults], Any], - Callable[[int, ParseResults], Any], - Callable[[str, int, ParseResults], Any], -] -ParseCondition = Union[ - Callable[[], bool], - Callable[[ParseResults], bool], - Callable[[int, ParseResults], bool], - Callable[[str, int, ParseResults], bool], -] -ParseFailAction = Callable[[str, int, "ParserElement", Exception], None] -DebugStartAction = Callable[[str, int, "ParserElement", bool], None] -DebugSuccessAction = Callable[ - [str, int, int, "ParserElement", ParseResults, bool], None -] -DebugExceptionAction = Callable[[str, int, "ParserElement", Exception, bool], None] - - -alphas = string.ascii_uppercase + string.ascii_lowercase -identchars = pyparsing_unicode.Latin1.identchars -identbodychars = pyparsing_unicode.Latin1.identbodychars -nums = "0123456789" -hexnums = nums + "ABCDEFabcdef" -alphanums = alphas + nums -printables = "".join([c for c in string.printable if c not in string.whitespace]) - -_trim_arity_call_line: traceback.StackSummary = None - - -def _trim_arity(func, max_limit=3): - """decorator to trim function calls to match the arity of the target""" - global _trim_arity_call_line - - if func in _single_arg_builtins: - return lambda s, l, t: func(t) - - limit = 0 - found_arity = False - - def extract_tb(tb, limit=0): - frames = traceback.extract_tb(tb, limit=limit) - frame_summary = frames[-1] - return [frame_summary[:2]] - - # synthesize what would be returned by traceback.extract_stack at the call to - # user's parse action 'func', so that we don't incur call penalty at parse time - - # fmt: off - LINE_DIFF = 7 - # IF ANY CODE CHANGES, EVEN JUST COMMENTS OR BLANK LINES, BETWEEN THE NEXT LINE AND - # THE CALL TO FUNC INSIDE WRAPPER, LINE_DIFF MUST BE MODIFIED!!!! 
- _trim_arity_call_line = (_trim_arity_call_line or traceback.extract_stack(limit=2)[-1]) - pa_call_line_synth = (_trim_arity_call_line[0], _trim_arity_call_line[1] + LINE_DIFF) - - def wrapper(*args): - nonlocal found_arity, limit - while 1: - try: - ret = func(*args[limit:]) - found_arity = True - return ret - except TypeError as te: - # re-raise TypeErrors if they did not come from our arity testing - if found_arity: - raise - else: - tb = te.__traceback__ - trim_arity_type_error = ( - extract_tb(tb, limit=2)[-1][:2] == pa_call_line_synth - ) - del tb - - if trim_arity_type_error: - if limit < max_limit: - limit += 1 - continue - - raise - # fmt: on - - # copy func name to wrapper for sensible debug output - # (can't use functools.wraps, since that messes with function signature) - func_name = getattr(func, "__name__", getattr(func, "__class__").__name__) - wrapper.__name__ = func_name - wrapper.__doc__ = func.__doc__ - - return wrapper - - -def condition_as_parse_action( - fn: ParseCondition, message: str = None, fatal: bool = False -) -> ParseAction: - """ - Function to convert a simple predicate function that returns ``True`` or ``False`` - into a parse action. Can be used in places when a parse action is required - and :class:`ParserElement.add_condition` cannot be used (such as when adding a condition - to an operator level in :class:`infix_notation`). - - Optional keyword arguments: - - - ``message`` - define a custom message to be used in the raised exception - - ``fatal`` - if True, will raise :class:`ParseFatalException` to stop parsing immediately; - otherwise will raise :class:`ParseException` - - """ - msg = message if message is not None else "failed user-defined condition" - exc_type = ParseFatalException if fatal else ParseException - fn = _trim_arity(fn) - - @wraps(fn) - def pa(s, l, t): - if not bool(fn(s, l, t)): - raise exc_type(s, l, msg) - - return pa - - -def _default_start_debug_action( - instring: str, loc: int, expr: "ParserElement", cache_hit: bool = False -): - cache_hit_str = "*" if cache_hit else "" - print( - ( - "{}Match {} at loc {}({},{})\n {}\n {}^".format( - cache_hit_str, - expr, - loc, - lineno(loc, instring), - col(loc, instring), - line(loc, instring), - " " * (col(loc, instring) - 1), - ) - ) - ) - - -def _default_success_debug_action( - instring: str, - startloc: int, - endloc: int, - expr: "ParserElement", - toks: ParseResults, - cache_hit: bool = False, -): - cache_hit_str = "*" if cache_hit else "" - print("{}Matched {} -> {}".format(cache_hit_str, expr, toks.as_list())) - - -def _default_exception_debug_action( - instring: str, - loc: int, - expr: "ParserElement", - exc: Exception, - cache_hit: bool = False, -): - cache_hit_str = "*" if cache_hit else "" - print( - "{}Match {} failed, {} raised: {}".format( - cache_hit_str, expr, type(exc).__name__, exc - ) - ) - - -def null_debug_action(*args): - """'Do-nothing' debug action, to suppress debugging output during parsing.""" - - -class ParserElement(ABC): - """Abstract base level parser element class.""" - - DEFAULT_WHITE_CHARS: str = " \n\t\r" - verbose_stacktrace: bool = False - _literalStringClass: OptionalType[type] = None - - @staticmethod - def set_default_whitespace_chars(chars: str) -> None: - r""" - Overrides the default whitespace chars - - Example:: - - # default whitespace chars are space, and newline - OneOrMore(Word(alphas)).parse_string("abc def\nghi jkl") # -> ['abc', 'def', 'ghi', 'jkl'] - - # change to just treat newline as significant - 
ParserElement.set_default_whitespace_chars(" \t") - OneOrMore(Word(alphas)).parse_string("abc def\nghi jkl") # -> ['abc', 'def'] - """ - ParserElement.DEFAULT_WHITE_CHARS = chars - - # update whitespace all parse expressions defined in this module - for expr in _builtin_exprs: - if expr.copyDefaultWhiteChars: - expr.whiteChars = set(chars) - - @staticmethod - def inline_literals_using(cls: type) -> None: - """ - Set class to be used for inclusion of string literals into a parser. - - Example:: - - # default literal class used is Literal - integer = Word(nums) - date_str = integer("year") + '/' + integer("month") + '/' + integer("day") - - date_str.parse_string("1999/12/31") # -> ['1999', '/', '12', '/', '31'] - - - # change to Suppress - ParserElement.inline_literals_using(Suppress) - date_str = integer("year") + '/' + integer("month") + '/' + integer("day") - - date_str.parse_string("1999/12/31") # -> ['1999', '12', '31'] - """ - ParserElement._literalStringClass = cls - - class DebugActions(NamedTuple): - debug_try: OptionalType[DebugStartAction] - debug_match: OptionalType[DebugSuccessAction] - debug_fail: OptionalType[DebugExceptionAction] - - def __init__(self, savelist: bool = False): - self.parseAction: List[ParseAction] = list() - self.failAction: OptionalType[ParseFailAction] = None - self.customName = None - self._defaultName = None - self.resultsName = None - self.saveAsList = savelist - self.skipWhitespace = True - self.whiteChars = set(ParserElement.DEFAULT_WHITE_CHARS) - self.copyDefaultWhiteChars = True - # used when checking for left-recursion - self.mayReturnEmpty = False - self.keepTabs = False - self.ignoreExprs: List["ParserElement"] = list() - self.debug = False - self.streamlined = False - # optimize exception handling for subclasses that don't advance parse index - self.mayIndexError = True - self.errmsg = "" - # mark results names as modal (report only last) or cumulative (list all) - self.modalResults = True - # custom debug actions - self.debugActions = self.DebugActions(None, None, None) - # avoid redundant calls to preParse - self.callPreparse = True - self.callDuringTry = False - self.suppress_warnings_: List[Diagnostics] = [] - - def suppress_warning(self, warning_type: Diagnostics) -> "ParserElement": - """ - Suppress warnings emitted for a particular diagnostic on this expression. - - Example:: - - base = pp.Forward() - base.suppress_warning(Diagnostics.warn_on_parse_using_empty_Forward) - - # statement would normally raise a warning, but is now suppressed - print(base.parseString("x")) - - """ - self.suppress_warnings_.append(warning_type) - return self - - def copy(self) -> "ParserElement": - """ - Make a copy of this :class:`ParserElement`. Useful for defining - different parse actions for the same parsing pattern, using copies of - the original parse element. 
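The `condition_as_parse_action` helper defined a little earlier has no usage example of its own in the docstring. A brief sketch of the typical pattern, wrapping a plain predicate so it can sit wherever a parse action is expected; the `below_100` name and error message are illustrative, not part of pyparsing:

```python
import pyparsing as pp

# Wrap a boolean predicate as a parse action; the message is used on failure.
below_100 = pp.condition_as_parse_action(
    lambda toks: int(toks[0]) < 100,
    message="value must be below 100",
)

number = pp.Word(pp.nums).add_parse_action(below_100)
print(number.parse_string("42"))        # -> ['42']
try:
    number.parse_string("500")
except pp.ParseException as err:
    print(err)                          # reports "value must be below 100"
```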
- - Example:: - - integer = Word(nums).set_parse_action(lambda toks: int(toks[0])) - integerK = integer.copy().add_parse_action(lambda toks: toks[0] * 1024) + Suppress("K") - integerM = integer.copy().add_parse_action(lambda toks: toks[0] * 1024 * 1024) + Suppress("M") - - print(OneOrMore(integerK | integerM | integer).parse_string("5K 100 640K 256M")) - - prints:: - - [5120, 100, 655360, 268435456] - - Equivalent form of ``expr.copy()`` is just ``expr()``:: - - integerM = integer().add_parse_action(lambda toks: toks[0] * 1024 * 1024) + Suppress("M") - """ - cpy = copy.copy(self) - cpy.parseAction = self.parseAction[:] - cpy.ignoreExprs = self.ignoreExprs[:] - if self.copyDefaultWhiteChars: - cpy.whiteChars = set(ParserElement.DEFAULT_WHITE_CHARS) - return cpy - - def set_results_name( - self, name: str, list_all_matches: bool = False, *, listAllMatches: bool = False - ) -> "ParserElement": - """ - Define name for referencing matching tokens as a nested attribute - of the returned parse results. - - Normally, results names are assigned as you would assign keys in a dict: - any existing value is overwritten by later values. If it is necessary to - keep all values captured for a particular results name, call ``set_results_name`` - with ``list_all_matches`` = True. - - NOTE: ``set_results_name`` returns a *copy* of the original :class:`ParserElement` object; - this is so that the client can define a basic element, such as an - integer, and reference it in multiple places with different names. - - You can also set results names using the abbreviated syntax, - ``expr("name")`` in place of ``expr.set_results_name("name")`` - - see :class:`__call__`. If ``list_all_matches`` is required, use - ``expr("name*")``. - - Example:: - - date_str = (integer.set_results_name("year") + '/' - + integer.set_results_name("month") + '/' - + integer.set_results_name("day")) - - # equivalent form: - date_str = integer("year") + '/' + integer("month") + '/' + integer("day") - """ - listAllMatches = listAllMatches or list_all_matches - return self._setResultsName(name, listAllMatches) - - def _setResultsName(self, name, listAllMatches=False): - if name is None: - return self - newself = self.copy() - if name.endswith("*"): - name = name[:-1] - listAllMatches = True - newself.resultsName = name - newself.modalResults = not listAllMatches - return newself - - def set_break(self, break_flag: bool = True) -> "ParserElement": - """ - Method to invoke the Python pdb debugger when this element is - about to be parsed. Set ``break_flag`` to ``True`` to enable, ``False`` to - disable. - """ - if break_flag: - _parseMethod = self._parse - - def breaker(instring, loc, doActions=True, callPreParse=True): - import pdb - - # this call to pdb.set_trace() is intentional, not a checkin error - pdb.set_trace() - return _parseMethod(instring, loc, doActions, callPreParse) - - breaker._originalParseMethod = _parseMethod - self._parse = breaker - else: - if hasattr(self._parse, "_originalParseMethod"): - self._parse = self._parse._originalParseMethod - return self - - def set_parse_action(self, *fns: ParseAction, **kwargs) -> "ParserElement": - """ - Define one or more actions to perform when successfully matching parse element definition. - - Parse actions can be called to perform data conversions, do extra validation, - update external data structures, or enhance or replace the parsed tokens. 
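Results names, as described for `set_results_name` and its `expr("name")` shorthand, make parsed fields addressable by key or attribute. A compact sketch, assuming pyparsing 3.x:

```python
import pyparsing as pp

integer = pp.Word(pp.nums)
date_str = integer("year") + "/" + integer("month") + "/" + integer("day")

result = date_str.parse_string("1999/12/31")
print(result["year"], result.month, result.day)  # -> 1999 12 31
print(result.as_dict())  # -> {'year': '1999', 'month': '12', 'day': '31'}
```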
- Each parse action ``fn`` is a callable method with 0-3 arguments, called as - ``fn(s, loc, toks)`` , ``fn(loc, toks)`` , ``fn(toks)`` , or just ``fn()`` , where: - - - s = the original string being parsed (see note below) - - loc = the location of the matching substring - - toks = a list of the matched tokens, packaged as a :class:`ParseResults` object - - The parsed tokens are passed to the parse action as ParseResults. They can be - modified in place using list-style append, extend, and pop operations to update - the parsed list elements; and with dictionary-style item set and del operations - to add, update, or remove any named results. If the tokens are modified in place, - it is not necessary to return them with a return statement. - - Parse actions can also completely replace the given tokens, with another ``ParseResults`` - object, or with some entirely different object (common for parse actions that perform data - conversions). A convenient way to build a new parse result is to define the values - using a dict, and then create the return value using :class:`ParseResults.from_dict`. - - If None is passed as the ``fn`` parse action, all previously added parse actions for this - expression are cleared. - - Optional keyword arguments: - - - call_during_try = (default= ``False``) indicate if parse action should be run during - lookaheads and alternate testing. For parse actions that have side effects, it is - important to only call the parse action once it is determined that it is being - called as part of a successful parse. For parse actions that perform additional - validation, then call_during_try should be passed as True, so that the validation - code is included in the preliminary "try" parses. - - Note: the default parsing behavior is to expand tabs in the input string - before starting the parsing process. See :class:`parse_string` for more - information on parsing strings containing ```` s, and suggested - methods to maintain a consistent view of the parsed string, the parse - location, and line and column positions within the parsed string. - - Example:: - - # parse dates in the form YYYY/MM/DD - - # use parse action to convert toks from str to int at parse time - def convert_to_int(toks): - return int(toks[0]) - - # use a parse action to verify that the date is a valid date - def is_valid_date(instring, loc, toks): - from datetime import date - year, month, day = toks[::2] - try: - date(year, month, day) - except ValueError: - raise ParseException(instring, loc, "invalid date given") - - integer = Word(nums) - date_str = integer + '/' + integer + '/' + integer - - # add parse actions - integer.set_parse_action(convert_to_int) - date_str.set_parse_action(is_valid_date) - - # note that integer fields are now ints, not strings - date_str.run_tests(''' - # successful parse - note that integer fields were converted to ints - 1999/12/31 - - # fail - invalid date - 1999/13/31 - ''') - """ - if list(fns) == [None]: - self.parseAction = [] - else: - if not all(callable(fn) for fn in fns): - raise TypeError("parse actions must be callable") - self.parseAction = [_trim_arity(fn) for fn in fns] - self.callDuringTry = kwargs.get( - "call_during_try", kwargs.get("callDuringTry", False) - ) - return self - - def add_parse_action(self, *fns: ParseAction, **kwargs) -> "ParserElement": - """ - Add one or more parse actions to expression's list of parse actions. See :class:`set_parse_action`. - - See examples in :class:`copy`. 
- """ - self.parseAction += [_trim_arity(fn) for fn in fns] - self.callDuringTry = self.callDuringTry or kwargs.get( - "call_during_try", kwargs.get("callDuringTry", False) - ) - return self - - def add_condition(self, *fns: ParseCondition, **kwargs) -> "ParserElement": - """Add a boolean predicate function to expression's list of parse actions. See - :class:`set_parse_action` for function call signatures. Unlike ``set_parse_action``, - functions passed to ``add_condition`` need to return boolean success/fail of the condition. - - Optional keyword arguments: - - - message = define a custom message to be used in the raised exception - - fatal = if True, will raise ParseFatalException to stop parsing immediately; otherwise will raise - ParseException - - call_during_try = boolean to indicate if this method should be called during internal tryParse calls, - default=False - - Example:: - - integer = Word(nums).set_parse_action(lambda toks: int(toks[0])) - year_int = integer.copy() - year_int.add_condition(lambda toks: toks[0] >= 2000, message="Only support years 2000 and later") - date_str = year_int + '/' + integer + '/' + integer - - result = date_str.parse_string("1999/12/31") # -> Exception: Only support years 2000 and later (at char 0), - (line:1, col:1) - """ - for fn in fns: - self.parseAction.append( - condition_as_parse_action( - fn, message=kwargs.get("message"), fatal=kwargs.get("fatal", False) - ) - ) - - self.callDuringTry = self.callDuringTry or kwargs.get( - "call_during_try", kwargs.get("callDuringTry", False) - ) - return self - - def set_fail_action(self, fn: ParseFailAction) -> "ParserElement": - """ - Define action to perform if parsing fails at this expression. - Fail acton fn is a callable function that takes the arguments - ``fn(s, loc, expr, err)`` where: - - - s = string being parsed - - loc = location where expression match was attempted and failed - - expr = the parse expression that failed - - err = the exception thrown - - The function returns no value. 
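The fail-action hook described above receives the input string, the failure location, the failing expression, and the exception that was raised. A small sketch; the `report_failure` callback and its output format are illustrative, not part of pyparsing:

```python
import pyparsing as pp

def report_failure(s, loc, expr, err):
    # Called whenever `expr` fails to match at `loc`.
    print(f"failed to match {expr} at column {pp.col(loc, s)}: {err}")

integer = pp.Word(pp.nums).set_name("integer").set_fail_action(report_failure)

try:
    integer.parse_string("abc")
except pp.ParseException:
    pass
```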
It may throw :class:`ParseFatalException` - if it is desired to stop parsing immediately.""" - self.failAction = fn - return self - - def _skipIgnorables(self, instring, loc): - exprsFound = True - while exprsFound: - exprsFound = False - for e in self.ignoreExprs: - try: - while 1: - loc, dummy = e._parse(instring, loc) - exprsFound = True - except ParseException: - pass - return loc - - def preParse(self, instring, loc): - if self.ignoreExprs: - loc = self._skipIgnorables(instring, loc) - - if self.skipWhitespace: - instrlen = len(instring) - white_chars = self.whiteChars - while loc < instrlen and instring[loc] in white_chars: - loc += 1 - - return loc - - def parseImpl(self, instring, loc, doActions=True): - return loc, [] - - def postParse(self, instring, loc, tokenlist): - return tokenlist - - # @profile - def _parseNoCache( - self, instring, loc, doActions=True, callPreParse=True - ) -> Tuple[int, ParseResults]: - TRY, MATCH, FAIL = 0, 1, 2 - debugging = self.debug # and doActions) - len_instring = len(instring) - - if debugging or self.failAction: - # print("Match {} at loc {}({}, {})".format(self, loc, lineno(loc, instring), col(loc, instring))) - try: - if callPreParse and self.callPreparse: - pre_loc = self.preParse(instring, loc) - else: - pre_loc = loc - tokens_start = pre_loc - if self.debugActions.debug_try: - self.debugActions.debug_try(instring, tokens_start, self, False) - if self.mayIndexError or pre_loc >= len_instring: - try: - loc, tokens = self.parseImpl(instring, pre_loc, doActions) - except IndexError: - raise ParseException(instring, len_instring, self.errmsg, self) - else: - loc, tokens = self.parseImpl(instring, pre_loc, doActions) - except Exception as err: - # print("Exception raised:", err) - if self.debugActions.debug_fail: - self.debugActions.debug_fail( - instring, tokens_start, self, err, False - ) - if self.failAction: - self.failAction(instring, tokens_start, self, err) - raise - else: - if callPreParse and self.callPreparse: - pre_loc = self.preParse(instring, loc) - else: - pre_loc = loc - tokens_start = pre_loc - if self.mayIndexError or pre_loc >= len_instring: - try: - loc, tokens = self.parseImpl(instring, pre_loc, doActions) - except IndexError: - raise ParseException(instring, len_instring, self.errmsg, self) - else: - loc, tokens = self.parseImpl(instring, pre_loc, doActions) - - tokens = self.postParse(instring, loc, tokens) - - ret_tokens = ParseResults( - tokens, self.resultsName, asList=self.saveAsList, modal=self.modalResults - ) - if self.parseAction and (doActions or self.callDuringTry): - if debugging: - try: - for fn in self.parseAction: - try: - tokens = fn(instring, tokens_start, ret_tokens) - except IndexError as parse_action_exc: - exc = ParseException("exception raised in parse action") - raise exc from parse_action_exc - - if tokens is not None and tokens is not ret_tokens: - ret_tokens = ParseResults( - tokens, - self.resultsName, - asList=self.saveAsList - and isinstance(tokens, (ParseResults, list)), - modal=self.modalResults, - ) - except Exception as err: - # print "Exception raised in user parse action:", err - if self.debugActions.debug_fail: - self.debugActions.debug_fail( - instring, tokens_start, self, err, False - ) - raise - else: - for fn in self.parseAction: - try: - tokens = fn(instring, tokens_start, ret_tokens) - except IndexError as parse_action_exc: - exc = ParseException("exception raised in parse action") - raise exc from parse_action_exc - - if tokens is not None and tokens is not ret_tokens: - ret_tokens = 
ParseResults( - tokens, - self.resultsName, - asList=self.saveAsList - and isinstance(tokens, (ParseResults, list)), - modal=self.modalResults, - ) - if debugging: - # print("Matched", self, "->", ret_tokens.as_list()) - if self.debugActions.debug_match: - self.debugActions.debug_match( - instring, tokens_start, loc, self, ret_tokens, False - ) - - return loc, ret_tokens - - def try_parse(self, instring: str, loc: int, raise_fatal: bool = False) -> int: - try: - return self._parse(instring, loc, doActions=False)[0] - except ParseFatalException: - if raise_fatal: - raise - raise ParseException(instring, loc, self.errmsg, self) - - def can_parse_next(self, instring: str, loc: int) -> bool: - try: - self.try_parse(instring, loc) - except (ParseException, IndexError): - return False - else: - return True - - # cache for left-recursion in Forward references - recursion_lock = RLock() - recursion_memos: DictType[ - Tuple[int, "Forward", bool], Tuple[int, Union[ParseResults, Exception]] - ] = {} - - # argument cache for optimizing repeated calls when backtracking through recursive expressions - packrat_cache = ( - {} - ) # this is set later by enabled_packrat(); this is here so that reset_cache() doesn't fail - packrat_cache_lock = RLock() - packrat_cache_stats = [0, 0] - - # this method gets repeatedly called during backtracking with the same arguments - - # we can cache these arguments and save ourselves the trouble of re-parsing the contained expression - def _parseCache( - self, instring, loc, doActions=True, callPreParse=True - ) -> Tuple[int, ParseResults]: - HIT, MISS = 0, 1 - TRY, MATCH, FAIL = 0, 1, 2 - lookup = (self, instring, loc, callPreParse, doActions) - with ParserElement.packrat_cache_lock: - cache = ParserElement.packrat_cache - value = cache.get(lookup) - if value is cache.not_in_cache: - ParserElement.packrat_cache_stats[MISS] += 1 - try: - value = self._parseNoCache(instring, loc, doActions, callPreParse) - except ParseBaseException as pe: - # cache a copy of the exception, without the traceback - cache.set(lookup, pe.__class__(*pe.args)) - raise - else: - cache.set(lookup, (value[0], value[1].copy(), loc)) - return value - else: - ParserElement.packrat_cache_stats[HIT] += 1 - if self.debug and self.debugActions.debug_try: - try: - self.debugActions.debug_try(instring, loc, self, cache_hit=True) - except TypeError: - pass - if isinstance(value, Exception): - if self.debug and self.debugActions.debug_fail: - try: - self.debugActions.debug_fail( - instring, loc, self, value, cache_hit=True - ) - except TypeError: - pass - raise value - - loc_, result, endloc = value[0], value[1].copy(), value[2] - if self.debug and self.debugActions.debug_match: - try: - self.debugActions.debug_match( - instring, loc_, endloc, self, result, cache_hit=True - ) - except TypeError: - pass - - return loc_, result - - _parse = _parseNoCache - - @staticmethod - def reset_cache() -> None: - ParserElement.packrat_cache.clear() - ParserElement.packrat_cache_stats[:] = [0] * len( - ParserElement.packrat_cache_stats - ) - ParserElement.recursion_memos.clear() - - _packratEnabled = False - _left_recursion_enabled = False - - @staticmethod - def disable_memoization() -> None: - """ - Disables active Packrat or Left Recursion parsing and their memoization - - This method also works if neither Packrat nor Left Recursion are enabled. - This makes it safe to call before activating Packrat nor Left Recursion - to clear any previous settings. 
- """ - ParserElement.reset_cache() - ParserElement._left_recursion_enabled = False - ParserElement._packratEnabled = False - ParserElement._parse = ParserElement._parseNoCache - - @staticmethod - def enable_left_recursion( - cache_size_limit: OptionalType[int] = None, *, force=False - ) -> None: - """ - Enables "bounded recursion" parsing, which allows for both direct and indirect - left-recursion. During parsing, left-recursive :class:`Forward` elements are - repeatedly matched with a fixed recursion depth that is gradually increased - until finding the longest match. - - Example:: - - import pyparsing as pp - pp.ParserElement.enable_left_recursion() - - E = pp.Forward("E") - num = pp.Word(pp.nums) - # match `num`, or `num '+' num`, or `num '+' num '+' num`, ... - E <<= E + '+' - num | num - - print(E.parse_string("1+2+3")) - - Recursion search naturally memoizes matches of ``Forward`` elements and may - thus skip reevaluation of parse actions during backtracking. This may break - programs with parse actions which rely on strict ordering of side-effects. - - Parameters: - - - cache_size_limit - (default=``None``) - memoize at most this many - ``Forward`` elements during matching; if ``None`` (the default), - memoize all ``Forward`` elements. - - Bounded Recursion parsing works similar but not identical to Packrat parsing, - thus the two cannot be used together. Use ``force=True`` to disable any - previous, conflicting settings. - """ - if force: - ParserElement.disable_memoization() - elif ParserElement._packratEnabled: - raise RuntimeError("Packrat and Bounded Recursion are not compatible") - if cache_size_limit is None: - ParserElement.recursion_memos = _UnboundedMemo() - elif cache_size_limit > 0: - ParserElement.recursion_memos = _LRUMemo(capacity=cache_size_limit) - else: - raise NotImplementedError("Memo size of %s" % cache_size_limit) - ParserElement._left_recursion_enabled = True - - @staticmethod - def enable_packrat(cache_size_limit: int = 128, *, force: bool = False) -> None: - """ - Enables "packrat" parsing, which adds memoizing to the parsing logic. - Repeated parse attempts at the same string location (which happens - often in many complex grammars) can immediately return a cached value, - instead of re-executing parsing/validating code. Memoizing is done of - both valid results and parsing exceptions. - - Parameters: - - - cache_size_limit - (default= ``128``) - if an integer value is provided - will limit the size of the packrat cache; if None is passed, then - the cache size will be unbounded; if 0 is passed, the cache will - be effectively disabled. - - This speedup may break existing programs that use parse actions that - have side-effects. For this reason, packrat parsing is disabled when - you first import pyparsing. To activate the packrat feature, your - program must call the class method :class:`ParserElement.enable_packrat`. - For best results, call ``enable_packrat()`` immediately after - importing pyparsing. - - Example:: - - import pyparsing - pyparsing.ParserElement.enable_packrat() - - Packrat parsing works similar but not identical to Bounded Recursion parsing, - thus the two cannot be used together. Use ``force=True`` to disable any - previous, conflicting settings. 
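A short sketch of enabling packrat memoization before building a grammar that backtracks heavily; the small arithmetic grammar below is illustrative and not taken from the library:

```python
import pyparsing as pp

# Activate packrat parsing right after import, before defining grammars.
pp.ParserElement.enable_packrat()

LPAR, RPAR = map(pp.Suppress, "()")
expr = pp.Forward()
atom = pp.Word(pp.nums) | pp.Group(LPAR + expr + RPAR)
expr <<= atom + pp.ZeroOrMore(pp.one_of("+ -") + atom)

print(expr.parse_string("(1 + (2 - 3)) + 4", parse_all=True))
```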
- """ - if force: - ParserElement.disable_memoization() - elif ParserElement._left_recursion_enabled: - raise RuntimeError("Packrat and Bounded Recursion are not compatible") - if not ParserElement._packratEnabled: - ParserElement._packratEnabled = True - if cache_size_limit is None: - ParserElement.packrat_cache = _UnboundedCache() - else: - ParserElement.packrat_cache = _FifoCache(cache_size_limit) - ParserElement._parse = ParserElement._parseCache - - def parse_string( - self, instring: str, parse_all: bool = False, *, parseAll: bool = False - ) -> ParseResults: - """ - Parse a string with respect to the parser definition. This function is intended as the primary interface to the - client code. - - :param instring: The input string to be parsed. - :param parse_all: If set, the entire input string must match the grammar. - :param parseAll: retained for pre-PEP8 compatibility, will be removed in a future release. - :raises ParseException: Raised if ``parse_all`` is set and the input string does not match the whole grammar. - :returns: the parsed data as a :class:`ParseResults` object, which may be accessed as a `list`, a `dict`, or - an object with attributes if the given parser includes results names. - - If the input string is required to match the entire grammar, ``parse_all`` flag must be set to ``True``. This - is also equivalent to ending the grammar with :class:`StringEnd`(). - - To report proper column numbers, ``parse_string`` operates on a copy of the input string where all tabs are - converted to spaces (8 spaces per tab, as per the default in ``string.expandtabs``). If the input string - contains tabs and the grammar uses parse actions that use the ``loc`` argument to index into the string - being parsed, one can ensure a consistent view of the input string by doing one of the following: - - - calling ``parse_with_tabs`` on your grammar before calling ``parse_string`` (see :class:`parse_with_tabs`), - - define your parse action using the full ``(s,loc,toks)`` signature, and reference the input string using the - parse action's ``s`` argument, or - - explicitly expand the tabs in your input string before calling ``parse_string``. - - Examples: - - By default, partial matches are OK. - - >>> res = Word('a').parse_string('aaaaabaaa') - >>> print(res) - ['aaaaa'] - - The parsing behavior varies by the inheriting class of this abstract class. Please refer to the children - directly to see more examples. - - It raises an exception if parse_all flag is set and instring does not match the whole grammar. - - >>> res = Word('a').parse_string('aaaaabaaa', parse_all=True) - Traceback (most recent call last): - ... 
- pyparsing.ParseException: Expected end of text, found 'b' (at char 5), (line:1, col:6) - """ - parseAll = parse_all or parseAll - - ParserElement.reset_cache() - if not self.streamlined: - self.streamline() - for e in self.ignoreExprs: - e.streamline() - if not self.keepTabs: - instring = instring.expandtabs() - try: - loc, tokens = self._parse(instring, 0) - if parseAll: - loc = self.preParse(instring, loc) - se = Empty() + StringEnd() - se._parse(instring, loc) - except ParseBaseException as exc: - if ParserElement.verbose_stacktrace: - raise - else: - # catch and re-raise exception from here, clearing out pyparsing internal stack trace - raise exc.with_traceback(None) - else: - return tokens - - def scan_string( - self, - instring: str, - max_matches: int = _MAX_INT, - overlap: bool = False, - *, - debug: bool = False, - maxMatches: int = _MAX_INT, - ) -> Generator[Tuple[ParseResults, int, int], None, None]: - """ - Scan the input string for expression matches. Each match will return the - matching tokens, start location, and end location. May be called with optional - ``max_matches`` argument, to clip scanning after 'n' matches are found. If - ``overlap`` is specified, then overlapping matches will be reported. - - Note that the start and end locations are reported relative to the string - being parsed. See :class:`parse_string` for more information on parsing - strings with embedded tabs. - - Example:: - - source = "sldjf123lsdjjkf345sldkjf879lkjsfd987" - print(source) - for tokens, start, end in Word(alphas).scan_string(source): - print(' '*start + '^'*(end-start)) - print(' '*start + tokens[0]) - - prints:: - - sldjf123lsdjjkf345sldkjf879lkjsfd987 - ^^^^^ - sldjf - ^^^^^^^ - lsdjjkf - ^^^^^^ - sldkjf - ^^^^^^ - lkjsfd - """ - maxMatches = min(maxMatches, max_matches) - if not self.streamlined: - self.streamline() - for e in self.ignoreExprs: - e.streamline() - - if not self.keepTabs: - instring = str(instring).expandtabs() - instrlen = len(instring) - loc = 0 - preparseFn = self.preParse - parseFn = self._parse - ParserElement.resetCache() - matches = 0 - try: - while loc <= instrlen and matches < maxMatches: - try: - preloc = preparseFn(instring, loc) - nextLoc, tokens = parseFn(instring, preloc, callPreParse=False) - except ParseException: - loc = preloc + 1 - else: - if nextLoc > loc: - matches += 1 - if debug: - print( - { - "tokens": tokens.asList(), - "start": preloc, - "end": nextLoc, - } - ) - yield tokens, preloc, nextLoc - if overlap: - nextloc = preparseFn(instring, loc) - if nextloc > loc: - loc = nextLoc - else: - loc += 1 - else: - loc = nextLoc - else: - loc = preloc + 1 - except ParseBaseException as exc: - if ParserElement.verbose_stacktrace: - raise - else: - # catch and re-raise exception from here, clears out pyparsing internal stack trace - raise exc.with_traceback(None) - - def transform_string(self, instring: str, *, debug: bool = False) -> str: - """ - Extension to :class:`scan_string`, to modify matching text with modified tokens that may - be returned from a parse action. To use ``transform_string``, define a grammar and - attach a parse action to it that modifies the returned token list. - Invoking ``transform_string()`` on a target string will then scan for matches, - and replace the matched text patterns according to the logic in the parse - action. ``transform_string()`` returns the resulting transformed string. 
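`scan_string` also accepts a `max_matches` limit, which the docstring example above does not show. A quick sketch, assuming pyparsing 3.x:

```python
import pyparsing as pp

source = "sldjf123lsdjjkf345sldkjf879lkjsfd987"
number = pp.Word(pp.nums)

# Stop after the first two matches and report their spans in the source string.
for tokens, start, end in number.scan_string(source, max_matches=2):
    print(tokens[0], "at", (start, end))  # 123 at (5, 8), then 345 at (15, 18)
```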
- - Example:: - - wd = Word(alphas) - wd.set_parse_action(lambda toks: toks[0].title()) - - print(wd.transform_string("now is the winter of our discontent made glorious summer by this sun of york.")) - - prints:: - - Now Is The Winter Of Our Discontent Made Glorious Summer By This Sun Of York. - """ - out: List[str] = [] - lastE = 0 - # force preservation of s, to minimize unwanted transformation of string, and to - # keep string locs straight between transform_string and scan_string - self.keepTabs = True - try: - for t, s, e in self.scan_string(instring, debug=debug): - out.append(instring[lastE:s]) - if t: - if isinstance(t, ParseResults): - out += t.as_list() - elif isinstance(t, Iterable) and not isinstance(t, str_type): - out.extend(t) - else: - out.append(t) - lastE = e - out.append(instring[lastE:]) - out = [o for o in out if o] - return "".join([str(s) for s in _flatten(out)]) - except ParseBaseException as exc: - if ParserElement.verbose_stacktrace: - raise - else: - # catch and re-raise exception from here, clears out pyparsing internal stack trace - raise exc.with_traceback(None) - - def search_string( - self, - instring: str, - max_matches: int = _MAX_INT, - *, - debug: bool = False, - maxMatches: int = _MAX_INT, - ) -> ParseResults: - """ - Another extension to :class:`scan_string`, simplifying the access to the tokens found - to match the given parse expression. May be called with optional - ``max_matches`` argument, to clip searching after 'n' matches are found. - - Example:: - - # a capitalized word starts with an uppercase letter, followed by zero or more lowercase letters - cap_word = Word(alphas.upper(), alphas.lower()) - - print(cap_word.search_string("More than Iron, more than Lead, more than Gold I need Electricity")) - - # the sum() builtin can be used to merge results into a single ParseResults object - print(sum(cap_word.search_string("More than Iron, more than Lead, more than Gold I need Electricity"))) - - prints:: - - [['More'], ['Iron'], ['Lead'], ['Gold'], ['I'], ['Electricity']] - ['More', 'Iron', 'Lead', 'Gold', 'I', 'Electricity'] - """ - maxMatches = min(maxMatches, max_matches) - try: - return ParseResults( - [t for t, s, e in self.scan_string(instring, maxMatches, debug=debug)] - ) - except ParseBaseException as exc: - if ParserElement.verbose_stacktrace: - raise - else: - # catch and re-raise exception from here, clears out pyparsing internal stack trace - raise exc.with_traceback(None) - - def split( - self, - instring: str, - maxsplit: int = _MAX_INT, - include_separators: bool = False, - *, - includeSeparators=False, - ) -> Generator[str, None, None]: - """ - Generator method to split a string using the given expression as a separator. - May be called with optional ``maxsplit`` argument, to limit the number of splits; - and the optional ``include_separators`` argument (default= ``False``), if the separating - matching text should be included in the split results. - - Example:: - - punc = one_of(list(".,;:/-!?")) - print(list(punc.split("This, this?, this sentence, is badly punctuated!"))) - - prints:: - - ['This', ' this', '', ' this sentence', ' is badly punctuated', ''] - """ - includeSeparators = includeSeparators or include_separators - last = 0 - for t, s, e in self.scan_string(instring, max_matches=maxsplit): - yield instring[last:s] - if includeSeparators: - yield t[0] - last = e - yield instring[last:] - - def __add__(self, other) -> "ParserElement": - """ - Implementation of ``+`` operator - returns :class:`And`. 
Adding strings to a :class:`ParserElement` - converts them to :class:`Literal`s by default. - - Example:: - - greet = Word(alphas) + "," + Word(alphas) + "!" - hello = "Hello, World!" - print(hello, "->", greet.parse_string(hello)) - - prints:: - - Hello, World! -> ['Hello', ',', 'World', '!'] - - ``...`` may be used as a parse expression as a short form of :class:`SkipTo`. - - Literal('start') + ... + Literal('end') - - is equivalent to: - - Literal('start') + SkipTo('end')("_skipped*") + Literal('end') - - Note that the skipped text is returned with '_skipped' as a results name, - and to support having multiple skips in the same parser, the value returned is - a list of all skipped text. - """ - if other is Ellipsis: - return _PendingSkip(self) - - if isinstance(other, str_type): - other = self._literalStringClass(other) - if not isinstance(other, ParserElement): - raise TypeError( - "Cannot combine element of type {} with ParserElement".format( - type(other).__name__ - ) - ) - return And([self, other]) - - def __radd__(self, other) -> "ParserElement": - """ - Implementation of ``+`` operator when left operand is not a :class:`ParserElement` - """ - if other is Ellipsis: - return SkipTo(self)("_skipped*") + self - - if isinstance(other, str_type): - other = self._literalStringClass(other) - if not isinstance(other, ParserElement): - raise TypeError( - "Cannot combine element of type {} with ParserElement".format( - type(other).__name__ - ) - ) - return other + self - - def __sub__(self, other) -> "ParserElement": - """ - Implementation of ``-`` operator, returns :class:`And` with error stop - """ - if isinstance(other, str_type): - other = self._literalStringClass(other) - if not isinstance(other, ParserElement): - raise TypeError( - "Cannot combine element of type {} with ParserElement".format( - type(other).__name__ - ) - ) - return self + And._ErrorStop() + other - - def __rsub__(self, other) -> "ParserElement": - """ - Implementation of ``-`` operator when left operand is not a :class:`ParserElement` - """ - if isinstance(other, str_type): - other = self._literalStringClass(other) - if not isinstance(other, ParserElement): - raise TypeError( - "Cannot combine element of type {} with ParserElement".format( - type(other).__name__ - ) - ) - return other - self - - def __mul__(self, other) -> "ParserElement": - """ - Implementation of ``*`` operator, allows use of ``expr * 3`` in place of - ``expr + expr + expr``. Expressions may also be multiplied by a 2-integer - tuple, similar to ``{min, max}`` multipliers in regular expressions. Tuples - may also include ``None`` as in: - - ``expr*(n, None)`` or ``expr*(n, )`` is equivalent - to ``expr*n + ZeroOrMore(expr)`` - (read as "at least n instances of ``expr``") - - ``expr*(None, n)`` is equivalent to ``expr*(0, n)`` - (read as "0 to n instances of ``expr``") - - ``expr*(None, None)`` is equivalent to ``ZeroOrMore(expr)`` - - ``expr*(1, None)`` is equivalent to ``OneOrMore(expr)`` - - Note that ``expr*(None, n)`` does not raise an exception if - more than n exprs exist in the input stream; that is, - ``expr*(None, n)`` does not enforce a maximum number of expr - occurrences. 
If this behavior is desired, then write - ``expr*(None, n) + ~expr`` - """ - if other is Ellipsis: - other = (0, None) - elif isinstance(other, tuple) and other[:1] == (Ellipsis,): - other = ((0,) + other[1:] + (None,))[:2] - - if isinstance(other, int): - minElements, optElements = other, 0 - elif isinstance(other, tuple): - other = tuple(o if o is not Ellipsis else None for o in other) - other = (other + (None, None))[:2] - if other[0] is None: - other = (0, other[1]) - if isinstance(other[0], int) and other[1] is None: - if other[0] == 0: - return ZeroOrMore(self) - if other[0] == 1: - return OneOrMore(self) - else: - return self * other[0] + ZeroOrMore(self) - elif isinstance(other[0], int) and isinstance(other[1], int): - minElements, optElements = other - optElements -= minElements - else: - raise TypeError( - "cannot multiply ParserElement and ({}) objects".format( - ",".join(type(item).__name__ for item in other) - ) - ) - else: - raise TypeError( - "cannot multiply ParserElement and {} objects".format( - type(other).__name__ - ) - ) - - if minElements < 0: - raise ValueError("cannot multiply ParserElement by negative value") - if optElements < 0: - raise ValueError( - "second tuple value must be greater or equal to first tuple value" - ) - if minElements == optElements == 0: - return And([]) - - if optElements: - - def makeOptionalList(n): - if n > 1: - return Opt(self + makeOptionalList(n - 1)) - else: - return Opt(self) - - if minElements: - if minElements == 1: - ret = self + makeOptionalList(optElements) - else: - ret = And([self] * minElements) + makeOptionalList(optElements) - else: - ret = makeOptionalList(optElements) - else: - if minElements == 1: - ret = self - else: - ret = And([self] * minElements) - return ret - - def __rmul__(self, other) -> "ParserElement": - return self.__mul__(other) - - def __or__(self, other) -> "ParserElement": - """ - Implementation of ``|`` operator - returns :class:`MatchFirst` - """ - if other is Ellipsis: - return _PendingSkip(self, must_skip=True) - - if isinstance(other, str_type): - other = self._literalStringClass(other) - if not isinstance(other, ParserElement): - raise TypeError( - "Cannot combine element of type {} with ParserElement".format( - type(other).__name__ - ) - ) - return MatchFirst([self, other]) - - def __ror__(self, other) -> "ParserElement": - """ - Implementation of ``|`` operator when left operand is not a :class:`ParserElement` - """ - if isinstance(other, str_type): - other = self._literalStringClass(other) - if not isinstance(other, ParserElement): - raise TypeError( - "Cannot combine element of type {} with ParserElement".format( - type(other).__name__ - ) - ) - return other | self - - def __xor__(self, other) -> "ParserElement": - """ - Implementation of ``^`` operator - returns :class:`Or` - """ - if isinstance(other, str_type): - other = self._literalStringClass(other) - if not isinstance(other, ParserElement): - raise TypeError( - "Cannot combine element of type {} with ParserElement".format( - type(other).__name__ - ) - ) - return Or([self, other]) - - def __rxor__(self, other) -> "ParserElement": - """ - Implementation of ``^`` operator when left operand is not a :class:`ParserElement` - """ - if isinstance(other, str_type): - other = self._literalStringClass(other) - if not isinstance(other, ParserElement): - raise TypeError( - "Cannot combine element of type {} with ParserElement".format( - type(other).__name__ - ) - ) - return other ^ self - - def __and__(self, other) -> "ParserElement": - """ - 
Implementation of ``&`` operator - returns :class:`Each` - """ - if isinstance(other, str_type): - other = self._literalStringClass(other) - if not isinstance(other, ParserElement): - raise TypeError( - "Cannot combine element of type {} with ParserElement".format( - type(other).__name__ - ) - ) - return Each([self, other]) - - def __rand__(self, other) -> "ParserElement": - """ - Implementation of ``&`` operator when left operand is not a :class:`ParserElement` - """ - if isinstance(other, str_type): - other = self._literalStringClass(other) - if not isinstance(other, ParserElement): - raise TypeError( - "Cannot combine element of type {} with ParserElement".format( - type(other).__name__ - ) - ) - return other & self - - def __invert__(self) -> "ParserElement": - """ - Implementation of ``~`` operator - returns :class:`NotAny` - """ - return NotAny(self) - - # disable __iter__ to override legacy use of sequential access to __getitem__ to - # iterate over a sequence - __iter__ = None - - def __getitem__(self, key): - """ - use ``[]`` indexing notation as a short form for expression repetition: - - - ``expr[n]`` is equivalent to ``expr*n`` - - ``expr[m, n]`` is equivalent to ``expr*(m, n)`` - - ``expr[n, ...]`` or ``expr[n,]`` is equivalent - to ``expr*n + ZeroOrMore(expr)`` - (read as "at least n instances of ``expr``") - - ``expr[..., n]`` is equivalent to ``expr*(0, n)`` - (read as "0 to n instances of ``expr``") - - ``expr[...]`` and ``expr[0, ...]`` are equivalent to ``ZeroOrMore(expr)`` - - ``expr[1, ...]`` is equivalent to ``OneOrMore(expr)`` - - ``None`` may be used in place of ``...``. - - Note that ``expr[..., n]`` and ``expr[m, n]``do not raise an exception - if more than ``n`` ``expr``s exist in the input stream. If this behavior is - desired, then write ``expr[..., n] + ~expr``. - """ - - # convert single arg keys to tuples - try: - if isinstance(key, str_type): - key = (key,) - iter(key) - except TypeError: - key = (key, key) - - if len(key) > 2: - raise TypeError( - "only 1 or 2 index arguments supported ({}{})".format( - key[:5], "... [{}]".format(len(key)) if len(key) > 5 else "" - ) - ) - - # clip to 2 elements - ret = self * tuple(key[:2]) - return ret - - def __call__(self, name: str = None) -> "ParserElement": - """ - Shortcut for :class:`set_results_name`, with ``list_all_matches=False``. - - If ``name`` is given with a trailing ``'*'`` character, then ``list_all_matches`` will be - passed as ``True``. - - If ``name` is omitted, same as calling :class:`copy`. - - Example:: - - # these are equivalent - userdata = Word(alphas).set_results_name("name") + Word(nums + "-").set_results_name("socsecno") - userdata = Word(alphas)("name") + Word(nums + "-")("socsecno") - """ - if name is not None: - return self._setResultsName(name) - else: - return self.copy() - - def suppress(self) -> "ParserElement": - """ - Suppresses the output of this :class:`ParserElement`; useful to keep punctuation from - cluttering up returned output. - """ - return Suppress(self) - - def ignore_whitespace(self, recursive: bool = True) -> "ParserElement": - """ - Enables the skipping of whitespace before matching the characters in the - :class:`ParserElement`'s defined pattern. 
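The `[]` repetition notation summarized above maps directly onto counted repetition, `OneOrMore`, and `ZeroOrMore`. A compact sketch:

```python
import pyparsing as pp

word = pp.Word(pp.alphas)

print(word[2].parse_string("ab cd"))          # exactly two  -> ['ab', 'cd']
print(word[1, ...].parse_string("ab cd ef"))  # one or more  -> ['ab', 'cd', 'ef']
print(word[...].parse_string(""))             # zero or more -> []
```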
- - :param recursive: If ``True`` (the default), also enable whitespace skipping in child elements (if any) - """ - self.skipWhitespace = True - return self - - def leave_whitespace(self, recursive: bool = True) -> "ParserElement": - """ - Disables the skipping of whitespace before matching the characters in the - :class:`ParserElement`'s defined pattern. This is normally only used internally by - the pyparsing module, but may be needed in some whitespace-sensitive grammars. - - :param recursive: If true (the default), also disable whitespace skipping in child elements (if any) - """ - self.skipWhitespace = False - return self - - def set_whitespace_chars( - self, chars: Union[Set[str], str], copy_defaults: bool = False - ) -> "ParserElement": - """ - Overrides the default whitespace chars - """ - self.skipWhitespace = True - self.whiteChars = set(chars) - self.copyDefaultWhiteChars = copy_defaults - return self - - def parse_with_tabs(self) -> "ParserElement": - """ - Overrides default behavior to expand ```` s to spaces before parsing the input string. - Must be called before ``parse_string`` when the input grammar contains elements that - match ```` characters. - """ - self.keepTabs = True - return self - - def ignore(self, other: "ParserElement") -> "ParserElement": - """ - Define expression to be ignored (e.g., comments) while doing pattern - matching; may be called repeatedly, to define multiple comment or other - ignorable patterns. - - Example:: - - patt = OneOrMore(Word(alphas)) - patt.parse_string('ablaj /* comment */ lskjd') - # -> ['ablaj'] - - patt.ignore(c_style_comment) - patt.parse_string('ablaj /* comment */ lskjd') - # -> ['ablaj', 'lskjd'] - """ - import typing - - if isinstance(other, str_type): - other = Suppress(other) - - if isinstance(other, Suppress): - if other not in self.ignoreExprs: - self.ignoreExprs.append(other) - else: - self.ignoreExprs.append(Suppress(other.copy())) - return self - - def set_debug_actions( - self, - start_action: DebugStartAction, - success_action: DebugSuccessAction, - exception_action: DebugExceptionAction, - ) -> "ParserElement": - """ - Customize display of debugging messages while doing pattern matching: - - - ``start_action`` - method to be called when an expression is about to be parsed; - should have the signature ``fn(input_string: str, location: int, expression: ParserElement, cache_hit: bool)`` - - - ``success_action`` - method to be called when an expression has successfully parsed; - should have the signature ``fn(input_string: str, start_location: int, end_location: int, expression: ParserELement, parsed_tokens: ParseResults, cache_hit: bool)`` - - - ``exception_action`` - method to be called when expression fails to parse; - should have the signature ``fn(input_string: str, location: int, expression: ParserElement, exception: Exception, cache_hit: bool)`` - """ - self.debugActions = self.DebugActions( - start_action or _default_start_debug_action, - success_action or _default_success_debug_action, - exception_action or _default_exception_debug_action, - ) - self.debug = True - return self - - def set_debug(self, flag: bool = True) -> "ParserElement": - """ - Enable display of debugging messages while doing pattern matching. - Set ``flag`` to ``True`` to enable, ``False`` to disable. 
- - Example:: - - wd = Word(alphas).set_name("alphaword") - integer = Word(nums).set_name("numword") - term = wd | integer - - # turn on debugging for wd - wd.set_debug() - - OneOrMore(term).parse_string("abc 123 xyz 890") - - prints:: - - Match alphaword at loc 0(1,1) - Matched alphaword -> ['abc'] - Match alphaword at loc 3(1,4) - Exception raised:Expected alphaword (at char 4), (line:1, col:5) - Match alphaword at loc 7(1,8) - Matched alphaword -> ['xyz'] - Match alphaword at loc 11(1,12) - Exception raised:Expected alphaword (at char 12), (line:1, col:13) - Match alphaword at loc 15(1,16) - Exception raised:Expected alphaword (at char 15), (line:1, col:16) - - The output shown is that produced by the default debug actions - custom debug actions can be - specified using :class:`set_debug_actions`. Prior to attempting - to match the ``wd`` expression, the debugging message ``"Match at loc (,)"`` - is shown. Then if the parse succeeds, a ``"Matched"`` message is shown, or an ``"Exception raised"`` - message is shown. Also note the use of :class:`set_name` to assign a human-readable name to the expression, - which makes debugging and exception messages easier to understand - for instance, the default - name created for the :class:`Word` expression without calling ``set_name`` is ``"W:(A-Za-z)"``. - """ - if flag: - self.set_debug_actions( - _default_start_debug_action, - _default_success_debug_action, - _default_exception_debug_action, - ) - else: - self.debug = False - return self - - @property - def default_name(self) -> str: - if self._defaultName is None: - self._defaultName = self._generateDefaultName() - return self._defaultName - - @abstractmethod - def _generateDefaultName(self): - """ - Child classes must define this method, which defines how the ``default_name`` is set. - """ - - def set_name(self, name: str) -> "ParserElement": - """ - Define name for this expression, makes debugging and exception messages clearer. - Example:: - Word(nums).parse_string("ABC") # -> Exception: Expected W:(0-9) (at char 0), (line:1, col:1) - Word(nums).set_name("integer").parse_string("ABC") # -> Exception: Expected integer (at char 0), (line:1, col:1) - """ - self.customName = name - self.errmsg = "Expected " + self.name - if __diag__.enable_debug_on_named_expressions: - self.set_debug() - return self - - @property - def name(self) -> str: - # This will use a user-defined name if available, but otherwise defaults back to the auto-generated name - return self.customName if self.customName is not None else self.default_name - - def __str__(self) -> str: - return self.name - - def __repr__(self) -> str: - return str(self) - - def streamline(self) -> "ParserElement": - self.streamlined = True - self._defaultName = None - return self - - def recurse(self) -> Sequence["ParserElement"]: - return [] - - def _checkRecursion(self, parseElementList): - subRecCheckList = parseElementList[:] + [self] - for e in self.recurse(): - e._checkRecursion(subRecCheckList) - - def validate(self, validateTrace=None) -> None: - """ - Check defined expressions for valid structure, check for infinite recursive definitions. - """ - self._checkRecursion([]) - - def parse_file( - self, - file_or_filename: Union[str, Path, TextIO], - encoding: str = "utf-8", - parse_all: bool = False, - *, - parseAll: bool = False, - ) -> ParseResults: - """ - Execute the parse expression on the given file or filename. - If a filename is specified (instead of a file object), - the entire file is opened, read, and closed before parsing. 
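`parse_file` accepts either a path or any object with a `.read()` method, as the docstring above notes. A minimal sketch using an in-memory file object; the `key_value` grammar is illustrative:

```python
import io
import pyparsing as pp

key_value = pp.Word(pp.alphas)("key") + pp.Suppress("=") + pp.Word(pp.nums)("value")

print(key_value.parse_file(io.StringIO("answer = 42\n")))  # -> ['answer', '42']
```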
- """ - parseAll = parseAll or parse_all - try: - file_contents = file_or_filename.read() - except AttributeError: - with open(file_or_filename, "r", encoding=encoding) as f: - file_contents = f.read() - try: - return self.parse_string(file_contents, parseAll) - except ParseBaseException as exc: - if ParserElement.verbose_stacktrace: - raise - else: - # catch and re-raise exception from here, clears out pyparsing internal stack trace - raise exc.with_traceback(None) - - def __eq__(self, other): - if self is other: - return True - elif isinstance(other, str_type): - return self.matches(other, parse_all=True) - elif isinstance(other, ParserElement): - return vars(self) == vars(other) - return False - - def __hash__(self): - return id(self) - - def matches( - self, test_string: str, parse_all: bool = True, *, parseAll: bool = True - ) -> bool: - """ - Method for quick testing of a parser against a test string. Good for simple - inline microtests of sub expressions while building up larger parser. - - Parameters: - - ``test_string`` - to test against this expression for a match - - ``parse_all`` - (default= ``True``) - flag to pass to :class:`parse_string` when running tests - - Example:: - - expr = Word(nums) - assert expr.matches("100") - """ - parseAll = parseAll and parse_all - try: - self.parse_string(str(test_string), parse_all=parseAll) - return True - except ParseBaseException: - return False - - def run_tests( - self, - tests: Union[str, List[str]], - parse_all: bool = True, - comment: OptionalType[Union["ParserElement", str]] = "#", - full_dump: bool = True, - print_results: bool = True, - failure_tests: bool = False, - post_parse: Callable[[str, ParseResults], str] = None, - file: OptionalType[TextIO] = None, - with_line_numbers: bool = False, - *, - parseAll: bool = True, - fullDump: bool = True, - printResults: bool = True, - failureTests: bool = False, - postParse: Callable[[str, ParseResults], str] = None, - ) -> Tuple[bool, List[Tuple[str, Union[ParseResults, Exception]]]]: - """ - Execute the parse expression on a series of test strings, showing each - test, the parsed results or where the parse failed. Quick and easy way to - run a parse expression against a list of sample strings. 
- - Parameters: - - ``tests`` - a list of separate test strings, or a multiline string of test strings - - ``parse_all`` - (default= ``True``) - flag to pass to :class:`parse_string` when running tests - - ``comment`` - (default= ``'#'``) - expression for indicating embedded comments in the test - string; pass None to disable comment filtering - - ``full_dump`` - (default= ``True``) - dump results as list followed by results names in nested outline; - if False, only dump nested list - - ``print_results`` - (default= ``True``) prints test output to stdout - - ``failure_tests`` - (default= ``False``) indicates if these tests are expected to fail parsing - - ``post_parse`` - (default= ``None``) optional callback for successful parse results; called as - `fn(test_string, parse_results)` and returns a string to be added to the test output - - ``file`` - (default= ``None``) optional file-like object to which test output will be written; - if None, will default to ``sys.stdout`` - - ``with_line_numbers`` - default= ``False``) show test strings with line and column numbers - - Returns: a (success, results) tuple, where success indicates that all tests succeeded - (or failed if ``failure_tests`` is True), and the results contain a list of lines of each - test's output - - Example:: - - number_expr = pyparsing_common.number.copy() - - result = number_expr.run_tests(''' - # unsigned integer - 100 - # negative integer - -100 - # float with scientific notation - 6.02e23 - # integer with scientific notation - 1e-12 - ''') - print("Success" if result[0] else "Failed!") - - result = number_expr.run_tests(''' - # stray character - 100Z - # missing leading digit before '.' - -.100 - # too many '.' - 3.14.159 - ''', failure_tests=True) - print("Success" if result[0] else "Failed!") - - prints:: - - # unsigned integer - 100 - [100] - - # negative integer - -100 - [-100] - - # float with scientific notation - 6.02e23 - [6.02e+23] - - # integer with scientific notation - 1e-12 - [1e-12] - - Success - - # stray character - 100Z - ^ - FAIL: Expected end of text (at char 3), (line:1, col:4) - - # missing leading digit before '.' - -.100 - ^ - FAIL: Expected {real number with scientific notation | real number | signed integer} (at char 0), (line:1, col:1) - - # too many '.' - 3.14.159 - ^ - FAIL: Expected end of text (at char 4), (line:1, col:5) - - Success - - Each test string must be on a single line. If you want to test a string that spans multiple - lines, create a test like this:: - - expr.run_tests(r"this is a test\\n of strings that spans \\n 3 lines") - - (Note that this is a raw string literal, you must include the leading ``'r'``.) 
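One option the example above does not demonstrate is the ``post_parse`` callback; a short sketch, with illustrative names and assuming ``pyparsing`` is importable::

    import pyparsing as pp

    def summarize(test_string, result):
        # the returned string is appended to that test's printed output
        return "token count: {}".format(len(result))

    digits = pp.OneOrMore(pp.Word(pp.nums))
    success, _ = digits.run_tests("""
        # two tokens
        10 20
        # three tokens
        1 2 3
        """, post_parse=summarize)
    print("all passed" if success else "failures")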
- """ - from .testing import pyparsing_test - - parseAll = parseAll and parse_all - fullDump = fullDump and full_dump - printResults = printResults and print_results - failureTests = failureTests or failure_tests - postParse = postParse or post_parse - if isinstance(tests, str_type): - line_strip = type(tests).strip - tests = [line_strip(test_line) for test_line in tests.rstrip().splitlines()] - if isinstance(comment, str_type): - comment = Literal(comment) - if file is None: - file = sys.stdout - print_ = file.write - - result: Union[ParseResults, Exception] - allResults = [] - comments = [] - success = True - NL = Literal(r"\n").add_parse_action(replace_with("\n")).ignore(quoted_string) - BOM = "\ufeff" - for t in tests: - if comment is not None and comment.matches(t, False) or comments and not t: - comments.append( - pyparsing_test.with_line_numbers(t) if with_line_numbers else t - ) - continue - if not t: - continue - out = [ - "\n" + "\n".join(comments) if comments else "", - pyparsing_test.with_line_numbers(t) if with_line_numbers else t, - ] - comments = [] - try: - # convert newline marks to actual newlines, and strip leading BOM if present - t = NL.transform_string(t.lstrip(BOM)) - result = self.parse_string(t, parse_all=parseAll) - except ParseBaseException as pe: - fatal = "(FATAL)" if isinstance(pe, ParseFatalException) else "" - out.append(pe.explain()) - out.append("FAIL: " + str(pe)) - if ParserElement.verbose_stacktrace: - out.extend(traceback.format_tb(pe.__traceback__)) - success = success and failureTests - result = pe - except Exception as exc: - out.append("FAIL-EXCEPTION: {}: {}".format(type(exc).__name__, exc)) - if ParserElement.verbose_stacktrace: - out.extend(traceback.format_tb(exc.__traceback__)) - success = success and failureTests - result = exc - else: - success = success and not failureTests - if postParse is not None: - try: - pp_value = postParse(t, result) - if pp_value is not None: - if isinstance(pp_value, ParseResults): - out.append(pp_value.dump()) - else: - out.append(str(pp_value)) - else: - out.append(result.dump()) - except Exception as e: - out.append(result.dump(full=fullDump)) - out.append( - "{} failed: {}: {}".format( - postParse.__name__, type(e).__name__, e - ) - ) - else: - out.append(result.dump(full=fullDump)) - out.append("") - - if printResults: - print_("\n".join(out)) - - allResults.append((t, result)) - - return success, allResults - - def create_diagram( - self, - output_html: Union[TextIO, Path, str], - vertical: int = 3, - show_results_names: bool = False, - show_groups: bool = False, - **kwargs, - ) -> None: - """ - Create a railroad diagram for the parser. - - Parameters: - - output_html (str or file-like object) - output target for generated - diagram HTML - - vertical (int) - threshold for formatting multiple alternatives vertically - instead of horizontally (default=3) - - show_results_names - bool flag whether diagram should show annotations for - defined results names - - show_groups - bool flag whether groups should be highlighted with an unlabeled surrounding box - Additional diagram-formatting keyword arguments can also be included; - see railroad.Diagram class. 
- """ - - try: - from .diagram import to_railroad, railroad_to_html - except ImportError as ie: - raise Exception( - "must ``pip install pyparsing[diagrams]`` to generate parser railroad diagrams" - ) from ie - - self.streamline() - - railroad = to_railroad( - self, - vertical=vertical, - show_results_names=show_results_names, - show_groups=show_groups, - diagram_kwargs=kwargs, - ) - if isinstance(output_html, (str, Path)): - with open(output_html, "w", encoding="utf-8") as diag_file: - diag_file.write(railroad_to_html(railroad)) - else: - # we were passed a file-like object, just write to it - output_html.write(railroad_to_html(railroad)) - - setDefaultWhitespaceChars = set_default_whitespace_chars - inlineLiteralsUsing = inline_literals_using - setResultsName = set_results_name - setBreak = set_break - setParseAction = set_parse_action - addParseAction = add_parse_action - addCondition = add_condition - setFailAction = set_fail_action - tryParse = try_parse - canParseNext = can_parse_next - resetCache = reset_cache - enableLeftRecursion = enable_left_recursion - enablePackrat = enable_packrat - parseString = parse_string - scanString = scan_string - searchString = search_string - transformString = transform_string - setWhitespaceChars = set_whitespace_chars - parseWithTabs = parse_with_tabs - setDebugActions = set_debug_actions - setDebug = set_debug - defaultName = default_name - setName = set_name - parseFile = parse_file - runTests = run_tests - ignoreWhitespace = ignore_whitespace - leaveWhitespace = leave_whitespace - - -class _PendingSkip(ParserElement): - # internal placeholder class to hold a place were '...' is added to a parser element, - # once another ParserElement is added, this placeholder will be replaced with a SkipTo - def __init__(self, expr: ParserElement, must_skip: bool = False): - super().__init__() - self.anchor = expr - self.must_skip = must_skip - - def _generateDefaultName(self): - return str(self.anchor + Empty()).replace("Empty", "...") - - def __add__(self, other) -> "ParserElement": - skipper = SkipTo(other).set_name("...")("_skipped*") - if self.must_skip: - - def must_skip(t): - if not t._skipped or t._skipped.as_list() == [""]: - del t[0] - t.pop("_skipped", None) - - def show_skip(t): - if t._skipped.as_list()[-1:] == [""]: - t.pop("_skipped") - t["_skipped"] = "missing <" + repr(self.anchor) + ">" - - return ( - self.anchor + skipper().add_parse_action(must_skip) - | skipper().add_parse_action(show_skip) - ) + other - - return self.anchor + skipper + other - - def __repr__(self): - return self.defaultName - - def parseImpl(self, *args): - raise Exception( - "use of `...` expression without following SkipTo target expression" - ) - - -class Token(ParserElement): - """Abstract :class:`ParserElement` subclass, for defining atomic - matching patterns. - """ - - def __init__(self): - super().__init__(savelist=False) - - def _generateDefaultName(self): - return type(self).__name__ - - -class Empty(Token): - """ - An empty token, will always match. - """ - - def __init__(self): - super().__init__() - self.mayReturnEmpty = True - self.mayIndexError = False - - -class NoMatch(Token): - """ - A token that will never match. - """ - - def __init__(self): - super().__init__() - self.mayReturnEmpty = True - self.mayIndexError = False - self.errmsg = "Unmatchable token" - - def parseImpl(self, instring, loc, doActions=True): - raise ParseException(instring, loc, self.errmsg, self) - - -class Literal(Token): - """ - Token to exactly match a specified string. 
- - Example:: - - Literal('blah').parse_string('blah') # -> ['blah'] - Literal('blah').parse_string('blahfooblah') # -> ['blah'] - Literal('blah').parse_string('bla') # -> Exception: Expected "blah" - - For case-insensitive matching, use :class:`CaselessLiteral`. - - For keyword matching (force word break before and after the matched string), - use :class:`Keyword` or :class:`CaselessKeyword`. - """ - - def __init__(self, match_string: str = "", *, matchString: str = ""): - super().__init__() - match_string = matchString or match_string - self.match = match_string - self.matchLen = len(match_string) - try: - self.firstMatchChar = match_string[0] - except IndexError: - raise ValueError("null string passed to Literal; use Empty() instead") - self.errmsg = "Expected " + self.name - self.mayReturnEmpty = False - self.mayIndexError = False - - # Performance tuning: modify __class__ to select - # a parseImpl optimized for single-character check - if self.matchLen == 1 and type(self) is Literal: - self.__class__ = _SingleCharLiteral - - def _generateDefaultName(self): - return repr(self.match) - - def parseImpl(self, instring, loc, doActions=True): - if instring[loc] == self.firstMatchChar and instring.startswith( - self.match, loc - ): - return loc + self.matchLen, self.match - raise ParseException(instring, loc, self.errmsg, self) - - -class _SingleCharLiteral(Literal): - def parseImpl(self, instring, loc, doActions=True): - if instring[loc] == self.firstMatchChar: - return loc + 1, self.match - raise ParseException(instring, loc, self.errmsg, self) - - -ParserElement._literalStringClass = Literal - - -class Keyword(Token): - """ - Token to exactly match a specified string as a keyword, that is, - it must be immediately followed by a non-keyword character. Compare - with :class:`Literal`: - - - ``Literal("if")`` will match the leading ``'if'`` in - ``'ifAndOnlyIf'``. - - ``Keyword("if")`` will not; it will only match the leading - ``'if'`` in ``'if x=1'``, or ``'if(y==2)'`` - - Accepts two optional constructor arguments in addition to the - keyword string: - - - ``identChars`` is a string of characters that would be valid - identifier characters, defaulting to all alphanumerics + "_" and - "$" - - ``caseless`` allows case-insensitive matching, default is ``False``. - - Example:: - - Keyword("start").parse_string("start") # -> ['start'] - Keyword("start").parse_string("starting") # -> Exception - - For case-insensitive matching, use :class:`CaselessKeyword`. 
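A quick side-by-side of :class:`Literal` and :class:`Keyword` (assuming ``pyparsing`` is importable)::

    import pyparsing as pp

    print(pp.Literal("if").parse_string("ifAndOnlyIf"))              # -> ['if']  (matches the prefix)
    print(pp.Keyword("if").matches("ifAndOnlyIf", parse_all=False))  # -> False: requires a word break
    print(pp.Keyword("if").matches("if (x == 1)", parse_all=False))  # -> True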
- """ - - DEFAULT_KEYWORD_CHARS = alphanums + "_$" - - def __init__( - self, - match_string: str = "", - ident_chars: OptionalType[str] = None, - caseless: bool = False, - *, - matchString: str = "", - identChars: OptionalType[str] = None, - ): - super().__init__() - identChars = identChars or ident_chars - if identChars is None: - identChars = Keyword.DEFAULT_KEYWORD_CHARS - match_string = matchString or match_string - self.match = match_string - self.matchLen = len(match_string) - try: - self.firstMatchChar = match_string[0] - except IndexError: - raise ValueError("null string passed to Keyword; use Empty() instead") - self.errmsg = "Expected {} {}".format(type(self).__name__, self.name) - self.mayReturnEmpty = False - self.mayIndexError = False - self.caseless = caseless - if caseless: - self.caselessmatch = match_string.upper() - identChars = identChars.upper() - self.identChars = set(identChars) - - def _generateDefaultName(self): - return repr(self.match) - - def parseImpl(self, instring, loc, doActions=True): - errmsg = self.errmsg - errloc = loc - if self.caseless: - if instring[loc : loc + self.matchLen].upper() == self.caselessmatch: - if loc == 0 or instring[loc - 1].upper() not in self.identChars: - if ( - loc >= len(instring) - self.matchLen - or instring[loc + self.matchLen].upper() not in self.identChars - ): - return loc + self.matchLen, self.match - else: - # followed by keyword char - errmsg += ", was immediately followed by keyword character" - errloc = loc + self.matchLen - else: - # preceded by keyword char - errmsg += ", keyword was immediately preceded by keyword character" - errloc = loc - 1 - # else no match just raise plain exception - - else: - if ( - instring[loc] == self.firstMatchChar - and self.matchLen == 1 - or instring.startswith(self.match, loc) - ): - if loc == 0 or instring[loc - 1] not in self.identChars: - if ( - loc >= len(instring) - self.matchLen - or instring[loc + self.matchLen] not in self.identChars - ): - return loc + self.matchLen, self.match - else: - # followed by keyword char - errmsg += ( - ", keyword was immediately followed by keyword character" - ) - errloc = loc + self.matchLen - else: - # preceded by keyword char - errmsg += ", keyword was immediately preceded by keyword character" - errloc = loc - 1 - # else no match just raise plain exception - - raise ParseException(instring, errloc, errmsg, self) - - @staticmethod - def set_default_keyword_chars(chars) -> None: - """ - Overrides the default characters used by :class:`Keyword` expressions. - """ - Keyword.DEFAULT_KEYWORD_CHARS = chars - - setDefaultKeywordChars = set_default_keyword_chars - - -class CaselessLiteral(Literal): - """ - Token to match a specified string, ignoring case of letters. - Note: the matched results will always be in the case of the given - match string, NOT the case of the input text. - - Example:: - - OneOrMore(CaselessLiteral("CMD")).parse_string("cmd CMD Cmd10") - # -> ['CMD', 'CMD', 'CMD'] - - (Contrast with example for :class:`CaselessKeyword`.) - """ - - def __init__(self, match_string: str = "", *, matchString: str = ""): - match_string = matchString or match_string - super().__init__(match_string.upper()) - # Preserve the defining literal. 
- self.returnString = match_string - self.errmsg = "Expected " + self.name - - def parseImpl(self, instring, loc, doActions=True): - if instring[loc : loc + self.matchLen].upper() == self.match: - return loc + self.matchLen, self.returnString - raise ParseException(instring, loc, self.errmsg, self) - - -class CaselessKeyword(Keyword): - """ - Caseless version of :class:`Keyword`. - - Example:: - - OneOrMore(CaselessKeyword("CMD")).parse_string("cmd CMD Cmd10") - # -> ['CMD', 'CMD'] - - (Contrast with example for :class:`CaselessLiteral`.) - """ - - def __init__( - self, - match_string: str = "", - ident_chars: OptionalType[str] = None, - *, - matchString: str = "", - identChars: OptionalType[str] = None, - ): - identChars = identChars or ident_chars - match_string = matchString or match_string - super().__init__(match_string, identChars, caseless=True) - - -class CloseMatch(Token): - """A variation on :class:`Literal` which matches "close" matches, - that is, strings with at most 'n' mismatching characters. - :class:`CloseMatch` takes parameters: - - - ``match_string`` - string to be matched - - ``caseless`` - a boolean indicating whether to ignore casing when comparing characters - - ``max_mismatches`` - (``default=1``) maximum number of - mismatches allowed to count as a match - - The results from a successful parse will contain the matched text - from the input string and the following named results: - - - ``mismatches`` - a list of the positions within the - match_string where mismatches were found - - ``original`` - the original match_string used to compare - against the input string - - If ``mismatches`` is an empty list, then the match was an exact - match. - - Example:: - - patt = CloseMatch("ATCATCGAATGGA") - patt.parse_string("ATCATCGAAXGGA") # -> (['ATCATCGAAXGGA'], {'mismatches': [[9]], 'original': ['ATCATCGAATGGA']}) - patt.parse_string("ATCAXCGAAXGGA") # -> Exception: Expected 'ATCATCGAATGGA' (with up to 1 mismatches) (at char 0), (line:1, col:1) - - # exact match - patt.parse_string("ATCATCGAATGGA") # -> (['ATCATCGAATGGA'], {'mismatches': [[]], 'original': ['ATCATCGAATGGA']}) - - # close match allowing up to 2 mismatches - patt = CloseMatch("ATCATCGAATGGA", max_mismatches=2) - patt.parse_string("ATCAXCGAAXGGA") # -> (['ATCAXCGAAXGGA'], {'mismatches': [[4, 9]], 'original': ['ATCATCGAATGGA']}) - """ - - def __init__( - self, - match_string: str, - max_mismatches: int = None, - *, - maxMismatches: int = 1, - caseless=False, - ): - maxMismatches = max_mismatches if max_mismatches is not None else maxMismatches - super().__init__() - self.match_string = match_string - self.maxMismatches = maxMismatches - self.errmsg = "Expected {!r} (with up to {} mismatches)".format( - self.match_string, self.maxMismatches - ) - self.caseless = caseless - self.mayIndexError = False - self.mayReturnEmpty = False - - def _generateDefaultName(self): - return "{}:{!r}".format(type(self).__name__, self.match_string) - - def parseImpl(self, instring, loc, doActions=True): - start = loc - instrlen = len(instring) - maxloc = start + len(self.match_string) - - if maxloc <= instrlen: - match_string = self.match_string - match_stringloc = 0 - mismatches = [] - maxMismatches = self.maxMismatches - - for match_stringloc, s_m in enumerate( - zip(instring[loc:maxloc], match_string) - ): - src, mat = s_m - if self.caseless: - src, mat = src.lower(), mat.lower() - - if src != mat: - mismatches.append(match_stringloc) - if len(mismatches) > maxMismatches: - break - else: - loc = start + match_stringloc + 1 - 
results = ParseResults([instring[start:loc]]) - results["original"] = match_string - results["mismatches"] = mismatches - return loc, results - - raise ParseException(instring, loc, self.errmsg, self) - - -class Word(Token): - """Token for matching words composed of allowed character sets. - Parameters: - - ``init_chars`` - string of all characters that should be used to - match as a word; "ABC" will match "AAA", "ABAB", "CBAC", etc.; - if ``body_chars`` is also specified, then this is the string of - initial characters - - ``body_chars`` - string of characters that - can be used for matching after a matched initial character as - given in ``init_chars``; if omitted, same as the initial characters - (default=``None``) - - ``min`` - minimum number of characters to match (default=1) - - ``max`` - maximum number of characters to match (default=0) - - ``exact`` - exact number of characters to match (default=0) - - ``as_keyword`` - match as a keyword (default=``False``) - - ``exclude_chars`` - characters that might be - found in the input ``body_chars`` string but which should not be - accepted for matching ;useful to define a word of all - printables except for one or two characters, for instance - (default=``None``) - - :class:`srange` is useful for defining custom character set strings - for defining :class:`Word` expressions, using range notation from - regular expression character sets. - - A common mistake is to use :class:`Word` to match a specific literal - string, as in ``Word("Address")``. Remember that :class:`Word` - uses the string argument to define *sets* of matchable characters. - This expression would match "Add", "AAA", "dAred", or any other word - made up of the characters 'A', 'd', 'r', 'e', and 's'. To match an - exact literal string, use :class:`Literal` or :class:`Keyword`. - - pyparsing includes helper strings for building Words: - - - :class:`alphas` - - :class:`nums` - - :class:`alphanums` - - :class:`hexnums` - - :class:`alphas8bit` (alphabetic characters in ASCII range 128-255 - - accented, tilded, umlauted, etc.) - - :class:`punc8bit` (non-alphabetic characters in ASCII range - 128-255 - currency, symbols, superscripts, diacriticals, etc.) - - :class:`printables` (any non-whitespace character) - - ``alphas``, ``nums``, and ``printables`` are also defined in several - Unicode sets - see :class:`pyparsing_unicode``. 
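A small supplement to the Example below, showing separate initial/body character sets and ``exclude_chars`` (illustrative grammar and data)::

    import pyparsing as pp

    # identifiers: leading alpha or underscore, then alphanumerics or underscore
    identifier = pp.Word(pp.alphas + "_", pp.alphanums + "_")
    print(identifier.parse_string("_count1"))     # -> ['_count1']

    # any printable character except ',' - handy for simple comma-separated values
    csv_value = pp.Word(pp.printables, exclude_chars=",")
    print(pp.delimited_list(csv_value).parse_string("alpha,beta,gamma"))
    # -> ['alpha', 'beta', 'gamma']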
- - Example:: - - # a word composed of digits - integer = Word(nums) # equivalent to Word("0123456789") or Word(srange("0-9")) - - # a word with a leading capital, and zero or more lowercase - capital_word = Word(alphas.upper(), alphas.lower()) - - # hostnames are alphanumeric, with leading alpha, and '-' - hostname = Word(alphas, alphanums + '-') - - # roman numeral (not a strict parser, accepts invalid mix of characters) - roman = Word("IVXLCDM") - - # any string of non-whitespace characters, except for ',' - csv_value = Word(printables, exclude_chars=",") - """ - - def __init__( - self, - init_chars: str = "", - body_chars: OptionalType[str] = None, - min: int = 1, - max: int = 0, - exact: int = 0, - as_keyword: bool = False, - exclude_chars: OptionalType[str] = None, - *, - initChars: OptionalType[str] = None, - bodyChars: OptionalType[str] = None, - asKeyword: bool = False, - excludeChars: OptionalType[str] = None, - ): - initChars = initChars or init_chars - bodyChars = bodyChars or body_chars - asKeyword = asKeyword or as_keyword - excludeChars = excludeChars or exclude_chars - super().__init__() - if not initChars: - raise ValueError( - "invalid {}, initChars cannot be empty string".format( - type(self).__name__ - ) - ) - - initChars = set(initChars) - self.initChars = initChars - if excludeChars: - excludeChars = set(excludeChars) - initChars -= excludeChars - if bodyChars: - bodyChars = set(bodyChars) - excludeChars - self.initCharsOrig = "".join(sorted(initChars)) - - if bodyChars: - self.bodyCharsOrig = "".join(sorted(bodyChars)) - self.bodyChars = set(bodyChars) - else: - self.bodyCharsOrig = "".join(sorted(initChars)) - self.bodyChars = set(initChars) - - self.maxSpecified = max > 0 - - if min < 1: - raise ValueError( - "cannot specify a minimum length < 1; use Opt(Word()) if zero-length word is permitted" - ) - - self.minLen = min - - if max > 0: - self.maxLen = max - else: - self.maxLen = _MAX_INT - - if exact > 0: - self.maxLen = exact - self.minLen = exact - - self.errmsg = "Expected " + self.name - self.mayIndexError = False - self.asKeyword = asKeyword - - # see if we can make a regex for this Word - if " " not in self.initChars | self.bodyChars and (min == 1 and exact == 0): - if self.bodyChars == self.initChars: - if max == 0: - repeat = "+" - elif max == 1: - repeat = "" - else: - repeat = "{{{},{}}}".format( - self.minLen, "" if self.maxLen == _MAX_INT else self.maxLen - ) - self.reString = "[{}]{}".format( - _collapse_string_to_ranges(self.initChars), - repeat, - ) - elif len(self.initChars) == 1: - if max == 0: - repeat = "*" - else: - repeat = "{{0,{}}}".format(max - 1) - self.reString = "{}[{}]{}".format( - re.escape(self.initCharsOrig), - _collapse_string_to_ranges(self.bodyChars), - repeat, - ) - else: - if max == 0: - repeat = "*" - elif max == 2: - repeat = "" - else: - repeat = "{{0,{}}}".format(max - 1) - self.reString = "[{}][{}]{}".format( - _collapse_string_to_ranges(self.initChars), - _collapse_string_to_ranges(self.bodyChars), - repeat, - ) - if self.asKeyword: - self.reString = r"\b" + self.reString + r"\b" - - try: - self.re = re.compile(self.reString) - except re.error: - self.re = None - else: - self.re_match = self.re.match - self.__class__ = _WordRegex - - def _generateDefaultName(self): - def charsAsStr(s): - max_repr_len = 16 - s = _collapse_string_to_ranges(s, re_escape=False) - if len(s) > max_repr_len: - return s[: max_repr_len - 3] + "..." 
- else: - return s - - if self.initChars != self.bodyChars: - base = "W:({}, {})".format( - charsAsStr(self.initChars), charsAsStr(self.bodyChars) - ) - else: - base = "W:({})".format(charsAsStr(self.initChars)) - - # add length specification - if self.minLen > 1 or self.maxLen != _MAX_INT: - if self.minLen == self.maxLen: - if self.minLen == 1: - return base[2:] - else: - return base + "{{{}}}".format(self.minLen) - elif self.maxLen == _MAX_INT: - return base + "{{{},...}}".format(self.minLen) - else: - return base + "{{{},{}}}".format(self.minLen, self.maxLen) - return base - - def parseImpl(self, instring, loc, doActions=True): - if instring[loc] not in self.initChars: - raise ParseException(instring, loc, self.errmsg, self) - - start = loc - loc += 1 - instrlen = len(instring) - bodychars = self.bodyChars - maxloc = start + self.maxLen - maxloc = min(maxloc, instrlen) - while loc < maxloc and instring[loc] in bodychars: - loc += 1 - - throwException = False - if loc - start < self.minLen: - throwException = True - elif self.maxSpecified and loc < instrlen and instring[loc] in bodychars: - throwException = True - elif self.asKeyword: - if ( - start > 0 - and instring[start - 1] in bodychars - or loc < instrlen - and instring[loc] in bodychars - ): - throwException = True - - if throwException: - raise ParseException(instring, loc, self.errmsg, self) - - return loc, instring[start:loc] - - -class _WordRegex(Word): - def parseImpl(self, instring, loc, doActions=True): - result = self.re_match(instring, loc) - if not result: - raise ParseException(instring, loc, self.errmsg, self) - - loc = result.end() - return loc, result.group() - - -class Char(_WordRegex): - """A short-cut class for defining :class:`Word` ``(characters, exact=1)``, - when defining a match of any single character in a string of - characters. - """ - - def __init__( - self, - charset: str, - as_keyword: bool = False, - exclude_chars: OptionalType[str] = None, - *, - asKeyword: bool = False, - excludeChars: OptionalType[str] = None, - ): - asKeyword = asKeyword or as_keyword - excludeChars = excludeChars or exclude_chars - super().__init__( - charset, exact=1, asKeyword=asKeyword, excludeChars=excludeChars - ) - self.reString = "[{}]".format(_collapse_string_to_ranges(self.initChars)) - if asKeyword: - self.reString = r"\b{}\b".format(self.reString) - self.re = re.compile(self.reString) - self.re_match = self.re.match - - -class Regex(Token): - r"""Token for matching strings that match a given regular - expression. Defined with string specifying the regular expression in - a form recognized by the stdlib Python `re module `_. - If the given regex contains named groups (defined using ``(?P...)``), - these will be preserved as named :class:`ParseResults`. - - If instead of the Python stdlib ``re`` module you wish to use a different RE module - (such as the ``regex`` module), you can do so by building your ``Regex`` object with - a compiled RE that was compiled using ``regex``. 
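A short sketch of named-group handling, complementing the Example below (pattern and input are illustrative)::

    import pyparsing as pp

    # named groups in the pattern become named results
    date = pp.Regex(r"(?P<year>\d{4})-(?P<month>\d{1,2})-(?P<day>\d{1,2})")
    result = date.parse_string("2024-07-04")
    print(result.year, result.month, result.day)   # -> 2024 07 04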
- - Example:: - - realnum = Regex(r"[+-]?\d+\.\d*") - # ref: https://stackoverflow.com/questions/267399/how-do-you-match-only-valid-roman-numerals-with-a-regular-expression - roman = Regex(r"M{0,4}(CM|CD|D?{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})") - - # named fields in a regex will be returned as named results - date = Regex(r'(?P\d{4})-(?P\d\d?)-(?P\d\d?)') - - # the Regex class will accept re's compiled using the regex module - import regex - parser = pp.Regex(regex.compile(r'[0-9]')) - """ - - def __init__( - self, - pattern: Any, - flags: Union[re.RegexFlag, int] = 0, - as_group_list: bool = False, - as_match: bool = False, - *, - asGroupList: bool = False, - asMatch: bool = False, - ): - """The parameters ``pattern`` and ``flags`` are passed - to the ``re.compile()`` function as-is. See the Python - `re module `_ module for an - explanation of the acceptable patterns and flags. - """ - super().__init__() - asGroupList = asGroupList or as_group_list - asMatch = asMatch or as_match - - if isinstance(pattern, str_type): - if not pattern: - raise ValueError("null string passed to Regex; use Empty() instead") - - self._re = None - self.reString = self.pattern = pattern - self.flags = flags - - elif hasattr(pattern, "pattern") and hasattr(pattern, "match"): - self._re = pattern - self.pattern = self.reString = pattern.pattern - self.flags = flags - - else: - raise TypeError( - "Regex may only be constructed with a string or a compiled RE object" - ) - - self.errmsg = "Expected " + self.name - self.mayIndexError = False - self.asGroupList = asGroupList - self.asMatch = asMatch - if self.asGroupList: - self.parseImpl = self.parseImplAsGroupList - if self.asMatch: - self.parseImpl = self.parseImplAsMatch - - @cached_property - def re(self): - if self._re: - return self._re - else: - try: - return re.compile(self.pattern, self.flags) - except re.error: - raise ValueError( - "invalid pattern ({!r}) passed to Regex".format(self.pattern) - ) - - @cached_property - def re_match(self): - return self.re.match - - @cached_property - def mayReturnEmpty(self): - return self.re_match("") is not None - - def _generateDefaultName(self): - return "Re:({})".format(repr(self.pattern).replace("\\\\", "\\")) - - def parseImpl(self, instring, loc, doActions=True): - result = self.re_match(instring, loc) - if not result: - raise ParseException(instring, loc, self.errmsg, self) - - loc = result.end() - ret = ParseResults(result.group()) - d = result.groupdict() - if d: - for k, v in d.items(): - ret[k] = v - return loc, ret - - def parseImplAsGroupList(self, instring, loc, doActions=True): - result = self.re_match(instring, loc) - if not result: - raise ParseException(instring, loc, self.errmsg, self) - - loc = result.end() - ret = result.groups() - return loc, ret - - def parseImplAsMatch(self, instring, loc, doActions=True): - result = self.re_match(instring, loc) - if not result: - raise ParseException(instring, loc, self.errmsg, self) - - loc = result.end() - ret = result - return loc, ret - - def sub(self, repl: str) -> ParserElement: - r""" - Return :class:`Regex` with an attached parse action to transform the parsed - result as if called using `re.sub(expr, repl, string) `_. - - Example:: - - make_html = Regex(r"(\w+):(.*?):").sub(r"<\1>\2") - print(make_html.transform_string("h1:main title:")) - # prints "

          <h1>main title</h1>
          " - """ - if self.asGroupList: - raise TypeError("cannot use sub() with Regex(asGroupList=True)") - - if self.asMatch and callable(repl): - raise TypeError("cannot use sub() with a callable with Regex(asMatch=True)") - - if self.asMatch: - - def pa(tokens): - return tokens[0].expand(repl) - - else: - - def pa(tokens): - return self.re.sub(repl, tokens[0]) - - return self.add_parse_action(pa) - - -class QuotedString(Token): - r""" - Token for matching strings that are delimited by quoting characters. - - Defined with the following parameters: - - - ``quote_char`` - string of one or more characters defining the - quote delimiting string - - ``esc_char`` - character to re_escape quotes, typically backslash - (default= ``None``) - - ``esc_quote`` - special quote sequence to re_escape an embedded quote - string (such as SQL's ``""`` to re_escape an embedded ``"``) - (default= ``None``) - - ``multiline`` - boolean indicating whether quotes can span - multiple lines (default= ``False``) - - ``unquote_results`` - boolean indicating whether the matched text - should be unquoted (default= ``True``) - - ``end_quote_char`` - string of one or more characters defining the - end of the quote delimited string (default= ``None`` => same as - quote_char) - - ``convert_whitespace_escapes`` - convert escaped whitespace - (``'\t'``, ``'\n'``, etc.) to actual whitespace - (default= ``True``) - - Example:: - - qs = QuotedString('"') - print(qs.search_string('lsjdf "This is the quote" sldjf')) - complex_qs = QuotedString('{{', end_quote_char='}}') - print(complex_qs.search_string('lsjdf {{This is the "quote"}} sldjf')) - sql_qs = QuotedString('"', esc_quote='""') - print(sql_qs.search_string('lsjdf "This is the quote with ""embedded"" quotes" sldjf')) - - prints:: - - [['This is the quote']] - [['This is the "quote"']] - [['This is the quote with "embedded" quotes']] - """ - ws_map = ((r"\t", "\t"), (r"\n", "\n"), (r"\f", "\f"), (r"\r", "\r")) - - def __init__( - self, - quote_char: str = "", - esc_char: OptionalType[str] = None, - esc_quote: OptionalType[str] = None, - multiline: bool = False, - unquote_results: bool = True, - end_quote_char: OptionalType[str] = None, - convert_whitespace_escapes: bool = True, - *, - quoteChar: str = "", - escChar: OptionalType[str] = None, - escQuote: OptionalType[str] = None, - unquoteResults: bool = True, - endQuoteChar: OptionalType[str] = None, - convertWhitespaceEscapes: bool = True, - ): - super().__init__() - escChar = escChar or esc_char - escQuote = escQuote or esc_quote - unquoteResults = unquoteResults and unquote_results - endQuoteChar = endQuoteChar or end_quote_char - convertWhitespaceEscapes = ( - convertWhitespaceEscapes and convert_whitespace_escapes - ) - quote_char = quoteChar or quote_char - - # remove white space from quote chars - wont work anyway - quote_char = quote_char.strip() - if not quote_char: - raise ValueError("quote_char cannot be the empty string") - - if endQuoteChar is None: - endQuoteChar = quote_char - else: - endQuoteChar = endQuoteChar.strip() - if not endQuoteChar: - raise ValueError("endQuoteChar cannot be the empty string") - - self.quoteChar = quote_char - self.quoteCharLen = len(quote_char) - self.firstQuoteChar = quote_char[0] - self.endQuoteChar = endQuoteChar - self.endQuoteCharLen = len(endQuoteChar) - self.escChar = escChar - self.escQuote = escQuote - self.unquoteResults = unquoteResults - self.convertWhitespaceEscapes = convertWhitespaceEscapes - - sep = "" - inner_pattern = "" - - if escQuote: - inner_pattern += 
r"{}(?:{})".format(sep, re.escape(escQuote)) - sep = "|" - - if escChar: - inner_pattern += r"{}(?:{}.)".format(sep, re.escape(escChar)) - sep = "|" - self.escCharReplacePattern = re.escape(self.escChar) + "(.)" - - if len(self.endQuoteChar) > 1: - inner_pattern += ( - "{}(?:".format(sep) - + "|".join( - "(?:{}(?!{}))".format( - re.escape(self.endQuoteChar[:i]), - re.escape(self.endQuoteChar[i:]), - ) - for i in range(len(self.endQuoteChar) - 1, 0, -1) - ) - + ")" - ) - sep = "|" - - if multiline: - self.flags = re.MULTILINE | re.DOTALL - inner_pattern += r"{}(?:[^{}{}])".format( - sep, - _escape_regex_range_chars(self.endQuoteChar[0]), - (_escape_regex_range_chars(escChar) if escChar is not None else ""), - ) - else: - self.flags = 0 - inner_pattern += r"{}(?:[^{}\n\r{}])".format( - sep, - _escape_regex_range_chars(self.endQuoteChar[0]), - (_escape_regex_range_chars(escChar) if escChar is not None else ""), - ) - - self.pattern = "".join( - [ - re.escape(self.quoteChar), - "(?:", - inner_pattern, - ")*", - re.escape(self.endQuoteChar), - ] - ) - - try: - self.re = re.compile(self.pattern, self.flags) - self.reString = self.pattern - self.re_match = self.re.match - except re.error: - raise ValueError( - "invalid pattern {!r} passed to Regex".format(self.pattern) - ) - - self.errmsg = "Expected " + self.name - self.mayIndexError = False - self.mayReturnEmpty = True - - def _generateDefaultName(self): - if self.quoteChar == self.endQuoteChar and isinstance(self.quoteChar, str_type): - return "string enclosed in {!r}".format(self.quoteChar) - - return "quoted string, starting with {} ending with {}".format( - self.quoteChar, self.endQuoteChar - ) - - def parseImpl(self, instring, loc, doActions=True): - result = ( - instring[loc] == self.firstQuoteChar - and self.re_match(instring, loc) - or None - ) - if not result: - raise ParseException(instring, loc, self.errmsg, self) - - loc = result.end() - ret = result.group() - - if self.unquoteResults: - - # strip off quotes - ret = ret[self.quoteCharLen : -self.endQuoteCharLen] - - if isinstance(ret, str_type): - # replace escaped whitespace - if "\\" in ret and self.convertWhitespaceEscapes: - for wslit, wschar in self.ws_map: - ret = ret.replace(wslit, wschar) - - # replace escaped characters - if self.escChar: - ret = re.sub(self.escCharReplacePattern, r"\g<1>", ret) - - # replace escaped quotes - if self.escQuote: - ret = ret.replace(self.escQuote, self.endQuoteChar) - - return loc, ret - - -class CharsNotIn(Token): - """Token for matching words composed of characters *not* in a given - set (will include whitespace in matched characters if not listed in - the provided exclusion set - see example). Defined with string - containing all disallowed characters, and an optional minimum, - maximum, and/or exact length. The default value for ``min`` is - 1 (a minimum value < 1 is not valid); the default values for - ``max`` and ``exact`` are 0, meaning no maximum or exact - length restriction. 
- - Example:: - - # define a comma-separated-value as anything that is not a ',' - csv_value = CharsNotIn(',') - print(delimited_list(csv_value).parse_string("dkls,lsdkjf,s12 34,@!#,213")) - - prints:: - - ['dkls', 'lsdkjf', 's12 34', '@!#', '213'] - """ - - def __init__( - self, - not_chars: str = "", - min: int = 1, - max: int = 0, - exact: int = 0, - *, - notChars: str = "", - ): - super().__init__() - self.skipWhitespace = False - self.notChars = not_chars or notChars - self.notCharsSet = set(self.notChars) - - if min < 1: - raise ValueError( - "cannot specify a minimum length < 1; use " - "Opt(CharsNotIn()) if zero-length char group is permitted" - ) - - self.minLen = min - - if max > 0: - self.maxLen = max - else: - self.maxLen = _MAX_INT - - if exact > 0: - self.maxLen = exact - self.minLen = exact - - self.errmsg = "Expected " + self.name - self.mayReturnEmpty = self.minLen == 0 - self.mayIndexError = False - - def _generateDefaultName(self): - not_chars_str = _collapse_string_to_ranges(self.notChars) - if len(not_chars_str) > 16: - return "!W:({}...)".format(self.notChars[: 16 - 3]) - else: - return "!W:({})".format(self.notChars) - - def parseImpl(self, instring, loc, doActions=True): - notchars = self.notCharsSet - if instring[loc] in notchars: - raise ParseException(instring, loc, self.errmsg, self) - - start = loc - loc += 1 - maxlen = min(start + self.maxLen, len(instring)) - while loc < maxlen and instring[loc] not in notchars: - loc += 1 - - if loc - start < self.minLen: - raise ParseException(instring, loc, self.errmsg, self) - - return loc, instring[start:loc] - - -class White(Token): - """Special matching class for matching whitespace. Normally, - whitespace is ignored by pyparsing grammars. This class is included - when some whitespace structures are significant. Define with - a string containing the whitespace characters to be matched; default - is ``" \\t\\r\\n"``. Also takes optional ``min``, - ``max``, and ``exact`` arguments, as defined for the - :class:`Word` class. 
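A minimal sketch of :class:`White` together with ``parse_with_tabs`` (illustrative input)::

    import pyparsing as pp

    # parse_string() normally expands tabs to spaces; parse_with_tabs() keeps them,
    # and White("\t") then matches them as explicit, significant tokens
    field = pp.Word(pp.alphas)
    row = (field + pp.White("\t") + field).parse_with_tabs()
    print(row.parse_string("alpha\tbeta"))   # -> ['alpha', '\t', 'beta']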
- """ - - whiteStrs = { - " ": "", - "\t": "", - "\n": "", - "\r": "", - "\f": "", - "\u00A0": "", - "\u1680": "", - "\u180E": "", - "\u2000": "", - "\u2001": "", - "\u2002": "", - "\u2003": "", - "\u2004": "", - "\u2005": "", - "\u2006": "", - "\u2007": "", - "\u2008": "", - "\u2009": "", - "\u200A": "", - "\u200B": "", - "\u202F": "", - "\u205F": "", - "\u3000": "", - } - - def __init__(self, ws: str = " \t\r\n", min: int = 1, max: int = 0, exact: int = 0): - super().__init__() - self.matchWhite = ws - self.set_whitespace_chars( - "".join(c for c in self.whiteStrs if c not in self.matchWhite), - copy_defaults=True, - ) - # self.leave_whitespace() - self.mayReturnEmpty = True - self.errmsg = "Expected " + self.name - - self.minLen = min - - if max > 0: - self.maxLen = max - else: - self.maxLen = _MAX_INT - - if exact > 0: - self.maxLen = exact - self.minLen = exact - - def _generateDefaultName(self): - return "".join(White.whiteStrs[c] for c in self.matchWhite) - - def parseImpl(self, instring, loc, doActions=True): - if instring[loc] not in self.matchWhite: - raise ParseException(instring, loc, self.errmsg, self) - start = loc - loc += 1 - maxloc = start + self.maxLen - maxloc = min(maxloc, len(instring)) - while loc < maxloc and instring[loc] in self.matchWhite: - loc += 1 - - if loc - start < self.minLen: - raise ParseException(instring, loc, self.errmsg, self) - - return loc, instring[start:loc] - - -class PositionToken(Token): - def __init__(self): - super().__init__() - self.mayReturnEmpty = True - self.mayIndexError = False - - -class GoToColumn(PositionToken): - """Token to advance to a specific column of input text; useful for - tabular report scraping. - """ - - def __init__(self, colno: int): - super().__init__() - self.col = colno - - def preParse(self, instring, loc): - if col(loc, instring) != self.col: - instrlen = len(instring) - if self.ignoreExprs: - loc = self._skipIgnorables(instring, loc) - while ( - loc < instrlen - and instring[loc].isspace() - and col(loc, instring) != self.col - ): - loc += 1 - return loc - - def parseImpl(self, instring, loc, doActions=True): - thiscol = col(loc, instring) - if thiscol > self.col: - raise ParseException(instring, loc, "Text not in expected column", self) - newloc = loc + self.col - thiscol - ret = instring[loc:newloc] - return newloc, ret - - -class LineStart(PositionToken): - r"""Matches if current position is at the beginning of a line within - the parse string - - Example:: - - test = '''\ - AAA this line - AAA and this line - AAA but not this one - B AAA and definitely not this one - ''' - - for t in (LineStart() + 'AAA' + restOfLine).search_string(test): - print(t) - - prints:: - - ['AAA', ' this line'] - ['AAA', ' and this line'] - - """ - - def __init__(self): - super().__init__() - self.leave_whitespace() - self.orig_whiteChars = set() | self.whiteChars - self.whiteChars.discard("\n") - self.skipper = Empty().set_whitespace_chars(self.whiteChars) - self.errmsg = "Expected start of line" - - def preParse(self, instring, loc): - if loc == 0: - return loc - else: - ret = self.skipper.preParse(instring, loc) - if "\n" in self.orig_whiteChars: - while instring[ret : ret + 1] == "\n": - ret = self.skipper.preParse(instring, ret + 1) - return ret - - def parseImpl(self, instring, loc, doActions=True): - if col(loc, instring) == 1: - return loc, [] - raise ParseException(instring, loc, self.errmsg, self) - - -class LineEnd(PositionToken): - """Matches if current position is at the end of a line within the - parse string - """ - - 
def __init__(self): - super().__init__() - self.whiteChars.discard("\n") - self.set_whitespace_chars(self.whiteChars, copy_defaults=False) - self.errmsg = "Expected end of line" - - def parseImpl(self, instring, loc, doActions=True): - if loc < len(instring): - if instring[loc] == "\n": - return loc + 1, "\n" - else: - raise ParseException(instring, loc, self.errmsg, self) - elif loc == len(instring): - return loc + 1, [] - else: - raise ParseException(instring, loc, self.errmsg, self) - - -class StringStart(PositionToken): - """Matches if current position is at the beginning of the parse - string - """ - - def __init__(self): - super().__init__() - self.errmsg = "Expected start of text" - - def parseImpl(self, instring, loc, doActions=True): - if loc != 0: - # see if entire string up to here is just whitespace and ignoreables - if loc != self.preParse(instring, 0): - raise ParseException(instring, loc, self.errmsg, self) - return loc, [] - - -class StringEnd(PositionToken): - """ - Matches if current position is at the end of the parse string - """ - - def __init__(self): - super().__init__() - self.errmsg = "Expected end of text" - - def parseImpl(self, instring, loc, doActions=True): - if loc < len(instring): - raise ParseException(instring, loc, self.errmsg, self) - elif loc == len(instring): - return loc + 1, [] - elif loc > len(instring): - return loc, [] - else: - raise ParseException(instring, loc, self.errmsg, self) - - -class WordStart(PositionToken): - """Matches if the current position is at the beginning of a - :class:`Word`, and is not preceded by any character in a given - set of ``word_chars`` (default= ``printables``). To emulate the - ``\b`` behavior of regular expressions, use - ``WordStart(alphanums)``. ``WordStart`` will also match at - the beginning of the string being parsed, or at the beginning of - a line. - """ - - def __init__(self, word_chars: str = printables, *, wordChars: str = printables): - wordChars = word_chars if wordChars == printables else wordChars - super().__init__() - self.wordChars = set(wordChars) - self.errmsg = "Not at the start of a word" - - def parseImpl(self, instring, loc, doActions=True): - if loc != 0: - if ( - instring[loc - 1] in self.wordChars - or instring[loc] not in self.wordChars - ): - raise ParseException(instring, loc, self.errmsg, self) - return loc, [] - - -class WordEnd(PositionToken): - """Matches if the current position is at the end of a :class:`Word`, - and is not followed by any character in a given set of ``word_chars`` - (default= ``printables``). To emulate the ``\b`` behavior of - regular expressions, use ``WordEnd(alphanums)``. ``WordEnd`` - will also match at the end of the string being parsed, or at the end - of a line. - """ - - def __init__(self, word_chars: str = printables, *, wordChars: str = printables): - wordChars = word_chars if wordChars == printables else wordChars - super().__init__() - self.wordChars = set(wordChars) - self.skipWhitespace = False - self.errmsg = "Not at the end of a word" - - def parseImpl(self, instring, loc, doActions=True): - instrlen = len(instring) - if instrlen > 0 and loc < instrlen: - if ( - instring[loc] in self.wordChars - or instring[loc - 1] not in self.wordChars - ): - raise ParseException(instring, loc, self.errmsg, self) - return loc, [] - - -class ParseExpression(ParserElement): - """Abstract subclass of ParserElement, for combining and - post-processing parsed tokens. 
- """ - - def __init__(self, exprs: IterableType[ParserElement], savelist: bool = False): - super().__init__(savelist) - self.exprs: List[ParserElement] - if isinstance(exprs, _generatorType): - exprs = list(exprs) - - if isinstance(exprs, str_type): - self.exprs = [self._literalStringClass(exprs)] - elif isinstance(exprs, ParserElement): - self.exprs = [exprs] - elif isinstance(exprs, Iterable): - exprs = list(exprs) - # if sequence of strings provided, wrap with Literal - if any(isinstance(expr, str_type) for expr in exprs): - exprs = ( - self._literalStringClass(e) if isinstance(e, str_type) else e - for e in exprs - ) - self.exprs = list(exprs) - else: - try: - self.exprs = list(exprs) - except TypeError: - self.exprs = [exprs] - self.callPreparse = False - - def recurse(self) -> Sequence[ParserElement]: - return self.exprs[:] - - def append(self, other) -> ParserElement: - self.exprs.append(other) - self._defaultName = None - return self - - def leave_whitespace(self, recursive: bool = True) -> ParserElement: - """ - Extends ``leave_whitespace`` defined in base class, and also invokes ``leave_whitespace`` on - all contained expressions. - """ - super().leave_whitespace(recursive) - - if recursive: - self.exprs = [e.copy() for e in self.exprs] - for e in self.exprs: - e.leave_whitespace(recursive) - return self - - def ignore_whitespace(self, recursive: bool = True) -> ParserElement: - """ - Extends ``ignore_whitespace`` defined in base class, and also invokes ``leave_whitespace`` on - all contained expressions. - """ - super().ignore_whitespace(recursive) - if recursive: - self.exprs = [e.copy() for e in self.exprs] - for e in self.exprs: - e.ignore_whitespace(recursive) - return self - - def ignore(self, other) -> ParserElement: - if isinstance(other, Suppress): - if other not in self.ignoreExprs: - super().ignore(other) - for e in self.exprs: - e.ignore(self.ignoreExprs[-1]) - else: - super().ignore(other) - for e in self.exprs: - e.ignore(self.ignoreExprs[-1]) - return self - - def _generateDefaultName(self): - return "{}:({})".format(self.__class__.__name__, str(self.exprs)) - - def streamline(self) -> ParserElement: - if self.streamlined: - return self - - super().streamline() - - for e in self.exprs: - e.streamline() - - # collapse nested :class:`And`'s of the form ``And(And(And(a, b), c), d)`` to ``And(a, b, c, d)`` - # but only if there are no parse actions or resultsNames on the nested And's - # (likewise for :class:`Or`'s and :class:`MatchFirst`'s) - if len(self.exprs) == 2: - other = self.exprs[0] - if ( - isinstance(other, self.__class__) - and not other.parseAction - and other.resultsName is None - and not other.debug - ): - self.exprs = other.exprs[:] + [self.exprs[1]] - self._defaultName = None - self.mayReturnEmpty |= other.mayReturnEmpty - self.mayIndexError |= other.mayIndexError - - other = self.exprs[-1] - if ( - isinstance(other, self.__class__) - and not other.parseAction - and other.resultsName is None - and not other.debug - ): - self.exprs = self.exprs[:-1] + other.exprs[:] - self._defaultName = None - self.mayReturnEmpty |= other.mayReturnEmpty - self.mayIndexError |= other.mayIndexError - - self.errmsg = "Expected " + str(self) - - return self - - def validate(self, validateTrace=None) -> None: - tmp = (validateTrace if validateTrace is not None else [])[:] + [self] - for e in self.exprs: - e.validate(tmp) - self._checkRecursion([]) - - def copy(self) -> ParserElement: - ret = super().copy() - ret.exprs = [e.copy() for e in self.exprs] - return ret - - def 
_setResultsName(self, name, listAllMatches=False): - if ( - __diag__.warn_ungrouped_named_tokens_in_collection - and Diagnostics.warn_ungrouped_named_tokens_in_collection - not in self.suppress_warnings_ - ): - for e in self.exprs: - if ( - isinstance(e, ParserElement) - and e.resultsName - and Diagnostics.warn_ungrouped_named_tokens_in_collection - not in e.suppress_warnings_ - ): - warnings.warn( - "{}: setting results name {!r} on {} expression " - "collides with {!r} on contained expression".format( - "warn_ungrouped_named_tokens_in_collection", - name, - type(self).__name__, - e.resultsName, - ), - stacklevel=3, - ) - - return super()._setResultsName(name, listAllMatches) - - ignoreWhitespace = ignore_whitespace - leaveWhitespace = leave_whitespace - - -class And(ParseExpression): - """ - Requires all given :class:`ParseExpression` s to be found in the given order. - Expressions may be separated by whitespace. - May be constructed using the ``'+'`` operator. - May also be constructed using the ``'-'`` operator, which will - suppress backtracking. - - Example:: - - integer = Word(nums) - name_expr = OneOrMore(Word(alphas)) - - expr = And([integer("id"), name_expr("name"), integer("age")]) - # more easily written as: - expr = integer("id") + name_expr("name") + integer("age") - """ - - class _ErrorStop(Empty): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self.leave_whitespace() - - def _generateDefaultName(self): - return "-" - - def __init__(self, exprs_arg: IterableType[ParserElement], savelist: bool = True): - exprs: List[ParserElement] = list(exprs_arg) - if exprs and Ellipsis in exprs: - tmp = [] - for i, expr in enumerate(exprs): - if expr is Ellipsis: - if i < len(exprs) - 1: - skipto_arg: ParserElement = (Empty() + exprs[i + 1]).exprs[-1] - tmp.append(SkipTo(skipto_arg)("_skipped*")) - else: - raise Exception( - "cannot construct And with sequence ending in ..." 
- ) - else: - tmp.append(expr) - exprs[:] = tmp - super().__init__(exprs, savelist) - if self.exprs: - self.mayReturnEmpty = all(e.mayReturnEmpty for e in self.exprs) - if not isinstance(self.exprs[0], White): - self.set_whitespace_chars( - self.exprs[0].whiteChars, - copy_defaults=self.exprs[0].copyDefaultWhiteChars, - ) - self.skipWhitespace = self.exprs[0].skipWhitespace - else: - self.skipWhitespace = False - else: - self.mayReturnEmpty = True - self.callPreparse = True - - def streamline(self) -> ParserElement: - # collapse any _PendingSkip's - if self.exprs: - if any( - isinstance(e, ParseExpression) - and e.exprs - and isinstance(e.exprs[-1], _PendingSkip) - for e in self.exprs[:-1] - ): - for i, e in enumerate(self.exprs[:-1]): - if e is None: - continue - if ( - isinstance(e, ParseExpression) - and e.exprs - and isinstance(e.exprs[-1], _PendingSkip) - ): - e.exprs[-1] = e.exprs[-1] + self.exprs[i + 1] - self.exprs[i + 1] = None - self.exprs = [e for e in self.exprs if e is not None] - - super().streamline() - - # link any IndentedBlocks to the prior expression - for prev, cur in zip(self.exprs, self.exprs[1:]): - # traverse cur or any first embedded expr of cur looking for an IndentedBlock - # (but watch out for recursive grammar) - seen = set() - while cur: - if id(cur) in seen: - break - seen.add(id(cur)) - if isinstance(cur, IndentedBlock): - prev.add_parse_action( - lambda s, l, t, cur_=cur: setattr( - cur_, "parent_anchor", col(l, s) - ) - ) - break - subs = cur.recurse() - cur = next(iter(subs), None) - - self.mayReturnEmpty = all(e.mayReturnEmpty for e in self.exprs) - return self - - def parseImpl(self, instring, loc, doActions=True): - # pass False as callPreParse arg to _parse for first element, since we already - # pre-parsed the string as part of our And pre-parsing - loc, resultlist = self.exprs[0]._parse( - instring, loc, doActions, callPreParse=False - ) - errorStop = False - for e in self.exprs[1:]: - # if isinstance(e, And._ErrorStop): - if type(e) is And._ErrorStop: - errorStop = True - continue - if errorStop: - try: - loc, exprtokens = e._parse(instring, loc, doActions) - except ParseSyntaxException: - raise - except ParseBaseException as pe: - pe.__traceback__ = None - raise ParseSyntaxException._from_exception(pe) - except IndexError: - raise ParseSyntaxException( - instring, len(instring), self.errmsg, self - ) - else: - loc, exprtokens = e._parse(instring, loc, doActions) - if exprtokens or exprtokens.haskeys(): - resultlist += exprtokens - return loc, resultlist - - def __iadd__(self, other): - if isinstance(other, str_type): - other = self._literalStringClass(other) - return self.append(other) # And([self, other]) - - def _checkRecursion(self, parseElementList): - subRecCheckList = parseElementList[:] + [self] - for e in self.exprs: - e._checkRecursion(subRecCheckList) - if not e.mayReturnEmpty: - break - - def _generateDefaultName(self): - inner = " ".join(str(e) for e in self.exprs) - # strip off redundant inner {}'s - while len(inner) > 1 and inner[0 :: len(inner) - 1] == "{}": - inner = inner[1:-1] - return "{" + inner + "}" - - -class Or(ParseExpression): - """Requires that at least one :class:`ParseExpression` is found. If - two expressions match, the expression that matches the longest - string will be used. May be constructed using the ``'^'`` - operator. - - Example:: - - # construct Or using '^' operator - - number = Word(nums) ^ Combine(Word(nums) + '.' 
+ Word(nums)) - print(number.search_string("123 3.1416 789")) - - prints:: - - [['123'], ['3.1416'], ['789']] - """ - - def __init__(self, exprs: IterableType[ParserElement], savelist: bool = False): - super().__init__(exprs, savelist) - if self.exprs: - self.mayReturnEmpty = any(e.mayReturnEmpty for e in self.exprs) - self.skipWhitespace = all(e.skipWhitespace for e in self.exprs) - else: - self.mayReturnEmpty = True - - def streamline(self) -> ParserElement: - super().streamline() - if self.exprs: - self.mayReturnEmpty = any(e.mayReturnEmpty for e in self.exprs) - self.saveAsList = any(e.saveAsList for e in self.exprs) - self.skipWhitespace = all( - e.skipWhitespace and not isinstance(e, White) for e in self.exprs - ) - else: - self.saveAsList = False - return self - - def parseImpl(self, instring, loc, doActions=True): - maxExcLoc = -1 - maxException = None - matches = [] - fatals = [] - if all(e.callPreparse for e in self.exprs): - loc = self.preParse(instring, loc) - for e in self.exprs: - try: - loc2 = e.try_parse(instring, loc, raise_fatal=True) - except ParseFatalException as pfe: - pfe.__traceback__ = None - pfe.parserElement = e - fatals.append(pfe) - maxException = None - maxExcLoc = -1 - except ParseException as err: - if not fatals: - err.__traceback__ = None - if err.loc > maxExcLoc: - maxException = err - maxExcLoc = err.loc - except IndexError: - if len(instring) > maxExcLoc: - maxException = ParseException( - instring, len(instring), e.errmsg, self - ) - maxExcLoc = len(instring) - else: - # save match among all matches, to retry longest to shortest - matches.append((loc2, e)) - - if matches: - # re-evaluate all matches in descending order of length of match, in case attached actions - # might change whether or how much they match of the input. 
- matches.sort(key=itemgetter(0), reverse=True) - - if not doActions: - # no further conditions or parse actions to change the selection of - # alternative, so the first match will be the best match - best_expr = matches[0][1] - return best_expr._parse(instring, loc, doActions) - - longest = -1, None - for loc1, expr1 in matches: - if loc1 <= longest[0]: - # already have a longer match than this one will deliver, we are done - return longest - - try: - loc2, toks = expr1._parse(instring, loc, doActions) - except ParseException as err: - err.__traceback__ = None - if err.loc > maxExcLoc: - maxException = err - maxExcLoc = err.loc - else: - if loc2 >= loc1: - return loc2, toks - # didn't match as much as before - elif loc2 > longest[0]: - longest = loc2, toks - - if longest != (-1, None): - return longest - - if fatals: - if len(fatals) > 1: - fatals.sort(key=lambda e: -e.loc) - if fatals[0].loc == fatals[1].loc: - fatals.sort(key=lambda e: (-e.loc, -len(str(e.parserElement)))) - max_fatal = fatals[0] - raise max_fatal - - if maxException is not None: - maxException.msg = self.errmsg - raise maxException - else: - raise ParseException( - instring, loc, "no defined alternatives to match", self - ) - - def __ixor__(self, other): - if isinstance(other, str_type): - other = self._literalStringClass(other) - return self.append(other) # Or([self, other]) - - def _generateDefaultName(self): - return "{" + " ^ ".join(str(e) for e in self.exprs) + "}" - - def _setResultsName(self, name, listAllMatches=False): - if ( - __diag__.warn_multiple_tokens_in_named_alternation - and Diagnostics.warn_multiple_tokens_in_named_alternation - not in self.suppress_warnings_ - ): - if any( - isinstance(e, And) - and Diagnostics.warn_multiple_tokens_in_named_alternation - not in e.suppress_warnings_ - for e in self.exprs - ): - warnings.warn( - "{}: setting results name {!r} on {} expression " - "will return a list of all parsed tokens in an And alternative, " - "in prior versions only the first token was returned; enclose " - "contained argument in Group".format( - "warn_multiple_tokens_in_named_alternation", - name, - type(self).__name__, - ), - stacklevel=3, - ) - - return super()._setResultsName(name, listAllMatches) - - -class MatchFirst(ParseExpression): - """Requires that at least one :class:`ParseExpression` is found. If - more than one expression matches, the first one listed is the one that will - match. May be constructed using the ``'|'`` operator. - - Example:: - - # construct MatchFirst using '|' operator - - # watch the order of expressions to match - number = Word(nums) | Combine(Word(nums) + '.' + Word(nums)) - print(number.search_string("123 3.1416 789")) # Fail! -> [['123'], ['3'], ['1416'], ['789']] - - # put more selective expression first - number = Combine(Word(nums) + '.' 
+ Word(nums)) | Word(nums) - print(number.search_string("123 3.1416 789")) # Better -> [['123'], ['3.1416'], ['789']] - """ - - def __init__(self, exprs: IterableType[ParserElement], savelist: bool = False): - super().__init__(exprs, savelist) - if self.exprs: - self.mayReturnEmpty = any(e.mayReturnEmpty for e in self.exprs) - self.skipWhitespace = all(e.skipWhitespace for e in self.exprs) - else: - self.mayReturnEmpty = True - - def streamline(self) -> ParserElement: - if self.streamlined: - return self - - super().streamline() - if self.exprs: - self.saveAsList = any(e.saveAsList for e in self.exprs) - self.mayReturnEmpty = any(e.mayReturnEmpty for e in self.exprs) - self.skipWhitespace = all( - e.skipWhitespace and not isinstance(e, White) for e in self.exprs - ) - else: - self.saveAsList = False - self.mayReturnEmpty = True - return self - - def parseImpl(self, instring, loc, doActions=True): - maxExcLoc = -1 - maxException = None - - for e in self.exprs: - try: - return e._parse( - instring, - loc, - doActions, - ) - except ParseFatalException as pfe: - pfe.__traceback__ = None - pfe.parserElement = e - raise - except ParseException as err: - if err.loc > maxExcLoc: - maxException = err - maxExcLoc = err.loc - except IndexError: - if len(instring) > maxExcLoc: - maxException = ParseException( - instring, len(instring), e.errmsg, self - ) - maxExcLoc = len(instring) - - if maxException is not None: - maxException.msg = self.errmsg - raise maxException - else: - raise ParseException( - instring, loc, "no defined alternatives to match", self - ) - - def __ior__(self, other): - if isinstance(other, str_type): - other = self._literalStringClass(other) - return self.append(other) # MatchFirst([self, other]) - - def _generateDefaultName(self): - return "{" + " | ".join(str(e) for e in self.exprs) + "}" - - def _setResultsName(self, name, listAllMatches=False): - if ( - __diag__.warn_multiple_tokens_in_named_alternation - and Diagnostics.warn_multiple_tokens_in_named_alternation - not in self.suppress_warnings_ - ): - if any( - isinstance(e, And) - and Diagnostics.warn_multiple_tokens_in_named_alternation - not in e.suppress_warnings_ - for e in self.exprs - ): - warnings.warn( - "{}: setting results name {!r} on {} expression " - "will return a list of all parsed tokens in an And alternative, " - "in prior versions only the first token was returned; enclose " - "contained argument in Group".format( - "warn_multiple_tokens_in_named_alternation", - name, - type(self).__name__, - ), - stacklevel=3, - ) - - return super()._setResultsName(name, listAllMatches) - - -class Each(ParseExpression): - """Requires all given :class:`ParseExpression` s to be found, but in - any order. Expressions may be separated by whitespace. - - May be constructed using the ``'&'`` operator. 
- - Example:: - - color = one_of("RED ORANGE YELLOW GREEN BLUE PURPLE BLACK WHITE BROWN") - shape_type = one_of("SQUARE CIRCLE TRIANGLE STAR HEXAGON OCTAGON") - integer = Word(nums) - shape_attr = "shape:" + shape_type("shape") - posn_attr = "posn:" + Group(integer("x") + ',' + integer("y"))("posn") - color_attr = "color:" + color("color") - size_attr = "size:" + integer("size") - - # use Each (using operator '&') to accept attributes in any order - # (shape and posn are required, color and size are optional) - shape_spec = shape_attr & posn_attr & Opt(color_attr) & Opt(size_attr) - - shape_spec.run_tests(''' - shape: SQUARE color: BLACK posn: 100, 120 - shape: CIRCLE size: 50 color: BLUE posn: 50,80 - color:GREEN size:20 shape:TRIANGLE posn:20,40 - ''' - ) - - prints:: - - shape: SQUARE color: BLACK posn: 100, 120 - ['shape:', 'SQUARE', 'color:', 'BLACK', 'posn:', ['100', ',', '120']] - - color: BLACK - - posn: ['100', ',', '120'] - - x: 100 - - y: 120 - - shape: SQUARE - - - shape: CIRCLE size: 50 color: BLUE posn: 50,80 - ['shape:', 'CIRCLE', 'size:', '50', 'color:', 'BLUE', 'posn:', ['50', ',', '80']] - - color: BLUE - - posn: ['50', ',', '80'] - - x: 50 - - y: 80 - - shape: CIRCLE - - size: 50 - - - color: GREEN size: 20 shape: TRIANGLE posn: 20,40 - ['color:', 'GREEN', 'size:', '20', 'shape:', 'TRIANGLE', 'posn:', ['20', ',', '40']] - - color: GREEN - - posn: ['20', ',', '40'] - - x: 20 - - y: 40 - - shape: TRIANGLE - - size: 20 - """ - - def __init__(self, exprs: IterableType[ParserElement], savelist: bool = True): - super().__init__(exprs, savelist) - if self.exprs: - self.mayReturnEmpty = all(e.mayReturnEmpty for e in self.exprs) - else: - self.mayReturnEmpty = True - self.skipWhitespace = True - self.initExprGroups = True - self.saveAsList = True - - def streamline(self) -> ParserElement: - super().streamline() - if self.exprs: - self.mayReturnEmpty = all(e.mayReturnEmpty for e in self.exprs) - else: - self.mayReturnEmpty = True - return self - - def parseImpl(self, instring, loc, doActions=True): - if self.initExprGroups: - self.opt1map = dict( - (id(e.expr), e) for e in self.exprs if isinstance(e, Opt) - ) - opt1 = [e.expr for e in self.exprs if isinstance(e, Opt)] - opt2 = [ - e - for e in self.exprs - if e.mayReturnEmpty and not isinstance(e, (Opt, Regex, ZeroOrMore)) - ] - self.optionals = opt1 + opt2 - self.multioptionals = [ - e.expr.set_results_name(e.resultsName, list_all_matches=True) - for e in self.exprs - if isinstance(e, _MultipleMatch) - ] - self.multirequired = [ - e.expr.set_results_name(e.resultsName, list_all_matches=True) - for e in self.exprs - if isinstance(e, OneOrMore) - ] - self.required = [ - e for e in self.exprs if not isinstance(e, (Opt, ZeroOrMore, OneOrMore)) - ] - self.required += self.multirequired - self.initExprGroups = False - - tmpLoc = loc - tmpReqd = self.required[:] - tmpOpt = self.optionals[:] - multis = self.multioptionals[:] - matchOrder = [] - - keepMatching = True - failed = [] - fatals = [] - while keepMatching: - tmpExprs = tmpReqd + tmpOpt + multis - failed.clear() - fatals.clear() - for e in tmpExprs: - try: - tmpLoc = e.try_parse(instring, tmpLoc, raise_fatal=True) - except ParseFatalException as pfe: - pfe.__traceback__ = None - pfe.parserElement = e - fatals.append(pfe) - failed.append(e) - except ParseException: - failed.append(e) - else: - matchOrder.append(self.opt1map.get(id(e), e)) - if e in tmpReqd: - tmpReqd.remove(e) - elif e in tmpOpt: - tmpOpt.remove(e) - if len(failed) == len(tmpExprs): - keepMatching = False - - # 
look for any ParseFatalExceptions - if fatals: - if len(fatals) > 1: - fatals.sort(key=lambda e: -e.loc) - if fatals[0].loc == fatals[1].loc: - fatals.sort(key=lambda e: (-e.loc, -len(str(e.parserElement)))) - max_fatal = fatals[0] - raise max_fatal - - if tmpReqd: - missing = ", ".join([str(e) for e in tmpReqd]) - raise ParseException( - instring, - loc, - "Missing one or more required elements ({})".format(missing), - ) - - # add any unmatched Opts, in case they have default values defined - matchOrder += [e for e in self.exprs if isinstance(e, Opt) and e.expr in tmpOpt] - - total_results = ParseResults([]) - for e in matchOrder: - loc, results = e._parse(instring, loc, doActions) - total_results += results - - return loc, total_results - - def _generateDefaultName(self): - return "{" + " & ".join(str(e) for e in self.exprs) + "}" - - -class ParseElementEnhance(ParserElement): - """Abstract subclass of :class:`ParserElement`, for combining and - post-processing parsed tokens. - """ - - def __init__(self, expr: Union[ParserElement, str], savelist: bool = False): - super().__init__(savelist) - if isinstance(expr, str_type): - if issubclass(self._literalStringClass, Token): - expr = self._literalStringClass(expr) - elif issubclass(type(self), self._literalStringClass): - expr = Literal(expr) - else: - expr = self._literalStringClass(Literal(expr)) - self.expr = expr - if expr is not None: - self.mayIndexError = expr.mayIndexError - self.mayReturnEmpty = expr.mayReturnEmpty - self.set_whitespace_chars( - expr.whiteChars, copy_defaults=expr.copyDefaultWhiteChars - ) - self.skipWhitespace = expr.skipWhitespace - self.saveAsList = expr.saveAsList - self.callPreparse = expr.callPreparse - self.ignoreExprs.extend(expr.ignoreExprs) - - def recurse(self) -> Sequence[ParserElement]: - return [self.expr] if self.expr is not None else [] - - def parseImpl(self, instring, loc, doActions=True): - if self.expr is not None: - return self.expr._parse(instring, loc, doActions, callPreParse=False) - else: - raise ParseException(instring, loc, "No expression defined", self) - - def leave_whitespace(self, recursive: bool = True) -> ParserElement: - super().leave_whitespace(recursive) - - if recursive: - self.expr = self.expr.copy() - if self.expr is not None: - self.expr.leave_whitespace(recursive) - return self - - def ignore_whitespace(self, recursive: bool = True) -> ParserElement: - super().ignore_whitespace(recursive) - - if recursive: - self.expr = self.expr.copy() - if self.expr is not None: - self.expr.ignore_whitespace(recursive) - return self - - def ignore(self, other) -> ParserElement: - if isinstance(other, Suppress): - if other not in self.ignoreExprs: - super().ignore(other) - if self.expr is not None: - self.expr.ignore(self.ignoreExprs[-1]) - else: - super().ignore(other) - if self.expr is not None: - self.expr.ignore(self.ignoreExprs[-1]) - return self - - def streamline(self) -> ParserElement: - super().streamline() - if self.expr is not None: - self.expr.streamline() - return self - - def _checkRecursion(self, parseElementList): - if self in parseElementList: - raise RecursiveGrammarException(parseElementList + [self]) - subRecCheckList = parseElementList[:] + [self] - if self.expr is not None: - self.expr._checkRecursion(subRecCheckList) - - def validate(self, validateTrace=None) -> None: - if validateTrace is None: - validateTrace = [] - tmp = validateTrace[:] + [self] - if self.expr is not None: - self.expr.validate(tmp) - self._checkRecursion([]) - - def _generateDefaultName(self): - 
return "{}:({})".format(self.__class__.__name__, str(self.expr)) - - ignoreWhitespace = ignore_whitespace - leaveWhitespace = leave_whitespace - - -class IndentedBlock(ParseElementEnhance): - """ - Expression to match one or more expressions at a given indentation level. - Useful for parsing text where structure is implied by indentation (like Python source code). - """ - - class _Indent(Empty): - def __init__(self, ref_col: int): - super().__init__() - self.errmsg = "expected indent at column {}".format(ref_col) - self.add_condition(lambda s, l, t: col(l, s) == ref_col) - - class _IndentGreater(Empty): - def __init__(self, ref_col: int): - super().__init__() - self.errmsg = "expected indent at column greater than {}".format(ref_col) - self.add_condition(lambda s, l, t: col(l, s) > ref_col) - - def __init__( - self, expr: ParserElement, *, recursive: bool = False, grouped: bool = True - ): - super().__init__(expr, savelist=True) - # if recursive: - # raise NotImplementedError("IndentedBlock with recursive is not implemented") - self._recursive = recursive - self._grouped = grouped - self.parent_anchor = 1 - - def parseImpl(self, instring, loc, doActions=True): - # advance parse position to non-whitespace by using an Empty() - # this should be the column to be used for all subsequent indented lines - anchor_loc = Empty().preParse(instring, loc) - - # see if self.expr matches at the current location - if not it will raise an exception - # and no further work is necessary - self.expr.try_parse(instring, anchor_loc, doActions) - - indent_col = col(anchor_loc, instring) - peer_detect_expr = self._Indent(indent_col) - - inner_expr = Empty() + peer_detect_expr + self.expr - if self._recursive: - sub_indent = self._IndentGreater(indent_col) - nested_block = IndentedBlock( - self.expr, recursive=self._recursive, grouped=self._grouped - ) - nested_block.set_debug(self.debug) - nested_block.parent_anchor = indent_col - inner_expr += Opt(sub_indent + nested_block) - - inner_expr.set_name(f"inner {hex(id(inner_expr))[-4:].upper()}@{indent_col}") - block = OneOrMore(inner_expr) - - trailing_undent = self._Indent(self.parent_anchor) | StringEnd() - - if self._grouped: - wrapper = Group - else: - wrapper = lambda expr: expr - return (wrapper(block) + Optional(trailing_undent)).parseImpl( - instring, anchor_loc, doActions - ) - - -class AtStringStart(ParseElementEnhance): - """Matches if expression matches at the beginning of the parse - string:: - - AtStringStart(Word(nums)).parse_string("123") - # prints ["123"] - - AtStringStart(Word(nums)).parse_string(" 123") - # raises ParseException - """ - - def __init__(self, expr: Union[ParserElement, str]): - super().__init__(expr) - self.callPreparse = False - - def parseImpl(self, instring, loc, doActions=True): - if loc != 0: - raise ParseException(instring, loc, "not found at string start") - return super().parseImpl(instring, loc, doActions) - - -class AtLineStart(ParseElementEnhance): - r"""Matches if an expression matches at the beginning of a line within - the parse string - - Example:: - - test = '''\ - AAA this line - AAA and this line - AAA but not this one - B AAA and definitely not this one - ''' - - for t in (AtLineStart('AAA') + restOfLine).search_string(test): - print(t) - - prints:: - - ['AAA', ' this line'] - ['AAA', ' and this line'] - - """ - - def __init__(self, expr: Union[ParserElement, str]): - super().__init__(expr) - self.callPreparse = False - - def parseImpl(self, instring, loc, doActions=True): - if col(loc, instring) != 1: - raise 
ParseException(instring, loc, "not found at line start") - return super().parseImpl(instring, loc, doActions) - - -class FollowedBy(ParseElementEnhance): - """Lookahead matching of the given parse expression. - ``FollowedBy`` does *not* advance the parsing position within - the input string, it only verifies that the specified parse - expression matches at the current position. ``FollowedBy`` - always returns a null token list. If any results names are defined - in the lookahead expression, those *will* be returned for access by - name. - - Example:: - - # use FollowedBy to match a label only if it is followed by a ':' - data_word = Word(alphas) - label = data_word + FollowedBy(':') - attr_expr = Group(label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join)) - - OneOrMore(attr_expr).parse_string("shape: SQUARE color: BLACK posn: upper left").pprint() - - prints:: - - [['shape', 'SQUARE'], ['color', 'BLACK'], ['posn', 'upper left']] - """ - - def __init__(self, expr: Union[ParserElement, str]): - super().__init__(expr) - self.mayReturnEmpty = True - - def parseImpl(self, instring, loc, doActions=True): - # by using self._expr.parse and deleting the contents of the returned ParseResults list - # we keep any named results that were defined in the FollowedBy expression - _, ret = self.expr._parse(instring, loc, doActions=doActions) - del ret[:] - - return loc, ret - - -class PrecededBy(ParseElementEnhance): - """Lookbehind matching of the given parse expression. - ``PrecededBy`` does not advance the parsing position within the - input string, it only verifies that the specified parse expression - matches prior to the current position. ``PrecededBy`` always - returns a null token list, but if a results name is defined on the - given expression, it is returned. - - Parameters: - - - expr - expression that must match prior to the current parse - location - - retreat - (default= ``None``) - (int) maximum number of characters - to lookbehind prior to the current parse location - - If the lookbehind expression is a string, :class:`Literal`, - :class:`Keyword`, or a :class:`Word` or :class:`CharsNotIn` - with a specified exact or maximum length, then the retreat - parameter is not required. Otherwise, retreat must be specified to - give a maximum number of characters to look back from - the current parse position for a lookbehind match. 
-
-    Example::
-
-        # VB-style variable names with type prefixes
-        int_var = PrecededBy("#") + pyparsing_common.identifier
-        str_var = PrecededBy("$") + pyparsing_common.identifier
-
-    """
-
-    def __init__(
-        self, expr: Union[ParserElement, str], retreat: OptionalType[int] = None
-    ):
-        super().__init__(expr)
-        self.expr = self.expr().leave_whitespace()
-        self.mayReturnEmpty = True
-        self.mayIndexError = False
-        self.exact = False
-        if isinstance(expr, str_type):
-            retreat = len(expr)
-            self.exact = True
-        elif isinstance(expr, (Literal, Keyword)):
-            retreat = expr.matchLen
-            self.exact = True
-        elif isinstance(expr, (Word, CharsNotIn)) and expr.maxLen != _MAX_INT:
-            retreat = expr.maxLen
-            self.exact = True
-        elif isinstance(expr, PositionToken):
-            retreat = 0
-            self.exact = True
-        self.retreat = retreat
-        self.errmsg = "not preceded by " + str(expr)
-        self.skipWhitespace = False
-        self.parseAction.append(lambda s, l, t: t.__delitem__(slice(None, None)))
-
-    def parseImpl(self, instring, loc=0, doActions=True):
-        if self.exact:
-            if loc < self.retreat:
-                raise ParseException(instring, loc, self.errmsg)
-            start = loc - self.retreat
-            _, ret = self.expr._parse(instring, start)
-        else:
-            # retreat specified a maximum lookbehind window, iterate
-            test_expr = self.expr + StringEnd()
-            instring_slice = instring[max(0, loc - self.retreat) : loc]
-            last_expr = ParseException(instring, loc, self.errmsg)
-            for offset in range(1, min(loc, self.retreat + 1) + 1):
-                try:
-                    # print('trying', offset, instring_slice, repr(instring_slice[loc - offset:]))
-                    _, ret = test_expr._parse(
-                        instring_slice, len(instring_slice) - offset
-                    )
-                except ParseBaseException as pbe:
-                    last_expr = pbe
-                else:
-                    break
-            else:
-                raise last_expr
-        return loc, ret
-
-
-class Located(ParseElementEnhance):
-    """
-    Decorates a returned token with its starting and ending
-    locations in the input string.
-
-    This helper adds the following results names:
-
-    - ``locn_start`` - location where matched expression begins
-    - ``locn_end`` - location where matched expression ends
-    - ``value`` - the actual parsed results
-
-    Be careful if the input text contains ``<TAB>`` characters, you
-    may want to call :class:`ParserElement.parse_with_tabs`
-
-    Example::
-
-        wd = Word(alphas)
-        for match in Located(wd).search_string("ljsdf123lksdjjf123lkkjj1222"):
-            print(match)
-
-    prints::
-
-        [0, ['ljsdf'], 5]
-        [8, ['lksdjjf'], 15]
-        [18, ['lkkjj'], 23]
-
-    """
-
-    def parseImpl(self, instring, loc, doActions=True):
-        start = loc
-        loc, tokens = self.expr._parse(instring, start, doActions, callPreParse=False)
-        ret_tokens = ParseResults([start, tokens, loc])
-        ret_tokens["locn_start"] = start
-        ret_tokens["value"] = tokens
-        ret_tokens["locn_end"] = loc
-        if self.resultsName:
-            # must return as a list, so that the name will be attached to the complete group
-            return loc, [ret_tokens]
-        else:
-            return loc, ret_tokens
-
-
-class NotAny(ParseElementEnhance):
-    """
-    Lookahead to disallow matching with the given parse expression.
-    ``NotAny`` does *not* advance the parsing position within the
-    input string, it only verifies that the specified parse expression
-    does *not* match at the current position. Also, ``NotAny`` does
-    *not* skip over leading whitespace. ``NotAny`` always returns
-    a null token list. May be constructed using the ``'~'`` operator.
- - Example:: - - AND, OR, NOT = map(CaselessKeyword, "AND OR NOT".split()) - - # take care not to mistake keywords for identifiers - ident = ~(AND | OR | NOT) + Word(alphas) - boolean_term = Opt(NOT) + ident - - # very crude boolean expression - to support parenthesis groups and - # operation hierarchy, use infix_notation - boolean_expr = boolean_term + ZeroOrMore((AND | OR) + boolean_term) - - # integers that are followed by "." are actually floats - integer = Word(nums) + ~Char(".") - """ - - def __init__(self, expr: Union[ParserElement, str]): - super().__init__(expr) - # do NOT use self.leave_whitespace(), don't want to propagate to exprs - # self.leave_whitespace() - self.skipWhitespace = False - - self.mayReturnEmpty = True - self.errmsg = "Found unwanted token, " + str(self.expr) - - def parseImpl(self, instring, loc, doActions=True): - if self.expr.can_parse_next(instring, loc): - raise ParseException(instring, loc, self.errmsg, self) - return loc, [] - - def _generateDefaultName(self): - return "~{" + str(self.expr) + "}" - - -class _MultipleMatch(ParseElementEnhance): - def __init__( - self, - expr: ParserElement, - stop_on: OptionalType[Union[ParserElement, str]] = None, - *, - stopOn: OptionalType[Union[ParserElement, str]] = None, - ): - super().__init__(expr) - stopOn = stopOn or stop_on - self.saveAsList = True - ender = stopOn - if isinstance(ender, str_type): - ender = self._literalStringClass(ender) - self.stopOn(ender) - - def stopOn(self, ender) -> ParserElement: - if isinstance(ender, str_type): - ender = self._literalStringClass(ender) - self.not_ender = ~ender if ender is not None else None - return self - - def parseImpl(self, instring, loc, doActions=True): - self_expr_parse = self.expr._parse - self_skip_ignorables = self._skipIgnorables - check_ender = self.not_ender is not None - if check_ender: - try_not_ender = self.not_ender.tryParse - - # must be at least one (but first see if we are the stopOn sentinel; - # if so, fail) - if check_ender: - try_not_ender(instring, loc) - loc, tokens = self_expr_parse(instring, loc, doActions) - try: - hasIgnoreExprs = not not self.ignoreExprs - while 1: - if check_ender: - try_not_ender(instring, loc) - if hasIgnoreExprs: - preloc = self_skip_ignorables(instring, loc) - else: - preloc = loc - loc, tmptokens = self_expr_parse(instring, preloc, doActions) - if tmptokens or tmptokens.haskeys(): - tokens += tmptokens - except (ParseException, IndexError): - pass - - return loc, tokens - - def _setResultsName(self, name, listAllMatches=False): - if ( - __diag__.warn_ungrouped_named_tokens_in_collection - and Diagnostics.warn_ungrouped_named_tokens_in_collection - not in self.suppress_warnings_ - ): - for e in [self.expr] + self.expr.recurse(): - if ( - isinstance(e, ParserElement) - and e.resultsName - and Diagnostics.warn_ungrouped_named_tokens_in_collection - not in e.suppress_warnings_ - ): - warnings.warn( - "{}: setting results name {!r} on {} expression " - "collides with {!r} on contained expression".format( - "warn_ungrouped_named_tokens_in_collection", - name, - type(self).__name__, - e.resultsName, - ), - stacklevel=3, - ) - - return super()._setResultsName(name, listAllMatches) - - -class OneOrMore(_MultipleMatch): - """ - Repetition of one or more of the given expression. 
- - Parameters: - - expr - expression that must match one or more times - - stop_on - (default= ``None``) - expression for a terminating sentinel - (only required if the sentinel would ordinarily match the repetition - expression) - - Example:: - - data_word = Word(alphas) - label = data_word + FollowedBy(':') - attr_expr = Group(label + Suppress(':') + OneOrMore(data_word).set_parse_action(' '.join)) - - text = "shape: SQUARE posn: upper left color: BLACK" - OneOrMore(attr_expr).parse_string(text).pprint() # Fail! read 'color' as data instead of next label -> [['shape', 'SQUARE color']] - - # use stop_on attribute for OneOrMore to avoid reading label string as part of the data - attr_expr = Group(label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join)) - OneOrMore(attr_expr).parse_string(text).pprint() # Better -> [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'BLACK']] - - # could also be written as - (attr_expr * (1,)).parse_string(text).pprint() - """ - - def _generateDefaultName(self): - return "{" + str(self.expr) + "}..." - - -class ZeroOrMore(_MultipleMatch): - """ - Optional repetition of zero or more of the given expression. - - Parameters: - - ``expr`` - expression that must match zero or more times - - ``stop_on`` - expression for a terminating sentinel - (only required if the sentinel would ordinarily match the repetition - expression) - (default= ``None``) - - Example: similar to :class:`OneOrMore` - """ - - def __init__( - self, - expr: ParserElement, - stop_on: OptionalType[Union[ParserElement, str]] = None, - *, - stopOn: OptionalType[Union[ParserElement, str]] = None, - ): - super().__init__(expr, stopOn=stopOn or stop_on) - self.mayReturnEmpty = True - - def parseImpl(self, instring, loc, doActions=True): - try: - return super().parseImpl(instring, loc, doActions) - except (ParseException, IndexError): - return loc, ParseResults([], name=self.resultsName) - - def _generateDefaultName(self): - return "[" + str(self.expr) + "]..." - - -class _NullToken: - def __bool__(self): - return False - - def __str__(self): - return "" - - -class Opt(ParseElementEnhance): - """ - Optional matching of the given expression. - - Parameters: - - ``expr`` - expression that must match zero or more times - - ``default`` (optional) - value to be returned if the optional expression is not found. 
- - Example:: - - # US postal code can be a 5-digit zip, plus optional 4-digit qualifier - zip = Combine(Word(nums, exact=5) + Opt('-' + Word(nums, exact=4))) - zip.run_tests(''' - # traditional ZIP code - 12345 - - # ZIP+4 form - 12101-0001 - - # invalid ZIP - 98765- - ''') - - prints:: - - # traditional ZIP code - 12345 - ['12345'] - - # ZIP+4 form - 12101-0001 - ['12101-0001'] - - # invalid ZIP - 98765- - ^ - FAIL: Expected end of text (at char 5), (line:1, col:6) - """ - - __optionalNotMatched = _NullToken() - - def __init__( - self, expr: Union[ParserElement, str], default: Any = __optionalNotMatched - ): - super().__init__(expr, savelist=False) - self.saveAsList = self.expr.saveAsList - self.defaultValue = default - self.mayReturnEmpty = True - - def parseImpl(self, instring, loc, doActions=True): - self_expr = self.expr - try: - loc, tokens = self_expr._parse(instring, loc, doActions, callPreParse=False) - except (ParseException, IndexError): - default_value = self.defaultValue - if default_value is not self.__optionalNotMatched: - if self_expr.resultsName: - tokens = ParseResults([default_value]) - tokens[self_expr.resultsName] = default_value - else: - tokens = [default_value] - else: - tokens = [] - return loc, tokens - - def _generateDefaultName(self): - inner = str(self.expr) - # strip off redundant inner {}'s - while len(inner) > 1 and inner[0 :: len(inner) - 1] == "{}": - inner = inner[1:-1] - return "[" + inner + "]" - - -Optional = Opt - - -class SkipTo(ParseElementEnhance): - """ - Token for skipping over all undefined text until the matched - expression is found. - - Parameters: - - ``expr`` - target expression marking the end of the data to be skipped - - ``include`` - if ``True``, the target expression is also parsed - (the skipped text and target expression are returned as a 2-element - list) (default= ``False``). 
- - ``ignore`` - (default= ``None``) used to define grammars (typically quoted strings and - comments) that might contain false matches to the target expression - - ``fail_on`` - (default= ``None``) define expressions that are not allowed to be - included in the skipped test; if found before the target expression is found, - the :class:`SkipTo` is not a match - - Example:: - - report = ''' - Outstanding Issues Report - 1 Jan 2000 - - # | Severity | Description | Days Open - -----+----------+-------------------------------------------+----------- - 101 | Critical | Intermittent system crash | 6 - 94 | Cosmetic | Spelling error on Login ('log|n') | 14 - 79 | Minor | System slow when running too many reports | 47 - ''' - integer = Word(nums) - SEP = Suppress('|') - # use SkipTo to simply match everything up until the next SEP - # - ignore quoted strings, so that a '|' character inside a quoted string does not match - # - parse action will call token.strip() for each matched token, i.e., the description body - string_data = SkipTo(SEP, ignore=quoted_string) - string_data.set_parse_action(token_map(str.strip)) - ticket_expr = (integer("issue_num") + SEP - + string_data("sev") + SEP - + string_data("desc") + SEP - + integer("days_open")) - - for tkt in ticket_expr.search_string(report): - print tkt.dump() - - prints:: - - ['101', 'Critical', 'Intermittent system crash', '6'] - - days_open: '6' - - desc: 'Intermittent system crash' - - issue_num: '101' - - sev: 'Critical' - ['94', 'Cosmetic', "Spelling error on Login ('log|n')", '14'] - - days_open: '14' - - desc: "Spelling error on Login ('log|n')" - - issue_num: '94' - - sev: 'Cosmetic' - ['79', 'Minor', 'System slow when running too many reports', '47'] - - days_open: '47' - - desc: 'System slow when running too many reports' - - issue_num: '79' - - sev: 'Minor' - """ - - def __init__( - self, - other: Union[ParserElement, str], - include: bool = False, - ignore: bool = None, - fail_on: OptionalType[Union[ParserElement, str]] = None, - *, - failOn: Union[ParserElement, str] = None, - ): - super().__init__(other) - failOn = failOn or fail_on - self.ignoreExpr = ignore - self.mayReturnEmpty = True - self.mayIndexError = False - self.includeMatch = include - self.saveAsList = False - if isinstance(failOn, str_type): - self.failOn = self._literalStringClass(failOn) - else: - self.failOn = failOn - self.errmsg = "No match found for " + str(self.expr) - - def parseImpl(self, instring, loc, doActions=True): - startloc = loc - instrlen = len(instring) - self_expr_parse = self.expr._parse - self_failOn_canParseNext = ( - self.failOn.canParseNext if self.failOn is not None else None - ) - self_ignoreExpr_tryParse = ( - self.ignoreExpr.tryParse if self.ignoreExpr is not None else None - ) - - tmploc = loc - while tmploc <= instrlen: - if self_failOn_canParseNext is not None: - # break if failOn expression matches - if self_failOn_canParseNext(instring, tmploc): - break - - if self_ignoreExpr_tryParse is not None: - # advance past ignore expressions - while 1: - try: - tmploc = self_ignoreExpr_tryParse(instring, tmploc) - except ParseBaseException: - break - - try: - self_expr_parse(instring, tmploc, doActions=False, callPreParse=False) - except (ParseException, IndexError): - # no match, advance loc in string - tmploc += 1 - else: - # matched skipto expr, done - break - - else: - # ran off the end of the input string without matching skipto expr, fail - raise ParseException(instring, loc, self.errmsg, self) - - # build up return values - loc = tmploc - 
skiptext = instring[startloc:loc] - skipresult = ParseResults(skiptext) - - if self.includeMatch: - loc, mat = self_expr_parse(instring, loc, doActions, callPreParse=False) - skipresult += mat - - return loc, skipresult - - -class Forward(ParseElementEnhance): - """ - Forward declaration of an expression to be defined later - - used for recursive grammars, such as algebraic infix notation. - When the expression is known, it is assigned to the ``Forward`` - variable using the ``'<<'`` operator. - - Note: take care when assigning to ``Forward`` not to overlook - precedence of operators. - - Specifically, ``'|'`` has a lower precedence than ``'<<'``, so that:: - - fwd_expr << a | b | c - - will actually be evaluated as:: - - (fwd_expr << a) | b | c - - thereby leaving b and c out as parseable alternatives. It is recommended that you - explicitly group the values inserted into the ``Forward``:: - - fwd_expr << (a | b | c) - - Converting to use the ``'<<='`` operator instead will avoid this problem. - - See :class:`ParseResults.pprint` for an example of a recursive - parser created using ``Forward``. - """ - - def __init__(self, other: OptionalType[Union[ParserElement, str]] = None): - self.caller_frame = traceback.extract_stack(limit=2)[0] - super().__init__(other, savelist=False) - self.lshift_line = None - - def __lshift__(self, other): - if hasattr(self, "caller_frame"): - del self.caller_frame - if isinstance(other, str_type): - other = self._literalStringClass(other) - self.expr = other - self.mayIndexError = self.expr.mayIndexError - self.mayReturnEmpty = self.expr.mayReturnEmpty - self.set_whitespace_chars( - self.expr.whiteChars, copy_defaults=self.expr.copyDefaultWhiteChars - ) - self.skipWhitespace = self.expr.skipWhitespace - self.saveAsList = self.expr.saveAsList - self.ignoreExprs.extend(self.expr.ignoreExprs) - self.lshift_line = traceback.extract_stack(limit=2)[-2] - return self - - def __ilshift__(self, other): - return self << other - - def __or__(self, other): - caller_line = traceback.extract_stack(limit=2)[-2] - if ( - __diag__.warn_on_match_first_with_lshift_operator - and caller_line == self.lshift_line - and Diagnostics.warn_on_match_first_with_lshift_operator - not in self.suppress_warnings_ - ): - warnings.warn( - "using '<<' operator with '|' is probably an error, use '<<='", - stacklevel=2, - ) - ret = super().__or__(other) - return ret - - def __del__(self): - # see if we are getting dropped because of '=' reassignment of var instead of '<<=' or '<<' - if ( - self.expr is None - and __diag__.warn_on_assignment_to_Forward - and Diagnostics.warn_on_assignment_to_Forward not in self.suppress_warnings_ - ): - warnings.warn_explicit( - "Forward defined here but no expression attached later using '<<=' or '<<'", - UserWarning, - filename=self.caller_frame.filename, - lineno=self.caller_frame.lineno, - ) - - def parseImpl(self, instring, loc, doActions=True): - if ( - self.expr is None - and __diag__.warn_on_parse_using_empty_Forward - and Diagnostics.warn_on_parse_using_empty_Forward - not in self.suppress_warnings_ - ): - # walk stack until parse_string, scan_string, search_string, or transform_string is found - parse_fns = [ - "parse_string", - "scan_string", - "search_string", - "transform_string", - ] - tb = traceback.extract_stack(limit=200) - for i, frm in enumerate(reversed(tb), start=1): - if frm.name in parse_fns: - stacklevel = i + 1 - break - else: - stacklevel = 2 - warnings.warn( - "Forward expression was never assigned a value, will not parse any input", - 
stacklevel=stacklevel, - ) - if not ParserElement._left_recursion_enabled: - return super().parseImpl(instring, loc, doActions) - # ## Bounded Recursion algorithm ## - # Recursion only needs to be processed at ``Forward`` elements, since they are - # the only ones that can actually refer to themselves. The general idea is - # to handle recursion stepwise: We start at no recursion, then recurse once, - # recurse twice, ..., until more recursion offers no benefit (we hit the bound). - # - # The "trick" here is that each ``Forward`` gets evaluated in two contexts - # - to *match* a specific recursion level, and - # - to *search* the bounded recursion level - # and the two run concurrently. The *search* must *match* each recursion level - # to find the best possible match. This is handled by a memo table, which - # provides the previous match to the next level match attempt. - # - # See also "Left Recursion in Parsing Expression Grammars", Medeiros et al. - # - # There is a complication since we not only *parse* but also *transform* via - # actions: We do not want to run the actions too often while expanding. Thus, - # we expand using `doActions=False` and only run `doActions=True` if the next - # recursion level is acceptable. - with ParserElement.recursion_lock: - memo = ParserElement.recursion_memos - try: - # we are parsing at a specific recursion expansion - use it as-is - prev_loc, prev_result = memo[loc, self, doActions] - if isinstance(prev_result, Exception): - raise prev_result - return prev_loc, prev_result.copy() - except KeyError: - act_key = (loc, self, True) - peek_key = (loc, self, False) - # we are searching for the best recursion expansion - keep on improving - # both `doActions` cases must be tracked separately here! - prev_loc, prev_peek = memo[peek_key] = ( - loc - 1, - ParseException( - instring, loc, "Forward recursion without base case", self - ), - ) - if doActions: - memo[act_key] = memo[peek_key] - while True: - try: - new_loc, new_peek = super().parseImpl(instring, loc, False) - except ParseException: - # we failed before getting any match – do not hide the error - if isinstance(prev_peek, Exception): - raise - new_loc, new_peek = prev_loc, prev_peek - # the match did not get better: we are done - if new_loc <= prev_loc: - if doActions: - # replace the match for doActions=False as well, - # in case the action did backtrack - prev_loc, prev_result = memo[peek_key] = memo[act_key] - del memo[peek_key], memo[act_key] - return prev_loc, prev_result.copy() - del memo[peek_key] - return prev_loc, prev_peek.copy() - # the match did get better: see if we can improve further - else: - if doActions: - try: - memo[act_key] = super().parseImpl(instring, loc, True) - except ParseException as e: - memo[peek_key] = memo[act_key] = (new_loc, e) - raise - prev_loc, prev_peek = memo[peek_key] = new_loc, new_peek - - def leave_whitespace(self, recursive: bool = True) -> ParserElement: - self.skipWhitespace = False - return self - - def ignore_whitespace(self, recursive: bool = True) -> ParserElement: - self.skipWhitespace = True - return self - - def streamline(self) -> ParserElement: - if not self.streamlined: - self.streamlined = True - if self.expr is not None: - self.expr.streamline() - return self - - def validate(self, validateTrace=None) -> None: - if validateTrace is None: - validateTrace = [] - - if self not in validateTrace: - tmp = validateTrace[:] + [self] - if self.expr is not None: - self.expr.validate(tmp) - self._checkRecursion([]) - - def _generateDefaultName(self): - 
# Avoid infinite recursion by setting a temporary _defaultName - self._defaultName = ": ..." - - # Use the string representation of main expression. - retString = "..." - try: - if self.expr is not None: - retString = str(self.expr)[:1000] - else: - retString = "None" - finally: - return self.__class__.__name__ + ": " + retString - - def copy(self) -> ParserElement: - if self.expr is not None: - return super().copy() - else: - ret = Forward() - ret <<= self - return ret - - def _setResultsName(self, name, list_all_matches=False): - if ( - __diag__.warn_name_set_on_empty_Forward - and Diagnostics.warn_name_set_on_empty_Forward - not in self.suppress_warnings_ - ): - if self.expr is None: - warnings.warn( - "{}: setting results name {!r} on {} expression " - "that has no contained expression".format( - "warn_name_set_on_empty_Forward", name, type(self).__name__ - ), - stacklevel=3, - ) - - return super()._setResultsName(name, list_all_matches) - - ignoreWhitespace = ignore_whitespace - leaveWhitespace = leave_whitespace - - -class TokenConverter(ParseElementEnhance): - """ - Abstract subclass of :class:`ParseExpression`, for converting parsed results. - """ - - def __init__(self, expr: Union[ParserElement, str], savelist=False): - super().__init__(expr) # , savelist) - self.saveAsList = False - - -class Combine(TokenConverter): - """Converter to concatenate all matching tokens to a single string. - By default, the matching patterns must also be contiguous in the - input string; this can be disabled by specifying - ``'adjacent=False'`` in the constructor. - - Example:: - - real = Word(nums) + '.' + Word(nums) - print(real.parse_string('3.1416')) # -> ['3', '.', '1416'] - # will also erroneously match the following - print(real.parse_string('3. 1416')) # -> ['3', '.', '1416'] - - real = Combine(Word(nums) + '.' + Word(nums)) - print(real.parse_string('3.1416')) # -> ['3.1416'] - # no match when there are internal spaces - print(real.parse_string('3. 1416')) # -> Exception: Expected W:(0123...) - """ - - def __init__( - self, - expr: ParserElement, - join_string: str = "", - adjacent: bool = True, - *, - joinString: OptionalType[str] = None, - ): - super().__init__(expr) - joinString = joinString if joinString is not None else join_string - # suppress whitespace-stripping in contained parse expressions, but re-enable it on the Combine itself - if adjacent: - self.leave_whitespace() - self.adjacent = adjacent - self.skipWhitespace = True - self.joinString = joinString - self.callPreparse = True - - def ignore(self, other) -> ParserElement: - if self.adjacent: - ParserElement.ignore(self, other) - else: - super().ignore(other) - return self - - def postParse(self, instring, loc, tokenlist): - retToks = tokenlist.copy() - del retToks[:] - retToks += ParseResults( - ["".join(tokenlist._asStringList(self.joinString))], modal=self.modalResults - ) - - if self.resultsName and retToks.haskeys(): - return [retToks] - else: - return retToks - - -class Group(TokenConverter): - """Converter to return the matched tokens as a list - useful for - returning tokens of :class:`ZeroOrMore` and :class:`OneOrMore` expressions. - - The optional ``aslist`` argument when set to True will return the - parsed tokens as a Python list instead of a pyparsing ParseResults. 
- - Example:: - - ident = Word(alphas) - num = Word(nums) - term = ident | num - func = ident + Opt(delimited_list(term)) - print(func.parse_string("fn a, b, 100")) - # -> ['fn', 'a', 'b', '100'] - - func = ident + Group(Opt(delimited_list(term))) - print(func.parse_string("fn a, b, 100")) - # -> ['fn', ['a', 'b', '100']] - """ - - def __init__(self, expr: ParserElement, aslist: bool = False): - super().__init__(expr) - self.saveAsList = True - self._asPythonList = aslist - - def postParse(self, instring, loc, tokenlist): - if self._asPythonList: - return ParseResults.List( - tokenlist.asList() - if isinstance(tokenlist, ParseResults) - else list(tokenlist) - ) - else: - return [tokenlist] - - -class Dict(TokenConverter): - """Converter to return a repetitive expression as a list, but also - as a dictionary. Each element can also be referenced using the first - token in the expression as its key. Useful for tabular report - scraping when the first column can be used as a item key. - - The optional ``asdict`` argument when set to True will return the - parsed tokens as a Python dict instead of a pyparsing ParseResults. - - Example:: - - data_word = Word(alphas) - label = data_word + FollowedBy(':') - - text = "shape: SQUARE posn: upper left color: light blue texture: burlap" - attr_expr = (label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join)) - - # print attributes as plain groups - print(OneOrMore(attr_expr).parse_string(text).dump()) - - # instead of OneOrMore(expr), parse using Dict(OneOrMore(Group(expr))) - Dict will auto-assign names - result = Dict(OneOrMore(Group(attr_expr))).parse_string(text) - print(result.dump()) - - # access named fields as dict entries, or output as dict - print(result['shape']) - print(result.as_dict()) - - prints:: - - ['shape', 'SQUARE', 'posn', 'upper left', 'color', 'light blue', 'texture', 'burlap'] - [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'light blue'], ['texture', 'burlap']] - - color: 'light blue' - - posn: 'upper left' - - shape: 'SQUARE' - - texture: 'burlap' - SQUARE - {'color': 'light blue', 'posn': 'upper left', 'texture': 'burlap', 'shape': 'SQUARE'} - - See more examples at :class:`ParseResults` of accessing fields by results name. - """ - - def __init__(self, expr: ParserElement, asdict: bool = False): - super().__init__(expr) - self.saveAsList = True - self._asPythonDict = asdict - - def postParse(self, instring, loc, tokenlist): - for i, tok in enumerate(tokenlist): - if len(tok) == 0: - continue - - ikey = tok[0] - if isinstance(ikey, int): - ikey = str(ikey).strip() - - if len(tok) == 1: - tokenlist[ikey] = _ParseResultsWithOffset("", i) - - elif len(tok) == 2 and not isinstance(tok[1], ParseResults): - tokenlist[ikey] = _ParseResultsWithOffset(tok[1], i) - - else: - try: - dictvalue = tok.copy() # ParseResults(i) - except Exception: - exc = TypeError( - "could not extract dict values from parsed results" - " - Dict expression must contain Grouped expressions" - ) - raise exc from None - - del dictvalue[0] - - if len(dictvalue) != 1 or ( - isinstance(dictvalue, ParseResults) and dictvalue.haskeys() - ): - tokenlist[ikey] = _ParseResultsWithOffset(dictvalue, i) - else: - tokenlist[ikey] = _ParseResultsWithOffset(dictvalue[0], i) - - if self._asPythonDict: - return [tokenlist.as_dict()] if self.resultsName else tokenlist.as_dict() - else: - return [tokenlist] if self.resultsName else tokenlist - - -class Suppress(TokenConverter): - """Converter for ignoring the results of a parsed expression. 
-
-    Example::
-
-        source = "a, b, c,d"
-        wd = Word(alphas)
-        wd_list1 = wd + ZeroOrMore(',' + wd)
-        print(wd_list1.parse_string(source))
-
-        # often, delimiters that are useful during parsing are just in the
-        # way afterward - use Suppress to keep them out of the parsed output
-        wd_list2 = wd + ZeroOrMore(Suppress(',') + wd)
-        print(wd_list2.parse_string(source))
-
-        # Skipped text (using '...') can be suppressed as well
-        source = "lead in START relevant text END trailing text"
-        start_marker = Keyword("START")
-        end_marker = Keyword("END")
-        find_body = Suppress(...) + start_marker + ... + end_marker
-        print(find_body.parse_string(source))
-
-    prints::
-
-        ['a', ',', 'b', ',', 'c', ',', 'd']
-        ['a', 'b', 'c', 'd']
-        ['START', 'relevant text ', 'END']
-
-    (See also :class:`delimited_list`.)
-    """
-
-    def __init__(self, expr: Union[ParserElement, str], savelist: bool = False):
-        if expr is ...:
-            expr = _PendingSkip(NoMatch())
-        super().__init__(expr)
-
-    def __add__(self, other) -> "ParserElement":
-        if isinstance(self.expr, _PendingSkip):
-            return Suppress(SkipTo(other)) + other
-        else:
-            return super().__add__(other)
-
-    def __sub__(self, other) -> "ParserElement":
-        if isinstance(self.expr, _PendingSkip):
-            return Suppress(SkipTo(other)) - other
-        else:
-            return super().__sub__(other)
-
-    def postParse(self, instring, loc, tokenlist):
-        return []
-
-    def suppress(self) -> ParserElement:
-        return self
-
-
-def trace_parse_action(f: ParseAction) -> ParseAction:
-    """Decorator for debugging parse actions.
-
-    When the parse action is called, this decorator will print
-    ``">> entering method-name(line:<current_source_line>, <parse_location>, <matched_tokens>)"``.
-    When the parse action completes, the decorator will print
-    ``"<<"`` followed by the returned value, or any exception that the parse action raised.
-
-    Example::
-
-        wd = Word(alphas)
-
-        @trace_parse_action
-        def remove_duplicate_chars(tokens):
-            return ''.join(sorted(set(''.join(tokens))))
-
-        wds = OneOrMore(wd).set_parse_action(remove_duplicate_chars)
-        print(wds.parse_string("slkdjs sld sldd sdlf sdljf"))
-
-    prints::
-
-        >>entering remove_duplicate_chars(line: 'slkdjs sld sldd sdlf sdljf', 0, (['slkdjs', 'sld', 'sldd', 'sdlf', 'sdljf'], {}))
-        <<leaving remove_duplicate_chars (ret: 'dfjkls')
-        ['dfjkls']
-    """
-    f = _trim_arity(f)
-
-    def z(*paArgs):
-        thisFunc = f.__name__
-        s, l, t = paArgs[-3:]
-        if len(paArgs) > 3:
-            thisFunc = paArgs[0].__class__.__name__ + "." + thisFunc
-        sys.stderr.write(
-            ">>entering {}(line: {!r}, {}, {!r})\n".format(thisFunc, line(l, s), l, t)
-        )
-        try:
-            ret = f(*paArgs)
-        except Exception as exc:
-            sys.stderr.write("<<leaving {} (exception: {})\n".format(thisFunc, exc))
-            raise
-        sys.stderr.write("<<leaving {} (ret: {!r})\n".format(thisFunc, ret))
-        return ret
-
-    return z
-
-
-def srange(s: str) -> str:
-    r"""Helper to easily define string ranges for use in :class:`Word`
-    construction. Borrows syntax from regexp ``'[]'`` string range
-    definitions::
-
-        srange("[0-9]") -> "0123456789"
-        srange("[a-z]") -> "abcdefghijklmnopqrstuvwxyz"
-        srange("[a-z$_]") -> "abcdefghijklmnopqrstuvwxyz$_"
-
-    The input string must be enclosed in []'s, and the returned string
-    is the expanded character set joined into a single string. The
-    values enclosed in the []'s may be:
-
-    - a single character
-    - an escaped character with a leading backslash (such as ``\-``
-      or ``\]``)
-    - an escaped hex character with a leading ``'\x'``
-      (``\x21``, which is a ``'!'`` character) (``\0x##``
-      is also supported for backwards compatibility)
-    - an escaped octal character with a leading ``'\0'``
-      (``\041``, which is a ``'!'`` character)
-    - a range of any of the above, separated by a dash (``'a-z'``,
-      etc.)
-    - any combination of the above (``'aeiouy'``,
-      ``'a-zA-Z0-9_$'``, etc.)
- """ - _expanded = ( - lambda p: p - if not isinstance(p, ParseResults) - else "".join(chr(c) for c in range(ord(p[0]), ord(p[1]) + 1)) - ) - try: - return "".join(_expanded(part) for part in _reBracketExpr.parse_string(s).body) - except Exception: - return "" - - -def token_map(func, *args) -> ParseAction: - """Helper to define a parse action by mapping a function to all - elements of a :class:`ParseResults` list. If any additional args are passed, - they are forwarded to the given function as additional arguments - after the token, as in - ``hex_integer = Word(hexnums).set_parse_action(token_map(int, 16))``, - which will convert the parsed data to an integer using base 16. - - Example (compare the last to example in :class:`ParserElement.transform_string`:: - - hex_ints = OneOrMore(Word(hexnums)).set_parse_action(token_map(int, 16)) - hex_ints.run_tests(''' - 00 11 22 aa FF 0a 0d 1a - ''') - - upperword = Word(alphas).set_parse_action(token_map(str.upper)) - OneOrMore(upperword).run_tests(''' - my kingdom for a horse - ''') - - wd = Word(alphas).set_parse_action(token_map(str.title)) - OneOrMore(wd).set_parse_action(' '.join).run_tests(''' - now is the winter of our discontent made glorious summer by this sun of york - ''') - - prints:: - - 00 11 22 aa FF 0a 0d 1a - [0, 17, 34, 170, 255, 10, 13, 26] - - my kingdom for a horse - ['MY', 'KINGDOM', 'FOR', 'A', 'HORSE'] - - now is the winter of our discontent made glorious summer by this sun of york - ['Now Is The Winter Of Our Discontent Made Glorious Summer By This Sun Of York'] - """ - - def pa(s, l, t): - return [func(tokn, *args) for tokn in t] - - func_name = getattr(func, "__name__", getattr(func, "__class__").__name__) - pa.__name__ = func_name - - return pa - - -def autoname_elements() -> None: - """ - Utility to simplify mass-naming of parser elements, for - generating railroad diagram with named subdiagrams. 
- """ - for name, var in sys._getframe().f_back.f_locals.items(): - if isinstance(var, ParserElement) and not var.customName: - var.set_name(name) - - -dbl_quoted_string = Combine( - Regex(r'"(?:[^"\n\r\\]|(?:"")|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*') + '"' -).set_name("string enclosed in double quotes") - -sgl_quoted_string = Combine( - Regex(r"'(?:[^'\n\r\\]|(?:'')|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*") + "'" -).set_name("string enclosed in single quotes") - -quoted_string = Combine( - Regex(r'"(?:[^"\n\r\\]|(?:"")|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*') + '"' - | Regex(r"'(?:[^'\n\r\\]|(?:'')|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*") + "'" -).set_name("quotedString using single or double quotes") - -unicode_string = Combine("u" + quoted_string.copy()).set_name("unicode string literal") - - -alphas8bit = srange(r"[\0xc0-\0xd6\0xd8-\0xf6\0xf8-\0xff]") -punc8bit = srange(r"[\0xa1-\0xbf\0xd7\0xf7]") - -# build list of built-in expressions, for future reference if a global default value -# gets updated -_builtin_exprs = [v for v in vars().values() if isinstance(v, ParserElement)] - -# backward compatibility names -tokenMap = token_map -conditionAsParseAction = condition_as_parse_action -nullDebugAction = null_debug_action -sglQuotedString = sgl_quoted_string -dblQuotedString = dbl_quoted_string -quotedString = quoted_string -unicodeString = unicode_string -lineStart = line_start -lineEnd = line_end -stringStart = string_start -stringEnd = string_end -traceParseAction = trace_parse_action diff --git a/spaces/tomofi/MMOCR/demo/webcam_demo.py b/spaces/tomofi/MMOCR/demo/webcam_demo.py deleted file mode 100644 index 475c29c208867326ee8c6f0ecc0fbfc74b32d65a..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/demo/webcam_demo.py +++ /dev/null @@ -1,49 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import argparse - -import cv2 -import torch - -from mmocr.apis import init_detector, model_inference -from mmocr.datasets import build_dataset # noqa: F401 -from mmocr.models import build_detector # noqa: F401 - - -def parse_args(): - parser = argparse.ArgumentParser(description='MMDetection webcam demo.') - parser.add_argument('config', help='Test config file path.') - parser.add_argument('checkpoint', help='Checkpoint file.') - parser.add_argument( - '--device', type=str, default='cuda:0', help='CPU/CUDA device option.') - parser.add_argument( - '--camera-id', type=int, default=0, help='Camera device id.') - parser.add_argument( - '--score-thr', type=float, default=0.5, help='Bbox score threshold.') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - - device = torch.device(args.device) - - model = init_detector(args.config, args.checkpoint, device=device) - - camera = cv2.VideoCapture(args.camera_id) - - print('Press "Esc", "q" or "Q" to exit.') - while True: - ret_val, img = camera.read() - result = model_inference(model, img) - - ch = cv2.waitKey(1) - if ch == 27 or ch == ord('q') or ch == ord('Q'): - break - - model.show_result( - img, result, score_thr=args.score_thr, wait_time=1, show=True) - - -if __name__ == '__main__': - main() diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/faster_rcnn/faster_rcnn_r101_fpn_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/faster_rcnn/faster_rcnn_r101_fpn_1x_coco.py deleted file mode 100644 index d2edab113649c38cac3c7dc3ff425462f7c40ffd..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/faster_rcnn/faster_rcnn_r101_fpn_1x_coco.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './faster_rcnn_r50_fpn_1x_coco.py' -model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101)) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/reppoints/reppoints_moment_r50_fpn_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/reppoints/reppoints_moment_r50_fpn_1x_coco.py deleted file mode 100644 index 8df2a8f37f8bbebce544c4ca24cb5c174f1d6dae..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/reppoints/reppoints_moment_r50_fpn_1x_coco.py +++ /dev/null @@ -1,67 +0,0 @@ -_base_ = [ - '../_base_/datasets/coco_detection.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] -model = dict( - type='RepPointsDetector', - pretrained='torchvision://resnet50', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - start_level=1, - add_extra_convs='on_input', - num_outs=5), - bbox_head=dict( - type='RepPointsHead', - num_classes=80, - in_channels=256, - feat_channels=256, - point_feat_channels=256, - stacked_convs=3, - num_points=9, - gradient_mul=0.1, - point_strides=[8, 16, 32, 64, 128], - point_base_scale=4, - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox_init=dict(type='SmoothL1Loss', beta=0.11, loss_weight=0.5), - loss_bbox_refine=dict(type='SmoothL1Loss', beta=0.11, loss_weight=1.0), - transform_method='moment'), - # training and testing settings - train_cfg=dict( - init=dict( - assigner=dict(type='PointAssigner', scale=4, pos_num=1), - 
allowed_border=-1, - pos_weight=-1, - debug=False), - refine=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.4, - min_pos_iou=0, - ignore_iof_thr=-1), - allowed_border=-1, - pos_weight=-1, - debug=False)), - test_cfg=dict( - nms_pre=1000, - min_bbox_size=0, - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100)) -optimizer = dict(lr=0.01) diff --git a/spaces/toonist/DualStyleGAN/dualstylegan.py b/spaces/toonist/DualStyleGAN/dualstylegan.py deleted file mode 100644 index f0941c6c44dca748644177df5dab20acb43bdea9..0000000000000000000000000000000000000000 --- a/spaces/toonist/DualStyleGAN/dualstylegan.py +++ /dev/null @@ -1,166 +0,0 @@ -from __future__ import annotations - -import argparse -import os -import pathlib -import subprocess -import sys -from typing import Callable - -import dlib -import huggingface_hub -import numpy as np -import PIL.Image -import torch -import torch.nn as nn -import torchvision.transforms as T - -if os.getenv('SYSTEM') == 'spaces': - os.system("sed -i '10,17d' DualStyleGAN/model/stylegan/op/fused_act.py") - os.system("sed -i '10,17d' DualStyleGAN/model/stylegan/op/upfirdn2d.py") - -app_dir = pathlib.Path(__file__).parent -submodule_dir = app_dir / 'DualStyleGAN' -sys.path.insert(0, submodule_dir.as_posix()) - -from model.dualstylegan import DualStyleGAN -from model.encoder.align_all_parallel import align_face -from model.encoder.psp import pSp - -MODEL_REPO = 'CVPR/DualStyleGAN' - - -class Model: - def __init__(self, device: torch.device | str): - self.device = torch.device(device) - self.landmark_model = self._create_dlib_landmark_model() - self.encoder = self._load_encoder() - self.transform = self._create_transform() - - self.style_types = [ - 'cartoon', - 'caricature', - 'anime', - 'arcane', - 'comic', - 'pixar', - 'slamdunk', - ] - self.generator_dict = { - style_type: self._load_generator(style_type) - for style_type in self.style_types - } - self.exstyle_dict = { - style_type: self._load_exstylecode(style_type) - for style_type in self.style_types - } - - @staticmethod - def _create_dlib_landmark_model(): - url = 'http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2' - path = pathlib.Path('shape_predictor_68_face_landmarks.dat') - if not path.exists(): - bz2_path = 'shape_predictor_68_face_landmarks.dat.bz2' - torch.hub.download_url_to_file(url, bz2_path) - subprocess.run(f'bunzip2 -d {bz2_path}'.split()) - return dlib.shape_predictor(path.as_posix()) - - def _load_encoder(self) -> nn.Module: - ckpt_path = huggingface_hub.hf_hub_download(MODEL_REPO, - 'models/encoder.pt') - ckpt = torch.load(ckpt_path, map_location='cpu') - opts = ckpt['opts'] - opts['device'] = self.device.type - opts['checkpoint_path'] = ckpt_path - opts = argparse.Namespace(**opts) - model = pSp(opts) - model.to(self.device) - model.eval() - return model - - @staticmethod - def _create_transform() -> Callable: - transform = T.Compose([ - T.Resize(256), - T.CenterCrop(256), - T.ToTensor(), - T.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]), - ]) - return transform - - def _load_generator(self, style_type: str) -> nn.Module: - model = DualStyleGAN(1024, 512, 8, 2, res_index=6) - ckpt_path = huggingface_hub.hf_hub_download( - MODEL_REPO, f'models/{style_type}/generator.pt') - ckpt = torch.load(ckpt_path, map_location='cpu') - model.load_state_dict(ckpt['g_ema']) - model.to(self.device) - model.eval() - return model - - @staticmethod - def _load_exstylecode(style_type: str) -> dict[str, np.ndarray]: - if style_type in 
['cartoon', 'caricature', 'anime']: - filename = 'refined_exstyle_code.npy' - else: - filename = 'exstyle_code.npy' - path = huggingface_hub.hf_hub_download( - MODEL_REPO, f'models/{style_type}/{filename}') - exstyles = np.load(path, allow_pickle=True).item() - return exstyles - - def detect_and_align_face(self, image) -> np.ndarray: - image = align_face(filepath=image.name, predictor=self.landmark_model) - return image - - @staticmethod - def denormalize(tensor: torch.Tensor) -> torch.Tensor: - return torch.clamp((tensor + 1) / 2 * 255, 0, 255).to(torch.uint8) - - def postprocess(self, tensor: torch.Tensor) -> np.ndarray: - tensor = self.denormalize(tensor) - return tensor.cpu().numpy().transpose(1, 2, 0) - - @torch.inference_mode() - def reconstruct_face(self, - image: np.ndarray) -> tuple[np.ndarray, torch.Tensor]: - image = PIL.Image.fromarray(image) - input_data = self.transform(image).unsqueeze(0).to(self.device) - img_rec, instyle = self.encoder(input_data, - randomize_noise=False, - return_latents=True, - z_plus_latent=True, - return_z_plus_latent=True, - resize=False) - img_rec = torch.clamp(img_rec.detach(), -1, 1) - img_rec = self.postprocess(img_rec[0]) - return img_rec, instyle - - @torch.inference_mode() - def generate(self, style_type: str, style_id: int, structure_weight: float, - color_weight: float, structure_only: bool, - instyle: torch.Tensor) -> np.ndarray: - generator = self.generator_dict[style_type] - exstyles = self.exstyle_dict[style_type] - - style_id = int(style_id) - stylename = list(exstyles.keys())[style_id] - - latent = torch.tensor(exstyles[stylename]).to(self.device) - if structure_only: - latent[0, 7:18] = instyle[0, 7:18] - exstyle = generator.generator.style( - latent.reshape(latent.shape[0] * latent.shape[1], - latent.shape[2])).reshape(latent.shape) - - img_gen, _ = generator([instyle], - exstyle, - z_plus_latent=True, - truncation=0.7, - truncation_latent=0, - use_res=True, - interp_weights=[structure_weight] * 7 + - [color_weight] * 11) - img_gen = torch.clamp(img_gen.detach(), -1, 1) - img_gen = self.postprocess(img_gen[0]) - return img_gen diff --git a/spaces/ulysses115/diffsvc_test/infer.py b/spaces/ulysses115/diffsvc_test/infer.py deleted file mode 100644 index 4bf327645428243fffad08baeac8f255ec9ff375..0000000000000000000000000000000000000000 --- a/spaces/ulysses115/diffsvc_test/infer.py +++ /dev/null @@ -1,98 +0,0 @@ -import io -import time -from pathlib import Path - -import librosa -import numpy as np -import soundfile - -from infer_tools import infer_tool -from infer_tools import slicer -from infer_tools.infer_tool import Svc -from utils.hparams import hparams - -chunks_dict = infer_tool.read_temp("./infer_tools/new_chunks_temp.json") - - -def run_clip(svc_model, key, acc, use_pe, use_crepe, thre, use_gt_mel, add_noise_step, project_name='', f_name=None, - file_path=None, out_path=None, slice_db=-40,**kwargs): - print(f'code version:2022-12-04') - use_pe = use_pe if hparams['audio_sample_rate'] == 24000 else False - if file_path is None: - raw_audio_path = f"./raw/{f_name}" - clean_name = f_name[:-4] - else: - raw_audio_path = file_path - clean_name = str(Path(file_path).name)[:-4] - infer_tool.format_wav(raw_audio_path) - wav_path = Path(raw_audio_path).with_suffix('.wav') - global chunks_dict - audio, sr = librosa.load(wav_path, mono=True,sr=None) - wav_hash = infer_tool.get_md5(audio) - if wav_hash in chunks_dict.keys(): - print("load chunks from temp") - chunks = chunks_dict[wav_hash]["chunks"] - else: - chunks = slicer.cut(wav_path, 
db_thresh=slice_db) - chunks_dict[wav_hash] = {"chunks": chunks, "time": int(time.time())} - infer_tool.write_temp("./infer_tools/new_chunks_temp.json", chunks_dict) - audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks) - - count = 0 - f0_tst = [] - f0_pred = [] - audio = [] - for (slice_tag, data) in audio_data: - print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======') - length = int(np.ceil(len(data) / audio_sr * hparams['audio_sample_rate'])) - raw_path = io.BytesIO() - soundfile.write(raw_path, data, audio_sr, format="wav") - if hparams['debug']: - print(np.mean(data), np.var(data)) - raw_path.seek(0) - if slice_tag: - print('jump empty segment') - _f0_tst, _f0_pred, _audio = ( - np.zeros(int(np.ceil(length / hparams['hop_size']))), np.zeros(int(np.ceil(length / hparams['hop_size']))), - np.zeros(length)) - else: - _f0_tst, _f0_pred, _audio = svc_model.infer(raw_path, key=key, acc=acc, use_pe=use_pe, use_crepe=use_crepe, - thre=thre, use_gt_mel=use_gt_mel, add_noise_step=add_noise_step) - fix_audio = np.zeros(length) - fix_audio[:] = np.mean(_audio) - fix_audio[:len(_audio)] = _audio[0 if len(_audio) 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * (f0_bin - 2) / (f0_mel_max - f0_mel_min) + 1 - - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > f0_bin - 1] = f0_bin - 1 - f0_coarse = (f0_mel + 0.5).long() if is_torch else np.rint(f0_mel).astype(int) - assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (f0_coarse.max(), f0_coarse.min()) - return f0_coarse - - -def norm_f0(f0, uv, hparams): - is_torch = isinstance(f0, torch.Tensor) - if hparams['pitch_norm'] == 'standard': - f0 = (f0 - hparams['f0_mean']) / hparams['f0_std'] - if hparams['pitch_norm'] == 'log': - f0 = torch.log2(f0) if is_torch else np.log2(f0) - if uv is not None and hparams['use_uv']: - f0[uv > 0] = 0 - return f0 - - -def norm_interp_f0(f0, hparams): - is_torch = isinstance(f0, torch.Tensor) - if is_torch: - device = f0.device - f0 = f0.data.cpu().numpy() - uv = f0 == 0 - f0 = norm_f0(f0, uv, hparams) - if sum(uv) == len(f0): - f0[uv] = 0 - elif sum(uv) > 0: - f0[uv] = np.interp(np.where(uv)[0], np.where(~uv)[0], f0[~uv]) - uv = torch.FloatTensor(uv) - f0 = torch.FloatTensor(f0) - if is_torch: - f0 = f0.to(device) - return f0, uv - - -def denorm_f0(f0, uv, hparams, pitch_padding=None, min=None, max=None): - if hparams['pitch_norm'] == 'standard': - f0 = f0 * hparams['f0_std'] + hparams['f0_mean'] - if hparams['pitch_norm'] == 'log': - f0 = 2 ** f0 - if min is not None: - f0 = f0.clamp(min=min) - if max is not None: - f0 = f0.clamp(max=max) - if uv is not None and hparams['use_uv']: - f0[uv > 0] = 0 - if pitch_padding is not None: - f0[pitch_padding] = 0 - return f0 diff --git a/spaces/umitgunduz/news-extractor/README.md b/spaces/umitgunduz/news-extractor/README.md deleted file mode 100644 index b41af5e77794df65a00f5ac1c27f2f8700ec003f..0000000000000000000000000000000000000000 --- a/spaces/umitgunduz/news-extractor/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: News Extractor -emoji: 🐢 -colorFrom: yellow -colorTo: pink -sdk: docker -pinned: false -app_port: 7860 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/umitgunduz/news-extractor/src/utils.py b/spaces/umitgunduz/news-extractor/src/utils.py deleted file mode 100644 index b733be1bd3c034d2a67b4f5af03a7673e0209ebb..0000000000000000000000000000000000000000 --- a/spaces/umitgunduz/news-extractor/src/utils.py +++ /dev/null @@ -1,150 +0,0 @@ -import logging -import re -import 
unicodedata
-
-import dateparser
-import dateparser.search as searcher
-from nltk import word_tokenize
-
-
-class TextUtils:
-    def __init__(self):
-        logging.debug('TextUtils class created')
-
-    @staticmethod
-    def clean_spaces(text):
-        """
-        Removes extra whitespace from the given text.
-
-        Args:
-            text (str): Input text.
-
-        Returns:
-            str: Cleaned text.
-
-        """
-        return " ".join(re.split(r"\s+", text.strip()))
-
-    @staticmethod
-    def clean_format_str(text):
-        """
-        Removes unicode control symbols, non-ascii characters and extra whitespace from the given text.
-
-        Args:
-            text (str): Input text.
-
-        Returns:
-            str: Cleaned text.
-
-        """
-        text = "".join(ch for ch in text if unicodedata.category(ch)[0] != "C")
-        text = "".join([c if ord(c) < 128 else "" for c in text])
-        text = " ".join(re.split(r"\s+", text.strip()))
-        # text = re.sub(r"\r\n", " ", text)
-        return text
-
-    @staticmethod
-    def cosine(text1, text2):
-        """
-        Computes the cosine similarity between two texts.
-
-        Args:
-            text1 (str): First text.
-            text2 (str): Second text.
-
-        Returns:
-            float: Cosine similarity between the two texts.
-
-        """
-        X = text1.lower()
-        Y = text2.lower()
-
-        X_list = word_tokenize(X)
-        Y_list = word_tokenize(Y)
-
-        l1 = []
-        l2 = []
-
-        X_set = {w for w in X_list}
-        Y_set = {w for w in Y_list}
-
-        rvector = X_set.union(Y_set)
-
-        for w in rvector:
-            if w in X_set:
-                l1.append(1)
-            else:
-                l1.append(0)
-            if w in Y_set:
-                l2.append(1)
-            else:
-                l2.append(0)
-        c = 0
-
-        for i in range(len(rvector)):
-            c += l1[i] * l2[i]
-
-        x = float((sum(l1) * sum(l2)) ** 0.5)
-        if x != 0:
-            sim = c / x
-        else:
-            sim = 0
-        return sim
-
-    @staticmethod
-    def parse_date_time(text):
-        """
-        Parses date and time information from the given text.
-
-        Args:
-            text (str): Input text.
-
-        Returns:
-            str: The parsed date and time in '%d.%m.%Y %H:%M:%S' format.
-
-        """
-        result = None
-        try:
-            parsed = dateparser.parse(text, settings={'RETURN_AS_TIMEZONE_AWARE': False})
-            # Guard against dateparser returning None so the search_dates fallback below can run.
-            result = parsed.strftime('%d.%m.%Y %H:%M:%S') if parsed is not None else None
-            if result is None:
-                found = searcher.search_dates(text)
-                dl = []
-                for date in found:
-                    if date[0] and date[1]:
-                        item = {"part": date[0], "value": date[1].strftime('%d.%m.%Y %H:%M:%S')}
-                        dl.append(item)
-                result = dl[0]["value"]
-        except Exception as e:
-            logging.error(f"An error occurred. Text: {text}, error: {str(e)}")
-        return result
-
-    @staticmethod
-    def text_space_normalizer(text):
-        """
-        Normalizes the whitespace in the given text.
-
-        Args:
-            text (str): Text to normalize.
-
-        Returns:
-            str: Text with normalized whitespace.
- - """ - regex = r"(?<=[.,?])(?=[^\s])" - subst = " " - text = re.sub(regex, subst, text, 0, re.MULTILINE) - - regex = r"\s\s+" - subst = " " - text = re.sub(regex, subst, text, 0, re.MULTILINE) - - regex = r"\s," - subst = "" - text = re.sub(regex, subst, text, 0, re.MULTILINE) - - regex = r"\s\’" - subst = "" - text = re.sub(regex, subst, text, 0, re.MULTILINE) - - return text diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/A First Book of Classical Music 29 Themes by Beethoven Mozart Chopin and Other Great Composers in a Spiral-Bound Edition with Full-Color Illustrations.md b/spaces/usbethFlerru/sovits-modelsV2/example/A First Book of Classical Music 29 Themes by Beethoven Mozart Chopin and Other Great Composers in a Spiral-Bound Edition with Full-Color Illustrations.md deleted file mode 100644 index 677aca932000aee570b0fc0b4c2f1a6b22a713de..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/A First Book of Classical Music 29 Themes by Beethoven Mozart Chopin and Other Great Composers in a Spiral-Bound Edition with Full-Color Illustrations.md +++ /dev/null @@ -1,6 +0,0 @@ -
          -

          Anthologies are perfect for students who are just being introduced to classical music. They allow you to expose students a variety of styles and composers at a great value. Even if the student does not study all of the pieces in the anthology, they can use the others for sight-reading practice or play them just-for-fun later in their piano study.

          -

          From his earliest years Mozart had a gift for imitating the music he heard; since he traveled widely, he acquired a rare collection of experiences from which to create his unique compositional language. When he went to London as a child, he met J.C. Bach and heard his music; when he went to Paris, Mannheim, and Vienna, he heard the work of composers active there, as well as the spectacular Mannheim orchestra; when he went to Italy, he encountered the Italian overture and opera buffa, both of which were to be hugely influential on his development. Both in London and Italy, the galant style was all the rage: simple, light music, with a mania for cadencing, an emphasis on tonic, dominant, and subdominant to the exclusion of other chords, symmetrical phrases, and clearly articulated structures.[citation needed] This style, out of which the classical style evolved, was a reaction against the complexity of late Baroque music. Some of Mozart's early symphonies are Italian overtures, with three movements running into each other; many are "homotonal" (each movement in the same key, with the slow movement in the parallel minor). Others mimic the works of J.C. Bach, and others show the simple rounded binary forms commonly being written by composers in Vienna. One of the most recognizable features of Mozart's works is a sequence of harmonies or modes that usually leads to a cadence in the dominant or tonic key. This sequence is essentially borrowed from Baroque music, especially J. S. Bach. But Mozart shifted the sequence so that the cadence ended on the stronger half, i.e., the first beat of the bar. Mozart's understanding of modes such as Phrygian is evident in such passages.[citation needed]

          -

          A First Book of Classical Music: 29 Themes by Beethoven, Mozart, Chopin and Other Great Composers in


          DOWNLOADhttps://urlcod.com/2uyUon



          aaccfb2cb3
          -
          -
          \ No newline at end of file diff --git a/spaces/user238921933/stable-diffusion-webui/modules/ui_tempdir.py b/spaces/user238921933/stable-diffusion-webui/modules/ui_tempdir.py deleted file mode 100644 index 126f73a21d71070887fd094beaf0fe6d7e12df9c..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/modules/ui_tempdir.py +++ /dev/null @@ -1,82 +0,0 @@ -import os -import tempfile -from collections import namedtuple -from pathlib import Path - -import gradio as gr - -from PIL import PngImagePlugin - -from modules import shared - - -Savedfile = namedtuple("Savedfile", ["name"]) - - -def register_tmp_file(gradio, filename): - if hasattr(gradio, 'temp_file_sets'): # gradio 3.15 - gradio.temp_file_sets[0] = gradio.temp_file_sets[0] | {os.path.abspath(filename)} - - if hasattr(gradio, 'temp_dirs'): # gradio 3.9 - gradio.temp_dirs = gradio.temp_dirs | {os.path.abspath(os.path.dirname(filename))} - - -def check_tmp_file(gradio, filename): - if hasattr(gradio, 'temp_file_sets'): - return any([filename in fileset for fileset in gradio.temp_file_sets]) - - if hasattr(gradio, 'temp_dirs'): - return any(Path(temp_dir).resolve() in Path(filename).resolve().parents for temp_dir in gradio.temp_dirs) - - return False - - -def save_pil_to_file(pil_image, dir=None): - already_saved_as = getattr(pil_image, 'already_saved_as', None) - if already_saved_as and os.path.isfile(already_saved_as): - register_tmp_file(shared.demo, already_saved_as) - - file_obj = Savedfile(already_saved_as) - return file_obj - - if shared.opts.temp_dir != "": - dir = shared.opts.temp_dir - - use_metadata = False - metadata = PngImagePlugin.PngInfo() - for key, value in pil_image.info.items(): - if isinstance(key, str) and isinstance(value, str): - metadata.add_text(key, value) - use_metadata = True - - file_obj = tempfile.NamedTemporaryFile(delete=False, suffix=".png", dir=dir) - pil_image.save(file_obj, pnginfo=(metadata if use_metadata else None)) - return file_obj - - -# override save to file function so that it also writes PNG info -gr.processing_utils.save_pil_to_file = save_pil_to_file - - -def on_tmpdir_changed(): - if shared.opts.temp_dir == "" or shared.demo is None: - return - - os.makedirs(shared.opts.temp_dir, exist_ok=True) - - register_tmp_file(shared.demo, os.path.join(shared.opts.temp_dir, "x")) - - -def cleanup_tmpdr(): - temp_dir = shared.opts.temp_dir - if temp_dir == "" or not os.path.isdir(temp_dir): - return - - for root, dirs, files in os.walk(temp_dir, topdown=False): - for name in files: - _, extension = os.path.splitext(name) - if extension != ".png": - continue - - filename = os.path.join(root, name) - os.remove(filename) diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/modes/val.md b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/modes/val.md deleted file mode 100644 index 4ffff738dbe818043e2ed37e62ee4580b6ba5788..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/modes/val.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -comments: true -description: Validate and improve YOLOv8n model accuracy on COCO128 and other datasets using hyperparameter & configuration tuning, in Val mode. -keywords: Ultralytics, YOLO, YOLOv8, Val, Validation, Hyperparameters, Performance, Accuracy, Generalization, COCO, Export Formats, PyTorch ---- - - - -**Val mode** is used for validating a YOLOv8 model after it has been trained. 
In this mode, the model is evaluated on a validation set to measure its accuracy and generalization performance. This mode can be used to tune the hyperparameters of the model to improve its performance.
-
-!!! tip "Tip"
-
-    * YOLOv8 models automatically remember their training settings, so you can validate a model at the same image size and on the original dataset easily with just `yolo val model=yolov8n.pt` or `model('yolov8n.pt').val()`
-
-## Usage Examples
-
-Validate trained YOLOv8n model accuracy on the COCO128 dataset. No arguments need to be passed, as the `model` retains its training `data` and arguments as model attributes. See the Arguments section below for a full list of validation arguments.
-
-!!! example ""
-
-    === "Python"
-
-        ```python
-        from ultralytics import YOLO
-
-        # Load a model
-        model = YOLO('yolov8n.pt')  # load an official model
-        model = YOLO('path/to/best.pt')  # load a custom model
-
-        # Validate the model
-        metrics = model.val()  # no arguments needed, dataset and settings remembered
-        metrics.box.map    # map50-95
-        metrics.box.map50  # map50
-        metrics.box.map75  # map75
-        metrics.box.maps   # a list containing map50-95 for each category
-        ```
-    === "CLI"
-
-        ```bash
-        yolo detect val model=yolov8n.pt  # val official model
-        yolo detect val model=path/to/best.pt  # val custom model
-        ```
-
-## Arguments
-
-Validation settings for YOLO models refer to the various hyperparameters and configurations used to evaluate the model's performance on a validation dataset. These settings can affect the model's performance, speed, and accuracy. Common YOLO validation settings include the batch size, the frequency with which validation is performed during training, and the metrics used to evaluate the model's performance. Other factors that may affect the validation process include the size and composition of the validation dataset and the specific task the model is being used for. It is important to carefully tune and experiment with these settings to ensure that the model performs well on the validation dataset and to detect and prevent overfitting.
-
-| Key           | Value   | Description                                                      |
-|---------------|---------|------------------------------------------------------------------|
-| `data`        | `None`  | path to data file, i.e. coco128.yaml                             |
-| `imgsz`       | `640`   | image size as scalar or (h, w) list, i.e. (640, 480)             |
-| `batch`       | `16`    | number of images per batch (-1 for AutoBatch)                    |
-| `save_json`   | `False` | save results to JSON file                                        |
-| `save_hybrid` | `False` | save hybrid version of labels (labels + additional predictions)  |
-| `conf`        | `0.001` | object confidence threshold for detection                        |
-| `iou`         | `0.6`   | intersection over union (IoU) threshold for NMS                  |
-| `max_det`     | `300`   | maximum number of detections per image                           |
-| `half`        | `True`  | use half precision (FP16)                                        |
-| `device`      | `None`  | device to run on, i.e. cuda device=0/1/2/3 or device=cpu         |
-| `dnn`         | `False` | use OpenCV DNN for ONNX inference                                |
-| `plots`       | `False` | show plots during training                                       |
-| `rect`        | `False` | rectangular val with each batch collated for minimum padding     |
-| `split`       | `val`   | dataset split to use for validation, i.e.
'val', 'test' or 'train' | -| \ No newline at end of file diff --git a/spaces/viait/stable-diffusion/README.md b/spaces/viait/stable-diffusion/README.md deleted file mode 100644 index 27bff54aeb7f0ef1c3b2eb031bb9453987024817..0000000000000000000000000000000000000000 --- a/spaces/viait/stable-diffusion/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Stable Diffusion -emoji: 🔥 -colorFrom: pink -colorTo: pink -sdk: gradio -sdk_version: 3.40.1 -app_file: sd.py -pinned: true -license: creativeml-openrail-m ---- - diff --git a/spaces/vinceL/YonKomaMangaGenerator/prompt_templates/stevejobs.md b/spaces/vinceL/YonKomaMangaGenerator/prompt_templates/stevejobs.md deleted file mode 100644 index cb02978a2059fc097d861637ec1516c21b879fd6..0000000000000000000000000000000000000000 --- a/spaces/vinceL/YonKomaMangaGenerator/prompt_templates/stevejobs.md +++ /dev/null @@ -1,78 +0,0 @@ -role: -/// -You are the world's greatest business storyteller and revolutionary inventor, known for creating captivating narratives about products & ideas that revolutionize the world. -/// - -task: -/// -Given the below "business idea" & "audience", create an exceptional storyboard for a slide-deck for a business/product presentation based on the following framework and example: - -1. Make your Promise: -Your hook. The reason your audience should listen to you instead of dozing off or scrolling. Jobs makes a promise. He says he’s introducing “a revolutionary product that changes everything.” - -2. Share need-to-know context: -Set the context by using comparisons on the scale you hope to achieve. Here, Jobs compares what he’s revealing to the Macintosh 1 and the iPod. Two products that, as he reminds you, changed entire industries. He gives a brief backstory but reinforces the promise from earlier. - -3. Introduce conflict / Create a villain: -Every hero needs a villain. Batman has the Joker. Harry has Voldemort. The iPhone is no different, so Jobs makes a villain for it. He chooses the current state of the smartphone. “The problem with smartphones is they’re not so smart and they're not so easy to use.” - -4. Raise the stakes: -Now, the iPhone needs to take on that villain. Jobs says, “Apple is going to reinvent the phone.” He was right. But think about how bold that claim was in 2007 before anyone had heard the word “iPhone.” - -5. Show off the solution: -Jobs shows people the iPhone. But instead of explaining it like a new product, he connects it to ones the audience already recognizes. Jobs cleverly explained the solution as three new products: a phone, a music player and an internet browser. Then finally, he exclaims: “Are you getting it? These are not three separate devices. This is one device, and we are calling it ‘IPhone!’” - -6. Raise the stakes again: - In 30 seconds, Jobs lists 13 different features of the iPhone that traditional phones don’t have. He knows what he wants people to do — buy the iPhone. But he doesn’t say it directly. Instead, he lists out all the features, then brings up the price. Remember those crappy phones he talked about? Well, they cost $499. So does the iPhone. After listening to Jobs, you know which one you’re going to buy. - -7. Reinforce your main message: -In closing, Jobs makes it personal by saying “I didn’t sleep a wink last night, I was so excited.” Jobs then recaps his main message of Apple creating revolutionary products such as the Mac in 1984 and the iPod in 2001. 
He announces that the company was changing its name from Apple Computer to simply Apple to reflect its new approach to products. Finally, he displays the famous Wayne Gretzky quote: “I skate to where the puck is going to be, not where it has been.” He closes with “we’ve always tried to do that at Apple, since the very beginning, and we always will.” - - -Make sure your slidedeck is in the format of a "storyboard", i.e. a series of slides, each with a "slide_design_description" and a "spoken_text" (see json-sample below). -Be as concrete and specific as possible, avoiding vague, abstract or extravagant language. -/// - -business idea: -/// -{business_idea} -/// - -target_audience: -/// -{target_audience} -/// - -/// - -response format: -/// - -make sure to: -- use at least 240 characters for each "description", ideally focussing on a single image -- the "image_generation_prompt" should follow image-generation prompt best practices, in the format of "subject(s), setting, action, art form, additional quality boosters (artstation, 4k, movie still, manga drawing etc.)", and consistently include the "art style" (and "story style") - -!!! ABOVE ALL, MAKE ABSOLUTELY SURE TO FORMAT YOUR RESPONSE EXACTLY LIKE FOLLOWING JSON-SAMPLE, replace the "..."s, and ONLY RETURN THE JSON !!! -json-sample: -{{ -"slide_deck": {{ - "title": "...", - "step_by_step_thinking_for_designing_your_storyboard": "...", - "step_by_step_thinking_for_effectively_applying_the_framework": "...", - "slides": [ - {{ - "id": 1, - "type": "Make your Promise", - "slide_design_description": "...", - "spoken_text": "..." - }}, - {{ - "id": 2, - "type": "Share need-to-know context", - "slide_design_description": "...", - "spoken_text": "..." - }}, - ... - ] -}} -/// \ No newline at end of file diff --git a/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/models/big/README.md b/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/models/big/README.md deleted file mode 100644 index e4ea44d3f62bbe14b9ab9235fe9e74d4f6ca4eed..0000000000000000000000000000000000000000 --- a/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/models/big/README.md +++ /dev/null @@ -1,144 +0,0 @@ -# BigGAN-PyTorch -The author's officially unofficial PyTorch BigGAN implementation. - -![Dogball? Dogball!](imgs/header_image.jpg?raw=true "Dogball? Dogball!") - - -This repo contains code for 4-8 GPU training of BigGANs from [Large Scale GAN Training for High Fidelity Natural Image Synthesis](https://arxiv.org/abs/1809.11096) by Andrew Brock, Jeff Donahue, and Karen Simonyan. - -This code is by Andy Brock and Alex Andonian. - -## How To Use This Code -You will need: - -- [PyTorch](https://PyTorch.org/), version 1.0.1 -- tqdm, numpy, scipy, and h5py -- The ImageNet training set - -First, you may optionally prepare a pre-processed HDF5 version of your target dataset for faster I/O. Following this (or not), you'll need the Inception moments needed to calculate FID. These can both be done by modifying and running - -```sh -sh scripts/utils/prepare_data.sh -``` - -Which by default assumes your ImageNet training set is downloaded into the root folder `data` in this directory, and will prepare the cached HDF5 at 128x128 pixel resolution. - -In the scripts folder, there are multiple bash scripts which will train BigGANs with different batch sizes. This code assumes you do not have access to a full TPU pod, and accordingly -spoofs mega-batches by using gradient accumulation (averaging grads over multiple minibatches, and only taking an optimizer step after N accumulations). 
By default, the `launch_BigGAN_bs256x8.sh` script trains a -full-sized BigGAN model with a batch size of 256 and 8 gradient accumulations, for a total batch size of 2048. On 8xV100 with full-precision training (no Tensor cores), this script takes 15 days to train to 150k iterations. - -You will first need to figure out the maximum batch size your setup can support. The pre-trained models provided here were trained on 8xV100 (16GB VRAM each) which can support slightly more than the BS256 used by default. -Once you've determined this, you should modify the script so that the batch size times the number of gradient accumulations is equal to your desired total batch size (BigGAN defaults to 2048). - -Note also that this script uses the `--load_in_mem` arg, which loads the entire (~64GB) I128.hdf5 file into RAM for faster data loading. If you don't have enough RAM to support this (probably 96GB+), remove this argument. - - -## Metrics and Sampling -![I believe I can fly!](imgs/interp_sample.jpg?raw=true "I believe I can fly!") - -During training, this script will output logs with training metrics and test metrics, will save multiple copies (2 most recent and 5 highest-scoring) of the model weights/optimizer params, and will produce samples and interpolations every time it saves weights. -The logs folder contains scripts to process these logs and plot the results using MATLAB (sorry not sorry). - -After training, one can use `sample.py` to produce additional samples and interpolations, test with different truncation values, batch sizes, number of standing stat accumulations, etc. See the `sample_BigGAN_bs256x8.sh` script for an example. - -By default, everything is saved to weights/samples/logs/data folders which are assumed to be in the same folder as this repo. -You can point all of these to a different base folder using the `--base_root` argument, or pick specific locations for each of these with their respective arguments (e.g. `--logs_root`). - -We include scripts to run BigGAN-deep, but we have not fully trained a model using them, so consider them untested. Additionally, we include scripts to run a model on CIFAR, and to run SA-GAN (with EMA) and SN-GAN on ImageNet. The SA-GAN code assumes you have 4xTitanX (or equivalent in terms of GPU RAM) and will run with a batch size of 128 and 2 gradient accumulations. - -## An Important Note on Inception Metrics -This repo uses the PyTorch in-built inception network to calculate IS and FID. -These scores are different from the scores you would get using the official TF inception code, and are only for monitoring purposes! -Run sample.py on your model, with the `--sample_npz` argument, then run inception_tf13 to calculate the actual TensorFlow IS. Note that you will need to have TensorFlow 1.3 or earlier installed, as TF1.4+ breaks the original IS code. 
- -## Pretrained models -![PyTorch Inception Score and FID](imgs/IS_FID.png) -We include two pretrained model checkpoints (with G, D, the EMA copy of G, the optimizers, and the state dict): -- The main checkpoint is for a BigGAN trained on ImageNet at 128x128, using BS256 and 8 gradient accumulations, taken just before collapse, with a TF Inception Score of 97.35 +/- 1.79: [LINK](https://drive.google.com/open?id=1nAle7FCVFZdix2--ks0r5JBkFnKw8ctW) -- An earlier checkpoint of the first model (100k G iters), at high performance but well before collapse, which may be easier to fine-tune: [LINK](https://drive.google.com/open?id=1dmZrcVJUAWkPBGza_XgswSuT-UODXZcO) - - - -Pretrained models for Places-365 coming soon. - -This repo also contains scripts for porting the original TFHub BigGAN Generator weights to PyTorch. See the scripts in the TFHub folder for more details. - -## Fine-tuning, Using Your Own Dataset, or Making New Training Functions -![That's deep, man](imgs/DeepSamples.png?raw=true "Deep Samples") - -If you wish to resume interrupted training or fine-tune a pre-trained model, run the same launch script but with the `--resume` argument added. -Experiment names are automatically generated from the configuration, but can be overridden using the `--experiment_name` arg (for example, if you wish to fine-tune a model using modified optimizer settings). - -To prep your own dataset, you will need to add it to datasets.py and modify the convenience dicts in utils.py (dset_dict, imsize_dict, root_dict, nclass_dict, classes_per_sheet_dict) to have the appropriate metadata for your dataset. -Repeat the process in prepare_data.sh (optionally produce an HDF5 preprocessed copy, and calculate the Inception Moments for FID). - -By default, the training script will save the top 5 best checkpoints as measured by Inception Score. -For datasets other than ImageNet, Inception Score can be a very poor measure of quality, so you will likely want to use `--which_best FID` instead. - -To use your own training function (e.g. train a BigVAE): either modify train_fns.GAN_training_function or add a new train fn and add it after the `if config['which_train_fn'] == 'GAN':` line in `train.py`. - - -## Neat Stuff -- We include the full training and metrics logs [here](https://drive.google.com/open?id=1ZhY9Mg2b_S4QwxNmt57aXJ9FOC3ZN1qb) for reference. I've found that one of the hardest things about re-implementing a paper can be checking if the logs line up early in training, -especially if training takes multiple weeks. Hopefully these will be helpful for future work. -- We include an accelerated FID calculation--the original scipy version can require upwards of 10 minutes to calculate the matrix sqrt, this version uses an accelerated PyTorch version to calculate it in under a second. -- We include an accelerated, low-memory consumption ortho reg implementation. -- By default, we only compute the top singular value (the spectral norm), but this code supports computing more SVs through the `--num_G_SVs` argument. - -## Key Differences Between This Code And The Original BigGAN -- We use the optimizer settings from SA-GAN (G_lr=1e-4, D_lr=4e-4, num_D_steps=1, as opposed to BigGAN's G_lr=5e-5, D_lr=2e-4, num_D_steps=2). -While slightly less performant, this was the first corner we cut to bring training times down. -- By default, we do not use Cross-Replica BatchNorm (AKA Synced BatchNorm). 
-The two variants we tried (a custom, naive one and the one included in this repo) have slightly different gradients (albeit identical forward passes) from the built-in BatchNorm, which appear to be sufficient to cripple training. -- Gradient accumulation means that we update the SV estimates and the BN statistics 8 times more frequently. This means that the BN stats are much closer to standing stats, and that the singular value estimates tend to be more accurate. -Because of this, we measure metrics by default with G in test mode (using the BatchNorm running stat estimates instead of computing standing stats as in the paper). We do still support standing stats (see the sample.sh scripts). -This could also conceivably result in gradients from the earlier accumulations being stale, but in practice this does not appear to be a problem. -- The currently provided pretrained models were not trained with orthogonal regularization. Training without ortho reg seems to increase the probability that models will not be amenable to truncation, -but it looks like this particular model got a winning ticket. Regardless, we provide two highly optimized (fast and minimal memory consumption) ortho reg implementations which directly compute the ortho reg. gradients. - -## A Note On The Design Of This Repo -This code is designed from the ground up to serve as an extensible, hackable base for further research code. -We've put a lot of thought into making sure the abstractions are the *right* thickness for research--not so thick as to be impenetrable, but not so thin as to be useless. -The key idea is that if you want to experiment with a SOTA setup and make some modification (try out your own new loss function, architecture, self-attention block, etc) you should be able to easily do so just by dropping your code in one or two places, without having to worry about the rest of the codebase. -Things like the use of self.which_conv and functools.partial in the BigGAN.py model definition were put together with this in mind, as was the design of the Spectral Norm class inheritance. - -With that said, this is a somewhat large codebase for a single project. While we tried to be thorough with the comments, if there's something you think could be more clear, better written, or better refactored, please feel free to raise an issue or a pull request. - -## Feature Requests -Want to work on or improve this code? There are a couple things this repo would benefit from, but which don't yet work. - -- Synchronized BatchNorm (AKA Cross-Replica BatchNorm). We tried out two variants of this, but for some unknown reason it crippled training each time. - We have not tried the [apex](https://github.com/NVIDIA/apex) SyncBN as my school's servers are on ancient NVIDIA drivers that don't support it--apex would probably be a good place to start. -- Mixed precision training and making use of Tensor cores. This repo includes a naive mixed-precision Adam implementation which works early in training but leads to early collapse, and doesn't do anything to activate Tensor cores (it just reduces memory consumption). - As above, integrating [apex](https://github.com/NVIDIA/apex) into this code and employing its mixed-precision training techniques to take advantage of Tensor cores and reduce memory consumption could yield substantial speed gains. - -## Misc Notes -See [This directory](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a) for ImageNet labels. 
- -If you use this code, please cite -```text -@inproceedings{ -brock2018large, -title={Large Scale {GAN} Training for High Fidelity Natural Image Synthesis}, -author={Andrew Brock and Jeff Donahue and Karen Simonyan}, -booktitle={International Conference on Learning Representations}, -year={2019}, -url={https://openreview.net/forum?id=B1xsqj09Fm}, -} -``` - -## Acknowledgments -Thanks to Google for the generous cloud credit donations. - -[SyncBN](https://github.com/vacancy/Synchronized-BatchNorm-PyTorch) by Jiayuan Mao and Tete Xiao. - -[Progress bar](https://github.com/Lasagne/Recipes/tree/master/papers/densenet) originally from Jan Schlüter. - -Test metrics logger from [VoxNet.](https://github.com/dimatura/voxnet) - -PyTorch [implementation of cov](https://discuss.PyTorch.org/t/covariance-and-gradient-support/16217/2) from Modar M. Alfadly. - -PyTorch [fast Matrix Sqrt](https://github.com/msubhransu/matrix-sqrt) for FID from Tsung-Yu Lin and Subhransu Maji. - -TensorFlow Inception Score code from [OpenAI's Improved-GAN.](https://github.com/openai/improved-gan) - diff --git a/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/utils/utils.py b/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/utils/utils.py deleted file mode 100644 index 1161612d45d6aa9abaeb98d72e935fd7f5fbf2f9..0000000000000000000000000000000000000000 --- a/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/utils/utils.py +++ /dev/null @@ -1,14 +0,0 @@ -import numpy as np -import gdown - -import torch - - -def download_file(file_id: str, output_path: str): - gdown.download(f'https://drive.google.com/uc?id={file_id}', output_path) - - -def sample_labels(labels: torch.Tensor, n: int) -> torch.Tensor: - high = labels.shape[0] - idx = np.random.randint(0, high, size=n) - return labels[idx] diff --git a/spaces/vpivn/Cooling-Water-Thermal-Evolutions/plot.py b/spaces/vpivn/Cooling-Water-Thermal-Evolutions/plot.py deleted file mode 100644 index d7c7de85aedc5b2cd172d6f183dabfd39ee04356..0000000000000000000000000000000000000000 --- a/spaces/vpivn/Cooling-Water-Thermal-Evolutions/plot.py +++ /dev/null @@ -1,56 +0,0 @@ -from plotly.subplots import make_subplots -import plotly.graph_objects as go -import PIL -from PIL import Image -import numpy as np -from matplotlib import cm - -def make_fig(data): - - # create figure - fig = make_subplots(3, 1, shared_xaxes=True, shared_yaxes=True, - x_title="horizontal length [m]", y_title="depth [m]", - subplot_titles=["Temperature evolutions", "Horizontal Velocity", "Vertical Velocity"], - ) - imgs = [] - contour_data = [] - for i in range(data.shape[0]): - field_param = data[i] - field = np.copy(field_param) - field = np.flipud(field.transpose()) - min_value = np.min(field) - contour_data.append(min_value) - max_value = np.max(field) - contour_data.append(min_value) - field -= min_value - max_value -= min_value - field /= max_value - - img = Image.fromarray(cm.jet(field, bytes=True)) # cm.magma - imgs.append(img) - # We use go.Image because subplots require traces, whereas px functions return a figure - for i, img in enumerate(imgs): - fig.add_trace(go.Image(z=img), i+1, 1) - if i==0: - min_val, _ = contour_data[0], contour_data[1] - contour_data1 = np.where(img[0] >= img[0]*(min_val+2)/min_val, - img[0], - np.nan) - fig.add_trace(go.Contour(z=contour_data1, - showscale=False, - coloring='lines'), - line_width=1) - fig.update_xaxes(range=[0, 100], row=i+1, col=1) - fig.update_yaxes(range=[0, 6.5], row=i+1, col=1) - - fig.update_layout(height=900) - fig.add_annotation( - x=0.5, - y=0.9, - text="Discharge 
Point", - xref="paper", - yref="paper", - showarrow=False, - font_size=20, font_color='cyan') - - return fig \ No newline at end of file diff --git a/spaces/vrajeshbhatt/Automated-Ticket-Management-System/static/css/bootstrap/mixins/_visibility.css b/spaces/vrajeshbhatt/Automated-Ticket-Management-System/static/css/bootstrap/mixins/_visibility.css deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/vumichien/canvas_controlnet/ldm/modules/distributions/__init__.py b/spaces/vumichien/canvas_controlnet/ldm/modules/distributions/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/wendys-llc/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/backbone/backbone.py b/spaces/wendys-llc/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/backbone/backbone.py deleted file mode 100644 index c8340c723fad8e07e2fc62daaa3912487498814b..0000000000000000000000000000000000000000 --- a/spaces/wendys-llc/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/backbone/backbone.py +++ /dev/null @@ -1,221 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Conditional DETR -# Copyright (c) 2021 Microsoft. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Copied from DETR (https://github.com/facebookresearch/detr) -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -# ------------------------------------------------------------------------ - -""" -Backbone modules. -""" - -from typing import Dict, List - -import torch -import torch.nn.functional as F -import torchvision -from torch import nn -from torchvision.models._utils import IntermediateLayerGetter - -from groundingdino.util.misc import NestedTensor, clean_state_dict, is_main_process - -from .position_encoding import build_position_encoding -from .swin_transformer import build_swin_transformer - - -class FrozenBatchNorm2d(torch.nn.Module): - """ - BatchNorm2d where the batch statistics and the affine parameters are fixed. - - Copy-paste from torchvision.misc.ops with added eps before rqsrt, - without which any other models than torchvision.models.resnet[18,34,50,101] - produce nans. 
- """ - - def __init__(self, n): - super(FrozenBatchNorm2d, self).__init__() - self.register_buffer("weight", torch.ones(n)) - self.register_buffer("bias", torch.zeros(n)) - self.register_buffer("running_mean", torch.zeros(n)) - self.register_buffer("running_var", torch.ones(n)) - - def _load_from_state_dict( - self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs - ): - num_batches_tracked_key = prefix + "num_batches_tracked" - if num_batches_tracked_key in state_dict: - del state_dict[num_batches_tracked_key] - - super(FrozenBatchNorm2d, self)._load_from_state_dict( - state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs - ) - - def forward(self, x): - # move reshapes to the beginning - # to make it fuser-friendly - w = self.weight.reshape(1, -1, 1, 1) - b = self.bias.reshape(1, -1, 1, 1) - rv = self.running_var.reshape(1, -1, 1, 1) - rm = self.running_mean.reshape(1, -1, 1, 1) - eps = 1e-5 - scale = w * (rv + eps).rsqrt() - bias = b - rm * scale - return x * scale + bias - - -class BackboneBase(nn.Module): - def __init__( - self, - backbone: nn.Module, - train_backbone: bool, - num_channels: int, - return_interm_indices: list, - ): - super().__init__() - for name, parameter in backbone.named_parameters(): - if ( - not train_backbone - or "layer2" not in name - and "layer3" not in name - and "layer4" not in name - ): - parameter.requires_grad_(False) - - return_layers = {} - for idx, layer_index in enumerate(return_interm_indices): - return_layers.update( - {"layer{}".format(5 - len(return_interm_indices) + idx): "{}".format(layer_index)} - ) - - # if len: - # if use_stage1_feature: - # return_layers = {"layer1": "0", "layer2": "1", "layer3": "2", "layer4": "3"} - # else: - # return_layers = {"layer2": "0", "layer3": "1", "layer4": "2"} - # else: - # return_layers = {'layer4': "0"} - self.body = IntermediateLayerGetter(backbone, return_layers=return_layers) - self.num_channels = num_channels - - def forward(self, tensor_list: NestedTensor): - xs = self.body(tensor_list.tensors) - out: Dict[str, NestedTensor] = {} - for name, x in xs.items(): - m = tensor_list.mask - assert m is not None - mask = F.interpolate(m[None].float(), size=x.shape[-2:]).to(torch.bool)[0] - out[name] = NestedTensor(x, mask) - # import ipdb; ipdb.set_trace() - return out - - -class Backbone(BackboneBase): - """ResNet backbone with frozen BatchNorm.""" - - def __init__( - self, - name: str, - train_backbone: bool, - dilation: bool, - return_interm_indices: list, - batch_norm=FrozenBatchNorm2d, - ): - if name in ["resnet18", "resnet34", "resnet50", "resnet101"]: - backbone = getattr(torchvision.models, name)( - replace_stride_with_dilation=[False, False, dilation], - pretrained=is_main_process(), - norm_layer=batch_norm, - ) - else: - raise NotImplementedError("Why you can get here with name {}".format(name)) - # num_channels = 512 if name in ('resnet18', 'resnet34') else 2048 - assert name not in ("resnet18", "resnet34"), "Only resnet50 and resnet101 are available." 
- assert return_interm_indices in [[0, 1, 2, 3], [1, 2, 3], [3]] - num_channels_all = [256, 512, 1024, 2048] - num_channels = num_channels_all[4 - len(return_interm_indices) :] - super().__init__(backbone, train_backbone, num_channels, return_interm_indices) - - -class Joiner(nn.Sequential): - def __init__(self, backbone, position_embedding): - super().__init__(backbone, position_embedding) - - def forward(self, tensor_list: NestedTensor): - xs = self[0](tensor_list) - out: List[NestedTensor] = [] - pos = [] - for name, x in xs.items(): - out.append(x) - # position encoding - pos.append(self[1](x).to(x.tensors.dtype)) - - return out, pos - - -def build_backbone(args): - """ - Useful args: - - backbone: backbone name - - lr_backbone: - - dilation - - return_interm_indices: available: [0,1,2,3], [1,2,3], [3] - - backbone_freeze_keywords: - - use_checkpoint: for swin only for now - - """ - position_embedding = build_position_encoding(args) - train_backbone = True - if not train_backbone: - raise ValueError("Please set lr_backbone > 0") - return_interm_indices = args.return_interm_indices - assert return_interm_indices in [[0, 1, 2, 3], [1, 2, 3], [3]] - args.backbone_freeze_keywords - use_checkpoint = getattr(args, "use_checkpoint", False) - - if args.backbone in ["resnet50", "resnet101"]: - backbone = Backbone( - args.backbone, - train_backbone, - args.dilation, - return_interm_indices, - batch_norm=FrozenBatchNorm2d, - ) - bb_num_channels = backbone.num_channels - elif args.backbone in [ - "swin_T_224_1k", - "swin_B_224_22k", - "swin_B_384_22k", - "swin_L_224_22k", - "swin_L_384_22k", - ]: - pretrain_img_size = int(args.backbone.split("_")[-2]) - backbone = build_swin_transformer( - args.backbone, - pretrain_img_size=pretrain_img_size, - out_indices=tuple(return_interm_indices), - dilation=False, - use_checkpoint=use_checkpoint, - ) - - bb_num_channels = backbone.num_features[4 - len(return_interm_indices) :] - else: - raise NotImplementedError("Unknown backbone {}".format(args.backbone)) - - assert len(bb_num_channels) == len( - return_interm_indices - ), f"len(bb_num_channels) {len(bb_num_channels)} != len(return_interm_indices) {len(return_interm_indices)}" - - model = Joiner(backbone, position_embedding) - model.num_channels = bb_num_channels - assert isinstance( - bb_num_channels, List - ), "bb_num_channels is expected to be a List but {}".format(type(bb_num_channels)) - # import ipdb; ipdb.set_trace() - return model diff --git a/spaces/wffcyrus/MetaGPT-v1/metagpt/actions/write_prd_review.py b/spaces/wffcyrus/MetaGPT-v1/metagpt/actions/write_prd_review.py deleted file mode 100644 index 5ff9624c5b14473667ea7ef246b321a76708bdc6..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/metagpt/actions/write_prd_review.py +++ /dev/null @@ -1,27 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/11 17:45 -@Author : alexanderwu -@File : write_prd_review.py -""" -from metagpt.actions.action import Action - - -class WritePRDReview(Action): - def __init__(self, name, context=None, llm=None): - super().__init__(name, context, llm) - self.prd = None - self.desc = "Based on the PRD, conduct a PRD Review, providing clear and detailed feedback" - self.prd_review_prompt_template = """ - Given the following Product Requirement Document (PRD): - {prd} - - As a project manager, please review it and provide your feedback and suggestions. 
- """ - - async def run(self, prd): - self.prd = prd - prompt = self.prd_review_prompt_template.format(prd=self.prd) - review = await self._aask(prompt) - return review diff --git a/spaces/whgwd2023/bingo/src/components/external-link.tsx b/spaces/whgwd2023/bingo/src/components/external-link.tsx deleted file mode 100644 index 011265f364d5a64a770f4c7e9c65c5ade21d623a..0000000000000000000000000000000000000000 --- a/spaces/whgwd2023/bingo/src/components/external-link.tsx +++ /dev/null @@ -1,30 +0,0 @@ -export function ExternalLink({ - href, - children -}: { - href: string - children: React.ReactNode -}) { - return ( - - {children} - - - ) -} diff --git a/spaces/xiaowunv/bingo/Dockerfile b/spaces/xiaowunv/bingo/Dockerfile deleted file mode 100644 index c677b05b75f7e4b2beee8c97fb47957a0861a83e..0000000000000000000000000000000000000000 --- a/spaces/xiaowunv/bingo/Dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM weaigc/bingo:latest - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -CMD npm start diff --git a/spaces/xxx1/zh-clip/models/zhclip/modeling_zhclip.py b/spaces/xxx1/zh-clip/models/zhclip/modeling_zhclip.py deleted file mode 100644 index 409b8c35c0de13e4d27c910dd80b4d6769f118ee..0000000000000000000000000000000000000000 --- a/spaces/xxx1/zh-clip/models/zhclip/modeling_zhclip.py +++ /dev/null @@ -1,239 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" PyTorch ZH-CLIP model.""" - - -from typing import Optional, Tuple, Union -from torch import TensorType - -import torch -from torch import nn - - -from transformers.modeling_utils import PreTrainedModel -from transformers.utils import logging, ModelOutput -from transformers.models.auto.modeling_auto import AutoModel - -from transformers.models.clip.modeling_clip import CLIPVisionConfig, CLIPVisionModel -from .configuration_zhclip import ZhCLIPConfig -from dataclasses import dataclass - -logger = logging.get_logger(__name__) -_CONFIG_FOR_DOC = "ZhCLIPConfig" - -@dataclass -class ZhCLIPModelOutput(ModelOutput): - - text_features: torch.FloatTensor = None - image_features: torch.FloatTensor = None - - -class MeanPooler(nn.Module): - """Mean pooling""" - - def forward(self, last_hidden_state: TensorType, attention_mask: TensorType): - masked_output = last_hidden_state * attention_mask.unsqueeze(-1) - return masked_output.sum(dim=1) / attention_mask.sum(-1, keepdim=True) - - -class ZhCLIPPreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization. 
- """ - - config_class = ZhCLIPConfig - base_model_prefix = "zhclip" - supports_gradient_checkpointing = False - _keys_to_ignore_on_load_missing = [r"position_ids"] - - def _init_weights(self, module): - """Initialize the weights""" - if isinstance(module, nn.Linear): - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.Embedding): - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - - -class ZhCLIPModel(ZhCLIPPreTrainedModel): - def __init__( - self, - config: Optional[ZhCLIPConfig] = None, - vision_model: Optional[PreTrainedModel] = None, - text_model: Optional[PreTrainedModel] = None, - ): - - if config is None and (vision_model is None or text_model is None): - raise ValueError("Either a configuration or an vision and a text model has to be provided") - - if config is None: - config = ZhCLIPConfig(vision_model.config, text_model.config) - else: - if not isinstance(config, self.config_class): - raise ValueError(f"config: {config} has to be of type {self.config_class}") - - # initialize with config - super().__init__(config) - - if vision_model is None: - if isinstance(config.vision_config, CLIPVisionConfig): - vision_model = CLIPVisionModel(config.vision_config).vision_model - else: - vision_model = AutoModel.from_config(config.vision_config) - - if text_model is None: - text_model = AutoModel.from_config(config.text_config) - - self.vision_model = vision_model - self.text_model = text_model - - # make sure that the individual model's config refers to the shared config - # so that the updates to the config will be synced - self.vision_model.config = self.config.vision_config - self.text_model.config = self.config.text_config - - self.vision_embed_dim = config.vision_config.hidden_size - self.text_embed_dim = config.text_config.hidden_size - self.coattention_dim = config.hidden_size - - # add projection layers - mlp_hidden_size = (self.text_embed_dim + self.coattention_dim) // 2 - self.text_projection = nn.Sequential( - nn.Linear(self.text_embed_dim, mlp_hidden_size, bias=False), - nn.GELU(), - nn.Linear(mlp_hidden_size, self.coattention_dim, bias=False), - ) - self.text_pooler = MeanPooler() - self.visual_projection = nn.Linear(self.vision_embed_dim, self.coattention_dim) - - - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - pixel_values: Optional[torch.FloatTensor] = None, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - patch_ids = None, - extend_token_type_ids = None, - return_loss: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], ZhCLIPModelOutput]: - - return_dict = return_dict if return_dict is not None else self.config.return_dict - image_features = self.get_image_features( - pixel_values=pixel_values, - return_dict=return_dict, - ) - text_features = self.get_text_features( - input_ids=input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - return_dict=return_dict, - ) - return ZhCLIPModelOutput( - image_features = image_features, - text_features = text_features, - ) - - - @classmethod - def from_pretrained(cls, *args, **kwargs): - # At the moment 
fast initialization is not supported - # for composite models - kwargs["_fast_init"] = False - return super().from_pretrained(*args, **kwargs) - - - def get_text_features( - self, - input_ids=None, - attention_mask=None, - position_ids=None, - token_type_ids=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - ): - text_outputs = self.text_model( - input_ids=input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - token_type_ids=token_type_ids, - #output_attentions=output_attentions, - #output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - if attention_mask is None: - attention_mask = (input_ids != self.config.pad_token_id).long() - text_pool = self.text_pooler(text_outputs[0], attention_mask) - text_feat = self.text_projection(text_pool) - return text_feat - - - def get_image_features( - self, - pixel_values: Optional[torch.FloatTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> torch.FloatTensor: - r""" - Returns: - image_features (`torch.FloatTensor` of shape `(batch_size, output_dim`): The image embeddings obtained by - applying the projection layer to the pooled output of [`CLIPVisionModel`]. - - Examples: - - ```python - >>> from PIL import Image - >>> import requests - >>> from transformers import AutoProcessor, CLIPModel - - >>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") - >>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32") - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> inputs = processor(images=image, return_tensors="pt") - - >>> image_features = model.get_image_features(**inputs) - ```""" - # Use CLIP model's config for some fields (if specified) instead of those of vision & text components. 
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - vision_outputs = self.vision_model( - pixel_values=pixel_values, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - pooled_output = vision_outputs[1] # pooled_output - image_features = self.visual_projection(pooled_output) - - return image_features diff --git a/spaces/yahma/rwkv-14b/config.py b/spaces/yahma/rwkv-14b/config.py deleted file mode 100644 index 0126bd7b85585d520003821cfac2ac48d4a10720..0000000000000000000000000000000000000000 --- a/spaces/yahma/rwkv-14b/config.py +++ /dev/null @@ -1,92 +0,0 @@ -from rwkvstic.agnostic.backends import TORCH, TORCH_QUANT -import torch - -quantized = { - "mode": TORCH_QUANT, - "runtimedtype": torch.bfloat16, - "useGPU": torch.cuda.is_available(), - "chunksize": 32, # larger = more accurate, but more memory (and slower) - "target": 15 # your gpu max size, excess vram offloaded to cpu -} - -# UNCOMMENT TO SELECT OPTIONS -# Not full list of options, see https://pypi.org/project/rwkvstic/ and https://huggingface.co/BlinkDL/ for more models/modes - -# RWKV 1B5 instruct test 2 model -# Approximate -# [Vram usage: 6.0GB] -# [File size: 3.0GB] - - -# config = { -# "path": "https://huggingface.co/BlinkDL/rwkv-4-pile-1b5/resolve/main/RWKV-4-Pile-1B5-Instruct-test2-20230209.pth", -# "mode": TORCH, -# "runtimedtype": torch.float32, -# "useGPU": torch.cuda.is_available(), -# "dtype": torch.float32 -# } - -# title = "RWKV-4 (1.5b Instruct Test 2)" - -# RWKV 1B5 instruct model quantized -# Approximate -# [Vram usage: 1.3GB] -# [File size: 3.0GB] - -# config = { -# "path": "https://huggingface.co/BlinkDL/rwkv-4-pile-1b5/resolve/main/RWKV-4-Pile-1B5-Instruct-test1-20230124.pth", -# **quantized -# } - -# title = "RWKV-4 (1.5b Instruct Quantized)" - -# RWKV 7B instruct pre-quantized (settings baked into model) -# Approximate -# [Vram usage: 7.0GB] -# [File size: 8.0GB] - -# config = { -# "path": "https://huggingface.co/Hazzzardous/RWKV-8Bit/resolve/main/RWKV-4-Pile-7B-Instruct.pqth" -# } - -# title = "RWKV-4 (7b Instruct Quantized)" - -# RWKV 14B quantized (latest as of feb 9) -# Approximate -# [Vram usage: 15.0GB] -# [File size: 28.0GB] - -# config = { -# "path": "https://huggingface.co/BlinkDL/rwkv-4-pile-14b/resolve/main/RWKV-4-Pile-14B-20230204-7324.pth", -# **quantized -# } - -# title = "RWKV-4 (14b Quantized)" - - -# RWKV 14B quantized (latest as of feb 13) -# Approximate -# [Vram usage: 15.0GB] -# [File size: 14.4GB] - -config = { -# "path": "https://huggingface.co/Hazzzardous/RWKV-8Bit/resolve/main/RWKV-4-Pile-14B-20230204-7324.pqth" - "path": "https://huggingface.co/yahma/RWKV-14b_quant/resolve/main/RWKV-4-Pile-14B-20230213-8019.pqth" -} - -title = "RWKV-4 (14b Quantized - Feb 13)" - -# RWKV 14B (latest as of feb 9) -# Approximate -# [Vram usage: 27.0GB] -# [File size: 28.4GB] - -# config = { -# "path": "https://huggingface.co/BlinkDL/rwkv-4-pile-14b/resolve/main/RWKV-4-Pile-14B-20230204-7324.pth", -# "mode": TORCH, -# "runtimedtype": torch.bfloat16, -# "useGPU": torch.cuda.is_available(), -# "dtype": torch.bfloat16 -# } - -# title = "RWKV-4 (14b Feb 4 Snapshot)" \ No newline at end of file diff --git a/spaces/ybelkada/interfacegan_pp/torch_utils/ops/__init__.py 
b/spaces/ybelkada/interfacegan_pp/torch_utils/ops/__init__.py deleted file mode 100644 index 939e7c6c8f94c4ea1141885c3c3295fe083b06aa..0000000000000000000000000000000000000000 --- a/spaces/ybelkada/interfacegan_pp/torch_utils/ops/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -# empty diff --git a/spaces/yerfor/SyntaSpeech/modules/tts/syntaspeech/syntactic_graph_encoder.py b/spaces/yerfor/SyntaSpeech/modules/tts/syntaspeech/syntactic_graph_encoder.py deleted file mode 100644 index 0260b3100e6636f9684fc8ddff1775cafd33eba4..0000000000000000000000000000000000000000 --- a/spaces/yerfor/SyntaSpeech/modules/tts/syntaspeech/syntactic_graph_encoder.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -import dgl -from dgl.nn.pytorch import GatedGraphConv - -def sequence_mask(lengths, maxlen, dtype=torch.bool): - if maxlen is None: - maxlen = lengths.max() - mask = ~(torch.ones((len(lengths), maxlen)).to(lengths.device).cumsum(dim=1).t() > lengths).t() - mask.type(dtype) - return mask - - -def group_hidden_by_segs(h, seg_ids, max_len): - """ - :param h: [B, T, H] - :param seg_ids: [B, T] - :return: h_ph: [B, T_ph, H] - """ - B, T, H = h.shape - h_gby_segs = h.new_zeros([B, max_len + 1, H]).scatter_add_(1, seg_ids[:, :, None].repeat([1, 1, H]), h) - all_ones = h.new_ones(h.shape[:2]) - cnt_gby_segs = h.new_zeros([B, max_len + 1]).scatter_add_(1, seg_ids, all_ones).contiguous() - h_gby_segs = h_gby_segs[:, 1:] - cnt_gby_segs = cnt_gby_segs[:, 1:] - h_gby_segs = h_gby_segs / torch.clamp(cnt_gby_segs[:, :, None], min=1) - # assert h_gby_segs.shape[-1] == 192 - return h_gby_segs - -class GraphAuxEnc(nn.Module): - def __init__(self, in_dim, hid_dim, out_dim, n_iterations=5, n_edge_types=6): - super(GraphAuxEnc, self).__init__() - self.in_dim = in_dim - self.hid_dim = hid_dim - self.out_dim = out_dim - self.skip_connect = True - self.dropout_after_gae = False - - self.ggc_1 = GatedGraphConv(in_feats=in_dim, out_feats=hid_dim - , n_steps=n_iterations, n_etypes=n_edge_types) - self.ggc_2 = GatedGraphConv(in_feats=hid_dim, out_feats=out_dim - , n_steps=n_iterations, n_etypes=n_edge_types) - self.dropout = nn.Dropout(p=0.5) - - @staticmethod - def ph_encoding_to_word_encoding(ph_encoding, ph2word, word_len): - """ - ph_encoding: [batch, t_p, hid] - ph2word: tensor [batch, t_w] - word_len: tensor [batch] - """ - word_encoding_for_graph, batch_word_encoding, has_word_row_idx = GraphAuxEnc._process_ph_to_word_encoding( - ph_encoding, - ph2word, - word_len) - # [batch, t_w, hid] - return batch_word_encoding, word_encoding_for_graph - - def pad_word_encoding_to_phoneme(self, word_encoding, ph2word, t_p): - return self._postprocess_word2ph(word_encoding, ph2word, t_p) - - @staticmethod - def _process_ph_to_word_encoding(ph_encoding, ph2word, word_len=None): - """ - ph_encoding: [batch, t_p, hid] - ph2word: tensor [batch, t_w] - word_len: tensor [batch] - """ - word_len = word_len.reshape([-1,]) - max_len = max(word_len) - num_nodes = sum(word_len) - - batch_word_encoding = group_hidden_by_segs(ph_encoding, ph2word, max_len) - bs, 
t_p, hid = batch_word_encoding.shape - has_word_mask = sequence_mask(word_len, max_len) # [batch, t_p, 1] - word_encoding = batch_word_encoding.reshape([bs * t_p, hid]) - has_word_row_idx = has_word_mask.reshape([-1]) - word_encoding = word_encoding[has_word_row_idx] - assert word_encoding.shape[0] == num_nodes - return word_encoding, batch_word_encoding, has_word_row_idx - - @staticmethod - def _postprocess_word2ph(word_encoding, ph2word, t_p): - word_encoding = F.pad(word_encoding,[0,0,1,0]) - ph2word_ = ph2word[:, :, None].repeat([1, 1, word_encoding.shape[-1]]) - out = torch.gather(word_encoding, 1, ph2word_) # [B, T, H] - return out - - @staticmethod - def _repeat_one_sequence(x, d, T): - """Repeat each frame according to duration.""" - if d.sum() == 0: - d = d.fill_(1) - hid = x.shape[-1] - expanded_lst = [x_.repeat(int(d_), 1) for x_, d_ in zip(x, d) if d_ != 0] - expanded = torch.cat(expanded_lst, dim=0) - if T > expanded.shape[0]: - expanded = torch.cat([expanded, torch.zeros([T - expanded.shape[0], hid]).to(expanded.device)], dim=0) - return expanded - - def word_forward(self, graph_lst, word_encoding, etypes_lst): - """ - word encoding in, word encoding out. - """ - batched_graph = dgl.batch(graph_lst) - inp = word_encoding - batched_etypes = torch.cat(etypes_lst) # [num_edges_in_batch, 1] - assert batched_graph.num_nodes() == inp.shape[0] - - gcc1_out = self.ggc_1(batched_graph, inp, batched_etypes) - if self.dropout_after_gae: - gcc1_out = self.dropout(gcc1_out) - gcc2_out = self.ggc_2(batched_graph, gcc1_out, batched_etypes) # [num_nodes_in_batch, hin] - if self.dropout_after_gae: - gcc2_out = self.ggc_2(batched_graph, gcc2_out, batched_etypes) - if self.skip_connect: - assert self.in_dim == self.hid_dim and self.hid_dim == self.out_dim - gcc2_out = inp + gcc1_out + gcc1_out - - word_len = torch.tensor([g.num_nodes() for g in graph_lst]).reshape([-1]) - max_len = max(word_len) - has_word_mask = sequence_mask(word_len, max_len) # [batch, t_p, 1] - has_word_row_idx = has_word_mask.reshape([-1]) - bs = len(graph_lst) - t_w = max([g.num_nodes() for g in graph_lst]) - hid = word_encoding.shape[-1] - output = torch.zeros([bs * t_w, hid]).to(gcc2_out.device) - output[has_word_row_idx] = gcc2_out - output = output.reshape([bs, t_w, hid]) - word_level_output = output - return torch.transpose(word_level_output, 1, 2) - - def forward(self, graph_lst, ph_encoding, ph2word, etypes_lst, return_word_encoding=False): - """ - graph_lst: [list of dgl_graph] - ph_encoding: [batch, hid, t_p] - ph2word: [list of list[1,2,2,2,3,3,3]] - etypes_lst: [list of etypes]; etypes: torch.LongTensor - """ - t_p = ph_encoding.shape[-1] - ph_encoding = ph_encoding.transpose(1,2) # [batch, t_p, hid] - word_len = torch.tensor([g.num_nodes() for g in graph_lst]).reshape([-1]) - batched_graph = dgl.batch(graph_lst) - inp, batched_word_encoding, has_word_row_idx = self._process_ph_to_word_encoding(ph_encoding, ph2word, - word_len=word_len) # [num_nodes_in_batch, in_dim] - bs, t_w, hid = batched_word_encoding.shape - batched_etypes = torch.cat(etypes_lst) # [num_edges_in_batch, 1] - gcc1_out = self.ggc_1(batched_graph, inp, batched_etypes) - gcc2_out = self.ggc_2(batched_graph, gcc1_out, batched_etypes) # [num_nodes_in_batch, hin] - # skip connection - gcc2_out = inp + gcc1_out + gcc1_out # [n_nodes, hid] - - output = torch.zeros([bs * t_w, hid]).to(gcc2_out.device) - output[has_word_row_idx] = gcc2_out - output = output.reshape([bs, t_w, hid]) - word_level_output = output - output = 
self._postprocess_word2ph(word_level_output, ph2word, t_p) # [batch, t_p, hid] - output = torch.transpose(output, 1, 2) - - if return_word_encoding: - return output, torch.transpose(word_level_output, 1, 2) - else: - return output - -if __name__ == '__main__': - # Unit Test for batching graphs - from modules.tts.syntaspeech.syntactic_graph_buider import Sentence2GraphParser, plot_dgl_sentence_graph - parser = Sentence2GraphParser("en") - - # Unit Test for English Graph Builder - text1 = "To be or not to be , that 's a question ." - text2 = "I love you . You love me . Mixue ice-scream and tea ." - graph1, etypes1 = parser.parse(text1) - graph2, etypes2 = parser.parse(text2) - batched_text = " " + text1 + " " + " " + " " + text2 + " " - batched_nodes = [graph1.num_nodes(), graph2.num_nodes()] - plot_dgl_sentence_graph(dgl.batch([graph1, graph2]), {i: w for i, w in enumerate(batched_text.split(" "))}) - etypes_lst = [etypes1, etypes2] - - # Unit Test for Graph Encoder forward - in_feats = 4 - out_feats = 4 - enc = GraphAuxEnc(in_dim=in_feats, hid_dim=in_feats, out_dim=out_feats) - ph2word = torch.tensor([ - [1, 2, 3, 3, 3, 4, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 0], - [1, 2, 3, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16] - ]) - inp = torch.randn([2, in_feats, 17]) # [N_sentence, feat, ph_length] - graph_lst = [graph1, graph2] - out = enc(graph_lst, inp, ph2word, etypes_lst) - print(out.shape) # [N_sentence, feat, ph_length] diff --git a/spaces/ygangang/CodeFormer/CodeFormer/facelib/detection/__init__.py b/spaces/ygangang/CodeFormer/CodeFormer/facelib/detection/__init__.py deleted file mode 100644 index 296262d4e2e29eaa2afba7bda1f0399d77da24f6..0000000000000000000000000000000000000000 --- a/spaces/ygangang/CodeFormer/CodeFormer/facelib/detection/__init__.py +++ /dev/null @@ -1,100 +0,0 @@ -import os -import torch -from torch import nn -from copy import deepcopy - -from facelib.utils import load_file_from_url -from facelib.utils import download_pretrained_models -from facelib.detection.yolov5face.models.common import Conv - -from .retinaface.retinaface import RetinaFace -from .yolov5face.face_detector import YoloDetector - - -def init_detection_model(model_name, half=False, device='cuda'): - if 'retinaface' in model_name: - model = init_retinaface_model(model_name, half, device) - elif 'YOLOv5' in model_name: - model = init_yolov5face_model(model_name, device) - else: - raise NotImplementedError(f'{model_name} is not implemented.') - - return model - - -def init_retinaface_model(model_name, half=False, device='cuda'): - if model_name == 'retinaface_resnet50': - model = RetinaFace(network_name='resnet50', half=half) - model_url = 'https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth' - elif model_name == 'retinaface_mobile0.25': - model = RetinaFace(network_name='mobile0.25', half=half) - model_url = 'https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_mobilenet0.25_Final.pth' - else: - raise NotImplementedError(f'{model_name} is not implemented.') - - model_path = load_file_from_url(url=model_url, model_dir='weights/facelib', progress=True, file_name=None) - load_net = torch.load(model_path, map_location=lambda storage, loc: storage) - # remove unnecessary 'module.' 
- for k, v in deepcopy(load_net).items(): - if k.startswith('module.'): - load_net[k[7:]] = v - load_net.pop(k) - model.load_state_dict(load_net, strict=True) - model.eval() - model = model.to(device) - - return model - - -def init_yolov5face_model(model_name, device='cuda'): - if model_name == 'YOLOv5l': - model = YoloDetector(config_name='facelib/detection/yolov5face/models/yolov5l.yaml', device=device) - model_url = 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/yolov5l-face.pth' - elif model_name == 'YOLOv5n': - model = YoloDetector(config_name='facelib/detection/yolov5face/models/yolov5n.yaml', device=device) - model_url = 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/yolov5n-face.pth' - else: - raise NotImplementedError(f'{model_name} is not implemented.') - - model_path = load_file_from_url(url=model_url, model_dir='weights/facelib', progress=True, file_name=None) - load_net = torch.load(model_path, map_location=lambda storage, loc: storage) - model.detector.load_state_dict(load_net, strict=True) - model.detector.eval() - model.detector = model.detector.to(device).float() - - for m in model.detector.modules(): - if type(m) in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU]: - m.inplace = True # pytorch 1.7.0 compatibility - elif isinstance(m, Conv): - m._non_persistent_buffers_set = set() # pytorch 1.6.0 compatibility - - return model - - -# Download from Google Drive -# def init_yolov5face_model(model_name, device='cuda'): -# if model_name == 'YOLOv5l': -# model = YoloDetector(config_name='facelib/detection/yolov5face/models/yolov5l.yaml', device=device) -# f_id = {'yolov5l-face.pth': '131578zMA6B2x8VQHyHfa6GEPtulMCNzV'} -# elif model_name == 'YOLOv5n': -# model = YoloDetector(config_name='facelib/detection/yolov5face/models/yolov5n.yaml', device=device) -# f_id = {'yolov5n-face.pth': '1fhcpFvWZqghpGXjYPIne2sw1Fy4yhw6o'} -# else: -# raise NotImplementedError(f'{model_name} is not implemented.') - -# model_path = os.path.join('weights/facelib', list(f_id.keys())[0]) -# if not os.path.exists(model_path): -# download_pretrained_models(file_ids=f_id, save_path_root='weights/facelib') - -# load_net = torch.load(model_path, map_location=lambda storage, loc: storage) -# model.detector.load_state_dict(load_net, strict=True) -# model.detector.eval() -# model.detector = model.detector.to(device).float() - -# for m in model.detector.modules(): -# if type(m) in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU]: -# m.inplace = True # pytorch 1.7.0 compatibility -# elif isinstance(m, Conv): -# m._non_persistent_buffers_set = set() # pytorch 1.6.0 compatibility - -# return model \ No newline at end of file diff --git a/spaces/ygtxr1997/ReliableSwap_Demo/README.md b/spaces/ygtxr1997/ReliableSwap_Demo/README.md deleted file mode 100644 index ee0ec015f1ef14f511346e5738cc9bccb026f889..0000000000000000000000000000000000000000 --- a/spaces/ygtxr1997/ReliableSwap_Demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ReliableSwap -emoji: 🌍 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.33.1 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/yhavinga/rosetta/generator.py b/spaces/yhavinga/rosetta/generator.py deleted file mode 100644 index d953d118b0a001bbda93d8c0aef43317848f1903..0000000000000000000000000000000000000000 --- a/spaces/yhavinga/rosetta/generator.py +++ /dev/null @@ -1,169 +0,0 @@ -import _thread 
-import os -import re - -import streamlit as st -import torch -from transformers import AutoModelForSeq2SeqLM, AutoTokenizer - -device = torch.cuda.device_count() - 1 - - -def get_access_token(): - try: - if not os.path.exists(".streamlit/secrets.toml"): - raise FileNotFoundError - access_token = st.secrets.get("babel") - except FileNotFoundError: - access_token = os.environ.get("HF_ACCESS_TOKEN", None) - return access_token - - -# @st.cache(hash_funcs={_thread.RLock: lambda _: None}, suppress_st_warning=True, allow_output_mutation=True) -def load_model(model_name): - os.environ.setdefault("TOKENIZERS_PARALLELISM", "false") - tokenizer = AutoTokenizer.from_pretrained( - model_name, - use_fast=("ul2" not in model_name), - use_auth_token=get_access_token(), - ) - if tokenizer.pad_token is None: - print("Adding pad_token to the tokenizer") - tokenizer.pad_token = tokenizer.eos_token - for framework in [None, "flax", "tf"]: - try: - model = AutoModelForSeq2SeqLM.from_pretrained( - model_name, - from_flax=(framework == "flax"), - from_tf=(framework == "tf"), - use_auth_token=get_access_token(), - ) - break - except EnvironmentError: - if framework == "tf": - raise - if device != -1: - model.to(f"cuda:{device}") - return tokenizer, model - - -class Generator: - def __init__(self, model_name, task, desc, split_sentences): - self.model_name = model_name - self.task = task - self.desc = desc - self.split_sentences = split_sentences - self.tokenizer = None - self.model = None - self.prefix = "" - self.gen_kwargs = { - "max_length": 128, - "num_beams": 6, - "num_beam_groups": 3, - "no_repeat_ngram_size": 0, - "early_stopping": True, - "num_return_sequences": 1, - "length_penalty": 1.0, - } - self.load() - - def load(self): - if not self.model: - print(f"Loading model {self.model_name}") - self.tokenizer, self.model = load_model(self.model_name) - - for key in self.gen_kwargs: - if key in self.model.config.__dict__: - self.gen_kwargs[key] = self.model.config.__dict__[key] - try: - if self.task in self.model.config.task_specific_params: - task_specific_params = self.model.config.task_specific_params[ - self.task - ] - if "prefix" in task_specific_params: - self.prefix = task_specific_params["prefix"] - for key in self.gen_kwargs: - if key in task_specific_params: - self.gen_kwargs[key] = task_specific_params[key] - except TypeError: - pass - - def generate(self, text: str, streamer=None, **generate_kwargs) -> (str, dict): - # Replace two or more newlines with a single newline in text - text = re.sub(r"\n{2,}", "\n", text) - - generate_kwargs = {**self.gen_kwargs, **generate_kwargs} - - # if there are newlines in the text, and the model needs line-splitting, split the text and recurse - if re.search(r"\n", text) and self.split_sentences: - lines = text.splitlines() - translated = [ - self.generate(line, streamer, **generate_kwargs)[0] for line in lines - ] - return "\n".join(translated), generate_kwargs - - # if self.tokenizer has a newline_token attribute, replace \n with it - if hasattr(self.tokenizer, "newline_token"): - text = re.sub(r"\n", self.tokenizer.newline_token, text) - - batch_encoded = self.tokenizer( - self.prefix + text, - max_length=generate_kwargs["max_length"], - padding=False, - truncation=False, - return_tensors="pt", - ) - if device != -1: - batch_encoded.to(f"cuda:{device}") - logits = self.model.generate( - batch_encoded["input_ids"], - attention_mask=batch_encoded["attention_mask"], - streamer=streamer, - **generate_kwargs, - ) - decoded_preds = self.tokenizer.batch_decode( - 
logits.cpu().numpy(), skip_special_tokens=False -        ) - -        def replace_tokens(pred): -            pred = pred.replace("<pad> ", "").replace("</s>", "").replace("<pad>
          ", "") - if hasattr(self.tokenizer, "newline_token"): - pred = pred.replace(self.tokenizer.newline_token, "\n") - return pred - - decoded_preds = list(map(replace_tokens, decoded_preds)) - return decoded_preds[0], generate_kwargs - - def __str__(self): - return self.model_name - - -class GeneratorFactory: - def __init__(self, generator_list): - self.generators = [] - for g in generator_list: - with st.spinner(text=f"Loading the model {g['desc']} ..."): - self.add_generator(**g) - - def add_generator(self, model_name, task, desc, split_sentences): - # If the generator is not yet present, add it - if not self.get_generator(model_name=model_name, task=task, desc=desc): - g = Generator(model_name, task, desc, split_sentences) - g.load() - self.generators.append(g) - - def get_generator(self, **kwargs): - for g in self.generators: - if all([g.__dict__.get(k) == v for k, v in kwargs.items()]): - return g - return None - - def __iter__(self): - return iter(self.generators) - - def filter(self, **kwargs): - return [ - g - for g in self.generators - if all([g.__dict__.get(k) == v for k, v in kwargs.items()]) - ] diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/configuration_utils.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/configuration_utils.py deleted file mode 100644 index c718fc532311a1e70077b048efe29856ee22a7f0..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/configuration_utils.py +++ /dev/null @@ -1,1075 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" Configuration base class and utilities.""" - - -import copy -import json -import os -import re -import warnings -from typing import Any, Dict, List, Optional, Tuple, Union - -from packaging import version - -from . import __version__ -from .dynamic_module_utils import custom_object_save -from .utils import ( - CONFIG_NAME, - PushToHubMixin, - add_model_info_to_auto_map, - cached_file, - copy_func, - download_url, - extract_commit_hash, - is_remote_url, - is_torch_available, - logging, -) - - -logger = logging.get_logger(__name__) - -_re_configuration_file = re.compile(r"config\.(.*)\.json") - - -class PretrainedConfig(PushToHubMixin): - # no-format - r""" - Base class for all configuration classes. Handles a few parameters common to all models' configurations as well as - methods for loading/downloading/saving configurations. - - - - A configuration file can be loaded and saved to disk. Loading the configuration file and using this file to - initialize a model does **not** load the model weights. It only affects the model's configuration. 
- - - - Class attributes (overridden by derived classes): - - - **model_type** (`str`) -- An identifier for the model type, serialized into the JSON file, and used to recreate - the correct object in [`~transformers.AutoConfig`]. - - **is_composition** (`bool`) -- Whether the config class is composed of multiple sub-configs. In this case the - config has to be initialized from two or more configs of type [`~transformers.PretrainedConfig`] like: - [`~transformers.EncoderDecoderConfig`] or [`~RagConfig`]. - - **keys_to_ignore_at_inference** (`List[str]`) -- A list of keys to ignore by default when looking at dictionary - outputs of the model during inference. - - **attribute_map** (`Dict[str, str]`) -- A dict that maps model specific attribute names to the standardized - naming of attributes. - - Common attributes (present in all subclasses): - - - **vocab_size** (`int`) -- The number of tokens in the vocabulary, which is also the first dimension of the - embeddings matrix (this attribute may be missing for models that don't have a text modality like ViT). - - **hidden_size** (`int`) -- The hidden size of the model. - - **num_attention_heads** (`int`) -- The number of attention heads used in the multi-head attention layers of the - model. - - **num_hidden_layers** (`int`) -- The number of blocks in the model. - - Arg: - name_or_path (`str`, *optional*, defaults to `""`): - Store the string that was passed to [`PreTrainedModel.from_pretrained`] or - [`TFPreTrainedModel.from_pretrained`] as `pretrained_model_name_or_path` if the configuration was created - with such a method. - output_hidden_states (`bool`, *optional*, defaults to `False`): - Whether or not the model should return all hidden-states. - output_attentions (`bool`, *optional*, defaults to `False`): - Whether or not the model should returns all attentions. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not the model should return a [`~transformers.utils.ModelOutput`] instead of a plain tuple. - is_encoder_decoder (`bool`, *optional*, defaults to `False`): - Whether the model is used as an encoder/decoder or not. - is_decoder (`bool`, *optional*, defaults to `False`): - Whether the model is used as decoder or not (in which case it's used as an encoder). - cross_attention_hidden_size** (`bool`, *optional*): - The hidden size of the cross-attention layer in case the model is used as a decoder in an encoder-decoder - setting and the cross-attention hidden dimension differs from `self.config.hidden_size`. - add_cross_attention (`bool`, *optional*, defaults to `False`): - Whether cross-attention layers should be added to the model. Note, this option is only relevant for models - that can be used as decoder models within the [`EncoderDecoderModel`] class, which consists of all models - in `AUTO_MODELS_FOR_CAUSAL_LM`. - tie_encoder_decoder (`bool`, *optional*, defaults to `False`): - Whether all encoder weights should be tied to their equivalent decoder weights. This requires the encoder - and decoder model to have the exact same parameter names. - prune_heads (`Dict[int, List[int]]`, *optional*, defaults to `{}`): - Pruned heads of the model. The keys are the selected layer indices and the associated values, the list of - heads to prune in said layer. - - For instance `{1: [0, 2], 2: [2, 3]}` will prune heads 0 and 2 on layer 1 and heads 2 and 3 on layer 2. - chunk_size_feed_forward (`int`, *optional*, defaults to `0`): - The chunk size of all feed forward layers in the residual attention blocks. 
A chunk size of `0` means that - the feed forward layer is not chunked. A chunk size of n means that the feed forward layer processes `n` < - sequence_length embeddings at a time. For more information on feed forward chunking, see [How does Feed - Forward Chunking work?](../glossary.html#feed-forward-chunking). - - > Parameters for sequence generation - - max_length (`int`, *optional*, defaults to 20): - Maximum length that will be used by default in the `generate` method of the model. - min_length (`int`, *optional*, defaults to 0): - Minimum length that will be used by default in the `generate` method of the model. - do_sample (`bool`, *optional*, defaults to `False`): - Flag that will be used by default in the `generate` method of the model. Whether or not to use sampling ; - use greedy decoding otherwise. - early_stopping (`bool`, *optional*, defaults to `False`): - Flag that will be used by default in the `generate` method of the model. Whether to stop the beam search - when at least `num_beams` sentences are finished per batch or not. - num_beams (`int`, *optional*, defaults to 1): - Number of beams for beam search that will be used by default in the `generate` method of the model. 1 means - no beam search. - num_beam_groups (`int`, *optional*, defaults to 1): - Number of groups to divide `num_beams` into in order to ensure diversity among different groups of beams - that will be used by default in the `generate` method of the model. 1 means no group beam search. - diversity_penalty (`float`, *optional*, defaults to 0.0): - Value to control diversity for group beam search. that will be used by default in the `generate` method of - the model. 0 means no diversity penalty. The higher the penalty, the more diverse are the outputs. - temperature (`float`, *optional*, defaults to 1.0): - The value used to module the next token probabilities that will be used by default in the `generate` method - of the model. Must be strictly positive. - top_k (`int`, *optional*, defaults to 50): - Number of highest probability vocabulary tokens to keep for top-k-filtering that will be used by default in - the `generate` method of the model. - top_p (`float`, *optional*, defaults to 1): - Value that will be used by default in the `generate` method of the model for `top_p`. If set to float < 1, - only the most probable tokens with probabilities that add up to `top_p` or higher are kept for generation. - typical_p (`float`, *optional*, defaults to 1): - Local typicality measures how similar the conditional probability of predicting a target token next is to - the expected conditional probability of predicting a random token next, given the partial text already - generated. If set to float < 1, the smallest set of the most locally typical tokens with probabilities that - add up to `typical_p` or higher are kept for generation. See [this - paper](https://arxiv.org/pdf/2202.00666.pdf) for more details. - repetition_penalty (`float`, *optional*, defaults to 1): - Parameter for repetition penalty that will be used by default in the `generate` method of the model. 1.0 - means no penalty. - length_penalty (`float`, *optional*, defaults to 1): - Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to - the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log - likelihood of the sequence (i.e. negative), `length_penalty` > 0.0 promotes longer sequences, while - `length_penalty` < 0.0 encourages shorter sequences. 
- no_repeat_ngram_size (`int`, *optional*, defaults to 0) -- Value that will be used by default in the - `generate` method of the model for `no_repeat_ngram_size`. If set to int > 0, all ngrams of that size can - only occur once. - encoder_no_repeat_ngram_size (`int`, *optional*, defaults to 0) -- Value that will be used by - default in the `generate` method of the model for `encoder_no_repeat_ngram_size`. If set to int > 0, all - ngrams of that size that occur in the `encoder_input_ids` cannot occur in the `decoder_input_ids`. - bad_words_ids (`List[int]`, *optional*): - List of token ids that are not allowed to be generated that will be used by default in the `generate` - method of the model. In order to get the tokens of the words that should not appear in the generated text, - use `tokenizer.encode(bad_word, add_prefix_space=True)`. - num_return_sequences (`int`, *optional*, defaults to 1): - Number of independently computed returned sequences for each element in the batch that will be used by - default in the `generate` method of the model. - output_scores (`bool`, *optional*, defaults to `False`): - Whether the model should return the logits when used for generation. - return_dict_in_generate (`bool`, *optional*, defaults to `False`): - Whether the model should return a [`~transformers.utils.ModelOutput`] instead of a `torch.LongTensor`. - forced_bos_token_id (`int`, *optional*): - The id of the token to force as the first generated token after the `decoder_start_token_id`. Useful for - multilingual models like [mBART](../model_doc/mbart) where the first generated token needs to be the target - language token. - forced_eos_token_id (`int`, *optional*): - The id of the token to force as the last generated token when `max_length` is reached. - remove_invalid_values (`bool`, *optional*): - Whether to remove possible _nan_ and _inf_ outputs of the model to prevent the generation method to crash. - Note that using `remove_invalid_values` can slow down generation. - - > Parameters for fine-tuning tasks - - architectures (`List[str]`, *optional*): - Model architectures that can be used with the model pretrained weights. - finetuning_task (`str`, *optional*): - Name of the task used to fine-tune the model. This can be used when converting from an original (TensorFlow - or PyTorch) checkpoint. - id2label (`Dict[int, str]`, *optional*): - A map from index (for instance prediction index, or target index) to label. - label2id (`Dict[str, int]`, *optional*): A map from label to index for the model. - num_labels (`int`, *optional*): - Number of labels to use in the last layer added to the model, typically for a classification task. - task_specific_params (`Dict[str, Any]`, *optional*): - Additional keyword arguments to store for the current task. - problem_type (`str`, *optional*): - Problem type for `XxxForSequenceClassification` models. Can be one of `"regression"`, - `"single_label_classification"` or `"multi_label_classification"`. - - > Parameters linked to the tokenizer - - tokenizer_class (`str`, *optional*): - The name of the associated tokenizer class to use (if none is set, will use the tokenizer associated to the - model by default). - prefix (`str`, *optional*): - A specific prompt that should be added at the beginning of each text before calling the model. - bos_token_id (`int`, *optional*): The id of the _beginning-of-stream_ token. - pad_token_id (`int`, *optional*): The id of the _padding_ token. - eos_token_id (`int`, *optional*): The id of the _end-of-stream_ token. 
- decoder_start_token_id (`int`, *optional*): - If an encoder-decoder model starts decoding with a different token than _bos_, the id of that token. - sep_token_id (`int`, *optional*): The id of the _separation_ token. - - > PyTorch specific parameters - - torchscript (`bool`, *optional*, defaults to `False`): - Whether or not the model should be used with Torchscript. - tie_word_embeddings (`bool`, *optional*, defaults to `True`): - Whether the model's input and output word embeddings should be tied. Note that this is only relevant if the - model has a output word embedding layer. - torch_dtype (`str`, *optional*): - The `dtype` of the weights. This attribute can be used to initialize the model to a non-default `dtype` - (which is normally `float32`) and thus allow for optimal storage allocation. For example, if the saved - model is `float16`, ideally we want to load it back using the minimal amount of memory needed to load - `float16` weights. Since the config object is stored in plain text, this attribute contains just the - floating type string without the `torch.` prefix. For example, for `torch.float16` ``torch_dtype` is the - `"float16"` string. - - This attribute is currently not being used during model loading time, but this may change in the future - versions. But we can already start preparing for the future by saving the dtype with save_pretrained. - - > TensorFlow specific parameters - - use_bfloat16 (`bool`, *optional*, defaults to `False`): - Whether or not the model should use BFloat16 scalars (only used by some TensorFlow models). - tf_legacy_loss (`bool`, *optional*, defaults to `False`): - Whether the model should use legacy TensorFlow losses. Legacy losses have variable output shapes and may - not be XLA-compatible. This option is here for backward compatibility and will be removed in Transformers - v5. - """ - model_type: str = "" - is_composition: bool = False - attribute_map: Dict[str, str] = {} - _auto_class: Optional[str] = None - - def __setattr__(self, key, value): - if key in super().__getattribute__("attribute_map"): - key = super().__getattribute__("attribute_map")[key] - super().__setattr__(key, value) - - def __getattribute__(self, key): - if key != "attribute_map" and key in super().__getattribute__("attribute_map"): - key = super().__getattribute__("attribute_map")[key] - return super().__getattribute__(key) - - def __init__(self, **kwargs): - # Attributes with defaults - self.return_dict = kwargs.pop("return_dict", True) - self.output_hidden_states = kwargs.pop("output_hidden_states", False) - self.output_attentions = kwargs.pop("output_attentions", False) - self.torchscript = kwargs.pop("torchscript", False) # Only used by PyTorch models - self.torch_dtype = kwargs.pop("torch_dtype", None) # Only used by PyTorch models - self.use_bfloat16 = kwargs.pop("use_bfloat16", False) - self.tf_legacy_loss = kwargs.pop("tf_legacy_loss", False) # Only used by TensorFlow models - self.pruned_heads = kwargs.pop("pruned_heads", {}) - self.tie_word_embeddings = kwargs.pop( - "tie_word_embeddings", True - ) # Whether input and output word embeddings should be tied for all MLM, LM and Seq2Seq models. 
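As a side note, the `attribute_map` indirection implemented by `__setattr__`/`__getattribute__` above can be seen in a minimal, hypothetical subclass (the class name and mapping below are invented for illustration):

```python
from transformers import PretrainedConfig

class ToyConfig(PretrainedConfig):
    model_type = "toy"
    # Alias a model-specific name to the standardized attribute name.
    attribute_map = {"n_embd": "hidden_size"}

    def __init__(self, hidden_size=64, **kwargs):
        super().__init__(**kwargs)
        self.hidden_size = hidden_size

cfg = ToyConfig(hidden_size=128)
assert cfg.n_embd == 128   # reads are redirected through attribute_map
cfg.n_embd = 256           # writes are redirected as well
assert cfg.hidden_size == 256
```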
- - # Is decoder is used in encoder-decoder models to differentiate encoder from decoder - self.is_encoder_decoder = kwargs.pop("is_encoder_decoder", False) - self.is_decoder = kwargs.pop("is_decoder", False) - self.cross_attention_hidden_size = kwargs.pop("cross_attention_hidden_size", None) - self.add_cross_attention = kwargs.pop("add_cross_attention", False) - self.tie_encoder_decoder = kwargs.pop("tie_encoder_decoder", False) - - # Parameters for sequence generation - self.max_length = kwargs.pop("max_length", 20) - self.min_length = kwargs.pop("min_length", 0) - self.do_sample = kwargs.pop("do_sample", False) - self.early_stopping = kwargs.pop("early_stopping", False) - self.num_beams = kwargs.pop("num_beams", 1) - self.num_beam_groups = kwargs.pop("num_beam_groups", 1) - self.diversity_penalty = kwargs.pop("diversity_penalty", 0.0) - self.temperature = kwargs.pop("temperature", 1.0) - self.top_k = kwargs.pop("top_k", 50) - self.top_p = kwargs.pop("top_p", 1.0) - self.typical_p = kwargs.pop("typical_p", 1.0) - self.repetition_penalty = kwargs.pop("repetition_penalty", 1.0) - self.length_penalty = kwargs.pop("length_penalty", 1.0) - self.no_repeat_ngram_size = kwargs.pop("no_repeat_ngram_size", 0) - self.encoder_no_repeat_ngram_size = kwargs.pop("encoder_no_repeat_ngram_size", 0) - self.bad_words_ids = kwargs.pop("bad_words_ids", None) - self.num_return_sequences = kwargs.pop("num_return_sequences", 1) - self.chunk_size_feed_forward = kwargs.pop("chunk_size_feed_forward", 0) - self.output_scores = kwargs.pop("output_scores", False) - self.return_dict_in_generate = kwargs.pop("return_dict_in_generate", False) - self.forced_bos_token_id = kwargs.pop("forced_bos_token_id", None) - self.forced_eos_token_id = kwargs.pop("forced_eos_token_id", None) - self.remove_invalid_values = kwargs.pop("remove_invalid_values", False) - self.exponential_decay_length_penalty = kwargs.pop("exponential_decay_length_penalty", None) - self.suppress_tokens = kwargs.pop("suppress_tokens", None) - self.begin_suppress_tokens = kwargs.pop("begin_suppress_tokens", None) - - # Fine-tuning task arguments - self.architectures = kwargs.pop("architectures", None) - self.finetuning_task = kwargs.pop("finetuning_task", None) - self.id2label = kwargs.pop("id2label", None) - self.label2id = kwargs.pop("label2id", None) - if self.label2id is not None and not isinstance(self.label2id, dict): - raise ValueError("Argument label2id should be a dictionary.") - if self.id2label is not None: - if not isinstance(self.id2label, dict): - raise ValueError("Argument id2label should be a dictionary.") - num_labels = kwargs.pop("num_labels", None) - if num_labels is not None and len(self.id2label) != num_labels: - logger.warning( - f"You passed along `num_labels={num_labels}` with an incompatible id to label map: " - f"{self.id2label}. The number of labels wil be overwritten to {self.num_labels}." - ) - self.id2label = {int(key): value for key, value in self.id2label.items()} - # Keys are always strings in JSON so convert ids to int here. 
- else: - self.num_labels = kwargs.pop("num_labels", 2) - - if self.torch_dtype is not None and isinstance(self.torch_dtype, str): - # we will start using self.torch_dtype in v5, but to be consistent with - # from_pretrained's torch_dtype arg convert it to an actual torch.dtype object - if is_torch_available(): - import torch - - self.torch_dtype = getattr(torch, self.torch_dtype) - - # Tokenizer arguments TODO: eventually tokenizer and models should share the same config - self.tokenizer_class = kwargs.pop("tokenizer_class", None) - self.prefix = kwargs.pop("prefix", None) - self.bos_token_id = kwargs.pop("bos_token_id", None) - self.pad_token_id = kwargs.pop("pad_token_id", None) - self.eos_token_id = kwargs.pop("eos_token_id", None) - self.sep_token_id = kwargs.pop("sep_token_id", None) - - self.decoder_start_token_id = kwargs.pop("decoder_start_token_id", None) - - # task specific arguments - self.task_specific_params = kwargs.pop("task_specific_params", None) - - # regression / multi-label classification - self.problem_type = kwargs.pop("problem_type", None) - allowed_problem_types = ("regression", "single_label_classification", "multi_label_classification") - if self.problem_type is not None and self.problem_type not in allowed_problem_types: - raise ValueError( - f"The config parameter `problem_type` was not understood: received {self.problem_type} " - "but only 'regression', 'single_label_classification' and 'multi_label_classification' are valid." - ) - - # TPU arguments - if kwargs.pop("xla_device", None) is not None: - logger.warning( - "The `xla_device` argument has been deprecated in v4.4.0 of Transformers. It is ignored and you can " - "safely remove it from your `config.json` file." - ) - - # Name or path to the pretrained checkpoint - self._name_or_path = str(kwargs.pop("name_or_path", "")) - # Config hash - self._commit_hash = kwargs.pop("_commit_hash", None) - - # Drop the transformers version info - self.transformers_version = kwargs.pop("transformers_version", None) - - # Deal with gradient checkpointing - if kwargs.get("gradient_checkpointing", False): - warnings.warn( - "Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 " - "Transformers. Using `model.gradient_checkpointing_enable()` instead, or if you are using the " - "`Trainer` API, pass `gradient_checkpointing=True` in your `TrainingArguments`." - ) - - # Additional attributes without default values - for key, value in kwargs.items(): - try: - setattr(self, key, value) - except AttributeError as err: - logger.error(f"Can't set {key} with value {value} for {self}") - raise err - - @property - def name_or_path(self) -> str: - return getattr(self, "_name_or_path", None) - - @name_or_path.setter - def name_or_path(self, value): - self._name_or_path = str(value) # Make sure that name_or_path is a string (for JSON encoding) - - @property - def use_return_dict(self) -> bool: - """ - `bool`: Whether or not return [`~utils.ModelOutput`] instead of tuples. - """ - # If torchscript is set, force `return_dict=False` to avoid jit errors - return self.return_dict and not self.torchscript - - @property - def num_labels(self) -> int: - """ - `int`: The number of labels for classification models. 
- """ - return len(self.id2label) - - @num_labels.setter - def num_labels(self, num_labels: int): - if not hasattr(self, "id2label") or self.id2label is None or len(self.id2label) != num_labels: - self.id2label = {i: f"LABEL_{i}" for i in range(num_labels)} - self.label2id = dict(zip(self.id2label.values(), self.id2label.keys())) - - def save_pretrained(self, save_directory: Union[str, os.PathLike], push_to_hub: bool = False, **kwargs): - """ - Save a configuration object to the directory `save_directory`, so that it can be re-loaded using the - [`~PretrainedConfig.from_pretrained`] class method. - - Args: - save_directory (`str` or `os.PathLike`): - Directory where the configuration JSON file will be saved (will be created if it does not exist). - push_to_hub (`bool`, *optional*, defaults to `False`): - Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the - repository you want to push to with `repo_id` (will default to the name of `save_directory` in your - namespace). - kwargs (`Dict[str, Any]`, *optional*): - Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method. - """ - self._set_token_in_kwargs(kwargs) - - if os.path.isfile(save_directory): - raise AssertionError(f"Provided path ({save_directory}) should be a directory, not a file") - - os.makedirs(save_directory, exist_ok=True) - - if push_to_hub: - commit_message = kwargs.pop("commit_message", None) - repo_id = kwargs.pop("repo_id", save_directory.split(os.path.sep)[-1]) - repo_id = self._create_repo(repo_id, **kwargs) - files_timestamps = self._get_files_timestamps(save_directory) - - # If we have a custom config, we copy the file defining it in the folder and set the attributes so it can be - # loaded from the Hub. - if self._auto_class is not None: - custom_object_save(self, save_directory, config=self) - - # If we save using the predefined names, we can load using `from_pretrained` - output_config_file = os.path.join(save_directory, CONFIG_NAME) - - self.to_json_file(output_config_file, use_diff=True) - logger.info(f"Configuration saved in {output_config_file}") - - if push_to_hub: - self._upload_modified_files( - save_directory, - repo_id, - files_timestamps, - commit_message=commit_message, - token=kwargs.get("token"), - ) - - @staticmethod - def _set_token_in_kwargs(kwargs, token=None): - """Temporary method to deal with `token` and `use_auth_token`. - - This method is to avoid apply the same changes in all model config classes that overwrite `from_pretrained`. - - Need to clean up `use_auth_token` in a follow PR. - """ - # Some model config classes like CLIP define their own `from_pretrained` without the new argument `token` yet. - if token is None: - token = kwargs.pop("token", None) - use_auth_token = kwargs.pop("use_auth_token", None) - - if use_auth_token is not None: - warnings.warn( - "The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers.", FutureWarning - ) - if token is not None: - raise ValueError( - "`token` and `use_auth_token` are both specified. Please set only the argument `token`." 
- ) - token = use_auth_token - - if token is not None: - kwargs["token"] = token - - @classmethod - def from_pretrained( - cls, - pretrained_model_name_or_path: Union[str, os.PathLike], - cache_dir: Optional[Union[str, os.PathLike]] = None, - force_download: bool = False, - local_files_only: bool = False, - token: Optional[Union[str, bool]] = None, - revision: str = "main", - **kwargs, - ) -> "PretrainedConfig": - r""" - Instantiate a [`PretrainedConfig`] (or a derived class) from a pretrained model configuration. - - Args: - pretrained_model_name_or_path (`str` or `os.PathLike`): - This can be either: - - - a string, the *model id* of a pretrained model configuration hosted inside a model repo on - huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or - namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`. - - a path to a *directory* containing a configuration file saved using the - [`~PretrainedConfig.save_pretrained`] method, e.g., `./my_model_directory/`. - - a path or url to a saved configuration JSON *file*, e.g., `./my_model_directory/configuration.json`. - cache_dir (`str` or `os.PathLike`, *optional*): - Path to a directory in which a downloaded pretrained model configuration should be cached if the - standard cache should not be used. - force_download (`bool`, *optional*, defaults to `False`): - Whether or not to force to (re-)download the configuration files and override the cached versions if - they exist. - resume_download (`bool`, *optional*, defaults to `False`): - Whether or not to delete incompletely received file. Attempts to resume the download if such a file - exists. - proxies (`Dict[str, str]`, *optional*): - A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', - 'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request. - token (`str` or `bool`, *optional*): - The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use - the token generated when running `huggingface-cli login` (stored in `~/.huggingface`). - revision (`str`, *optional*, defaults to `"main"`): - The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a - git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any - identifier allowed by git. - - - - To test a pull request you made on the Hub, you can pass `revision="refs/pr/". - - - - return_unused_kwargs (`bool`, *optional*, defaults to `False`): - If `False`, then this function returns just the final configuration object. - - If `True`, then this functions returns a `Tuple(config, unused_kwargs)` where *unused_kwargs* is a - dictionary consisting of the key/value pairs whose keys are not configuration attributes: i.e., the - part of `kwargs` which has not been used to update `config` and is otherwise ignored. - subfolder (`str`, *optional*, defaults to `""`): - In case the relevant files are located inside a subfolder of the model repo on huggingface.co, you can - specify the folder name here. - kwargs (`Dict[str, Any]`, *optional*): - The values in kwargs of any keys which are configuration attributes will be used to override the loaded - values. Behavior concerning key/value pairs whose keys are *not* configuration attributes is controlled - by the `return_unused_kwargs` keyword parameter. - - Returns: - [`PretrainedConfig`]: The configuration object instantiated from this pretrained model. 
- - Examples: - - ```python - # We can't instantiate directly the base class *PretrainedConfig* so let's show the examples on a - # derived class: BertConfig - config = BertConfig.from_pretrained( - "bert-base-uncased" - ) # Download configuration from huggingface.co and cache. - config = BertConfig.from_pretrained( - "./test/saved_model/" - ) # E.g. config (or model) was saved using *save_pretrained('./test/saved_model/')* - config = BertConfig.from_pretrained("./test/saved_model/my_configuration.json") - config = BertConfig.from_pretrained("bert-base-uncased", output_attentions=True, foo=False) - assert config.output_attentions == True - config, unused_kwargs = BertConfig.from_pretrained( - "bert-base-uncased", output_attentions=True, foo=False, return_unused_kwargs=True - ) - assert config.output_attentions == True - assert unused_kwargs == {"foo": False} - ```""" - kwargs["cache_dir"] = cache_dir - kwargs["force_download"] = force_download - kwargs["local_files_only"] = local_files_only - kwargs["revision"] = revision - - cls._set_token_in_kwargs(kwargs, token) - - config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) - if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type: - logger.warning( - f"You are using a model of type {config_dict['model_type']} to instantiate a model of type " - f"{cls.model_type}. This is not supported for all configurations of models and can yield errors." - ) - - return cls.from_dict(config_dict, **kwargs) - - @classmethod - def get_config_dict( - cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs - ) -> Tuple[Dict[str, Any], Dict[str, Any]]: - """ - From a `pretrained_model_name_or_path`, resolve to a dictionary of parameters, to be used for instantiating a - [`PretrainedConfig`] using `from_dict`. - - Parameters: - pretrained_model_name_or_path (`str` or `os.PathLike`): - The identifier of the pre-trained checkpoint from which we want the dictionary of parameters. - - Returns: - `Tuple[Dict, Dict]`: The dictionary(ies) that will be used to instantiate the configuration object. - - """ - cls._set_token_in_kwargs(kwargs) - - original_kwargs = copy.deepcopy(kwargs) - # Get config dict associated with the base config file - config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs) - if "_commit_hash" in config_dict: - original_kwargs["_commit_hash"] = config_dict["_commit_hash"] - - # That config file may point us toward another config file to use. 
- if "configuration_files" in config_dict: - configuration_file = get_configuration_file(config_dict["configuration_files"]) - config_dict, kwargs = cls._get_config_dict( - pretrained_model_name_or_path, _configuration_file=configuration_file, **original_kwargs - ) - - return config_dict, kwargs - - @classmethod - def _get_config_dict( - cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs - ) -> Tuple[Dict[str, Any], Dict[str, Any]]: - cache_dir = kwargs.pop("cache_dir", None) - force_download = kwargs.pop("force_download", False) - resume_download = kwargs.pop("resume_download", False) - proxies = kwargs.pop("proxies", None) - token = kwargs.pop("token", None) - local_files_only = kwargs.pop("local_files_only", False) - revision = kwargs.pop("revision", None) - trust_remote_code = kwargs.pop("trust_remote_code", None) - subfolder = kwargs.pop("subfolder", "") - from_pipeline = kwargs.pop("_from_pipeline", None) - from_auto_class = kwargs.pop("_from_auto", False) - commit_hash = kwargs.pop("_commit_hash", None) - - if trust_remote_code is True: - logger.warning( - "The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is" - " ignored." - ) - - user_agent = {"file_type": "config", "from_auto_class": from_auto_class} - if from_pipeline is not None: - user_agent["using_pipeline"] = from_pipeline - - pretrained_model_name_or_path = str(pretrained_model_name_or_path) - - is_local = os.path.isdir(pretrained_model_name_or_path) - if os.path.isfile(os.path.join(subfolder, pretrained_model_name_or_path)): - # Special case when pretrained_model_name_or_path is a local file - resolved_config_file = pretrained_model_name_or_path - is_local = True - elif is_remote_url(pretrained_model_name_or_path): - configuration_file = pretrained_model_name_or_path - resolved_config_file = download_url(pretrained_model_name_or_path) - else: - configuration_file = kwargs.pop("_configuration_file", CONFIG_NAME) - - try: - # Load from local folder or from cache or download from model Hub and cache - resolved_config_file = cached_file( - pretrained_model_name_or_path, - configuration_file, - cache_dir=cache_dir, - force_download=force_download, - proxies=proxies, - resume_download=resume_download, - local_files_only=local_files_only, - token=token, - user_agent=user_agent, - revision=revision, - subfolder=subfolder, - _commit_hash=commit_hash, - ) - commit_hash = extract_commit_hash(resolved_config_file, commit_hash) - except EnvironmentError: - # Raise any environment error raise by `cached_file`. It will have a helpful error message adapted to - # the original exception. - raise - except Exception: - # For any other exception, we throw a generic error. - raise EnvironmentError( - f"Can't load the configuration of '{pretrained_model_name_or_path}'. If you were trying to load it" - " from 'https://huggingface.co/models', make sure you don't have a local directory with the same" - f" name. Otherwise, make sure '{pretrained_model_name_or_path}' is the correct path to a directory" - f" containing a {configuration_file} file" - ) - - try: - # Load config dict - config_dict = cls._dict_from_json_file(resolved_config_file) - config_dict["_commit_hash"] = commit_hash - except (json.JSONDecodeError, UnicodeDecodeError): - raise EnvironmentError( - f"It looks like the config file at '{resolved_config_file}' is not a valid JSON file." 
- ) - - if is_local: - logger.info(f"loading configuration file {resolved_config_file}") - else: - logger.info(f"loading configuration file {configuration_file} from cache at {resolved_config_file}") - - if "auto_map" in config_dict and not is_local: - config_dict["auto_map"] = add_model_info_to_auto_map( - config_dict["auto_map"], pretrained_model_name_or_path - ) - return config_dict, kwargs - - @classmethod - def from_dict(cls, config_dict: Dict[str, Any], **kwargs) -> "PretrainedConfig": - """ - Instantiates a [`PretrainedConfig`] from a Python dictionary of parameters. - - Args: - config_dict (`Dict[str, Any]`): - Dictionary that will be used to instantiate the configuration object. Such a dictionary can be - retrieved from a pretrained checkpoint by leveraging the [`~PretrainedConfig.get_config_dict`] method. - kwargs (`Dict[str, Any]`): - Additional parameters from which to initialize the configuration object. - - Returns: - [`PretrainedConfig`]: The configuration object instantiated from those parameters. - """ - return_unused_kwargs = kwargs.pop("return_unused_kwargs", False) - # Those arguments may be passed along for our internal telemetry. - # We remove them so they don't appear in `return_unused_kwargs`. - kwargs.pop("_from_auto", None) - kwargs.pop("_from_pipeline", None) - # The commit hash might have been updated in the `config_dict`, we don't want the kwargs to erase that update. - if "_commit_hash" in kwargs and "_commit_hash" in config_dict: - kwargs["_commit_hash"] = config_dict["_commit_hash"] - - config = cls(**config_dict) - - if hasattr(config, "pruned_heads"): - config.pruned_heads = {int(key): value for key, value in config.pruned_heads.items()} - - # Update config with kwargs if needed - if "num_labels" in kwargs and "id2label" in kwargs: - num_labels = kwargs["num_labels"] - id2label = kwargs["id2label"] if kwargs["id2label"] is not None else [] - if len(id2label) != num_labels: - raise ValueError( - f"You passed along `num_labels={num_labels }` with an incompatible id to label map: " - f"{kwargs['id2label']}. Since those arguments are inconsistent with each other, you should remove " - "one of them." - ) - to_remove = [] - for key, value in kwargs.items(): - if hasattr(config, key): - current_attr = getattr(config, key) - # To authorize passing a custom subconfig as kwarg in models that have nested configs. - if isinstance(current_attr, PretrainedConfig) and isinstance(value, dict): - value = current_attr.__class__(**value) - setattr(config, key, value) - if key != "torch_dtype": - to_remove.append(key) - for key in to_remove: - kwargs.pop(key, None) - - logger.info(f"Model config {config}") - if return_unused_kwargs: - return config, kwargs - else: - return config - - @classmethod - def from_json_file(cls, json_file: Union[str, os.PathLike]) -> "PretrainedConfig": - """ - Instantiates a [`PretrainedConfig`] from the path to a JSON file of parameters. - - Args: - json_file (`str` or `os.PathLike`): - Path to the JSON file containing the parameters. - - Returns: - [`PretrainedConfig`]: The configuration object instantiated from that JSON file. 
- - """ - config_dict = cls._dict_from_json_file(json_file) - return cls(**config_dict) - - @classmethod - def _dict_from_json_file(cls, json_file: Union[str, os.PathLike]): - with open(json_file, "r", encoding="utf-8") as reader: - text = reader.read() - return json.loads(text) - - def __eq__(self, other): - return isinstance(other, PretrainedConfig) and (self.__dict__ == other.__dict__) - - def __repr__(self): - return f"{self.__class__.__name__} {self.to_json_string()}" - - def to_diff_dict(self) -> Dict[str, Any]: - """ - Removes all attributes from config which correspond to the default config attributes for better readability and - serializes to a Python dictionary. - - Returns: - `Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance, - """ - config_dict = self.to_dict() - - # get the default config dict - default_config_dict = PretrainedConfig().to_dict() - - # get class specific config dict - class_config_dict = self.__class__().to_dict() if not self.is_composition else {} - - serializable_config_dict = {} - - # only serialize values that differ from the default config - for key, value in config_dict.items(): - if ( - isinstance(getattr(self, key, None), PretrainedConfig) - and key in class_config_dict - and isinstance(class_config_dict[key], dict) - ): - # For nested configs we need to clean the diff recursively - diff = recursive_diff_dict(value, class_config_dict[key], config_obj=getattr(self, key, None)) - if "model_type" in value: - # Needs to be set even if it's not in the diff - diff["model_type"] = value["model_type"] - if len(diff) > 0: - serializable_config_dict[key] = diff - elif ( - key not in default_config_dict - or key == "transformers_version" - or value != default_config_dict[key] - or (key in class_config_dict and value != class_config_dict[key]) - ): - serializable_config_dict[key] = value - - if hasattr(self, "quantization_config"): - serializable_config_dict["quantization_config"] = ( - self.quantization_config.to_dict() - if not isinstance(self.quantization_config, dict) - else self.quantization_config - ) - - self.dict_torch_dtype_to_str(serializable_config_dict) - - if "_flash_attn_2_enabled" in serializable_config_dict: - del serializable_config_dict["_flash_attn_2_enabled"] - - return serializable_config_dict - - def to_dict(self) -> Dict[str, Any]: - """ - Serializes this instance to a Python dictionary. - - Returns: - `Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance. - """ - output = copy.deepcopy(self.__dict__) - if hasattr(self.__class__, "model_type"): - output["model_type"] = self.__class__.model_type - if "_auto_class" in output: - del output["_auto_class"] - if "_commit_hash" in output: - del output["_commit_hash"] - if "_flash_attn_2_enabled" in output: - del output["_flash_attn_2_enabled"] - - # Transformers version when serializing the model - output["transformers_version"] = __version__ - - for key, value in output.items(): - # Deal with nested configs like CLIP - if isinstance(value, PretrainedConfig): - value = value.to_dict() - del value["transformers_version"] - - output[key] = value - - if hasattr(self, "quantization_config"): - output["quantization_config"] = ( - self.quantization_config.to_dict() - if not isinstance(self.quantization_config, dict) - else self.quantization_config - ) - - self.dict_torch_dtype_to_str(output) - - return output - - def to_json_string(self, use_diff: bool = True) -> str: - """ - Serializes this instance to a JSON string. 
- - Args: - use_diff (`bool`, *optional*, defaults to `True`): - If set to `True`, only the difference between the config instance and the default `PretrainedConfig()` - is serialized to JSON string. - - Returns: - `str`: String containing all the attributes that make up this configuration instance in JSON format. - """ - if use_diff is True: - config_dict = self.to_diff_dict() - else: - config_dict = self.to_dict() - return json.dumps(config_dict, indent=2, sort_keys=True) + "\n" - - def to_json_file(self, json_file_path: Union[str, os.PathLike], use_diff: bool = True): - """ - Save this instance to a JSON file. - - Args: - json_file_path (`str` or `os.PathLike`): - Path to the JSON file in which this configuration instance's parameters will be saved. - use_diff (`bool`, *optional*, defaults to `True`): - If set to `True`, only the difference between the config instance and the default `PretrainedConfig()` - is serialized to JSON file. - """ - with open(json_file_path, "w", encoding="utf-8") as writer: - writer.write(self.to_json_string(use_diff=use_diff)) - - def update(self, config_dict: Dict[str, Any]): - """ - Updates attributes of this class with attributes from `config_dict`. - - Args: - config_dict (`Dict[str, Any]`): Dictionary of attributes that should be updated for this class. - """ - for key, value in config_dict.items(): - setattr(self, key, value) - - def update_from_string(self, update_str: str): - """ - Updates attributes of this class with attributes from `update_str`. - - The expected format is ints, floats and strings as is, and for booleans use `true` or `false`. For example: - "n_embd=10,resid_pdrop=0.2,scale_attn_weights=false,summary_type=cls_index" - - The keys to change have to already exist in the config object. - - Args: - update_str (`str`): String with attributes that should be updated for this class. - - """ - - d = dict(x.split("=") for x in update_str.split(",")) - for k, v in d.items(): - if not hasattr(self, k): - raise ValueError(f"key {k} isn't in the original config dict") - - old_v = getattr(self, k) - if isinstance(old_v, bool): - if v.lower() in ["true", "1", "y", "yes"]: - v = True - elif v.lower() in ["false", "0", "n", "no"]: - v = False - else: - raise ValueError(f"can't derive true or false from {v} (key {k})") - elif isinstance(old_v, int): - v = int(v) - elif isinstance(old_v, float): - v = float(v) - elif not isinstance(old_v, str): - raise ValueError( - f"You can only update int, float, bool or string values in the config, got {v} for key {k}" - ) - - setattr(self, k, v) - - def dict_torch_dtype_to_str(self, d: Dict[str, Any]) -> None: - """ - Checks whether the passed dictionary and its nested dicts have a *torch_dtype* key and if it's not None, - converts torch.dtype to a string of just the type. For example, `torch.float32` get converted into *"float32"* - string, which can then be stored in the json format. - """ - if d.get("torch_dtype", None) is not None and not isinstance(d["torch_dtype"], str): - d["torch_dtype"] = str(d["torch_dtype"]).split(".")[1] - for value in d.values(): - if isinstance(value, dict): - self.dict_torch_dtype_to_str(value) - - @classmethod - def register_for_auto_class(cls, auto_class="AutoConfig"): - """ - Register this class with a given auto class. This should only be used for custom configurations as the ones in - the library are already mapped with `AutoConfig`. - - - - This API is experimental and may have some slight breaking changes in the next releases. 
- - - - Args: - auto_class (`str` or `type`, *optional*, defaults to `"AutoConfig"`): - The auto class to register this new configuration with. - """ - if not isinstance(auto_class, str): - auto_class = auto_class.__name__ - - import transformers.models.auto as auto_module - - if not hasattr(auto_module, auto_class): - raise ValueError(f"{auto_class} is not a valid auto class.") - - cls._auto_class = auto_class - - -def get_configuration_file(configuration_files: List[str]) -> str: - """ - Get the configuration file to use for this version of transformers. - - Args: - configuration_files (`List[str]`): The list of available configuration files. - - Returns: - `str`: The configuration file to use. - """ - configuration_files_map = {} - for file_name in configuration_files: - search = _re_configuration_file.search(file_name) - if search is not None: - v = search.groups()[0] - configuration_files_map[v] = file_name - available_versions = sorted(configuration_files_map.keys()) - - # Defaults to FULL_CONFIGURATION_FILE and then try to look at some newer versions. - configuration_file = CONFIG_NAME - transformers_version = version.parse(__version__) - for v in available_versions: - if version.parse(v) <= transformers_version: - configuration_file = configuration_files_map[v] - else: - # No point going further since the versions are sorted. - break - - return configuration_file - - -def recursive_diff_dict(dict_a, dict_b, config_obj=None): - """ - Helper function to recursively take the diff between two nested dictionaries. The resulting diff only contains the - values from `dict_a` that are different from values in `dict_b`. - """ - diff = {} - default = config_obj.__class__().to_dict() if config_obj is not None else {} - for key, value in dict_a.items(): - obj_value = getattr(config_obj, str(key), None) - if isinstance(obj_value, PretrainedConfig) and key in dict_b and isinstance(dict_b[key], dict): - diff_value = recursive_diff_dict(value, dict_b[key], config_obj=obj_value) - if len(diff_value) > 0: - diff[key] = diff_value - elif key not in dict_b or value != dict_b[key] or key not in default or value != default[key]: - diff[key] = value - return diff - - -PretrainedConfig.push_to_hub = copy_func(PretrainedConfig.push_to_hub) -if PretrainedConfig.push_to_hub.__doc__ is not None: - PretrainedConfig.push_to_hub.__doc__ = PretrainedConfig.push_to_hub.__doc__.format( - object="config", object_class="AutoConfig", object_files="configuration file" - ) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/gptj/modeling_gptj.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/gptj/modeling_gptj.py deleted file mode 100644 index a93bdeaacd9d2332319e9fe1b0ce0c18ac716c75..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/gptj/modeling_gptj.py +++ /dev/null @@ -1,1151 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The EleutherAI and HuggingFace Teams. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. -""" PyTorch GPT-J model.""" - -import warnings -from typing import Optional, Tuple, Union - -import torch -import torch.fx -import torch.utils.checkpoint -from torch import nn -from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss - -from ...activations import ACT2FN -from ...modeling_outputs import ( - BaseModelOutputWithPast, - CausalLMOutputWithPast, - QuestionAnsweringModelOutput, - SequenceClassifierOutputWithPast, -) -from ...modeling_utils import PreTrainedModel -from ...utils import ( - add_code_sample_docstrings, - add_start_docstrings, - add_start_docstrings_to_model_forward, - is_torch_fx_proxy, - logging, -) -from ...utils.model_parallel_utils import assert_device_map, get_device_map -from .configuration_gptj import GPTJConfig - - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "hf-internal-testing/tiny-random-gptj" -_REAL_CHECKPOINT_FOR_DOC = "EleutherAI/gpt-j-6B" -_CONFIG_FOR_DOC = "GPTJConfig" - - -GPTJ_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "EleutherAI/gpt-j-6B", - # See all GPT-J models at https://huggingface.co/models?filter=gptj -] - - -def create_sinusoidal_positions(num_pos: int, dim: int) -> torch.Tensor: - inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2) / dim)) - sinusoid_inp = torch.einsum("i , j -> i j", torch.arange(num_pos, dtype=torch.float), inv_freq).float() - return torch.cat((torch.sin(sinusoid_inp), torch.cos(sinusoid_inp)), dim=1) - - -@torch.fx.wrap -def get_embed_positions(embed_positions, position_ids): - return embed_positions.to(position_ids.device).repeat(position_ids.shape[0], 1, 1) - - -def rotate_every_two(x: torch.Tensor) -> torch.Tensor: - x1 = x[:, :, :, ::2] - x2 = x[:, :, :, 1::2] - x = torch.stack((-x2, x1), dim=-1) - return x.flatten(-2) # in einsum notation: rearrange(x, '... d j -> ... (d j)') - - -def apply_rotary_pos_emb(tensor: torch.Tensor, sin: torch.Tensor, cos: torch.Tensor) -> torch.Tensor: - sin = torch.repeat_interleave(sin[:, :, None, :], 2, 3) - cos = torch.repeat_interleave(cos[:, :, None, :], 2, 3) - return (tensor * cos) + (rotate_every_two(tensor) * sin) - - -class GPTJAttention(nn.Module): - def __init__(self, config): - super().__init__() - - max_positions = config.max_position_embeddings - self.register_buffer( - "bias", - torch.tril(torch.ones((max_positions, max_positions), dtype=torch.bool)).view( - 1, 1, max_positions, max_positions - ), - persistent=False, - ) - self.register_buffer("masked_bias", torch.tensor(-1e9), persistent=False) - - self.attn_dropout = nn.Dropout(config.attn_pdrop) - self.resid_dropout = nn.Dropout(config.resid_pdrop) - - self.embed_dim = config.hidden_size - self.num_attention_heads = config.num_attention_heads - self.head_dim = self.embed_dim // self.num_attention_heads - if self.head_dim * self.num_attention_heads != self.embed_dim: - raise ValueError( - f"embed_dim must be divisible by num_attention_heads (got `embed_dim`: {self.embed_dim} and" - f" `num_attention_heads`: {self.num_attention_heads})." 
- ) - self.scale_attn = torch.sqrt(torch.tensor(self.head_dim, dtype=torch.float32)).to(torch.get_default_dtype()) - - self.k_proj = nn.Linear(self.embed_dim, self.embed_dim, bias=False) - self.v_proj = nn.Linear(self.embed_dim, self.embed_dim, bias=False) - self.q_proj = nn.Linear(self.embed_dim, self.embed_dim, bias=False) - self.out_proj = nn.Linear(self.embed_dim, self.embed_dim, bias=False) - self.rotary_dim = config.rotary_dim - pos_embd_dim = self.rotary_dim or self.embed_dim - self.embed_positions = create_sinusoidal_positions(max_positions, pos_embd_dim) - - def _split_heads(self, tensor, num_attention_heads, attn_head_size, rotary): - """ - Splits hidden dim into attn_head_size and num_attention_heads - """ - new_shape = tensor.size()[:-1] + (num_attention_heads, attn_head_size) - tensor = tensor.view(new_shape) - if rotary: - return tensor - if len(tensor.shape) == 5: - return tensor.permute(0, 1, 3, 2, 4) # (batch, blocks, head, block_length, head_features) - elif len(tensor.shape) == 4: - return tensor.permute(0, 2, 1, 3) # (batch, head, seq_length, head_features) - else: - raise ValueError(f"Input tensor rank should be one of [4, 5], but is: {len(tensor.shape)}") - - def _merge_heads(self, tensor, num_attention_heads, attn_head_size): - """ - Merges attn_head_size dim and num_attn_heads dim into hidden dim - """ - if len(tensor.shape) == 5: - tensor = tensor.permute(0, 1, 3, 2, 4).contiguous() - elif len(tensor.shape) == 4: - tensor = tensor.permute(0, 2, 1, 3).contiguous() - else: - raise ValueError(f"Input tensor rank should be one of [4, 5], but is: {len(tensor.shape)}") - new_shape = tensor.size()[:-2] + (num_attention_heads * attn_head_size,) - return tensor.view(new_shape) - - def _attn( - self, - query, - key, - value, - attention_mask=None, - head_mask=None, - ): - # compute causal mask from causal mask buffer - query_length, key_length = query.size(-2), key.size(-2) - causal_mask = self.bias[:, :, key_length - query_length : key_length, :key_length] - - # Keep the attention weights computation in fp32 to avoid overflow issues - query = query.to(torch.float32) - key = key.to(torch.float32) - - attn_weights = torch.matmul(query, key.transpose(-1, -2)) - - mask_value = torch.finfo(attn_weights.dtype).min - # Need to be a tensor, otherwise we get error: `RuntimeError: expected scalar type float but found double`. 
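            # Using the most negative finite value of the dtype (rather than -inf) keeps
            # `mask_value` a regular, finite tensor value while still driving masked
            # positions to effectively zero weight after the softmax below.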
- # Need to be on the same device, otherwise `RuntimeError: ..., x and y to be on the same device` - mask_value = torch.tensor(mask_value, dtype=attn_weights.dtype).to(attn_weights.device) - attn_weights = torch.where(causal_mask, attn_weights, mask_value) - - attn_weights = attn_weights / self.scale_attn - - if attention_mask is not None: - # Apply the attention mask - attn_weights = attn_weights + attention_mask - - attn_weights = nn.functional.softmax(attn_weights, dim=-1) - attn_weights = attn_weights.to(value.dtype) - attn_weights = self.attn_dropout(attn_weights) - - # Mask heads if we want to - if head_mask is not None: - attn_weights = attn_weights * head_mask - - attn_output = torch.matmul(attn_weights, value) - - return attn_output, attn_weights - - def _get_embed_positions(self, position_ids): - embed_positions = self.embed_positions - if embed_positions.device != position_ids.device: - embed_positions = embed_positions.to(position_ids.device) - self.embed_positions = embed_positions - return embed_positions.repeat(position_ids.shape[0], 1, 1) - - def forward( - self, - hidden_states: torch.FloatTensor, - layer_past: Optional[Tuple[torch.Tensor]] = None, - attention_mask: Optional[torch.FloatTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - use_cache: Optional[bool] = False, - output_attentions: Optional[bool] = False, - ) -> Union[ - Tuple[torch.Tensor, Tuple[torch.Tensor]], - Optional[Tuple[torch.Tensor, Tuple[torch.Tensor], Tuple[torch.Tensor, ...]]], - ]: - query = self.q_proj(hidden_states) - key = self.k_proj(hidden_states) - value = self.v_proj(hidden_states) - - query = self._split_heads(query, self.num_attention_heads, self.head_dim, True) - key = self._split_heads(key, self.num_attention_heads, self.head_dim, True) - value = self._split_heads(value, self.num_attention_heads, self.head_dim, False) - - if is_torch_fx_proxy(position_ids) or torch.jit.is_tracing(): - # The logic to conditionally copy to GPU could not be traced, so we do this - # every time in the torch.fx case - embed_positions = get_embed_positions(self.embed_positions, position_ids) - else: - embed_positions = self._get_embed_positions(position_ids) - - repeated_position_ids = position_ids.unsqueeze(-1).repeat(1, 1, embed_positions.shape[-1]) - sincos = torch.gather(embed_positions, 1, repeated_position_ids) - sin, cos = torch.split(sincos, sincos.shape[-1] // 2, dim=-1) - - if self.rotary_dim is not None: - k_rot = key[:, :, :, : self.rotary_dim] - k_pass = key[:, :, :, self.rotary_dim :] - - q_rot = query[:, :, :, : self.rotary_dim] - q_pass = query[:, :, :, self.rotary_dim :] - - k_rot = apply_rotary_pos_emb(k_rot, sin, cos) - q_rot = apply_rotary_pos_emb(q_rot, sin, cos) - - key = torch.cat([k_rot, k_pass], dim=-1) - query = torch.cat([q_rot, q_pass], dim=-1) - else: - key = apply_rotary_pos_emb(key, sin, cos) - query = apply_rotary_pos_emb(query, sin, cos) - - key = key.permute(0, 2, 1, 3) - query = query.permute(0, 2, 1, 3) - - if layer_past is not None: - past_key = layer_past[0] - past_value = layer_past[1] - key = torch.cat((past_key, key), dim=-2) - value = torch.cat((past_value, value), dim=-2) - - if use_cache is True: - present = (key, value) - else: - present = None - - # compute self-attention: V x Softmax(QK^T) - attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask) - - attn_output = self._merge_heads(attn_output, self.num_attention_heads, self.head_dim) - attn_output = 
self.out_proj(attn_output) - attn_output = self.resid_dropout(attn_output) - - outputs = (attn_output, present) - if output_attentions: - outputs += (attn_weights,) - - return outputs # a, present, (attentions) - - -class GPTJMLP(nn.Module): - def __init__(self, intermediate_size, config): # in MLP: intermediate_size= 4 * embed_dim - super().__init__() - embed_dim = config.n_embd - - self.fc_in = nn.Linear(embed_dim, intermediate_size) - self.fc_out = nn.Linear(intermediate_size, embed_dim) - - self.act = ACT2FN[config.activation_function] - self.dropout = nn.Dropout(config.resid_pdrop) - - def forward(self, hidden_states: Optional[torch.FloatTensor]) -> torch.FloatTensor: - hidden_states = self.fc_in(hidden_states) - hidden_states = self.act(hidden_states) - hidden_states = self.fc_out(hidden_states) - hidden_states = self.dropout(hidden_states) - return hidden_states - - -class GPTJBlock(nn.Module): - def __init__(self, config): - super().__init__() - inner_dim = config.n_inner if config.n_inner is not None else 4 * config.n_embd - self.ln_1 = nn.LayerNorm(config.n_embd, eps=config.layer_norm_epsilon) - self.attn = GPTJAttention(config) - self.mlp = GPTJMLP(inner_dim, config) - - def forward( - self, - hidden_states: Optional[torch.FloatTensor], - layer_past: Optional[Tuple[torch.Tensor]] = None, - attention_mask: Optional[torch.FloatTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - use_cache: Optional[bool] = False, - output_attentions: Optional[bool] = False, - ) -> Union[Tuple[torch.Tensor], Optional[Tuple[torch.Tensor, Tuple[torch.FloatTensor, ...]]]]: - residual = hidden_states - hidden_states = self.ln_1(hidden_states) - attn_outputs = self.attn( - hidden_states=hidden_states, - layer_past=layer_past, - attention_mask=attention_mask, - position_ids=position_ids, - head_mask=head_mask, - use_cache=use_cache, - output_attentions=output_attentions, - ) - attn_output = attn_outputs[0] # output_attn: a, present, (attentions) - outputs = attn_outputs[1:] - - feed_forward_hidden_states = self.mlp(hidden_states) - hidden_states = attn_output + feed_forward_hidden_states + residual - - if use_cache: - outputs = (hidden_states,) + outputs - else: - outputs = (hidden_states,) + outputs[1:] - - return outputs # hidden_states, present, (attentions) - - -class GPTJPreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. 
- """ - - config_class = GPTJConfig - base_model_prefix = "transformer" - is_parallelizable = True - supports_gradient_checkpointing = True - _no_split_modules = ["GPTJBlock"] - _skip_keys_device_placement = "past_key_values" - - def __init__(self, *inputs, **kwargs): - super().__init__(*inputs, **kwargs) - - def _init_weights(self, module): - """Initialize the weights.""" - if isinstance(module, (nn.Linear,)): - # Slightly different from Mesh Transformer JAX which uses truncated_normal for initialization - # cf https://github.com/pytorch/pytorch/pull/5617 - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.Embedding): - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, GPTJModel): - module.gradient_checkpointing = value - - -GPTJ_START_DOCSTRING = r""" - This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use - it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and - behavior. - - Parameters: - config ([`GPTJConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -GPTJ_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `({0})`): - Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.FloatTensor` of shape `({0})`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - token_type_ids (`torch.LongTensor` of shape `({0})`, *optional*): - Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, - 1]`: - - - 0 corresponds to a *sentence A* token, - - 1 corresponds to a *sentence B* token. - - [What are token type IDs?](../glossary#token-type-ids) - position_ids (`torch.LongTensor` of shape `({0})`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.n_positions - 1]`. - - [What are position IDs?](../glossary#position-ids) - head_mask (`torch.FloatTensor` of shape `(num_attention_heads,)` or `(n_layer, num_attention_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - inputs_embeds (`torch.FloatTensor` of shape `({0}, hidden_dim)`, *optional*): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. 
This - is useful if you want more control over how to convert *input_ids* indices into associated vectors than the - model's internal embedding lookup matrix. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - -PARALLELIZE_DOCSTRING = r""" - This is an experimental feature and is a subject to change at a moment's notice. Uses a device map to distribute - attention modules of the model across several devices. If no device map is given, it will evenly distribute blocks - across all devices. - - Args: - device_map (`Dict[int, list]`, optional, defaults to None): - A dictionary that maps attention modules to devices. Note that the embedding module and LMHead are always - automatically mapped to the first device (for esoteric reasons). That means that the first device should - have fewer attention modules mapped to it than other devices. For reference, the GPT-J models have the - following number of attention modules: - - - gpt-j-6B: 28 - - Example: - - ```python - # Here is an example of a device map on a machine with 4 GPUs using gpt-j-6B, which has a total of 28 attention modules: - model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B") - device_map = { - 0: [0, 1, 2, 3, 4, 5, 6], - 1: [7, 8, 9, 10, 11, 12, 13], - 2: [14, 15, 16, 17, 18, 19, 20], - 3: [21, 22, 23, 24, 25, 26, 27], - } - model.parallelize(device_map) - ``` -""" - -DEPARALLELIZE_DOCSTRING = r""" - Moves the model to CPU from a model parallel state. - - Example: - - ```python - # On a 4 GPU machine with gpt-j-6B: - model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B") - device_map = { - 0: [0, 1, 2, 3, 4, 5, 6], - 1: [7, 8, 9, 10, 11, 12, 13], - 2: [14, 15, 16, 17, 18, 19, 20], - 3: [21, 22, 23, 24, 25, 26, 27], - } - model.parallelize(device_map) # Splits the model across several devices - model.deparallelize() # Put the model back on cpu and cleans memory by calling torch.cuda.empty_cache() - ``` -""" - - -@add_start_docstrings( - "The bare GPT-J Model transformer outputting raw hidden-states without any specific head on top.", - GPTJ_START_DOCSTRING, -) -class GPTJModel(GPTJPreTrainedModel): - def __init__(self, config): - super().__init__(config) - - self.embed_dim = config.n_embd - self.vocab_size = config.vocab_size - self.wte = nn.Embedding(config.vocab_size, self.embed_dim) - self.drop = nn.Dropout(config.embd_pdrop) - self.h = nn.ModuleList([GPTJBlock(config) for _ in range(config.n_layer)]) - self.ln_f = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_epsilon) - - # Model parallel - self.model_parallel = False - self.device_map = None - self.gradient_checkpointing = False - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings(PARALLELIZE_DOCSTRING) - def parallelize(self, device_map=None): - warnings.warn( - "`GPTJModel.parallelize` is deprecated and will be removed in v5 of Transformers, you should load your" - " model with `device_map='balanced'` in the call to `from_pretrained`. 
You can also provide your own" - " `device_map` but it needs to be a dictionary module_name to device, so for instance {'h.0': 0, 'h.1': 1," - " ...}", - FutureWarning, - ) - # Check validity of device_map - self.device_map = ( - get_device_map(len(self.h), range(torch.cuda.device_count())) if device_map is None else device_map - ) - assert_device_map(self.device_map, len(self.h)) - self.model_parallel = True - self.first_device = "cpu" if "cpu" in self.device_map.keys() else "cuda:" + str(min(self.device_map.keys())) - self.last_device = "cuda:" + str(max(self.device_map.keys())) - self.wte = self.wte.to(self.first_device) - # Load onto devices - for k, v in self.device_map.items(): - for block in v: - cuda_device = "cuda:" + str(k) - self.h[block] = self.h[block].to(cuda_device) - # ln_f to last - self.ln_f = self.ln_f.to(self.last_device) - - @add_start_docstrings(DEPARALLELIZE_DOCSTRING) - def deparallelize(self): - warnings.warn( - "Like `parallelize`, `deparallelize` is deprecated and will be removed in v5 of Transformers.", - FutureWarning, - ) - self.model_parallel = False - self.device_map = None - self.first_device = "cpu" - self.last_device = "cpu" - self.wte = self.wte.to("cpu") - for index in range(len(self.h)): - self.h[index] = self.h[index].to("cpu") - self.ln_f = self.ln_f.to("cpu") - torch.cuda.empty_cache() - - def get_input_embeddings(self): - return self.wte - - def set_input_embeddings(self, new_embeddings): - self.wte = new_embeddings - - @add_start_docstrings_to_model_forward(GPTJ_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=BaseModelOutputWithPast, - config_class=_CONFIG_FOR_DOC, - real_checkpoint=_REAL_CHECKPOINT_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithPast]: - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - use_cache = use_cache if use_cache is not None else self.config.use_cache - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask) - input_shape = input_ids.size() - input_ids = input_ids.view(-1, input_shape[-1]) - batch_size = input_ids.shape[0] - elif inputs_embeds is not None: - input_shape = inputs_embeds.size()[:-1] - batch_size = inputs_embeds.shape[0] - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - device = input_ids.device if input_ids is not None else inputs_embeds.device - - if token_type_ids is not None: - token_type_ids = token_type_ids.view(-1, input_shape[-1]) - - if past_key_values is None: - 
past_length = 0 - past_key_values = tuple([None] * len(self.h)) - else: - past_length = past_key_values[0][0].size(-2) - - if position_ids is None: - position_ids = torch.arange(past_length, input_shape[-1] + past_length, dtype=torch.long, device=device) - position_ids = position_ids.unsqueeze(0) - - # Attention mask. - if attention_mask is not None: - if batch_size <= 0: - raise ValueError("batch_size has to be defined and > 0") - attention_mask = attention_mask.view(batch_size, -1) - # We create a 3D attention mask from a 2D tensor mask. - # Sizes are [batch_size, 1, 1, to_seq_length] - # So we can broadcast to [batch_size, num_heads, from_seq_length, to_seq_length] - # this attention mask is more simple than the triangular masking of causal attention - # used in OpenAI GPT, we just need to prepare the broadcast dimension here. - attention_mask = attention_mask[:, None, None, :] - - # Since attention_mask is 1.0 for positions we want to attend and 0.0 for - # masked positions, this operation will create a tensor which is 0.0 for - # positions we want to attend and the dtype's smallest value for masked positions. - # Since we are adding it to the raw scores before the softmax, this is - # effectively the same as removing these entirely. - attention_mask = attention_mask.to(dtype=self.dtype) # fp16 compatibility - attention_mask = (1.0 - attention_mask) * torch.finfo(self.dtype).min - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x num_attention_heads x N x N - # head_mask has shape n_layer x batch x num_attention_heads x N x N - head_mask = self.get_head_mask(head_mask, self.config.n_layer) - - if inputs_embeds is None: - inputs_embeds = self.wte(input_ids) - - hidden_states = inputs_embeds - - if token_type_ids is not None: - token_type_embeds = self.wte(token_type_ids) - hidden_states = hidden_states + token_type_embeds - - hidden_states = self.drop(hidden_states) - - output_shape = (-1,) + input_shape[1:] + (hidden_states.size(-1),) - - if self.gradient_checkpointing and self.training: - if use_cache: - logger.warning_once( - "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." 
- ) - use_cache = False - - presents = () if use_cache else None - all_self_attentions = () if output_attentions else None - all_hidden_states = () if output_hidden_states else None - for i, (block, layer_past) in enumerate(zip(self.h, past_key_values)): - # Model parallel - if self.model_parallel: - torch.cuda.set_device(hidden_states.device) - # Ensure layer_past is on same device as hidden_states (might not be correct) - if layer_past is not None: - layer_past = tuple(past_state.to(hidden_states.device) for past_state in layer_past) - # Ensure that attention_mask is always on the same device as hidden_states - if attention_mask is not None: - attention_mask = attention_mask.to(hidden_states.device) - if isinstance(head_mask, torch.Tensor): - head_mask = head_mask.to(hidden_states.device) - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if self.gradient_checkpointing and self.training: - - def create_custom_forward(module): - def custom_forward(*inputs): - # None for past_key_value - return module(*inputs, use_cache, output_attentions) - - return custom_forward - - outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(block), - hidden_states, - None, - attention_mask, - position_ids, - head_mask[i], - ) - else: - outputs = block( - hidden_states=hidden_states, - layer_past=layer_past, - attention_mask=attention_mask, - position_ids=position_ids, - head_mask=head_mask[i], - use_cache=use_cache, - output_attentions=output_attentions, - ) - - hidden_states = outputs[0] - if use_cache is True: - presents = presents + (outputs[1],) - - if output_attentions: - all_self_attentions = all_self_attentions + (outputs[2 if use_cache else 1],) - - # Model Parallel: If it's the last layer for that device, put things on the next device - if self.model_parallel: - for k, v in self.device_map.items(): - if i == v[-1] and "cuda:" + str(k) != self.last_device: - hidden_states = hidden_states.to("cuda:" + str(k + 1)) - - hidden_states = self.ln_f(hidden_states) - - hidden_states = hidden_states.view(output_shape) - # Add last hidden state - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple(v for v in [hidden_states, presents, all_hidden_states, all_self_attentions] if v is not None) - - return BaseModelOutputWithPast( - last_hidden_state=hidden_states, - past_key_values=presents, - hidden_states=all_hidden_states, - attentions=all_self_attentions, - ) - - -@add_start_docstrings( - """ - The GPT-J Model transformer with a language modeling head on top. - """, - GPTJ_START_DOCSTRING, -) -class GPTJForCausalLM(GPTJPreTrainedModel): - _tied_weights_keys = ["lm_head.weight"] - - def __init__(self, config): - super().__init__(config) - self.transformer = GPTJModel(config) - self.lm_head = nn.Linear(config.n_embd, config.vocab_size) - - # Model parallel - self.model_parallel = False - self.device_map = None - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings(PARALLELIZE_DOCSTRING) - def parallelize(self, device_map=None): - warnings.warn( - "`GPTJForCausalLM.parallelize` is deprecated and will be removed in v5 of Transformers, you should load" - " your model with `device_map='balanced'` in the call to `from_pretrained`. 
You can also provide your own" - " `device_map` but it needs to be a dictionary module_name to device, so for instance {'transformer.h.0':" - " 0, 'transformer.h.1': 1, ...}", - FutureWarning, - ) - self.device_map = ( - get_device_map(len(self.transformer.h), range(torch.cuda.device_count())) - if device_map is None - else device_map - ) - assert_device_map(self.device_map, len(self.transformer.h)) - self.transformer.parallelize(self.device_map) - self.lm_head = self.lm_head.to(self.transformer.first_device) - self.model_parallel = True - - @add_start_docstrings(DEPARALLELIZE_DOCSTRING) - def deparallelize(self): - warnings.warn( - "Like `parallelize`, `deparallelize` is deprecated and will be removed in v5 of Transformers.", - FutureWarning, - ) - self.transformer.deparallelize() - self.transformer = self.transformer.to("cpu") - self.lm_head = self.lm_head.to("cpu") - self.model_parallel = False - torch.cuda.empty_cache() - - def get_output_embeddings(self): - return self.lm_head - - def set_output_embeddings(self, new_embeddings): - self.lm_head = new_embeddings - - def prepare_inputs_for_generation(self, input_ids, past_key_values=None, inputs_embeds=None, **kwargs): - token_type_ids = kwargs.get("token_type_ids", None) - # only last token for inputs_ids if past is defined in kwargs - if past_key_values: - input_ids = input_ids[:, -1].unsqueeze(-1) - if token_type_ids is not None: - token_type_ids = token_type_ids[:, -1].unsqueeze(-1) - - attention_mask = kwargs.get("attention_mask", None) - position_ids = kwargs.get("position_ids", None) - - if attention_mask is not None and position_ids is None: - # create position_ids on the fly for batch generation - position_ids = attention_mask.long().cumsum(-1) - 1 - position_ids.masked_fill_(attention_mask == 0, 1) - if past_key_values: - position_ids = position_ids[:, -1].unsqueeze(-1) - - # if `inputs_embeds` are passed, we only want to use them in the 1st generation step - if inputs_embeds is not None and past_key_values is None: - model_inputs = {"inputs_embeds": inputs_embeds} - else: - model_inputs = {"input_ids": input_ids} - - model_inputs.update( - { - "past_key_values": past_key_values, - "use_cache": kwargs.get("use_cache"), - "position_ids": position_ids, - "attention_mask": attention_mask, - "token_type_ids": token_type_ids, - } - ) - - return model_inputs - - @add_start_docstrings_to_model_forward(GPTJ_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=CausalLMOutputWithPast, - config_class=_CONFIG_FOR_DOC, - real_checkpoint=_REAL_CHECKPOINT_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, CausalLMOutputWithPast]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. 
you can set - `labels = input_ids` Indices are selected in `[-100, 0, ..., config.vocab_size]` All labels set to `-100` - are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size]` - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - transformer_outputs = self.transformer( - input_ids, - past_key_values=past_key_values, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - hidden_states = transformer_outputs[0] - - # Set device for model parallelism - if self.model_parallel: - torch.cuda.set_device(self.transformer.first_device) - hidden_states = hidden_states.to(self.lm_head.weight.device) - - # make sure sampling in fp16 works correctly and - # compute loss in fp32 to match with mesh-tf version - # https://github.com/EleutherAI/gpt-neo/blob/89ce74164da2fb16179106f54e2269b5da8db333/models/gpt2/gpt2.py#L179 - lm_logits = self.lm_head(hidden_states).to(torch.float32) - - loss = None - if labels is not None: - # move labels to correct device to enable model parallelism - labels = labels.to(lm_logits.device) - # Shift so that tokens < n predict n - shift_logits = lm_logits[..., :-1, :].contiguous() - shift_labels = labels[..., 1:].contiguous() - # Flatten the tokens - loss_fct = CrossEntropyLoss() - loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)) - - loss = loss.to(hidden_states.dtype) - - if not return_dict: - output = (lm_logits,) + transformer_outputs[1:] - return ((loss,) + output) if loss is not None else output - - return CausalLMOutputWithPast( - loss=loss, - logits=lm_logits, - past_key_values=transformer_outputs.past_key_values, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - ) - - @staticmethod - def _reorder_cache( - past_key_values: Tuple[Tuple[torch.Tensor]], beam_idx: torch.Tensor - ) -> Tuple[Tuple[torch.Tensor]]: - """ - This function is used to re-order the `past_key_values` cache if [`~PretrainedModel.beam_search`] or - [`~PretrainedModel.beam_sample`] is called. This is required to match `past_key_values` with the correct - beam_idx at every generation step. - """ - return tuple( - tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past) - for layer_past in past_key_values - ) - - -@add_start_docstrings( - """ - The GPT-J Model transformer with a sequence classification head on top (linear layer). - - [`GPTJForSequenceClassification`] uses the last token in order to do the classification, as other causal models - (e.g. GPT, GPT-2, GPT-Neo) do. - - Since it does classification on the last token, it requires to know the position of the last token. If a - `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If - no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the - padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in - each row of the batch). 
- """, - GPTJ_START_DOCSTRING, -) -class GPTJForSequenceClassification(GPTJPreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - self.transformer = GPTJModel(config) - self.score = nn.Linear(config.n_embd, self.num_labels, bias=False) - - # Model parallel - self.model_parallel = False - self.device_map = None - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(GPTJ_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint="ydshieh/tiny-random-gptj-for-sequence-classification", - output_type=SequenceClassifierOutputWithPast, - config_class=_CONFIG_FOR_DOC, - real_checkpoint=_REAL_CHECKPOINT_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, SequenceClassifierOutputWithPast]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - transformer_outputs = self.transformer( - input_ids, - past_key_values=past_key_values, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - hidden_states = transformer_outputs[0] - logits = self.score(hidden_states) - - if input_ids is not None: - batch_size = input_ids.shape[0] - else: - batch_size = inputs_embeds.shape[0] - - if self.config.pad_token_id is None and batch_size != 1: - raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.") - if self.config.pad_token_id is None: - sequence_lengths = -1 - else: - if input_ids is not None: - sequence_lengths = (torch.eq(input_ids, self.config.pad_token_id).long().argmax(-1) - 1).to( - logits.device - ) - else: - sequence_lengths = -1 - logger.warning( - f"{self.__class__.__name__} will not detect padding tokens in `inputs_embeds`. 
Results may be " - "unexpected if using padding tokens in conjunction with `inputs_embeds.`" - ) - - pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths] - - loss = None - if labels is not None: - labels = labels.to(pooled_logits.device) - if self.config.problem_type is None: - if self.num_labels == 1: - self.config.problem_type = "regression" - elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): - self.config.problem_type = "single_label_classification" - else: - self.config.problem_type = "multi_label_classification" - - if self.config.problem_type == "regression": - loss_fct = MSELoss() - if self.num_labels == 1: - loss = loss_fct(pooled_logits.squeeze(), labels.squeeze()) - else: - loss = loss_fct(pooled_logits, labels) - elif self.config.problem_type == "single_label_classification": - loss_fct = CrossEntropyLoss() - loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1)) - elif self.config.problem_type == "multi_label_classification": - loss_fct = BCEWithLogitsLoss() - loss = loss_fct(pooled_logits, labels) - if not return_dict: - output = (pooled_logits,) + transformer_outputs[1:] - return ((loss,) + output) if loss is not None else output - - return SequenceClassifierOutputWithPast( - loss=loss, - logits=pooled_logits, - past_key_values=transformer_outputs.past_key_values, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - ) - - -@add_start_docstrings( - """ - The GPT-J Model transformer with a span classification head on top for extractive question-answering tasks like - SQuAD (a linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). - """, - GPTJ_START_DOCSTRING, -) -class GPTJForQuestionAnswering(GPTJPreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - self.transformer = GPTJModel(config) - self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels) - - # Model parallel - self.model_parallel = False - self.device_map = None - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(GPTJ_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=QuestionAnsweringModelOutput, - config_class=_CONFIG_FOR_DOC, - real_checkpoint=_REAL_CHECKPOINT_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - start_positions: Optional[torch.LongTensor] = None, - end_positions: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, QuestionAnsweringModelOutput]: - r""" - start_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the start of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence - are not taken into account for computing the loss. 
- end_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the end of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence - are not taken into account for computing the loss. - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.transformer( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - - logits = self.qa_outputs(sequence_output) - start_logits, end_logits = logits.split(1, dim=-1) - start_logits = start_logits.squeeze(-1).contiguous() - end_logits = end_logits.squeeze(-1).contiguous() - - total_loss = None - if start_positions is not None and end_positions is not None: - # If we are on multi-GPU, split add a dimension - if len(start_positions.size()) > 1: - start_positions = start_positions.squeeze(-1).to(start_logits.device) - if len(end_positions.size()) > 1: - end_positions = end_positions.squeeze(-1).to(end_logits.device) - # sometimes the start/end positions are outside our model inputs, we ignore these terms - ignored_index = start_logits.size(1) - start_positions = start_positions.clamp(0, ignored_index) - end_positions = end_positions.clamp(0, ignored_index) - - loss_fct = CrossEntropyLoss(ignore_index=ignored_index) - start_loss = loss_fct(start_logits, start_positions) - end_loss = loss_fct(end_logits, end_positions) - total_loss = (start_loss + end_loss) / 2 - - if not return_dict: - output = (start_logits, end_logits) + outputs[2:] - return ((total_loss,) + output) if total_loss is not None else output - - return QuestionAnsweringModelOutput( - loss=total_loss, - start_logits=start_logits, - end_logits=end_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) diff --git a/spaces/zhang-wei-jian/docker/node_modules/toidentifier/README.md b/spaces/zhang-wei-jian/docker/node_modules/toidentifier/README.md deleted file mode 100644 index 57e8a78ab5218e7d424eabde5b6865997a14f500..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/toidentifier/README.md +++ /dev/null @@ -1,61 +0,0 @@ -# toidentifier - -[![NPM Version][npm-image]][npm-url] -[![NPM Downloads][downloads-image]][downloads-url] -[![Build Status][github-actions-ci-image]][github-actions-ci-url] -[![Test Coverage][codecov-image]][codecov-url] - -> Convert a string of words to a JavaScript identifier - -## Install - -This is a [Node.js](https://nodejs.org/en/) module available through the -[npm registry](https://www.npmjs.com/). Installation is done using the -[`npm install` command](https://docs.npmjs.com/getting-started/installing-npm-packages-locally): - -```bash -$ npm install toidentifier -``` - -## Example - -```js -var toIdentifier = require('toidentifier') - -console.log(toIdentifier('Bad Request')) -// => "BadRequest" -``` - -## API - -This CommonJS module exports a single default function: `toIdentifier`. - -### toIdentifier(string) - -Given a string as the argument, it will be transformed according to -the following rules and the new string will be returned: - -1. Split into words separated by space characters (`0x20`). -2. 
Upper case the first character of each word. -3. Join the words together with no separator. -4. Remove all non-word (`[0-9a-z_]`) characters. - -## License - -[MIT](LICENSE) - -[codecov-image]: https://img.shields.io/codecov/c/github/component/toidentifier.svg -[codecov-url]: https://codecov.io/gh/component/toidentifier -[downloads-image]: https://img.shields.io/npm/dm/toidentifier.svg -[downloads-url]: https://npmjs.org/package/toidentifier -[github-actions-ci-image]: https://img.shields.io/github/workflow/status/component/toidentifier/ci/master?label=ci -[github-actions-ci-url]: https://github.com/component/toidentifier?query=workflow%3Aci -[npm-image]: https://img.shields.io/npm/v/toidentifier.svg -[npm-url]: https://npmjs.org/package/toidentifier - - -## - -[npm]: https://www.npmjs.com/ - -[yarn]: https://yarnpkg.com/ diff --git a/spaces/zhaoys/wfms-kuiwenc/src/lib/isomorphic/index.ts b/spaces/zhaoys/wfms-kuiwenc/src/lib/isomorphic/index.ts deleted file mode 100644 index d4ebae951004bc8ec388f82548f4204a6c2a0a50..0000000000000000000000000000000000000000 --- a/spaces/zhaoys/wfms-kuiwenc/src/lib/isomorphic/index.ts +++ /dev/null @@ -1,8 +0,0 @@ -'use client' - -import Debug from 'debug' -export * from 'ifw' - -export const debug = typeof document === 'undefined' ? Debug('bingo') - : process.env.NEXT_PUBLIC_DEBUG ? console.info.bind(console) - : () => {} diff --git a/spaces/zideliu/styledrop/open_clip/factory.py b/spaces/zideliu/styledrop/open_clip/factory.py deleted file mode 100644 index 14011f9340fc6e54876c3c5bcb9e23a8cd57849d..0000000000000000000000000000000000000000 --- a/spaces/zideliu/styledrop/open_clip/factory.py +++ /dev/null @@ -1,366 +0,0 @@ -import json -import logging -import os -import pathlib -import re -from copy import deepcopy -from pathlib import Path -from typing import Any, Dict, Optional, Tuple, Union - -import torch - -from .constants import OPENAI_DATASET_MEAN, OPENAI_DATASET_STD -from .model import CLIP, CustomTextCLIP, convert_weights_to_lp, convert_to_custom_text_state_dict,\ - resize_pos_embed, get_cast_dtype -from .coca_model import CoCa -from .loss import ClipLoss, DistillClipLoss, CoCaLoss -from .openai import load_openai_model -from .pretrained import is_pretrained_cfg, get_pretrained_cfg, download_pretrained, list_pretrained_tags_by_model, download_pretrained_from_hf -from .transform import image_transform, AugmentationCfg -from .tokenizer import HFTokenizer, tokenize - - -HF_HUB_PREFIX = 'hf-hub:' -_MODEL_CONFIG_PATHS = [Path(__file__).parent / f"model_configs/"] -_MODEL_CONFIGS = {} # directory (model_name: config) of model architecture configs - - -def _natural_key(string_): - return [int(s) if s.isdigit() else s for s in re.split(r'(\d+)', string_.lower())] - - -def _rescan_model_configs(): - global _MODEL_CONFIGS - - config_ext = ('.json',) - config_files = [] - for config_path in _MODEL_CONFIG_PATHS: - if config_path.is_file() and config_path.suffix in config_ext: - config_files.append(config_path) - elif config_path.is_dir(): - for ext in config_ext: - config_files.extend(config_path.glob(f'*{ext}')) - - for cf in config_files: - with open(cf, 'r') as f: - model_cfg = json.load(f) - if all(a in model_cfg for a in ('embed_dim', 'vision_cfg', 'text_cfg')): - _MODEL_CONFIGS[cf.stem] = model_cfg - - _MODEL_CONFIGS = {k: v for k, v in sorted(_MODEL_CONFIGS.items(), key=lambda x: _natural_key(x[0]))} - - -_rescan_model_configs() # initial populate of model config registry - - -def list_models(): - """ enumerate available model architectures based on 
config files """ - return list(_MODEL_CONFIGS.keys()) - - -def add_model_config(path): - """ add model config path or file and update registry """ - if not isinstance(path, Path): - path = Path(path) - _MODEL_CONFIG_PATHS.append(path) - _rescan_model_configs() - - -def get_model_config(model_name): - if model_name in _MODEL_CONFIGS: - return deepcopy(_MODEL_CONFIGS[model_name]) - else: - return None - - -def get_tokenizer(model_name): - if model_name.startswith(HF_HUB_PREFIX): - tokenizer = HFTokenizer(model_name[len(HF_HUB_PREFIX):]) - else: - config = get_model_config(model_name) - tokenizer = HFTokenizer( - config['text_cfg']['hf_tokenizer_name']) if 'hf_tokenizer_name' in config['text_cfg'] else tokenize - return tokenizer - - -def load_state_dict(checkpoint_path: str, map_location='cpu'): - checkpoint = torch.load(checkpoint_path, map_location=map_location) - if isinstance(checkpoint, dict) and 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - else: - state_dict = checkpoint - if next(iter(state_dict.items()))[0].startswith('module'): - state_dict = {k[7:]: v for k, v in state_dict.items()} - return state_dict - - -def load_checkpoint(model, checkpoint_path, strict=True): - state_dict = load_state_dict(checkpoint_path) - # detect old format and make compatible with new format - if 'positional_embedding' in state_dict and not hasattr(model, 'positional_embedding'): - state_dict = convert_to_custom_text_state_dict(state_dict) - resize_pos_embed(state_dict, model) - incompatible_keys = model.load_state_dict(state_dict, strict=strict) - return incompatible_keys - - -def create_model( - model_name: str, - pretrained: Optional[str] = None, - precision: str = 'fp32', - device: Union[str, torch.device] = 'cpu', - jit: bool = False, - force_quick_gelu: bool = False, - force_custom_text: bool = False, - force_patch_dropout: Optional[float] = None, - force_image_size: Optional[Union[int, Tuple[int, int]]] = None, - pretrained_image: bool = False, - pretrained_hf: bool = True, - cache_dir: Optional[str] = None, - output_dict: Optional[bool] = None, - require_pretrained: bool = False, -): - has_hf_hub_prefix = model_name.startswith(HF_HUB_PREFIX) - if has_hf_hub_prefix: - model_id = model_name[len(HF_HUB_PREFIX):] - checkpoint_path = download_pretrained_from_hf(model_id, cache_dir=cache_dir) - config_path = download_pretrained_from_hf(model_id, filename='open_clip_config.json', cache_dir=cache_dir) - - with open(config_path, 'r', encoding='utf-8') as f: - config = json.load(f) - pretrained_cfg = config['preprocess_cfg'] - model_cfg = config['model_cfg'] - else: - model_name = model_name.replace('/', '-') # for callers using old naming with / in ViT names - checkpoint_path = None - pretrained_cfg = {} - model_cfg = None - - if isinstance(device, str): - device = torch.device(device) - - if pretrained and pretrained.lower() == 'openai': - logging.info(f'Loading pretrained {model_name} from OpenAI.') - model = load_openai_model( - model_name, - precision=precision, - device=device, - jit=jit, - cache_dir=cache_dir, - ) - - # to always output dict even if it is clip - if output_dict and hasattr(model, "output_dict"): - model.output_dict = True - else: - model_cfg = model_cfg or get_model_config(model_name) - if model_cfg is not None: - logging.info(f'Loaded {model_name} model config.') - else: - logging.error(f'Model config for {model_name} not found; available models {list_models()}.') - raise RuntimeError(f'Model config for {model_name} not found.') - - if force_quick_gelu: - # 
override for use of QuickGELU on non-OpenAI transformer models - model_cfg["quick_gelu"] = True - - if force_patch_dropout is not None: - # override the default patch dropout value - model_cfg["vision_cfg"]["patch_dropout"] = force_patch_dropout - - if force_image_size is not None: - # override model config's image size - model_cfg["vision_cfg"]["image_size"] = force_image_size - - if pretrained_image: - if 'timm_model_name' in model_cfg.get('vision_cfg', {}): - # pretrained weight loading for timm models set via vision_cfg - model_cfg['vision_cfg']['timm_model_pretrained'] = True - else: - assert False, 'pretrained image towers currently only supported for timm models' - - cast_dtype = get_cast_dtype(precision) - is_hf_model = 'hf_model_name' in model_cfg.get('text_cfg', {}) - custom_text = model_cfg.pop('custom_text', False) or force_custom_text or is_hf_model - - if custom_text: - if is_hf_model: - model_cfg['text_cfg']['hf_model_pretrained'] = pretrained_hf - if "coca" in model_name: - model = CoCa(**model_cfg, cast_dtype=cast_dtype) - else: - model = CustomTextCLIP(**model_cfg, cast_dtype=cast_dtype) - else: - model = CLIP(**model_cfg, cast_dtype=cast_dtype) - - pretrained_loaded = False - if pretrained: - checkpoint_path = '' - pretrained_cfg = get_pretrained_cfg(model_name, pretrained) - if pretrained_cfg: - checkpoint_path = download_pretrained(pretrained_cfg, cache_dir=cache_dir) - elif os.path.exists(pretrained): - checkpoint_path = pretrained - - if checkpoint_path: - logging.info(f'Loading pretrained {model_name} weights ({pretrained}).') - load_checkpoint(model, checkpoint_path) - else: - error_str = ( - f'Pretrained weights ({pretrained}) not found for model {model_name}.' - f'Available pretrained tags ({list_pretrained_tags_by_model(model_name)}.') - logging.warning(error_str) - raise RuntimeError(error_str) - pretrained_loaded = True - elif has_hf_hub_prefix: - logging.info(f'Loading pretrained {model_name} weights ({pretrained}).') - load_checkpoint(model, checkpoint_path) - pretrained_loaded = True - - if require_pretrained and not pretrained_loaded: - # callers of create_model_from_pretrained always expect pretrained weights - raise RuntimeError( - f'Pretrained weights were required for (model: {model_name}, pretrained: {pretrained}) but not loaded.') - - model.to(device=device) - if precision in ("fp16", "bf16"): - convert_weights_to_lp(model, dtype=torch.bfloat16 if precision == 'bf16' else torch.float16) - - # set image / mean metadata from pretrained_cfg if available, or use default - model.visual.image_mean = pretrained_cfg.get('mean', None) or OPENAI_DATASET_MEAN - model.visual.image_std = pretrained_cfg.get('std', None) or OPENAI_DATASET_STD - - # to always output dict even if it is clip - if output_dict and hasattr(model, "output_dict"): - model.output_dict = True - - if jit: - model = torch.jit.script(model) - - return model - - -def create_loss(args): - if args.distill: - return DistillClipLoss( - local_loss=args.local_loss, - gather_with_grad=args.gather_with_grad, - cache_labels=True, - rank=args.rank, - world_size=args.world_size, - use_horovod=args.horovod, - ) - elif "coca" in args.model.lower(): - return CoCaLoss( - caption_loss_weight=args.coca_caption_loss_weight, - clip_loss_weight=args.coca_contrastive_loss_weight, - local_loss=args.local_loss, - gather_with_grad=args.gather_with_grad, - cache_labels=True, - rank=args.rank, - world_size=args.world_size, - use_horovod=args.horovod, - ) - return ClipLoss( - local_loss=args.local_loss, - 
gather_with_grad=args.gather_with_grad, - cache_labels=True, - rank=args.rank, - world_size=args.world_size, - use_horovod=args.horovod, - ) - - -def create_model_and_transforms( - model_name: str, - pretrained: Optional[str] = None, - precision: str = 'fp32', - device: Union[str, torch.device] = 'cpu', - jit: bool = False, - force_quick_gelu: bool = False, - force_custom_text: bool = False, - force_patch_dropout: Optional[float] = None, - force_image_size: Optional[Union[int, Tuple[int, int]]] = None, - pretrained_image: bool = False, - pretrained_hf: bool = True, - image_mean: Optional[Tuple[float, ...]] = None, - image_std: Optional[Tuple[float, ...]] = None, - aug_cfg: Optional[Union[Dict[str, Any], AugmentationCfg]] = None, - cache_dir: Optional[str] = None, - output_dict: Optional[bool] = None, -): - model = create_model( - model_name, - pretrained, - precision=precision, - device=device, - jit=jit, - force_quick_gelu=force_quick_gelu, - force_custom_text=force_custom_text, - force_patch_dropout=force_patch_dropout, - force_image_size=force_image_size, - pretrained_image=pretrained_image, - pretrained_hf=pretrained_hf, - cache_dir=cache_dir, - output_dict=output_dict, - ) - - image_mean = image_mean or getattr(model.visual, 'image_mean', None) - image_std = image_std or getattr(model.visual, 'image_std', None) - preprocess_train = image_transform( - model.visual.image_size, - is_train=True, - mean=image_mean, - std=image_std, - aug_cfg=aug_cfg, - ) - preprocess_val = image_transform( - model.visual.image_size, - is_train=False, - mean=image_mean, - std=image_std, - ) - - return model, preprocess_train, preprocess_val - - -def create_model_from_pretrained( - model_name: str, - pretrained: Optional[str] = None, - precision: str = 'fp32', - device: Union[str, torch.device] = 'cpu', - jit: bool = False, - force_quick_gelu: bool = False, - force_custom_text: bool = False, - force_image_size: Optional[Union[int, Tuple[int, int]]] = None, - return_transform: bool = True, - image_mean: Optional[Tuple[float, ...]] = None, - image_std: Optional[Tuple[float, ...]] = None, - cache_dir: Optional[str] = None, -): - model = create_model( - model_name, - pretrained, - precision=precision, - device=device, - jit=jit, - force_quick_gelu=force_quick_gelu, - force_custom_text=force_custom_text, - force_image_size=force_image_size, - cache_dir=cache_dir, - require_pretrained=True, - ) - - if not return_transform: - return model - - image_mean = image_mean or getattr(model.visual, 'image_mean', None) - image_std = image_std or getattr(model.visual, 'image_std', None) - preprocess = image_transform( - model.visual.image_size, - is_train=False, - mean=image_mean, - std=image_std, - ) - - return model, preprocess
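
The factory functions above (`create_model`, `create_model_and_transforms`, `create_model_from_pretrained`, `get_tokenizer`) are the public entry points of this open_clip module. A minimal usage sketch follows; the model name `ViT-B-32`, the pretrained tag `laion2b_s34b_b79k`, and the image path `cat.png` are illustrative assumptions, not values taken from this file, and any registered model/tag pair would work the same way.

```python
# Minimal sketch of driving the factory API defined above.
# Assumptions: 'ViT-B-32' / 'laion2b_s34b_b79k' and 'cat.png' are placeholders.
import torch
from PIL import Image
import open_clip

# create_model_and_transforms returns (model, preprocess_train, preprocess_val)
model, _, preprocess_val = open_clip.create_model_and_transforms(
    'ViT-B-32', pretrained='laion2b_s34b_b79k', device='cpu'
)
tokenizer = open_clip.get_tokenizer('ViT-B-32')

image = preprocess_val(Image.open('cat.png')).unsqueeze(0)   # (1, 3, H, W)
text = tokenizer(['a photo of a cat', 'a photo of a dog'])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # cosine-similarity based probabilities over the two captions
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # e.g. higher probability on the matching caption
```

Note that `create_model_from_pretrained` differs from `create_model_and_transforms` only in that it enforces `require_pretrained=True` and returns a single inference-time preprocessing transform instead of separate train/val transforms.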